I dispute this. I’m admittedly not entirely sure, but here is my best explanation.
A lot of EA interventions involve saving lives, which influences the number of people who will live in the future. This, in turn, we know will influence the following (to give just two examples):
The number of animals who will be killed for food (i.e. impacting animal welfare).
CO2 emissions and climate change (i.e. impacting the wellbeing of humans and wild animals in the future).
Importantly, we don’t know the sign and magnitude of these “unintended” effects (partly because we don’t in fact know whether saving lives now leads to more or fewer people existing in the future). But we do know that these unintended effects will predictably happen and that they will swamp the size of the “intended” effect of saving lives. This is where the complex cluelessness comes in. Considering the predictable effects (both intended and unintended), we can’t really weigh them against each other. If you think you can weigh them, then please tell me more.
So I think it’s the saving lives that really gets us into a pickle here—it leads to so much complexity in terms of predictable effects.
There are some EA interventions that don’t involve saving lives and don’t seem to me to run into a cluelessness issue, e.g. expanding our moral circle through advocacy, building AI governance structures to (for instance) promote global cooperation, and global priorities research. I don’t think these interventions run into the complex cluelessness issue because, in my opinion, it seems easy to say that the expected positives outweigh the expected negatives. I explain this a little more in this comment chain.
Also, note that under Greaves’ model there are types of cluelessness that are not problematic, which she calls “simple cluelessness”. An example is deciding whether to conceive a child on a Tuesday or a Wednesday. Any chance that one of the options might have some long-run positive or negative consequence is counterbalanced by an equal chance that the other will have that consequence. In other words, there is evidential symmetry across the available choices.
I think we have simple cluelessness (rather than complex cluelessness) about a lot of “non-EA” altruistic actions, in large part because they don’t involve saving lives and are often on quite a small scale, so they aren’t going to predictably influence things like economic growth. For example, giving food to a soup kitchen: other than helping people who need food, it isn’t at all predictable what the other unintended effects will be, so we have evidential symmetry and can ignore them. Basically, a lot of “non-EA” altruistic actions might not have predictable unintended effects, so I don’t think they run us into the cluelessness issue.
I need to think about this more but would welcome thoughts.
You don’t think a lot of non-EA altruistic actions involve saving lives??
Yes, but if I were to ask my non-EA friends what they give to (if they give to anything at all), they would say things like local educational charities, soup kitchens, animal shelters, etc. I do think EA generally has more of a focus on saving lives.