Probably far beyond as well, right? There's nothing distinctive about EA projects that makes them [EDIT: more] subject to potential far-future bad consequences we don't know about. And even (sane) non-consequentialists should care about consequences amongst other things, even if they don't care only about consequences.
I dispute this. I'm admittedly not entirely sure but here is my best explanation.
A lot of EA interventions involve saving lives, which influences the number of people who will live in the future. This, in turn, will predictably influence the following (to give just two examples):
The number of animals who will be killed for food (i.e. impacting animal welfare).
CO2 emissions and climate change (i.e. impacting the wellbeing of humans and wild animals in the future).
Importantly, we don't know the sign and magnitude of these 'unintended' effects (partly because we don't in fact know whether saving lives now causes more or fewer people to exist in the future). But we do know that these unintended effects will predictably happen and that they will swamp the size of the 'intended' effect of saving lives. This is where the complex cluelessness comes in. Considering predictable effects (both intended and unintended), we can't really weigh them against each other. If you think you can weigh them, then please tell me more.
So I think it's the saving of lives that really gets us into a pickle here: it leads to so much complexity in terms of predictable effects.
There are some EA interventions that don't involve saving lives and don't seem to me to run into a cluelessness issue, e.g. expanding our moral circle through advocacy, building AI governance structures to (for instance) promote global cooperation, or global priorities research. I don't think these interventions run into the complex cluelessness issue because, in my opinion, it seems easy to say that the expected positives outweigh the expected negatives. I explain this a little more in this comment chain.
Also, note that under Greaves' model there are types of cluelessness that are not problematic, which she calls 'simple cluelessness'. An example is if we are deciding whether to conceive a child on a Tuesday or a Wednesday. Any chance that one of the options might have some long-run positive or negative consequence will be counterbalanced by an equal chance that the other will have that consequence. In other words, there is evidential symmetry across the available choices.
I think we will have simple cluelessness (rather than complex cluelessness) about a lot of 'non-EA' altruistic actions, in large part because they don't involve saving lives and are often on quite a small scale, so they aren't going to predictably influence things like economic growth. For example, giving food to a soup kitchen: other than helping people who need food, it isn't at all predictable what the other unintended effects will be, so we have evidential symmetry and can ignore them. Basically, a lot of 'non-EA' altruistic actions might not have predictable unintended effects, so I don't think they will run us into the cluelessness issue.
I need to think about this more but would welcome thoughts.
You don't think a lot of non-EA altruistic actions involve saving lives??
Yes, but if I were to ask my non-EA friends what they give to (if they give to anything at all), they would say things like local educational charities, soup kitchens, animal shelters, etc. I do think EA generally has more of a focus on saving lives.
Yes, I agree with this.