True, but it’s important for other reasons that we can tell whether the net effect of certain interventions is positive or not. If I’m spreading the message of EA to other people, should I put a lot of effort into getting people to send money to GiveDirectly and other charities? There is no doubt in my mind that poverty alleviation is a suboptimal intervention. But if I believe that poverty alleviation is still better than nothing, I’ll be happy to promote and spread it and engage in debates about the best way to reduce poverty. But if I decide that the effects on existential risks and the rise in meat consumption in the developing world (1.66kg per capita per year per $1000 increase in per capita GDP) are significant enough that poverty alleviation is worse than nothing, then I don’t know what I’ll do.
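As a rough back-of-envelope sketch of how that elasticity cashes out: the only figure below taken from the discussion is the 1.66kg elasticity itself; the transfer size and population are hypothetical placeholders, not estimates.

```python
# Back-of-envelope: extra meat consumption implied by a rise in per-capita GDP,
# using the quoted elasticity of 1.66 kg per capita per year per $1000 of
# per-capita GDP. All other inputs are hypothetical.

MEAT_ELASTICITY_KG_PER_1000USD = 1.66  # kg / capita / year per $1000 GDP per capita

def extra_meat_kg_per_year(gdp_per_capita_increase_usd: float,
                           people_affected: int) -> float:
    """Annual additional meat consumption implied by a per-capita GDP increase."""
    per_person = (gdp_per_capita_increase_usd / 1000.0) * MEAT_ELASTICITY_KG_PER_1000USD
    return per_person * people_affected

# Hypothetical example: an intervention that raises per-capita GDP by $200
# across a community of 5,000 people.
print(extra_meat_kg_per_year(200, 5_000))  # -> 1660.0 kg per year
```

Whether a number like that outweighs the direct welfare gains is exactly the judgment I don’t know how to make.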
If you are even somewhat of a moral pluralist, or have some normative uncertainty between views that would favor a focus on current people versus future generations, then if you were spending a trillion-dollar budget, it would include some highly effective poverty reduction, along with interventions that would do very well on different ethical views (with only small side effects that rank poorly on other views).
I think that both pluralism and uncertainty are relevant, so I favor interventions that most efficiently relieve poverty even if they also, far less efficiently, harm current humans or future generations, and likewise for things that very efficiently reduce factory farming at little cost to poverty or future generations, etc. One can think of this as a sort of moral trade with oneself.
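One crude way to picture this "moral trade with oneself" is a portfolio split: score each intervention under several moral views, then give each view's share of the budget to the option it ranks highest, provided that option does little damage on the other views. Every intervention, score, credence, and budget figure in the sketch below is a made-up placeholder, not an empirical estimate.

```python
# Illustrative pluralist portfolio. Each intervention gets a (hypothetical)
# score under three moral views; each view's credence-weighted share of the
# budget goes to the intervention that view ranks highest.

interventions = {
    "poverty_relief":      {"humans": 9, "animals": -1, "future": 0},
    "factory_farm_reform": {"humans": 0, "animals": 8,  "future": 0},
    "x_risk_reduction":    {"humans": 1, "animals": 0,  "future": 9},
}

credences = {"humans": 0.4, "animals": 0.3, "future": 0.3}  # hypothetical
budget = 1_000_000                                          # hypothetical

allocation = {}
for view, credence in credences.items():
    best = max(interventions, key=lambda name: interventions[name][view])
    allocation[best] = allocation.get(best, 0) + credence * budget

print(allocation)
# -> {'poverty_relief': 400000.0, 'factory_farm_reform': 300000.0,
#     'x_risk_reduction': 300000.0}
```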
And at the interpersonal level, there is a clear and overwhelming case for moral trade (both links are to Toby Ord’s paper, now published in Ethics). People with different ethical views about the importance of current human welfare, current non-human welfare, and the welfare of future generations have various low-cost high-benefit ways to help each other attain their goals (such as the ones you mention, but also many others, like promoting the use of evidence-based charity evaluators). If these opportunities are all taken, then the world will be much better by all of those metrics, i.e., there will be big gains from moral trade and cooperation.
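A toy illustration of those gains from trade, with all payoffs invented purely to show the structure: each party has an action that costs them little by their own values but is worth a lot under the other's values, so both value functions end up higher when both act than when neither does.

```python
# Toy model of moral trade between two parties with different value functions.
# All payoffs are invented for illustration; (cost_to_self, benefit_to_other)
# are in arbitrary "value units".

A_ACTION = (-1, 5)   # e.g., A promotes an evidence-based charity B cares about
B_ACTION = (-1, 5)   # e.g., B does the same for a cause A cares about

def totals(a_acts: bool, b_acts: bool) -> tuple[float, float]:
    """Return (total value by A's lights, total value by B's lights)."""
    a_value = (A_ACTION[0] if a_acts else 0) + (B_ACTION[1] if b_acts else 0)
    b_value = (B_ACTION[0] if b_acts else 0) + (A_ACTION[1] if a_acts else 0)
    return a_value, b_value

print(totals(False, False))  # no trade:   (0, 0)
print(totals(True, True))    # both trade: (4, 4) -- better by both metrics
```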
You shouldn’t hold those benefits of cooperation (in an iterated game, no less), and the cooperate-cooperate equilibrium, hostage to the questionable possibility of some comparatively small drawbacks.
Eh, good points, but I don’t see what normative uncertainty can accomplish. I have no particular reason to err on one side or the other: the chance that I’m giving too much weight to any given moral issue is no greater than the chance that I’m giving too little. Poverty alleviation could be better than I thought, or it could be worse. I can imagine moral reasons that would cut either way.