Say you have two interventions, A and B, and two outcome metrics, X and Y. You expect A will improve X by 100 units per dollar and B will improve Y by 100 units per dollar. However, each intervention will also have some smaller effect of uncertain sign on the other outcome metric: A will cause +1 or −1 units of Y, and B will cause +1 or −1 units of X.
It would be silly to decide for or against one of these interventions based on its second-order effect on the other outcome metric:
If you think either X or Y is much more important than the other metric, then you just pick based on the more important metric and neglect the other.
If you think X and Y are of similar importance, again you focus on the primary effect of each intervention rather than the secondary one.
If you are worried about A harming metric Y because you want to ensure an expected positive impact on both X and Y, you can purchase offsets by putting 1% of your resources into B, or vice versa for B harming X (a toy check of this arithmetic follows below).
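As a quick sanity check of that 1% offset figure, here is a toy calculation using only the numbers stipulated above (nothing else is assumed):

```python
def portfolio_effects(frac_a, a_side_on_y, b_side_on_x):
    """Per-dollar effects on X and Y of a budget split between A and B."""
    frac_b = 1.0 - frac_a
    x = frac_a * 100.0 + frac_b * b_side_on_x   # A's main effect + B's side effect
    y = frac_a * a_side_on_y + frac_b * 100.0   # A's side effect + B's main effect
    return x, y

# Worst case for both side effects: A harms Y and B harms X.
x, y = portfolio_effects(frac_a=0.99, a_side_on_y=-1.0, b_side_on_x=-1.0)
print(x, y)  # 98.99 0.01 -> both metrics stay positive with a 1% offset
```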
Cash transfers significantly relieve the poverty of humans who are alive today, and they are fairly efficient at doing so. They are far less efficient at helping or harming non-human animals today, or at increasing or reducing existential risk. Even if they have some negative effect here or there (more meat-eating, habitat destruction, or carbon emissions), the cost of producing a comparable offsetting benefit in that dimension will be small compared to the cash transfer. E.g., an allocation of 90% to GiveDirectly and 10% to offset charities (carbon reduction, meat reduction, nuclear arms control, whatever) will wind up positive on multiple metrics.
If you have good reasons to give to poverty alleviation rather than existential risk reduction in the first place, then minor impacts on existential risk from your poverty charities are unlikely to reverse that conclusion (although you could make some smaller offsetting donations if you wanted to have a positive balance on as many moral theories as possible). It makes sense to ask how good those reasons really are and whether to switch, but not to worry too much about small second-order cross-cause effects.
ETA: As I discuss in a comment below, moral trade gives us good reasons to be reciprocally supportive of efforts that very efficiently serve different conceptions of the good at only comparatively small cost according to other conceptions.
True, but it's important for other reasons to be able to tell whether the net effect of certain interventions is positive. If I'm spreading the message of EA to other people, should I put a lot of effort into getting people to send money to GiveDirectly and other charities? There is no doubt in my mind that poverty alleviation is a suboptimal intervention. But if I believe that poverty alleviation is still better than nothing, I'll be happy to promote it and engage in debates about the best way to reduce poverty. But if I decide that the effects on existential risk, and the rise in meat consumption in the developing world (1.66 kg per capita per year per $1000 increase in per capita GDP), are significant enough that poverty alleviation is worse than nothing, then I don't know what I'll do.
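For scale, here is a minimal sketch applying that quoted elasticity; the $200 income rise is a hypothetical placeholder, not an estimate of any actual transfer's effect:

```python
# Implied extra meat consumption from the elasticity quoted above:
# 1.66 kg per capita per year per $1000 rise in GDP per capita.
MEAT_KG_PER_1000_USD = 1.66

def extra_meat_kg_per_year(gdp_per_capita_rise_usd):
    """Extra meat eaten per person per year implied by a given income rise."""
    return MEAT_KG_PER_1000_USD * gdp_per_capita_rise_usd / 1000.0

print(extra_meat_kg_per_year(200.0))  # hypothetical $200 rise -> 0.332 kg/person/year
```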
If you are even somewhat of a moral pluralist, or have some normative uncertainty between views that would favor a focus on current people versus future generations, then a trillion-dollar budget would include some highly effective poverty reduction, along with interventions that do very well on different ethical views (each with smaller side effects ranked poorly on other views).
I think that both pluralism and uncertainty are relevant, so I favor interventions that most efficiently relieve poverty even if they much less efficiently harm current non-human animals or future generations, and likewise for things that very efficiently reduce factory farming at little cost to poverty or future generations, etc. One can think of this as a sort of moral trade with oneself.
And at the interpersonal level, there is a clear and overwhelming case for moral trade (both links are to Toby Ord's paper, now published in Ethics). People with different ethical views about the importance of current human welfare, current non-human welfare, and the welfare of future generations have various low-cost, high-benefit ways to help each other attain their goals (such as the ones you mention, but also many others, like promoting the use of evidence-based charity evaluators). If all of these are taken, the world will be much better by all the metrics; i.e., there will be big gains from moral trade and cooperation.
You shouldn’t hold those benefits of cooperation (in an iterated game, no less), and the cooperate-cooperate equilibrium, hostage to the questionable possibility of some comparatively small drawbacks.
Eh, good points, but I don't see what normative uncertainty can accomplish. I have no particular reason to err on one side or the other: the chance that I'm giving too much weight to any given moral issue is no greater than the chance that I'm giving too little. Poverty alleviation could be better than I thought, or it could be worse. I can imagine moral reasons that would cut either way.
Thank you! This just changed where I intend to donate tremendously.
Specifically, I intend to give 100% (or nearly 100%) to existential risk rather than mostly to poverty alleviation, because of how much I value future lives (a lot) relative to the quality of currently existing lives.
Trying to think of counter-arguments that would change my view back in favor of donating to poverty alleviation charities, the best I can come up with right now is this:
Maybe the best "poverty alleviation" charities are also the best "existential risk" charities. That is, maybe they are more effective at reducing existential risk than are the charities typically thought of as the best existential risk charities. How likely is this to be true? Less than 1%?
More than 1%. For example, investing in GiveWell (and to a lesser extent, donating to its top charities to increase its money moved and influence) to expedite its development has been a fantastic buy for global poverty historically, and it looks like it will also turn out to have great effects in reducing global catastrophic risks and factory farming.
It could end up best if you think improving general human empowerment or doing "common sense good" (or something like that) is the best way to reduce existential risk, though personally I find this unclear, because many existential risks are man-made and there seem to be more specific things we can do about them.
GiveWell also selects charities on the basis of room for more funding, team quality and transparency—things you’d want in any charity no matter your outcome metric—and that might raise the probability above 1%.
Indeed. Valuation of outcomes is one of several multiplicative factors.
There might be a strong argument to be made about the existential risk coming from people in poverty contributing to social instability, and the resulting potential for various forms of terrorism, sabotage, and other Black Swan scenarios.
Or, looking at it the other way around: perhaps the most effective way of reducing global catastrophic risk is also the most effective way of helping people in poverty in the present generation, as I argue here.
Based on, for example, this post, would it be reasonable to say that most of the expected total impact of donating to or working on global health and development comes from the respective long-term effects? If so, as suggested here (see "Response five: Go longtermist"), it seems more reasonable to focus on longtermism.
I believe:
The prior for the short-term (first-order) expected impact of (e.g.) GiveWell top charities has low variance.
The estimate for the total expected impact of GiveWell top charities has high variance.
The higher the variance of the estimate, the smaller the update to the prior.
However, I do not think one should conclude from the above that the posterior for the total expected impact of GiveWell top charities is similar to the prior for the short-term expected impact. If I am not mistaken, that update would only be valid if the low-variance prior concerned the total expected impact of GiveWell top charities, whereas it concerns only the short-term expected impact.
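For concreteness, the variance point is just the textbook normal-normal update (nothing GiveWell-specific is assumed here). With a prior $N(\mu_0, \sigma_0^2)$ and an unbiased estimate $\hat{x}$ with variance $\sigma_e^2$, the posterior mean is

$$\mu_{\text{post}} = \frac{\sigma_e^2\,\mu_0 + \sigma_0^2\,\hat{x}}{\sigma_0^2 + \sigma_e^2},$$

which collapses toward $\mu_0$ as $\sigma_e^2$ grows. But it collapses toward whatever the prior is about: if $\mu_0$ describes short-term impact only, this formula does not license pinning the posterior for total impact to it.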
Interesting. Even more specifically, which particular x-risk charity do you plan to donate to? And why do you think it does a lot of good (i.e. that when you donate a few thousand dollars to it, this will do more good than saving a life, deworming hundreds of children, or lifting several families out of poverty)?
1. I don't have a specific charity in mind yet. 2. I'm not very confident in my answer.
I should also mention that I probably won’t be donating much more for at least a couple years, so it probably shouldn’t be my highest priority to try to answer all of these questions. They are good questions though, so thanks.
This is useful but doesn't entirely answer William's question. To put it another way: suppose GiveDirectly reduced extreme poverty in East Africa by 50%. What would your best estimate of the effect of that on x-risk be? I'd expect it to be quite positive, but I haven't thought about how to estimate the magnitude.
I believe there’s an important case where this does actually matter.
Suppose there's a fundraising charity F which raises money for charities X and G. Charity X is an x-risk charity, and F raises money for it at a 2:1 ratio (i.e., each $1 given to F moves $2 to X). Charity G is a global poverty charity, and F raises money for it at a 10:1 ratio. If you care more about x-risk than global poverty, and you believe charity G decreases x-risk or only increases it by a tiny amount, then you should give to F instead of X. But if G increases x-risk by more than 20% as much per dollar as X decreases it, then giving to F is actually net negative and you should give to X instead.
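A minimal sketch of that break-even arithmetic, using only the ratios stated above; the per-dollar risk effects r_x and r_g are hypothetical placeholders, not estimates:

```python
def net_xrisk_reduction(dollars_to_f, r_x, r_g, ratio_x=2.0, ratio_g=10.0):
    """Net x-risk reduction from giving to fundraiser F.

    r_x: x-risk reduced per dollar reaching X (arbitrary units).
    r_g: x-risk *increased* per dollar reaching G (same units).
    """
    return dollars_to_f * (ratio_x * r_x - ratio_g * r_g)

# Break-even: 2*r_x == 10*r_g, i.e. r_g == 0.2*r_x. Beyond that 20%
# threshold, giving to F is net negative on x-risk.
print(net_xrisk_reduction(1.0, r_x=1.0, r_g=0.19))  # +0.10: still net positive
print(net_xrisk_reduction(1.0, r_x=1.0, r_g=0.21))  # -0.10: net negative, give to X
```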
I don't believe 20% is implausibly high. It only requires that ending global poverty increases x-risk by about 0.01% and that charity G is reasonably effective. (I did some Fermi calculations to justify this, but they're pretty complicated, so I'll leave them out.)