Congrats on your first forum post!! Now, in EA Forum style, I’m going to disagree with you… but really, I enjoyed reading this and I’m glad you shared your perspective. I’m sharing my views not to tell you you’re wrong but to add to the conversation and maybe find a point of synthesis or agreement. I’m actually very glad you posted this.
I don’t think I have an obligation to help all people. I think I have an obligation to do as much good as possible with the resources available to me. This means I should specialize my altruistic work in the areas with the highest EV or marginal return. This is not directly related to the number of morally valuable beings I care about. I don’t think that now valuing future humans means I have additional obligations. What changes is the bar for what’s most effective.
Say I haven’t learned about longtermism, I think GiveWell is awesome, and I feel obligated to do good. Maybe I can save lives at ~$50,000 per life by donating to GiveDirectly. Then I keep reading and find that AMF saves lives for ~$5,000 per life. I want to do the most good, so I give to AMF, maximizing the positive impact of my donations.
Then I hear about longtermism and I get confused by the big numbers. But after thinking for a while, I decide that there are some cost-effective things I can fund in the longtermism or x-risk reduction space. I pull some numbers out of thin air and decide that a $500 donation to the LTFF will save one life in expectation.
At this point, I think I should do the most good possible per resource, which means donating to the LTFF[1].
My obligation, I think, is to do the most good on the margin where I can. What longtermism changes for me is the cost-effectiveness bar that needs to be cleared. Before longtermism, it’s about $5,000 per life saved, via AMF. Now it’s about $500, with some caveats. Growing the pool of money is still good, because preventing kids from dying of malaria is still good; it’s just no longer the best use of my money.
Importantly, efficiency still matters. If the LTFF saves lives for $500 and NTI saves lives for $400 (a number also pulled out of thin air), I should give to NTI, all else equal.
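To make the bar-clearing logic concrete, here’s a minimal back-of-the-envelope sketch in Python, using only the illustrative figures above (none of these are real cost-effectiveness estimates):

```python
# Back-of-the-envelope comparison using the made-up figures from the text above.
# Values are dollars per expected life saved.
cost_per_life = {
    "GiveDirectly": 50_000,
    "AMF": 5_000,
    "LTFF": 500,  # pulled out of thin air, as above
    "NTI": 400,   # also pulled out of thin air
}

budget = 5_000  # an example donation

# Expected lives saved per option for this budget, cheapest first.
for charity, cost in sorted(cost_per_life.items(), key=lambda kv: kv[1]):
    print(f"{charity}: {budget / cost:.2f} expected lives for ${budget:,}")

# "Do the most good per resource" = fund the lowest cost per life, all else equal.
best = min(cost_per_life, key=cost_per_life.get)
print(f"Give to: {best}")
```

The point isn’t the specific numbers; it’s that the decision rule is a single argmin over cost-effectiveness, regardless of how many beings I count as morally valuable.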
I somewhat agree with you about this:
“Wow, we need to help current people, current animals, and future people and future animals, all with a subset of present-day resources. What a tremendous task”
However, I think it’s better to act according to “do the most good I can with my given resources, targeting the highest EV or marginal return areas”. Doing good well requires making sacrifices, and the second framing better captures this requirement.
Maybe a way to synthesize my view and your conclusion is this: we have more opportunities to do good than ever before. If saving lives is cheaper than ever, then the opportunity cost of everything else is relatively higher. That is, wasting $500 used to forgo only 0.1 lives; now it forgoes a whole life. This makes wasting our resources even worse than it used to be.
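Spelling out that arithmetic (same made-up numbers as above; a worked equation, not an estimate):

```latex
% Lives forgone by wasting $500, at each era's best available cost per life:
\frac{\$500}{\$5{,}000 / \text{life}} = 0.1 \text{ lives (pre-longtermism)},
\qquad
\frac{\$500}{\$500 / \text{life}} = 1 \text{ life (now)}
```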
Edit: Also, thank you for writing your post, because it gave me an opportunity to reflect on my own beliefs about this. :)
[1] Although realistically I would diversify, because of moral uncertainty, the psychological benefits of doing good with p≈1, empirical uncertainty about how good the LTFF is, the social benefits of giving to near-term causes, wanting to remain connected to current suffering, the fact that it intuitively seems good, etc.
I think both the total view (my argument) and the marginal view (your argument, as I understand it) converge once you consider the second-order effects of your donations on the most effective causes. You’re right that in this post I argue from the total view of the community, effectively saying that going from $50b to $100b is more valuable now than it would have been at any point in the past. But I think the same logic applies to individuals (going from $50b to $50.00001b, say), if you believe your donations displace other donations toward the second-best option, as I think we must.
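A toy sketch of that displacement logic, with entirely invented numbers and a hypothetical two-option world (assuming donors collectively fund opportunities greedily, in order of cost-effectiveness):

```python
# Toy model of donation displacement; all numbers invented for illustration.
# Each opportunity: (name, funding capacity in dollars, expected lives per dollar).
opportunities = [
    ("first-best", 50e9, 1 / 500),     # hypothetical top option, $500/life
    ("second-best", 50e9, 1 / 5_000),  # hypothetical runner-up, $5,000/life
]

def total_lives(pool: float) -> float:
    """Lives saved if the whole pool is allocated greedily, best options first."""
    lives = 0.0
    for _, capacity, lives_per_dollar in opportunities:
        spend = min(pool, capacity)
        lives += spend * lives_per_dollar
        pool -= spend
    return lives

pool = 50e9       # community pool: $50b, which exactly fills the first-best option
my_donation = 10_000

# Marginal value of my donation = change in total lives saved.
marginal = total_lives(pool + my_donation) - total_lives(pool)
print(f"My ${my_donation:,.0f} adds {marginal:.1f} expected lives")  # -> 2.0
```

Because the $50b pool already saturates the first-best option, the marginal $10,000 (or the dollars it displaces) lands at the second-best margin, which is exactly where the total view says the next dollar goes; so the two views converge.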
This is why I think it’s important to step back and make these arguments in total and absolute terms, rather than, as they’re usually made for simplicity, in marginal and relative terms (e.g., an individual choosing between earning to give and direct work). It’s ultimately the total, absolute view that matters, even though the marginal, relative view allows for the simplest decision-making.
Plus, responding within your framework: it just so happens that if you believe longtermism, its growth has added not only more second-best options but probably new first-best options, increasing first-order efficiency like you say. So I think there are multiple ways to arrive at this conclusion :)