That’s definitely a good question to ask. After all, people in the future aren’t here now, and there are plenty of problems we’re facing already. That said, I don’t think we should discount them. I mean, do you or I have any less moral value now than the people who lived a thousand years ago? The value of a human life doesn’t change based on where or when it’s lived. Basically, I think the default hypothesis should be “a human life is worth the same, no matter what,” and we need a compelling reason to think otherwise. I just don’t see one when it comes to future people.
There are some caveats in the real world, where things are messy. If I asked “Why shouldn’t we focus on people in the year 3000?”, your first thought probably wouldn’t be “Because they don’t matter as much.” It’d probably be something like “How do we know we can actually do anything that’ll impact people a thousand years from now?” That’s the hard part, but it’s discounting based on chance of success, not on moral worth. We’re not saying helping people in a thousand years is less valuable, just that it’s a lot harder to do. Still, EA definitely has some ideas. Investing money now and giving later can compound so strongly that the growth outweighs our uncertainty about whether the future giving will succeed. Imagine you could put a thousand dollars into something that would definitely work, or ten thousand into something just as effective per dollar that was about a 50/50 shot: the long shot is still worth five thousand dollars in expectation, five times the sure thing. There’s a whole school of thought called “patient philanthropy” that deals with this; I could send you a podcast episode if you’d like?
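(For anyone who wants the arithmetic spelled out, here’s a rough back-of-the-envelope sketch of the two comparisons above. All of the numbers, the 5% return, and the success probabilities are made up purely for illustration; none of them come from the podcast or any real analysis.)

```python
# Back-of-the-envelope expected-value comparison (all numbers illustrative).

def expected_value(amount, p_success):
    """Expected impact of a donation that only pays off with probability p_success."""
    return amount * p_success

# A sure $1,000 intervention vs. a 50/50 $10,000 one with equal per-dollar impact.
sure_thing = expected_value(1_000, 1.0)   # -> 1000.0
long_shot  = expected_value(10_000, 0.5)  # -> 5000.0
print(f"Sure thing: ${sure_thing:,.0f}   Long shot: ${long_shot:,.0f}")

# Patient philanthropy in one line: invest now, give later.
# Even if we're only, say, 60% confident a grant made in 50 years succeeds,
# a 5% annual return compounding over that time can swamp the discount.
principal, rate, years, p_future = 1_000, 0.05, 50, 0.6
give_now   = principal * 1.0                             # give today, certain
give_later = principal * (1 + rate) ** years * p_future  # invest, then give
print(f"Give now: ${give_now:,.0f}   Give later (expected): ${give_later:,.0f}")
```

The point of the sketch is just that uncertainty and compounding are both quantities you can put side by side and compare, rather than uncertainty automatically winning.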
(Followup: Send them to this episode of the 80,000 Hours podcast if interested: https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/)
===
I’ve definitely leaned into the “conversational” aspect of this one. The argument is less rigorous and sophisticated than a lot of others in this post, but I’ve tried to optimise it to be something I could follow in real time if someone said it to me, without needing a second pass.
Thanks for your submission!