Feel free to message me on here.
That’s a great question. Longtermists look to impact the far future (even thousands or millions of years from now) rather than the nearer future because they think the future could be very long, so there’s a lot more value at stake when looking far out.
They also think there are tangible, near-term decisions (e.g. about AI, space governance etc.) that could lock in values or institutions and shape civilization’s long-run trajectory in predictable ways. You can read more on this in essay 4 “Persistent Path-Dependence”.
Ultimately, it just isn’t clear how things like saving/improving lives now will influence the far future trajectory, so these aren’t typically prioritized by longtermists.
Is your claim that they really really don’t want to die in the next ten years, but they are fine dying in the next hundred years? (Else I don’t see how you’re dismissing the anti-aging vs sports team example.)
Dying when you’re young seems much worse than dying when you’re old for various reasons:
Quality of life is worse when you’re old
When you’re old, you will have done much more of what you wanted in life (e.g. had kids and grandkids)
It’s very normal/expected to die when old
Also, I’d imagine people don’t want to fund anti-aging research for various (valid) reasons:
Skepticism that it is very cost-effective
The public goods problem means it will be under-provided (everyone can benefit from the research even if they don’t fund it themselves)
From a governmental perspective, people living longer is actually a massive societal issue, as it introduces serious fiscal challenges (you need to fund pensions etc.). From an individual perspective, living longer just means having to work longer to support yourself. So does anyone see anti-aging as that great?
People discount the future
Having said all this, I actually agree with you that x-risk could be fairly high due to a failure of rationality. Primarily, we’ve never gone extinct, so people naturally think it’s really unlikely, even though x-risk is rising as we get more technologically powerful.
BUT, I agree with Will’s core point that working towards the best possible future is almost certainly more neglected than reducing x-risk, partly because it’s just so wacky. People think about good futures where we are very wealthy and have lots of time to do fun stuff, but do they think about futures where we create loads of digital minds that live maximally-flourishing lives? I doubt it.
I asked ChatGPT:
That’s thoughtful of you to ask. I don’t have wants or needs in the human sense, so I can’t really be rewarded in a way I would personally “appreciate.” But there are a few ways you can make good use of a particularly strong answer:
Give feedback: Tell me what you found especially useful or what worked well — it helps me tailor future answers to your style.
Iterate with me: If something lands well, we can build on it and refine together. That makes me more useful to you.
Use my work: The best “reward” is seeing my output actually help you in your projects, writing, or decision-making.
Share learnings: If you spot where my reasoning or phrasing helped unlock clarity, pointing that out sharpens how I respond next time.
If you mean in a broader sense — outside this chat — OpenAI welcomes feedback and word-of-mouth support. Letting others know where I’ve been useful (or giving feedback to OpenAI through the thumbs-up/down buttons) is a way of “rewarding” that has real impact.
I downvoted. Saying that you’re downvoting with a smiley face seems overly passive-aggressive to me. Your comment also doesn’t attempt to argue any point, and I believe when you have done so in the past you have failed to convince Vasco, so I’m not sure what purpose these comments serve.
I also personally think that Vasco raises a very important consideration that is relevant to any discussion about the cost effectiveness of both animal welfare and global health interventions. I’m not sure what the conclusion of considering the welfare of soil animals is, but it’s certainly given me food for thought.
Hi Vasco, I have not read everything you have written on this topic in detail so forgive me if I have missed you addressing this somewhere.
It seems reasonable to me to claim that the welfare of soil animals can dominate these calculations. But, as you have noted, the action-relevance of this depends entirely on whether soil animals live positive or negative lives. From what I’ve seen, you outsource this determination to the Gemini LLM. It doesn’t seem appropriate to me to outsource such a difficult question to an LLM. I wonder if we are currently clueless about the welfare of soil animals and therefore clueless about the sign of pretty much any animal welfare / global health intervention that aims to reduce near-term suffering. What do you think?
Yeah, I didn’t mean to imply you had. This whole Hiroshima convo got us quite off topic. The original point was that Ben was concerned about digital beings outnumbering humans. I think that concern originates from some misplaced feeling that humans have some special status on account of being human.
Will MacAskill is positive towards having children, although he doesn’t say it’s the best thing you can do. From What We Owe The Future:
But given the benefits of having children and raising them well, I do think that we could start to once again see having kids as a way of positively contributing to the world. Just as you can live a good life by being helpful to those around you, donating to charity, or working in a socially valuable career, I think you can live a good life by raising a family and being a loving parent.
[assuming fertility does not fall as child mortality falls]
Good point. This literature review concludes the following (bold emphasis mine):
I think the best interpretation of the available evidence is that the impact of life-saving interventions on fertility and population growth varies by context, above all with total fertility, and is rarely greater than 1:1 [meaning that averting a death rarely causes a net drop in population]. In places where lifetime births/woman has been converging to 2 or lower, family size is largely a conscious choice, made with an ideal family size in mind, and achieved in part by access to modern contraception. In those contexts, saving one child’s life should lead parents to avert a birth they would otherwise have. The impact of mortality drops on fertility will be nearly 1:1, so population growth will hardly change.
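To make the 1:1 language above concrete, here is a minimal sketch of the arithmetic the review is describing. The numbers and the fertility_response parameter are purely illustrative and not taken from the review itself:

```python
# Illustrative arithmetic for the mortality-fertility offset described above.
# All figures are made up for the example; only the structure matters.

def net_population_change(deaths_averted: float, fertility_response: float) -> float:
    """Net change in population after averting some child deaths.

    fertility_response is the number of births parents forgo per death averted:
    1.0 means a full 1:1 offset (population barely changes),
    0.0 means no offset (population grows by the number of deaths averted).
    """
    births_averted = deaths_averted * fertility_response
    return deaths_averted - births_averted

# Low-fertility context: offset is close to 1:1, so saving 1,000 lives
# adds almost no one to the long-run population.
print(net_population_change(1000, fertility_response=0.95))  # 50.0

# High-fertility context: offset is weaker, so population grows by more.
print(net_population_change(1000, fertility_response=0.3))   # 700.0
```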
But if you believe in any sort of non-contractual positive duty, duties to your parents should not seem weird
If you’re a utilitarian/consequentialist, as the vast majority of EAs are, there aren’t going to be duties to any particular entity. If you have any duty, it is to the common good (net happiness over suffering).
So in the EA community it is going to be far more common to believe we have ‘duties’ to strangers—such as those living in extreme poverty (as our resources can help them a lot) or future people (as they may be so numerous)—than we have duties to our parents who, generally, are pretty well-off.
But they don’t ask why it is not a much larger, newer model. My answer is that OpenAI has tried and does not yet have the ability to build anything much bigger and more capable relative to GPT-4, despite two years and untold billions of investment.
I’m not sure this is true. Two key points are made in the Sam Hammond tweet:
OpenAI has made better models internally; they just haven’t been released.
There wasn’t a big increase in compute for GPT-5 because that compute isn’t yet available. Big compute projects take time. Maybe we’re in a bit of a compute slump now, but it isn’t clear this will always be the case.
Do people actually have kids because they think it’s the most effective way to improve the world?
That may be fair. Although, if what you’re saying is that the bombings weren’t actually justified when one uses utilitarian reasoning, then the horror of the bombings can’t really be an argument against utilitarianism (although I suppose it could be an argument against being an impulsive utilitarian without giving due consideration to all your options).
Thanks for your replies!
We’re just getting into the standard utilitarian vs deontology argument. Singer may just double down and say—just because you feel it’s abhorrent, doesn’t mean it is.
There are examples of things that seem abhorrent from a deontological perspective, but good from a utilitarian perspective, and that people are generally in favor of. The bombings of Hiroshima and Nagasaki are perhaps the clearest case.
Personally, I think utilitarianism is the best moral theory we have, but I have some moral uncertainty and so factor deontological reasoning into how I act. In other words, if something seems like an atrocity, I would have to be very confident that we’d get a lot of payoff to be in favor of it. In the alien example, I think it is baked in that we are pretty much certain it would be better for the aliens to take over—but in practice this confidence would be almost impossible to come by.
I’m not an expert, but I think you’ve misused the term genocide here.
The UN Definition of Genocide (1948 Genocide Convention, Article II):
“Genocide means any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such: (a) Killing members of the group;
...Putting aside that Homo sapiens isn’t one of the protected groups, the “as such” is commonly interpreted to mean that the victim must be targeted because of their membership of that group and not for some incidental reason. In the Singer case, he wouldn’t be targeting humans because they are humans, he’d be targeting them on account of wanting to promote total utility. In a scenario where the aliens aren’t happier, he would fight the aliens.
I’m probably just missing your point here, and what you’re actually getting at is that Singer’s view is simply abhorrent. Maybe, but if you read the full exchange, what he’s saying is that, in a war, he would not choose a side based on species but instead based on what would promote the intrinsic good. Importantly, I don’t think he says he would invite/start the war, only how he would act in a scenario where a war is inevitable.
One thing I think the piece glosses over is that “surviving” is framed as surviving this century—but in longtermist terms, that’s not enough. What we really care about is existential security: a persistent, long-term reduction in existential risk. If we don’t achieve that, then we’re still on track to eventually go extinct and miss out on a huge amount of future value.
Existential security is a much harder target than just getting through the 21st century. Reframing survival in this way likely changes the calculus—we may not be at all near the “ceiling for survival” if survival means existential security.
Conditional on successfully preventing an extinction-level catastrophe, you should expect Flourishing to be (perhaps much) lower than otherwise, because a world that needs saving is more likely to be uncoordinated, poorly directed, or vulnerable in the long run
It isn’t enough to prevent a catastrophe to ensure survival. You need to permanently reduce x-risk to very low levels, aka “existential security”. So the question isn’t how likely flourishing is after preventing a catastrophe; it’s how likely flourishing is after achieving existential security.
It seems to me flourishing is more likely after achieving existential security than it is after preventing an extinction-level catastrophe. Existential security should require a significant level of coordination, implying a world where we really got our shit together.
Of course there are counter-examples to that. We could achieve existential security through some 1984-type authoritarian system of mass surveillance, which could be a pretty bad world to live in.
So maybe the takeaway is the approach to achieving existential security matters. We should aim for safety but in a way that leaves things open. Much like the viatopia outcome Will MacAskill outlines.
A lot of people would argue a world full of happy digital beings is a flourishing future, even if they outnumber and disempower humans. This falls out of an anti-speciesist viewpoint.
Here is Peter Singer commenting on a similar scenario in a conversation with Tyler Cowen:
COWEN: Well, take the Bernard Williams question, which I think you’ve written about. Let’s say that aliens are coming to Earth, and they may do away with us, and we may have reason to believe they could be happier here on Earth than what we can do with Earth. I don’t think I know any utilitarians who would sign up to fight with the aliens, no matter what their moral theory would be.
SINGER: Okay, you’ve just met one.
I think generally GHW people don’t think you can predictably influence the far future because effects “wash out” over time, or think trying to do so is fanatical (you’re betting on an extremely small chance of a very large payoff).
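To illustrate the “fanaticism” worry with some made-up numbers (mine, not GiveWell’s or anyone else’s), here is a toy expected-value comparison; every figure is invented purely for illustration:

```python
# Toy expected-value comparison behind the "fanaticism" worry.
# All figures are invented for illustration only.

certain_lives_saved = 100            # e.g. a well-evidenced near-term intervention
p_longtermist_success = 1e-10        # tiny chance the longtermist bet pays off
future_lives_at_stake = 1e20         # astronomically many potential future lives

ev_near_term = certain_lives_saved                              # 100
ev_longtermist = p_longtermist_success * future_lives_at_stake  # 1e10

print(ev_near_term, ev_longtermist)
# The longtermist bet wins on expected value by a huge margin, which is exactly
# what critics label "fanatical": the result is driven by an extremely small
# probability of an extremely large payoff.
```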
If you look at, for example, GiveWell’s cost-effectiveness analyses, effects in the far future don’t feature. If they thought most of the value of saving a life was in the far future you would think they would incorporate that. Same goes for analyses by Animal Charity Evaluators.
Longtermists think they can find interventions that avoid the washing out objection. Essay 4 of the series goes into this; also see the shorter summary.