“Long-Termism” vs. “Existential Risk”
The phrase “long-termism” is occupying an increasing share of EA community “branding”. For example, the Long-Term Future Fund, the FTX Future Fund (“we support ambitious projects to improve humanity’s long-term prospects”), and the impending launch of What We Owe The Future (“making the case for long-termism”).
Will MacAskill describes long-termism as:
I think this is an interesting philosophy, but I worry that in practical and branding situations it rarely adds value, and might subtract it.
In The Very Short Run, We’re All Dead
AI alignment is a central example of a supposedly long-termist cause.
But Ajeya Cotra’s Biological Anchors report estimates a 10% chance of transformative AI by 2031, and a 50% chance by 2052. Others (eg Eliezer Yudkowsky) think it might happen even sooner.
Let me rephrase this in a deliberately inflammatory way: if you’re under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know. As a pitch to get people to care about something, this is a pretty strong one.
But right now, a lot of EA discussion about this goes through an argument that starts with “did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself? Did you know that maybe you should care about their problems exactly as much as you care about global warming and other problems happening today?”
Regardless of whether these statements are true, or whether you could eventually convince someone of them, they’re not the most efficient way to make people concerned about something which will also, in the short term, kill them and everyone they know.
The same argument applies to other long-termist priorities, like biosecurity and nuclear weapons. Well-known ideas like “the hinge of history”, “the most important century” and “the precipice” all point to the idea that existential risk is concentrated in the relatively near future—probably before 2100.
The average biosecurity project being funded by the Long-Term Future Fund or the FTX Future Fund is aimed at preventing pandemics in the next 10 or 30 years. The average nuclear containment project is aimed at preventing nuclear wars in the next 10 to 30 years. One reason all of these projects are good is that they will prevent humanity from being wiped out, leading to a flourishing long-term future. But another reason they're good is that if there's a pandemic or nuclear war 10 or 30 years from now, it might kill you and everyone you know.
Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?
I think yes, but only rarely, and in ways that seldom affect real practice.
Long-termism might be more willing to fund Progress Studies type projects that increase the rate of GDP growth by .01% per year in a way that compounds over many centuries. “Value change” type work—gradually shifting civilizational values to those more in line with human flourishing—might fall into this category too.
In practice I rarely see long-termists working on these except when they have shorter-term effects. I think there’s a sense that in the next 100 years, we’ll either get a negative technological singularity which will end civilization, or a positive technological singularity which will solve all of our problems - or at least profoundly change the way we think about things like “GDP growth”. Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes—which puts them on the same page as thoughtful short-termists planning for the next 100 years.
Long-termists might also rate x-risks differently from suffering alleviation. For example, suppose you could choose between saving 1 billion people from poverty (with certainty), or preventing a nuclear war that killed all 10 billion people (with probability 1%), and we assume that poverty is 10% as bad as death. A short-termist might be indifferent between these two causes, but a long-termist would consider the war prevention much more important, since they’re thinking of all the future generations who would never be born if humanity was wiped out.
In practice, I think there's almost never an option to save 1 billion people from poverty with certainty. When I said that there was, that was a hack I had to put in there to make the math work out so that the short-termist would come to a different conclusion from the long-termist. A one-in-a-million chance of preventing apocalypse is worth 7,000 lives, which takes $30 million with GiveWell-style charities. But I don't think long-termists are actually asking for $30 million to make the apocalypse 0.0001% less likely—both because we can't reliably calculate numbers that low, and because if you had $30 million you could probably do much better than 0.0001%. So I'm skeptical that problems like this are likely to come up in real life.
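For readers who want to check the arithmetic, here is a minimal sketch using the post's illustrative numbers; the ~$4,500 cost per life saved is an assumed GiveWell-style figure, not something stated above.

```python
# Rough sketch of the arithmetic above, using the post's illustrative numbers.
# The $4,500-per-life figure is an assumed GiveWell-style cost, not quoted from the post.

world_population = 7e9           # people alive today (approximate)
cost_per_life_saved = 4_500      # assumed GiveWell-style cost of saving one life, in dollars

# A one-in-a-million chance of preventing an apocalypse, valued only on present lives:
p_prevent = 1e-6
expected_lives = p_prevent * world_population
print(f"Expected present lives saved: {expected_lives:,.0f}")        # ~7,000

equivalent_spend = expected_lives * cost_per_life_saved
print(f"GiveWell-equivalent spend: ${equivalent_spend:,.0f}")        # ~$30 million

# The poverty-vs-war comparison from the previous paragraph, with poverty
# weighted at 10% of the badness of death:
poverty_value = 0.10 * 1e9       # 1 billion people saved from poverty, with certainty
war_value = 0.01 * 10e9          # 1% chance of preventing 10 billion deaths
print(poverty_value, war_value)  # both ~1e8 death-equivalents: the short-termist is indifferent
```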
When people allocate money to causes other than existential risk, I think it’s more often as a sort of moral parliament maneuver, rather than because they calculated it out and the other cause is better in a way that would change if we considered the long-term future.
“Long-termism” vs. “existential risk”
Philosophers shouldn’t be constrained by PR considerations. If they’re actually long-termist, and that’s what’s motivating them, they should say so.
But when I’m talking to non-philosophers, I prefer an “existential risk” framework to a “long-termism” framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it’s non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we’re all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?)
I’m interested in hearing whether other people have different reasons for preferring the “long-termism” framework that I’m missing.
Hey Scott—thanks for writing this, and sorry for being so slow to the party on this one!
I think you’ve raised an important question, and it’s certainly something that keeps me up at night. That said, I want to push back on the thrust of the post. Here are some responses and comments! :)
The main view I’m putting forward in this comment is “we should promote a diversity of memes that we believe, see which ones catch on, and mould the ones that are catching on so that they are vibrant and compelling (in ways we endorse).” These memes include both “existential risk” and “longtermism”.
What is longtermism?
The quote of mine you give above comes from Spring 2020. Since then, I’ve distinguished between longtermism and strong longtermism.
My current preferred slogan definitions of each:
Longtermism is the view that we should do much more to protect the interests of future generations. (Alt: that protecting the interests of future generations should be a key moral priority of our time.)
Strong longtermism is the view that protecting the interests of future generations should be the key moral priority of our time. (That’s similar to the quote of mine you give.)
In WWOTF, I promote the weaker claim. In recent podcasts, I've described it something like the following (depending on how flowery I'm feeling at the time):
I prefer to promote longtermism rather than strong longtermism. It's a weaker claim, so I have a higher credence in it, and I feel much more robustly confident in it; at the same time, it captures almost all the value, because in the actual world longtermism and strong longtermism recommend the same actions most of the time, on the current margin.
Is existential risk a more compelling intro meme than longtermism?
My main take is: What meme is good for which people is highly dependent on the person and the context (e.g., the best framing to use in a back-and-forth conversation may be different from one in a viral tweet). This favours diversity; having a toolkit of memes that we can use depending on what’s best in context.
I think it’s very hard to reason about which memes to promote, and easy to get it wrong from the armchair, for a bunch of reasons:
It’s inherently unpredictable which memes do well.
It's incredibly context-dependent. To figure this out, the main thing is just about gathering lots of (qualitative and quantitative) data from the demographic you're interacting with. The memes that resonate most with Ezra Klein podcast listeners are very different from those that resonate most with Tyler Cowen podcast listeners, even though their listeners are very similar people compared to the wider world. And even with respect to one idea, subtly different framings can have radically different audience reactions. (cf. "We care about future generations" vs "We care about the unborn.")
People vary a lot. Even within very similar demographics, some people can love one message while other people hate it.
“Curse of knowledge”—when you’re really deep down the rabbit hole in a set of ideas, it’s really hard to imagine what it’s like being first exposed to those ideas.
Then, at least when we’re comparing (weak) longtermism with existential risk, it’s not obvious which resonates better in general. (If anything, it seems to me that (weak) longtermism does better.) A few reasons:
First, message testing from Rethink suggests that longtermism and existential risk have similarly-good reactions from the educated general public, and AI risk doesn’t do great. The three best-performing messages they tested were:
“The current pandemic has shown that unforeseen events can have a devastating effect. It is imperative that we prepare both for pandemics and other risks which could threaten humanity’s long-term future.”
“In any year, the risk from any given threat might be small—but the odds that your children or grandchildren will face one of them is uncomfortably high.”
“It is important to ensure a good future not only for our children’s children, but also the children of their children.”
So people actually quite like messages that are about unspecified, and not necessarily high-probability threats, to the (albeit nearer-term) future.
As terms to describe risk, “global catastrophic risk” and “long-term risk” did the best, coming out a fair amount better than “existential risk”.
They didn’t test a message about AI risk specifically. The related thing was how much the government should prepare for different risks (pandemics, nuclear, etc), and AI came out worst among about 10 (but I don’t think that tells us very much).
Second, most media reception of WWOTF has been pretty positive so far. This is based mainly on early reviews (esp trade reviews), podcast and journalistic interviews, and the recent profiles (although the New Yorker profile was mixed). Though there definitely has been some pushback (especially on Twitter), I think it's overall been dwarfed by positive articles. And the pushback I have gotten is on the Elon endorsement, the association between EA and billionaires, and on standard objections to utilitarianism — less so on the idea of longtermism itself.
Third, anecdotally at least, a lot of people just hate the idea of AI risk (cf Twitter), thinking of it as a tech bro issue, or doomsday cultism. This has been coming up in the twitter response to WWOTF, too, even though existential risk from AI takeover is only a small part of the book. And this is important, because I’d think that the median view among people working on x-risk (including me) is that the large majority of the risk comes from AI rather than bio or other sources. So “holy shit, x-risk” is mainly, “holy shit, AI risk”.
Do neartermists and longtermists agree on what’s best to do?
Here I want to say: maybe. (I personally don’t think so, but YMMV.) But even if you do believe that, I think that’s a very fragile state of affairs, which could easily change as more money and attention flows into x-risk work, or if our evidence changes, and I don’t want to place a lot of weight on it. (I do strongly believe that global catastrophic risk is enormously important even in the near term, and a sane world would be doing far, far better on it, even if everyone only cared about the next 20 years.)
More generally, I get nervous about any plan that isn’t about promoting what we fundamentally believe or care about (or a weaker version of what we fundamentally believe or care about, which is “on track” to the things we do fundamentally believe or care about).
What I mean by “promoting what we fundamentally believe or care about”:
Promoting goals rather than means. This means that (i) if the environment changes (e.g. some new transformative tech comes along, or the political environment changes dramatically, like war breaks out) or (ii) if our knowledge changes (e.g. about the time until transformative AIs, or about what actions to take), then we’ll take different means to pursue our goals. I think this is particularly important for something like AI, but also true more generally.
Promoting the ideas that you believe most robustly—i.e. that you think you are least likely to change in the coming 10 years. Ideally these things aren’t highly conjunctive or relying on speculative premises. This makes it less likely that you will realise that you’ve been wasting your time or done active harm by promoting wrong ideas in ten years’ time. (Of course, this will vary from person to person. I think that (weak) longtermism is really robustly true and neglected, and I feel bullish about promoting it. For others, the thing that might feel really robustly true is “TAI is a BFD and we’re not thinking about it enough”—I suspect that many people feel they more robustly believe this than longtermism.)
Examples of people promoting means rather than goals, and this going wrong:
“Eat less meat because it’s good for your health” → people (potentially) eat less beef and more chicken.
“Stop nuclear power” (in the 70s) → environmentalists hate nuclear power, even though it’s one of the best bits of clean tech we have.
Examples of how this could go wrong by promoting “holy shit x-risk”:
We miss out on non-x-risk ways of promoting a good long-run future:
E.g. the risk that we solve the alignment problem but AI is used to lock in highly suboptimal values. (Personally, I think a large % of future expected value is lost in this way.)
We highlight the importance of AI to people who are not longtermist. They realise how transformatively good it could be for them and for the present generation (a digital immortality of bliss!) if AI is aligned, and they think the risk of misalignment is small compared to the benefits. They become AI-accelerationists (a common view among Silicon Valley types).
AI progress slows considerably in the next 10 years, and actually near-term x-risk doesn't seem so high. Rather than doing whatever the next-best longtermist thing is, the people who came in via "holy shit x-risk" just do whatever instead, and the people who promoted the "holy shit x-risk" meme get a bad reputation.
So, overall my take is:
“Existential risk” and “longtermism” are both important ideas that deserve greater recognition in the world.
My inclination is to prefer promoting “longtermism” because that’s closer to what I fundamentally believe (in the sense I explain above), and it’s nonobvious to me which plays better PR-wise, and it’s probably highly context-dependent.
Let’s try promoting them both, and see how they each catch on.
Thanks for writing this! That overall seems pretty reasonable, and from a marketing perspective I am much more excited about promoting “weak” longtermism than strong longtermism.
A few points of pushback:
I think that to work on AI Risk, you need to buy into AI Risk arguments. I'm unconvinced that buying longtermism first really shifts the difficulty of figuring this point out. And I think that if you buy AI Risk, longtermism isn't really that cruxy. So if our goal is to get people working on AI Risk, marketing longtermism first is strictly harder (even if longtermism itself may be a much easier sell).
I think that very few people say “I buy the standard AI X-Risk arguments and that this is a pressing thing, but I don’t care about future people so I’m going to rationally work on a more pressing problem”—if someone genuinely goes through that reasoning then more power to them!
I also expect that people have done much more message testing + refinement on longtermism than AI Risk, and that good framings could do much better—I basically buy the claim that it’s a harder sell though
Caveat: This reasoning applies more to “can we get people working on AI X-Risk with their careers” more so than things like broad societal value shifting
Caveat: Plausibly there’s enough social proof that people who care about longtermism start hanging out with EAs and are exposed to a lot of AI Safety memes and get there eventually? And it’s a good gateway thing?
I want AI Risk to be a broad tent where people who don't buy longtermism feel welcome. I'm concerned about a mood affiliation problem where people who don't buy longtermism but hear it phrased as an abstract philosophical problem that requires you to care about the 10^30 future people won't want to work on it, even though they buy the object level. This kind of thing shouldn't hinge on your conclusions on contentious questions in moral philosophy!
More speculatively: It's much less clear to me that pushing on things like general awareness of longtermism or long-term value change matters in a world with <20-year AI timelines. I expect the world to get super weird after that, where more diffuse forms of longtermism don't matter much. Are you arguing that this kind of value change over the next 20 years makes it more likely that the correct values are loaded into the AGI, and that's how it affects the future?
On this particular point
I can't find info on Rethink's site; is there anything you can link to?
Of the three best-performing messages you’ve linked, I think the first two emphasise risk much more heavily than longtermism. The third does sound more longtermist, but I still suspect the risk-ish phrase ‘ensure a good future’ is a large part of what resonates.
All that said, more info on the tests they ran would obviously update my position.
This seems correct to me, and I would be excited to see more of them. However, I wouldn’t interpret this as meaning ‘longtermism and existential risk have similarly-good reactions from the educated general public’, I would read this as risk messaging performing better.
Also, messages ‘about unspecified, and not necessarily high-probability threats’ is not how I would characterize most of the EA-related press I’ve seen recently (NYTimes, BBC, Time, Vox).
(More generally, I mostly see journalists trying to convince their readers that an issue is important using negative emphasis. Questioning existing practices is important: they might be ineffective; they might be unsuitable to EA aims (e.g. manipulative, insufficiently truth-seeking, geared to persuade as many people as possible which isn’t EA’s objective, etc.). But I think the amount of buy-in this strategy has in high-stakes, high-interest situations (e.g. US presidential elections) is enough that it would be valuable to be clear on when EA deviates from it and why).
tl;dr: I suspect risk-ish messaging works better. Journalists seem to have a strong preference for it. Most of the EA messaging I’ve seen recently departs from this. I think it would be great to be very clear on why. I’m aware I’m missing a lot of data. It would be great to see the data from rethink that you referenced. Thanks!
Thanks for explaining, really interesting and glad so much careful thinking is going into communication issues!
FWIW I find the "meme" framing you use here offputting. The framing feels kinda uncooperative, as if we're trying to trick people into believing in something, instead of making arguments to convince people who want to understand the merits of an idea. I associate memes with ideas that are selected for being easy and fun to spread, that likely affirm our biases, and that spread largely without the constraint of whether they are convincing upon reflection, true, or helpful to the brain that gets "infected" by the meme.
Some support for this interpretation from the Wikipedia introduction:
I agree with Scott Alexander that when talking with most non-EA people, an X risk framework is more attention-grabbing, emotionally vivid, and urgency-inducing, partly due to negativity bias, and partly due to the familiarity of major anthropogenic X risks as portrayed in popular science fiction movies & TV series.
However, for people who already understand the huge importance of minimizing X risk, there's a risk of burnout, pessimism, fatalism, and paralysis, which can be alleviated by longtermism and more positive visions of desirable futures. This is especially important when current events seem all doom'n'gloom, when we might ask ourselves 'what about humanity is really worth saving?' or 'why should we really care about the long-term future, if it'll just be a bunch of self-replicating galaxy-colonizing AI drones that are no more similar to us than we are to late Permian proto-mammal cynodonts?'
In other words, we in EA need long-termism to stay cheerful, hopeful, and inspired about why we’re so keen to minimize X risks and global catastrophic risks.
But we also need longtermism to broaden our appeal to the full range of personality types, political views, and religious views out there in the public. My hunch as a psych professor is that there are lots of people who might respond better to longtermist positive visions than to X risk alarmism. It’s an empirical question how common that is, but I think it’s worth investigating.
Also, a significant % of humanity is already tacitly longtermist in the sense of believing in an infinite religious afterlife, and trying to act accordingly. Every Christian who takes their theology seriously & literally (i.e. believes in heaven and hell), and who prioritizes Christian righteousness over the ‘temptations of this transient life’, is doing longtermist thinking about the fate of their soul, and the souls of their loved ones. They take Pascal’s wager seriously; they live it every day. To such people, X risks aren’t necessarily that frightening personally, because they already believe that 99.9999+% of sentient experience will come in the afterlife. Reaching the afterlife sooner rather than later might not matter much, given their way of thinking.
However, even the most fundamentalist Christians might be responsive to arguments that the total number of people we could create in the future—who would all have save-able souls—could vastly exceed the current number of Christians. So, more souls for heaven; the more the merrier. Anybody who takes a longtermist view of their individual soul might find it easier to take a longtermist view of the collective human future.
I understand that most EAs are atheists or agnostics, and will find such arguments bizarre. But if we don’t take the views of religious people seriously, as part of the cultural landscape we’re living in, we’re not going to succeed in our public outreach, and we’re going to alienate a lot of potential donors, politicians, and media influencers.
There’s a particular danger in overemphasizing the more exotic transhumanist visions of the future, in alienating religious or political traditionalists. For many Christians, Muslims, and conservatives, a post-human, post-singularity, AI-dominated future would not sound worth saving. Without any humane connection to their human social world as it is, they might prefer a swift nuclear Armageddon followed by heavenly bliss, to a godless, soulless machine world stretching ahead for billions of years.
EAs tend to score very highly on Openness to Experience. We love science fiction. We like to think about post-human futures being potentially much better than human futures. But if that becomes our dominant narrative, we will alienate the vast majority of currently living humans, who score much lower on Openness.
If we push the longtermist narrative to the general public, we better make the long-term future sound familiar enough to be worth fighting for.
Based on my memory of how people thought while growing up in the church, I don't think increasing the number of saveable souls is something that makes sense for a Christian—or within any sort of longtermist utilitarian framework at all.
Ultimately god is in control of everything. Your actions are fundamentally about your own soul, and your own eternal future, and not about other people. Their fate is between them and God, and he who knows when each sparrow falls will not forget them.
I remember my father explicitly saying that he regretted not having more children because he’s since learned that God wants us to create more souls for him. Didn’t make sense to me even as a Christian at the time, but the idea is out there.
There are fringe movements (ex: Quiverfull) that focus on procreation as a way of living out God’s will, but very few. What resonates with Christians is a “stewardship” mindset—using our God-given abilities and opportunities wisely. The Bible is full of stories of an otherwise-unspecial person being at the right time and place to make a historically impactful decision.
Eliezer’s underrated fun theory sequence tackles this.
“However, even the most fundamentalist Christians might be responsive to arguments that the total number of people we could create in the future—who would all have save-able souls—could vastly exceed the current number of Christians”.
I had thought about the above before, thanks for pointing it out!
Agree that X-risk is a better initial framing than longtermism—it matches what the community is actually doing a lot better. For this reason, I’m totally on board with “x-risk” replacing “longtermism” in outreach and intro materials. However, I don’t think the idea of longtermism is totally obsolete, for a few reasons:
Longtermism produces a strategic focus on "the last person" that this "near-term x-risk" view doesn't. This isn't super relevant for AI, but it makes more sense in the context of biosecurity. Pandemics with the potential to wipe out everyone are way worse than pandemics which merely kill 99% of people, and the ways we prepare for them seem likely to differ. On the near-term x-risk view, bunkers and civilizational recovery plans don't make much sense.
S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren’t captured by the short-term x-risk view.
The numbers you give for why x-risk might be the most important cause area even if we ignore the long-term future ($30 million for a 0.0001% reduction in x-risk) don't seem totally implausible. The world is big, and if you're particularly pessimistic about changing it, then this might not be enough to budge you. Throw in an extra 10^30, though, and you've got a really strong argument, if you're the kind of person that takes numbers seriously.
Submitting this now because it seems important, and I want to give this comment a chance to bubble to the top. Will fill in more reasons later if any major ones come up as I continue thinking.
Why not?
An existential risk is a risk that threatens the destruction of humanity’s long-term potential. But s-risks are worrisome not only because of the potential they threaten to destroy, but also because of what they threaten to replace this potential with (astronomical amounts of suffering).
I think the “short-term x-risk view” is meant to refer to everyone dying, and ignoring the lost long-term potential. Maybe s-risks could be similarly harmful in the short term, too.
Spreading wild animals to space isn't bad for any currently existing humans or animals, so it either isn't counted under thoughtful short-termism or is discounted heavily. Same with a variety of S-risks (e.g. an eventual stable totalitarian regime 100+ years out, slow space colonization, slow build-up of Matrioshka brains with suffering simulations/sub-routines, etc.)
Oop, thanks for the correction. To be honest I'm not sure what exactly I was thinking originally, but maybe this is true for non-AI S-risks that are slow, like spreading wild animals to space? I think this is mostly just false tho >:/
See also Neel Nanda’s recent Simplify EA Pitches to “Holy Shit, X-Risk”.
No offense to Neel’s writing, but it’s instructive that Scott manages to write the same thesis so much better. It:
is 1/3 the length
Caveats are naturally interspersed, e.g. “Philosophers shouldn’t be constrained by PR.”
No extraneous content about Norman Borlaug, leverage, etc
has a less bossy title
distills the core question using crisp phrasing, e.g. “Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?” (my emphasis)
...and a ton of other things. Long-live the short EA Forum post!
FWIW I would not be offended if someone said Scott’s writing is better than mine. Scott’s writing is better than almost everyone’s.
Your comment inspired me to work harder to make my writings more Scott-like.
Thanks, I had read that but failed to internalize how much it was saying this same thing. Sorry to Neel for accidentally plagiarizing him.
I didn’t mean to imply that you were plagiarising Neel. I more wanted to point out that that many reasonable people (see also Carl Shulman’s podcast) are pointing out that the existential risk argument can go through without the longtermism argument.
I posted the graphic below on twitter back in Nov. These three communities & sets of ideas overlap a lot and I think reinforce one another, but they are intellectually & practically separable, and there are people in each section doing great work. Just because someone is in one section doesn’t mean they have to be, or are, committed to others.
No worries, I’m excited to see more people saying this! (Though I did have some eerie deja vu when reading your post initially...)
I’d be curious if you have any easy-to-articulate feedback re why my post didn’t feel like it was saying the same thing, or how to edit it to be better?
(EDIT: I guess the easiest object-level fix is to edit in a link at the top to your’s, and say that I consider you to be making substantially the same point...)
I'm not so sure about this. Speaking as someone who talks with new EAs semi-frequently, it seems much easier to get people to take the basic ideas behind longtermism seriously than, say, the idea that there is a significant risk that they will personally die from unaligned AI. I do think that diving deeper into each issue sometimes flips reactions—longtermism takes you to weird places on sufficient reflection, AI risk looks terrifying just from compiling expert opinions—but favoring the approach that shifts the burden from the philosophical controversy to the empirical controversy doesn't seem like an obviously winning move. The move that seems both best for hedging this, and just the most honest, is being upfront about your views on both the philosophical and the empirical questions, and assuming that convincing someone of even a somewhat more moderate version of either or both views will make them take the issues much more seriously.
Hmmmm, that is weird in a way, but also as someone who has in the last year been talking with new EAs semi-frequently, my intuition is that they often will not think about things the way I expect them to.
Really? I didn’t find their reactions very weird, how would you expect them to react?
Thanks for this post! I think I have a different intuition that there are important practical ways where longtermism and x-risk views can come apart. I’m not really thinking about this from an outreach perspective, more from an internal prioritisation view. (Some of these points have been made in other comments also, and the cases I present are probably not as thoroughly argued as they could be).
Extinction versus Global Catastrophic Risks (GCRs)
It seems likely that a short-termist with the high estimates of risks that Scott describes would focus on GCRs not extinction risks, and these might come apart.
To the extent that a short-termist framing views going from 80% to 81% population loss as equally bad as going from 99% to 100%, it seems plausible to care less about e.g. refuges to evade pandemics. Other approaches like ALLFED and civilisational resilience work might look less effective on the short-termist framing also. Even if you also place some intrinsic weight on preventing extinction, this might not be enough to make these approaches look cost-effective.
Sensitivity to views of risk
Some people may be more sceptical of x-risk estimates this century, but might still reach the same prioritisation under the long-termist framing as the cost is so much higher.
This maybe depends on how hard you think the "x-risk is really high" pill is to swallow compared to the "future lives matter equally" pill.
Suspicious Convergence
Going from not valuing future generations to valuing future generations seems initially like a huge change in values where you’re adding this enormous group into your moral circle. It seems suspicious that this shouldn’t change our priorities.
It's maybe not quite as bad as it sounds, since it seems reasonable to expect some convergence between what makes lives today good and what makes future lives good. However, especially if you're optimising for maximum impact, you would expect these to come apart.
The world could be plausibly net negative
To the extent you think farmed animals suffer, and that wild animals live net negative lives, a large-scale extinction event might not reduce welfare that much in the short term. This maybe seems less true for a pandemic that would kill all humans (although that would presumably substantially reduce the number of animals in factory farms). But, for example, a failed alignment situation where everything becomes paperclips doesn't seem as bad if all the animals were suffering anyway.
The future might be net negative
If you think that, given no deadly pandemic, the future might be net negative (e.g. because of s-risks, or potentially "meh" futures, or because you're very sceptical about AI alignment going well) then preventing pandemics doesn't actually look that good under a longtermist view.
General improvements for future risks/Patient Philanthropy
As Scott mentions, other possible long-termist approaches such as value spreading, improving institutions, or patient philanthropic investment don't come up under the x-risk view. I think you should be more inclined towards these approaches if you expect new risks to appear in the future, provided we make it past current risks.
It seems that a possible objection to all these points is that AI risk is really high and we should just focus on AI alignment (as it’s more than just an extinction risk like bio).
ALLFED-type work is likely highly cost-effective from the short-term perspective; see the global and country-specific (US) analyses.
See also The person-affecting value of existential risk reduction by Gregory Lewis.
I don't have a strong preference. There are some aspects in which longtermism can be the better framing, at least sometimes.
I. In a "longtermist" framework, x-risk reduction is the most important thing to work on across many orders of magnitude of uncertainty about the probability of x-risk in the next e.g. 30 years (due to the weight of the long-term future). Even if AI-related x-risk is only 10^-3 in the next 30 years, it is still an extremely important problem, or even the most important one. In a "short-termist" view with, say, a discount rate of 5%, it is not nearly so clear.
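To make the contrast in point I concrete, here is a minimal sketch with placeholder numbers; the billion-year future and the per-year unit of value are assumptions for illustration only, not the commenter's figures.

```python
# Minimal sketch of the contrast in point I, with made-up placeholder numbers.
# "value_per_year" is an arbitrary unit for one year of civilisation's existence.

value_per_year = 1.0
p_xrisk = 1e-3                   # the hypothetical 30-year AI x-risk from the comment

# Zero pure time preference: suppose the future is worth a billion years of civilisation.
# Even a 1-in-1,000 chance of losing it is enormous.
undiscounted_future = value_per_year * 1e9
print(p_xrisk * undiscounted_future)    # 1,000,000 year-equivalents at stake

# 5% annual discount rate: all future years together are worth only ~20 present years
# (geometric sum, roughly 1 / 0.05), so the same 1-in-1,000 risk looks far smaller.
discount_rate = 0.05
discounted_future = value_per_year / discount_rate
print(p_xrisk * discounted_future)      # ~0.02 year-equivalents at stake
```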
The short-termist urgency of x-risk ("you and everyone you know will die") depends on the x-risk probability being actually high, like of order 1 percent, or tens of percent. Arguments for why this probability is actually so high are usually brittle pieces of mathematical philosophy (eg many specific individual claims by Eliezer Yudkowsky) or brittle uses of proxies with a lot of variables obviously missing from the reasoning (eg the report by Ajeya Cotra). Actual disagreements about probabilities are often in fact grounded in black-box intuitions about esoteric mathematical concepts. It is relatively easy to come up with brittle pieces of philosophy arguing in the opposite direction: why this number is low. In fact my actual, action-guiding estimate is not based on an argument conveyable in a few paragraphs, but more on something like "the feeling you get after working on this over years". What I can offer others is something like "an argument from testimony", and I don't think it's that great.
II. Longtermism is a positive word, pointing toward the fact that the future could be large and nice. X-risk is the opposite.
Similar: AI safety vs AI alignment. My guess is the "AI safety" framing is by default more controversial and gets more pushback (eg a "safety department" is usually not the most loved part of an organisation, with connotations like "safety people want to prevent us from doing what we want").
It’s not clear the loss of human life dominates the welfare effects in the short term, depending on how much moral weight you assign to nonhuman animals and how their lives are affected by continued human presence and activity. It seems like human extinction would be good for farmed animals (dominated by chickens, fish and invertebrates), and would have unclear sign for wild animals (although my own best guess is that it would be bad for wild animals).
Of course, if you take a view that’s totally neutral about moral patients who don’t yet exist, then few of the nonhuman animals that would be affected are alive today, and what happens to the rest wouldn’t matter in itself.
I think there is a key difference between longtermists and thoughtful short-termists which is surprisingly under-discussed.
Longtermists don't just want to reduce x-risk; they want to permanently reduce x-risk to a low level, i.e. achieve existential security. Without existential security the longtermist argument just doesn't go through. A thoughtful short-termist who is concerned about x-risk probably won't care about existential security; they probably just want to reduce x-risk to the lowest level possible in their lifetime.
Achieving existential security may require novel approaches. Some have said AI can help us achieve it, others say we need to promote international cooperation, and others say we may need to maximise economic growth or technological progress to speed through the time of perils. These approaches may seem lacking to a thoughtful short-termist, who may prefer reducing specific risks.
Maybe. I mean, I've been thinking about this a lot lately in the context of Phil Torres's argument about messianic tendencies in longtermism, and I think he's basically right that it can push people towards ideas that don't have any guard rails.
A total utilitarian longtermist would prefer a 99 percent chance of human extinction with a 1 percent chance of a glorious transhuman future stretching across the lightcone to a 100 percent chance of humanity surviving for 5 billion years on Earth.
That, after all, is what shutting up and multiplying tells you—so the idea that longtermism makes luddite solutions to x-risk (which, to be clear, would also be incredibly difficult to implement and maintain) extra unappealing, relative to how a short-termist might feel about them, seems right to me.
Of course there is also the other direction: if there were a 1-in-a-trillion chance that activating this AI would kill us all, and a 999-billion-in-a-trillion chance it would be awesome, but by waiting a hundred years you could have an AI with only a 1-in-a-quadrillion chance of killing us all, a short-termist pulls the switch, while the longtermist waits.
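A rough sketch of the expected-value arithmetic behind both thought experiments; every magnitude here (the 10^30 lightcone value, the population and lifespan figures) is an assumption chosen purely for illustration.

```python
# Sketch of the two thought experiments above, with placeholder magnitudes.

lightcone_future = 1e30        # assumed value of a "glorious transhuman future", in life-years
earth_future = 5e9 * 1e10      # 5 billion years on Earth at ~10 billion people

# Thought experiment 1: 1% chance of the lightcone future vs. certainty of the Earth future.
print(0.01 * lightcone_future > 1.0 * earth_future)    # True: the gamble wins on expected value

# Thought experiment 2: activate the AI now, or wait 100 years for a safer one.
present_population = 8e9
cost_of_waiting = 100 * present_population              # rough life-years the present generation forgoes

longtermist_loss_now = 1e-12 * lightcone_future         # tiny risk of losing everything
longtermist_loss_wait = 1e-15 * lightcone_future + cost_of_waiting
print(longtermist_loss_wait < longtermist_loss_now)     # True: the longtermist waits

shorttermist_loss_now = 1e-12 * present_population * 50     # ~50 remaining life-years per person
shorttermist_loss_wait = cost_of_waiting                    # the present generation misses out
print(shorttermist_loss_wait > shorttermist_loss_now)       # True: the short-termist pulls the switch
```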
Also, of course, there's model error: any estimate that puts a number like "1 in a trillion" on something even slightly interesting happening in the real world is a nonsensical, bad calculation.
I think ASB’s recent post about Peak Defense vs Trough Defense in Biosecurity is a great example of how the longtermist framing can end up mattering a great deal in practical terms.
MacAskill (who I believe coined the term?) does not think that the present is the hinge of history. I think the majority view among self-described longtermists is that the present is the hinge of history. But the term unites everyone who cares about things that are expected to have large effects on the long-run future (including but not limited to existential risk).
I think the term’s agnosticism about whether we live at the hinge of history and whether existential risk in the next few decades is high is a big reason for its popularity.
I think that the longtermist EA community mostly acts as if we’re close to the hinge of history, because most influential longtermists disagree with Will on this. If Will’s take was more influential, I think we’d do quite different things than we’re currently doing.
I’d love to hear what you think we’d be doing differently. With JackM, I think if we thought that hinginess was pretty evenly distributed across centuries ex ante we’d be doing a lot of movement-building and saving, and then distributing some of our resources at the hingiest opportunities we come across at each time interval. And in fact that looks like what we’re doing. Would you just expect a bigger focus on investment? I’m not sure I would, given how much EA is poised to grow and how comparably little we’ve spent so far. (Cf. Phil Trammell’s disbursement tool https://www.philiptrammell.com/dpptool/)
I think if we’re at the most influential point in history “EA community building” doesn’t make much sense. As others have said it would probably make more sense to be shouting about why we’re at the most influential point in history i.e. do “x-risk community building” or of course do more direct x-risk work.
I suspect we’d also do less global priorities research (although perhaps we don’t do that much as it is). If you think we’re at the most influential time you probably have a good reason for thinking that (x-risk abnormally high) which then informs what we should do (reduce it). So you wouldn’t need much more global priorities research. You would still need more granular research into how to reduce x-risk though.
More is also being said about the possibility of investing for the future financially, which isn't a great idea if we're at the most influential time in history.
I agree the movement is mostly “hingy” in nature but perhaps not to the same extent you do. 80,000 Hours is an influential body that promotes EA community building, global priorities research, and to some extent investing for the future.
I’m not sure I agree with that. It seems to me that EA community building is channelling quite a few people to direct existential risk reduction work.
My point is that you could engage in “x-risk community building” which may more effectively get people working on reducing x-risk than “EA community building” would.
There are a bunch of considerations affecting that, including that we already do EA community building and that big switches tend to be costly. However that pans out in aggregate, I think "doesn't make much sense" is an overstatement.
I never actually said we should switch, but if we knew from the start “oh wow we live at the most influential time ever because x-risk is so high” we probably would have created an x-risk community not an EA one.
And to be clear I’m not sure where I personally come out on the hinginess debate. In fact I would say I’m probably more sympathetic to Will’s view that we currently aren’t at the most influential time than most others are.
My feeling is that it went a bit like this: people who wanted to attack global poverty efficiently decided to call themselves effective altruists, and then a bunch of Less Wrongers came over and convinced (a lot of) them that 'hey, going extinct is an even bigger deal', but the name still stuck, because names are sticky things.
That also depends on how wide you consider a "point". A lot of longtermists talk of this as the "most important century", not the most important year, or even decade. Considering EA as a whole is less than twenty years old, investing in EA and global priorities research might still make sense, even under a simplified model where 100% of the impact EA will ever have occurs by 2100, and then we don't care any more. Given a standard explore/exploit algorithm, we should spend around 37% of the available time exploring, so if we assume EA started around 2005, we should still be exploring until 2040 or so before pivoting and going all-in on the best things we've found.
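For what it's worth, the 1/e ("37%") optimal-stopping arithmetic in that comment works out roughly like this under its simplified assumptions:

```python
# Sketch of the 37%-rule arithmetic above: explore the first 1/e of a fixed horizon,
# then commit to the best option found (the classic optimal-stopping heuristic).

import math

start, horizon_end = 2005, 2100       # the comment's simplified assumptions
horizon = horizon_end - start         # 95 years
explore_fraction = 1 / math.e         # ~0.37

switch_year = start + explore_fraction * horizon
print(round(switch_year))             # ~2040: keep exploring until then, roughly
```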
Some loose data on this:
Of the ~900 people who answered my Twitter poll about whether we lived in the most important century, about a third said "yes," about a third said "no," and about a third said "maybe."
As Nathan Young mentioned in his comment, this argument is also similar to Carl Shulman’s view expressed in this podcast: https://80000hours.org/podcast/episodes/carl-shulman-common-sense-case-existential-risks/
Speaking about AI Risk particularly, I haven't bought into the idea that there's a "cognitively substantial" chance AI could kill us all by 2050. And even if I had, many of my interlocutors haven't either. There are two key points to get across to bring the average interlocutor on the street or at a party to an Eliezer Yudkowsky level of worrying:
Transformative AI will likely happen within 10 years, or 30.
There’s a significant chance it will kill us all, or at least a catastrophic number of people (e.g. >100m)
It's not trivial to convince people of either of these points without sounding a little nuts. So I understand why some people prefer to take the longtermist framing. Then it doesn't matter whether transformative AI will happen in 10 years or 30 or 100, and you only have to make the argument about why you should care about the magnitude of this problem.
If I think AI has a maybe 1% chance of being a catastrophic disaster, rather than, say, the 1 in 10 that Toby Ord gives it over the next 100 years or the higher risk that Yud gives it (>50%? I haven't seen him put a number to it)...then I have to go through the additional step of explaining to someone why they should care about a 1% risk of something. After the pandemic, where the statistically average person has a ~1% chance of dying from covid, it has been difficult to convince something like a third of the population to give a shit about it. The problem with small numbers like 1%, or even 10%, is a lot of people just shrug and dismiss them. Cognitively they round to zero. But the conversation "convince me 1% matters" can look a lot like just explaining longtermism to someone.
The way I like to describe it to my Intro to EA cohorts in the Existential Risk week is to ask “How many people, probabilistically, would die each year from this?”
So, if I think there’s a 10% chance AI kills us in the next 100 years, that’s 1 in 1,000 people “killed” by AI each year, or 7 million per year—roughly 17x more than malaria.
If I think there’s a 1% chance, AI risk kills 700,000 - it’s still just as important as malaria prevention, and much more neglected.
If I think there’s an 0.1% chance, AI kills 70,000 - a non-trivial problem, but not worth spending as many resources on as more likely concerns.
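A minimal sketch of that per-year conversion; the world-population and malaria figures are rough assumed values rather than numbers from the comment.

```python
# Sketch of the "probabilistic deaths per year" framing above.

world_population = 7e9
malaria_deaths_per_year = 4e5    # assumed rough annual malaria death toll

def implied_annual_deaths(p_catastrophe: float, years: float = 100) -> float:
    """Spread a probability of everyone dying over a time window, as expected deaths per year."""
    return (p_catastrophe / years) * world_population

for p in (0.10, 0.01, 0.001):
    deaths = implied_annual_deaths(p)
    print(f"{p:>5.1%} risk over 100y -> {deaths:>9,.0f} 'deaths'/year "
          f"(~{deaths / malaria_deaths_per_year:.1f}x malaria)")
```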
That said, this only covers part of the inferential distance—people in Week 5 of the Intro to EA cohort are already used to reasoning quantitatively about things and analysing cost-effectiveness.
Thank you for writing this! This helped me understand my negative feelings towards long-termist arguments so much better.
In talking to many EA university students and organizers, I've found that many of them have serious reservations about long-termism as a philosophy, but not as a practical project, because long-termism as a practical project usually means "don't die in the next 100 years", which is something we can pretty clearly make progress on (which is important, since the usual objection is that maybe we can't influence the long-term future).
I’ve been frustrated that in the intro fellowship and in EA conversations we must take such a strange path to something so intuitive: let’s try to avoid billions of people dying this century.
Scott, thanks so much for this post. It’s been years coming in my opinion. FWIW, my reason for making ARCHES (AI Research Considerations for Human Existential Safety) explicitly about existential risk, and not about “AI safety” or some other glomarization, is that I think x-risk and x-safety are not long-term/far-off concerns that can be procrastinated away.
https://forum.effectivealtruism.org/posts/aYg2ceChLMRbwqkyQ/ai-research-considerations-for-human-existential-safety (with David Krueger)
Ideally, we need to engage as many researchers as possible, thinking about as many aspects of a functioning civilization as possible, to assess how A(G)I can creep into those corners of civilization and pose an x-risk, with cybersecurity / internet infrastructure and social media being extremely vulnerable fronts that are easily salient today.
As I say this, I worry that other EAs will get worried that talking to folks working on cybersecurity or recommender systems necessarily means abandoning existential risk as a priority, because those fields have not historically taken x-risk seriously.
However, for better or for worse, it’s becoming increasingly easy for everyone to imagine cybersecurity and/or propaganda disasters involving very powerful AI systems, such that x-risk is increasingly not-a-stretch-for-the-imagination. So, I’d encourage anyone who feels like “there is no hope to convince [group x] to care” to start re-evaluating that position (e.g., rather than aiming/advocating for drastic interventions like invasive pivotal acts). I can’t tell whether or not you-specifically are in the “there is no point in trying” camp, but others might be, and in any case I thought it might be good to bring up
In summary: as tech gets scarier, we should have some faith that people will be more amenable to arguments that it is in fact dangerous, and re-examine whether this-group or that-group is worth engaging on the topic of existential safety as a near-term priority.
Are there actually any short-termists? Eg. people who have nonzero pure time preference?
IMO everyone has pure time preference (descriptively, as a revealed preference). To me it just seems commonsensical, but it is also very hard to mathematically make sense of rationality without pure time preference, because of issues with divergent/unbounded/discontinuous utility functions. My speculative first-approximation theory of pure time preference for humans is: choose a policy according to minimax regret over all exponential time discount constants, starting from around the scale of a natural human lifetime and going to infinity. For a better approximation, you need to also account for hyperbolic time discounting.
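For concreteness, here is one way the decision rule described above could be written down. This is my own formalization of the comment, not something the commenter has endorsed; T stands for a timescale around a human lifetime, and u_t(π) is the utility flow at time t under policy π:

```latex
% Discounted utility at timescale \tau, regret at that timescale, and the
% minimax-regret policy over all timescales \tau \ge T.
U_\tau(\pi) = \sum_{t \ge 0} e^{-t/\tau}\, u_t(\pi), \qquad
R_\tau(\pi) = \sup_{\pi'} U_\tau(\pi') - U_\tau(\pi), \qquad
\pi^{*} \in \arg\min_{\pi}\; \sup_{\tau \in [T,\,\infty)} R_\tau(\pi).
```

The hyperbolic correction mentioned at the end would presumably replace the exponential weight e^{-t/τ} with something like 1/(1 + t/τ), but that is my guess at the intended refinement.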
Can’t you get the integral to converge with discounting for exogenous extinction risk and diminishing marginal utility? You can have pure time preference = 0 but still have a positive discount rate.
The question is, what is your prior about extinction risk? If your prior is sufficiently uninformative, you get divergence. If you dogmatically believe in extinction risk, you can get convergence, but then it’s pretty close to having an intrinsic time discount. To the extent it is not the same, the difference comes through privileging hypotheses that are harmonious with your dogma about extinction risk, which seems questionable.
Yes, if the extinction rate is high (and precise) enough, then it converges, but otherwise not.
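To make the disagreement concrete, here is the simplest version of the calculation, under assumptions I am adding for illustration: a constant flow utility u, pure time preference ρ, and a known constant extinction hazard λ, so the probability of surviving to time t is e^{-λt}:

```latex
% Expected discounted utility when the utility flow stops at an exponentially
% distributed extinction time with hazard \lambda:
\mathbb{E}\left[\int_0^{T_{\mathrm{ext}}} u\, e^{-\rho t}\, dt\right]
  = \int_0^\infty u\, e^{-\rho t}\, e^{-\lambda t}\, dt
  = \frac{u}{\rho + \lambda}.
```

This is finite whenever ρ + λ > 0, so ρ = 0 is fine as long as λ is known to be positive. But if λ is itself uncertain, with prior mass arbitrarily close to zero, or if u grows over time faster than e^{λt}, the expectation can diverge, which is exactly the disagreement in the exchange above.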
Regarding your first comment, I’m focusing on the normative question, not descriptive (ie. what should a social planner do?). So I’m asking if there are EAs who think a social planner should have nonzero pure time preference.
I dunno if I count as “EA”, but I think that a social planner should have nonzero pure time preference, yes.
Why?
Because, ceteris paribus, I care about things that happen sooner more than about things that happen later. And, like I said, not having pure time preference seems incoherent.
As a meta-sidenote, I find that arguments about ethics are rarely constructive, since there is too little in the way of agreed-upon objective criteria and too much in the way of social incentives to voice / not voice certain positions. In particular when someone asks why I have a particular preference, I have no idea what kind of justification they expect (from some ethical principle they presuppose? evolutionary psychology? social contract / game theory?)
This is separate to the normative question of whether or not people should have zero pure time preference when it comes to evaluating the ethics of policies that will affect future generations. Surely the fact that I’d rather have some cake today rather than tomorrow cannot be relevant when I’m considering whether or not I should abate carbon emissions so my great grandchildren can live in a nice world—these simply seem separate considerations with no obvious link to each other. If we’re talking about policies whose effects don’t (predictably) span generations I can perhaps see the relevance of my personal impatience, but otherwise I don’t.
Also, having non-zero pure time preference has counterintuitive implications. From here:
So if hypothetically we were alive around King Tut’s time and we were given the mandatory choice to either torture him or, with certainty, cause the torture of all 7 billion humans today we would easily choose the latter with a 1% rate of pure time preference (which seems obviously wrong to me).
If you do want non-zero rate of pure time preference you will probably need it to decline quickly over time to make much ethical sense (see here and my explanation here).
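As a quick sanity check of that example, assume King Tut lived roughly 3,300 years ago and apply a 1% annual rate of pure time preference (both numbers are mine, for illustration only):

```python
# With a 1% annual rate of pure time preference, weight a present-day harm
# from the standpoint of an observer ~3,300 years ago (King Tut's era).
YEARS = 3_300
RATE = 0.01
PRESENT_POPULATION = 7_000_000_000

weight_per_person_today = (1 + RATE) ** (-YEARS)           # ~5.5e-15
weighted_population = PRESENT_POPULATION * weight_per_person_today

print(f"one person today counts for {weight_per_person_today:.1e} of a contemporary")
print(f"all ~7 billion together count for {weighted_population:.1e} of a contemporary")
# ~3.9e-05 < 1, so the 1% discounter at Tut's time prefers torturing everyone
# alive today to torturing a single contemporary, as the quoted example claims.
```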
I am a moral anti-realist. I don’t believe in ethics the way utilitarians (for example) use the word. I believe there are certain things I want, and certain things other people want, and we can coordinate on that. And coordinating on that requires establishing social norms, including what we colloquially refer to as “ethics”. Hypothetically, if I have time preference and other people don’t, then I would agree to coordinate on a compromise. In practice, I suspect that everyone has time preference.
You can avoid this kind of conclusion if you accept my decision rule of minimax regret over all discount timescales from some finite value to infinity.
Most people do indeed have pure time preference in the sense that they are impatient and want things earlier rather than later. However, this says nothing about their attitude to future generations.
Being impatient means you place more importance on your present self than your future self, but it doesn’t mean you care more about the wellbeing of some random dude alive now than another random dude alive in 100 years. That simply isn’t what “impatience” means.
For example—I am impatient. I personally want things sooner rather than later in my life. I don’t however think that the wellbeing of a random person now is more important than the wellbeing of a random person alive in 100 years. That’s an entirely separate consideration to my personal impatience.
I mean, physics solves the divergence/unboundedness problem, since the universe eventually reaches heat death. So one can at least assume some distribution on the time bound. Whether that makes having no time discount reasonable in practice, I highly doubt.
I don’t know of any EAs or philosophers with a nonzero pure time preference, but it’s pretty common to believe that creating new lives is morally neutral. Someone who believes this might plausibly be a short-termist. I have a few friends who are short-termist for that reason.
Hmm, is it consistent to have zero pure time preference and be indifferent to creating new lives?
Yeah, the two things are orthogonal as far as I can see. The person-affecting view is perfectly consistent with either a zero or a nonzero pure time preference.
Okay, so you could hold the person-affecting view and be indifferent to creating new lives, but also have zero pure time preference in that you don’t value future lives any less because they’re in the future.
So this is really getting at creating new lives vs how to treat them given that they already exist.
Imagine it’s 2022. You wake up and check the EA Forum to see that Scott Alexander has a post knocking the premise of longtermism, and it’s sitting at 200 karma. On top of that, Holden Karnofsky has a post saying he may be only 20% convinced that x-risk itself is overwhelmingly important. Also, Joey Savoie is hanging in there.
Obviously, I’ll write in to support longtermism.
Below is one long story about how some people might change their views; in this story, x-risk alone wouldn’t work.
TL;DR: Some people think the future is really bad and don’t value it. You need something besides x-risk to engage them, like a competent and coordinated movement to improve the future. Without this, x-risk and other EA work might be meaningless too. The explanation below has an intuitive or experiential quality, not a numerical one. I don’t know if this is actually longtermism.
Many people don’t consider future generations valuable because they have a pessimistic view of human society. I think this is justifiable.
Then, if you think society will remain in its current state, it’s reasonable that you might not want to preserve it. If you only ever think about one or two generations into the future, like I think most people do, it’s hard to see the possibility of change. So I think this “negative” mentality is self-reinforcing; these people are stuck.
To these people, the idea of x-risk doesn’t make sense, not because these dangers aren’t real but because there isn’t anything to preserve. To these people, giant numbers like 10^30 are especially unconvincing, because they seem silly and, if anything, we owe the future a small society.
I think the above is an incredibly mainstream view. Many people with talent, perception and resources might hold it.
The alternative to the mindset above is to see a long future that has possibilities. That there is a substantial possibility that things can be a lot better. And that it is viable to actually try to influence it.
I think these three sentences above seem “simple”, but for them to substantially enter someone’s worldview, the ideas need to land together, at the same time. Because of this, the shift is non-obvious and unconvincing.
I think one reason why the idea or movement for influencing the future is valuable is that most people don’t know anyone who is seriously trying. It takes a huge amount of coordination and resources to do this. It’s bizarre to do this on your own or with a small group of people.
I think everyone, deep down, wants to be optimistic about the future and humanity. But they don’t take any action or spend time thinking about it.
With an actual strong movement that seems competent, it is possible to convince people that there is enough focus and investment to viably improve the future. It is this sense of viability that produces the mental shift toward optimism and engagement.
So this is the value of presenting the long term future in some way.
To be clear, in making this shift, people are being drawn in by competence. Competence involves “rational” thinking, planning and calculation, and all sorts of probabilities and numbers.
But for these people, despite what is commonly presented, I’m not sure focusing on numbers, using Bayes, etc. plays any role in this presentation. If someone told me they changed their worldview because they ran the numbers, I would be suspicious. Even now, most of the time, I am skeptical when I see huge numbers or intricate calculations.
Instead, this is a mindset or worldview that is intuitive. To kind of see this: this text seems convincing (“Good ideas change the world, or could possibly save it...”) but doesn’t use any calculations. I think this sort of thinking is how most people actually change their views about complex topics.
To have this particular change in view, I think you still need to have further beliefs that might be weird or unusual:
You need to have a sense of personal agency, that you can affect the future through your own actions, even though there are billions of people. This might be aggressive or wrong.
You might also need a judgment of society and institutions that is “just right”:
You need to believe society could go down a bad path because institutions are currently dysfunctional and fragile.
Yet you also need to believe it’s possible to design institutions robust enough to change the future.
I have no idea if the above is longtermism at all. It seems sort of weak, and seems like it would only compel me to act on my particular beliefs.
It would be sort of surprising if many people held the particular viewpoint described in this comment.
This viewpoint does have the benefit that you could ask questions to interrogate these beliefs (people couldn’t just say there’s “10^42 people” or something).
I think this post is mistaken. If I remember correctly (I’m not an expert), estimates that AI will kill us all are put at only around 5-10% by AI experts and attendees at an x-risk conference, in a paper from Katja Grace. Only AI safety researchers think AI doom is a highly likely default (presumably due to selection effects). So from a near-termist perspective, AI deserves relatively less attention.
Bio-risk and climate change, and maybe nuclear war, on the other hand, I think are all highly concerning from a near-termist perspective, but unlikely to kill EVERYONE, and so relatively low priority for long-termists.
“only” 5-10% of ~8 billion people dying this century is still 400-800 million deaths! Certainly higher than e.g. estimates of malarial deaths within this century!
What’s the case for climate change being highly concerning from a near-termist perspective? It seems unlikely to me that marginal $s in fighting climate change are a better investment in global health than marginal $s spent directly on global health. And also particularly unlikely to be killing >400 million people.
I agree some biosecurity spending may be more cost-effective on neartermist grounds.
Hmm, I’d have to think more carefully about it; that was very much off-the-cuff. I mostly agree with your criticism. I think I was mainly thinking that bio-risk makes the most sense as a near-termist priority, and so would get most of the x-risk funding until solved, since it is much more tractable than AI risk.
Maybe this is the main point I’m trying to make, and so the spirit of the post seems off, since near-termist x-risky stuff would mostly fund bio-risk and long-termist x-risky stuff would mostly go to AI.
Yes! Thanks for this Scott. X-risk prevention is a cause that both neartermists and longtermists can get behind. I think it should be reinstated as a top-level EA cause area in its own right, distinct from longtermism (as I’ve said here).
It’s a sobering thought. See also: AGI x-risk timelines: 10% chance (by year X) estimates should be the headline, not 50%.
Longtermism ≠ existential risk, though it seems the community has more or less decided they mean similar things (at least at our current point in history).
Here is an argument to the contrary, “the civilization dice roll”: current human society becoming grabby will be worse for the future of our lightcone than the counterfactual society that might exist, and end up becoming grabby, if we die out or our civilization collapses.
Now, to directly answer your point on x-risk vs longtermism, yes you are correct. Fear mongering will always trump empathy mongering in terms of getting people to care. We might worry though that in a society already full of fear mongering, we actually need to push people to build their thoughtful empathy muscles, not their thoughtful fear muscles. That is to say we want people to care about x-risk because they care about other people, not because they care about themselves.
So now turning back to the dice roll argument, we may prefer to survive because we became more empathetic/expanded our moral circle and as a result cared about x-risk, rather than because we just really really didn’t want to die in the short-term. Once (if) we pass the hinge of history, or at least the peak of existential risk, we still have to decide what the fate of our ecosystem will be. Personally, I would prefer we decide with maximal moral circles.
Some potential gaps in my argument: (1) there might be reasons to believe that our lightcone will be better off with current human society becoming grabby, in which case we really should just be optimizing almost exclusively for reducing x-risk (probably); (2) focusing on fear-mongering about x-risk rather than empathy-mongering might not decrease the likelihood of people expanding their moral circles, and maybe it will even increase moral circle expansion, because it will actually get people to grapple with the possibility of these issues; (3) moral circle expansion won’t actually make the future go better; (4) AI will be uncorrelated with human culture, so this whole argument is sort of irrelevant if the AI does the grabbing.
Agreed. Linch’s .01% Fund post proposes a research/funding entity that identifies projects that can reduce existential risk by 0.01% for $100M-$1B. That is 3x-30x as cost-effective as the quoted text, and targets a reduction 100x the size.
I have been working on a tweet length version of this argument for a while. I encourage someone to beat me to it. I agree with Neel and Scott (and Carl Shulman) that this argument is much more succinct and emotive and I think I should get better at making it.
Something like:
[quote tweeting a poll on survival to 2100] 38% of my followers think there is a > 5% chance all humans are dead by 2100. Let’s assume they are way wrong and it’s only .5%.
[how does this compare to other things that might kill you]
[how does this compare, in terms of spending, to how much ought to be spent vs how much actually is]
Here is v1.0. Can you do better? https://twitter.com/NathanpmYoung/status/1512000005254664194?s=20&t=LnIr0K87oWgFlqP6qKH4IQ
GiveDirectly could get pretty high probabilities (or close for a smaller number of people at lower cost), although it’s not the favoured intervention of those focused on global health and poverty.
Another notable remaining difference is that extinction is all or nothing, so your chance (and the whole community’s chance) of doing any good at all is much lower, although its impact would be much higher when you do make a difference.
I would guess it’s usually based on requiring higher standards of evidence to support an intervention (and greater skepticism without), so they actually think GiveWell interventions are more cost-effective on the margin.
A key difference also surrounds which risks to care about more (all global catastrophic risks, or only likely existential ones) and what to do about them (focus on preventing them and reducing their suffering and deaths, or make them survivable by at least a small contingent who can repopulate the world/universe).
If I don’t have a total population utilitarian view (which seems to me like the main crux belief of longtermism) I may not care as much about the extinction part of the risks.
Michael Wiebe comments: “Can we please stop talking about GDP growth like this? There’s no growth dial that you can turn up by 0.01, and then the economy grows at that rate forever. In practice, policy changes have one-off effects on the level of GDP, and at best can increase the growth rate for a short time before fading out. We don’t have the ability to increase the growth rate for many centuries.”
“Value change” type work—gradually shifting civilizational values to those more in line with human flourishing—might fall into this category too.
This is the first time I have seen reference to norm-changing in EA. Is there other writing on this idea?
Hello
At a lecture I attended, a leading banker said “long term thinking should not be used as an excuse for short term failure”. At the time, he was defending short-term profit-making as against long-term investment, but when applied to discussions of longtermism the point is similar. Our policies and actions can only be implemented in the present and must succeed in the short term as well as the long term. This means careful risk assessment and management, but as the future can never be predicted with absolute certainty, the long-term effects of policy become increasingly uncertain the further out we look. It should be remembered that policy can always be adapted at a future date as events and new information dictate. This may be a more efficient way of operating.
I wholeheartedly agree that governments need more long-term thinking rather than indulging voters’ demands driven on by a crisis-loving media (please excuse my cynicism). I assume the EA community is taking action and lobbying MPs to legally adopt UNESCO’s Declaration on the Responsibilities of the Present Generations Towards Future Generations. But I would hate to think that longtermism became an excuse for inaction or delay, as we have many serious problems that need urgent action.
Regards
Trevor Prew
Sheffield UK
I’m not sure how we can expect the public, or even experts, to meaningfully engage a threat as abstract, speculative and undefined as unaligned AI when very close to the entire culture, including experts of all kinds, relentlessly ignores the very easily understood nuclear weapons which literally could kill us all right now, today, before we sit down to lunch.
What I learned from studying nuclear weapons as an average citizen is that there’s little evidence that intellectual analysis is capable of delivering us from this ever present existential threat. Very close to everyone already knows the necessary basic facts about nuclear weapons, and yet we barely even discuss this threat, even in presidential campaigns where we are selecting a single human being to have sole authority over the use of these weapons.
People like us are on the wrong channel when it comes to existential threats. Human beings don’t learn such huge lessons through intellectual analysis; we learn through pain, if we learn at all. As an example, even though European culture represents a kind of pinnacle of rational thought, it relentlessly warred upon itself for centuries, and stopped only when the pain of WWII became too great to bear and the threat of nuclear annihilation left no room for further warring. And yet, even then some people didn’t get the message, and have returned to reckless land-grab warring today.
The single best hope for escaping the nuclear threat is a small scale nuclear terrorist strike on a single city. Seventy years of failure proves that we’re never going to truly grasp the nuclear threat through facts and reason. We’re going to have to see it for ourselves. The answer is not reason, but pain.
This is bad news for the AI threat, because by the time that threat is converted from abstract to real, and we can see it with our own eyes and feel the pain, it will likely be too late to turn back.