Talking about longtermism isn’t very important
Epistemic status: quickly written, rehashing (reheating?) old, old takes. Also, written in a grumpier voice than I’d endorse (I got rained on this morning).
Some essays on longtermism came out recently! Perhaps you noticed. I overall think these essays were just fine[1], and that we should all talk less about longtermism.
In which I talk about longtermism
(In what follows, I’ll take “longtermism” as shorthand for: “the effects of our actions on the long term future should be a key moral priority.”)
Critics often have two[2] broad kinds of objections to longtermism:
It’s too revisionary or radical in its implications
It’s not action-guiding; its implications are irrelevant in practice
Here I’ll say a bit more on (2). Specifically, I’m going to argue that (A) longtermism isn’t necessary to motivate most high priority work, and that (B) for the work longtermism might be necessary to motivate, talking about object-level features of the world[3] is more useful than debating the abstract framework. Given this, I think we should all talk less about longtermism.
Longtermism doesn’t distinctively motivate much work
Okay, so what does longtermism distinctively motivate? Some notes.
Longtermism and existential risk – not a crux
As argued here, here, here, etc.
My summary: Longtermism isn’t necessary to think that x-risk (of at least some varieties) is a top priority problem. You don’t need a very high credence in e.g. AI x risk for it to be the most likely reason you and your family die, and governments’ implied value of life suggests they should spend much more on mitigations than they do. (“Holy shit, x-risk” is a good pitch.)
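The implied-value-of-life point can be made with a quick back-of-envelope calculation. This is only an illustrative sketch: the risk probability, population figure, and value of a statistical life below are all assumed round numbers, not figures from this post.

```python
# Illustrative sketch (all numbers are assumptions): even a modest credence
# in an extinction-level catastrophe implies very large justified spending
# on mitigation, using a standard government value of a statistical life.

p_catastrophe = 0.01   # assumed 1% chance of an extinction-level catastrophe
population = 8e9       # roughly the current world population
vsl = 1e7              # ~$10M, in the ballpark of US agencies' value of a statistical life

expected_deaths = p_catastrophe * population
justified_spend = expected_deaths * vsl  # spending justified to eliminate the risk entirely

print(f"Expected deaths: {expected_deaths:,.0f}")
print(f"Justified mitigation spend: ${justified_spend:,.0f}")
```

Even at a 1% credence and with no weight on future generations, the implied willingness to pay is in the hundreds of trillions of dollars; actual government spending on these risks is orders of magnitude lower.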
Longtermism does add scale and weight. In particular, you might worry that stories about x-risk reduction work can look like this: “Ok, there’s a moderate probability of a bad thing happening. I could work on preventing it, and if I do, there’s a small chance that I reduce the probability of this bad thing by a truly infinitesimal amount – but wait, hang on, why this? I could be working for a normal charity, or making lots of money!”.
Even e.g. deontologists who are sceptical of longtermism will happily work on reducing the risk of catastrophe from new technologies (see e.g. Unruh’s essay).
Longtermism and non-existential GCRs — not a crux
There’s not much to say here: you don’t need longtermism to motivate working on reducing the chances that lots of suffering happens in the near future.
Longtermism and better futures
This is plausibly one place you need longtermism to motivate work.
But even here, you still need additional claims to be true, e.g. we need:
Some way of predicting the effects of our actions with reasonable accuracy
Some reason to think these effects won’t wash out
Some way of comparing possible actions
Longtermists act like normal people, mostly[4]
As Askell and Neth discuss, longtermists may be myopic for a few reasons:
“Causal diffusion” – i.e., the claim that the effects of our actions “wash out” over time.
I think this should be your prior, but that if you think we live at a hinge of history, some actions (e.g. steering the development of TAI) seem like they might not wash out. So I’m not very moved by this.
“Epistemic diffusion” – i.e., that it’s increasingly hard to predict the effects of our actions over longer timescales
I’m most sympathetic to this.
Moral uncertainty – they argue that moral uncertainty should make longtermists act more myopically. Broadly, this is because there are many plausible arguments for a positive discount rate, and fewer arguments for a zero or negative discount rate, so under uncertainty the expected discount rate comes out small but positive.
I agree with something like this when considered from the perspective of “how humanity overall should act”, but disagree that it’s relevant for how longtermists should act. Given that the overwhelming majority of actors have a positive discount rate, it seems basically right to me that longtermists should individually act as if they have a zero discount rate, to move the overall implied discount rate lower.
“Believing that longtermism is true doesn’t necessarily mean you’ll act as if you have zero time preference” isn’t a terribly novel point, so I won’t say more about this.
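One way to see the structure of the moral-uncertainty argument is to mix discount factors rather than discount rates. The numbers below are assumptions chosen for illustration, not anything from Askell and Neth; the point is just that a probability-weighted mix of a positive rate and a zero rate behaves like a declining effective rate, dominated at long horizons by the zero-rate component.

```python
# Sketch with assumed credences: 90% on a 2% discount rate being correct,
# 10% on a zero rate. The expected discount factor at horizon t is the
# credence-weighted average of the two factors.

p_positive = 0.9   # assumed credence in a positive (2%) discount rate
rate = 0.02
p_zero = 0.1       # assumed credence in a zero discount rate

def expected_factor(t):
    """Expected weight on value realised t years from now."""
    return p_positive * (1 - rate) ** t + p_zero * 1.0

for t in (10, 100, 1000):
    print(f"t={t:>5}: expected discount factor = {expected_factor(t):.4f}")
```

At short horizons the mix looks like ordinary positive discounting, but at long horizons the factor flattens out near the credence placed on the zero rate, so even modest credence in a zero rate keeps the far future from being discounted away entirely.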
What about work that seemingly does need longtermism?
As I’ve discussed above, work on reducing existential risk does not need longtermism to motivate it. However, Better Futures-style work aimed at improving the value of the far future seems to me like it probably will need longtermism to be a key moral priority.[5]
I think that even here, it’s more useful to talk about specific features of the world rather than to continue debating whether longtermism is true in general. Concretely, I’m most excited about work that tries to identify the actions one should take if you’re very compelled by better-future-style reasoning, bearing in mind the difficulty of predicting or influencing the future. (See this comment for similar thoughts.)
Some concrete recommendations
So what should we do instead of debating longtermism? Some ideas.
Focus essay collections/research agendas/etc on particular interventions or causes, not on frameworks as general as longtermism. “Essays on Better Futures” or “Essays on Space Governance” would be more useful, imo, than “Essays on Longtermism.”
Think hard about predictability and washing out for specific actions, rather than in the abstract.
Be concrete about empirical assumptions. I think cruxes for particular interventions are often about hinginess, tractability, and the shape of the world. It is unfortunately super fun to just think about philosophy, but I think in general I’d trade lots of marginal philosophizing for more concrete takes. (She says, abstractly philosophizing.)
Here’s a footnote in a cranky voice (apologies). I was pretty underwhelmed with the Essays on Longtermism collection. I broadly agree with Oscar that the articles were either (1) reprints of classic essays which had some relationship to longtermism, or (2) new work, which mostly didn’t seem to succeed at being novel and plausibly true and important.
I guess more accurately, I thought the essay collection was just fine, looked good by academic standards, was probably a decent idea ex ante, and that there is nothing very interesting to say about the essays. So mostly I’m like “hm, I don’t really get why there was an EAF contest to write about them”.
I think the collection can still have some academic value, e.g. by:
Making it higher status (and better for your academic career) to discuss longtermism-related ideas
Collecting some classic foundational essays (and again, making it easier to cite them in academic work)
Broadening the base of support for longtermism, or assessing how robust longtermism is to different moral views (e.g. deontological perspectives, contractualism)
My overall gripe is: longtermism doesn’t seem very important. I think it would have been better to collect essays on a particular intervention longtermists are often interested in, rather than about an axiological claim which (I argue) doesn’t really matter for prioritisation.
Setting aside “it’s false, but not because it’s revisionary, for some other reason”.
E.g., discussing reasons to think this problem in particular must be dealt with now, rather than delegated to future, wiser people to solve; arguing why some actions will likely have persistent, predictable, and robustly good effects.
They look just like me and you! Your friends, colleagues, and neighbours may even be longtermists…
I’m not making the claim that BF-style work definitely will need longtermism to be motivated. My impression is that lots of the interventions recommended by this work are still quite abstract and general, and I think it’s possible that as we drill down into the details and look more for actions with predictable, persistent, robustly good effects, the kinds of actions that a BF-style longtermist will recommend might look very similar to the kinds of actions that non-longtermists recommend. (E.g.: strengthening institutions, reducing the risks of concentration of power, generally preserving optionality beyond just non-extinction optionality.) However, my current guess is that there will be some things that BF-style researchers are excited about, for which you basically do need to be a longtermist in order to consider them key moral priorities.