(Not a response to your whole comment, hope that’s OK.)
I agree that there should be some critiques of longtermism, or of working on X risk, in the curriculum. We’re working on an update at the moment. Does anyone have thoughts on what the best critiques are?
Some of my current thoughts:
- Why I am probably not a longtermist
- This post arguing that it’s not clear if X risk reduction is positive
- On infinite ethics (and Ajeya’s crazy train metaphor)
IMO good-faith, strong, fully written-up, readable, explicit critiques of longtermism are in short supply; indeed, I can’t think of any. The three you raise are good, but they are somewhat tentative and limited in scope. I think that stronger objections could be made.
FWIW, on the EA Facebook page, I raised three critiques of longtermism in response to Finn Moorhouse’s excellent recent article on the subject, but all my comments were very brief.
The first critique involves defending person-affecting views in population ethics and arguing that, when you look at the details, the assumptions underlying them are surprisingly hard to reject. My own thinking here is very influenced by Bader (2022), which I think is a philosophical masterclass, but which is also very dense and doesn’t address longtermism directly. There are other papers arguing for person-affecting views, e.g. Narveson (1967) and Heyd (2012), but both are now a bit dated—particularly Narveson—in the sense that they don’t respond to the more sophisticated challenges to their views that have since been raised in the literature. For the latest survey of the literature and those challenges—albeit not one sympathetic to person-affecting views—see Greaves (2017).
The second draws on a couple of suggestions made by Webb (2021) and Berger (2021) about cluelessness. Webb (2021) is a reasonably substantial EA Forum post about the worry that, the further in the future something happens, the smaller the expected value we should assign to it, which acts as an effective discount. However, Webb (2021) is pretty non-committal about how serious a challenge this is for longtermism and doesn’t frame it as one. Berger (2021) is an interview on the 80k podcast in which he suggests that longtermist interventions are either ‘narrow’ (e.g. AI safety) or ‘broad’ (e.g. ‘improving politics’), where the former are not robustly good and the latter are questionably better than existing ‘near-termist’ interventions such as cash transfers to the global poor. I wouldn’t describe this as a worked-out thesis, though, and Berger doesn’t state it very directly.
The third critique is that, a la Torres, longtermism might lead us towards totalitarianism. I don’t think this is a really serious objection, but I would like to see longtermists engage with it and say why they don’t believe it is.
I should probably disclose that I’m currently in discussion with Forethought about a grant to write up some critiques of longtermism, to help fill this gap in the literature. Ideally, I’ll produce 2-3 articles within the next 18 months.
I strongly welcome the critiques you’ll hopefully write, Michael!
Why I am probably not a longtermist seems like the best of these options, by a very wide margin. The other two posts are much too technical/jargony for introductory audiences.
Also, A longtermist critique of “The expected value of extinction risk reduction is positive” isn’t even a critique of longtermism; it’s a longtermist arguing against one longtermist cause (x-risk reduction) in favor of other longtermist causes (such as s-risk reduction and trajectory change). So it doesn’t seem like a good fit for even a more advanced curriculum unless it were accompanied by other critiques targeting longtermism itself (e.g. critiques based on cluelessness).
Reducing the probability of human extinction is a highly popular cause area among longtermist EAs. Unfortunately, this sometimes seems to go as far as conflating longtermism with this specific cause, which can contribute to the neglect of other causes.[1] Here, I will evaluate Brauner and Grosse-Holz’s argument for the positive expected value (EV) of extinction risk reduction from a longtermist perspective. I argue that the EV of extinction risk reduction is not robustly positive,[2] such that other longtermist interventions such as s-risk reduction and trajectory changes are more promising, upon consideration of counterarguments to Brauner and Grosse-Holz’s ethical premises and their predictions of the nature of future civilizations.
The longtermist critique is a critique of arguments for a particular priority (perhaps the main one) in the longtermist community: extinction risk reduction. I don’t think it’s necessary to endorse longtermism to be sympathetic to the critique. That extinction risk reduction might not be robustly positive is a separate point from the claim that s-risk reduction and trajectory changes are more promising.
Someone could think extinction risk reduction, s-risk reduction and trajectory changes are all not robustly positive, or that no intervention aimed at any of them is robustly positive. The post can be one piece of this, arguing against extinction risk reduction. I’m personally sympathetic to the claim that no longtermist intervention will look robustly positive or extremely cost-effective when you try to deal with the details and indirect effects.
As far as I know, the case for stable, very long-lasting trajectory changes (other than those related to extinction) hasn’t been argued persuasively in cost-effectiveness terms over, say, animal welfare, and there are lots of large indirect effects to worry about. S-risk work often has potential to backfire, too. Still, I’m sympathetic enough to both to want to investigate them further, at least relative to extinction risk reduction.
The strongest academic critique of longtermism I know of is The Scope of Longtermism by GPI’s David Thorstad. Here’s the abstract:
Longtermism holds roughly that in many decision situations, the best thing we can do is what is best for the long-term future. The scope question for longtermism asks: how large is the class of decision situations for which longtermism holds? Although longtermism was initially developed to describe the situation of cause-neutral philanthropic decisionmaking, it is increasingly suggested that longtermism holds in many or most decision problems that humans face. By contrast, I suggest that the scope of longtermism may be more restricted than commonly supposed. After specifying my target, swamping axiological strong longtermism (swamping ASL), I give two arguments for the rarity thesis that the options needed to vindicate swamping ASL in a given decision problem are rare. I use the rarity thesis to pose two challenges to the scope of longtermism: the area challenge that swamping ASL often fails when we restrict our attention to specific cause areas, and the challenge from option unawareness that swamping ASL may fail when decision problems are modified to incorporate agents’ limited awareness of the options available to them.
Just saw this and came here to say thanks! Glad you liked it.