Yeah, this is an excellent list. To me, the OP seems to miss the obvious point, which is that if you look at what the central EA individuals, organisations, and materials are promoting, you very quickly get the impression that, to misquote Henry Ford, “you can have any view you want, so long as it’s longtermism”. One’s mileage may vary, of course, as to whether one thinks this is a good result.
To add to the list, the 8-week EA Introductory Fellowship curriculum, the main entry point for students, i.e. the EAs of the future, has 5 sections on cause areas, of which 3 are on longtermism. As far as I can tell, there are no critiques of longtermism anywhere, even in the “what might we be missing?” week, which I found puzzling.
[Disclosure: when I saw the Fellowship curriculum about a year ago, I raised this issue with Aaron Gertler, who said it had been created without much/any input from non-longtermists, this was perhaps an oversight, and I would be welcome to make some suggestions. I meant to make some, but never prioritised it, in large part because it was unclear to me if any suggestions I made would get incorporated.]
(Not a response to your whole comment, hope that’s OK.)
I agree that there should be some critiques of longtermism, or of working on x-risk, in the curriculum. We’re working on an update at the moment. Does anyone have thoughts on what the best critiques are?
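Some of my current thoughts:
- Why I am probably not a longtermist
- This post arguing that it’s not clear if x-risk reduction is positive
- On infinite ethics (and Ajeya’s crazy train metaphor)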
IMO, good-faith, strong, fully written-up, readable, explicit critiques of longtermism are in short supply; indeed, I can’t think of any that fit that whole description. The three you raise are good, but they are somewhat tentative and limited in scope. I think stronger objections could be made.
FWIW, on the EA Facebook page, I raised three critiques of longtermism in response to Finn Moorhouse’s excellent recent article on the subject, but all my comments were very brief.
The first critique involves defending person-affecting views in population ethics and arguing that, when you look at the details, the assumptions underlying them are surprisingly hard to reject. My own thinking here is very influenced by Bader (2022), which I think is a philosophical masterclass, but it is also very dense and doesn’t address longtermism directly. There are other papers arguing for person-affecting views, e.g. Narveson (1967) and Heyd (2012), but both are now a bit dated—particularly Narveson—in the sense that they don’t respond to the more sophisticated challenges to their views that have since been raised in the literature. For the latest survey of the literature and those challenges—albeit not one sympathetic to person-affecting views—see Greaves (2017).
The second draws on a couple of suggestions made by Webb (2021) and Berger (2021) about cluelessness. Webb (2021) is a reasonably substantial EA Forum post about the worry that, the further in the future something happens, the smaller the expected value we should assign to it, which acts as an effective discount. However, Webb (2021) is pretty non-committal about how serious a challenge this is for longtermism and doesn’t frame it as one. Berger (2021) is an interview on the 80k podcast in which he suggests that longtermist interventions are either ‘narrow’ (e.g. AI safety) or ‘broad’ (e.g. ‘improving politics’), where the former are not robustly good and the latter are questionably better than existing ‘near-termist’ interventions such as cash transfers to the global poor. I wouldn’t describe this as a worked-out thesis, though, and Berger doesn’t state it very directly.
The third critique is that, a la Torres, longtermism might lead us towards totalitarianism. I don’t think this is a really serious objection, but I would like to see longtermists engage with it and say why they don’t believe it is.
I should probably disclose I’m currently in discussion with Forethought about a grant to write up some critiques of longtermism in order to fill some of this literature gap. Ideally, I’ll produce 2-3 articles within the next 18 months.
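I strongly welcome the critiques you’ll hopefully write, Michael!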
Why I am probably not a longtermist seems like the best of these options, by a very wide margin. The other two posts are much too technical/jargony for introductory audiences.
Also, A longtermist critique of “The expected value of extinction risk reduction is positive” isn’t even a critique of longtermism: it’s a longtermist arguing against one longtermist cause (x-risk reduction) in favor of other longtermist causes (such as s-risk reduction and trajectory change). So it doesn’t seem like a good fit for even a more advanced curriculum unless it was accompanied by other critiques targeting longtermism itself (e.g. critiques based on cluelessness).
Reducing the probability of human extinction is a highly popular cause area among longtermist EAs. Unfortunately, this sometimes seems to go as far as conflating longtermism with this specific cause, which can contribute to the neglect of other causes.[1] Here, I will evaluate Brauner and Grosse-Holz’s argument for the positive expected value (EV) of extinction risk reduction from a longtermist perspective. I argue that the EV of extinction risk reduction is not robustly positive,[2] such that other longtermist interventions such as s-risk reduction and trajectory changes are more promising, upon consideration of counterarguments to Brauner and Grosse-Holz’s ethical premises and their predictions of the nature of future civilizations.
The longtermist critique is a critique of arguments for a particular (perhaps the main) priority in the longtermism community, extinction risk reduction. I don’t think it’s necessary to endorse longtermism to be sympathetic to the critique. That extinction risk reduction might not be robustly positive is a separate point from the claim that s-risk reduction and trajectory changes are more promising.
Someone could think extinction risk reduction, s-risk reduction and trajectory changes are all not robustly positive, or that no intervention aimed at any of them is robustly positive. The post can be one piece of this, arguing against extinction risk reduction. I’m personally sympathetic to the claim that no longtermist intervention will look robustly positive or extremely cost-effective when you try to deal with the details and indirect effects.
The case for stable, very long-lasting trajectory changes other than those related to extinction hasn’t, as far as I know, been argued persuasively in cost-effectiveness terms over, say, animal welfare, and there are lots of large indirect effects to worry about. S-risk work often has potential for backfire, too. Still, I’m personally sympathetic enough to both to want to investigate them further, at least over extinction risk reduction.
The strongest academic critique of longtermism I know of is The Scope of Longtermism by GPI’s David Thorstad. Here’s the abstract:
Longtermism holds roughly that in many decision situations, the best thing we can do is what is best for the long-term future. The scope question for longtermism asks: how large is the class of decision situations for which longtermism holds? Although longtermism was initially developed to describe the situation of cause-neutral philanthropic decisionmaking, it is increasingly suggested that longtermism holds in many or most decision problems that humans face. By contrast, I suggest that the scope of longtermism may be more restricted than commonly supposed. After specifying my target, swamping axiological strong longtermism (swamping ASL), I give two arguments for the rarity thesis that the options needed to vindicate swamping ASL in a given decision problem are rare. I use the rarity thesis to pose two challenges to the scope of longtermism: the area challenge that swamping ASL often fails when we restrict our attention to specific cause areas, and the challenge from option unawareness that swamping ASL may fail when decision problems are modified to incorporate agents’ limited awareness of the options available to them.
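Just saw this and came here to say thanks! Glad you liked it.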
I obviously expected this comment would get a mix of upvotes and downvotes, but I’d be pleased if any of the downvoters would be kind enough to explain on what grounds they are downvoting.
Do you disagree with the empirical claim that central EA entities promote longtermism (the claim that we should give priority to improving the long-term future)?
Do you disagree with the empirical claim that there is pressure within EA to agree with longtermism, e.g. that if you don’t, it carries a perceived or real social or other penalty (such as, er, getting random downvotes)?
Are my claims about the structure of the EA Introductory Fellowship false?
Is it something about what I put in the disclaimer?
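The top comment: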
Is a huge contribution. I supported it; it’s great. It’s obviously written by someone who leans toward non-longtermist cause areas, but somehow that makes their impartial vibe more impressive.
The person lays out their ideas and aims transparently, and I think even a strong longtermist “opponent” would appreciate it, maybe even gain some perspective on feeling sort of “oppressed”.
Your comment:
Your comment slots below this top comment, but doesn’t seem to be a natural reply. It is plausible you replied because you wanted a top slot.
You immediately slide into rhetoric with a quote that would rally supporters, but is unlikely to be appreciated by people who disagree with you (“you can have any view you want, so long as it’s longtermism”). That seems like something you would say at a political rally and is objectively false. This is bad.
As a positive and related to the top comment, you do add your fellowship point, and Max Dalton picks this up, which is productive (from the perspective of a proponent of non-longtermist cause areas).
But I think the biggest issue is that, for a moment, there was this thing where people could have listened.
You sort of just walked past the savasana, slouched a bit and then slugged longtermism in the gut, while the top comment was opening up the issue for thought.
The danger is that people would find this alienating, and scoring points on the internet isn’t a good thing for EA, right?
(As a side issue, I’m unsure or ambivalent about whether criticism specifically needs to be prescribed in introductory materials, especially as a consequence of activism by opponents. It might be the case that more room or better content for other cause areas should exist. However, prescribing checkboxes or something similarly rigid could just lead to an unhealthy, adversarial dynamic. But I’m really unsure, and obviously the CEO of CEA takes this seriously.)
Hmm. This is very helpful, thank you very much. I don’t think we’re on the same page, but it’s useful for indicating where those differences may lie.
You immediately slide into rhetoric with a quote that would rally supporters, but is unlikely to be appreciated by people who disagree with you
I’m not sure what you mean by ‘supporters’. Supporters of what? Supporters of ‘non-longtermism’? Supporters of the view that “EA is just longtermism”? FWIW, I have a lot of respect for (very many) longtermists: I see them as seriously and sincerely engaged in a credible altruistic project, just not one I (currently?) consider the priority; I hope they would view me in the same way about my efforts to make lives happier, and that we would be able to cooperate and engage in moral trade where possible.
What I am less happy about is the (growing) sense that EA is only longtermism—that it’s the only credible game in town—which is the subject of this post. One can be a longtermist—indeed, of any moral persuasion—and object to that if one wants the effective altruism community to be a pluralistic and inclusive place.
On the other hand, one could take a different, rather sneering, arrogant, and unpleasant view that longtermism is clearly true, anyone who doesn’t recognise this is just an idiot, and all those idiots should clear off. I have also encountered this perspective—far more often than I expected or hoped to.
Given all this, I find it hard to make sense of your claim that I’ve
slugged longtermism in the gut
I’ve not attacked longtermism. If anything, I’m attacking the sense that only longtermists are welcome in EA—a perception based on exactly the sort of evidence raised in the top comment.
Finally, you said
As a side issue, I’m unsure or ambivalent about whether criticism specifically needs to be prescribed in introductory materials, especially as a consequence of activism by opponents.
Which I am almost stunned by. Criticism of EA? Criticism of longtermism? Am I an opponent of EA? That would be news to me. An introductory course on EA should, presumably, include arguments for and against the various positions one might take about how to do the most good. Everyone seems to agree that doing good is hard and that we need openness and criticism to improve what we are doing, so I don’t see why you would want to deliberately minimise, or refuse to include, criticism—that’s what you seem to be suggesting, though I don’t know if it’s what you mean. Even an introductory course on just longtermism would, presumably, cover objections to the view.