I tentatively believe (ii), depending on some definitions. I’m somewhat surprised to see Ben and Darius implying it’s a really weird view, which makes me wonder what I’m missing.
I don’t want the EA community to stop working on all non-longtermist things. But that’s because I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community; I don’t mean indirect effects more broadly, in the sense of ‘better health in poor countries’ --> ‘more economic growth’ --> ‘more innovation’.)
For example, non-longtermist interventions are often a good way to demonstrate EA ideas and successes (eg. pointing to GiveWell is really helpful as an intro to EA); non-longtermist causes are a way for people to get involved with EA and end up working on longtermist causes (eg. [name removed], incoming at GPI, comes to mind as a great success story along those lines); and work on non-longtermist causes has better feedback loops, so it might improve the community’s skills (eg. Charity Entrepreneurship incubatees are probably highly skilled 2-5 years after the program, though I’m not sure that actually translates to more skill-hours going towards longtermist causes).
But none of these reasons are that I think the actual intended impact of non-longtermist interventions is competitive with longtermist interventions. Eg. I think Charity Entrepreneurship is good because it’s creating a community and culture of founding impact-oriented nonprofits, not because [it’s better for shrimp/there’s less lead in paint/fewer children smoke tobacco products]. Basically, I think the only reason near-term interventions might be good is that they might make the long-term future go better.
I’m not sure what counts as ‘astronomically’ more cost-effective, but if it means ~1000x more important/cost-effective I might agree with (ii). It’s hard to come up with a good thought experiment here to test this intuition.
One hypothetical is ‘would you rather $10,000 gets donated to the Long-Term Future Fund, or $10 million gets donated to GiveWell’s Maximum Impact Fund?’ This is confusing though, because I’m not sure how important extra funding is in these areas. Another hypothetical is ‘would you rather 10 fairly smart people devote their careers to longtermist causes (eg. following 80k advice), or 10,000 fairly smart people devote their careers to neartermist causes (eg. following AAC advice)?’ (Both hypotheticals involve a 1000x ratio.) This is confusing because I expect 10,000 people working on effective animal advocacy to have some effect on the long-term future. Some of them might end up working on nearby longtermist things like digital sentience. They might slightly shift the culture of veganism to be more evidence-based and welfarist, which could lead to a faster flow of people from veganism to EA over time. They would also do projects which EA could point to as successes, which could be helpful for getting more people into EA and eventually into longtermist causes.
If I try to imagine a version of this hypothetical without those externalities, I think I prefer the longtermist option, indicating that the 1000x difference seems plausible to me.
I wonder if part of the reason people don’t hold the view I do is some combination of (1) ‘this feels weird so maybe it’s wrong’ and (2) ‘I don’t want to be unkind to people working on neartermist causes’.
I think (1) does carry some weight and we should be cautious when acting on new, weird ideas that imply strange actions. However, I’m not sure how much longtermism actually falls into this category.
The idea is not that new, and there’s been quite a lot of energy devoted to criticising the ideas. I don’t know what others in this thread think, but I haven’t found much of this criticism very convincing.
Weak longtermism (future people matter morally) is intuitive for lots of people (though not all, which is fine). I concede that strong longtermism is initially very counterintuitive, though.
Strong longtermism doesn’t imply we should do particularly weird things. It implies we should do things like: get prepared for pandemics, make it harder for people to create novel pathogens, reduce the risk of nuclear war, and take seriously the facts that we can’t get current AI systems to do what we want, that AI systems are quickly becoming really impressive, and that some/most kinds of trend extrapolation or forecasting imply AGI in the next 10-120 years. Sure, strong longtermism implies we shouldn’t prioritise helping people in extreme poverty. But helping people in extreme poverty is not the default action; most people don’t spend any resources on that at all. (This is similar to Eliezer’s point above.)
I also feel the weight of (2). It makes me squirm to reconcile my tentative belief in strong longtermism with my admiration of many people who do really impressive work on non-longtermist causes and my desire to get along with those people. I really think longtermists shouldn’t make people who work on other causes feel bad. However, I think it’s possible to commit to strong longtermism without making other people feel attacked, or too unappreciated. And I don’t think these kinds of social considerations have any bearing on which cause to prioritise working on.
I feel like a big part of the edge of the EA and rationality community is that we follow arguments to their conclusions even when it’s weird, or it feels difficult, or we’re not completely sure. We make tradeoffs even when it feels really hard—like working on reducing existential risk instead of helping people in extreme poverty or animals in factory farms today.
I feel like I also need to clarify some things:
I don’t try to get everyone I talk to to work on longtermist things. I don’t think that would be good for the people I talk to, the EA community, or the longterm future
I really value hearing arguments against longtermism. These are helpful for finding out if longtermism is wrong, figuring out the best ways to explain longtermism, and spotting potential failure modes of acting on longtermism. I sometimes think about paying someone to write a really good, clear case for why acting on strong longtermism is most likely to be a bad idea
My all-things-considered view is a bit more moderate than this comment suggests, and I’m eager to hear Darius’, Ben’s, and others’ views on this.
I don’t have time to write a detailed and well-argued response, sorry. Here are some very rough quick thoughts on why I downvoted. Happy to expand on any points and have a discussion.
In general, I think criticisms of longtermism from people who ‘get’ longtermism are incredibly valuable to longtermists.
One reason is that if the criticisms carry entirely, you’ll save them from basically wasting their careers. Another reason is that you can point out weaknesses in longtermism, or in their application of longtermism, that they wouldn’t have spotted themselves. And a third reason is that in the worlds where longtermism is true, this helps longtermists work out better ways to frame the ideas so as not to put off potential sympathisers.
Clarity
In general, I found it hard to work out the actual arguments of the book and how they interfaced with the case for longtermism.
Sometimes claims seemed to be implied but were not made explicit. So please point out any incorrect inferences I’ve made below!
I was unsure what was being critiqued: longtermism, Bostrom’s views, utilitarianism, consequentialism, or something else.
The thesis of the book (for people reading this comment, and to check my understanding)
“Longtermism is a radical ideology that could have disastrous consequences if the wrong people—powerful politicians or even lone actors—were to take its central claims seriously.”
“As outlined in the scholarly literature, it has all the ideological ingredients needed to justify a genocidal catastrophe.”
Utilitarianism (Edit: I think Tyle has added a better reading of this section below)
This section seems to caution against naive utilitarianism, which seems to form a large fraction of the criticism of longtermism. I felt a bit like this section was throwing intuitions at me, and I just disagreed with the intuitions being thrown at me. Also, doing longtermism better obviously means better accounting for all the effects of our actions, which naturally pushes away from naive utilitarianism.
In particular, there seems to be a sense of derision at any philosophy where ‘the ends justify the means’. I didn’t really feel like this was argued for (please correct me if I’m wrong!)
I don’t know whether that meant the book was arguing against consequentialism in general, or arguing that it’s right to focus on consequences generally but that longtermism overweights consequences in the long-term future compared to other consequences.
I would have preferred if these parts of the book were clear about exactly what the argument was
I would have preferred if these parts of the book did less intuition-fighting (there’s a word for this but I can’t remember it)
Millennialism
“A movement is millennialist if it holds that our current world is replete with suffering and death but will soon “be transformed into a perfect world of justice, peace, abundance, and mutual love.” (p. 24 of the book)
Longtermism does not say our current world is replete with suffering and death
Longtermism does not say the world will be transformed soon
Longtermism does not say that if the world is transformed it will be into a world of justice, peace, abundance, and mutual love.
Therefore, longtermism does not meet the stated definition of a millennialist movement
Granted, there are probably longtermists who do hold these views, but these views are not longtermism. I don’t know whether Bostrom (whose views seem to be the focus of the book) holds these views. Even if he does, these views are not longtermism.
Mere Ripples
Some things are bigger than other things
That doesn’t mean that the smaller things aren’t bad or good or important; they are just smaller than the bigger things
If you can make a good big thing happen or make a good small thing happen, you can make more good by making the big thing happen
That doesn’t mean the small thing is not important, but it is smaller than the big thing
I feel confused
White Supremacy
The book quotes this section from Beckstead’s Thesis:
The book goes on to say:
I’m pretty sure the book isn’t using ‘white supremacist’ in the normal sense of the phrase. For that reason, I’m confused about this, and would appreciate answers to these questions:
The Beckstead quote ends ‘other things being equal’. Doesn’t that imply that the claim is not ‘overall, it’s better to save lives in rich countries than poor countries’ but ‘here is an argument that pushes in favour of saving lives in rich countries over poor countries’?
Imagine longtermism did imply helping rich people instead of helping poor people, and that that made it white supremacist. Does that mean that anything that helps rich people is white supremacist (because the resources could have been used to help poor people)?
What if the poor people are white and the rich people are not white?
Why do rich-nation government health services not meet this definition of white supremacy?
I’d also have preferred if it were clear how this version of white supremacy interfaces with the normal usage of the phrase
Genocide (Edit: I think Tyle and Lowry have added good explanations of this below)
The book argues that a longtermist would support a huge nuclear attack to destroy everyone in Germany if there was a less than one-in-a-million chance of someone in Germany building a nuclear weapon. (Ch.5)
The book says that maybe a longtermist could avoid saying that they would do this if they thought that the nuclear attack would not actually decrease existential risk
The book says that this does not avoid the issue, though, and implies that because the longtermist would even consider this action, longtermism is dangerous (please correct me if I’m misreading this)
It seems to me that this argument is basically saying that because a consequentialist weighs up the consequences of each potential action against other potential actions, they at least consider many actions, some of which would be terrible (at least from a common-sense perspective), and that therefore consequentialism is dangerous. I think I must be misunderstanding this argument, as it seems obviously wrong as stated here. I would have preferred if the argument here were clearer