The Phil Torres essay in Aeon attacking Longtermism might be good
https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo
Note, I do not agree that longtermism is bad. I am a longtermist, and reading this article did not in the slightest motivate me to change that identity/set of beliefs. However, and interestingly unlike Phil Torres's other recent essay in Current Affairs, in this essay he was the sort of enemy who I think it is good to have.
What I mean by this is mainly that if I had encountered this essay ten years ago, it probably would have led me to start researching longtermism and effective altruism because they sounded interesting. EA is still at a stage where recruiting new people who are interested in the ideas is vastly more important to the cause than ensuring everyone thinks positively of it.
And anyway, I don't think the essay will make people think very badly of longtermism. It explains the intellectual motivations of the movement too well, especially in the first half. Even where the essay was clearly unfair to longtermists (specifically, in describing us all as total hedonic utilitarians: few of us would unhesitatingly accept the repugnant conclusion, and hedonic utilitarianism is not the dominant branch of utilitarianism anyway), he still accurately described a point of view that is occasionally held by people, and which has historically been part of the debate.
I even think that what seem to be his core deep critiques of longtermism are all true: that it can lead to millenarianism, accelerate the development of dangerous technologies, lead people to ignore issues that affect actually existing people, and finally that it aims at creating a sort of future that most current people do not want.
If nothing else, my donations to the Long Term Future Fund and MIRI would have gone to a global poverty charity if I wasn't a longtermist, so that little bit of money has gone to pay the salaries of comfortable first world people where the best median estimate of how much long term or short term good they will do through this work is zero. That Ord and MacAskill are developing ideas about moral parliaments, doing things that are good based on many moral systems, and talking about ways to be a longtermist while avoiding fanaticism does not change the simple fact that when someone takes the well-being of future people as a serious moral concern, and is willing to shut up and multiply, it really is natural for them to think that anything bad that could possibly happen today would be worth it for a tiny percentage increase in the expected number of future happy people. Weird transhumanist visions of digital people using all of the resources of the galaxy to run as many uploaded minds as possible might not be the only longtermist vision of the future, but it has always seemed to me like it is the most popular one.
Of course this essay is often unfair. Of course there is nonsense in the article. Torres wrote, “It is difficult to overstate how influential longtermism has become.” No, it is trivially easy. For example: “Longtermism is one of the top three considerations driving the policy choices of major world governments.” There, I just proved him wrong.
A line-by-line critique of the essay would find lots of things to snarkily complain about, lots of implications in the text that are frustrating and unfair, and several simply inaccurate claims, most notably the description of utilitarianism and its relationship to the community.
But those sorts of issues are beside the point. My prediction is that outsiders for whom longtermist and utilitarian thinking feels natural will come away from reading this article interested, not repulsed.
I think you’re underestimating the impact bad faith criticism can have. Lots of people just copy their takes from someone else.
One substantive point that I do think is worth making is that Torres isn't coming from the perspective of common-sense morality vs. longtermism, but rather from a different, opposing, non-mainstream morality that (like longtermism) is much more common among elites and academics.
When he says that this Baconian idea is going to damage civilisation, presumably he thinks that we should do something about this, so he’s implicitly arguing for very radical things that most people today, especially in the Global South, wouldn’t endorse at all. If we take this claim at face value, it would probably involve degrowth and therefore massive economic and political change.
I'm not saying that longtermism agrees with the moral priorities of most people, or that Torres's (progressive? degrowth?) worldview is overall as counterintuitive as longtermism. His perspective is more counterintuitive to me, but on the other hand a lot more people share his worldview, and it's currently much more influential in politics.
But I think it’s still important to point out that Torres’s world-view goes against common-sense morality as well, and that like longtermists he thinks it’s okay to second guess the deeply held moral views of most people under the right circumstances.
Practically, what that means is that, for the reasons you've given, many of the criticisms that don't rely on common-sense morality, but rather on his own morality, won't land with everyone reading the article. So I agree that this probably doesn't make longtermism look as bad as he thinks.
FWIW, my guess is that if you asked a man in the street whether weak longtermist policies or degrowth environmentalist policies were crazier, he’d probably choose the latter.
This is not consistent with my conversations with longtermists and my impressions speaking to funder(s) in longtermism.
From this, it seems that being seen as weird or negative would actively undermine foundational projects in longtermism today.
Sometimes I think people have a model of movement building that involves mass action, publicity and generally “punching out” (e.g. PETA, people physically fighting in the same two blocks in Portland for months).
I would want someone to build a much better model involving gears-level work. I think the necessary talent and insight are probably available today, and someone should write about this explicitly.
For years, LessWrong and the broader rationalist community have faced similar criticism:
Look at these nerds trying to tally up biases in a naïve way that will only lead them further astray from properly understanding themselves and the world.
What does “rational” even mean, how can we define it? How arrogant to name the whole movement “rationalism”.
This weird stuff about quantum mechanics sounds crazy, like the kind of thing cults believe.
(Misconceptions about “rationality” meaning being cold/emotionless like Spock from Star Trek)
As with this article, some of the criticism was true (tallying up official psychological biases turns out to be not very useful, and the community has moved away from that over time), but it’s mixed in with a lot of nonsense and bad-faith or ill-informed attacks. Nevertheless, lots of people (such as myself, long ago) initially heard of LessWrong via criticism of it, and over time our curiosity kept us coming back to learn more until we were eventually won over. But surely it also turned off many others, or prevented rationalism from becoming as prestigious / authoritative / mainstream as it possibly could have otherwise.
Overall, in retrospect, do you think this kind of criticism helped or hurt the rationalist movement?
Having missed the responses to some of Torres's earlier articles, I found myself interested to read a response to some of his points. If anyone else is in the same boat, this post from 8 months back covers most of them.
Great, thanks for writing this. I wish you had included a concise, short summary of the article in your post rather than just your evaluation. That would have provided more information to people who don't read the article. I read parts of the original article.
Re. "Few of us would unhesitatingly accept the repugnant conclusion": I unhesitatingly accept the repugnant conclusion. We all do, except for people who say that it's repugnant to place the welfare of a human above that of a thousand bacteria. (I think Jains say something like that.)
Arriving at the repugnant conclusion presumes you have an objective way of comparing the utility of two beings. I can't just say "my utility function equals your utility function times two". You have to have some operationalized, common definition of utility, in which values presumably cash out in an organism's conscious phenomenal experience, that allows you to compare utilities across beings.
It’s easy to believe that such an objective measure would calculate the utility of pleasure to a human as being more than a thousand times as great as the utility of whatever is pleasurable to a bacterium (probably something like a positive glucose gradient). Every time we try to kill the bacteria in our refrigerator, we’re endorsing the repugnant conclusion.
Can you briefly explain, in your own words, what “accepting the repugnant conclusion” means?
Ummm, I think for me it is believing that for any fixed number of people with really good lives, there is some sufficiently large number of people with lives that are barely worth living that is preferable.
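The "shut up and multiply" arithmetic behind that definition can be made concrete with a toy calculation under total utilitarianism. All the population sizes and welfare numbers below are made up purely for illustration, not claims about real populations:

```python
# Toy illustration of the repugnant conclusion under total utilitarianism.
# Welfare numbers are arbitrary illustrative values, not empirical claims.

def total_welfare(population_size: int, welfare_per_person: float) -> float:
    """Total utility = number of people times welfare per person."""
    return population_size * welfare_per_person

# Population A: one million people with really good lives (welfare 100 each).
a = total_welfare(1_000_000, 100)

# Population Z: lives barely worth living (welfare 0.01 each).
# Total utilitarianism prefers Z once it is sufficiently large:
z = total_welfare(20_000_000_000, 0.01)

print(a, z, z > a)  # Z's total welfare exceeds A's, so Z is "preferable"
```

The point is only that for any fixed A, some large enough Z always wins on totals, which is exactly the conclusion the term "repugnant" refers to.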
The question was not addressed to you. :)