The Phil Torres essay in Aeon attacking Longtermism might be good
https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo
Note: I do not agree that longtermism is bad. I am a longtermist, and reading this article did not in the slightest motivate me to change that identity or set of beliefs. However, and interestingly unlike Phil Torres’s other recent essay in Current Affairs, in this essay he was the sort of enemy whom I think it is good to have.
What I mean by this is mainly that if I had encountered this essay ten years ago, it probably would have led me to start researching longtermism and effective altruism, because they sound interesting. EA is still at a stage where recruiting new people who are interested in the ideas matters vastly more to the cause than ensuring that everyone thinks well of it.
And anyway, I don’t think the essay will make people think very badly of longtermism. It explains the intellectual motivations of the movement too well, especially in the first half. Even where the essay was clearly unfair to longtermists (specifically in describing us all as total hedonic utilitarians, when few of us would unhesitatingly accept the repugnant conclusion, and hedonic utilitarianism is not even the dominant branch of utilitarianism), he still accurately described a point of view that some people actually hold, and that has historically been part of the debate.
I even think that what seems to be his core critique of longtermism is largely true: that it can lead to millenarianism, accelerate the development of dangerous technologies, lead people to ignore problems that affect actually existing people, and ultimately aim at a sort of future that most current people do not want.
If nothing else, my donations to the Long-Term Future Fund and MIRI would have gone to a global poverty charity if I weren’t a longtermist, so that little bit of money has gone to pay the salaries of comfortable first-world people for whom the best median estimate of how much long-term or short-term good they will do through this work is zero. That Ord and MacAskill are developing ideas about moral parliaments, doing things that are good under many moral systems, and talking about ways to be a longtermist while avoiding fanaticism does not change a simple fact: when someone takes the well-being of future people as a serious moral concern, and is willing to shut up and multiply, it really is natural for them to think that anything bad that could possibly happen today would be worth a tiny percentage increase in the expected number of future happy people. Weird transhumanist visions of digital people using all of the resources of the galaxy to run as many uploaded minds as possible might not be the only longtermist vision of the future, but it has always seemed to me like the most popular one.
Of course this essay is often unfair. Of course there is nonsense in the article. Torres wrote, “It is difficult to overstate how influential longtermism has become.” No, it is trivially easy. For example: “Longtermism is one of the top three considerations driving the policy choices of major world governments.” There, I just proved him wrong.
A line-by-line critique of the essay would find plenty to snarkily complain about: lots of implications in the text that are frustrating and unfair, and several simply inaccurate claims, most notably the description of utilitarianism and its relationship to the community.
But those sorts of issues are beside the point. My prediction is that outsiders for whom longtermist and utilitarian thinking feels natural will come away from reading this article interested, not repulsed.