I'm going to actually disagree with your initial premise (that the basic point is that the expected number of people in the future is much lower than longtermists estimate), because, at least in the Reflective Altruism blog series, I don't see that as being the main objection David has to (Strong) Longtermism. Instead, I think he argues that the interventions longtermists support require additional hypotheses (the time of perils) which are probably false, and that the empirical evidence longtermists give for their existential pessimism is often non-robust on further inspection.[1] Of course my understanding is not complete, David himself might frame it differently, etc.
One interesting result from his earlier Existential risk pessimism and the time of perils paper is that on a simple model (though he extends the results to more complex ones), people with low x-risk estimates should be longtermists about value, and those with high x-risk estimates should be focused on the short term, which is basically the opposite of what we see happening in real life. The best way out for the longtermist, he argues, is to believe in "the time of perils hypothesis". I think the main appeals to this being the case are either a) interstellar colonisation giving us existential security, so our moral value isn't tethered to one planet,[2] or b) aligned superintelligence allowing us unprecedented control over the universe and the ability to defuse any sources of existential risk. But of course, many working on existential AI risk are actually very pessimistic about the prospects for alignment, and so, if they are longtermist,[3] why aren't they retiring from technical AI Safety and donating to AMF? More disturbingly, are longtermists just using the "time of perils" belief to backwards-justify their prior belief that interventions in things like AI are the utilitarian-optimal interventions to be supporting? I haven't seen a good longtermist case answering these questions, which is not to say that one doesn't exist.
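For concreteness, here's a back-of-the-envelope rendering of the kind of simple-model result I have in mind (my own notation and simplifications, not necessarily Thorstad's exact setup): suppose each surviving century is worth $v$ and faces a constant per-century extinction risk $r$. Then the expected value of the future is

$$W \;=\; \sum_{t=1}^{\infty} (1-r)^{t}\, v \;=\; \frac{v(1-r)}{r},$$

and an intervention that lowers only this century's risk by an absolute amount $\Delta$ adds

$$\Delta\,(v + W) \;=\; \frac{\Delta v}{r},$$

which shrinks as $r$ grows: the more pessimistic you are about background risk, the less the same mitigation is worth, unless you think the perilous period is temporary.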
Furthermore, in terms of responses from EA itself, what's interesting is that when you look at the top uses of the Longtermism tag on the Forum, all of the top 8 were made ~3 years ago, and only 3 of the top 20 were made within the last 3 years. Longtermism isn't used a lot even amongst EA any more, likely the result of negative responses from the broader intelligentsia during the 2022 soft launch, and then the incredibly toxic fallout of the FTX collapse shortly after the release of WWOTF. So while I find @trammell's comment below illuminating in some respects about why there might be fewer responses than expected, I think it is sociologically wrong about the overarching reasons: I think longtermism doesn't have much momentum in academic philosophical circles right now. I'm not plugged into the GPI-Sphere though, so I could be wrong about this.
So my answer to your initial question is "no" if you mean "something big published post-Thorstad that responds directly or implicitly to him from a longtermist perspective". Furthermore, were someone to do so, or to point at one already done (like The Case for Strong Longtermism), I'd probably just reject many of the premises that give the case legs in the first place, such as that it's reasonable to use risk-neutral expected-value reasoning about the very long-run future as a guide to moral action. Other objections to longtermism I am sympathetic to are those from Eric Schwitzgebel (here, here), among others. I don't think this is David's perspective, though; I think he believes that the empirical warrant for the claims isn't there, but that he would support longtermist policies if he believed they could be supported this way.
I'm also somewhat disturbed by the implication that some proportion of the EA Brain-Trust, and/or those running major EA/AI Safety/Biorisk organisations, are actually still committed longtermists or justify their work in longtermist terms. If so, they should make sure this is known publicly and not hide it. If you think your work on AI Policy is justified on strong longtermist grounds, then I'd love to see the model used for that, along with the parameters used for the length of the time of perils, the marginal difference to x-risk the policy would make, and the evidence backing up those estimates. Like, if 80k have shifted to be AI Safety focused because of longtermist philosophical commitments, then let's see those commitments! The inability of many longtermist organisations to do that is a sign of what Thorstad calls the regression to the inscrutable,[4] which I think is one of his stronger critiques.
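To be concrete about what I'm asking for, even a toy model like the sketch below would be a start (every function name and number here is a placeholder of my own invention, not anyone's actual estimate). What seems to be missing from the public longtermist case is the defended parameter values and the sensitivity analysis, not the arithmetic.

```python
# Hypothetical sketch of a "time of perils" cost-effectiveness model.
# All parameter values are made-up placeholders, not anyone's real estimates.

def expected_future_centuries(perils_length, risk_during, risk_after, horizon=10_000):
    """Expected number of future centuries humanity survives, given an initial
    'time of perils' (perils_length centuries at per-century risk risk_during),
    followed by a lower post-perils per-century risk (risk_after)."""
    p_alive, total = 1.0, 0.0
    for t in range(horizon):
        risk = risk_during if t < perils_length else risk_after
        p_alive *= 1.0 - risk
        total += p_alive
    return total

# Baseline vs. a policy that (hypothetically) shaves 0.1 percentage points off
# per-century risk throughout the perils period.
baseline = expected_future_centuries(perils_length=5, risk_during=0.100, risk_after=0.0001)
with_policy = expected_future_centuries(perils_length=5, risk_during=0.099, risk_after=0.0001)

print(f"Expected centuries gained by the policy: {with_policy - baseline:,.0f}")
```

The point isn't that this particular toy model is right; it's that if the case for these interventions really rests on numbers like these, the numbers should be stated and defended somewhere public.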
Disagreement about future population estimates would be a special case of the latter here
In The Epistemic Challenge to Longtermism, Tarsney notes that:
Note these considerations don't apply to you if you're not an impartial longtermist; but then again, if many people working in this area don't count themselves as longtermists, it certainly seems like a poor sign for longtermism.
Term coined in this blog post about WWOTF
A good general rule for life
(I am not a time-invariant-risk-neutral-totally-impartial-utilitarian, for instance)
Fair enough, I think the lack of a direct response has been due to an interaction between the two things. At first, people familiar with the existing arguments didn't see much to respond to in David's arguments, and figured most people would see through them. Later, when David's arguments had gotten around more and it became clear that a response would be worthwhile (and, for that matter, when genuinely novel new arguments had been made), the small handful of people who had been exploring the case for longtermism had mostly moved on to other projects.
I would disagree a bit about why they moved on, though: my impression is that the bad association the word "longtermism" picked up from FTX was only slightly responsible for their shift in focus, and that the main driver was simply that faster-than-expected AI progress convinced them the most valuable philosophy work to be done was more directly AI-related.