I think my basic reaction here is that longtermism is importantly correct about the central goal of EA if there are longtermist interventions that are actionable, promising, and genuinely longtermist in the weak sense of "better than any other causes because of long-term effects", even if there are zero examples of LT interventions that meet the "novelty" criteria, or that lack significant near-term benefits.
Firstly, I'd distinguish here between longtermism as a research program and longtermism as a position about what causes should be prioritized right now by people doing direct work. At most, criticisms about novelty seem relevant to evaluating the research program and to deciding whether to fund more research into longtermism itself. I feel like they should be mostly irrelevant to people actually doing cause prioritization over direct interventions.
Why? I don't see why longtermism wouldn't count as an important insight for cause prioritization if thinking in longtermist terms didn't turn up any new intervention that wasn't already known to be good, but did change the rankings of interventions so that I changed my mind about which interventions were best. That seems to be roughly what longtermists themselves think is the situation with regard to longtermism. It's not that there is zero reason to do X-risk-reduction-type interventions even if LT is false, since they do benefit current people. But the case for those interventions being many times better than other things you can do for current people and animals rests on, or at least is massively strengthened by, Parfit-style arguments about how there could be many happy future people. So the practical point of longtermism isn't necessarily to produce novel interventions; it is also to help us prioritize better among the interventions we already knew about. Of course, the idea that Parfit-style arguments are correct in theory is older than using them to prioritize between interventions, but so what? Why does that affect whether it is a good idea to use them to prioritize between interventions now?

The most relevant question for what EA should fund isn't "is longtermist philosophy post-2017 simultaneously impressively original and of practical import" but "should we prioritize X-risk because of Parfit-style arguments about the number of happy people there could be in the future". If the answer to the latter question is "yes", we've agreed EAs should do what longtermists want in terms of direct work on causes, which is at least as important as how impressed we should or shouldn't be with the longtermists as researchers.* At most the latter is relevant to "should we fund more research into longtermism itself", which is important, but not as central as what first-order interventions we should fund.

To put the point slightly differently, suppose I think the following:
1) Based on Bostrom- and Parfit-style arguments (and don't forget John Broome's case for making happy people being good, which I think is at least as influential on Will and Toby), the highest-value thing to do is some form of X-risk reduction, say biorisk reduction for concreteness.
2) If it weren't for the fact that there could exist vast numbers of happy people in the far future, the marginal benefits to current and near-future people of global development work would be higher than those of biorisk reduction, and global development should be funded by EA instead, although biorisk reduction would still have significant near-term benefits, and society as a whole should have more than zero people working on it.
Well, then I am a longtermist, pretty clearly, and it has made a difference to what I prioritize. If I am correct about 1), then it has made a good difference to what I prioritize, and if I am wrong about it, it might not have done. But how novel 1) was when said in 2018, or what other insights LT produced as a research program, is just completely irrelevant to whether I am right to change my cause prioritization based on 1) and 2).
None of this is to say that 1), or its equivalent about some other purported X-risk, is true. But I don't think you've said anything here that should bother someone who thinks it is.
Whether society ends up spending more money on asteroid defense or, possibly, more money on monitoring large volcanoes is orders of magnitude more important than whether people in the EA community (or outside of it) understand the intellectual lineage of these ideas and how novel or non-novel they are. I don't know if that's exactly what you were saying, but I'm happy to concede that point anyway.
To be clear, NASA's NEO Surveyor mission is one of the things I'm most excited about in the world. It makes me feel so happy thinking about it. And exposure to Bostrom's arguments from the early 2000s to the early 2010s is a major part of what convinced me that we, as a society, were underrating low-probability, high-impact risks. (The Canadian journalist Dan Gardner's book Risk also helped convince me of that, as did other people I'm probably forgetting right now.)
Even so, I still think it's important to point out when ideas are not novel, or not that novel, for all the sorts of reasons you would normally give for sweating the small stuff and not letting something slide that, on its own, seems like an error or a bit of a problem, just because it might plausibly benefit the world in some way. It's a slippery slope, for one...
I may not have made this clear enough in the post, but I completely agree that if, for example, asteroid defense is not a novel idea, but a novel idea, X, tells you that you should spend 2x more money on asteroid defense, then spending 2x more on asteroid defense counts as a novel X-ist intervention. That's an important point; I'm glad you made it, and I probably wasn't clear enough about it.
However, I am making the case that all the compelling arguments to do anything differently, including spending more on asteroid defense or re-prioritizing different interventions, were already made long before "longtermism" was coined.
If you want to argue that "longtermism" was a successful re-branding of "existential risk", with some mistakes thrown in, I'm happy to concede that. (Or at least say that I don't care strongly enough to argue against that.) But then I would ask: is everyone aware it's just a re-branding? Is there truth in advertising here?