FWIW, my own views are more like ‘regular longtermism’ than ‘strong longtermism,’ and I would agree with Toby that existential risk should be a global priority, not the global priority. I’ve focused my career on reducing existential risk, particularly from AI, because it seems to have a substantial chance of materializing in my lifetime, to carry enormous stakes, and to be extremely neglected. I probably wouldn’t have gotten into it when I did if I didn’t think doing so was much more effective than GiveWell top charities at saving current human lives, and outperforming them by an even larger margin on metrics like cost-benefit in dollar terms.
Longtermism as such (as one of several moral views commanding weight for me) plays the largest role for things like refuges that would prevent extinction but not catastrophic disaster, or leaving seed vaults and knowledge for apocalypse survivors. And I would say longtermism provides good reason to make at least modest sacrifices for that sort of thing (much more than the ~0 current world effort), but not extreme fanatical ones.
There are definitely some people who are fanatical strong longtermists, but a lot of people who are made out to be such treat it as an important consideration, not one held with certainty or with overwhelming dominance over all other moral frames and considerations. In my experience, one cause of this is that if you write about the implications of a particular worldview, people assume you place 100% weight on it, when the correlation is a lot less than 1.
I see the same thing happening with Nick Bostrom: e.g., his old ‘Astronomical Waste’ article explicitly explores things from a totalist view, where existential risk dominates via long-term effects, but also from a person-affecting view, where it is balanced strongly by other considerations like speed of development. In Superintelligence he explicitly prefers not making drastic sacrifices of existing people for tiny proportional (but immense absolute) gains to future generations, while also saying that future generations are neglected and a big deal in expectation.
There are definitely some people who are fanatical strong longtermists, but a lot of people who are made out to be such treat it as an important consideration, not one held with certainty or with overwhelming dominance over all other moral frames and considerations. In my experience, one cause of this is that if you write about the implications of a particular worldview, people assume you place 100% weight on it, when the correlation is a lot less than 1.
I agree with this, and the example of Astronomical Waste is particularly notable. (As I understand his views, Bostrom isn’t even a consequentialist!) This is also true for me with respect to the CFSL (‘The Case for Strong Longtermism’) paper, and to an even greater degree for Hilary: she really doesn’t know whether she buys strong longtermism; her views are very sensitive to current facts about how much we can reduce extinction risk with a given unit of resources.
The language-game of ‘writing a philosophy article’ is very different from that of ‘stating your exact views on a topic’: the former is more about making a clear and forceful argument for a particular view, or a particular implication of a view someone might have, and much less about conveying every nuance, piece of uncertainty, or in-practice constraint. Once philosophy articles get read more widely, that can cause confusion. Hilary and I didn’t expect our paper to get read so widely; it’s really targeted at academic philosophers.
Hilary is on holiday, but I’ve suggested we make some revisions to the language in the paper so that it’s a bit clearer to people what’s going on. This would mainly be changing phrases like ‘defend strong longtermism’ to ‘explore the case for strong longtermism’, which I think more accurately represents what’s actually going on in the paper.