However, many elements of the philosophical case for longtermism are independent of contingent facts about what is going to happen with AI in the coming decades.
Agree, though there are arguments from one to the other! In particular:
As I understand it, longtermism requires that it be tractable to affect the long-term future (“ltf”) in expectation.[1]
Some people might think that the only, or most tractable, way of affecting the ltf is to reduce extinction[2] risk in the coming decades or century (on the grounds that we can have no idea about the expected effects of basically anything else on the ltf, because effects other than “causes the ltf to exist or not” are too complicated to predict).
If extinction risk is high, especially from a single source in the near future, it’s plausibly easier to reduce. (This seems questionable, but far from crazy.)
So thinking extinction risk is high, especially from a single source in the near future, might reasonably increase someone’s belief in longtermism.
Thinking AI risk is high in the near future is a way of thinking extinction risk is high from a ~single source in the near future.
So thinking AI risk is high in the near future is a reason to believe longtermism.
[1] Basically because you can’t have reasons to do things that are impossible.
[2] Since “existential risk” on the Toby Ord definition is, by definition, anything that reduces humanity’s potential (and therefore affects the ltf in expectation), I think it’d be confusing to use that term in this context, so I’m going to talk about extinction, even though people think there are non-extinction existential catastrophe scenarios from AI as well.