I think there’s some confusion in Scott’s post and the ensuing discussion. Both “longtermism” and “existential risk” make reference to the long-term future; the latter does so because “existential risk” means “risk to humanity’s long-term potential”. On the other hand, threats to this potential can be located in both the near term and the long term. But a longtermist is not, as such, any less concerned about near-term risks to humanity’s long-term potential than an existential risk reducer. So the “longtermism vs. existential risk” debate seems entirely orthogonal to the “near-term vs. long-term risk” debate: if one thinks we should be framing AI as something that “may kill you and everybody else”, rather than as a threat to our long-term future, one should probably avoid both “longtermism” and “existential risk”, and instead use something like “imminent risk of human extinction”.
(The confusion is in part caused by the similarity of the terms “existential” and “extinction”, which causes many people, including some EAs and rationalists, to use the former as a synonym for the latter. But it’s also caused by a failure to distinguish clearly between the two different temporal dimensions involved: the short-term location of the threat to our potential vs. the long-term location of the potential under threat.)