Highlights from recent Twitter (and Twitter-adjacent) discussion about this post (or the same topic), with the caveat that some views in tweets probably shouldn’t be taken too seriously:
Matt Yglesias: The more I think about it, the more I find the “longtermism” label puzzling since the actual two main ideas that self-described longtermists are advancing about pandemics and AI governance actually involve large probabilities and short time horizons.
Luke Muehlhauser: But tractability is low (esp. for AI), so valuing future people may be needed, at least to beat bednets (if not govt VSLs). That said, in a lot of contexts I think the best pitch skips philosophy: “Sometime this century, humans will no longer be the dominant species on the planet. Pretty terrifying, no? I think I want to throw up.”
Dustin Moskovitz: “They are merely 4-20X better (without future people)” doesn’t seem like a slam dunk argument; also *lots* of non-xrisk QALYs to be earned in bio along the way
Luke Muehlhauser: Agree given Eli’s estimates, but I think AI x-risk’s tractability is substantially lower than the 80k estimate Eli uses (given actually available funding opportunities).
(Alexander Berger and Leopold Aschenbrenner share similar skepticism about the tractability)
Sam Altman: also somewhat confused by the branding of longtermism, since most longtermists seem to think agi safety is an existential risk in the very short term :)
Vitalik Buterin: I think the philosophical argument is that bad AI may kill us all in 50 years, but the bulk of the harm of such an extinction event comes from the trillions of future humans that will never have a chance to be born in the billions of years that follow.
Sam Altman: i agree that’s the argument, but isn’t the “unaligned agi could kill all of us, we should really prioritize preventing that” case powerful enough without sort of halfway lumping other things in?
Vitalik Buterin: It depends! Worst-case nuclear war might kill 1-2 billion, full-on extinction will kill 8 billion, but adding in longtermist arguments the latter number grows to 999999...999 billion. So taking longtermist issues into account meaningfully changes relative priorities. Also, bio vs AI risk. A maximally terrible engineered pandemic is more likely to kill ~99.99% of humanity, leaving civilization room to recover from the remainder; AI will kill everyone. So it’s another argument for caring about AI over bio.
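To make the arithmetic behind Vitalik’s point concrete, here’s a rough illustrative sketch. The population figures are just the round numbers from his tweet, and the count of future people is an arbitrary placeholder rather than a real estimate:

```python
# Illustrative only: how counting future people changes relative priorities.
# Figures are the round numbers from the tweet; future_people is a placeholder.
nuclear_war_deaths = 1.5e9      # worst-case nuclear war: ~1-2 billion
extinction_deaths_now = 8e9     # full-on extinction: everyone alive today
future_people = 1e15            # placeholder for people who would otherwise exist

# Without future people, extinction is only ~5x worse than worst-case nuclear war.
print(extinction_deaths_now / nuclear_war_deaths)                     # ~5.3

# Counting future people, the ratio becomes astronomically larger.
print((extinction_deaths_now + future_people) / nuclear_war_deaths)   # ~670,000
```

Under longtermist accounting, the second ratio dominates whatever specific placeholder you pick for future people, which is the sense in which it “meaningfully changes relative priorities.”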
Matt Yglesias: What’s long-term about “longtermism”? Trying to curb existential risk seems important even without big philosophy ideas… The typical person’s marginal return on investment for efforts to reduce existential risk from misaligned artificial intelligence is going to diminish at an incredibly rapid pace.
My response thread: In sum, I buy the reasoning for his situation but think it doesn’t apply to most people’s career choices… To clarify: I agree that you don’t need longtermism to think that more work on x-risks by society in general is great on the margin! But I think you might need some level of it to strongly recommend x-risk careers to individuals in an intellectually honest way
Matt Yglesias: I find the heavy focus on career choice in the movement to be a bit eccentric (most people are old) but I agree that to the extent that’s the question of interest, my logic doesn’t apply.