I mentioned that you often see journalists or other people not intimately acquainted with effective altruism conflate ideas like longtermism and transhumanism (or related ideas about futuristic technologies). This is a forgivable mistake because people in effective altruism often conflate them too.
If you think superhuman AGI is 90% likely within 30 years, or whatever, then obviously that will impact everyone alive on Earth today who is lucky (or unlucky) enough to live until it arrives, plus all the children who will be born between now and then. Longtermists might think the moral value of the far future makes this even more important. But, in practice, people who aren't longtermists but who share that same belief about superhuman AGI seem to be just as concerned about the AI thing. So, is that concern really longtermist?