Quick question—Is AI safety work considered a “long-termist” intervention? I know it has both short term and long term potential benefits, but what do people working on it generally see it as?
I suppose if you are generally pretty doomer, it wouldn’t meet your 4th criterion: “Genuinely longtermist: it’s something that we wouldn’t want to do anyway based on neartermist concerns.”
Also, one would hope that it wouldn’t be too long before @Forethought has cranked out one or two, as I think finding these is a big part of why they exist...
The EA Forum wiki says the Forethought Foundation was created in 2018. Apparently, though, the new organization, Forethought Research, was launched in 2025 and focuses exclusively on near-term AGI.
The Forethought Foundation apparently shut down in 2024. (According to Will MacAskill’s website and LinkedIn.)
I didn’t realize until now these were two different organizations both run by Will MacAskill, both based in Oxford, with the same name.
So, it seems the Forethought Foundation ran for six years before shutting down and, in that time, wasn’t able to find a novel, actionable, promising longtermist intervention (beyond those that had already been discussed before its founding).
I mentioned that you often see journalists or other people not intimately acquainted with effective altruism conflate ideas like longtermism and transhumanism (or related ideas about futuristic technologies). This is a forgivable mistake because people in effective altruism often conflate them too.
If you think superhuman AGI is 90% likely within 30 years, or whatever, then obviously that will impact everyone alive on Earth today who is lucky (or unlucky) enough to live until it arrives, plus all the children born between now and then. Longtermists might think the moral value of the far future makes this even more important. But, in practice, people who aren’t longtermists but who share that 90%-within-30-years view seem equally concerned about AI. So, is that really a longtermist intervention?