I think Transformative AI is unusually powerful and dangerous relative to other things that could plausibly kill us or otherwise drastically alter humanity's trajectory, and many of us believe AI doom is not inevitable.
I think it’s probably correct for EAs to focus on AI more than other things.
Other plausible contenders (some of which I've worked on) include global priorities research, biorisk mitigation, and moral circle expansion. But broadly: a) I think they're less important or less tractable than AI, and b) many of them are entangled with AI anyway (e.g. global priorities research that ignores AI misses the most important consideration).
I largely agree with Linch's answer (primarily: that AI is very likely extremely dangerous), and want to point out a couple of relevant resources in case a reader is less familiar with the foundations of these claims:
- The 80,000 Hours problem profile for AI is pretty good, and has lots of other useful links.
- This post is also really helpful, I think: "Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover"
- More broadly, you can explore a lot of discussion on the AI risk topic page in the EA Forum Wiki.