Yes, AI research is useful and does help highlight specific advancements and potential risks. However, I fear many focus on it out of personal interest in the topic rather than because it is the best route to reducing catastrophic and existential risks.
For better or worse, advocacy, policy, and communications are the most likely routes to reducing p(doom), unless you believe technical alignment is a plausible and concrete path.
(Caveat: I read the premises and skimmed the rest.)
This seems true: EA draws in nerds and technical folks, who are often not drawn to policy work and may underestimate its usefulness.