Does 80,000 Hours focus too much on AI risk?

(Cross-post from /r/EffectiveAltruism, with minor revisions.)

On the home page of 80,000 Hours, they present a key advice article outlining their primary recommendations for EA careers. According to them, this article represents the culmination of years of research and debate, and is one of the most detailed, advanced introductions to EA yet.

However, while the article does cover some background ideas about the foundations of EA, one recommendation now stands above all else: a single, narrow focus on recruiting people into AI safety.

To be sure, the article mentions other careers. For example, it brings up mitigating climate change and nuclear war as potential alternatives, before instantly dismissing them because they aren’t neglected. The article also briefly mentions the other two classical EA cause areas, global poverty and animal welfare, but rejects them one sentence later for not focusing on the long term. This ignores the fact that value spreading and ripple effects can affect the distant future. Quote:

Some other issues we’ve focused on in the past include ending factory farming and improving health in poor countries. These areas seem especially promising if you don’t think people can or should focus on the long-term effects of their actions. (emphasis mine)

In the end, the article recommends only AI risk and biorisk as plausible EA cause areas. But even for biorisk, it says:

We rate biorisk as a less pressing issue than AI safety, mainly because we think biorisks are less likely to be existential, and AI seems more likely to play a key role in shaping the long-term future in other ways.

This is a stark contrast with the effective altruism of the past, and with the community as a whole, which focuses on a diversity of cause areas. Now, according to 80,000 Hours, EA should focus on AI alone.

This confuses me. EA is supposed to be about evidence and practicality. Personally, I’m pretty skeptical of some of the claims that AI safety researchers have made about the priority of their work. To be clear, I do think it’s a respectable career, but is it really what we should recommend to everyone? Consider the following:

  • It’s not clear that advanced artificial intelligence will arrive any time within the next several decades, and if AI were far away, that would substantially reduce EA’s leverage. I’m not personally that impressed by the recent deep learning revolution, which I see as essentially a bunch of brittle tools and tricks that don’t generalize well. See Gary Marcus’s critique.

  • Most researchers seem to be moving away from a fast takeoff view of AI development and toward a soft takeoff view in which the effects of AI are highly distributed. If soft takeoff is true, it’s much harder to see how a lot of safety work is useful. Yet, despite this shift, top EA orgs seem to have become paradoxically more confident that artificial intelligence is cause X!

  • No one really has a clear idea of what kind of AI safety research is useful, and one of the top AI safety organizations, MIRI, has now gone private, so we can’t even inspect whether it is doing useful work.

  • Productive AI safety research is inaccessible to over 99.9% of the population, making this advice nearly useless to almost everyone reading the article.

  • Top AI safety researchers are now saying that they expect AI to be safe by default, without further intervention from EA. See here and here.

AI safety as a field should still exist, and we should still give it funding. But is it responsible for top EA organizations to make it the single cause area that trumps all others?