[Linkpost] Given Extinction Worries, Why Don’t AI Researchers Quit? Well, Several Reasons

Link post

I’ve written a blog post for a lay audience explaining some of the reasons that AI researchers who are concerned about extinction risk continue to work on AI research despite their worries.

The apparent contradiction is causing a lot of confusion among people who haven’t followed the relevant discourse closely. In many cases, the lack of clarity seems to be leading people to resort to borderline conspiratorial thinking (e.g., about the motives of signatories of the recent statement), or to dismiss the worries as not entirely serious.

I hope this piece can help make common knowledge some things that aren’t widely known outside of tech and science circles.

As an overview, the reasons I focus on are:

  1. Their specific research isn’t actually risky

  2. Belief that AGI is inevitable and more likely to go better if you personally are involved

  3. Thinking AGI is far enough away that it makes sense to keep working on AI for now

  5. Commitment to science for science’s sake

  5. Belief that the benefits of AGI would outweigh even the risk of extinction

  6. Belief that advancing AI on net reduces global catastrophic risks, via reducing other risks

  7. Belief that AGI is worth it, even if it causes human extinction

I’ll also note that the piece isn’t meant to defend the decision of researchers who continue to work on AI despite believing it presents extinction risks, nor to criticize them for that decision, but simply to add clarity.

If you’re interested in reading more, you can follow the link here. And of course, feel free to send the link to anyone who’s confused by the current situation.

Crossposted to LessWrong