Washington Post article about EA university groups
The article is here (note that the Washington Post is paywalled[1]). The headline[2] is “How elite schools like Stanford became fixated on the AI apocalypse,” subtitled “A billionaire-backed movement is recruiting college students to fight killer AI, which some see as the next Manhattan Project.” It’s by Nitasha Tiku.
Notes on the article:
The article centers on how AI existential safety concerns became more of a discussion topic in some communities, especially on campuses. The main example is Stanford.
It also talks about:
EA (including recent scandals)
Funding for work on alignment and AI safety field-building (particularly for university groups and fellowships)
Whether or not extinction/existential risk from AI is plausible in the near future (sort of in passing)
It features comments from:
Paul Edwards, a Stanford University fellow “who spent decades studying nuclear war and climate change, considers himself ‘an apocalypse guy,’” and who developed a freshman course on human extinction. His work generally focuses on pandemics, climate change, nuclear winter, and advanced AI. (He’s also a faculty co-director of SERI.)
Gabriel Mukobi, a Stanford graduate who organized a campus AI safety group
And in brief:
Timnit Gebru (very briefly)
Steve Luby, an epidemiologist and professor of medicine and infectious disease, who is Edwards’s teaching partner for the class on human extinction (very briefly; he’s the other faculty co-director of SERI)
Open Philanthropy spokesperson Mike Levine (pretty briefly)
I expect that some folks on the Forum might have reactions to the article — I might share some in the comments later, but I just want to remind people about the Forum norms of civility.
[1] Up to some number of free articles per month.
[2] My understanding is that journalists don’t generally choose their headlines. Someone should correct me in the comments if this is wrong!