Really appreciate those diagrams - thanks for making them! I agree, and I think there are serious risks of EA being taken over as a field by AI safety.
The core ideas behind EA are too young and too little known to most of the world for them to be strangled by AI safety, even if it is the most pressing problem.
Pulling out a quote from MacAskill's comment (since a lot of people won't click):
I've also experienced what feels like social pressure to have particular beliefs (e.g. around non-causal decision theory, high AI x-risk estimates, other general pictures of the world), and it's something I also don't like about the movement. My biggest worries with my own beliefs stem around the worry that I'd have very different views if I'd found myself in a different social environment. It's just simply very hard to successfully have a group of people who are trying to both figure out what's correct and trying to change the world: from the perspective of someone who thinks the end of the world is imminent, someone who doesn't agree is at best useless and at worst harmful (because they are promoting misinformation).
In local groups in particular, I can see how this issue can get aggravated: people want their local group to be successful, and it's much easier to track success with a metric like "number of new AI safety researchers" than "number of people who have thought really deeply about the most pressing issues and have come to their own well-considered conclusions".
One thing I'll say is that core researchers are often (but not always) much more uncertain and pluralist than it seems from "the vibe".
...
What should be done? I have a few thoughts, but my most major best guess is that, now that AI safety is big enough and getting so much attention, it should have its own movement, separate from EA. Currently, AI has an odd relationship to EA. Global health and development and farm animal welfare, and to some extent pandemic preparedness, had movements working on them independently of EA. In contrast, AI safety work currently overlaps much more heavily with the EA/rationalist community, because it's more homegrown.
If AI had its own movement infrastructure, that would give EA more space to be its own thing. It could more easily be about the question "how can we do the most good?" and a portfolio of possible answers to that question, rather than one increasingly common answer: "AI".
At the moment, I'm pretty worried that, on the current trajectory, AI safety will end up eating EA. Though I'm very worried about what the next 5-10 years will look like in AI, and though I think we should put significantly more resources into AI safety even than we have done, I still think that AI safety eating EA would be a major loss. EA qua EA, which can live and breathe on its own terms, still has huge amounts of value: if AI progress slows; if it gets so much attention that it's no longer neglected; if it turns out the case for AI safety was wrong in important ways; and because there are other ways of adding value to the world, too. I think most people in EA, even people like Holden who are currently obsessed with near-term AI risk, would agree.
Wow, Will really articulates that well. Thanks for the quote; you're right, I wouldn't have seen it myself!
I also fear that EA people with an AI focus may concentrate too hard on "EA-aligned" AI safety work (technical AI alignment and "inner game" policy work), and in doing so limit the growth of movements outside the AI circle that could expand the AI safety community (e.g. AI ethics or AI pause activism).
This is of course highly speculative, but I think that's what we're doing right now ;).