Really appreciate those diagrams; thanks for making them! I agree that there are serious risks of EA as a field being taken over by AI safety.
The core ideas behind EA are too young, and too little known to most of the world, to be strangled by AI safety, even if AI safety is the most pressing problem.
Pulling out a quote from MacAskill’s comment (since a lot of people won’t click):
I’ve also experienced what feels like social pressure to have particular beliefs (e.g. around non-causal decision theory, high AI x-risk estimates, other general pictures of the world), and it’s something I also don’t like about the movement. My biggest worries with my own beliefs stem around the worry that I’d have very different views if I’d found myself in a different social environment. It’s just simply very hard to successfully have a group of people who are trying to both figure out what’s correct and trying to change the world: from the perspective of someone who thinks the end of the world is imminent, someone who doesn’t agree is at best useless and at worst harmful (because they are promoting misinformation).
In local groups in particular, I can see how this issue can get aggravated: people want their local group to be successful, and it’s much easier to track success with a metric like “number of new AI safety researchers” than “number of people who have thought really deeply about the most pressing issues and have come to their own well-considered conclusions”.
One thing I’ll say is that core researchers are often (but not always) much more uncertain and pluralist than it seems from “the vibe”.
...
What should be done? I have a few thoughts, but my most major best guess is that, now that AI safety is big enough and getting so much attention, it should have its own movement, separate from EA. Currently, AI has an odd relationship to EA. Global health and development and farm animal welfare, and to some extent pandemic preparedness, had movements working on them independently of EA. In contrast, AI safety work currently overlaps much more heavily with the EA/rationalist community, because it’s more homegrown.
If AI had its own movement infrastructure, that would give EA more space to be its own thing. It could more easily be about the question “how can we do the most good?” and a portfolio of possible answers to that question, rather than one increasingly common answer — “AI”.
At the moment, I’m pretty worried that, on the current trajectory, AI safety will end up eating EA. Though I’m very worried about what the next 5-10 years will look like in AI, and though I think we should put significantly more resources into AI safety even than we have done, I still think that AI safety eating EA would be a major loss. EA qua EA, which can live and breathe on its own terms, still has huge amounts of value: if AI progress slows; if it gets so much attention that it’s no longer neglected; if it turns out the case for AI safety was wrong in important ways; and because there are other ways of adding value to the world, too. I think most people in EA, even people like Holden who are currently obsessed with near-term AI risk, would agree.
Wow, Will really articulates that well. Thanks for the quote; you’re right, I wouldn’t have seen it myself!
I also fear that EA people with an AI focus may concentrate too narrowly on “EA-aligned” AI safety work (technical AI alignment and “inner game” policy work), and in doing so limit the growth of movements that could expand the AI safety community beyond the EA circle (e.g. AI ethics, or AI pause activism).
This is of course highly speculative, but I think that’s what we’re doing right now ;).