Anonymous #12:

I feel that people involved in effective altruism are not very critical of the ways that confirmation bias and hero-of-the-story biases slip into their arguments. It strikes me as… convenient… that one of the biggest problems facing humanity is computers, and that a movement popular among Silicon Valley professionals says people can solve it by getting comfortable professional jobs in Silicon Valley and donating some of the money to AI risk groups.
This is obviously not the whole story, as the arguments for taking AI risk seriously are not at all transparently wrong—though I think EA folks are often overconfident regarding the assumptions they make about the future of AI. Still, it seems worth looking into why this community’s agenda ended up meshing so neatly with its members’ hobbies. In my more uncharitable moments, I can’t help but feel that if the trendy jobs were in potato farming, some in EA would be imploring me to deal with the growing threat of tubers.
(I’m EA-adjacent. I seem to know a lot of you, and I’m sympathetic, but I’ve never been completely sold. Also, I notice that anonymous commentator #3 said something similar.)
Three points worth mentioning in response:
1. Most of the people best-known for worrying about AI risk aren't primarily computer scientists. (Personally, I've been surprised by the number of physicists.)

2. 'It's self-serving to think that earning to give is useful' seems like a separate claim from 'it's self-serving to think AI is important.' Programming jobs obviously pay well, so no one objects to people following the logic from 'earning to give is useful' to 'earning to give via programming work is useful'; the question there is just whether earning to give itself is useful, and that question has little to do with AI. (More generally, 'technology X is a big deal' will frequently imply both 'technology X poses important risks' and 'knowing how to work with technology X is profitable,' so it isn't surprising to find those beliefs going together.)

3. If you were working in AI and wanted to rationalize 'my current work is the best way to improve the world,' AI risk would be just about the worst way imaginable to rationalize that conclusion: accelerating general AI capabilities is very unlikely to be a high-EV response to AI risk as things stand today, and the technical work involved in AI safety research often requires skills and background that are unusual for CS/AI. (Ryan Carey has written: "The problem here is that AI risk reducers can't win. If they're not computer scientists, they're decried as uninformed non-experts, and if they do come from computer scientists, they're promoting and serving themselves." But the bigger problem is that the latter doesn't make sense as a self-serving motive.)
Except that on point 3, the policies being advocated and the strategies being tried don't look like attempts to reduce existential risk; they look like attempts to get AI to work rather than backfire.