The fact that EAs have been so caught off guard by the "AI x-risk is a distraction" argument, and its stickiness in the public consciousness, should be worrying for how well calibrated we are to AI governance interventions working the way we collectively think they will. This feels like another Carrick Flynn situation. I might write up an ITT on the AI Ethics side; I think there's a good analogy to an SSC post that EAs generally like.
I'm unsure that "AI x-risk as a distraction" is a big deal. What are their policy proposals, and which major actors use this frame?
Great question that prompted a lot of thinking. I think my internal model looks like this:
On the meta level, it feels as if EAs have a systematic error in their models: they underestimate public distrust of EA actions, which constrains both the action space and our collective sense-making of the world.
I think legacy media organisations buy into the framing solidly, especially organisations whose role is policing others, such as the CJR (Columbia Journalism Review).
Just in my own life, I've noticed that a lot of my "elite"-sphere friends (at Ivies, in competitive debating, etc.) are much more apprehensive towards EA and AI Safety discourse in general, and they attribute this to the frame. Specifically, I think the idea of inherency from policy debating applies: people look for frames that explain the underlying barrier to, and motivation for, change.
I think this is directly bad for cooperation on the governance side (e.g. a lot of the good research on timelines and regulation is currently being done by people with AI Ethics sympathies).
I think EAs underestimate how many technically gifted people who could be doing technical research are put off by EAs who throw around philosophical ideas ungrounded in technical acumen. This frame neatly compounds that aversion.
I'd be very interested to read a post on your thoughts about this (though I'm not sure what 'ITT' means in this context?), and I'm curious which SSC post you're referring to.
I also want to say I'm not sure how universal the 'EAs have been caught so off guard' claim is. Some have been, sure, but plenty were hoping the AI risk discussion would stay out of the public sphere for exactly this kind of reason.
I always thought the typical model for "don't let AI Safety enter the mainstream" was something like: (1) you'll lose credibility and be called a loon, and (2) it'll drive race dynamics and salience. Instead, I think the argument AI Ethics makes is "these people aren't so much loons as they are just doing hype marketing for AI products in the status quo and draining counterfactual political capital from real near-term harms".
I think a bunch of people were hesitant about AI safety entering the mainstream because they feared it would severely harm the discussion climate around AI safety (and/or cause it to become a polarized left/right issue).