Great question that prompted a lot of thinking. I think my internal model looks like this:
On the meta level, it feels as if EAs have a systematic error in their models: they underestimate public distrust of EA actions, and that distrust constrains both the action space and our collective sense-making of the world.
I think legacy media organisations buy into this framing solidly, especially organisations whose role is policing others, such as the CJR (Columbia Journalism Review).
Just in my own life, I've noticed that many of the "elite"-sphere friends I have, at Ivies, in competitive debating, etc., are much more apprehensive towards EA and AI Safety discourse in general, and they attribute this to the frame. Specifically, I think the relevant idea from policy debate is inherency: people look for frames that explain the underlying barrier to change and the motivation for it.
I think this is directly bad for cooperation on the governance side (e.g. a lot of the good research on timelines and regulation is currently being done by people with AI Ethics sympathies).
I think EAs underestimate how many technically gifted people who could be doing technical research are put off by EAs who throw around philosophical ideas ungrounded in technical acumen. This frame neatly compounds that aversion.
I am unsure that "AI x-risk as a distraction" is a big deal. What are their policy proposals, and which major actors actually use this frame?