It is always appalling to see tech lobbying power shut down all the careful work done by safety people.
Yet the article highlights a very fair point: safety people have not succeeded at being clear and convincing enough about the existential risks posed by AI. Yes, it's hard, and yes, much of it rests on speculation. But that's exactly where the impact lies: building a consistent, pragmatic discourse about AI risks that is neither uselessly alarmist nor needlessly vague.
The state of the EA community is a good example of this. I often hear that yes, the risks are high, but which risks exactly, and how can they be quantified? Impact measurement is awfully vague when it comes to AI safety (and, to a lesser extent, AI governance).
It seems like the pivot towards AI Pause advocacy has happened relatively recently and hastily. I wonder if now would be a good time to step back and reflect on strategy.
Since Eliezer’s Bankless podcast, it seems like Pause folks have fallen into a strategy of advocating to the general public. This quote may reveal a pitfall of that strategy:
“I think the more people learn about some of these [AI] models, the more comfortable they are that the steps our government has already taken are by-and-large appropriate steps,” Young told POLITICO.
I hypothesize a “midwit curve” for AI risk concern:
At a low level of AI knowledge, members of the general public are apt to anthropomorphize AI models and fear them.
As a person acquires AI expertise, they anthropomorphize AI models less, and become less afraid.
Past that point, some folks become persuaded by specific technical arguments for AI risk.
It puzzles me that Pause folks aren't more eager to engage with informed skeptics like Nora Belrose, Rohin Shah, Alex Turner, Katja Grace, Matthew Barnett, etc. It seems like an ideal way to workshop arguments that are more robust and won't fall apart when the listener becomes more informed about the topic, or simply to identify the intersection of what many experts find credible. Why not more adversarial collaborations? Why relatively little data on the arguments and framings which persuade domain experts? Was the decision to target the general public a deliberate and considered one, or just something we fell into?
My sense is that some Pause arguments hold up well to scrutiny, some don’t, and you might risk undermining your credibility by making the ones which don’t hold up. I get the sense that people are amplifying messaging which hasn’t been very thoroughly workshopped. Even though I’m quite concerned about AI risk, I often find myself turned off by Pause advocacy. That makes me wonder if there’s room for improvement.