Perhaps this is a bit tangential to the essay, but we ought to make an effort to actually test the assumptions underlying different public relations strategies. The EA community could either build relationships with marketing companies that do focus-group testing of ideas, or develop its own expertise in this area, to test the relative success of various public-facing strategies (always keeping in mind that having just one public-facing strategy is a bad idea, because there is more than one type of person in 'the public').
Strongly agree!
I feel a bit sceptical of the caricature of focus-group testing that I have in mind. Our main audience in the AI context consists of fairly smart people, and you want to communicate the ideas through honest, high-bandwidth discussion. And with high-bandwidth communication, such as longer blog posts or in-person discussions, you usually get feedback through comments on whether the arguments make sense to readers.
I don't think the technical context is the only, or even the most important, context where AI risk mitigation can happen. My interpretation of Yudkowsky's doom view is that it is mainly a sociological problem (i.e. someone else will do the cool, super-profitable thing if the first company or research group hesitates) rather than a fundamentally technical one (i.e. that it would be impossible to figure out how to do it safely even if everyone involved moved very slowly).
Thanks, that's a really good point. Hmm, I might still believe that on the AI governance side as well, you'll want more high-bandwidth discussions tailored to somewhat niche audiences, such as specific government departments, think tanks, international organizations like the EU and the UN, and academic groups. I imagine each of them will find different specific framings convincing and others very off-putting, and that you find this out more quickly by working with them than by doing A/B testing on a more generic audience.