I too have had the experience that most people tend to agree just fine with the case for AI risk when it's presented this way. But as far as I can tell, none of them have changed their actions (unless they were already into EA). Has your experience been different?
EDIT: To be clear, there's value in getting people to "I'm glad other people are working on this"—that seems to be how things become mainstream. But often I'm not just trying to chip away incrementally at making AI safety mainstream; I want to get some particular action, and in those cases I want to know what strategy I should use.