I think you can stress the “ideological” implications of externalities to lefty audiences while taking a more neutral tone with more centrist or conservative audiences. The idea that externalities exist and require intervention is not, IMO, super ideologically charged.
That said, it’s risky to tie AI safety to one side of an ideological conflict. There are ways to frame AI safety as (partly) an externality problem without getting mired in a broader ideological fight.