Holly—this is an excellent and thought-provoking piece, and I agree with most of it. I hope more people in EA and AI Safety take it seriously.
I might just add one point of emphasis: changing public opinion isn’t just useful for smoothing the way towards effective regulation, or pressuring AI companies to change their behavior at the corporate policy level, or raising money for AI safety work.
Changing public opinion can have a much more direct impact by putting social pressure on anybody involved in AI research, AI funding, AI management, and AI regulation. This was a key point in my 2023 EA Forum essay on moral stigmatization of AI, and the potential benefits of promoting a moral backlash against the AI industry. Given strong enough public opinion for an AI Pause, or against runaway AGI development, the public can put direct pressure on people involved in AI to take AI safety more seriously, e.g. by socially, sexually, financially, or professionally stigmatizing reckless AI developers.