Great post! Some highlights [my emphasis in bold]:
Funnily enough, even though animal advocates do radical stunts, you do not hear this fear expressed much in animal advocacy. If anything, in my experience, the existence of radical vegans can make it easier for “the reasonable ones” to gain access to institutions. Even just within EAA, Good Food Institute celebrates that meat-producer Tyson Foods invests in a clean meat startup at the same time the Humane League targets Tyson in social media campaigns. When the community was much smaller and the idea of AI risk more fringe, it may have been truer that what one member did would be held against the entire group. But today x-risk is becoming a larger and larger topic of conversation that more people have their own opinions on, and the risk of the idea of AI risk getting contaminated by what some people do in its name grows smaller.
This, with the additional point that an AI Pause should be a much easier sell than animal advocacy, since it is each and every person’s life on the line, including the lives of the people building AI. No standing up for marginalised groups, altruism, or do-gooding of any kind is required to campaign for a Pause.
Much of the public is baffled by the debate about AI Safety, and out of that confusion, AI companies can position themselves as the experts and seize control of the conversation. AI Safety is playing catch-up, and alignment is a difficult topic to teach the masses. Pause is a simple, clear message the public can understand and get behind; it bypasses complex technical jargon and gets right to the heart of the debate: if AI is so risky to build, why are we building it?
Yes! I think a lot of AI Governance work loses sight of this, caught up in complicated regulation and in appeasing powerful pro-AI-industry actors and those who think the risk-reward balance favours reward.
Advocacy activities could be a big morale boost, if we’d let them. Do you remember the atmosphere of burnout and resignation after the “Death with Dignity” post? The feeling of defeat on technical alignment? Well, there’s a new intervention to explore! And it flexes different muscles! And it could even be a good time!
It’s definitely been refreshing to me to just come out and say the sensible thing. Bite the bullet of “if it’s so dangerous, let’s just not build it”. And this post itself is a morale boost :)
This, with the additional point that an AI Pause should be a much easier sell than animal advocacy, since it is each and every person’s life on the line, including the lives of the people building AI. No standing up for marginalised groups, altruism, or do-gooding of any kind is required to campaign for a Pause.
Too true! I can’t believe I forgot to mention this in the post!