AI Safety’s old approach of building relationships with AI labs has enabled labs to further scale the training and commercialisation of AI models.
As far as I can see, little actual pressure has been put on these labs by people in this community. I don’t think gentle protest statements count as pressure.
That’s not what animal welfare organisations do when applying pressure. They point out, in concrete terms, the deficiencies of the companies involved. They draft binding commitments for companies to wean themselves off harmful systems. And they ratchet up public pressure alongside the internal conversations, escalating their demands so that it makes sense for company leaders to make further changes.
My sense is that AI Safety people are often not comfortable confronting companies, and/or hold somewhat naive notions of what it takes to push for reforms on the margin.
If AI Safety funders could not even stomach supporting another community (creatives) to ensure existing laws are not broken, then we cannot rely on them to act to ensure future laws are not broken by the AI companies.
A common reaction in this community to any proposed campaign that would actually restrict the companies is that the leaders might no longer see us as being nice to them and no longer want to work with us. That implies we perceive the company leaders as holding the power in this relationship, and that we don’t want to cross them lest they drop us.
Companies whose start-up we supported have been actively eroding the chances of safe future AI for years now. And we are going to let them continue, because we want to “maintain” this relationship with them.
From a negotiation standpoint, this will not work out. We are not building the leverage that would make company leaders actually consider stopping scaling. They will pay lip service to “extinction risks” and then bulldoze over our wishes that they slow down.
The default is that the AI companies keep scaling and succeed in deploying very harmful and long-term dangerous systems that become integrated into our economy.
What do you want to do? Keep following the old approach of trying to make AI labs more safety conscious (with some pause advocacy thrown in)?
I agree that the old approach didn’t work. It was too focused on the inside game. We need to combine the two, as we need both pressure to take action and the ability to direct it in a productive way.
(Update: just to clarify, it made more sense to focus only on the inside game when EA was smaller and it looked extremely challenging to convince the public to worry or pay attention. Circumstances have changed since then in ways that increase the importance of the outside game.)
Great, we agree there then.
Questions this raises:
How much can supporting other communities to restrict data laundering, worker exploitation, unsafe uses, pollutive compute, etc., slow or restrict AI development? (e.g. restrict data laundering by supporting lawsuits against unlawful text and data mining (TDM) in the EU and state attorney actions against copyright violations in the US)
How much should we work to support other communities to restrict AI development in different areas (“outside game”) vs. working with AI companies to slow down development or “differentially” develop (“inside game”)?
How much are we supporting those other communities now?