A failure mode of the lawsuit / anti-job-loss campaigning route is that the potential solution most people might be happy with (i.e. paying creatives fairly via royalties, or a UBI plus a share of profits equal to or greater than people’s lost income) is not a solution to x-risk. The only solution that covers both is an indefinite global pause on frontier AI development. This is what we need to be coordinating on.
That said, I think the current lawsuits are great, and hope that they manage to slow the big AI companies down. But on the margin—especially considering short timelines and the length of time legal proceedings take—I think resources need to be directed toward a global indefinite moratorium on AGI development.
A UBI for creatives is absolutely unrealistic, in my opinion, given how these companies compete for profit (and lobby and dodge taxes to keep it). See also the trend of rising income inequality between workers and the owners/managers of the businesses they work for.
Even then, creatives don’t just want to be paid money. They want to be asked for consent before a company can train a model on their works.
In practice, what the creatives I talk with want (no large generative models regurgitating their works) is locally aligned with what AI Safety people want.
~ ~ ~
The concern I have with the global pause framing is that it requires somehow first coordinating globally with all the relevant AI companies, government offices, etc. before we can start restricting AI model development.
That turns this into an almost intractable global coordination problem.
It’s adding in a lot of dependencies (versus supporting communities now to restrict the exploitation of data and workers, and the uses and compute of AI).
Instead of hacking away at the problem of restricting AI development in increments, you are betting all your chips on all these (self-interested) policy and corporate folks getting together, e.g. in a Paris Agreement-style conference, and actually agreeing on and enforcing strict restrictions.
It is surprising to me that you are concentrating your and others’ efforts on such a lengthy global governance process, given that you predict short timelines to “AGI”. It feels like a Hail Mary to me.
I would suggest focussing on supporting parallelised actions now across the board, including but not limited to filing lawsuits.
I’m saying you can make meaningful progress by supporting legal actions now. Climate groups have filed hundreds of lawsuits over the years and have made meaningful progress with them. I’m sure you are also familiar with Rupert Read’s work on supporting local groups to chip away at climate problems through the Climate Majority Project.
The UBI would probably be government mandated, down to political action from people losing their jobs (e.g. via a politician like Andrew Yang gaining power).
> The concern I have with the global pause framing is that it requires somehow first coordinating globally with all the relevant AI companies, government offices, etc. before we can start restricting AI model development.
> That turns this into an almost intractable global coordination problem.
I don’t actually think it’s less tractable than legal cases successfully taking down OpenAI (and it could well be quicker). It’s just on a different scale. We don’t need to coordinate the companies. That route has already failed imo (but there are plenty of people in AIS and EA still trying). We just need to get governments to enact laws regulating general purpose AI (especially: enforced compute and data limits to training). Quickly. Then get them to agree international non-proliferation treaties.
> I’m saying you can make meaningful progress by supporting legal actions now. Climate groups have filed hundreds of lawsuits over the years and have made meaningful progress with them.
The problem with legal action is how slow it is. We have to do this 10x quicker than with climate change. Is there any prospect of any of the legal cases concluding in the next 6-12 months[1]? If so, I’d chip in.
Presumably there will be all manner of appeals and counter-appeals, to the point where OpenAI are already quite confident that they can kick the can down the road to beyond the singularity before they are actually forced to take any action.
> The UBI would probably be government mandated, down to political action from people losing their jobs (e.g. via a politician like Andrew Yang gaining power).
This really is not realistic. It feels like we are discussing moves in a board game here, rather than what has actually happened so far economically and politically.
- Even if you have a lot of people worried about losing their jobs, they would also be much more entangled in and divided by the very technology put out by Big Tech companies. That makes it increasingly hard for them to coordinate around a shared narrative like ‘we all want a UBI’.
- Politicians too would be lobbied by and sponsored by Big Tech groups. And given how the US voting system tends to converge on two party blocs, independents like Andrew Yang would not stand a chance.
Both are already happening.
Furthermore, I think much of the use of AI would be for the extraction of capital from the rest of society, and quite a lot of the collective culture and natural ecosystems that our current economic transactions depend on would get gradually destroyed in the process. Some of that would be offset economically by AI automation, but overall the living conditions people experience would be lower (and even a UBI would not be able to offset that: you can’t buy yourself out of a degraded culture and a toxified ecosystem).
So even if politicians increasingly captured by Big Tech magically had the political will to just give everyone a universal basic income, I doubt the AI corporations would have enough free capital to be taxed for it.
> We don’t need to coordinate the companies. That route has already failed imo
Agreed.
> We just need to get governments to enact laws regulating general purpose AI (especially: enforced compute and data limits to training). Quickly. Then get them to agree international non-proliferation treaties.
“Quickly” does not match my understanding of how global governance can work. Even if you have a small percentage of the population somewhat worried about abstract AI risks, it’s still going to take many years.
Look at how many years it took to get to a nuclear nonproliferation treaty (from 1946 to 1968). And there, citizen groups could actually see photos and videos (and read/listen to corroborated stories) of giant bombs going off.
> The problem with legal action is how slow it is. We have to do this 10x quicker than with climate change.
Yeah, and parallelised lawsuits would each involve a tiny number of stakeholders and follow established processes for coming to decisions.
Again, why would you expect that not to take much longer for efforts at global governance? I get how a US president could in theory write a decree, but in practice the amount of consensus you have to reach between stakeholders, and the diligent offsetting against corporate lobbying required, is staggering.
> especially: enforced compute and data limits to training
Agreed. And limits on output bandwidth/intensity. And bans on non-human-in-the-loop systems.
I think that you and I are actually pretty much in agreement on what we are working toward. I think we disagree on the means of getting there, and on something around empowering people to make choices from within their own contexts.