> The UBI would probably be government mandated, down to political action from people losing their jobs (e.g. via a politician like Andrew Yang gaining power).
This really is not realistic. It feels like we are discussing moves in a board game, rather than what has actually happened so far economically and politically.
- Even if you have a lot of people worried about losing their jobs, they would also be much more entangled in and divided by the very technology put out by Big Tech companies. That makes it increasingly hard for them to coordinate around a shared narrative like ‘we all want a UBI’.
- Politicians, too, would be lobbied and sponsored by Big Tech groups. And given how the US voting system tends to converge on two-party blocs, independents like Andrew Yang would not stand a chance.
Both are already happening.
Furthermore, I think much of the use of AI would be for the extraction of capital from the rest of society, and quite a lot of the collective culture and natural ecosystems that our current economic transactions depend on would get gradually destroyed in the process. Some of that would be offset economically by AI automation, but overall the living conditions that people experience would be lower (and even a UBI would not be able to offset that – you can’t buy yourself out of a degraded culture and a toxified ecosystem).
So even if politicians increasingly captured by Big Tech somehow mustered the political will to give everyone a universal basic income, I doubt the AI corporations would have enough free capital left to be taxed for it.
> We don’t need to coordinate the companies. That route has already failed imo.
Agreed.
> We just need to get governments to enact laws regulating general purpose AI (especially: enforced compute and data limits to training). Quickly. Then get them to agree international non-proliferation treaties.
“Quickly” does not match my understanding of how global governance can work. Even if you have a small percentage of the population somewhat worried about abstract AI risks, it’s still going to take many years.
Look at how many years it took to get to the Nuclear Non-Proliferation Treaty (from 1946 to 1968). And there, citizen groups could actually see photos and videos (and read/listen to corroborated stories) of giant bombs going off.
> The problem with legal action is how slow it is. We have to do this 10x quicker than with climate change.
Yeah, and parallelised lawsuits would each involve a tiny number of stakeholders and established processes for reaching decisions.
Again, why would you expect efforts at global governance not to take much longer? I get how a US president could in theory write a decree but, in practice, the amount of consensus between stakeholders and diligent offsetting against corporate lobbying you have to reach is staggering.
> especially: enforced compute and data limits to training
Agreed. And limits on output bandwidth/intensity. And bans on non-human-in-the-loop systems.
I think that you and I are actually pretty much in agreement on what we are working toward. I think we disagree on the means of getting there, and on something around empowering people to make choices from within their contexts.