I’m not sure why you chose to frame your comment in such an unnecessarily aggressive way, so I’m just going to ignore that and focus on the substance.
Yes, the Studio Ghibli example is representative of AI decentralizing power:
Previously, only a small group of people had a particular ability (to make good art, diagnose illnesses, translate a document, do contract review, sell a car, drive a taxi, etc.)
Now, due to a large tech company (e.g. Google, Uber, AirBnB, OpenAI), everyone who could before still can, and ordinary people now can as well. This is a decentralization of power.
The fact that this was not due to an ideological choice made by AI companies is irrelevant. Centralization and decentralization often occur for non-ideological reasons.
The fact that things might change in the future is also not relevant. Yes, maybe one day Uber will raise prices to twice the level taxis used to charge, with four times the wait time and ten times the odor. But for now, they have helped decentralize power.
The producers who are now subject to increased competition are, unsurprisingly, upset. For fairly nakedly self-interested reasons, they demand regulation.
Ideological leftists provide rhetorical ammunition to the rent-seekers, in classic Bootleggers and Baptists style.
These demands for regulation affect four different levels of the power hierarchy:
The government (most powerful): increases power
Tech platform: reduces power
Incumbent producers: increases power
Ordinary people (least powerful): reduces power
Because leftists focus on the second and third bullet points, characterizing the conflict as a battle between small artisans and big business, they falsely claim to be pushing for power to be decentralized.
But actually they are pushing for power to be more centralized: from tech companies towards the leviathan, and from ordinary people towards incumbent producers.
Thanks for providing more detail into your views.
Really sorry to hear that my comment above came off as aggressive. It was very much not meant that way. One mistake on my end is that I read the comments above too quickly; that was my bad.
In terms of the specifics, I find your longer take interesting, though, as I’m sure you expect, I disagree with a lot of it. It seems we each bring a lot of important background assumptions to this topic.
I agree that there are a bunch of people on the left pushing for many bad regulations and ideas here. But I think that, at the same time, some of them raise genuinely good points (e.g. paranoia about power consolidation).
I feel like it’s fair to say that power is complex. Things like ChatGPT’s AI art will centralize power in some ways and decentralize it in others. On one hand, it’s very much true that many people can create neat artwork that they couldn’t before. But on the other, a bunch of key decisions and influence are being put into the hands of a few corporate actors, particularly ones with histories of being shady.
I think that some forms of IP protection make sense. This conversation gets much messier when it comes to LLMs, for which there just aren’t good laws yet on how to adjust for them. I’d hope that future artists who come up with innovative techniques could have meaningful ways of being compensated for their contributions, and that writers and innovators could similarly get certain kinds of credit and rewards for their work.
The government can be democratically elected, you idiot. Ordinary people elect the government and comprise the people making art. Corporations are oligarchical, and ordinary people have effectively no control over their governance. Democratic power concentrated in one government is more decentralized than oligarchical power concentrated in ten corporations. Ideally power would be diffused to everyone, but creating a system where all ownership of AI is cooperative is only possible by first destroying the oligarchy that currently controls it.
This is self-evident to anyone with any political education using sources more recent than 1700, and honestly even the original Leviathan recognizes that corporations are a threat. The conclusion that governments were the greater threat came before hundreds of years of capital accumulation and the democratization of the West.
This is why the left is so completely done with you clowns. If the AI risk profiles being advocated are accurate, and you’re still ignorant of how centralization of power works in a democracy, still so ignorant of the threat of corporate oligarchy despite the active destruction of America by corporate oligarchs, then nuclear war is preferable to letting the current batch of EAs write the future of AI; you’ll enforce a hereditary feudalism by accident, should you ever manage to wrangle the god you’re building. That you idiots couldn’t predict the future of AI even if you had a god AI is the only solace.
And before you snowflakes lock this out via moderation: what can be destroyed by the truth should be. That includes the ego of your membership.
Usually we are the ones accused (not always unfairly, to be honest, given Yudkowsky’s TIME article) of being so fanatical we’d risk nuclear war to further our nefarious long-term goals. The claim that nuclear war is preferable to us is novel, at least.