Because you are so strongly pushing a particular political perspective on Twitter (roughly, tech right = good), I worry that your bounties are mostly just you paying people to say things you already believe about those topics. Insofar as you mean to persuade people on the left/centre of the community to change their views on these topics, maybe it would be better to do something like make the bounties conditional on people who disagree with your takes finding that the investigations move their views in your direction.
I also find the use of the phrase "such controversial criminal justice policies" a bit rhetorical-dark-artsy and mildly incompatible with your calls for high intellectual integrity. It implies that a strong reason to be suspicious of Open Phil's actions has been given. But you don't really think the mere fact that a political intervention on an emotive, polarized topic is controversial is actually particularly informative about it. Everything on that sort of topic is controversial, including the negation of the Open Phil view on the US incarceration rate. The phrase would be ok if you were taking a very general view that we should be agnostic on all political issues where smart, informed people disagree. But you're not doing that; you take lots of political stances in the piece: de-regulatory libertarianism, the claim that environmentalism has been net negative, and Dominic Cummings can all accurately be described as "highly controversial".
Maybe I am making a mountain out of a molehill here. But I feel like rationalists themselves often catastrophise fairly minor slips into dark arts like this, treating them as strong evidence that someone lacks integrity. (I wouldn't say anything as strong as that myself; everyone does this kind of thing sometimes.) And I feel like if the NYT referred to AI safety as "tied to the controversial rationalist community" or to "highly controversial blogger Scott Alexander", you and other rationalists would be fairly unimpressed.
More substantively (maybe I should have started with this, as it is a more important point), I think it is extremely easy to imagine the left/Democrat wing of AI safety becoming concerned with AI concentrating power, if it hasn't already. The entire techlash, anti-"surveillance capitalism", "the algorithms push extremism" thing from left-leaning tech critics is, ostensibly at least, about the fact that a very small number of very big companies have acquired massive amounts of unaccountable power to shape political and economic outcomes. More generally, the American left has, I keep reading, been on a big anti-trust kick recently. The explicit point of anti-trust is to break up concentrations of power. (Regardless of whether you think it actually does that, that is how its proponents perceive it. They also tend to see it as "pro-market"; remember that Warren used to be a libertarian Republican before she was on the left.) In fact, Lina Khan's desire to do anti-trust stuff to big tech firms was probably one cause of Silicon Valley's rightward shift.
It is true that most people with these sorts of views are currently very hostile to even the left wing of AI safety, but lack of concern about X-risk from AI isn't the same thing as lack of concern about AI concentrating power. And eventually the power of AI will be so obvious that even these people will have to concede that it is not just fancy autocorrect.
It is not true that all people with these sorts of concerns only care about private power and not the state, either. Dislike of Palantir's nat sec ties is a big theme for a lot of these people, and many of them don't like the nat sec-y bits of the state very much either. Also, a relatively prominent part of the left-wing critique of DOGE is the idea that it's the beginning of an attempt by Elon to seize effective personal control of large parts of the US federal bureaucracy, by taking over the boring bits that actually move money around. In my view people are correct to be skeptical that Musk will ultimately choose decentralising power over accumulating it for himself.
Now, strictly speaking, none of this is inconsistent with your claim that the left wing of AI safety lacks concern about concentration of power, since virtually none of these anti-tech people are safetyists. But I think it still matters for predicting how much the left wing of safety will actually concentrate power, because future co-operation between them and the safetyists against the tech right and the big AI companies is a distinct possibility.
I worry that your bounties are mostly just you paying people to say things you already believe about those topics
This is a fair complaint, and roughly the reason I haven't put out the actual bounties yet: I'm worried that they're a bit too skewed. I'm planning to think through this more carefully before I do; okay to DM you some questions?
I think it is extremely easy to imagine the left/Democrat wing of AI safety becoming concerned with AI concentrating power, if it hasn't already
It is not true that all people with these sorts of concerns only care about private power and not the state either. Dislike of Palantir's nat sec ties is a big theme for a lot of these people, and many of them don't like the nat sec-y bits of the state very much either.
I definitely agree with you with regard to corporate power (and see dislike of Palantir as an extension of that). But a huge part of the conflict driving the last election was "insiders" versus "outsiders", to the extent that even historically Republican insiders like the Cheneys backed Harris. And it's hard for insiders to effectively oppose the growth of state power. For instance, the "govt insider" AI governance people I talk to tend to be the ones most strongly on the "AI risk as anarchy" side of the divide, and I take them as indicative of where other insiders will go once they take AI risk seriously.
But I take your point that the future is uncertain and I should be tracking the possibility of change here.
(This is not a defense of the current administration; it is very unclear whether they are actually effectively opposing the growth of state power, seizing it for themselves, or just flailing.)
Yeah, this feels particularly weird because, coming from that kind of left-libertarian-ish perspective, I basically agree with most of it, but also every time he tries to talk about object-level politics it feels like going into the bizarro universe and I would flip the polarity of the signs of all of it. That's an impression I have of @richard_ngo's work in general, him being one of the few safetyists on the political right to not have capitulated to accelerationism-because-of-China (as most recently even Elon did). Still, I'll try to see if I have enough things to say to collect bounties.
him being one of the few safetyists on the political right to not have capitulated to accelerationism-because-of-China (as most recently even Elon did).
Thanks for noticing this. I have a blog post coming out soon criticizing this exact capitulation.
every time he tries to talk about object-level politics it feels like going into the bizarro universe and I would flip the polarity of the signs of all of it
I am torn between writing more about politics to clarify, and writing less about politics to focus on other stuff. I think I will compromise by trying to write about political dynamics more timelessly (e.g. as I did in this post, though I got a bit more object-level in the follow-up post).
I think this is a valid concern, but I think it's important to note that if Richard were a left-winger, this same concern wouldn't be there.
Depends how far left. I'd say centre-left views would get less pushback, but not necessarily further-left ones. But yeah, fair point that there is a standard set of views in the community that he is somewhat outside.
I think it is extremely easy to imagine the left/Democrat wing of AI safety becoming concerned with AI concentrating power, if it hasn't already
To back this up: I mostly peruse non-rationalist, left-leaning communities, and this is a concern in almost every one of them. There is a huge amount of concern and distrust of AI companies on the left.
Even AI-skeptical people are concerned about this: AI that is not "transformative" can still concentrate power. Most lefties think that AI art is shit, but they are still concerned that it will cost people jobs; this is not a contradiction, as taking jobs does not mean AI needs to be better than you, just cheaper. And if AI does massively improve, this is going to make them more likely to oppose it, not less.
AI art seems like a case of power becoming decentralized: before this week, few people could make Studio Ghibli art. Now everyone can.
Edit: Sincere apologies. When I read this, I read through the previous chain of comments quickly, and missed the importance of AI art specifically in titotal's comment above. This makes Larks' comment more reasonable than I assumed. It seems like we do disagree on a bunch of this topic, but much of my comment wasn't correct.
---
This comment makes me uncomfortable, especially with the upvotes. I have a lot of respect for you, and I agree with this specific example. I don't think you were meaning anything bad here. But I'm very skeptical that this specific example is really representative in a meaningful sense.
Often, when one person cites one and only one example of a thing, they are making an implicit argument that this example is decently representative. See the Cooperative Principle (I've been paying more attention to this recently). So I assume readers might take this as, "Here's one example; it's probably the main one that matters. People seem to agree with the example, so they probably agree with the implication from it being the only example."
Some specifics that come to my mind:
- In this specific example, it arguably makes it very difficult for Studio Ghibli to have control over a lot of their style. I'm sure that people at Studio Ghibli are very upset about this. Instead, OpenAI gets to make this accessible, but this is less an ideological choice than something that's clearly commercially beneficial for OpenAI. If OpenAI wanted to stop this, it could (at least until much better open models come out). More broadly, it can be argued that a lot of forms of power are being brought from media groups like Studio Ghibli to a few AI companies like OpenAI. You can definitely argue that this is a good or bad thing on net, but I think this is not exactly "power is now more decentralized."
- I think it's easy to watch the trend lines and see where we might expect things to change. Generally, startups are highly subsidized in the short term. Then they eventually "go bad" (see Enshittification). I'm absolutely sure that if/when OpenAI has serious monopoly power, they will do things that will upset a whole lot of people.
- China has been moderating the ability of their LLMs to say controversial things that would look bad for China. I suspect that the US will do this shortly. I'm not feeling very optimistic about Elon Musk with X.AI, though that is a broader discussion.
On the flip side of this, I could very much see it being frustrating, as in: "I just wanted to leave a quick example. Can't there be some way to enter useful insights without people complaining about a lack of context?"
I'm honestly not sure what the solution is here. I think online discussions are very hard, especially when people don't know each other very well, for reasons like this.
But in the very short term, I just want to gently push back on the implication of this example, for this audience.
I could very much imagine a more extensive analysis suggesting that OpenAI's image work promotes decentralization or centralization. But I think it's clearly a complex question, at the very least. I personally think that people broadly being able to do AI art now is great, but I still find it a tricky issue.
I'm not sure why you chose to frame your comment in such an unnecessarily aggressive way, so I'm just going to ignore that and focus on the substance.
Yes, the Studio Ghibli example is representative of AI decentralizing power:
Previously, only a small group of people had an ability (to make good art, or diagnose illnesses, or translate a document, or do contract review, or sell a car, or be a taxi driver, etc.)
Now, due to a large tech company (e.g. Google, Uber, AirBnB, OpenAI), everyone who used to be able to still can, and ordinary people can as well. This is a decentralization of power.
The fact that this was not due to an ideological choice made by AI companies is irrelevant. Centralization and decentralization often occur for non-ideological reasons.
The fact that things might change in the future is also not relevant. Yes, maybe one day Uber will raise prices to twice the level taxis used to charge, with four times the wait time and ten times the odor. But for now, they have helped decentralize power.
The group of producers who are now subject to increased competition are unsurprisingly upset. For fairly nakedly self-interested reasons they demand regulation.
Ideological leftists provide rhetorical ammunition to the rent-seekers, in classic Baptists-and-bootleggers style.
These demands for regulation affect four different levels of the power hierarchy:
- The government (most powerful): increases power
- Tech platform: reduces power
- Incumbent producers: increases power
- Ordinary people (least powerful): reduces power
Because leftists focus on the second and third bullet points, characterizing it as a battle between small artisans and big business, they falsely claim to be pushing for power to be decentralized.
But actually they are pushing for power to be more centralized: from tech companies towards the leviathan, and from ordinary people towards incumbent producers.
Thanks for providing more detail on your views.
Really sorry to hear that my comment above came off as aggressive. It was very much not meant like that. One mistake is that I too quickly read the comments above; that was my bad.
In terms of the specifics, I find your longer take interesting, though, as I'm sure you expect, I disagree with a lot of it. There seem to be a lot of important background assumptions on this topic that both of us hold.
I agree that there are a bunch of people on the left who are pushing for many bad regulations and ideas on this. But I think, at the same time, some of them raise certain good points (e.g. paranoia about power consolidation).
I feel like it's fair to say that power is complex. Things like ChatGPT's AI art will centralize power in some ways and decentralize it in others. On one hand, it's very much true that many people can create neat artwork that they couldn't before. But on the other, a bunch of key decisions and influence are being put into the hands of a few corporate actors, particularly ones with histories of being shady.
I think that some forms of IP protection make sense. I think this conversation gets much messier when it comes to LLMs, for which there just aren't good laws yet on how to adjust for them. I'd hope that future artists who come up with innovative techniques could get some significant ways of being compensated for their contributions. I'd hope that writers and innovators could similarly get certain kinds of credit and rewards for their work.
The government can be democratically elected. You idiot. Ordinary people elect the government and comprise the people making art. Corporations are oligarchical, and ordinary people have effectively no control over their governance. Democratic power concentrated in one government is more decentralized than oligarchical power concentrated in ten corporations. Ideally power would be diffused to everyone, but creating a system where all ownership of AI is cooperative is only possible through first destroying the oligarchy that currently controls it.
This is self-evident to anyone with any political education using sources more recent than 1700, and honestly even the original Leviathan knows that corporations are a threat. Its conclusion that governments were more of one came before hundreds of years of capital accumulation and democratization of the West.
This is why the left is so completely done with you clowns. If the AI risk profiles being advocated are accurate and you're still ignorant of how centralization of power works in a democracy, still so ignorant of the threat of corporate oligarchy, despite the active destruction of America by corporate oligarchs, then nuclear war is preferable to letting the current batch of EA write the future of AI; you'll enforce a hereditary feudalism by accident, should you ever manage to wrangle the god you're building. That you idiots couldn't predict the future of AI if you had a god AI is the only solace.
And before you snowflakes lock this out via moderation: what can be destroyed by the truth should be. That includes the ego of your membership.
Usually we are the ones accused (not always unfairly, to be honest, given Yudkowsky's TIME article) of being so fanatical we'd risk nuclear war to further our nefarious long-term goals. The claim that nuclear war is preferable to us is novel, at least.
Because someone else backed off needlessly instead of properly addressing this, let me tell you in no uncertain terms: this is disqualifyingly wrong. AI art is not only highly reduced in quality, it concentrates power in whoever owns the AI model. It's just ownership transfer through intellectual property theft, to the capital class who charges you a subscription for access to the model. That you can get a free taste is an advertisement, not a rebuttal of the economic model.
If the EA movement is not 100% agreed that AI art represents an example of centralization of power, you cannot be trusted with shepherding AI. The intellectual gap is too large to reasonably bridge; the community is rotten and beyond saving, acting merely as a tool to further AI misalignment.
This isn't really the best example to use, considering AI image generation is very much the one area where all the most popular models are open-weights and not controlled by big tech companies. Any attempt at regulating AI image generation would therefore mean concentrating power and antagonizing the free and open-source software community (something which I agree with OP is very ill-advised), and insofar as AI skeptics are incapable of realizing that, they aren't reliable.
I basically agree with this, with one particular caveat: the EA and LW communities might eventually need to fight/block open-source efforts due to issues like bioweapons, and it's very plausible that the open-source community will refuse to stop open-sourcing models even if there is clear evidence that they can immensely help with/automate biorisk. So while I think the fight was picked too early, I think the fighty/uncooperative parts of making AI safe might eventually matter more than is recognized today.
If you mean Meta and Mistral, I agree. I trust EleutherAI and probably DeepSeek not to release such models, though, and they're more centrally who I meant.