It’s not obvious to me that AI workers would want a more cautious approach than AI shareholders, AI bosses, and so on. Whether they would seems to me to be the main crux behind whether unionisation would be net positive or net harmful.
Even if they were slightly more cautious than management, if they were less cautious than policymakers it could still be net negative due to unions’ lobbying abilities.
Granted, in principle you could also have a situation where they’re less cautious than management but more cautious than policymakers and it winds up being net positive, though I think that situation is pretty unlikely. Agree the consideration you raised is worth paying attention to.
I had explicitly considered this crux in drafting, and whether to state it. Stated that way, it becomes an empirical question of whether there is greater support for caution, or receptiveness to change, among the workers or among management.
I did not, because I now think the question is not whether AI workers are more cautious than AI shareholders, but whether AI firms in which unionised AI workers negotiate with AI shareholders would be more cautious. My answer to that question is yes.
Edit: to summarise, the question is not whether unions (in isolation) would be more cautious, but whether a system of management (and policymakers) bargaining with a union would be more cautious. And yes, it probably would be.
I’ve thought about this before and talked to a couple of people in labs about it. I’m pretty uncertain whether it would actually be positive. It seems possible that most ML researchers and engineers want AI development to go as fast as, or faster than, leadership does, whether because they’re excited about working on cutting-edge technology, want to change the world, or hold equity. I remember some articles about how people left Google for companies like OpenAI because they thought Google was too slow and cautious and had lost its “move fast and break things” ethos.
As you have said, there are examples of individuals who have left firms because they felt their company was too cautious. Conversely, there are individuals who have left for companies that prioritise AI safety.
If we zoom out and take the outside view, it is common for workers who form a union to take action to slow down or stop their work, or to improve safety. I do not know of an example of a union that has instead prioritised acceleration.
That’s a good point, although: 1) if people leave a company for one that prioritizes AI safety, there are fewer workers at all the other companies who feel as strongly, so a union is less likely to improve safety there; 2) it’s common for workers to take action to improve their own safety conditions, and much less common for them to take action on issues that don’t directly affect their work, such as air pollution or carbon pollution; and 3) if safety-inclined people become tagged as wanting to slow down the company in general, hiring teams will likely start filtering out many of the most safety-minded people.
Thanks for writing this; I’ve thought about this before, and it seems like an under-explored (or under-exploited?) idea.
Another point: even if ML engineers, software devs, etc. either could not be persuaded to unionize, or would push to accelerate AI development if they did, maybe other labour unions could still exert pressure. E.g., workers in the compute or hardware supply chain, or HR, cleaners, ops, and other non-technical staff who work at AI companies. Perhaps strong labour unions in sectors that are NOT obviously related to AI could be powerful here, e.g. via consumer boycotts (what if education union members committed to not spending money on AI products unless and until the companies producing them complied with certain safety measures?).
Some recent polls suggest that the idea of slowing down AI is already popular among US citizens (72% want to slow it down). My loose impression is also that (i) most union members and organizers are on the political left, and (ii) many on the left are already sceptical about AI, for reasons related to (un)employment, plagiarism (e.g., criticism of art AIs’ use of existing art), capitalism (tech too controlled by powerful interests), and algorithmic bias. So this might not be an impossible sell, if AI safety advocates communicate about it in the right way.
To your first paragraph: yes, I wonder how unionised the relevant sectors are in the countries that are bottlenecks in the compute supply chain (the Netherlands, Japan, and Taiwan). I don’t know enough about the efficacy of boycotts to comment on the union-led boycotts idea.
I’ve raised this in response to another comment, but I want to also address here the concern that workers who join a union would organise to accelerate the development of AI. I think that is very unlikely: the history of unions shows a strong tradition of prioritising safety and of slowing down or stopping work. I do not know of an example of a union that has instead prioritised acceleration, though there are probably some, and the picture gets greyer as you move into the worker self-management space.
Yeah I don’t have a strong opinion about whether they would accelerate it—I was just saying, even if some workers would support acceleration, other workers could work to slow it down.
One reason that developers might oppose slowing down AI is that it would put them out of work, wouldn’t it? (Or threaten to.) So if someone is not convinced that AI poses a big risk, or thinks that pausing isn’t the best way to address the risk, then lobbying to slow down AI development would be a big cost for no obvious benefit.
Something feels off about this article. It doesn’t really discuss what the AI workers might want or believe, or how to convince them that slowing down AI would delay or avoid the extinction of humanity.
Are you assuming a world where the risk of extinction from AGI is widely accepted among AI workers? (In that case, why are they still working on the thing that potentially kills everyone?) If the workers do not believe in (large) risks of extinction from AI, how do you want to recruit them into your union? That seems hard if you want to be honest about the union’s main goal.
I don’t think this is predicated on those assumptions.
My assumptions are:
1. AI workers who join a union are more likely to care about safety than AI workers who do not. That is because the history of unions suggests that unions promote a culture of safety.
2. Unionised AI workers will be more organised in influencing their workplace than non-unionised AI workers. That is because of their ability to coordinate collectively.
Therefore:
- Unionisation of AI workers would encourage a culture of safety.
- Furthermore, these unions could be in a position to implement AI safety policies.