We are fighting a shared battle (a call for a different approach to AI Strategy)

Disclaimer 1: The following essay doesn’t purport to offer many original ideas, and I am certainly not an expert on AI Governance, so please don’t take my word on these things too seriously. I have linked sources throughout the text, and point to some similar pieces later on, but this should merely be treated as another data point of people saying very similar things; far smarter people than I have written on this.

Disclaimer 2: This post is quite long, so I recommend reading the sections “A choice not an inevitability” and “It’s all about power” for the core of my argument.

My argument is essentially as follows: under most plausible understandings of how harms arise from very advanced AI systems, be these AGI, narrow AI or systems somewhere in between, the actors responsible, and the actions that must be taken to reduce or avert the harm, are broadly similar whether you care about existential or non-existential harms from AI development. I will then go on to argue that this calls for a broad, coalitional politics of people who vastly disagree on the specifics of AI systems’ harms, because we essentially have the same goals.

It’s important to note that calls like these have been made before. Whilst I will be making a slightly different argument, Prunkl & Whittlestone, Baum, Stix & Maas and Cave & Ó hÉigeartaigh have all made arguments attempting to bridge near-term and long-term concerns. In general, these proposals (with the exception of Baum) have called for narrower cooperation between ‘AI Ethics’ and ‘AI Safety’ than I will, and are all considerably less focused on the common source of harm. None go as far as I do in suggesting that all the key forms of harm we worry about are instances of the same phenomenon of power concentration in and through AI. These pieces are in many ways more research-focused, whilst mine is considerably more politically focused. Nonetheless, there is considerable overlap in spirit: identifying that the near-term/ethics and long-term/safety distinction is overemphasised and not as analytically useful as it is made out to be, and intending to reconcile the two factions for mutual benefit.

A choice not an inevitability

At present, there is no AI inevitably coming to harm us. Those AIs that do harm will have been given capabilities, and the power to cause that harm, by their developers. If the AI companies stopped developing their AIs now, and people chose to stop deploying them, then both existential and non-existential harms would stop. These harms are in our hands, and whilst the technologies clearly act as important intermediaries, ultimately it is a human choice, a social choice, and perhaps most importantly a political choice to carry on developing more and more powerful AI systems when such dangers are apparent (or merely plausible or possible). The attempted development of AGI is far from value-neutral, far from inevitable and very much in the realm of legitimate political contestation. Thus far, we have simply accepted the right of powerful tech companies to decide our future for us; this is both unnecessary and dangerous. Our current acceptance of the right of companies to legislate for our future is historically contingent. Corporate power has been curbed in the past, from colonial-era companies to Progressive Era trust-busting to postwar Germany and more, and it could be curbed again. Whilst governments have often taken a leading role, civil society has also been significant in curbing corporate power and technology development throughout history. Acceptance of corporate dominance is far from inevitable.

I also think it’s wrong to just point the finger at humanity, as if we are all complicit in this. In reality, the development of more and more dangerous AI systems is essentially driven by a very small number of corporate actors (often propped up by a very small number of individuals supporting them). OpenAI seem committed to shortening timelines as much as possible, had half their safety team leave to form another company in response to their lax approach to safety, and appear to see themselves as empowered to risk all of humanity because they cast themselves as its saviours. Google sacked prominent members of its ethics team for speaking out on the dangers of LLMs. Microsoft sacked an entire ethics team, despite the ethical issues that putting more AI in their products has brought. None of this seems like the behaviour of companies that have earned the trust that society (implicitly) gives them.

It’s all about power

Ultimately, the root cause of the harms that AI causes (whether or not you think they are existential) is the power that AI companies have to make unilateral decisions that affect large swathes of humanity without any oversight. They certainly don’t paint their actions as political, despite clearly attempting to gain power and guide the course of humanity from their ivory towers in Silicon Valley. They feel empowered to risk huge amounts of harm (e.g. Bender and Gebru et al, Birhane, Carlsmith, Yudkowsky) by developing more powerful AI systems, partially because there is little political opposition despite growing public worry. Whilst there is some mounting opposition to these companies’ unsafe deployments (activism, legislation, hardware control), there is much further to go, in particular in restricting research into advanced AI.

If we see it like this, whether AI is closer to a stupid ‘stochastic parrot’ or on the ‘verge of superintelligence’ doesn’t really matter; whichever world we are in, it’s the same processes and actors that ultimately generate the harm. The root cause is the same irrespective of what you think the risk is: powerful, unaccountable, unregulated AI companies, facing little opposition, playing fast and loose with risk and social responsibility in pursuit of utopian and quasi-messianic visions of their mission. As capitalists like Sam Altman take in their private benefits, the rest of humanity is left bearing the risk they place on all of us. Of course, these risks and harms aren’t placed on everyone equally, and as always, more harm is done to the less powerful and privileged (in healthcare, on women’s working lives, facial recognition etc.); but nonetheless, these AI companies are happy to run roughshod over the rights of everyone in order to pursue their own ends. They have their vision of the future, and are happy to impose significant risk on the rest of humanity to achieve it without our consent. Many of the researchers helping these companies think what they are doing has a high probability of being extremely bad, and yet they carry on!

Irrespective of which sort of dangers we worry about, it’s clear who we need to worry about: the AI companies, chiefly (although not exclusively) OpenAI and DeepMind. Whether you care about ‘AI Ethics’ or ‘AI Safety’, no matter what type of harms you worry about, if you look at the issue politically the source of the harms looks the same. It’s clear who has power and is trying to gain more, and it is clear that everyone else is put at extreme risk. If the problem is power, then we ought to fight it with power. We cannot merely allow these powerful actors to imagine and create futures for us, crowding out alternatives; we need to build coalitions that give us the power to imagine and create safer and more equitable futures.

Thus, the importance of distinguishing between existential and non-existential harms starts to dissolve, because both are potential hugely negative consequences of the same phenomenon, with similar political solutions: slow down or stop companies trying to develop AGI and other risky ‘advanced AI’ systems. If we buy this, then the strategy needs to be much broader than the current status quo in the ‘AI XRisk’ community of merely empowering a narrow range of ‘value-aligned’ individuals to research ‘technical alignment’ or even friendly technocratic ‘Existential AI Governance’. (I’m not saying this is bad, far from it, or that it shouldn’t be hugely expanded, but it is very, very far from sufficient.) Rather, it likely looks like bringing together coalitions of actors with perhaps different underlying ethical concerns, but the same political concern: that the growing unaccountable power of dangerously hubristic AI companies needs to be curbed. It requires building coalitions to engage in the politics of technology, imagining futures we can perform into existence, and asserting power to challenge the inherently risky pathways these AI companies want to put us on.

It’s important to note that this isn’t to say we can’t do AI research. But the current type of research, moving towards larger and larger models with less accountability for the companies, aiming for more and more general systems with more destructive capabilities, under almost no regulatory oversight, is simply not a viable, safe pathway forward. There is good reason to think that, within our current paradigm and political structures, AGI development may be inherently dangerous; this is a demon we ought not to make. If this recklessness is synonymous with innovation, then those dreaming of innovations have lost a lot of their spark.

In whatever world we are in, putting AIs ‘in charge’ of powerful systems is dangerous

Whether we are in the ‘stochastic parrot’ or ‘verge of superintelligence’ world, giving AIs power is deeply dangerous. ‘Stupid’ AIs are already causing fatalities, reinforcing biases, and causing other harms, all of which will likely get worse if they are given more power. ‘Stupid’ systems could even cause harm of existential proportions, for example if they are integrated into nuclear command and control, or used to make more powerful new biochemical weapons. Superintelligent AIs, if given power, could similarly cause tremendous amounts of harm, scaling up to existential harm. It’s also important to note that AIs needn’t be agents in the typical, anthropomorphised sense for it to be useful to describe them as ‘having power’, and that is the sense I mean here.

Once again, unaccountable, opaque ‘machine power’ generally increases the harm that can be done and reduces society’s ability to respond to that harm as systems become entrenched and remake the social world we live in, which is incredibly dangerous. And once again, these harms are often imposed on the rest of the world, without our consent, by companies, militaries and governments looking to rely on AI systems, normally due to hype from the same few AGI companies. In this way, irrespective of the world we are in, hype is dangerous, because it provides the dangerously risky AI companies with more power, which they almost certainly use to impose risks of unacceptable harm on the world’s population.

In whatever world we are in, AGI research is dangerous

If we are in the ‘stochastic parrot’ world, research into AGI is used as an excuse and a fig leaf to hide the enormous harms imposed by dangerously stupid AI systems. In this world, AGI research serves to increase the power of a few unaccountable, powerful companies and causes harm to the rest of us, whilst failing to deliver on its promises. By controlling visions of the future, actors gain control over the present. Visions of utopia allow more mundane harms to be ignored, with these companies given a free pass.

If we are in the ‘verge of superintelligence’ world, research into AGI is flirting with the apocalypse in a way that is unacceptably dangerous. Stories of the inevitability of AGI development serve as excuses for those who care little about the existential risk that developing these systems could bring, compared to their desire to impose their vision of a glorious future upon mankind.

There may be a counterargument here: that research isn’t dangerous, but deployment is, so it is deployment we need to regulate, not research. I think this is flawed in both worlds. In the ‘stochastic parrot’ world, even with regulated deployment, unrestricted research is likely to create a slippery slope to deployment (as worried about in geoengineering, for example), where research enables the AI companies to gain financial, intellectual and discursive power in a way that makes dangerous deployment of technologies much more likely. And in the ‘verge of superintelligence’ world, developing powerful doomsday devices is probably already an unacceptable risk, no matter how strict the governance of deployment is. Even if we think our regulation of deployment is sound, governance mechanisms can break down, the existence of technologies can induce social changes that affect governance, and deceptive alignment is enough of a problem that it seems better simply never to try to develop these systems in the first place. Moreover, to suggest the problem doesn’t start with research fails to reckon with the risk of bad actors; whilst one could say that guns don’t murder, people do, had guns never been invented far fewer people would have been killed in violence than are now.

Why this is a shared battle

I hope the previous paragraphs have shown that whilst the disagreements between the AI Safety and AI Ethics crowds are significant, they are not massively analytically useful or core to understanding the key challenge we are facing. The relevant question isn’t “are the harms to be prioritised the existential ones or the non-existential ones?”, “will AIs be agents or not?”, nor “will AI be a stochastic parrot or a superintelligence?” Rather, the relevant question is whether we think that power accumulation and concentration in and through AI systems, at different scales of capability, is extremely risky. On this, I think we agree, and so whilst our scientific differences may be significant, in the realm of political analysis they aren’t. Ultimately, it is this power concentration that has the potential to cause harm, and it is this that we normatively care about.

Moreover, many of these surface-level disagreements also aren’t politically or strategically relevant: once we understand that the source of all these risks is a small group of AI companies recklessly forging ahead and concentrating power, it becomes much clearer that both communities in fact share interests in finding ways to (1) slow down or halt research; (2) counter and cool down AI hype; (3) spur much greater public and government scrutiny into whether (and if so, how) we want to develop advanced AI technologies.

What we gain from each other

This essay is framed as suggesting that the ‘AI Ethics’ and ‘AI Safety’ crowds can benefit each other. Thus far, I’ve mostly suggested that the AI Safety crowd should realise that even if the AI Ethics crowd were wrong to dismiss the importance of existential risks from AI, their core analysis is correct: power accumulation and concentration through and by AI, originating from a small number of powerful and unaccountable corporations, is the major cause of the threats we face. From this perspective, the AI Safety crowd probably should come and fight in the trenches with the AI Ethics people, realising that their identification of the core of the issue has been broadly correct, even if they underestimated how bad these corporations could make things. Moreover, the AI Ethics crowd seems to have been more effective at tempering AI hype, in contrast to the way the AI Safety crowd has potentially sped up AI development, so practically there may be significant benefit in collaboration.

However, I’m not sure the exchange here is so one-sided. I think the AI Safety community has a lot to offer the AI Ethics community as well. Technical AI Safety techniques, like RLHF or Constitutional AI, whilst potentially not very beneficial from an AI Safety perspective, seem to have had a meaningful impact on making systems more ethical. Moreover, the moral inflation and urgency that existential harms can bring seem to resonate with the public, and so may be very useful political tools if utilised to fight the companies rather than empower them. Intellectually, AI Safety provides much greater urgency and impetus for governing research and cutting the problem off at the source (which has been underexplored so far), a concern which would likely be more muted in AI Ethics discussions. By regulating these problems at the source, AI Ethics work can be made a lot easier and less reactive. Moreover, the AI Safety crowd’s focus on risks from systems that look vastly different from those we face now may be useful even if we never develop AGI; risks and harms will change in the future just as they have changed in the past, and anticipatory governance may be absolutely essential in reducing them. So even if one doesn’t buy my suggestion that we are on the same side of the most analytically relevant distinction, I hope that the insights and political benefits the two communities have to offer each other will be enough cause to find common ground and start working together.

Coalitional Politics

If one accepts my (not particularly groundbreaking) analysis that the ultimate problem is the power of the AI companies, how do we combat this? There are lots of ways, from narrow technocratic governance to broad political salience-raising, from ethics teams within corporations to broad governance frameworks, and many other approaches. Each of these is necessary and useful, and I don’t argue against any of them. Rather, I’m arguing for a broad, pluralistic coalition taking a variety of approaches to AI governance, with more focus than at present on raising the political salience of restricting AI research.

Given that AI Ethics and AI Safety people are actually concerned with the same phenomenon, harms arising from the unaccountable power that enables dangerously risky behaviour by a very small number of AI companies, we also have the same solution: take them on. Use all the discursive and political tools at our disposal to curb their power and hold them to account. We need a big tent to take them on. We need op-eds in major newspapers attesting to the dangerous power these AI companies have and the harms they are happy to risk. We need to (continue to) expose just how their messianic arrogance endangers people, and let the public see what these few key leaders have said about the world they are pushing us towards. We need to mobilise people’s worries so that politicians will react, establishing a culture against the unaccountable power of these AI companies. We need to show people across the political spectrum (even those we disagree with!) how this new power base of AI companies has no one’s interests at heart but its own, so no matter where you fall, they are a danger to your vision of a better world. There are nascent public worries around AGI and these AI companies; we just need to activate them through a broad coalition to challenge these companies’ power and wrest control of humanity’s future from them. Hopefully this can lay the groundwork for formal governance, and at the very least quickly create a political culture reflective of the degree of worry that ought to be held about these companies’ power.


There is nothing inevitable about technology development, and there is nothing inevitable about the status quo. In my ‘home field’ of Solar Geoengineering, coalitions considerably smaller, less well funded and less powerful than what we could build in the AI space in a few months have successfully halted technology development for at least the last decade. Similar coalitions have constrained GMOs in various regions of the world, nuclear energy, and nuclear weapons intended for peaceful purposes. There are enough reasons to oppose the development of AGI systems from the perspective of all sorts of worldviews and ethical systems to build such coalitions; this has successfully occurred in a number of the above examples, and it may be even easier in the context of AI. Some have tried to make a start on this (e.g. Piper, Marcus and Garner, Gebru etc.), but a larger and more diverse coalition trying to raise the political salience of curbing AI companies’ power is key. Bringing genuine restriction of these companies’ power into the Overton window, building coalitions across political divides to do this, building constituencies of people who care about regulating the power of AI companies, raising the salience of the issue in the media, and crafting and envisioning new futures for ourselves are all vital steps that can be taken. We can build a relevant civil society to act as a powerful counterbalance to corporate power.

This isn’t an argument to shut down existing narrow technocratic initiatives, or academic research presenting alternative directions for AI; rather, it is an argument that we need to do more, and do it together. There seems to be a gaping narrative hole (despite the admirable attempts of a few people to fill it) in pushing for a public political response to these AI companies. These discourses, social constructions and visions of the future matter to technology development and governance. They apply pressure and establish norms that guide near-term corporate decision making, government policy, and how society and the public relate to technology and its governance.

Urgency

I would also argue that this issue is urgent. Firstly, around ChatGPT, Bing/Sydney and now GPT-4, AI is experiencing a period of heightened political attention. Public and government attention is currently favourable, and plausibly as favourable as it will ever be, for a politics of slowing AGI development, and we are most powerful pushing for this together, rather than fighting and mocking each other in an attempt to gain political influence. This may be a vital moment in which coalitional politics can be a powerful lever for enacting change, where the issue is still malleable to political contestation, to the formation of a governance object, and to framing of how it could be solved; these are exactly the times when power can be asserted over governance, and so assembling a coalition may give us that power.

There is also a risk that if we don’t foster such a coalition soon, both of our communities get outmanoeuvred by a new wave of tech enthusiasts who are currently pushing very hard to accelerate AI, remove all content and alignment filters, open-source and disseminate all capabilities with little care for the harms caused, and more. Indeed, many tech boosters are beginning to paint AI ethics and AI risk advocates as two sides of the same coin. To counteract this movement, it is key for both communities to bury the hatchet and combat these plausibly rising threats together. Divided we fall.

So what does coalitional politics look like?

I think this question is an open one, something we will need to keep iterating on, learning by doing and generally working out together. Nonetheless, I will offer some thoughts.

Firstly, it involves trying to build bridges with people who we think have wrong conceptions of the harms of AI development. I hope my argument that the political source of harm looks the same has convinced you, so let’s work together to address it, rather than mocking, insulting and refusing to talk to one another. I understand that people from AI Safety and AI Ethics have serious personal and ethical problems with one another; that needn’t translate into political division. Building these bridges not only increases the number of people in our shared coalition, but also the diversity of views and thinkers, allowing new ideas to develop. This broad, pluralistic and diverse ecosystem will likely come not just with political benefits, but with epistemic benefits as well.

Secondly, it involves using every opportunity we have to raise the political salience of the power of AI companies. At present, we are at something of a moment of public attention towards AI; rather than competing with one another for attention and discursive control, we ought to focus on our common concern. Whether the impetus for regulating these companies comes from worries about the concentration of corporate power or about existential harms, it raises the salience of the issue and increases the pressure to regulate these systems, as well as the pressure on companies to self-regulate. We must recognise our shared interest in joining into a single knowledge network and work out how best to construct a governance object to achieve our shared ends. At the moment, there is a weird discursive vacuum despite the salience of AI. We can fill this vacuum, and this will be most effective if done together. Only by filling it can we create a landscape that allows the power of these corporations to be curbed. People are already trying to do this, but the louder and broader the united front against these companies is, the better.

Then, we need to try to create a culture that impresses upon political leaders and corporations, no matter where they fall politically, that these unaccountable companies have no right to legislate our future for us. We can drive this agenda and culture shift through activism, the media, the law, protests and political parties, as well as, more broadly, through how discourses and imaginaries are shaped in key fora (social media, traditional media, fiction, academic work, conversations); the power of discourse has long been recognised in the development and stabilisation of socio-technical systems. Democracy is rarely ensured by technocracy alone; it often requires large-scale cultural forces. Luckily, most people seem to support this!

We then need suggestions for policy and direct legal action to restrict the power and ability of these AI companies to do what they currently do. Again, luckily, these exist. Compute governance, utilising competition law, holding companies legally accountable for harmful outputs of generative AI (and, slightly more tangentially, platforms), supporting copyright suits and more seem like ways we can attack these companies and curb their power. Human rights suits may be possible. In general, there is an argument that the use of the courts is an important and underexplored lever to keep these companies accountable. Moreover, given the risks these companies themselves suggest they are imposing, other more speculative suits based on various other rights and principles, as has occurred in the climate context, may be possible. This is just part of a shopping list of policy and direct actions a broad coalitional movement could push for. People are already pushing for these things, but with the better organisation that comes with more groups, the ability to push some of them into practice may be significantly enhanced. With a broader, more diverse coalition, our demands get stronger.

Sure, this coalitional politics will be hard. Building bridges might sometimes feel like losing sight of the prize, as we focus on restricting the power of these agents of doom via other arguments and means alongside whatever each of our own most salient concerns is. It will be hard to form coalitions with people you feel very culturally different from. Ultimately, though, if we want to curb the ability of AI companies to do harm, we need all the support we can get, not just from those in one culture, but from those in many. I hope that many people in both AI Safety and AI Ethics, a lot of whom have already contributed so much to this fight, will take up this offer of coalitional politics at this potentially vital moment.

Acknowledgements: Matthijs Maas gave me substantial advice and help, despite substantive disagreements with aspects of the essay, and conversations with Matthijs Maas and Seth Lazar provided a lot of the inspiration for me to write this.