Pausing AI might be good policy, but it’s bad politics
[Edit: I’ve updated this post on October 24 in response to some feedback]
NIMBYs don’t call themselves NIMBYs. They call themselves affordable housing advocates or community representatives or environmental campaigners. They’re usually not against building houses. They just want to make sure that those houses are affordable, attractive to existing residents, and don’t destroy habitat for birds and stuff.
Who can argue with that? If, ultimately, those demands stop houses from being built entirely, well, that’s because developers couldn’t find a way to build them without hurting poor people, local communities, or birds and stuff.
This is called politics and it’s powerful. The most effective anti-housebuilding organisation in the UK doesn’t call itself Pause Housebuilding. It calls itself the Campaign to Protect Rural England, because English people love rural England. CPRE campaigns in the 1940s helped shape England’s planning system. As a result, permission to build houses is only granted when it’s in the “public interest”; in practice it is given infrequently and often with onerous conditions.[1]
The AI pause folks could learn from their success. Instead of campaigning for a total halt to AI development, they could push for strict regulations that aim to ensure new AI systems won’t harm people (or birds and stuff).
This approach has two advantages. First, it’s more politically palatable than a heavy-handed pause. And second, it’s closer to what those of us concerned about AI safety ideally want: not an end to progress, but progress that is safe and advances human flourishing.
I think NIMBYs happen to be wrong about the cost-benefit calculation of strong regulation. But AI safety people are right: advanced AI systems pose grave threats, and we don’t know how to mitigate them.
So ask governments for an equivalent system for new AI models: require companies to prove to regulators that their models are safe. Ask for:
Independent safety audits
Ethics reviews
Economic analyses
Public reports on risk analysis and mitigation measures
Compensation mechanisms for people whose livelihoods are disrupted by automation
And a bunch of other measures that plausibly limit the risks from AI
In practice, these requirements might be hard to meet. But, considering the potential harms and the meaningful chance that something goes wrong, they should be. If a company developing an unprecedentedly large AI model with surprising capabilities can’t prove the model is safe, it shouldn’t be released.
This is not about pausing AI.
I don’t know anybody who thinks AI systems have zero upside. In fact, the same people worried about the risks are often excited about the potential for advanced AI systems to solve thorny coordination problems, liberate billions from mindless toil, achieve wonderful breakthroughs in medicine, and generally advance human flourishing.
But they’d like companies to prove their systems are safe before they release them into the world, or even train them at all. To prove that they’re not going to cause harm by, for example, hurting people, disrupting democratic institutions, or wresting control of important sociopolitical decisions from human hands.
Who can argue with that?
[Edit: Peter McIntyre has pointed out that Ezra Klein made a version of this argument on the 80K podcast. So I’ve been scooped—but at least I’m in good company!]
[1] Joshua Carson, head of policy at the consultancy Blackstock, said: “The notion of developers ‘sitting on planning permissions’ has been taken out of context. It takes a considerable length of time to agree the provision of new infrastructure on strategic sites for housing and extensive negotiation with councils to discharge planning conditions before homes can be built.” (Kollewe 2021)