[Linkpost] Beware the Squirrel by Verity Harding


I think this piece by Verity Harding raises some good points about AI safety & x-risk. I would be curious to hear anyone's thoughts on it.

Below are a few excerpts from the post, which hopefully string together into a sort of summary (but, of course, please read the full post before commenting).

“I’ll go out on a limb and say, superintelligent AI systems are not going to cause human extinction… None of this is to argue that AI systems today, and in the next decade, aren’t dangerous – or that I’m personally not worried. They will be, and I am.”

“[W]hat worries me [about AI] is that this headlong rush into AI mania will result in it being used where it should not be—either because the system is not capable enough to actually do what it claims to be able to do, or because it is in an area where human judgement should not be replaced, or in a way that fundamentally upends the social contract between a democratic government and its citizens.”

“Which brings me back to the problem with the resurgence in advocates for existential AI risk. It’s not that any work in this area is pointless. A counter argument to my scepticism would go “well, if there’s even a 0.01% chance of wiping out humanity then we should do all we can to stop that!” And, sure. AI research is a wide, varied field and if this is a topic for which someone has a personal passion then great. But as with the hard grind of getting people to take bias, accountability, and transparency in AI seriously, others have done excellent work highlighting why too much focus on what’s known as “long-termism” is distracting at best. At worst, it is a disregard for any effect of AI apart from total human extinction. At best, it is a naive approach that is careless with the lives of people – real people, alive now – who will be affected by pervasive, unregulated AI. Critics of this will argue that just because you are worried about short-term risk doesn’t negate long-term risk, that it’s not a zero-sum game. Well, actually, it is.”

“Because there is not infinite money to spend on AI safety and ethics research. The biggest builders of AI systems, the large tech companies, may invest in ethics and safety teams who are focussed on issues outside of existential risk. But the resources they have at their disposal are a fraction of the resources given to the teams moving ahead at speed with building and integrating AI systems into our lives already. Even when the big tech companies are thoughtful and careful about what they build and release, private philanthropy and government funding is extremely important in ensuring that independent AI ethics and safety research is also supported, not least as a check on that power. So when those funders become convinced that the most important thing to focus on is long-term AI risk – something which has very little evidence, if any, in its favour and is refuted by numerous AI experts – it very seriously detracts from support for much more urgent and important work.”

“Everyone using their voice to advocate for panic and alarm over something without a very clear explanation of how it would be possible, really, in the physical world, should think carefully about whether or not they are really helping with their (I’m sure genuinely felt) goal to make AI – and the world – safer.”