I also used to be pretty skeptical about the credibility of the field. I was surprised to learn about how much mainstream, credible support AI safety concerns have received:
Multiple leading AI labs have large (e.g. 30-person) teams of researchers dedicated to AI alignment.
They sometimes publish statements like, “Unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together.”
Key findings that are central to concerns over AI risk have been accepted (with peer review) into top ML conferences.
A top ML conference is hosting a workshop on ML safety (with a description that emphasizes “long-term and long-tail safety risks”).
Reports and declarations from some major governments have endorsed AI risk worries.
The UK’s National AI Strategy states, “The government takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for the UK and the world, seriously.”
There are AI faculty at universities including MIT, UC Berkeley, and Cambridge who endorse AI risk worries.
To be fair, AI risk worries are far from a consensus view. But in light of the above, the idea that all respected AI researchers find AI risk laughable seems plainly mistaken. Instead, it seems clear that a significant fraction of respected AI researchers and institutions are worried. Maybe these concerns are misguided, but probably not for any reason that’s obvious to anyone with basic knowledge of AI—or these worried AI experts would have noticed.
(Also, in case you haven’t seen it yet, you might find this discussion on whether there are any experts on these questions interesting.)
Thank you for these references; I’ll take a close look at them. I’ll write a new comment if I have any thoughts after going through them.
Before reading them, I want to say that I’m interested in research on risk estimation and AI progress forecasting. General research on possible AI risks that doesn’t assign them any probabilities is not very useful for determining whether a threat is relevant. If anyone has papers specifically on that topic, I’m very interested in reading them too.
(edited to add: as you might guess from my previous post, I think some level of AI skepticism is healthy and I appreciate you sharing your thoughts. I’ve become more convinced of the seriousness of AI x-risk over time, feel free to DM me if you’re interested in chatting sometime)
IMO, by far the most thorough estimation of AI x-risk to date is Carlsmith’s Is Power-Seeking AI an Existential Risk? (see also summary presentation, reviews).
I would be curious to know whether your beliefs have changed in light of recent developments.