I fully support a pause, however that is enacted, until we find a way to ensure safety.
I think part of the reason so many people do not consider a pause not only reasonable but self-evidently the right thing to do is related to the specific experience of some of the people on this forum.
A lot of people engaging in this debate naturally come from an AI or tech background. Or they’ve followed the fortunes of Facebook, Amazon, and Microsoft from a distance and seen how they’ve been allowed to do pretty much whatever they want. Any proposal to limit tech innovation may seem shocking, because tech has had an almost regulation-free ride until now. Other groups in the public eye, such as banks and investment firms, have paid off enough people in Congress to eliminate most of their regulations too.
But this is very much NOT the norm.
If you look at, say, the S&P 500, you’ll see maybe 30 tech companies or banks, and a few others, that face very little regulation, but many more companies that are very used to being strictly regulated.
Pharma companies are used to discovering miracle drugs but still having to go through years (sometimes decades) of safety testing before they can make them available to the public, and even then they still need FDA audits to prove that they are producing exactly what they said they would, the way they said they would. Any change can take another few years to get approved.
Engineers and architects know that every major design they create needs to be reviewed by countless bodies that effectively have the right to deny approval, and the burden of proof is always on the side of those who want to go ahead.
If you try to get a new chemical approved for use in food, it is such a long and costly process that most companies don’t even bother trying.
This is how the world works. There is this mentality among tech people that they somehow have the right to innovate and put out products with no restrictions, as if this were everyone’s god-given right. But it’s not.
So maybe people within tech have a can’t-do attitude (as Katja Grace called it) towards a pause, thinking it cannot work. But the world knows how to do pauses: how to define safety criteria and how to make sure they are met before a product is released. Sure, the details for AI will be different from those for pharma, but is AI fundamentally more complex than the interactions of a new, complex chemical with a human body? It isn’t obviously so.
The FDA and others have found ways to keep drugs safe while still allowing phenomenal progress. It is frustrating as hell in the short term, but in the long run it works best for everyone: when you buy a drug, it is highly unlikely to harm you in unexpected ways, and typically any harm it might do has been analysed and communicated to the medical community, so that you and your doctor know what the risks are.
It feels wrong for the AI community to believe that they deserve to be free of regulation when the risks are even greater than those from pharma. And it feels like a can’t-do attitude for us to believe that a pause cannot happen or cannot be effective.