Do you think a permanent* ban on AI research and development would be a better path than a pause? I agree a six-month pause is unlikely to accomplish anything, but far-reaching government legislation banning AI just might—especially if we can get the U.S., China, EU, and Russia all on board (easier said than done!).
*nothing is truly permanent, but I would feel much more comfortable with a more socially just and morally advanced human society having the AI discussion ~200 years from now, than for the tech to exist today. Humanity today shouldn’t be trusted to develop AI for the same reason 10-year-olds shouldn’t be trusted to drive trucks: it lacks the knowledge, experience, and development to do it safely.
Let’s look at the history of global bans:
- They don’t work for doping in the Olympics.
- They don’t work for fissile material.
- They don’t prevent luxury goods from entering North Korea.
- They don’t work against cocaine or heroin.
We could go on. And those bans are far easier to implement than an AI ban would be—there’s global consensus and law enforcement dedicated to stopping the drug trade, yet the economics of the sector mean an escalating war with cartels only raises the payoff for new market entrants.
Setting aside practical limitations, we ought to think carefully before weaponizing the power of central governments against private individuals. When we can identify a negative externality, we have some justification to internalize it. No one wants firms polluting rivers or scammers selling tainted milk.
Generative AI hasn’t shown externalities that would necessitate something like a global ban.
Trucks: we know what the externalities of a poorly piloted vehicle are. So we minimize those risks by requiring competence.
And on a morally advanced society—yes, I’m certain that, if asked, a majority of folks would say they’d like a more moral and ethical world. But that’s not the question. The question is: who gets to decide what we can and cannot do? And what criteria do they use to make these decisions? Real risk, as demonstrated by data, or theoretical risk? The latter was used to halt interest in nuclear fission. Should we expect the same for generative AI?
The question of “who gets to do what” is fundamentally political, and I really try to stay away from politics, especially when dealing with the subject of existential risk. This isn’t to discount the importance of politics, only to say that while political processes are helpful in determining how we manage x-risk, they don’t in and of themselves directly relate to the issue. Global bans would also be political, of course.
You may well be right that the existential risk of generative AI, and eventually AGI, is low or indeterminate, and theoretical rather than actual. I don’t think we should wait until we have an actual x-risk on our hands to act — because then it may be too late.
You’re also likely correct that AI development is unstoppable at this point. Mitigation plans are needed in case unfriendly outcomes occur, especially with an AGI, and I think we can both agree on that.
Maybe I’m too cautious when it comes to the subject of AI, but part of what motivates me is the idea that, should catastrophe occur, I could at least know that I did everything in my power to oppose that risk.
These are all very reasonable positions, and one would struggle to find fault with them.
Personally, I’m glad there are smart folks out there thinking about what sorts of risks we might face in the near future. Biologists have been talking about the next big pandemic for years. It makes sense to think these issues through.
Where I vehemently object is on the policy side. To use the pandemic analogy, it’s the difference between a research-led investigation into future pandemics and a call to ban the use of CRISPR. It’s impractical and, from a policy perspective, questionable.
The conversation around AI within EA is framed as “we need to stop AI progress before we all die.” It seems tough to justify such an extreme policy position.