Hey, welcome to the EA forum! I hope you stick around.
I pretty much agree with this post. The argument put forward by AI risk doomers is generally flimsy, with core weaknesses including unrealistic assumptions about what AGI would actually be capable of (given the limits of computational complexity and the physical difficulty of technological advancement), and a lack of justification for assuming AIs will be fanatical utility-function maximisers. I think the chances of human extinction from AI are extremely low, and that estimates around here are inflated by subtle groupthink, poor probabilistic treatment of speculative events, and a few just straight-up wrong ideas that were made up a long time ago and haven't been updated sufficiently for the latest developments in AI.
That being said, AI advancements could have a significant effect on the world. I think it’s fairly likely that if AI is misused, there may be a body count, perhaps a significant one. I don’t think it’s a bad idea to be proactive and think ahead about how to manage the risks involved. There is a middle ground between no regulation and bombing data centers.
I’m curious about the somewhat hedged word choices in the second paragraph, like “could” and “perhaps”. The case for great, even extreme, harm from AI misuse seems a lot more straightforward than AI doom. Misuse of new, very powerful technologies has caused at least significant harm (including body counts) in the past with some consistency, so I would assume the pattern would follow with AI as well.
I’m allowing for the possibility that we hit another AI winter, and the new powerful technology just doesn’t arrive in our lifetime. Or that the technology is powerful for some things, but remains too unreliable for use in life-critical situations and is kept out of them.
I think it’s likely that AI will have at least an order of magnitude or two greater body count than it has now, but I don’t know how high it will be.
I once worked on a program with DoD to help buy up loose MANPADS in Libya. There’s a linear causal relationship between portable air defense systems and harm. Other ordnance has a similar relationship.
The relationship is tenuous when we move from the world of atoms to bits. I struggle to see how new software could pose novel risks to life and limb. That doesn’t mean developers of self-driving vehicles or aircraft autopilot functions should ignore safety in their software design; what I’m suggesting is that those considerations are not novel.
If someone advocates that we treat neural networks unlike any other system in existence today, I would imagine the burden of proof would be on them to justify this new approach.