Congrats on your first post! I appreciated reading your perspective on this – it’s well articulated.
I think I disagree about how likely existential risk from advanced AI is. You write:
Given that life is capable of thriving all on its own via evolution, AI would have to see the existence of any life as a threat for it to actively pursue extinction
In my view, an AGI (artificial general intelligence) is a self-aware agent with a set of goals and the capability to pursue those goals very well. Sure, if such an agent viewed humans as a threat to its own existence, it would wipe us out. But it might also wipe us out simply because we get slightly in the way of some goal it’s pursuing. Humans have very complex values, and it is quite difficult to match an AI’s values to human values. I am somewhat worried that an AI would kill us all not because it hates us, but because we are a minor nuisance to its pursuit of unrelated goals.
When humans bulldoze an ant hill to build a highway, it’s not because we hate the ants or feel threatened by them. It’s because they’re in the way of what we’re trying to do. Humans tend to want to control the future, which makes us competition: if I were an advanced AI optimizing for values that weren’t exactly the ones humans have, it might be easiest to just get rid of that competition – and we’re not that hard to kill.
I think this is one story of why AI poses existential risk, but there are many more. For further reading, I quite like Carlsmith’s piece! Again, welcome to the forum!