Not sure if this helps, but I currently believe:
1. Relatively little or no AI suffering will happen, pre-AGI.
2. There’s not going to actually be much lock-in on this, post-AGI.
3. When we get to AGI, we’ll gain much better abilities to reason through these questions (making it different from the “figuring out animal welfare” case).
Commenting just to encourage you to make this its own post. I haven’t seen a (recent) standalone post about this topic, it seems important, and though I imagine many people are following this comment section it also seems easy for this discussion to get lost and for people with relevant opinions to miss it/not engage because it’s off-topic.
Apparently there will be a debate week about this soon! I hope it covers territory similar to what I’m thinking (which I assumed was fairly basic). It’s very possible I’ll be convinced to the other side; I look forward to the discussion.
I might write a short post if it seems useful then.
Some quick takes on this from me: I agree with 2 and 3, but it’s worth noting that “post-AGI” might be “2 years after AGI, while a crazy singularity is ongoing and there are vast numbers of digital minds”.
I think that, as stated, (1) seems about 75% likely to me, which is not hugely reassuring. Further, I think there is a critical time you’re not highlighting: a time when AGI exists but humans are still (potentially) in control and society looks similar to now.