My personal model is that most of that can be figured out post-AGI.
Hi Ozzie,
One could also have argued for figuring out farmed animal welfare only after cheap animal food (produced in factory farms) became widely available. Now that lots of people are eating factory-farmed animals, it is harder to roll back factory farming.
Not sure if this helps, but I currently believe:
1. Relatively little or no AI suffering will happen, pre-AGI.
2. There's not going to actually be much lock-in on this, post-AGI.
3. When we get to AGI, we'll gain much better abilities to reason through these questions (making it different from the "figuring out animal welfare" claim).
Commenting just to encourage you to make this its own post. I haven't seen a (recent) standalone post about this topic, it seems important, and though I imagine many people are following this comment section, it also seems easy for this discussion to get lost and for people with relevant opinions to miss it/not engage because it's off-topic.
Apparently there will be a debate week about this soon! I hope it covers territory similar to what I'm thinking about (which I assumed was fairly basic). It's very possible I'll be convinced to the other side; I look forward to the discussion.
I might write a short post if it seems useful then.
Some quick takes on this from me: I agree with 2 and 3, but it's worth noting that "post-AGI" might be "2 years after AGI, while there is a crazy singularity ongoing and vast numbers of digital minds".
As stated, (1) seems about 75% likely to me, which is not hugely reassuring. Further, I think there is a critical period you're not highlighting: a time when AGI exists but humans are still (potentially) in control and society looks similar to now.