It seems like it would be good if the discussion moved from the binary-like question “is this AI system sentient?” to the spectrum-like question “what is the expected welfare range of this AI system?”. I would say any system has a positive expected welfare range, because welfare ranges cannot be negative, and we cannot be 100 % sure they are null. If one interprets sentience as having a positive expected welfare range, AI systems are already sentient, and the question is rather how large their expected welfare ranges are.
I think something like this is right, but I’m not entirely sure what an expected welfare range is. Suppose I think that all conscious things with pleasurable/painful experiences have the same welfare range, but there is only a 1 in 1,000 chance that a particular AI system has conscious pains and pleasures. What would its expected welfare range be?
The expected welfare range can be calculated as “probability of the welfare range being positive” × “expected welfare range if it is positive”, and the welfare range is usually assumed to be 1 for humans. So it would be 10^-3 for the case you described, i.e. having 1,000 such AI systems experiencing the best possible state instead of the worst would produce as much welfare as having 1 human experiencing the best possible state instead of the worst.
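A minimal worked version of that calculation, assuming (as in your example) a 1 in 1,000 probability of the AI system having a positive welfare range and a conditional welfare range equal to the human value of 1:

```latex
% Expected welfare range of the AI system in the example above
% (uses amsmath for \text and \tfrac)
\[
  E[\text{welfare range}]
    = P(\text{welfare range} > 0)
      \times E[\text{welfare range} \mid \text{welfare range} > 0]
    = \tfrac{1}{1000} \times 1
    = 10^{-3}.
\]
```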
Thanks for clarifying!