I think something like this is right, but I’m not entirely sure what an expected welfare range is. Suppose I think that all conscious things with pleasurable/painful experiences have the same welfare range, but there is only a 1 in 1000 chance that a particular AI system has conscious pains and pleasures. What would its expected welfare range be?
The expected welfare range can be calculated as “probability of the welfare range being positive” × “expected welfare range if it is positive”, and is usually assumed to be 1 for humans. So it would be 10^-3 for the case you described, i.e. having 1,000 such AI systems experiencing the best possible state instead of the worst would produce as much welfare as having 1 human experiencing the best possible state instead of the worst.
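As a rough illustration of the arithmetic (a minimal sketch; the probability and conditional welfare range are just the numbers from the example above, with humans normalised to 1):

```python
# Expected welfare range = P(welfare range is positive) * E[welfare range | positive].
# Assumed example numbers: a 1-in-1000 chance the AI system has conscious pains and
# pleasures, and the same welfare range as humans conditional on that.

p_sentient = 1 / 1000          # probability the AI system has a positive welfare range
welfare_range_if_sentient = 1  # same welfare range as humans, conditional on sentience
human_welfare_range = 1        # humans are the reference point

expected_welfare_range = p_sentient * welfare_range_if_sentient
print(expected_welfare_range)  # 0.001

# Number of such AI systems whose best-vs-worst swing matches one human's.
print(human_welfare_range / expected_welfare_range)  # 1000.0
```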
Okay, that makes sense I guess.