The field is interested in looking more closely at valence independently of consciousness.
Could you link the most relevant piece you are aware of? What do you mean by “independently”? Under hedonism, I think the probability of consciousness only matters to the extent it informs the probability of valenced experiences.
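One way to make that reading explicit is the factorization below, a sketch assuming that valenced experience entails consciousness; on that assumption, the probability of consciousness bears on welfare only through the product.

```latex
% Sketch, assuming valenced experience (V) entails consciousness (C):
\[
  P(V) \;=\; P(V \mid C)\, P(C)
\]
% so P(C) matters only insofar as it feeds this product.
```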
The idea is more aspirational. I’m not really sure what to recommend in the field, but this is a pretty good overview: https://arxiv.org/pdf/2404.16696
The probability of sentience (valenced experiences) conditional on consciousness is quite high for animals. Should we expect the same for AIs?
You could at least confirm that AIs don’t have valenced experience.
Interesting! How?
Perhaps valence requires something like the assignment of weights to alternative possibilities. If you can look inside the AI and confirm that it is making decisions in a different way, you can conclude that it doesn’t have valenced experiences. Valence plausibly requires such assignments of weights (most likely alongside a bunch of other constraints), and the absence of even one requirement is enough to rule valence out. Of course, this sort of requirement is likely to be controversial, but it is less open to radically different views than consciousness itself.
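To make the shape of that argument concrete, here is a deliberately toy sketch in Python (hypothetical names, nothing about real model internals): the first procedure assigns scalar weights to its alternative possibilities before choosing, while the second selects through a fixed lookup rule and never represents anything weight-like. On the view above, finding only the second kind of structure inside a system would count against it having valenced experiences, even when the outward choices match.

```python
# Deliberately toy sketch of the structural contrast described above.
# Names and structure are hypothetical; this is not a proposal for how an
# interpretability check on a real model would actually work.

from typing import Callable, Dict, List


def weighted_chooser(options: List[str], weight: Callable[[str], float]) -> str:
    """Assign a weight to every alternative possibility, then pick the best."""
    weights: Dict[str, float] = {option: weight(option) for option in options}
    return max(weights, key=weights.get)


def lookup_chooser(options: List[str], rule: Dict[str, str]) -> str:
    """Pick by a fixed situation -> action table; nothing weight-like is represented."""
    situation = ",".join(sorted(options))
    return rule.get(situation, options[0])


if __name__ == "__main__":
    options = ["rest", "explore"]
    # Same outward choice, different internal structure:
    print(weighted_chooser(options, weight=lambda o: {"rest": 0.2, "explore": 0.9}[o]))  # explore
    print(lookup_chooser(options, rule={"explore,rest": "explore"}))                     # explore
```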
Independently, we’re also very interested in how to capture the difference between positive and negative experiences in alien sorts of minds. It is often taken for granted based on human experience, but it isn’t trivial to say what that difference actually is.
Makes sense. Without that, it would be very hard to improve digital welfare.