I’ll take a shot at these questions too, perhaps being usefully only partially familiar with QRI.
1. Which question is QRI trying to answer?
Is there a universal pattern to conscious experience? Can we specify a function from the structure and state of a mind to the quality of experience it is having?
2. Why does that all matter?
If we discover a function from mind to valence, and develop the right tools of measurement and intervention (big IFs, for sure), we can steer all minds towards positive experience.
Until recently we had only intuitive physics: useful for survival, but not enough for GPS. In the same way, we can make some predictions today about what will make humans happy or sad, but we don’t understand depression very well; we can guess at how other animals feel, but it gets murkier as we consider more and more distant species; and we’re in the dark on whether artificial minds experience anything at all. A theory of valence would let us navigate phenomenological space with new precision, across a broad domain of minds.