I read this post and the comments that have followed it with great interest.
I have two major, and one minor, worries about QRI’s research agenda I hope you can clarify. First, I am not sure exactly which question you are trying to answer. Second, it’s not clear to me why you think this project is (especially) important. Third, I can’t understand what STV is about because there is so much (undefined) technical jargon.
1. Which question is QRI trying to answer?
You open by saying:
We know suffering when we feel it — but what is it? What would a satisfying answer for this even look like?
This makes me think you want to identify what suffering is, that is, what it consists in. But you then immediately raise Buddhist and Aristotelian theories of what causes suffering—a wholly different issue. FWIW, I don’t see anything deeply problematic in identifying what suffering, and related terms, refer to. Valence just refers to how good/bad you feel (the intrinsic pleasurableness/displeasurableness of your experience); happiness is feeling overall good; suffering is feeling overall bad. I don’t find anything dissatisfying about these. Valence refers to something subjective. That’s a definition in terms of something subjective. What else could one want?
It seems you want to do two things: (1) somehow identify which brainstates are associated with valence and (2) represent subjective experiences in terms of something mathematical, i.e. something non-subjective. Neither of these questions is identical to establishing either what suffering is, or what causes it. Hence, when you say:
QRI thinks not having a good answer to the question of suffering is a core bottleneck
I’m afraid I don’t know which question you have in mind. Could you please specify?
2. Why does that all matter?
It’s unclear to me why you think solving either problem - (1) or (2) - is (especially) valuable. There is some fairly vague stuff about neurotech, but this seems pretty hand-wavey. It’s rather bold for you to claim
there are trillion-dollar bills on the sidewalk, waiting to be picked up if we just actually try
and I think you owe the reader a bit more to bite into, in terms of a theory of change.
You might offer some answer about the importance of being able to measure what impacts well-being here but—and I hope old-time forum hands will forgive me as I mount a familiar hobby-horse—economics and psychology seem to be doing a reasonable job of this simply by surveying people, e.g. asking them how happy they are (0-10). Such work can and does proceed without a theory of exactly what is happening inside the ‘black box’ of the brain; it can be used, right now, to help us determine what our priorities are - if I can be permitted to toot my horn from astride the hobby-horse, I should add that this just is what my organisation, the Happier Lives Institute, is working on. If I were to insist on waiting for real-time brain-scanning data to learn whether, say, cash transfers are more cost-effective than psychotherapy at increasing happiness, I would be waiting some time.
3. Too much (undefined) jargon
Here is a list of terms or phrases that seem very important for understanding STV where I have very little idea exactly what you mean:
Neurophysiological models of suffering try to dig into the computational utility and underlying biology of suffering
symmetry
harmony
dissonance
resonance as a proxy for characteristic activity
Consonance Dissonance Noise Signature
self-organizing systems
Neural Annealing
full neuroimaging stack
precise physical formalism for consciousness
STV gives us a rich set of threads to follow for clear neurofeedback targets, which should allow for much more effective closed-loop systems, and I am personally extraordinarily excited about the creation of technologies that allow people to “update toward wholesome”,
Finally, and perhaps most importantly, I’m really not sure what it could even mean to represent consciousness/valence as a mathematical shape.
If this is the ‘primer’, I am certainly not ready for the advanced course(!).
Hi Michael, I appreciate the kind effortpost, as per usual. I’ll do my best to answer.
This is a very important question. To restate it in several ways: what kind of thing is suffering? What kind of question is ‘what is suffering’? What would a philosophically satisfying definition of suffering look like? How would we know if we saw it? Why does QRI think existing theories of suffering are lacking? Is an answer to this question a matter of defining some essence, or defining causal conditions, or something else?
Our intent is to define phenomenological valence in a fully formal way, with the template being physics: we wish to develop our models such that we can speak of pain and pleasure with all the clarity, precision, and rigor with which we currently describe photons, quarks, and fields.
This may sound odd, but physics is a grand success story of formalization, and we essentially wish to apply the things that worked in physics to phenomenology. Importantly, physics has a strong tradition of using symmetry considerations to inform theory. STV borrows squarely from this tradition (see e.g. my write-up on Emmy Noether).
Valence is subjective as you note, but that doesn’t mean it’s arbitrary; there are deep patterns in which conditions and sensations feel good, and which feel bad. We think it’s possible to create a formal system for the subjective. Valence and STV are essentially the pilot project for this system. Others such as James and Husserl have tried to make phenomenological systems, but we believe they didn’t have all the pieces of the puzzle. I’d offer our lineages page for what we identify as ‘the pieces of the puzzle’; these are the shoulders we’re standing on to build our framework.
2. I see the question. Also, thank you for your work on the Happier Lives Institute; we may not interact frequently but I really like what you’re doing.
The significance of a fully rigorous theory of valence might not be fully apparent, even to the people working on it. Faraday and Maxwell formalized electromagnetism; they likely did not foresee their theory being used to build the iPhone. However, I suspect that they had deep intuitions that there’s something deeply useful in understanding the structure of nature, and perhaps they wouldn’t be as surprised as their contemporaries. We also hold intuitions as to the applications of a full theory of valence.
The simplest is that it would unlock novel psychological and psychiatric diagnostics. If there is some difficult-to-diagnose nerve pain, or long-covid-type bodily suffering, or some emotional disturbance that is difficult to verbalize, this is directly measurable in principle with STV. This wouldn’t replace economics and psychology, as you say, but it would augment them.
Longer term, I’m reminded of the (adapted) phrase, “what you can measure, you can manage.” If you can reliably measure suffering, you can better design novel interventions for reducing it. I could see a validated STV as the heart of a revolution in psychiatry, and some of our work (Neural Annealing, Wireheading Done Right) is aimed at possible shapes this might take.
3. Aha, an easy question :) I’d point you toward our web glossary.
To your question, “Finally, and perhaps most importantly, I’m really not sure what it could even mean to represent consciousness/valence as a mathematical shape” — this is perhaps an overly-fancy way of saying that we believe consciousness is precisely formalizable. The speed of light is precisely formalizable; the UK tax rate is precisely formalizable; the waveform of an mp3 is precisely formalizable, and all of these formalizations can be said to be different ‘mathematical shapes’. To say something does not have a ‘mathematical shape’ is to say it defies formal analysis.
Thanks again for your clear and helpful questions.
I’ll take a shot at these questions too, perhaps being usefully only partially familiar with QRI.
1. Which question is QRI trying to answer?
Is there a universal pattern to conscious experience? Can we specify a function from the structure and state of a mind to the quality of experience it is having?
2. Why does that all matter?
If we discover a function from mind to valence, and develop the right tools of measurement and intervention (big IFs, for sure), we can steer all minds towards positive experience.
Until recently we only had intuitive physics, useful for survival, but not enough for GPS. In the same way, we can make some predictions today about what will make humans happy or sad, but we don’t understand depression very well; we can guess at how other animals feel, but it gets murky as you consider more and more distant species; and we’re in the dark on whether artificial minds experience anything at all. A theory of valence would let us navigate phenomenological space with new precision, across a broad domain of minds.
I appreciate your comment here, and am a big fan of your work.
In response to point #3, I think it is extremely revealing how you ask for definitions of a few phrases, and Mike directs you to a link that does not define the phrases you specifically ask for. https://www.qualiaresearchinstitute.org/glossary Edit: Mike responded directly to this below, so this feels unfair to say now.
Good catch; there’s plenty that our glossary does not cover yet. This post is at 70 comments now, and I can just say I’m typing as fast as I can!
I pinged our engineer (who has taken the lead on the neuroimaging pipeline work) about details, but as the collaboration hasn’t yet been announced I’ll err on the side of caution in sharing.
To Michael — here’s my attempt to clarify the terms you highlighted:
Neurophysiological models of suffering try to dig into the computational utility and underlying biology of suffering
-> existing theories talk about what emotions ‘do’ for an organism, and what neurochemicals and brain regions seem to be associated with suffering
symmetry
Frank Wilczek calls symmetry ‘change without change’. A limited definition is that it’s a measure of the number of ways you can rotate a picture and still get the same result. You can rotate a square 90 degrees, 180 degrees, or 270 degrees and get something identical; you can rotate a circle in any direction and get something identical. Thus we’d say circles have more rotational symmetries than squares (which have more than rectangles, etc.)
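To make the counting concrete, here’s a tiny sketch (my own illustration, not QRI code) that counts the whole-degree rotations fixing a regular polygon — note it includes the trivial 0° rotation, so a square scores 4 rather than 3:

```python
# Toy illustration: count the whole-degree rotations that map a regular
# n-sided polygon onto itself. More symmetries = more ways to "change
# without change". (The trivial 0-degree rotation is counted too.)

def rotational_symmetries(n_sides):
    """Number of whole-degree rotations fixing a regular n-gon."""
    base_angle = 360 // n_sides
    return sum(1 for deg in range(360) if deg % base_angle == 0)

print(rotational_symmetries(4))   # square  -> 4
print(rotational_symmetries(6))   # hexagon -> 6
```

A circle, being fixed by every rotation, would score higher than any polygon — which is the sense in which it is “more symmetric.”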
harmony
Harmony has been in our vocabulary a long time, but it’s not a ‘crisp’ word. This is why I like to talk about symmetry, rather than harmony — although they more-or-less point in the same direction
dissonance
The combination of multiple frequencies that have a high amount of interaction, but few common patterns. Nails on a chalkboard create a highly dissonant sound; playing the C and C# keys at the same time also creates a relatively dissonant sound
resonance as a proxy for characteristic activity
I’m not sure I can give a fully satisfying definition here that doesn’t just reference CSHW; I’ll think about this one more.
Consonance Dissonance Noise Signature
A way of mathematically calculating how much consonance, dissonance, and noise there is when we add different frequencies together. This is an algorithm developed at QRI by my co-founder, Andrés
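I don’t know the internals of QRI’s CDNS algorithm, but the general idea of scoring a set of frequencies can be illustrated with a classic Plomp–Levelt/Sethares-style roughness curve (a toy of my own, not QRI’s method): dissonance between two pure tones is near zero at unison, peaks at a small frequency gap, and falls off as the tones separate.

```python
import math

# Toy sketch (NOT QRI's CDNS algorithm): a Sethares-style roughness
# curve for two pure tones, summed over all pairs to score a chord.

def pair_dissonance(f1, f2):
    """Roughness of two pure tones (standard Sethares constants)."""
    fmin = min(f1, f2)
    s = 0.24 / (0.021 * fmin + 19)   # scales with critical bandwidth
    x = s * abs(f2 - f1)
    return math.exp(-3.5 * x) - math.exp(-5.75 * x)

def chord_dissonance(freqs):
    """Sum pairwise roughness over every pair of frequencies."""
    return sum(pair_dissonance(freqs[i], freqs[j])
               for i in range(len(freqs))
               for j in range(i + 1, len(freqs)))

# The C + C# pair from above scores rougher than a consonant C + G fifth:
print(chord_dissonance([261.63, 277.18]))  # minor second: higher score
print(chord_dissonance([261.63, 392.00]))  # perfect fifth: lower score
```

Under this kind of scoring, the C/C# example above comes out markedly rougher than a perfect fifth, matching the intuition in the dissonance definition.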
self-organizing systems
A system which isn’t designed by some intelligent person, but follows an organizing logic of its own. A beehive or anthill would be a self-organizing system; no one’s in charge, but there’s still something clever going on
Neural Annealing
In November 2019 I published a piece describing the brain as a self-organizing system. Basically, “when the brain is in an emotionally intense state, change is easier”, similar to how metal becomes easier to reshape as it heats up and starts to melt.
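The metal analogy can be made concrete with the classic simulated-annealing algorithm (a toy of my own, not QRI’s brain model): at high “temperature” the system readily accepts worse states and can escape entrenched configurations; as it cools, it settles into a new stable shape.

```python
import math
import random

# Toy simulated annealing: high temperature = big changes are easy;
# cooling = the system locks into a (possibly new, better) configuration.

def anneal(energy, start, steps=5000, t0=2.0, seed=0):
    rng = random.Random(seed)
    x, t = start, t0
    for i in range(steps):
        candidate = x + rng.uniform(-1, 1)
        delta = energy(candidate) - energy(x)
        # Always accept improvements; accept worsenings with probability
        # exp(-delta/t), which shrinks as the temperature drops.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate
        t = t0 * (1 - i / steps) + 1e-6   # linear cooling schedule
    return x

# A bumpy energy landscape; annealing from far away settles into a low basin.
bumpy = lambda x: x * x + math.sin(5 * x)
print(anneal(bumpy, start=8.0))
```

The loose mapping in the post’s analogy: an emotionally intense state is the high-temperature phase, where large reconfigurations are possible that a “cool” brain would reject.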
full neuroimaging stack
All the software we need to do an analysis (and specifically, the CSHW analysis), from start to finish
precise physical formalism for consciousness
A perfect theory of consciousness, which could be applied to anything. Basically a “consciousness meter”
STV gives us a rich set of threads to follow for clear neurofeedback targets, which should allow for much more effective closed-loop systems, and I am personally extraordinarily excited about the creation of technologies that allow people to “update toward wholesome”,
Ah yes, this is a little bit dense. Basically, one big thing holding back neurotech is that we don’t have good biomarkers for well-being. If we design these biomarkers, we can design neurofeedback systems that work better (not sure how familiar you are with neurofeedback).
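For anyone unfamiliar with neurofeedback, here is the skeleton of a closed loop (my own toy simulation, not an actual QRI system — the biomarker and the “user” below are stand-ins): a biomarker scores each window of brain data, feedback reports the score, and the user learns which internal states push it up. The whole design hinges on the biomarker genuinely tracking well-being; otherwise the loop optimizes noise.

```python
import random

# Toy closed-loop neurofeedback. The simulated "user" keeps whatever
# internal changes raise the biomarker score, mimicking feedback-driven
# learning. Everything here is a placeholder to show the loop's shape.

def biomarker(state):
    """Placeholder well-being score: peaks at a target state of 10.0."""
    return -abs(state - 10.0)

def run_session(state=0.0, steps=200, seed=1):
    rng = random.Random(seed)
    for _ in range(steps):
        score = biomarker(state)          # measure + evaluate
        nudge = rng.uniform(-0.5, 0.5)    # the user tries a small change
        # Feedback: changes that don't lower the score are kept.
        if biomarker(state + nudge) >= score:
            state += nudge
    return state

print(run_session())   # converges near the target state of 10.0
```

The claim in the comment, restated: STV is pitched as a principled way to write the `biomarker` function, which is the piece current systems lack.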