A Primer on the Symmetry Theory of Valence
Crossposted from opentheory.net
STV is Qualia Research Institute’s candidate for a universal theory of valence, first proposed in Principia Qualia (2016). The following is a brief discussion of why existing theories are unsatisfying, what STV says, and key milestones so far.
I. Suffering is a puzzle
We know suffering when we feel it — but what is it? What would a satisfying answer for this even look like?
The psychological default model of suffering is “suffering is caused by not getting what you want.” This is the model that evolution has primed us toward. Empirically, it appears false (1)(2).
The Buddhist critique suggests that most suffering actually comes from holding this as our model of suffering. My co-founder Romeo Stevens suggests that we create a huge amount of unpleasantness by identifying with the sensations we want and making a commitment to ‘dukkha’ ourselves until we get them. When this fails to produce happiness, we take our failure as evidence we simply need to be more skillful in controlling our sensations, to work harder to get what we want, to suffer more until we reach our goal — whereas in reality there is no reasonable way we can force our sensations to be “stable, controllable, and satisfying” all the time. As Romeo puts it, “The mind is like a child that thinks that if it just finds the right flavor of cake it can live off of it with no stomach aches or other negative results.”
Buddhism itself is a brilliant internal psychology of suffering (1)(2), but has strict limits: it’s dogmatically silent on the influence of external factors on suffering, such as health, relationships, or anything having to do with the brain.
The Aristotelian model of suffering & well-being identifies a set of baseline conditions and virtues for human happiness, with suffering being due to deviations from these conditions. Modern psychology and psychiatry are tacitly built on this model, with one popular version being Seligman’s PERMA Model: P – Positive Emotion; E – Engagement; R – Relationships; M – Meaning; A – Accomplishments. Chris Kresser and other ‘holistic medicine’ practitioners are synthesizing what I would call ‘Paleo Psychology’, which suggests that we should look at our evolutionary history to understand the conditions for human happiness, with a special focus on nutrition, connection, sleep, and stress.
I have a deep affection for these ways of thinking and find them uncannily effective at debugging hedonic problems. But they’re not proper theories of mind, and say little about the underlying metaphysics or variation of internal experience.
Neurophysiological models of suffering try to dig into the computational utility and underlying biology of suffering. Bright spots include Friston & Seth, Panksepp, Joffily, and Eldar talking about emotional states being normative markers of momentum (i.e. whether you should keep doing what you’re doing, or switch things up), and Wager, Tracey, Kucyi, Osteen, and others discussing neural correlates of pain. These approaches are clearly important parts of the story, but tend to be descriptive rather than predictive, either focusing on ‘correlation collecting’ or telling a story without grounding that story in mechanism.
QRI thinks not having a good answer to the question of suffering is a core bottleneck for neuroscience, drug development, and next-generation mental health treatments, as well as philosophical questions about the future direction of civilization. We think this question is also much more tractable than people realize, that there are trillion-dollar bills on the sidewalk, waiting to be picked up if we just actually try.
II. QRI’s model of suffering – history & roadmap
What does “actually trying” to solve suffering look like? I can share what we’ve done, what we’re doing, and our future directions.
QRI.2016: We released the world’s first crisp formalism for pain and pleasure: the Symmetry Theory of Valence (STV)
QRI had a long exploratory gestation period as we explored various existing answers and identified their inadequacies. Things started to ‘gel’ as we identified and collected core research lineages that any fundamentally satisfying answer must engage with.
A key piece of the puzzle for me was Integrated Information Theory (IIT), the first attempt at a formal bridge between phenomenology and causal emergence (Tononi et al. 2004, 2008, 2012). The goal of IIT is to create a mathematical object ‘isomorphic to’ a system’s phenomenology — that is to say, to create a perfect mathematical representation of what it feels like to be something. If it’s possible to create such a mathematical representation of an experience, then how pleasant or unpleasant the experience is should be ‘baked into’ this representation somehow.
In 2016 I introduced the Symmetry Theory of Valence (STV), built on the expectation that, although the details of IIT may not yet be correct, it has the correct goal — to create a mathematical formalism for consciousness. STV proposes that, given such a mathematical representation of an experience, the symmetry of this representation will encode how pleasant the experience is (Johnson 2016). STV is a formal, causal expression of the sentiment that “suffering is lack of harmony in the mind,” and it has allowed us to make philosophically clear assertions such as:
X causes suffering because it creates dissonance, resistance, turbulence in the brain/mind.
If there is dissonance in the brain, there is suffering; if there is suffering, there is dissonance in the brain. Always.
This also let us begin to pose first-principles, conceptual-level models for affective mechanics: e.g., ‘pleasure centers’ function as pleasure centers insofar as they act as tuning knobs for harmony in the brain.
QRI.2017: We figured out how to apply our formalism to brains in an elegant way: CDNS
We had a formal hypothesis that harmony in the brain feels good, and dissonance feels bad. But how do we measure harmony and dissonance, given how noisy most forms of neuroimaging are?
An external researcher, Selen Atasoy, had the insight to use resonance as a proxy for characteristic activity. Neural activity may often look random (a confusing cacophony), but if we look at activity as the sum of all natural resonances of a system, we can say a great deal about how the system works, and which configuration the system is currently in, with a few simple equations. Atasoy’s contribution here was connectome-specific harmonic waves (CSHW), an experimental method for doing this with fMRI (Atasoy et al. 2016; 2017a; 2017b). This is similar to how mashing keys on a piano might produce a confusing mix of sounds, but by applying harmonic decomposition to this sound we can calculate which notes must have been played to produce it. There are many ways to decompose brain activity into various parameters or dimensions; CSHW’s strength is that it grounds these dimensions in physical mechanism: resonance within the connectome. (See also work by Helmholtz, Tesla, and Lehar.)
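To make the decomposition concrete, here is a minimal Python sketch of the idea (not Atasoy’s actual pipeline, which derives harmonics from DTI-based connectomes and cortical surfaces): treat the harmonics as eigenvectors of a graph Laplacian, and express a snapshot of activity as a weighted sum of those modes. All data below is a toy placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric "connectome": adjacency weights over 100 brain regions.
n = 100
A = rng.random((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)

# Graph Laplacian L = D - A; its eigenvectors play the role of harmonics.
D = np.diag(A.sum(axis=1))
eigvals, harmonics = np.linalg.eigh(D - A)  # columns = harmonic modes

# A snapshot of (synthetic) activity across the 100 regions.
activity = rng.standard_normal(n)

# Decompose: project activity onto each harmonic to get its weight,
# analogous to recovering which piano notes were struck from the sound.
weights = harmonics.T @ activity
power = weights**2  # the power-weighted list of harmonics

# The original activity is recovered exactly as the weighted sum of modes.
assert np.allclose(harmonics @ weights, activity)
```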
QRI built our ‘Consonance Dissonance Noise Signature’ (CDNS) method by combining STV with Atasoy’s work: my co-founder Andrés Gomez Emilsson had the key insight that if Atasoy’s method can give us a power-weighted list of harmonics in the brain, we can do a pairwise ‘CDNS’ analysis between these harmonics and sum the results to figure out how much total consonance, dissonance, and noise a brain has (Gomez Emilsson 2017). Consonance is roughly equivalent to symmetry (invariance under transforms) in the time domain, so the consonance between these harmonics should be a reasonable measure of the ‘symmetry’ of STV. This process offers a clean, empirical measure of how much harmony (and lack thereof) there is in a mind, structured in a way that lets us remain largely agnostic about the precise physical substrate of consciousness.
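Here is a minimal sketch of the CDNS idea as described above, not QRI’s actual implementation (which is not public in full): score every pair of harmonics for dissonance with a Plomp-Levelt-style roughness curve (constants borrowed from Sethares’ audio work, used here purely as placeholders) and take power-weighted sums.

```python
from itertools import combinations
import numpy as np

def pair_dissonance(f1, f2, b1=3.5, b2=5.75, d_star=0.24, s1=0.021, s2=19.0):
    """Sethares-style roughness of two pure tones (0 = fully consonant)."""
    f_min, diff = min(f1, f2), abs(f1 - f2)
    s = d_star / (s1 * f_min + s2)
    return np.exp(-b1 * s * diff) - np.exp(-b2 * s * diff)

def cdns_scores(freqs, powers):
    """Total power-weighted consonance and dissonance over all mode pairs."""
    consonance = dissonance = 0.0
    for i, j in combinations(range(len(freqs)), 2):
        w = powers[i] * powers[j]            # weight by both modes' power
        d = pair_dissonance(freqs[i], freqs[j])
        dissonance += w * d
        consonance += w * (1.0 - d)          # crude complement; an assumption
    return consonance, dissonance

# Toy example: three harmonics; the two close frequencies beat against
# each other and contribute most of the dissonance.
c, d = cdns_scores(freqs=[10.0, 10.7, 20.0], powers=[1.0, 0.8, 0.5])
print(f"consonance={c:.3f}, dissonance={d:.3f}")
```

In a real analysis the frequencies and powers would come from the harmonic decomposition above, and noise would be estimated separately (e.g., as the residual activity not captured by the modes).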
With this, we had a full empirical theory of suffering.
QRI.2018: We invested in the CSHW paradigm and built ‘trading material’ for collaborations
We had our theory, and tried to get the data to test it. We decided that if STV is right, it should let us build better theory, and this should open doors for collaboration. This led us through a detailed exploration of the implications of CSHW (Johnson 2018a), and original work on the neuroscience of meditation (Johnson 2018b) and the phenomenology of time (Gomez Emilsson 2018).
QRI.2019: We synthesized a new neuroscience paradigm (Neural Annealing)
2019 marked a watershed for us in a number of ways. On the theory side, we realized there are many approaches to doing systems neuroscience, but only a few really good ones. We decided the best neuroscience research lineages were using various flavors of self-organizing systems theory to explain complex phenomena with very simple assumptions. Moreover, there were particularly elegant theories from Atasoy, Carhart-Harris, and Friston, all doing very similar things, just on different levels (physical, computational, energetic). So we combined these theories together into Neural Annealing (Johnson 2019), a unified theory of music, meditation, psychedelics, trauma, and emotional updating:
Annealing involves heating a metal above its recrystallization temperature, keeping it there long enough for the microstructure of the metal to reach equilibrium, then slowly cooling it down, letting new patterns crystallize. This releases the internal stresses of the material, and is often used to restore ductility (plasticity and toughness) in metals that have been ‘cold-worked’ and have become very hard and brittle. In a sense, annealing is a ‘reset switch’ which allows metals to go back to a more pristine, natural state after being bent or stressed. I suspect this is a useful metaphor for brains, in that they can become hard and brittle over time with a build-up of internal stresses, and these stresses can be released by periodically entering high-energy states where a more natural neural microstructure can reemerge.
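The computational analogue of this process is simulated annealing; below is a minimal sketch (with an arbitrary toy energy landscape and cooling schedule, not a claim about any brain mechanism) showing how high temperature frees a system from locked-in local minima and slow cooling lets a better configuration crystallize.

```python
import math
import random

random.seed(0)

def energy(x):
    """Toy rugged landscape with many local minima ("internal stresses")."""
    return x**2 + 10 * math.sin(3 * x)

x, temp, cooling = 4.0, 5.0, 0.999
while temp > 1e-3:
    candidate = x + random.gauss(0, 0.5)
    delta = energy(candidate) - energy(x)
    # Metropolis rule: always accept improvements; at high temperature,
    # sometimes accept worse moves -- this is what frees stuck structure.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= cooling  # slow cooling; fast cooling ("quenching") traps stress

print(f"settled near x={x:.2f}, energy={energy(x):.2f}")
```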
This synthesis allowed us to start discussing not only which brain states are pleasant, but what processes are healing.
QRI.2020: We raised money, built out a full neuroimaging stack, and expanded the organization
In 2020 the QRI technical analysis pipeline became real, and we became one of the few neuroscience groups in the world able to carry out a full CSHW analysis in-house, thanks in particular to hard work by Quintin Frerichs and Patrick Taylor. This has led to partnerships with King’s College London, Imperial College London, the National Institute of Mental Health of the Czech Republic, and the Emergent Phenomenology Research Consortium, as well as many things in the pipeline. 2020 and early 2021 also saw us onboard some fantastic talent and advisors.
III. What’s next?
We’re actively working on improving STV in three areas:
Finding a precise physical formalism for consciousness. Asserting that symmetry in the mathematical representation of an experience corresponds with the valence of the experience is a huge leap in clarity over other theories. But we also need to be able to formally generate this mathematical representation. I’ve argued previously against functionalism and for a physicalist approach to consciousness (partially echoing Aaronson), and Barrett, Tegmark, and McFadden offer notable arguments suggesting the electromagnetic field may be the physical seat of consciousness because it’s the only field that can support sufficient complexity. We believe determining a physical formalism for consciousness is intimately tied to the binding problem, and we have conjectures here that I’m excited to test.
Building better neuroscience proxies for STV. We’ve built our empirical predictions around the expectation that consonance within a brain’s connectome-specific harmonic waves (CSHW) will be a good proxy for the symmetry of that mind’s formal mathematical representation. We think this is a best-in-the-world compression for valence. But CSHW rests on a chain of inferences about neuroimaging and brain structure, and using it to discuss consciousness rests on further inferences still. We think there’s room for improvement.
Building neurotech that can help people. The team may be getting tired of hearing me say this, but: better philosophy should lead to better neuroscience, and better neuroscience should lead to better neurotech. STV gives us a rich set of threads to follow for clear neurofeedback targets, which should allow for much more effective closed-loop systems, and I am personally extraordinarily excited about the creation of technologies that allow people to “update toward wholesome”, with the neuroscience of meditation as a model.
The Symmetry Theory of Valence sounds wrong to me and is not substantiated by any empirical research I am aware of. (Edited to be nicer.) I’m sorry to post a comment so negative and non-constructive, but I just don’t want EA people to read this and think it is something worth spending time on.
Credentials: I’m doing a PhD in Neuroscience and Psychology at Princeton with a focus on fMRI research, I have a master’s in Neuroscience from Oxford, I’ve presented my fMRI research projects at multiple academic conferences, and I published a peer-reviewed fMRI paper in a mainstream journal. As far as I can tell, nobody at the Qualia Research Institute has a PhD in Neuroscience or has industry experience doing equivalent-level work. Keeping in mind credentialism is bad, I am still pointing out their lack of neuroscience credentials compared to mine because I am confused by how overwhelmingly confident they are in their claims, their incomprehensible use of neuro jargon, and how dismissive they are of my expertise. (Edited to be nicer.) https://www.qualiaresearchinstitute.org/team
There are a lot of things I don’t understand about STV, but the primary one is:
If there is dissonance in the brain, there is suffering; if there is suffering, there is dissonance in the brain. Always.
Can you provide evidence that “dissonance in the brain”, as measured by a “Consonance Dissonance Noise Signature”, is associated with suffering? This should be an easy study to run. Put people in an fMRI scanner, ask them to do things that make them feel suffering/feel peaceful, and see how the CDNS changes between conditions. I’m willing to change my skepticism about this theory if you have this evidence, but if you have this evidence, it seems bizarre that you do not lead with it.
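For concreteness, here is a minimal sketch (with synthetic numbers standing in for real per-subject scores) of the contrast such a study calls for: compute each subject’s CDNS dissonance under both conditions and run a paired t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 30

# Placeholder scores; a real study would compute these per subject from
# the neuroimaging data recorded under each condition.
dissonance_suffering = rng.normal(1.2, 0.3, n_subjects)
dissonance_peaceful = rng.normal(1.0, 0.3, n_subjects)

# STV predicts reliably higher dissonance in the suffering condition.
t, p = stats.ttest_rel(dissonance_suffering, dissonance_peaceful)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```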
_________________________________________________________
Edit: I have asked multiple times for empirical evidence to support these claims, but Mike Johnson has not produced anything.
I wish I could make more specific criticisms about why his theory makes no sense theoretically, but so much of what he is saying is incomprehensible, it’s hard to know where to start. Here’s a copy paste of something he said in a comment that got buried below about why suffering == harmonic dissonance:
He’s using “predictive coding frame” as fancy jargon here in what I’m guessing is a reference to Karl Friston’s free-energy principle work. Even knowing the context and definitions of these words, his explanation still makes no sense.
All he is doing here is saying that the brain has some reason to reduce “dissonance in the harmonic frame” in a similar way it has reasons to reduce prediction errors (i.e., mistakes in the brain’s predictions of what will happen in an environment). There are good reasons why the brain should reduce prediction errors. Mike offers no clear explanation for why the brain would have a reason to reduce neural asynchrony/“dissonance in the harmonic frame”. His unclear explanation is that dissonance == suffering, but… WHY. There is no evidence to support this.
He says “Dissonant systems shake themselves apart.” Is he saying dissonant neural networks destroy themselves and we subjectively perceive this as suffering? This makes no theoretical sense AND there’s no evidence to support it.
Edit 2: My lab is working on fMRI-based neurofeedback to improve mental health outcomes of depressed patients, if neurofeedback to reduce psychological suffering is something you’re interested in. (This is not my personal research focus; I’m just familiar with the challenges in fMRI neurofeedback paradigms.) Here’s a paper from my primary and secondary academic advisors: https://www.biorxiv.org/content/10.1101/2020.06.07.137943v1.abstract
Hi, all. Talk is cheap, and EA Forum karma may be insufficiently nuanced to convey substantive disagreements.
I’ve taken the liberty to sketch out several forecasting questions that might reflect underlying differences in opinion. Interested parties may wish to forecast on them (which the EA Forum should allow you to do directly, at least on desktop) and then make bets accordingly.
Feel free to also counterpropose (and make!) other questions if you think the existing question operationalizations are not sufficient (I’m far from knowledgeable in this field!).
Hi Linch, cool idea.
I’d suggest that 100 citations can be a rather large number for a paper, depending on what reference class you put us in, and 3,000 larger still; here’s an overview of the top-cited papers in neuroscience for what it’s worth: https://www.frontiersin.org/articles/10.3389/fnhum.2017.00363/full
Methods papers tend to be among the most highly cited; e.g., Selen Atasoy’s original work on CSHW has been cited 208 times, according to Google Scholar. Some more recent papers sit significantly below 100 citations, though this may climb over time.
Anyway my sense is (1) is possible but depends on future direction, (2) is unlikely, (3) is likely, (4) is unlikely (high confidence).
Perhaps a better measure of success could be expert buy-in. I.e., does QRI get endorsements from distinguished scientists who themselves fit criteria (1) and/or (2)? Likewise, technological usefulness, e.g. has STV directly inspired the creation of some technical device that is available to buy or is used in academic research labs? I’m much more optimistic about these criteria than citation counts, and by some measures we’re already there.
Note that the 2nd question is about total citations rather than of one paper, and 3k citations doesn’t seem that high if you’re introducing an entirely new subfield (which is roughly what I’d expect if STV is true). The core paper of Friston’s free energy principle has almost 5,000 citations for example, and it seems from the outside that STV (if true) ought to be roughly as big a deal as free energy.
For a sense of my prior beliefs about EA-encouraged academic subfields, I think 3k citations in 10 years is an unlikely but not insanely high target for wild animal welfare (maybe 20–30%?), and AI risk is likely already well beyond that (e.g., >1k citations for Concrete Problems alone).
I’d say that’s a fair assessment — one wrinkle that isn’t a critique of what you wrote, but seems worth mentioning, is that it’s an open question if these are the metrics we should be optimizing for. If we were part of academia, citations would be the de facto target, but we have different incentives (we’re not trying to impress tenure committees). That said, the more citations the better of course.
As you say, if STV is true, it would essentially introduce an entirely new subfield. It would also have implications for items like AI safety and those may outweigh its academic impact. The question we’re looking at is how to navigate questions of support, utility, and impact here: do we put our (unfortunately rather small) resources toward academic writing and will that get us to the next step of support, or do we put more visceral real-world impact first (can we substantially improve peoples’ lives? How much and how many?), or do we go all out towards AI safety?
It’s of course possible to be wrong; I’m also understanding it’s possible to be right, but take the wrong strategic path and run out of gas. Basically I’m a little worried that racking up academic metrics like citations is less a panacea than it might appear, and we’re looking to hedge our bets here.
For what it’s worth, we’ve been interfacing with various groups working on emotional wellness neurotech and one internal metric I’m tracking is how useful a framework STV is to these groups; here’s Jay Sanguinetti explaining STV to Shinzen Young (first part of the interview):
https://open.spotify.com/episode/6cI9pZHzT9sV1tVwoxncWP?si=S1RgPs_CTYuYQ4D-adzNnA&dl_branch=1
I think of the metrics I mentioned above as proxies rather than as the underlying targets, which is some combination of:
a) Is STV true?
b) Conditional upon STV being true, is it useful?
What my forecasting questions aimed to do is shed light on (a). I agree that academia and citations aren’t the best proxy. They may in some cases have a conservatism bias (I think trusting the apparent academic consensus on AI risk in 2014 would’ve been a mistake for early EAs), but they are also not immune to falsities/crankery (cf. the replication crisis). In addition, standards for truth and usefulness are different within EA circles than in academia, partially because we are trying to answer different questions.
This is especially an issue as the areas that QRI is likely to interact with (consciousness, psychedelics) seem from the outside to be more prone than average to falseness and motivated cognition, including within academia.
This is what I was trying to get at with “will Luke Muehlhauser say statements to the effect that the Symmetry Theory of Valence is substantively true?” because Luke is a non-QRI-affiliated person within EA who (a) is respected and (b) has thought about concepts adjacent to QRI’s work. Bearing in mind that Luke is very far from a perfect oracle, I would still trust Luke’s judgement on this more than that of an arbitrarily selected academic in an adjacent field.
I think the actual question I’m interested in is something like “In year X, will a panel of well-respected EAs who are (a) not affiliated with QRI, (b) hold very different views from each other, and (c) have thought about things adjacent to QRI’s work, have updated to believing STV to be substantively true?” But I was unable to come up with a clean question operationalization in the relatively brief amount of time I gave myself.
People are free to counterpropose and make their own questions.
Hi Linch, that’s very well put. I would also add a third possibility, (c): “is STV false but generative?” I explore this a little here; the core thesis is summarized in a graphic in that piece.
I.e., STV could be false in a metaphysical sense, but insofar as the brain is a harmonic computer (a strong reframe of CSHW), it could be performing harmonic gradient descent. Fully expanded, there would be four cases:
STV true, STHR true
STV true, STHR false
STV false, STHR true
STV false, STHR false
Of course, ‘true and false’ are easier to navigate if we can speak of absolutes; STHR is a model, and ‘all models are wrong; some are useful.’
For what it’s worth, I read this comment as constructive rather than non-constructive.
If I write a long report and an expert in the field thinks the entire premise is flawed for specific technical reasons, I’d much rather they point this out than worry about niceness and never get around to mentioning it, causing my report to languish in obscurity without me knowing why (or worse, causing my false research to actually be used!).
I’m a bit hesitant to upvote this comment given how critical it is [was] + how little I know about the field (and thus whether the criticism is deserved), but I’m a bit relieved/interested to see I wasn’t the only one who thought it sounded really confusing/weird. I have somewhat skeptical priors towards big theories of consciousness and suffering (sort of/it’s complicated) + towards theories that rely on lots of complicated methods/jargon/theory (again, sort of/with caveats)—but I also know very little about this field and so I couldn’t really judge. Thus, I’m definitely interested to see the opinions of people with some experience in the field.
Hi Harrison, appreciate the remarks. My response would be more-or-less an open-ended question: do you feel this is a valid scientific mystery? And, what do you feel an answer would/should look like? I.e., correct answers to long-unsolved mysteries might tend to be on the weird side, but there’s “useful generative clever weird” and “bad wrong crazy timecube weird”. How would you tell the difference?
Haha, I certainly wouldn’t label what you described/presented as “timecube weird.” To be honest, I don’t have a very clear cut set of criteria, and upon reflection it’s probable that the prior is a bit over-influenced by my experiences with some social science research and theory as opposed to hard science research/theory. Additionally, it’s not simply that I’m skeptical of whether the conclusion is true, but more generally my skepticism heuristics for research is about whether whatever is being presented is “A) novel/in contrast with existing theories or intuitions; B) is true; and/or C) is useful.” For example, some theory might be basically rehashing what existing research already has come to consensus on but simply worded in a very different way that adds little to existing research (aside from complexity); alternatively, something could just be flat out wrong; alternatively, something could be technically true and novel as explicitly written, but that is not very useful (e.g., tautological definitions), whereas the common interpretation is wrong (but would be useful if it were right).
Still, two of the key features here that contributed to my mental yellow flags were:
The emphasis on jargon and seemingly ambiguous concepts (e.g., “harmony”) vs. a clear, lay-oriented narrative that explains the theory—crucially including how it is different from other plausible theories (in addition to “why should you believe this? / how did we test this?”). STEM jargon definitely seems different from social science jargon in that STEM jargon seems to more often require more knowledge/experience to get a sense of whether something is nonsense strung together or just legitimate-but-complicated analyses, whereas I can much more easily detect nonsense in social science work when it starts equivocating ideas and making broad generalizations.
(To a lesser extent) The emphasis on mathematical analyses and models for something that seemed to call for a broader approach/acceptance of some ambiguity. (Of course, it’s necessary to mathematically represent some things, but I’m a bit skeptical of systems that try to break down such complex concepts as consciousness and affective experience into a mathematical/quantified representation, just like how I’ve been skeptical of many attempts to measure/operationalize complex conceptual variables like “culture” or “polity” in some social sciences, even if I think doing so can be helpful relative to doing nothing—so long as people still are very clear-eyed about the limitations of the quantification)
In the end, I don’t have strong reason to believe that what you are arguing for is wrong, but especially given points like I just mentioned I haven’t updated my beliefs much in any direction after reading this post.
Hi Harrison, that’s very helpful. I think it’s a challenge to package fairly technical and novel research into something that’s both precise and intuitive. Definitely agree that “harmony” is an ambiguous concept.
One of the interesting aspects of this work is it does directly touch on issues of metaphysics and ontology: what are the natural kinds of reality? What concepts ‘carve reality at the joints’? Most sorts of research can avoid dealing with these questions directly, and just speak about observables and predictions. But since part of what we’re doing is to establish valence as a phenomenological natural kind, we have to make certain moves, and these moves may raise certain yellow flags, as you note, since often when these moves are made there’s some philosophical shenanigans going on. That said, I’m happy with the overall direction of our work, which has been steadily more and more empirical.
One takeaway that I do hope I can offer is the deeply philosophically unsatisfactory nature of existing answers in this space. Put simply, no one knows what pleasure and suffering are, or at least no one has definitions that are coherent across all the domains they’d like to define them in. This is an increasing problem as we tackle e.g. problems of digital sentience and fundamental questions of AI alignment. I’m confident in our research program, but even more confident that the questions we’re trying to grapple with are important to address directly, and that there’s no good ‘default hypothesis’ at present.
People are asking for object-level justifications for the Symmetry Theory of Valence:
The first thing to mention is that the Symmetry Theory of Valence (STV) is *really easy to strawman*. It really is the case that there are many near enemies of STV that sound exactly like what a naïve researcher who is missing developmental stages (e.g. is a naïve realist about perception) would say. That we like pretty symmetrical shapes of course does not mean that symmetry is at the root of valence; that we enjoy symphonic music does not mean harmony is “inherently pleasant”; that we enjoy nice repeating patterns of tactile stimulation does not mean, well, you get the idea...
The truth of course is that at QRI we really are meta-contrarian intellectual hipsters. So the weird and often dumb-sounding things we say are already taking into account the criticisms people in our people-cluster would make and are taking the conversation one step further. For instance, we think digital computers cannot be conscious, but this belief comes from entirely different arguments than those that justify such beliefs out there. We think that the “energy body” is real and important, except that we interpret it within a physicalist paradigm of dynamic systems. We take seriously the possible positive sum game-theoretical implications of MDMA, but not out of a naïve “why can’t we all love each other?” impression, but rather, based on deep evolutionary arguments. And we take seriously non-standard views of identity, not because “we are all Krishna”, but because the common-sense view of identity turns out to, in retrospect, be based on illusion (cf. Parfit, Kolak, “The Future of Personal Identity”) and a true physicalist theory of consciousness (e.g. Pearce’s theory) has no room for enduring metaphysical egos. This is all to say that straw-manning the paradigms explored at QRI is easy; steelmanning them is what’s hard. Can anyone here make a Titanium Man out of them instead? :-)
Now, I am indeed happy to address any mischaracterization of STV. Sadly, to my knowledge nobody outside of QRI really “gets it”, so I don’t think there is anyone other than us (and possibly Scott Alexander!) who can make a steelman of STV. My promise is that “there is something here” and that to “get it” is not merely to buy into the theory blindly, but rather, it is what happens when you give it enough benefit of the doubt, share a sufficient number of background assumptions, and have a wide enough experience base that it actually becomes a rather obvious “good fit” for all of the data available.
For a bit of history (and properly giving due credit), I should clarify that Michael Johnson is the one who came up with the hypothesis in Principia Qualia (for a brief history see: STV Primer). I started out very skeptical of STV myself, and in fact it took about three years of thinking it through in light of many meditation and exotic high-energy experiences to be viscerally convinced that it’s pointing in the right direction. I’m talking about a process of elimination where, for instance, I checked if what feels good is at the computational level of abstraction (such as prediction error minimization) or if it’s at the implementation level (i.e. dissonance). I then developed a number of technical paradigms for how to translate STV into something we could actually study in neuroscience and ultimately try out empirically with non-invasive neurotech (in our case, light-sound-vibration systems that produce multi-modally coherent high-valence states of consciousness). Quintin Frerichs (who gave a presentation about Neural Annealing to Friston) has since been working hard on the actual neuroscience of it in collaboration with Johns Hopkins University, Daniel Ingram, Imperial College and others. We are currently testing the theory in a number of ways and will publish a large paper based on all this work.
For clarification, I should point out that what is brilliant (IMO) about Mike’s Principia Qualia is that he breaks down the problem of consciousness in such a way that it allows us to divide and conquer the hard problem of consciousness. Indeed, once broken down into his 8 subproblems, calling it the “hard problem of consciousness” sounds as bizarre as it would sound to us to hear about “the hard problem of matter”. We do claim that if we are able to solve each of these subproblems, that indeed the hard problem will dissolve. Not the way illusionists would have it (where the very concept of consciousness is problematic), but rather, in the way that electricity and lightning and magnets all turned out to be explained by just 4 simple equations of electromagnetism. Of course the further question of why do those equations exist and why consciousness follows such laws remains, but even that could IMO be fully explained with the appropriate paradigm (cf. Zero Ontology).
The main point to consider here w.r.t. STV is that symmetry is posited to be connected with valence at the implementation level of analysis. This squarely and clearly distinguishes STV from behaviorist accounts of valence (e.g. “behavioral reinforcement”) and also from algorithmic accounts (e.g. compression drive or prediction error minimization). Indeed, with STV you can have a brain (perhaps a damaged brain, or one in an exotic state of consciousness) where prediction errors are not in fact connected to valence. Rather, the brain evolved to recruit valence gradients in order to make better predictions. Similarly, STV predicts that what makes activation of the pleasure centers feel good is precisely that doing so gives rise to large-scale harmony in brain activity. This is exciting because it means the theory predicts we can actually observe a double dissociation: if we inhibit the pleasure centers while exogenously stimulating large-scale harmonic patterns we expect that to feel good, and we likewise expect that even if you activate the pleasure centers you will not feel good if something inhibits the large-scale harmony that would typically result. Same with prediction errors, behavior, etc.: we predict we can doubly-dissociate valence from those features if we conduct the right experiment. But we won’t be able to dissociate valence from symmetry in the formalism of consciousness.
Now, of course we currently can’t see consciousness directly, but we can infer a lot of invariants about it with different “projections”, and so far all are consistent with STV:
Of special note, I’d point you to one of the studies discussed in the 2020 STV talk: The Human Default Consciousness and Its Disruption: Insights From an EEG Study of Buddhist Jhāna Meditation. It shows a very tight correspondence between jhanas and various smoothly-repeating EEG patterns, including seizure-like activity that, unlike normal seizures (of typically bad valence), shows up as having a *harmonic structure*. Here we find a beautiful correspondence between (a) a sense of peace/jhanic bliss, (b) phenomenological descriptions of simplicity and smoothness, (c) valence, and (d) actual neurophysiological data mirroring these phenomenological accounts. At QRI we have observed something quite similar studying the EEG patterns of other ultra-high-valence meditation states (which we will hopefully publish in 2022). I expect this pattern to hold for other exotic high-valence states in one way or another, ranging from quality of orgasm to exogenous opioids.
Phenomenologically speaking, STV is not only capable of describing and explaining why certain meditation or psychedelic states of consciousness feel good or bad; it can in fact be used as a navigation aid! You can introspect on the ways energy does not flow smoothly, the blockages and pinch points that make it reflect in discordant ways, or zone in on areas of the “energy body” that are out of sync with one another, and then specifically use attention to “comb the field of experience”. This approach, the purely secular climbing of the harmony gradient, leads all on its own to amazing high-valence states of consciousness (cf. Buddhist Annealing). I’ll probably make a video series with meditation instructions so people can experience this themselves first-hand. It doesn’t take very long, actually. Also, STV as a paradigm can be used to experience more pleasant trajectories along the “Energy X Complexity landscape” of a DMT trip (something I even talked about at the SSC meetup online!). In a simple quip, I’d say “there are good and bad ways of vibing on DMT, and STV gives you the key to the realms of good vibes” :-)
Another angle: we can find subtle ways of dissociating valence from e.g. chemicals. If you take stimulants but don’t feel the nice buzz that provides a “working frame” for your mental activity, they will not feel good. At the same time, without stimulants you can get that pleasant productivity-enhancing buzz with the right tactile patterns of stimulation. Indeed, the “buzz” that characterizes the effects of many euphoric drugs (and the quality of e.g. metta meditation) is precisely a valence effect, one that provides a metronome to self-organize around and which can feel bad when you don’t follow where it takes you. Literally, one of the core reasons why MDMA feels better than LSD, which feels better than DOB, is precisely that the “quality of the buzz” of each of these highs is different. MDMA’s buzz is beautiful and harmonious; DOB’s buzz is harsh and dissonant. What’s more, such a buzz can work as task-specific dissonance guide-rails, if you will: when you do buzz-congruent behaviors you feel a sense of inner harmony, whereas when you do buzz-incongruent behaviors you feel a sense of inner turmoil. Hence what kind of buzz one experiences is deeply consequential! All of this falls rather nicely within STV; IMO other theories need to keep adding epicycles to keep up.
Hopefully this all worked as useful clarifications.
It sounds like you’re saying we all need to become more suggestible and just feel like your theory is true before we can understand it. Do you see what poor reasoning that would be?
I take Andrés’s point to be that there’s a decently broad set of people who took a while to see merit in STV, but eventually did. One can say it’s an acquired taste, something that feels strange and likely wrong at first, but is surprisingly parsimonious across a wide set of puzzles. Some of our advisors approached STV with significant initial skepticism, and it took some time for them to come around. That there are at least a few distinguished scientists who like STV isn’t proof it’s correct, but may suggest withholding some forms of judgment.
Thanks Andrés, this helped me get oriented around the phenomenological foundations of what y’all are exploring.
Edit: This comment now makes less sense, given that Abby has revised the language of her comment.
Abby,
I strongly endorse what you say in your last paragraph:
However, I’d like to push back on the tone of your reply. If you’re sorry for posting a negative non-constructive comment, why not try to be a bit more constructive? Why not say something like “I am deeply skeptical of this theory and do not at this moment think it’s worth EAs spending time on. [insert reasons]. I would be willing to change my view if there was evidence.”
Apologies for being pedantic, but I think it’s worth the effort to try and keep the conversation on the forum as constructive as possible!
Hi Jpmos,
I think context is important here. This is not an earnest but misguided post from an undergrad with big ideas and little experience. This is a post from an organization trying to raise hundreds of thousands of dollars. You can check out their website if you want, the front page has a fundraising advertisement.
Further, there are a lot of fancy buzzwords in this post (“connectome!”) and enough jargon that people unfamiliar with the topic might think there is substance here that they just don’t understand (see Harrison’s comment: “I also know very little about this field and so I couldn’t really judge”).
As somebody who knows a lot about this field, I think it’s important that my opinion on these ideas is clearly stated. So I will state it again.
There is no a priori reason to believe any of the claims of STV. There is no empirical evidence to support STV. To an expert, these claims do not sound “interesting and plausible but unproven”, they sound “nonsensical and presented with baffling confidence”.
People have been observing brain oscillations at different frequencies and at different powers for about 100 years. These oscillations have been associated with different patterns of behavior, ranging from sleep stages to memory formation. Nobody has observed asynchrony to be associated with anything like suffering (as far as I’m aware, but please present evidence if I’m mistaken!).
fMRI is a technique that doesn’t measure the firing of neurons (it measures the oxygen consumed over relatively big patches of neurons) and is extremely poorly suited to provide evidence for STV. A better method would be MEG (expensive) or EEG (extremely affordable). If the Qualia Research Institute were a truth-seeking institution, they would have either run the simple experiment I proposed themselves, or had any of the neuroscientists they claim to be collaborating with run it for them.
This is a bad post and it should be called out as such. I would have been more gentle if this was a single misguided researcher and not the head of an organization that publishes a lot of other nonsense too.
This reads to me as insinuating fraud, without much supporting evidence.
I appreciate that in other comments you followed up with more concrete criticisms, but this still feels against the “Keep EA Weird” spirit to me. If we never spent a million or two on something that turned out to be nonsense, we wouldn’t be applying hits-based giving very well.
(Despite the username, I have no affiliation with QRI. I’ll admit to finding the problem worth working on. )
Keeping EA honest and rigorous is much higher priority. Making excuses for incompetence or lack of evidence base is the opposite of EA.
I agree that honesty is more important than weirdness. Maybe I’m being taken, but I see miscommunication and not dishonesty from QRI.
I am not sure what an appropriate standard of rigor is for a preparadigmatic area. I would welcome more qualifiers and softer claims.
At the very least, miscommunication this bad is evidence of serious incompetence at QRI. I think you are mistaken to want to excuse that.
Hi all, I messaged some with Holly a bit about this, and what she shared was very helpful. I think a core part of what happened was a mismatch of expectations: I originally wrote this content for my blog and QRI’s website, and the tone and terminology were geared toward “home team content”, not “away team content”. Some people found both the confidence and the somewhat dense terminology off-putting, and I think it’s reasonable of them to raise questions. As a takeaway, I’ve updated that crossposting involves some pitfalls and intend to do things differently next time.
Thanks, valence. I do think the ‘hits-based giving’ frame is important to develop, although I understand it doesn’t have universal support, as some of the implications may be difficult to navigate.
And thanks for appreciating the problem; it’s sometimes hard for me to describe how important the topic feels and all the reasons for working on it.
Edit: probably an unhelpful comment
Hi Mike,
I am comfortable calling myself “somebody who knows a lot about this field”, especially in relation to the average EA Forum reader, our current context.
I respect Karl Friston as well, I’m looking forward to reading his thoughts on your theory. Is there anything you can share?
The CSHW stuff looks potentially cool, but it’s separate from your original theory, so I don’t want to get too deep into it here. The only thing I would say is that I don’t understand why the claims of your original theory cannot be investigated using standard (cheap) EEG techniques. This is important if a major barrier to finding empirical evidence for your theory is funding. Could you explain why standard EEG is insufficient to investigate the synchrony of neuronal firing during suffering?
I was very aggressive with my criticism of your theory, partially because I think it is wrong (again, the basis of your theory, “the symmetry of this representation will encode how pleasant the experience is”, makes no sense to me), but also because of how confidently you describe your theory with no empirical evidence. So I happily accept being called arrogant and would also happily accept being shown how I am wrong. My tone is in reaction to what I feel is your unfounded confidence, and other posts like “I think all neuroscientists, all philosophers, all psychologists, and all psychiatrists should basically drop whatever they’re doing and learn Selen Atasoy’s “connectome-specific harmonic wave” (CSHW) framework.” https://opentheory.net/2018/08/a-future-for-neuroscience/
You link to your other work in this post, and are raising money for your organization (which I think will redirect money from organizations that I think are doing more effective work), so I think it’s fair for my comments to be in reaction to things outside the text of your original post.
I’m glad to hear you feel good about your background and are filled with confidence in yourself and your field. I think the best work often comes from people who don’t at first see all the challenges involved in doing something, because often those are the only people who even try.
At first I was a little taken aback by your tone, but to be honest I’m a little amused by the whole interaction now.
The core problem with EEG is that the most sophisticated analyses depend on source localization (holographic reconstruction of brain activity), and accurate source localization from EEG remains an unsolved problem, at least at the resolution and confidence we’d need. In particular, we’ve looked at various measures of coherence as applied to EEG and found them all wanting in various ways. I notice some backtracking on your criticism of CSHW. ;) It’s a cool method, not without downsides, but it occupies a cool niche. I have no idea what your research is about, but it might be useful for you to learn about for some purposes.
I’m glad you’re reading more of our ‘back issues’, as it were. We have some talks on our YouTube channel as well (including the NA presentation to Friston), although not all of our work on STV is public yet.
If you share what your research is about, and any published work, I think it’d help me understand where your critiques are coming from a little better. Totally up to you though.
Hi Jpmos, really appreciate the comments. To address the question of evidence: this is a fairly difficult epistemological situation, but we’re working with high-valence datasets from Daniel Ingram & Harvard, and Imperial College London (jhana and MDMA data, respectively), and looking for signatures of high harmony.
Neuroimaging is a pretty messy thing, there are no shortcuts to denoising data, and we are highly funding constrained, so I’m afraid we don’t have any peer-reviewed work published on this yet. I can say that initial results seem fairly promising and we hope to have something under review in 6 months. There is a wide range of tacit evidence that stimulation patterns with higher internal harmony produce higher valence than dissonant patterns (basically: music feels good, nails on a chalkboard feels bad), but this is in a sense ‘obvious’ and only circumstantial evidence for STV.
Happy to ‘talk shop’ if you want to dig into details here.
Hi Abby, I’m happy to entertain well-meaning criticism, but it feels your comment rests fairly heavily on credentialism and does not seem to offer any positive information, nor does it feel like high-level criticism (“their actual theory is also bad”). If your background is as you claim, I’m sure you understand the nuances of “proving” an idea in neuroscience, especially with regard to NCCs (neural correlates of consciousness) — neuroscience is also large enough that “I published a peer-reviewed fMRI paper in a mainstream journal” isn’t a particularly ringing endorsement of domain knowledge in affective neuroscience. If you do have domain knowledge sufficient to take a crack at the question of valence, I’d be glad to hear your ideas.
For a bit of background to theories of valence in neuroscience I’d recommend my forum post here—it goes significantly deeper into the literature than this primer.
Again, I’m not certain you read my piece closely, but as mentioned in my summary, most of our collaboration with British universities has been with Imperial (Robin Carhart-Harris’s lab, though he recently moved to UCSF) rather than Oxford, although Kringelbach has a great research center there and Atasoy (creator of the CSHW reference implementation, which we independently reimplemented) does her research there, so we’re familiar with the scene.
Hi Mike! I appreciate your openness to discussion even though I disagree with you.
Some questions:
1. The most important question: Why would synchrony between different brain areas involved in totally different functions be associated with subjective wellbeing? I fundamentally don’t understand this. For example, asynchrony has been found to be useful in memory as a way of differentiating similar but different memories during encoding/rehearsal/retrieval. Asynchrony doesn’t seem like a bad thing that the brain has a reason to reduce, the way it has reasons to reduce prediction errors. Please link to brain studies that have found asynchrony leads to suffering.
2. If your theory is focused on neural oscillations, why don’t you use EEG to measure the correlation between neural synchrony and subjective experience? Surely EEG is a more accurate method and vastly cheaper than fMRI?
3. If you are funding constrained, why are none of your collaborators willing to run this experiment for you? Running fMRI and EEG experiments at Princeton is free. I see you have multiple Princeton affiliates on your team, and we even have Michael Graziano as a faculty member who is deeply interested in consciousness and understands fMRI.
My advice is to run the experiment I described in my original comment. Put people in an fMRI scanner (or EEG or MEG), ask them to do things that make them feel suffering/feel peaceful, and see how the CDNS changes between conditions. This is an extremely basic experiment and I am confused why you would be so confident in your theory before running this.
Hi Abby, thanks for the clear questions. In order:
In brief, asynchrony levies a complexity and homeostatic cost that harmony doesn’t. A simple story here is that dissonant systems shake themselves apart; we can draw a parallel between dissonance in the harmonic frame and free energy in the predictive coding frame.
We work with all the high-quality data we can get our hands on. We do have hd-EEG data of jhana meditation, but EEG data as you may(?) know is very noisy and ‘NCC-style’ research with EEG is a methodological minefield.
We know and like Graziano. I’ll share the idea of using Princeton facilities with the team.
To be direct, years ago I felt as you did about the simplicity of the scientific method in relation to neuroscience; “Just put people in an fMRI, have them do things, analyze the data; how hard can it be?” — experience has cured me of this frame, however. I’ve learned that neuroimaging data pipelines are often held together by proverbial duct tape, neuroimaging is noisy, the neural correlates of consciousness frame is suspect and existing philosophy of mind is rather bonkers, and to even say One True Thing about the connection between brain and mind is very hard (and expensive) indeed. I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD, and I hope you can turn that into determination to refactor the system towards elegance, rather than being progressively discouraged by all the hidden mess.
:)
I appreciate your direct answer to my question, but I do not understand what you are trying to say. I am familiar with Friston and the free-energy principle, so feel free to explain your theory in those terms. All you are doing here is saying that the brain has some reason to reduce “dissonance in the harmonic frame” (a phrase I have other issues with) in a similar way it has reasons to reduce prediction errors. There are good reasons why the brain should reduce prediction errors. You say (but do not clearly explain why) there’s a parallel here where the brain should reduce neural asynchrony/dissonance in the harmonic frame. You posit neural asynchrony is suffering, but you do not explain why in an intelligible way. “Dissonant systems shake themselves apart.” Are you saying dissonant neural networks destroy themselves and we subjectively perceive this as suffering? This makes no sense. Maybe you’re trying to say something else, but I have made my confusion about the link between suffering and asynchrony extremely clear multiple times now, and you have not offered an explanation that I understand.
I mean, I’ve done ~7 peer-reviewed conference presentations on my multiple fMRI research projects, and I also do multi-site longitudinal research into the mental health of graduate students (with thousands of participants), but thanks for the heads up ;)
I agree neuroimaging is extremely messy and discouraging, but you’re the one posting about successfully building an fMRI analysis pipeline to run this specific analysis to support your theory. I am very annoyed that your response to my multiple requests for any empirical data to support your theory is basically “science is hard”, as opposed to “no experiment, dataset, or analysis is perfect, but here is some empirical evidence that is at least consistent with my theory.”
I wish you came at this by saying, “Hey I have a cool idea, what do you guys think?” But instead you’re saying “We have a full empirical theory of suffering” with as far as I can tell, nothing to back this up.
I know that this is the EA forum and it’s bad that two people are trading arch emoticons...but I know I’m not the only one enjoying Abby Hoskin’s response to someone explaining her future journey to her.
Inject this into my veins.
Maybe more constructively (?) I think the OP responses have updated others in support of Abby’s concerns.
In the past, sometimes I have said things that turned out not to be as helpful as I thought. In those situations, I think I have benefitted from someone I trust reviewing the discussion and offering another perspective to me.
[Own views]
I’m not sure ‘enjoy’ is the right word, but I also noticed the various attempts to patronize Hoskin.
This ranges from the straightforward “I’m sure once you know more about your own subject you’ll discover I am right”:
‘Well-meaning suggestions’ alongside the implication that her criticism arises from some emotional reaction rather than her strong and adverse judgement of its merit.
[Adding a smiley after something insulting or patronizing doesn’t magically make you the ‘nice guy’ in the conversation, but makes you read like a passive-aggressive ass who is nonetheless too craven for candid confrontation. I’m sure once you reflect on what I said and grow up a bit you’ll improve so your writing inflicts less of a tax on our collective intelligence and good taste. I know you’ll make us proud! :)]
Or just straight-up belittling her knowledge and expertise with varying degrees of passive-aggressiveness.
I think this sort of smug and catty talking down would be odious even if the OP really did have much more expertise than their critic: I hope I wouldn’t write similarly in response to criticism (however strident) from someone more junior in my own field.
What makes this kinda amusing, though, is although the OP is trying to set himself up as some guru trying to dismiss his critic with the textual equivalent of patting her on the head, virtually any reasonable third party would judge the balance of expertise to weigh in the other direction. Typically we’d take, “Post-graduate degree, current doctoral student, and relevant publication record” over “Basically nothing I could put on an academic CV, but I’ve written loads of stuff about my grand theory of neuroscience.”
In that context (plus the genders of the participants) I guess you could call it ‘mansplaining’.
Greg, I have incredible respect for you as a thinker, and I don’t have a particularly high opinion of the Qualia Research Institute. However, I find your comment to be unnecessarily mean: every substantive point you raise could have been made more nicely and less personally, in a way more conducive to mutual understanding and more focused on an evaluation of QRI’s research program. Even if you think that Michael was condescending or disrespectful to Abby, I don’t think he deserves to be treated like this.
Hmm I have conflicting feelings about this. I think whenever you add additional roadblocks or other limitations on criticism, or suggestions that criticisms can be improved, you
a) see the apparent result that criticisms that survive the process will on average be better.
b) fail to see the (possibly larger) effect that there’s an invisible graveyard of criticisms that people choose not to voice because it’s not worth the hassle.
At the same time, being told that your life work is approximately useless is never a pleasant feeling, and it’s not always reasonable to expect people to handle it with perfect composure (Thankfully nothing of this magnitude has ever happened to me, but I was pretty upset when an EA Forum draft I wrote in only a few days had to be scrapped or at least rewritten because it assumed a mathematical falsehood). So while I think Mike’s responses to Abby are below a reasonable bar of good forum commenting norms, I think I have more sympathy for his feelings and actions here than Greg seems to.
So I’m pretty conflicted. My own current view is that I endorse Abby’s comments and tone as striking the right balance for the forum, and I endorse Greg’s content but not the tone.
But I think reasonable people can disagree here, and we should also be mindful that when we ask people to rephrase substantive criticisms to meet a certain stylistic bar (see also comments here), we are implicitly making criticisms more onerous, which arguably has pretty undesirable outcomes.
I want to say something more direct:
Based on how the main critic Abby was treated, how the OP replies to comments in a way that selectively chooses what content to respond to, and the way they respond to direct questions with jargon, I place serious weight on the possibility that this isn’t a good-faith conversation.
This is not a stylistic issue; in fact it seems to be exactly the opposite: someone is taking on the form of EA norms and styles (maintaining a positive tone, being sympathetic) while actively and odiously undermining someone.
I have been in several environments where this behavior is common.
At the risk of policing or adding to the noise (I am not willing to read more of this to update myself), I am writing this because I am concerned you and others who are conscientious are being sucked into this.
Hi Charles, I think several people (myself, Abby, and now Greg) were put in some pretty uncomfortable positions across these replies. By posting, I open myself to replies, but I was pretty surprised by some of the energy of the initial comments (as apparently were others; both Abby and I edited some of our comments to be less confrontational, and I’m happy with and appreciate that).
Happy to answer any object level questions you have that haven’t been covered in other replies, but this remark seems rather strange to me.
For the avoidance of doubt, I remain entirely comfortable with the position expressed in my comment: I wholeheartedly and emphatically stand behind everything I said. I am cheerfully reconciled to the prospect some of those replying to or reading my earlier comment judge me adversely for it—I invite these folks to take my endorsement here as reinforcing whatever negative impressions they formed from what I said there.
The only thing I am uncomfortable with is that someone felt they had to be anonymous to criticise something I wrote. I hope the measure I mete out to others makes it clear I am happy for similar to be meted out to me in turn. I also hope reasonable folks like the anonymous commenter are encouraged to be forthright when they think I err—this is something I would be generally grateful to them for, regardless of whether I agree with their admonishment in a particular instance. I regret to whatever degree my behaviour has led others to doubt this is the case.
Greg, I want to bring two comments that have been posted since your comment above to your attention:
1. Abby said the following to Mike:
2. Another anonymous commenter (thanks to Linch for posting) highlights that Abby’s line of questioning regarding EEGs ultimately resulted in a response that satisfied her and that she didn’t have the expertise to evaluate further:
Thanks, but I’ve already seen them. Presuming the implication here is something like “Given these developments, don’t you think you should walk back what you originally said?”, the answer is “Not really, no”: subsequent responses may be better, but that is irrelevant to whether earlier ones were objectionable; one may be making good points, but one can still behave badly whilst making them.
(Apologies if I mistake what you are trying to say here. If it helps generally, I expect—per my parent comment—to continue to affirm what I’ve said before however the morass of commentary elsewhere on this post shakes out.)
Gregory, I’ll invite you to join the object-level discussion between Abby and me.
Just want to be clear: the main post isn’t about analyzing eigenmodes with EEG data. It’s very funny that when I am intellectually honest enough to say I don’t know about one specific EEG analysis that doesn’t exist and is not referenced in the main text, people conclude that I don’t have the expertise to comment on fMRI data analysis or the nature of neural representations.
Meanwhile, QRI does not have the expertise to comment on many of the things they discuss, but they are super confident about everything, and in the original posts especially did not clearly indicate what is speculation versus what is supported by research.
I continue to be unconvinced with the arguments laid out, but I do think both the tone of the conversation and Mike Johnson’s answers improved after he was criticized. (Correlation? Causation?)
Generally speaking, I agree with the aphorism “You catch more flies with honey than vinegar.”
For what it’s worth, I interpreted Gregory’s critique as an attempt to blow up the conversation and steer away from the object level, which felt odd. I’m happiest speaking of my research, and fielding specific questions about claims.
Hi Gregory, I’ll own that emoticon. My intent was not to belittle, but to show I’m not upset and am actually enjoying the interaction. To be crystal clear, I have no doubt Hoskin is a sharp scientist, and I cast no aspersions on her work. Text can be a pretty difficult medium for conveying emotions (things can easily come across as either flat or aggressive).
Hi Abby, to give a little more color on the data: we’re very interested in CSHW (connectome-specific harmonic waves) as it gives us a way to infer harmonic structure from fMRI, which we’re optimistic is a significant factor in brain self-organization. (This is still a live hypothesis, not established fact; Atasoy is still proving her paradigm, but we really like it.)
We expect this structure to be highly correlated with global valence, and to show strong signatures of symmetry/harmony during high-valence states. The question we’ve been struggling with as we’ve been building this hypothesis is “what is a signature of symmetry/harmony?” — there’s a bit of research from Stanford (Chon) on quantifying consonance in complex waveforms and some cool music theory based on Helmholtz’s work, but this appears to be an unsolved problem. Our “CDNS” approach basically looks at pairwise relationships between harmonics to quantify the degree to which they’re in consonance or dissonance with each other. We’re at the stage where we have the algorithm, but need to validate it on audio samples first before applying it too confidently to the brain.
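To make the pairwise idea concrete, here’s a minimal sketch of the flavor of computation involved. To be clear, this uses Sethares’ published two-partial roughness curve as a stand-in; it is an illustration of the general approach, not our actual CDNS code:

```python
import math
from itertools import combinations

def pair_dissonance(f1, a1, f2, a2):
    # Sethares' (1993) roughness curve for two pure partials:
    # roughness peaks when the frequency gap sits inside a critical band
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)
    x = abs(f2 - f1)
    return min(a1, a2) * (math.exp(-3.5 * s * x) - math.exp(-5.75 * s * x))

def total_dissonance(partials):
    # sum roughness over all pairs of (frequency_hz, amplitude) partials
    return sum(pair_dissonance(f1, a1, f2, a2)
               for (f1, a1), (f2, a2) in combinations(partials, 2))

fifth    = [(220.0, 1.0), (330.0, 1.0)]  # 3:2 ratio, consonant
semitone = [(220.0, 1.0), (233.1, 1.0)]  # ~16:15 ratio, dissonant
print(total_dissonance(fifth))     # ~0.018
print(total_dissonance(semitone))  # ~0.16, roughly 9x rougher
```

The point is that once you can extract a set of (frequency, amplitude) pairs from the brain’s harmonics, this same kind of pairwise scoring applies.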
There’s also a question of what datasets are ideal for the sort of thing we’re interested in. Extreme valence datasets are probably the most promising, states of extreme pleasure or extreme suffering. We prefer datasets involving extreme pleasure, for two reasons:
(1) We viscerally feel better analyzing this sort of data than states of extreme suffering;
(2) fMRI’s time resolution is such that the best results will come from mental states with high structural stability. We expect this structural stability to be much higher during pleasure than suffering.
As such we’ve been focusing on collecting data from meditative jhana states, and from MDMA states. There might be other states that involve reliably good emotion that we can study, but these are the best we’ve found conceptually so far.
Lastly, there’s been the issue of neuroimaging pipelines and CSHW. Atasoy’s work is not open source, so we had to reimplement her core logic (big thanks to Patrick here), and we ended up collaborating with an external group on a project to combine this core logic with a neuroimaging packaging system. I can’t share all the details here, as our partner doesn’t want to be public about their involvement yet, but this is thankfully wrapping up soon.
I wish we had a bunch of deeply analyzed data we could send you in direct support of STV! I agree with you that that is the ideal, and that you’re correct to ask for it. Sadly we don’t have it at this point, but I’m glad to say a lot of the preliminaries have now been taken care of and things are moving. I hope my various comments here haven’t come across as disrespectful (and I sincerely apologize if they have; that wasn’t my intention, but if that’s been your interpretation I accept it, sorry!); there’s just a lot of high-context stuff here that’s hard to package up into something neat and tidy, and overall what clarity we’ve been able to find on this topic has been very hard-won.
Hi Abby, to be honest the parallel between free-energy-minimizing systems and dissonance-minimizing systems is a novel idea we’re playing with (or at least I believe it’s novel; to my knowledge my colleague Andrés coined it), and I’m not at full liberty to share all the details before we publish. I think it’s reasonable to doubt this intuition, and we’ll hopefully be assembling more support for it soon.
To the larger question of neural synchrony and STV, a good collection of our argument and some available evidence would be our talk to Robin Carhart-Harris’ lab:
(I realize an hour-long presentation is a big ‘ask’; don’t feel like you need to watch it, but I think this shares what we can share publicly at this time)
> I agree neuroimaging is extremely messy and discouraging, but you’re the one posting about successfully building an fMRI analysis pipeline to run this specific analysis to support your theory. I am very annoyed that your response to my multiple requests for any empirical data to support your theory is you basically saying “science is hard”, as opposed to “no experiment, dataset, or analysis is perfect, but here is some empirical evidence that is at least consistent with my theory.”
One of my takeaways from our research is that neuroimaging tooling is in fairly bad shape overall. I’m frankly surprised we had to reimplement an fMRI analysis pipeline in order to start really digging into this question, and I wonder how typical our experience here is.
One of the other takeaways from our work is that it’s really hard to find data that’s suitable for fundamental research into valence; we just got some MDMA fMRI+DTI data that appears very high quality, so we may have more to report soon. I’m happy to talk about what sorts of data are, vs are not, suitable for our research and why; my hands are a bit tied with provisional data at this point (sorry about that; wish I had more to share).
Thanks for adjusting your language to be nicer. I wouldn’t say we’re overwhelmingly confident in our claims, but I am overwhelmingly confident in the value of exploring these topics from first principles, and although I wish I had knockout evidence for STV to share with you today, that would be Nobel Prize tier and I think we’ll have to wait and see what the data brings. For the data we would identify as provisional support, this video is likely the best public resource at this point:
This sounds overwhelmingly confident to me, especially since you have no evidence to support either of these claims.
This is in fact the claim of STV, loosely speaking: that there is an identity relationship here. I can see how it would feel like an aggressive claim, but I’d also suggest that positing identity relationships is a virtue rather than a vice, as they generally offer clear falsification criteria. Happy to discuss object-level arguments as presented in the linked video.
Hi Mike, I really enjoy your and Andrés’s work, including STV, and I have to say I’m disappointed by how the ideas are presented here, and entirely unsurprised at the reaction they’ve elicited.
There’s a world of difference between saying “nobody knows what valence is made out of, so we’re trying to see if we can find correlations with symmetries in imaging data” (weird but fascinating) and “There is an identity relationship between suffering and disharmony” (time cube). I know you’re not time cube man, because I’ve read lots of other QRI output over the years, but most folks here will lack that context. This topic is fringe enough that I’d expect everything to be extra-delicately phrased and very well seasoned with ifs and buts.
Again, I’m a big fan of QRI’s mission, but I’d be worried about donating if I got the sense that the organization viewed STV not as something to test, but as something to prove. Statistically speaking, it’s not likely that STV will turn out to be the correct mechanistic grand theory of valence, simply because it’s the first one (of hopefully many to come). I would like to know:
When do you expect to be able to share the first set of empirical results, and what kinds of conclusions do you expect we will be able to draw from them, depending on how they turn out? Tiny studies with limited statistical power are ok; “oh it’s promising so far but we can’t share details” isn’t.
I hope QRI’s fate isn’t tied to STV – if STV can’t be reconciled with the data, then what alternative ideas would you test next?
Hi Seb, I appreciate the honest feedback and kind frame.
I could say that it’s difficult to write a short piece that will please a diverse audience, but that would duck the writer’s responsibility.
You might be interested in my reply to Linch, which notes that STV may be useful even if false; I would be surprised if it were false, but that wouldn’t be the end of qualia research, merely an interesting new chapter.
I spoke with the team today about data, and we just got a new batch this week that we’re optimistic has exactly the properties we’re looking for (meditative cessations, all 8 jhanas in various orders, DTI along with the fMRI). We have a lot of people on our team page, but to this point QRI has mostly been fueled by volunteer work (I paid myself my first paycheck this month, after nearly five years), so we don’t always have the resources to do everything we want to do as fast as we want to do it. Still, I’m optimistic we’ll have something to at least circulate privately within a few months.
But did you have any reason to posit it? Any evidence that this identity is the case?
Andrés’s STV presentation to Imperial College London’s psychedelics research group is probably the best public resource I can point to on this right now. I can say after these interactions it’s much clearer that people hearing these claims are less interested in the detailed structure of the philosophical argument and more in the evidence, and in a certain form of evidence. I think this is very reasonable, and it’s something we’re finally in a position to work on directly: we spent the last ~year building the technical capacity to do the sorts of studies we believe will either falsify or directly support STV.
I read this post and the comments that have followed it with great interest.
I have two major worries about QRI’s research agenda, and one minor one, which I hope you can clarify. First, I am not sure exactly which question you are trying to answer. Second, it’s not clear to me why you think this project is (especially) important. Third, I can’t understand what STV is about because there is so much (undefined) technical jargon.
1. Which question is QRI trying to answer?
You open by saying:
This makes me think you want to identify what suffering is, that is, what it consists in. But you then immediately raise Buddhist and Aristotelian theories of what causes suffering—a wholly different issue. FWIW, I don’t see anything deeply problematic in identifying what suffering, and related terms, refer to. Valence just refers to how good/bad you feel (the intrinsic pleasurableness/displeasurableness of your experience); happiness is feeling overall good; suffering is feeling overall bad. I don’t find anything dissatisfying about these. Valence refers to something subjective, and that’s a definition in terms of something subjective. What else could one want?
It seems you want to do two things: (1) somehow identify which brainstates are associated with valence and (2) represent subjective experiences in terms of something mathematical, i.e. something non-subjective. Neither of these questions is identical to establishing either what suffering is, or what causes it. Hence, when you say:
I’m afraid I don’t know which question you have in mind. Could you please specify?
2. Why does that all matter?
It’s unclear to me why you think solving either problem - (1) or (2) - is (especially) valuable. There is some fairly vague stuff about neurotech, but this seems pretty hand-wavey. It’s rather bold for you to claim
and I think you owe the reader a bit more to bite into, in terms of a theory of change.
You might offer some answer about the importance of being able to measure what impacts well-being here, but—and I hope old-time forum hands will forgive me as I mount a familiar hobby-horse—economics and psychology seem to be doing a reasonable job of this simply by surveying people, e.g. asking them how happy they are (0-10). Such work can and does proceed without a theory of exactly what is happening inside the ‘black box’ of the brain; it can be used, right now, to help us determine what our priorities are. If I can be permitted to toot my horn from aside the hobby-horse, I should add that this just is what my organisation, the Happier Lives Institute, is working on. If I were to insist on waiting for real-time brain-scanning data to learn whether, say, cash transfers are more cost-effective than psychotherapy at increasing happiness, I would be waiting some time.
3. Too much (undefined) jargon
Here is a list of terms or phrases that seem very important for understanding STV where I have very little idea exactly what you mean:
Neurophysiological models of suffering try to dig into the computational utility and underlying biology of suffering
symmetry
harmony
dissonance
resonance as a proxy for characteristic activity
Consonance Dissonance Noise Signature
self-organizing systems
Neural Annealing
full neuroimaging stack
precise physical formalism for consciousness
STV gives us a rich set of threads to follow for clear neurofeedback targets, which should allow for much more effective closed-loop systems, and I am personally extraordinarily excited about the creation of technologies that allow people to “update toward wholesome”,
Finally, and perhaps most importantly, I’m really not sure what it could even mean to represent consciousness/valence as a mathematical shape.
If this is the ‘primer’, I am certainly not ready for the advanced course(!).
Hi Michael, I appreciate the kind effortpost, as per usual. I’ll do my best to answer.
This is a very important question. To restate it in several ways: what kind of thing is suffering? What kind of question is ‘what is suffering’? What would a philosophically satisfying definition of suffering look like? How would we know if we saw it? Why does QRI think existing theories of suffering are lacking? Is an answer to this question a matter of defining some essence, or defining causal conditions, or something else?
Our intent is to define phenomenological valence in a fully formal way, with the template being physics: we wish to develop our models such that we can speak of pain and pleasure with all the clarity, precision, and rigor with which we currently describe photons and quarks and fields.
This may sound odd, but physics is a grand success story of formalization, and we essentially wish to apply the things that worked in physics to phenomenology. Importantly, physics has a strong tradition of using symmetry considerations to inform theory. STV borrows squarely from this tradition (see e.g. my write-up on Emmy Noether).
Valence is subjective as you note, but that doesn’t mean it’s arbitrary; there are deep patterns in which conditions and sensations feel good, and which feel bad. We think it’s possible to create a formal system for the subjective. Valence and STV are essentially the pilot project for this system. Others such as James and Husserl have tried to make phenomenological systems, but we believe they didn’t have all the pieces of the puzzle. I’d offer our lineages page for what we identify as ‘the pieces of the puzzle’; these are the shoulders we’re standing on to build our framework.
2. I see the question. Also, thank you for your work on the Happier Lives Institute; we may not interact frequently but I really like what you’re doing.
The significance of a fully rigorous theory of valence might not be fully apparent, even to the people working on it. Faraday and Maxwell formalized electromagnetism; they likely did not foresee their theory being used to build the iPhone. However, I suspect they had deep intuitions that there’s something profoundly useful in understanding the structure of nature, and perhaps they wouldn’t be as surprised as their contemporaries. We likewise hold intuitions as to the applications of a full theory of valence.
The simplest would be: it would unlock novel psychological and psychiatric diagnostics. If there is some difficult-to-diagnose nerve pain, or long-covid-type bodily suffering, or some emotional disturbance that is difficult to verbalize, this would in principle be directly measurable with STV. It wouldn’t replace economics and psychology, as you say, but it would augment them.
Longer term, I’m reminded of the (adapted) phrase, “what you can measure, you can manage.” If you can reliably measure suffering, you can better design novel interventions for reducing it. I could see a validated STV as the heart of a revolution in psychiatry, and some of our work (Neural Annealing, Wireheading Done Right) is aimed at possible shapes this might take.
3. Aha, an easy question :) I’d point you toward our web glossary.
To your question, “Finally, and perhaps most importantly, I’m really not sure what it could even mean to represent consciousness/valence as a mathematical shape” — this is perhaps an overly fancy way of saying that we believe consciousness is precisely formalizable. The speed of light is precisely formalizable; the UK tax rate is precisely formalizable; the waveform of an mp3 is precisely formalizable, and all of these formalizations can be said to be different ‘mathematical shapes’. To say something does not have a ‘mathematical shape’ is to say it defies formal analysis.
Thanks again for your clear and helpful questions.
I’ll take a shot at these questions too, perhaps usefully as someone only partially familiar with QRI.
Is there a universal pattern to conscious experience? Can we specify a function from the structure and state of a mind to the quality of experience it is having?
If we discover a function from mind to valence, and develop the right tools of measurement and intervention (big IFs, for sure), we can steer all minds towards positive experience.
Until recently we only had intuitive physics, useful for survival but not enough for GPS. In the same way, we can make some predictions today about what will make humans happy or sad, but we don’t understand depression very well; we can guess at how other animals feel, but it gets murky as you consider more and more distant species; and we’re in the dark on whether artificial minds experience anything at all. A theory of valence would let us navigate phenomenological space with new precision, across a broad domain of minds.
Hi Michael,
I appreciate your comment here, and am a big fan of your work.
In response to point #3, I think it is extremely revealing how you ask for definitions of a few phrases, and Mike directs you to a link that does not define the phrases you specifically ask for: https://www.qualiaresearchinstitute.org/glossary (Edit: Mike responded directly to this below, so this feels unfair to say now.)

Good catch; there’s plenty that our glossary does not cover yet. This post is at 70 comments now, and I can just say I’m typing as fast as I can!
I pinged our engineer (who has taken the lead on the neuroimaging pipeline work) about details, but as the collaboration hasn’t yet been announced I’ll err on the side of caution in sharing.
To Michael — here’s my attempt to clarify the terms you highlighted:
Neurophysiological models of suffering try to dig into the computational utility and underlying biology of suffering
-> existing theories talk about what emotions ‘do’ for an organism, and what neurochemicals and brain regions seem to be associated with suffering
symmetry
Frank Wilczek calls symmetry ‘change without change’. A limited definition is that it’s a measure of the number of ways you can rotate a picture and still get the same result. You can rotate a square 90, 180, or 270 degrees and get something identical; you can rotate a circle by any amount and get something identical. Thus we’d say circles have more rotational symmetries than squares (which have more than rectangles, etc.); there’s a short code sketch at the end of this list making this concrete.
harmony
Harmony has been in our vocabulary a long time, but it’s not a ‘crisp’ word. This is why I like to talk about symmetry rather than harmony — although they more-or-less point in the same direction.
dissonance
The combination of multiple frequencies that have a high amount of interaction, but few common patterns. Nails on a chalkboard create a highly dissonant sound; playing the C and C# keys at the same time also creates a relatively dissonant sound.
resonance as a proxy for characteristic activity
I’m not sure I can give a fully satisfying definition here that doesn’t just reference CSHW; I’ll think about this one more.
Consonance Dissonance Noise Signature
A way of mathematically calculating how much consonance, dissonance, and noise there is when we add different frequencies together. This is an algorithm developed at QRI by my co-founder, Andrés.
self-organizing systems
A system which isn’t designed by some intelligent person, but follows an organizing logic of its own. A beehive or anthill would be a self-organizing system; no one’s in charge, but there’s still something clever going on.
Neural Annealing
In November 2019 I released a piece describing the brain as a self-organizing system. Basically, “when the brain is in an emotionally intense state, change is easier,” similar to how metal becomes easier to reshape when it’s heated.
full neuroimaging stack
All the software we need to do an analysis (and specifically, the CSHW analysis), from start to finish
precise physical formalism for consciousness
A perfect theory of consciousness, which could be applied to anything. Basically a “consciousness meter”
STV gives us a rich set of threads to follow for clear neurofeedback targets, which should allow for much more effective closed-loop systems, and I am personally extraordinarily excited about the creation of technologies that allow people to “update toward wholesome”,
Ah yes this is a litttttle bit dense. Basically, one big thing holding back neurotech is we don’t have good biomarkers for well-being. If we design these biomarkers, we can design neurofeedback systems which work better (not sure how familiar you are with neurofeedback)
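Here’s the sketch promised under ‘symmetry’ above, a toy illustration of counting rotational symmetries (nothing QRI-specific, just the square/circle intuition in code):

```python
import numpy as np

def rotational_symmetries(pattern):
    # count the quarter-turn rotations (including the identity)
    # that leave a 2D pattern unchanged
    return sum(np.array_equal(pattern, np.rot90(pattern, k)) for k in range(4))

plus     = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])  # '+' shape
diagonal = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # diagonal stripe
corner   = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]])  # single corner dot

print(rotational_symmetries(plus))      # 4: invariant under every quarter-turn
print(rotational_symmetries(diagonal))  # 2: only 0 and 180 degrees
print(rotational_symmetries(corner))    # 1: only the identity
```

More ‘change without change’ means a higher count; a circle, which survives rotation by any angle, would score infinitely high on a continuous version of this measure.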
I feel like it’s important to highlight two things QRI people have said. These statements illustrate why STV sounds extremely implausible to me.
“STV makes a big jump in that it assumes the symmetry of this mathematical object corresponds to how pleasurable the experience it represents is. This is a huge, huge, huge jump, and cannot be arrived at by deduction; none of my premises force this conclusion. We can call it an educated guess. But, it is my best educated guess after thinking about this topic for about 7 years before posting my theory. I can say I’m fully confident the problem is super important and I’m optimistic this guess is correct, for many reasons, but many of these reasons are difficult to put into words.”
“I started out very skeptical of STV myself, and in fact it took about three years of thinking it through in light of many meditation and exotic high-energy experiences to be viscerally convinced that it’s pointing in the right direction.”
These are not satisfying arguments.
Hi Abby,
I feel we’ve been in some sense talking past each other from the start. I think I bear some of the responsibility for that, based on how my post was written (originally for my blog, and more as a summary than an explanation).
I’m sorry for your frustration. I can only say I’m not intentionally trying to frustrate you, but that we appear to have very different styles of thinking and writing and this may have caused some friction, and I have been answering object-level questions from the community as best I can.
Object level questions:
1. Why would asynchronous firing between the visual word form area and the fusiform face area either cause suffering or occur as the result of suffering?
2. If your answer relies on something about how modularism/functionalism is bad: why is source localization critical for your main neuroimaging analysis of interest?
3. If source localization is not necessary: why can’t you use EEG to measure synchrony of neural oscillations?
4. Why can’t you just ask people if they’re suffering? What’s the value of quantifying the degree of their suffering using harmonic coherence?
5. Assuming you are right about everything, do you think EA funds would more efficiently reduce suffering by improving living conditions of people in poor countries, or by quantifying the suffering of people living in rich countries and giving them neurofeedback on how coherent their brain harmonics are at the cost of over $500 per hour?
Hi Abby, thanks for the questions. I have direct answers to 2,3,4, and indirect answers to 1 and 5.
1a. Speaking of the general case, we expect network control theory to be a useful frame for approaching questions of why certain sorts of activity in certain regions of the brain are particularly relevant for valence. (A simple story: hedonic centers of the brain act as ‘tuning knobs’ toward or away from global harmony. This would imply they don’t intrinsically create pleasure and suffering, merely facilitate these states.) This paper from the Bassett lab is the best intro I know of to this.
1b. Speaking again of the general case, asynchronous firing isn’t exactly identical to the sort of dissonance we’d identify as giving rise to suffering: asynchronous firing could be framed as uncorrelated firing, or ‘non-interacting frequency regimes’. There’s a really cool paper asserting that the golden mean is the optimal frequency ratio for non-interaction, and some applications to EEG work, in case you’re curious. What we’re more interested in is frequency combinations that are highly interacting but lack a common basis set. An example would be playing the C and C# keys on a piano. This lens borrows more from music theory and acoustics (e.g. Helmholtz, Sethares) than traditional neuroscience, although it lines up with some work by e.g. Buzsáki (Rhythms of the Brain); Friston has also done some cool work on frequencies, communication, and birdsong, although I’d have to find the reference.
1c. Speaking again of the general case, naively I’d expect dissonance somewhere in the brain to induce dissonance elsewhere in the brain. I’d have to think about what reference I could point to here, as I don’t know if you’ll share this intuition, but a simple analogy: when many people are walking in a line and one person trips, others may trip too; chaos begets chaos.
1d. Speaking, finally, of the specific case, I admit I have only a general sense of the structure of the brain networks in question and I’m hesitant to put my foot in my mouth by giving you an answer I have little confidence in. I’d probably punt to the general case, and say if there’s dissonance between these two regions, depending on the network control theory involved, it could be caused by dissonance elsewhere in the brain, and/or it could spread to elsewhere in the brain: i.e. it could be both cause and effect.
2&3. The harmonic analysis we’re most interested in depends on accurately modeling the active harmonics (eigenmodes) of the brain. EEG doesn’t directly model eigenmodes; to infer eigenmodes from EEG we’d need fairly accurate source localization. It could be that there are alternative ways to test STV without modeling brain eigenmodes, ways that EEG could give us; I hope that’s the case, and I hope we find it, since EEG is certainly a lot easier to work with than fMRI.
I.e. we’re definitely not intrinsically tied to source localization, but currently we just don’t see a way to get clean enough abstractions upon which to compute consonance/dissonance/noise without it. (A toy illustration of what computing eigenmodes involves is at the end of this reply.)
4. Usually we can, and usually that’s much better than trying to measure it with some brain scanner! The rationale for pursuing this line of research is that existing biomarkers for mood and well-being are pretty coarse. If we can design a better biomarker, it’ll be useful for e.g. neurotech wearables. If your iPhone can directly measure how happy you are, you can chart it, correlate it with other variables, and so on. “What you can measure, you can manage.” It could also lead to novel therapies and other technologies, and that’s probably what I’m most viscerally excited about. There are also more ‘sci-fi’ applications, such as using this to infer the experience of artificial sentience.
5. This question is definitely above my pay grade; I take my special edge here to be helping build a formal theory and more accurate biomarkers for suffering, rather than public policy (e.g. Michael D. Plant’s turf). I do suspect, however, that some of the knowledge gained from better biomarkers could help inform emotional-wellness best practices, and these best practices could be used by everyone, not just people getting scanned. I also think some therapies that might arise out of having better biomarkers could heal some sorts of trauma more-or-less permanently, so the scanning would just need to be a one-time thing, not continuous. But this gets into the weeds of implementation pretty quickly.
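(As promised under 2&3 above.) By ‘eigenmodes’ we mean something directly analogous to Atasoy’s connectome harmonics: eigenvectors of a graph Laplacian built from structural connectivity. A toy version, with a ring of regions standing in for a real DTI-derived connectome:

```python
import numpy as np

# toy "connectome": 8 regions connected in a ring
# (a real analysis would use a DTI-derived structural connectivity matrix)
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

L = np.diag(A.sum(axis=1)) - A         # graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)   # eigvecs[:, k] is the k-th spatial harmonic

# low eigenvalues correspond to smooth, global modes; high ones to
# fine-grained modes. Projecting activity onto the eigenvectors gives
# per-harmonic amplitudes -- the inputs to a consonance/dissonance analysis.
activity = np.random.default_rng(0).standard_normal(n)
amplitudes = eigvecs.T @ activity
print(np.round(eigvals, 2))
print(np.round(amplitudes, 2))
```

The source-localization issue, restated: without knowing where activity physically sits on the connectome, you can’t do that projection step.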
Hi Mike,
Thanks again for your openness to discussion, I do appreciate you taking the time. Your responses here are much more satisfying and comprehensible than your previous statements, it’s a bit of a shame we can’t reset the conversation.
1a. I am interpreting this as you saying there are certain brain areas that, when activated, are more likely to result in the experience of suffering or pleasure. This is the sort of thing that is plausible and possible to test.
1b. I think you are making a mistake by thinking of the brain like a musical instrument, and I really don’t like how you’re assuming discordant brain oscillations “feel bad” the way discordant chords “sound bad”. (Because as I’ve stated earlier, there’s no evidence linking suffering to dissonance, and as you’ve stated previously, you made a massive jump in reasoning here.) But this is the clearest you have explained your thinking on this question so far, which I do appreciate.
1c. I am confused here. I did not ask whether dissonance in VWFA causes dissonance in FFA. I asked how dissonance between the two regions causes suffering. What does it mean neurologically to have dissonance within a specific brain area? I thought the point of using fMRI instead of EEG was that you needed to measure the differences between specific areas.
1d. You’re saying dissonance in place a could cause dissonance in place b, or both could be caused by dissonance in place c. That sounds super reasonable. But my question is why would dissonance between a and b cause suffering? It doesn’t really matter what brain areas a and b are, I know I keep hammering at the point of why suffering == dissonance, but this is the most important part of your theory, and your explanation of “This is a huge, huge, huge jump, and cannot be arrived at by deduction” is incredibly unsatisfying to me.
2&3. Ok, I appreciate this concrete response. I don’t know enough about calculating eigenmodes with EEG data to predict how tractable it is.
4. Your current analysis is incompatible with wearable biotech. Moving your body even a millimeter within the fMRI scanner negatively affects data quality. This is part of the reason I am confused about why you are focused so much on fMRI. I appreciate in general the value of accurate biomarkers for wellbeing, but I don’t think symmetry/harmonics is either accurate or useful.
5. The labs I am in (although not me personally) are working on closed-loop fMRI neurofeedback to improve mental health outcomes of depressed patients. I am familiar with the technical challenges in this work, which is partially why I am coming at you so hard on this. Here’s a paper from my primary and secondary academic advisors: https://www.biorxiv.org/content/10.1101/2020.06.07.137943v1.abstract
Hi Abby, I understand. We can just make the best of it.
1a. Yep, definitely. Empirically we know this is true from e.g. Kringelbach and Berridge’s work on hedonic centers of the brain; what we’d be interested in looking into would be whether these areas are special in terms of network control theory.
1c. I may be getting ahead of myself here: the basic approach to testing STV we intend is looking at dissonance in global activity. Dissonance between brain regions likely contributes to this ‘global dissonance’ metric. I’m also interested in measuring dissonance within smaller areas of the brain, as I think it could help improve the metric down the line, but we definitely wouldn’t need to at this point.
1d. As a quick aside, STV says that ‘symmetry in the mathematical representation of phenomenology corresponds to pleasure’. We can think of that as ‘core STV’. We’ve then built neuroscience metrics around consonance, dissonance, and noise that we think can be useful for proxying symmetry in this representation; we can think of that as a looser layer of theory around STV, something that doesn’t have the ‘exact truth’ expectation of core STV. When I speak of dissonance corresponding to suffering, it’s part of this looser second layer.
To your question — why would STV be true? — my background is in the philosophy of science, so I’m perhaps more ready to punt to this domain. I understand this may come across as somewhat frustrating or obfuscating from the perspective of a neuroscientist asking for a neuroscientific explanation. But this is a universal thread across philosophy of science: why is such-and-such true? Why does gravity exist; why is the speed of light what it is? Many things we’ve figured out about reality seem like brute facts. Usually there are some hints of elegance in the structures we’re uncovering, but we’re just not yet knowledgeable enough to see some universal grand plan. Physics deals with this a lot, and I think philosophy of mind is just starting to grapple with it in terms of NCCs. Here’s something Frank Wilczek (who won the 2004 Nobel Prize in Physics for helping formalize the strong nuclear force) shared about physics:
>… the idea that there is symmetry at the root of Nature has come to dominate our understanding of physical reality. We are led to a small number of special structures from purely mathematical considerations—considerations of symmetry—and put them forward to Nature, as candidate elements for her design. … In modern physics we have taken this lesson to heart. We have learned to work from symmetry toward truth. Instead of using experiments to infer equations, and then finding (to our delight and astonishment) that the equations have a lot of symmetry, we propose equations with enormous symmetry and then check to see whether Nature uses them. It has been an amazingly successful strategy. (A Beautiful Question, 2015)
So — why would STV be the case? “Because it would be beautiful, and would reflect and extend the flavor of beauty we’ve found to be both true and useful in physics” is probably not the sort of answer you’re looking for, but it’s the answer I have at this point. I do think all the NCC literature is going to have to address this question of ‘why’ at some point.
4. We’re ultimately opportunistic about what exact format of neuroimaging we use to test our hypotheses, but fMRI checks a lot of the boxes (though not all). As you say, fMRI is not a great paradigm for neurotech; we’re looking at e.g. headsets by Kernel and others, and also digging into the TUS (transcranial ultrasound) literature for more options.
5. Cool! I’ve seen some big reported effect sizes and I’m generally pretty bullish on neurofeedback in the long term; Adam Gazzaley‘s Neuroscape is doing some cool stuff in this area too.
Ok, thank you for these thoughts.
Considering how asymmetries can be both pleasing (complex stimuli seem more beautiful to me than perfectly symmetrical spheres) and useful (as Holly Elmore points out in the domain of information theory, and as the Mosers found with their Nobel-prize-winning work on orthogonal neural coding of similar but distinct memories), I question your intuition that asymmetry needs to be associated with suffering.
Welcome, thanks for the good questions.
Asymmetries in stimuli seem crucial for getting patterns through the “predictive coding gauntlet.” I.e., that which can be predicted can be ignored. We demonstrably screen perfect harmony out fairly rapidly.
The crucial context for STV on the other hand isn’t symmetries/asymmetries in stimuli, but rather in brain activity. (More specifically, as we’re currently looking at things, in global eigenmodes.)
With a nod back to the predictive coding frame, it’s quite plausible that the stimuli that create the most internal symmetry/harmony are not themselves perfectly symmetrical, but rather have asymmetries crafted to avoid top-down predictive models. I’d expect this to vary quite a bit across different senses though, and depend heavily on internal state.
The brain may also have mechanisms which introduce asymmetries in global eigenmodes, in order to prevent getting ‘trapped’ by pleasure — I think of boredom as fairly sophisticated ‘anti-wireheading technology’ — but if we set aside dynamics, the assertion is that symmetry/harmony in the brain itself is intrinsically coupled with pleasure.
Edit: With respect to the Mosers, that’s a really cool example of this stuff. I can’t say I have answers here, but as a punt, I’d suspect the “orthogonal neural coding of similar but distinct memories” is going to revolve around some pretty complex frequency regimes, and we may not yet be able to say exact things about how ‘consonant’ or ‘dissonant’ these patterns are to each other. My intuition is that the result about the golden mean being the optimal ratio for non-interaction will end up intersecting with the Mosers’ work. That said, I wonder if STV would assert that some sorts of memories are ‘hedonically incompatible’ due to their encodings being dissonant? Basically, as memories get encoded, the oscillatory patterns they’re encoded with could subtly form a network which determines what sorts of new memories can form and/or which sorts of stimuli we enjoy and which we don’t. But this is pretty hand-wavy speculation…
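For what it’s worth, the golden-mean claim itself is easy to illustrate with a standard number-theoretic measure of how strongly a frequency ratio ‘resonates’; this sketch is my own, not taken from the paper I mentioned:

```python
import math

def worst_resonance(r, max_k=1000):
    # min over k of k * distance(k*r, nearest integer):
    # small values mean some multiple of r nearly locks onto an integer
    return min(k * abs(k * r - round(k * r)) for k in range(1, max_k + 1))

phi = (1 + math.sqrt(5)) / 2
for name, r in [("golden ratio", phi), ("pi", math.pi), ("3/2 (perfect fifth)", 1.5)]:
    print(name, round(worst_resonance(r), 4))
# golden ratio: ~0.38 (bounded below near 1/sqrt(5) -- maximally non-resonant)
# pi: ~0.003 (113*pi comes extremely close to the integer 355)
# 3/2: 0.0 exactly (2 * 3/2 = 3, a perfect lock)
```

In this sense the golden ratio is the ‘most irrational’ ratio: its multiples stay as far from whole-number lock-in as mathematically possible.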
Why can’t you just observe that objects fall towards the ground? What’s the value of quantifying the degree of their falling using laws of motion?
How much do newborns suffer? Whales? Ants?
I did not propose putting whales into fMRI scanners. I would not have proposed trying to weigh distant stars with a scale either, yet somehow we’ve learned how to say some things about their mass and contents.
This is difficult to read as in good faith.
This makes absolutely no sense on its face. I am not a neuroscience expert. I am not a consciousness expert. I do not need to be to say that these conclusions just do not follow.
To recap what you said: You start by saying that, if you could make a complete mathematical representation of the brain (IIT), it would be symmetric to the physical manifestation of the brain, and therefore pleasure would be included in the representation. Then you claim that STV is a formal and causal theory, without backing that up or explaining it at all. And then you just assert these ideas about dissonance and harmony being the structural correlates of suffering and pleasure!
You present this all as if you were building a case where one point leads to another. Perhaps it’s just poor communication about a better idea, but what’s here is very shoddy reasoning.
Hi Holly, I’d say the format of my argument there would be enumeration of claims, not e.g. trying to create a syllogism. I’ll try to expand and restate those claims here:
A very important piece of this is assuming there exists a formal structure (formalism) to consciousness. If this is true, STV becomes a lot more probable. If it isn’t, STV can’t be the case.
Integrated Information Theory (IIT) is the most famous framework for determining the formal structure to an experience. It does so by looking at the causal relationships between components of a system; the more a system’s parts demonstrate ‘integration’ (which is a technical, mathematical term that tries to define how much a system’s parts interact with its other parts), the more conscious the system is.
I didn’t make IIT, I don’t know if it’s true, and I actually suspect it might not be true (I devoted a section of Principia Qualia to explaining IIT, and another section to critiques of IIT). But it’s a great example of an attempt to formalize phenomenology, and I think the project or overall frame of IIT (the idea of consciousness being the sort of thing that one can apply formal mathematics to) is correct even if its implementation (integration) isn’t.
You can think of IIT as a program. Put in the details of how a system (such as a brain) is put together, and it gives you some math that tells you what the system is feeling.
You can think of STV as a way to analyze this math. STV makes a big jump in that it assumes the symmetry of this mathematical object corresponds to how pleasurable the experience it represents is. This is a huge, huge, huge jump, and cannot be arrived at by deduction; none of my premises force this conclusion. We can call it an educated guess. But, it is my best educated guess after thinking about this topic for about 7 years before posting my theory. I can say I’m fully confident the problem is super important and I’m optimistic this guess is correct, for many reasons, but many of these reasons are difficult to put into words. My co-founder Andrés also believes in STV, and his way of describing things is often very different from mine in helpful ways; he recently posted his own description of this, so I also encourage you to read his comment.
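To gesture at what ‘the symmetry of this mathematical object’ could mean operationally, here’s a toy illustration, and I stress it is only that: IIT’s objects are far richer than plain graphs, and this is not STV’s actual formalism. One classic way to quantify a structure’s symmetry is to count its automorphisms, the relabelings that leave it unchanged:

```python
import numpy as np
from itertools import permutations

def automorphism_count(A):
    # brute-force count of node relabelings that leave an
    # adjacency matrix unchanged (fine for tiny graphs only)
    n = len(A)
    count = 0
    for perm in permutations(range(n)):
        P = np.eye(n)[list(perm)]           # permutation matrix
        if np.array_equal(P @ A @ P.T, A):  # relabeled structure == original?
            count += 1
    return count

ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])  # 4-cycle: every node looks alike
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]])  # 4-node chain: the ends differ from the middle

print(automorphism_count(ring), automorphism_count(path))  # 8 vs 2
```

STV’s wager, restated in these terms, is that some analogue of that count, computed on the formal object describing an experience, tracks how good the experience feels.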
I feel like your explanations are skipping a bunch of steps that would help folks understand where you’re coming from. FWIW, here’s how I make sense of STV:
Neuroscience can tell us that some neurons light up when we eat chocolate, but it doesn’t tell us what it is about the delicious experience of chocolate that makes it so wonderful. “This is what sugar looks like” and “this is the location of the reward center” are great descriptions of parts of the process, but they don’t explain why certain patterns of neural activations feel a certain way.
Everyone agrees that clearly, certain activation patterns do feel a certain way. Quite plausibly, this isn’t just a brain thing but more fundamental, and evolution simply recruited the relationship to build RL agents like ourselves. And yet, almost nobody has tried to figure out how exactly the patterns relate to the experiences. Of course, that’s because we struggle with both sides of the equation: on the neural side of the equals-sign, the data is incomplete and noisy; on the experience side, what do we even put to represent “delicious”?
But we have to start somewhere, so we simplify. On the neural side, we look for the simplest kind of signature we can reliably detect in global-scale brain data: symmetries/harmonies across space and time. On the experience side, we collapse everything onto a good/bad axis: valence. Now that’s still a pretty vague hypothesis, but just barely solid enough that we can at least reason about and perhaps even test it.
This seems very arbitrary! Well, collapsing qualia onto a single valence dimension (for now) is relatively uncontroversial, since there are at least a few things that everyone can agree feel fantastic or terrible. But why look for symmetry/harmony/resonance in the brain data, rather than other things like amplitude or spatial distribution? Here it’s worth explaining that you didn’t just pull that idea out of your … uhm, hat, but that experience suggests that all our senses – visual, spatial, temporal, auditory, etc. – are exquisitely attuned to certain kinds of symmetry. This may sound trivial, but the evidence from psychedelic research, intense meditation, psychotherapy etc. suggests that there’s something about it that goes much deeper than just “kaleidoscopes are pretty”. And also, that the hypothetical mathematical object representing one’s brain state is so high-dimensional that a huge class of neural activation patterns will have some kind of symmetry or another, leaving plenty of room for agreement with existing neuroscience. This is what most of your material is about, at least to my understanding.
How compelling this feels (and just feels!) to investigate is something most readers won’t appreciate unless they’ve experienced altered states of consciousness themselves. This is worth acknowledging explicitly, but not condescendingly: “this may be more difficult to relate to if you haven’t tried psychedelics”, rather than “you wouldn’t understand if you’re at a lower developmental stage”.
But also, none of this proves anything yet. People used to think that a fever was the disease, simply because it was the most obvious symptom, so perhaps back then it would have been an obvious leap to claim temperature were fundamental and causal to the qualia of sickness. It’s possible that the symmetry story will turn out to be a dud in the same way, even though it feels very appealing now and is certainly worth investigating.
This is an interesting summary, and was basically what I guessed STV was getting at, but this is a hypothesis, not a theory. The hypothesis is: what if there is content in the symmetry encoded in various brain states?
What I don’t understand is how symmetry in brain readings is supposed to explain valence better than, say, neurons firing in brain areas involved in attraction/repulsion. Is the claim that the symmetry is the qualia of valence? How would symmetries and resonance be exempt from the hard problem any more than neuronal activation?
> How compelling this feels (and just feels!) to investigate is something most readers won’t appreciate unless they’ve experienced altered states of consciousness themselves.
Do you think it should be compelling based on a trip? Is that real evidence? I’m not closed to the possibility in principle, but outside view, it sounds like psychedelics just give you an attraction to certain shapes and ideas, plus a sense of insight. That might not be totally unrelated to a relevant observation about valence or qualia, but I don’t see any reason to think psychedelics give you more direct access to the nature of our brains.
Thanks Holly! I’m not advocating for STV, I’m just an interested layperson who’s followed QRI’s work for some time and felt frustrated with everyone here furiously talking past one another.
Yep – if I understand it correctly, the reasoning goes something like “there’s nothing obviously special about biological neurons as a physical substrate, so maybe consciousness is fundamental to the universe but only emerges when physical systems interact with each other/themselves in particular ways”. IIT seems to have that flavour, and STV as well. I don’t know if it solves the hard problem per se, but I can see why a fundamental theory is more appealing than just a brain map of reward/aversion “centers” and the like.
I wanted to be careful, that’s why I tried to emphasize the word “feels” :P
Trips are compelling evidence that the space of possible conscious experiences is vast and unspeakably weird, and that our “normal” consciousness is just what evolution optimized for to help us get through the day. And so in the endeavour of cataloguing, systematizing, and eventually trying to model qualia, I would trust someone who personally appreciates the vastness of this space, and who is rigorous and detailed about its weirdness.
This is dangerous territory, not just epistemically but politically. Drunk and stoned people’s “deep insights” tend to be dumb nonsense, so why should we trust other druggies’ claims? Sadly, academic psychedelics researchers struggle with this public perception, and their solution is to publish only on clinical applications, as if the changes in consciousness were an embarrassing side effect rather than central to the experience. QRI are the only team I know of who don’t implicitly privilege sobriety and instead explicitly talk about the space of possible qualia. That is something I really appreciate.
As for whether symmetries/harmonies in the qualia experienced on trips are a compelling enough reason to look for symmetries/harmonies in brain data – I really don’t know. But I do think gears-level models of qualia would be useful, and since neuroscience is mostly silent on the topic, symmetries are as good a place as any to start ¯\_(ツ)_/¯
It’s also worth noting there are a number of reasons I’m skeptical of the attraction to symmetry. I think it’s reasoning from an aesthetic that we have very good and well-understood reasons (not related to the nature of valence) to hold. And, if the claim is that the resonances are conveying the valence, highly synchronous or symmetrical states hold less information, so I’m skeptical that that would be a way of encoding valence. It’s at best redundant as a way of storing information (at worst it’s a seizure, where too many neurons are recruited away from doing their jobs to doing the same thing at once).
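To put a toy number on the information point (equating ‘information capacity’ with the Shannon entropy of the joint firing state, which is my simplification):

```python
import numpy as np
from collections import Counter

def joint_entropy_bits(states):
    # Shannon entropy (bits) of the empirical distribution over joint states
    counts = np.array(list(Counter(map(tuple, states)).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
T, N = 100_000, 8                                    # timesteps, binary "neurons"

independent  = rng.integers(0, 2, (T, N))            # units fire independently
driver       = rng.integers(0, 2, (T, 1))
synchronized = np.repeat(driver, N, axis=1)          # all units lock to one driver

print(joint_entropy_bits(independent))   # ~8 bits: full capacity
print(joint_entropy_bits(synchronized))  # ~1 bit: perfectly redundant
```

Perfect synchrony collapses 2^N possible messages down to 2, which is why I’d expect valence coding to need something subtler than global lockstep.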
Again, not evidence for anything, but seizures can apparently be incredibly blissful, so it all depends. STV proponents would probably say that depending on the subnetworks involved and the particular synchronicities in the firing patterns, it could be a pleasant seizure or an unpleasant one …
They can be blissful or terrifying depending on where in the brain they occur. I thought it was pretty well understood that locality is what determines the experience, not the harmonics of the seizure. Even if harmonics have something to do with it, I wouldn’t say that experiences during seizures are evidence in favor of STV.
I really appreciate you putting it like this, and endorse everything you wrote.
I think sometimes researchers can get too close to their topics and collapse many premises and steps together; they sometimes sort of ‘throw away the ladder’ that got them where they are, to paraphrase Wittgenstein. This can make it difficult to communicate to some audiences. My experience on the forum this week suggests this may have happened to me on this topic. I’m grateful for the help the community is offering on filling in the gaps.
This is a message I received in private conversation by someone who I trust reasonably highly in terms of general epistemics. I’m reposting it here because it goes against the general “vibe” of the EAF and it’s good to get well-informed contrarian opinions.
Just a quick comment in terms of comment flow: there’s been a large amount of editing of the top comment, and some of the replies that have been posted may not seem to follow the logic of the comment they’re attached to. If there are edits to a comment that you wish me to address, I’d be glad if you made a new comment. (If you don’t, I don’t fault you but I may not address the edit.)
To be clear, the comment flow was originally disrupted because Mike deleted one of his comments. Then some of his comments got buried under so many downvotes that they’re hidden. I edited my top post to try to partially address this.
Disclaimer: I’m not very familiar with either QRI’s research or neuroscience, but in the spirit of Cunningham’s Law:
QRI’s research seems to be predicated on the idea that moral realism and hedonistic utilitarianism are true. I’m very skeptical about both of these, and I think QRI’s time would be better spent working on the question of whether these starting assumptions are true in the first place.
Hi Samuel,
I’d say there’s at least some diversity of views on these topics within QRI. When I introduced STV in PQ, I very intentionally did not frame it as a moral hypothesis. If we’re doing research, best to keep the descriptive and the normative as separate as possible. If STV is true it may make certain normative frames easier to formulate, but STV itself is not a theory of morality or ethics.
One way to put this is that when I wear my philosopher’s hat, I’m most concerned with understanding what the ‘natural kinds’ (in Plato’s terms) of qualia are. If valence is a natural kind (similar to how photons or electromagnetism are natural kinds), that’s important knowledge about the structure of reality. My sense is that ‘understanding what reality’s natural kinds are’ is prior to ethics: first figure out what is real, and then everything else (such as ethics and metaethics) becomes easier.
In terms of specific ethical frames, we do count among QRI some deeply committed hedonistic utilitarians. I see deep value in that frame although I would categorize myself as closer to a virtue ethicist.
Thanks for the response. I guess I find the idea that there is such a thing as a platonic form of qualia or valence highly dubious.
A simple thought experiment: for any formal description of “negative valence,” you could build an agent that acts to maximize this “negative valence” form yet, viewed from the outside, behaves exactly like a human maximizing happiness (something like a “philosophical masochist”). It seems to me that it’s impossible to define positive and negative valence independently of the environment the agent is embedded in.
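For concreteness, here is a minimal sketch of that thought experiment (my own toy construction, not Samuel’s or QRI’s): two agents share one policy and so are behaviourally indistinguishable, but one’s internal valence signal is the negation of the other’s.

```python
from dataclasses import dataclass
from typing import Callable

def shared_policy(observation: float) -> str:
    """Both agents act identically: approach apparently rewarding stimuli."""
    return "approach" if observation > 0 else "avoid"

@dataclass
class Agent:
    name: str
    valence: Callable[[float], float]  # internal signal, invisible from outside

    def step(self, observation: float):
        return shared_policy(observation), self.valence(observation)

hedonist = Agent("hedonist", valence=lambda obs: obs)
masochist = Agent("masochist", valence=lambda obs: -obs)  # sign-flipped inside

for obs in (1.0, -0.5):
    print(obs, hedonist.step(obs), masochist.step(obs))
# Same action every time; opposite internal valence. No purely behavioural
# test distinguishes the two, which is the point of the thought experiment.
```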
Hi Samuel, I think it’s a good thought experiment. One prediction I’ve made is that one could build such an agent, but it would be deeply computationally suboptimal: it would be a system that maximizes disharmony/dissonance internally, but seeks out consonant patterns externally. Possible to build, but definitely an AI-complete problem.
Just as an idle question, what do you suppose the natural kinds of phenomenology are? I think this can be a generative place to think about qualia in general.
I disagree that QRI’s comparative advantage, such as it is, is figuring out the correctness of moral realism or hedonistic utilitarianism. “Your philosophers were so preoccupied with whether or not they should, they didn’t even stop to think if they could.”
You’re right. The questions of moral realism and hedonistic utilitarianism do make me skeptical about QRI’s research (as I currently understand it), but doing research starting from uncertain premises definitely can be worthwhile.