The Symmetry Theory of Valence sounds wrong to me and is not substantiated by any empirical research I am aware of. (Edited to be nicer.) I’m sorry to post a comment so negative and non-constructive, but I just don’t want EA people to read this and think it is something worth spending time on.
As far as I can tell, nobody at the Qualia Research Institute has a PhD in Neuroscience or has industry experience doing equivalent-level work. Keeping in mind that credentialism is bad, I am still pointing out their lack of neuroscience credentials because I am confused by how overwhelmingly confident they are in their claims, their incomprehensible use of neuro jargon, and how dismissive they are of my expertise. (Edited to be nicer.) https://www.qualiaresearchinstitute.org/team
There are a lot of things I don’t understand about STV, but the primary one is:
If there is dissonance in the brain, there is suffering; if there is suffering, there is dissonance in the brain. Always.
Please provide evidence that “dissonance in the brain” as measured by a “Consonance Dissonance Noise Signature” is associated with suffering. This should be an easy study to run: put people in an fMRI scanner, ask them to do things that make them feel suffering/feel peaceful, and see how the CDNS changes between conditions. I’m willing to change my skepticism about this theory if you have this evidence, but if you do, it seems bizarre that you don’t lead with it. _________________________________________________________ Edit: I have asked multiple times for empirical evidence to support these claims, but Mike Johnson has not produced anything.
I wish I could make more specific criticisms about why his theory makes no sense theoretically, but so much of what he is saying is incomprehensible that it’s hard to know where to start. Here’s a copy-paste of something he said in a comment that got buried below about why suffering == harmonic dissonance:
A simple story here is that dissonant systems shake themselves apart; we can draw a parallel between dissonance in the harmonic frame and free energy in the predictive coding frame.
He’s using “predictive coding frame” as fancy jargon here, in what I’m guessing is a reference to Karl Friston’s free-energy principle work. Even knowing the context and definitions of these words, his explanation still makes no sense.
All he is doing here is saying that the brain has some reason to reduce “dissonance in the harmonic frame” in a similar way it has reasons to reduce prediction errors (i.e., mistakes in the brain’s predictions of what will happen in an environment). There are good reasons why the brain should reduce prediction errors. Mike offers no clear explanation for why the brain would have a reason to reduce neural asynchrony/“dissonance in the harmonic frame”. His implicit explanation is that dissonance == suffering, but… WHY? There is no evidence to support this.
He says “Dissonant systems shake themselves apart.” Is he saying dissonant neural networks destroy themselves and we subjectively perceive this as suffering? This makes no theoretical sense AND there’s no evidence to support it.
Hi, all. Talk is cheap, and EA Forum karma may be insufficiently nuanced to convey substantive disagreements.
I’ve taken the liberty to sketch out several forecasting questions that might reflect underlying differences in opinion. Interested parties may wish to forecast on them (which the EA Forum should allow you to do directly, at least on desktop) and then make bets accordingly.
Feel free to also counterpropose (and make!) other questions if you think the existing question operationalizations are not sufficient (I’m far from knowledgeable in this field!).
Methods papers tend to be among the most highly cited, and e.g. Selen Atasoy’s original work on CSHW has been cited 208 times, according to Google Scholar. Some more recent papers are at significantly less than 100, though this may climb over time.
Anyway my sense is (1) is possible but depends on future direction, (2) is unlikely, (3) is likely, (4) is unlikely (high confidence).
Perhaps a better measure of success could be expert buy-in. I.e., does QRI get endorsements from distinguished scientists who themselves fit criteria (1) and/or (2)? Likewise, technological usefulness, e.g. has STV directly inspired the creation of some technical device that is available to buy or is used in academic research labs? I’m much more optimistic about these criteria than citation counts, and by some measures we’re already there.
Note that the 2nd question is about total citations rather than those of one paper, and 3k citations doesn’t seem that high if you’re introducing an entirely new subfield (which is roughly what I’d expect if STV is true). The core paper of Friston’s free energy principle has almost 5,000 citations, for example, and it seems from the outside that STV (if true) ought to be roughly as big a deal as free energy.
For a sense of my prior beliefs about EA-encouraged academic subfields, I think 3k citations in 10 years is an unlikely but not insanely high target for wild animal welfare (maybe 20-30%?), and AI risk is likely already well beyond that (e.g., >1k citations for Concrete Problems alone).
I’d say that’s a fair assessment — one wrinkle that isn’t a critique of what you wrote, but seems worth mentioning, is that it’s an open question whether these are the metrics we should be optimizing for. If we were part of academia, citations would be the de facto target, but we have different incentives (we’re not trying to impress tenure committees). That said, the more citations the better, of course.
As you say, if STV is true, it would essentially introduce an entirely new subfield. It would also have implications for areas like AI safety, and those may outweigh its academic impact. The question we’re looking at is how to navigate questions of support, utility, and impact here: do we put our (unfortunately rather small) resources toward academic writing, and will that get us to the next step of support? Do we put more visceral real-world impact first (can we substantially improve people’s lives? How much and how many?)? Or do we go all out towards AI safety?
It’s of course possible to be wrong; I also understand it’s possible to be right but take the wrong strategic path and run out of gas. Basically, I’m a little worried that racking up academic metrics like citations is less of a panacea than it might appear, and we’re looking to hedge our bets here.
For what it’s worth, we’ve been interfacing with various groups working on emotional wellness neurotech and one internal metric I’m tracking is how useful a framework STV is to these groups; here’s Jay Sanguinetti explaining STV to Shinzen Young (first part of the interview):
I think of the metrics I mentioned above as proxies rather than as the underlying targets, which is some combination of:
a) Is STV true?
b) Conditional upon STV being true, is it useful?
What my forecasting questions aimed to do is shed light on a). I agree that academia and citations aren’t the best proxy. They may in some cases have a conservatism bias (I think trusting the apparent academic consensus on AI risk in 2014 would’ve been a mistake for early EAs), but are also not immune to falsities/crankery (cf. the replication crisis). In addition, standards for truth and usefulness are different within EA circles than in academia, partially because we are trying to answer different questions.
This is especially an issue as the areas that QRI is likely to interact with (consciousness, psychedelics) seem from the outside to be more prone than average to false claims and motivated cognition, including within academia.
This is what I was trying to get at with “will Luke Muelhauser say statements to the effect that the Symmetry Theory of Valence is substantively true?” because Luke is a non-QRI-affiliated person within EA who is a) respected and b) has thought about concepts adjacent to QRI’s work. Bearing in mind that Luke is very far from a perfect oracle, I would still trust his judgement on this more than that of an arbitrarily selected academic in an adjacent field.
I think the actual question I’m interested in is something like “In X year, will a panel of well-respected EAs who are a) not affiliated with QRI, b) have very different views from each other, and c) have thought about things adjacent to QRI’s work, have updated to believing STV to be substantively true?” but I was unable to come up with a clean question operationalization in the relatively brief amount of time I gave myself to come up with this.
People are free to counterpropose and make their own questions.
Hi Linch, that’s very well put. I would also add a third possibility (c), which is “is STV false but generative?” — I explore this a little here, with the core thesis summarized in a graphic there.
I.e., STV could be false in a metaphysical sense, but insofar as the brain is a harmonic computer (a strong reframe of CSHW), it could be performing harmonic gradient descent. Fully expanded, there would be four cases:
STV true, STHR true
STV true, STHR false
STV false, STHR true
STV false, STHR false
Of course, ‘true and false’ are easier to navigate if we can speak of absolutes; STHR is a model, and ‘all models are wrong; some are useful.’
For what it’s worth, I read this comment as constructive rather than non-constructive.
If I write a long report and an expert in the field thinks that the entire premise is flawed for specific technical reasons, I’d much rather they point this out than worry about niceness and never get around to mentioning it, thus causing my report to languish in obscurity without my knowing why (or worse, causing my false research to actually be used!)
I’m a bit hesitant to upvote this comment given how critical it is [was] + how little I know about the field (and thus whether the criticism is deserved), but I’m a bit relieved/interested to see I wasn’t the only one who thought it sounded really confusing/weird. I have somewhat skeptical priors towards big theories of consciousness and suffering (sort of/it’s complicated) + towards theories that rely on lots of complicated methods/jargon/theory (again, sort of/with caveats)—but I also know very little about this field and so I couldn’t really judge. Thus, I’m definitely interested to see the opinions of people with some experience in the field.
Hi Harrison, appreciate the remarks. My response would be more-or-less an open-ended question: do you feel this is a valid scientific mystery? And, what do you feel an answer would/should look like? I.e., correct answers to long-unsolved mysteries might tend to be on the weird side, but there’s “useful generative clever weird” and “bad wrong crazy timecube weird”. How would you tell the difference?
Haha, I certainly wouldn’t label what you described/presented as “timecube weird.” To be honest, I don’t have a very clear-cut set of criteria, and upon reflection it’s probable that the prior is a bit over-influenced by my experiences with some social science research and theory as opposed to hard science research/theory. Additionally, it’s not simply that I’m skeptical of whether the conclusion is true; more generally, my skepticism heuristics for research are about whether whatever is being presented is A) novel/in contrast with existing theories or intuitions; B) true; and/or C) useful. For example, some theory might basically rehash what existing research has already come to consensus on, simply worded in a very different way that adds little to existing research (aside from complexity); alternatively, something could just be flat-out wrong; alternatively, something could be technically true and novel as explicitly written but not very useful (e.g., tautological definitions), whereas the common interpretation is wrong (but would be useful if it were right).
Still, two of the key features here that contributed to my mental yellow flags were:
The emphasis on jargon and seemingly ambiguous concepts (e.g., “harmony”) vs. a clear, lay-oriented narrative that explains the theory—crucially including how it differs from other plausible theories (in addition to “why should you believe this? / how did we test this?”). STEM jargon definitely seems different from social science jargon, in that STEM jargon seems to more often require more knowledge/experience to get a sense of whether something is nonsense strung together or just legitimate-but-complicated analysis, whereas I can much more easily detect nonsense in social science work when it starts equivocating between ideas and making broad generalizations.
(To a lesser extent) The emphasis on mathematical analyses and models for something that seemed to call for a broader approach/acceptance of some ambiguity. (Of course, it’s necessary to mathematically represent some things, but I’m a bit skeptical of systems that try to break down such complex concepts as consciousness and affective experience into a mathematical/quantified representation, just like how I’ve been skeptical of many attempts to measure/operationalize complex conceptual variables like “culture” or “polity” in some social sciences, even if I think doing so can be helpful relative to doing nothing—so long as people still are very clear-eyed about the limitations of the quantification)
In the end, I don’t have strong reason to believe that what you are arguing for is wrong, but especially given points like those I just mentioned, I haven’t updated my beliefs much in any direction after reading this post.
Hi Harrison, that’s very helpful. I think it’s a challenge to package fairly technical and novel research into something that’s both precise and intuitive. Definitely agree that “harmony” is an ambiguous concept.
One of the interesting aspects of this work is it does directly touch on issues of metaphysics and ontology: what are the natural kinds of reality? What concepts ‘carve reality at the joints’? Most sorts of research can avoid dealing with these questions directly, and just speak about observables and predictions. But since part of what we’re doing is to establish valence as a phenomenological natural kind, we have to make certain moves, and these moves may raise certain yellow flags, as you note, since often when these moves are made there’s some philosophical shenanigans going on. That said, I’m happy with the overall direction of our work, which has been steadily more and more empirical.
One takeaway that I do hope I can offer is the deeply philosophically unsatisfactory nature of existing answers in this space. Put simply, no one knows what pleasure and suffering are, or at least no one has definitions that are coherent across all the domains they’d like to be able to define them in. This is an increasing problem as we tackle e.g. problems of digital sentience and fundamental questions of AI alignment. I’m confident in our research program, but even more confident that the questions we’re trying to grapple with are important to address directly, and that there’s no good ‘default hypothesis’ at present.
People are asking for object-level justifications for the Symmetry Theory of Valence:
The first thing to mention is that the Symmetry Theory of Valence (STV) is *really easy to strawman*. It really is the case that there are many near enemies of STV that sound exactly like what a naïve researcher who is missing developmental stages (e.g. is a naïve realist about perception) would say. That we like pretty symmetrical shapes of course does not mean that symmetry is at the root of valence; that we enjoy symphonic music does not mean harmony is “inherently pleasant”; that we enjoy nice repeating patterns of tactile stimulation does not mean, well, you get the idea...
The truth of course is that at QRI we really are meta-contrarian intellectual hipsters. So the weird and often dumb-sounding things we say are already taking into account the criticisms people in our people-cluster would make and are taking the conversation one step further. For instance, we think digital computers cannot be conscious, but this belief comes from entirely different arguments than those that justify such beliefs out there. We think that the “energy body” is real and important, except that we interpret it within a physicalist paradigm of dynamic systems. We take seriously the possible positive sum game-theoretical implications of MDMA, but not out of a naïve “why can’t we all love each other?” impression, but rather, based on deep evolutionary arguments. And we take seriously non-standard views of identity, not because “we are all Krishna”, but because the common-sense view of identity turns out to, in retrospect, be based on illusion (cf. Parfit, Kolak, “The Future of Personal Identity”) and a true physicalist theory of consciousness (e.g. Pearce’s theory) has no room for enduring metaphysical egos. This is all to say that straw-manning the paradigms explored at QRI is easy; steelmanning them is what’s hard. Can anyone here make a Titanium Man out of them instead? :-)
Now, I am indeed happy to address any mischaracterization of STV. Sadly, to my knowledge nobody outside of QRI really “gets it”, so I don’t think there is anyone other than us (and possibly Scott Alexander!) who can make a steelman of STV. My promise is that “there is something here” and that to “get it” is not merely to buy into the theory blindly, but rather, it is what happens when you give it enough benefit of the doubt, share a sufficient number of background assumptions, and have a wide enough experience base that it actually becomes a rather obvious “good fit” for all of the data available.
For a bit of history (and properly giving due credit), I should clarify that Michael Johnson is the one who came up with the hypothesis in Principia Qualia (for a brief history see: STV Primer). I started out very skeptical of STV myself, and in fact it took about three years of thinking it through in light of many meditation and exotic high-energy experiences to be viscerally convinced that it’s pointing in the right direction. I’m talking about a process of elimination where, for instance, I checked if what feels good is at the computational level of abstraction (such as prediction error minimization) or if it’s at the implementation level (i.e. dissonance). I then developed a number of technical paradigms for how to translate STV into something we could actually study in neuroscience and ultimately try out empirically with non-invasive neurotech (in our case, light-sound-vibration systems that produce multi-modally coherent high-valence states of consciousness). Quintin Frerichs (who gave a presentation about Neural Annealing to Friston) has since been working hard on the actual neuroscience of it in collaboration with Johns Hopkins University, Daniel Ingram, Imperial College and others. We are currently testing the theory in a number of ways and will publish a large paper based on all this work.
For clarification, I should point out that what is brilliant (IMO) about Mike’s Principia Qualia is that he breaks down the problem of consciousness in such a way that it allows us to divide and conquer the hard problem of consciousness. Indeed, once broken down into his 8 subproblems, calling it the “hard problem of consciousness” sounds as bizarre as it would sound to us to hear about “the hard problem of matter”. We do claim that if we are able to solve each of these subproblems, the hard problem will indeed dissolve. Not the way illusionists would have it (where the very concept of consciousness is problematic), but rather, in the way that electricity and lightning and magnets all turned out to be explained by just 4 simple equations of electromagnetism. Of course the further question of why those equations exist and why consciousness follows such laws remains, but even that could IMO be fully explained with the appropriate paradigm (cf. Zero Ontology).
The main point to consider here w.r.t. STV is that symmetry is posited to be connected with valence at the implementation level of analysis. This squarely and clearly distinguishes STV from behaviorist accounts of valence (e.g. “behavioral reinforcement”) and also from algorithmic accounts (e.g. compression drive or prediction error minimization). Indeed, with STV you can have a brain (perhaps a damaged brain, or one in an exotic state of consciousness) where prediction errors are not in fact connected to valence. Rather, the brain evolved to recruit valence gradients in order to make better predictions. Similarly, STV predicts that what makes activation of the pleasure centers feel good is precisely that doing so gives rise to large-scale harmony in brain activity. This is exciting because it means the theory predicts we can actually observe a double dissociation: if we inhibit the pleasure centers while exogenously stimulating large-scale harmonic patterns we expect that to feel good, and we likewise expect that even if you activate the pleasure centers you will not feel good if something inhibits the large-scale harmony that would typically result. Same with prediction errors, behavior, etc.: we predict we can doubly-dissociate valence from those features if we conduct the right experiment. But we won’t be able to dissociate valence from symmetry in the formalism of consciousness.
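To make the double-dissociation prediction concrete, here is a minimal statistical sketch of such an experiment (purely illustrative: the condition labels and ratings below are placeholders, not a real protocol or real data):

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 design: pleasure centers (stimulated vs. inhibited)
# crossed with large-scale harmony (enhanced vs. suppressed). STV predicts
# that reported valence tracks the harmony factor, not the pleasure-center
# factor. The ratings below are placeholders.
valence = {
    ("pc_stim", "harm_enh"): np.array([7.2, 6.8, 7.5]),
    ("pc_stim", "harm_sup"): np.array([3.1, 2.9, 3.4]),
    ("pc_inhib", "harm_enh"): np.array([6.5, 6.9, 6.2]),
    ("pc_inhib", "harm_sup"): np.array([2.8, 3.0, 2.6]),
}

def main_effect(pooled_a, pooled_b):
    # Two-sample t-test between the pooled cells of two factor levels.
    return stats.ttest_ind(pooled_a, pooled_b)

# Main effect of harmony, collapsing over the pleasure-center factor:
harmony = main_effect(
    np.concatenate([valence[("pc_stim", "harm_enh")], valence[("pc_inhib", "harm_enh")]]),
    np.concatenate([valence[("pc_stim", "harm_sup")], valence[("pc_inhib", "harm_sup")]]),
)
# Main effect of pleasure-center stimulation, collapsing over harmony:
pleasure = main_effect(
    np.concatenate([valence[("pc_stim", "harm_enh")], valence[("pc_stim", "harm_sup")]]),
    np.concatenate([valence[("pc_inhib", "harm_enh")], valence[("pc_inhib", "harm_sup")]]),
)
# STV expects the harmony effect to be large and the pleasure-center
# effect to vanish once harmony is controlled for.
```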
Now, of course we currently can’t see consciousness directly, but we can infer a lot of invariants about it with different “projections”, and so far all are consistent with STV.
Of special note, I’d point you to one of the studies discussed in the 2020 STV talk: The Human Default Consciousness and Its Disruption: Insights From an EEG Study of Buddhist Jhāna Meditation. It shows a very tight correspondence between jhanas and various smoothly-repeating EEG patterns, including seizure-like activity that, unlike normal seizures (typically of bad valence), shows up as having a *harmonic structure*. Here we find a beautiful correspondence between (a) sense of peace/jhanic bliss, (b) phenomenological descriptions of simplicity and smoothness, (c) valence, and (d) actual neurophysiological data mirroring these phenomenological accounts. At QRI we have observed something quite similar studying the EEG patterns of other ultra-high-valence meditation states (which we will hopefully publish in 2022). I expect this pattern to hold for other exotic high-valence states in one way or another, ranging from quality of orgasm to exogenous opioids.
Phenomenologically speaking, STV is not only capable of describing and explaining why certain meditation or psychedelic states of consciousness feel good or bad; it can in fact be used as a navigation aid! You can introspect on the ways energy does not flow smoothly, on the blockages and pinch points that make it reflect in discordant ways, or zone in on areas of the “energy body” that are out of sync with one another, and then specifically use attention in order to “comb the field of experience”. This approach—the purely secular climbing of the harmony gradient—leads all on its own to amazing high-valence states of consciousness (cf. Buddhist Annealing). I’ll probably make a video series with meditation instructions so people can experience this for themselves firsthand. It doesn’t take very long, actually. Also, STV as a paradigm can be used to experience more pleasant trajectories along the “Energy X Complexity landscape” of a DMT trip (something I even talked about at the SSC meetup online!). In a simple quip, I’d say “there are good and bad ways of vibing on DMT, and STV gives you the key to the realms of good vibes” :-)
Another angle: we can find subtle ways of dissociating valence from e.g. chemicals: if you take stimulants but don’t feel the nice buzz that provides a “working frame” for your mental activity, they will not feel good. At the same time, without stimulants you can get that pleasant productivity-enhancing buzz with the right tactile patterns of stimulation. Indeed, this “buzz” that characterizes the effects of many euphoric drugs (and the quality of e.g. metta meditation) is precisely a valence effect, one that provides a metronome to self-organize around and which can feel bad when you don’t follow where it takes you. Literally, one of the core reasons MDMA feels better than LSD, which feels better than DOB, is that the “quality of the buzz” of each of these highs is different. MDMA’s buzz is beautiful and harmonious; DOB’s buzz is harsh and dissonant. What’s more, such a buzz can work as task-specific dissonance guide-rails, if you will: when you do buzz-congruent behaviors you feel a sense of inner harmony, whereas when you do buzz-incongruent behaviors you feel a sense of inner turmoil. Hence what kind of buzz one experiences is deeply consequential! All of this falls rather nicely within STV—IMO other theories need to keep adding epicycles to keep up.
Hopefully this all serves as useful clarification.
My promise is that “there is something here” and that to “get it” is not merely to buy into the theory blindly, but rather, it is what happens when you give it enough benefit of the doubt, share a sufficient number of background assumptions, and have a wide enough experience base that it actually becomes a rather obvious “good fit” for all of the data available.
I started out very skeptical of STV myself, and in fact it took about three years of thinking it through in light of many meditation and exotic high-energy experiences to be viscerally convinced that it’s pointing in the right direction.
It sounds like you’re saying we all need to become more suggestible and just feel like your theory is true before we can understand it. Do you see what poor reasoning that would be?
I take Andrés’s point to be that there’s a decently broad set of people who took a while to see merit in STV, but eventually did. One can say it’s an acquired taste, something that feels strange and likely wrong at first, but is surprisingly parsimonious across a wide set of puzzles. Some of our advisors approached STV with significant initial skepticism, and it took some time for them to come around. That there are at least a few distinguished scientists who like STV isn’t proof it’s correct, but may suggest withholding some forms of judgment.
Edit: This comment now makes less sense, given that Abby has revised the language of her comment.
Abby,
I strongly endorse what you say in your last paragraph:
Please provide evidence that “dissonance in the brain” as measured by a “Consonance Dissonance Noise Signature” is associated with suffering. … I’m willing to change my skepticism about this theory if you have this evidence.
However, I’d like to push back on the tone of your reply. If you’re sorry for posting a negative non-constructive comment, why not try to be a bit more constructive? Why not say something like “I am deeply skeptical of this theory and do not at this moment think it’s worth EAs spending time on. [insert reasons]. I would be willing to change my view if there was evidence.”
Apologies for being pedantic, but I think it’s worth the effort to try and keep the conversation on the forum as constructive as possible!
I think context is important here. This is not an earnest but misguided post from an undergrad with big ideas and little experience. This is a post from an organization trying to raise hundreds of thousands of dollars. You can check out their website if you want, the front page has a fundraising advertisement.
Further, there are a lot of fancy buzzwords in this post (“connectome!”) and enough jargon that people unfamiliar with the topic might think there is substance here that they just don’t understand (see Harrison’s comment: “I also know very little about this field and so I couldn’t really judge”).
As somebody who knows a lot about this field, I think it’s important that my opinion on these ideas is clearly stated. So I will state it again.
There is no a priori reason to believe any of the claims of STV. There is no empirical evidence to support STV. To an expert, these claims do not sound “interesting and plausible but unproven”, they sound “nonsensical and presented with baffling confidence”.
People have been observing brain oscillations at different frequencies and at different powers for about 100 years. These oscillations have been associated with different patterns of behavior, ranging from sleep stages to memory formation. Nobody has observed asynchrony to be associated with anything like suffering (as far as I’m aware, but please present evidence if I’m mistaken!).
fMRI is a technique that doesn’t measure the firing of neurons (it measures the oxygen consumed over relatively big patches of neurons) and is extremely poorly suited to provide evidence for STV. A better method would be MEG (expensive) or EEG (extremely affordable). If the Qualia Research Institute was a truth seeking institution, they would have either run the simple experiment I proposed themselves, or had any of the neuroscientists they claim to be collaborating with run it for them.
This is a bad post and it should be called out as such. I would have been more gentle if this was a single misguided researcher and not the head of an organization that publishes a lot of other nonsense too.
This is a post from an organization trying to raise hundreds of thousands of dollars.
...
If the Qualia Research Institute was a truth seeking institution, they would have either run the simple experiment I proposed themselves, or had any of the neuroscientists they claim to be collaborating with run it for them.
This reads to me as insinuating fraud, without much supporting evidence.
This is a bad post and it should be called out as such. I would have been more gentle if this was a single misguided researcher and not the head of an organization that publishes a lot of other nonsense too.
I appreciate that in other comments you followed up with more concrete criticisms, but this still feels against the “Keep EA Weird” spirit to me. If we never spend a million or two on something that turns out to be nonsense, we aren’t applying hits-based giving very well.
(Despite the username, I have no affiliation with QRI. I’ll admit to finding the problem worth working on.)
Hi all, I messaged with Holly a bit about this, and what she shared was very helpful. I think a core part of what happened was a mismatch of expectations: I originally wrote this content for my blog and QRI’s website, and the tone and terminology were geared toward “home team content”, not “away team content”. Some people found both the confidence and the somewhat dense terminology off-putting, and I think it’s reasonable of them to raise questions. As a takeaway, I’ve updated that crossposting involves some pitfalls and intend to do things differently next time.
Thanks, valence. I do think the ‘hits-based giving’ frame is important to develop, although I understand it doesn’t have universal support, as some of the implications may be difficult to navigate.
And thanks for appreciating the problem; it’s sometimes hard for me to describe how important the topic feels and all the reasons for working on it.
I am comfortable calling myself “somebody who knows a lot about this field”, especially in relation to the average EA Forum reader, our current context.
I respect Karl Friston as well, I’m looking forward to reading his thoughts on your theory. Is there anything you can share?
The CSHW stuff looks potentially cool, but it’s separate from your original theory, so I don’t want to get too deep into it here. The only thing I would say is that I don’t understand why the claims of your original theory cannot be investigated using standard (cheap) EEG techniques. This is important if a major barrier to finding empirical evidence for your theory is funding. Could you explain why standard EEG is insufficient to investigate the synchrony of neuronal firing during suffering?
I was very aggressive with my criticism of your theory, partially because I think it is wrong (again, the basis of your theory, “the symmetry of this representation will encode how pleasant the experience is”, makes no sense to me), but also because of how confidently you describe your theory with no empirical evidence. So I happily accept being called arrogant and would also happily accept being shown how I am wrong. My tone is in reaction to what I feel is your unfounded confidence, and other posts like “I think all neuroscientists, all philosophers, all psychologists, and all psychiatrists should basically drop whatever they’re doing and learn Selen Atasoy’s “connectome-specific harmonic wave” (CSHW) framework.” https://opentheory.net/2018/08/a-future-for-neuroscience/
You link to your other work in this post, and are raising money for your organization (which I think will redirect money from organizations that I think are doing more effective work), so I think it’s fair for my comments to be in reaction to things outside the text of your original post.
I’m glad to hear you feel good about your background and are filled with confidence in yourself and your field. I think the best work often comes from people who don’t at first see all the challenges involved in doing something, because often those are the only people who even try.
At first I was a little taken aback by your tone, but to be honest I’m a little amused by the whole interaction now.
The core problem with EEG is that the most sophisticated analyses depend on source localization (holographic reconstruction of brain activity), and accurate source localization from EEG remains an unsolved problem, at least at the resolution and confidence we’d need. In particular, we’ve looked at various measures of coherence as applied to EEG and found them all wanting in various ways. I notice some backtracking on your criticism of CSHW. ;) It’s a cool method, not without downsides, but it occupies a cool niche. I have no idea what your research is about, but it might be useful for you to learn about for some purposes.
I’m glad you’re reading more of our ‘back issues’, as it were. We have some talks on our YouTube channel as well (including the NA presentation to Friston), although not all of our work on STV is public yet.
If you share what your research is about, and any published work, I think it’d help me understand where your critiques are coming from a little better. Totally up to you though.
Hi Jpmos, really appreciate the comments. To address the question of evidence: this is a fairly difficult epistemological situation, but we’re working with high-valence datasets from Daniel Ingram & Harvard and from Imperial College London (jhana data and MDMA data, respectively), and looking for signatures of high harmony.
Neuroimaging is a pretty messy thing; there are no shortcuts to denoising data, and we are highly funding-constrained, so I’m afraid we don’t have any peer-reviewed work published on this yet. I can say that initial results seem fairly promising and we hope to have something under review in 6 months. There is a wide range of tacit evidence that stimulation patterns with higher internal harmony produce higher valence than dissonant patterns (basically: music feels good, nails on a chalkboard feels bad), but this is in a sense ‘obvious’ and only circumstantial evidence for STV.
Happy to ‘talk shop’ if you want to dig into details here.
Hi Abby, I’m happy to entertain well-meaning criticism, but it feels like your comment rests fairly heavily on credentialism and does not seem to offer any positive information, nor does it feel like high-level criticism (“their actual theory is also bad”). If your background is as you claim, I’m sure you understand the nuances of “proving” an idea in neuroscience, especially with regard to NCCs (neural correlates of consciousness) — neuroscience is also large enough that “I published a peer-reviewed fMRI paper in a mainstream journal” isn’t a particularly ringing endorsement of domain knowledge in affective neuroscience. If you do have domain knowledge sufficient to take a crack at the question of valence, I’d be glad to hear your ideas.
For a bit of background to theories of valence in neuroscience I’d recommend my forum post here—it goes significantly deeper into the literature than this primer.
Again, I’m not certain you read my piece closely, but as mentioned in my summary, most of our collaboration with British universities has been with Imperial (Robin Carhart-Harris’s lab, though he recently moved to UCSF) rather than Oxford, although Kringelbach has a great research center there and Atasoy (creator of the CSHW reference implementation, which we independently reimplemented) does her research there, so we’re familiar with the scene.
Hi Mike! I appreciate your openness to discussion even though I disagree with you.
Some questions:
1. The most important question: Why would synchrony between different brain areas involved in totally different functions be associated with subjective wellbeing? I fundamentally don’t understand this. For example, asynchrony has been found to be useful in memory as a way of differentiating similar but distinct memories during encoding/rehearsal/retrieval. Asynchrony doesn’t seem like a bad thing that the brain has a reason to reduce, the way it has reasons to reduce prediction errors. Please link to brain studies that have found asynchrony leads to suffering.
2. If your theory is focused on neural oscillations, why don’t you use EEG to measure the correlation between neural synchrony and subjective experience? Surely EEG is a more accurate method and vastly cheaper than fMRI?
3. If you are funding constrained, why are none of your collaborators willing to run this experiment for you? Running fMRI and EEG experiments at Princeton is free. I see you have multiple Princeton affiliates on your team, and we even have Michael Graziano as a faculty member who is deeply interested in consciousness and understands fMRI.
My advice is to run the experiment I described in my original comment. Put people in an fMRI scanner (or EEG or MEG), ask them to do things that make them feel suffering/feel peaceful, and see how the CDNS changes between conditions. This is an extremely basic experiment and I am confused why you would be so confident in your theory before running this.
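In analysis terms, the comparison is trivial once you have a CDNS value per subject per condition. A minimal sketch (the scores below are placeholders, since the CDNS pipeline itself isn’t public):

```python
from scipy import stats

# Placeholder CDNS values, one per subject per condition; a real analysis
# would compute these from the scans with QRI's (non-public) pipeline.
cdns_suffering = [0.42, 0.38, 0.51, 0.47, 0.35]
cdns_peaceful = [0.61, 0.58, 0.66, 0.53, 0.60]

# Within-subject (paired) test: does CDNS differ between conditions?
t, p = stats.ttest_rel(cdns_suffering, cdns_peaceful)
print(f"t = {t:.2f}, p = {p:.3f}")
```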
Hi Abby, thanks for the clear questions. In order:
In brief, asynchrony levies a complexity and homeostatic cost that harmony doesn’t. A simple story here is that dissonant systems shake themselves apart; we can draw a parallel between dissonance in the harmonic frame and free energy in the predictive coding frame.
We work with all the high-quality data we can get our hands on. We do have hd-EEG data of jhana meditation, but EEG data as you may(?) know is very noisy and ‘NCC-style’ research with EEG is a methodological minefield.
We know and like Graziano. I’ll share the idea of using Princeton facilities with the team.
To be direct, years ago I felt as you did about the simplicity of the scientific method in relation to neuroscience; “Just put people in an fMRI, have them do things, analyze the data; how hard can it be?” — experience has cured me of this frame, however. I’ve learned that neuroimaging data pipelines are often held together by proverbial duct tape, neuroimaging is noisy, the neural correlates of consciousness frame is suspect and existing philosophy of mind is rather bonkers, and to even say One True Thing about the connection between brain and mind is very hard (and expensive) indeed. I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD, and I hope you can turn that into determination to refactor the system towards elegance, rather than being progressively discouraged by all the hidden mess.
In brief, asynchrony levies a complexity and homeostatic cost that harmony doesn’t. A simple story here is that dissonant systems shake themselves apart; we can draw a parallel between dissonance in the harmonic frame and free energy in the predictive coding frame.
I appreciate your direct answer to my question, but I do not understand what you are trying to say. I am familiar with Friston and the free-energy principle, so feel free to explain your theory in those terms. All you are doing here is saying that the brain has some reason to reduce “dissonance in the harmonic frame” (a phrase I have other issues with) in a similar way it has reasons to reduce prediction errors. There are good reasons why the brain should reduce prediction errors. You say (but do not clearly explain why) there’s a parallel here where the brain should reduce neural asynchrony/dissonance in the harmonic frame. You posit neural asynchrony is suffering, but you do not explain why in an intelligible way. “Dissonant systems shake themselves apart.” Are you saying dissonant neural networks destroy themselves and we subjectively perceive this as suffering? This makes no sense. Maybe you’re trying to say something else, but I have made my confusion about the link between suffering and asynchrony extremely clear multiple times now, and you have not offered an explanation that I understand.
I’ve learned that neuroimaging data pipelines are often held together by proverbial duct tape, neuroimaging is noisy, the neural correlates of consciousness frame is suspect and existing philosophy of mind is rather bonkers, and to even say One True Thing about the connection between brain and mind is very hard (and expensive) indeed. I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD, and I hope you can turn that into determination to refactor the system towards elegance, rather than being progressively discouraged by all the hidden mess.
I agree neuroimaging is extremely messy and discouraging, but you’re the one posting about successfully building an fMRI analysis pipeline to run this specific analysis to support your theory. I am very annoyed that your response to my multiple requests for any empirical data to support your theory is you basically saying “science is hard”, as opposed to “no experiment, dataset, or analysis is perfect, but here is some empirical evidence that is at least consistent with my theory.”
I wish you came at this by saying, “Hey I have a cool idea, what do you guys think?” But instead you’re saying “We have a full empirical theory of suffering” with as far as I can tell, nothing to back this up.
I know that this is the EA forum and it’s bad that two people are trading arch emoticons...but I know I’m not the only one enjoying Abby Hoskin’s response to someone explaining her future journey to her.
Inject this into my veins.
Maybe more constructively (?): I think the OP’s responses have updated others in support of Abby’s concerns.
In the past, sometimes I have said things that turned out not to be as helpful as I thought. In those situations, I think I have benefitted from someone I trust reviewing the discussion and offering another perspective to me.
I’m not sure ‘enjoy’ is the right word, but I also noticed the various attempts to patronize Hoskin.
This ranges from the straightforward “I’m sure once you know more about your own subject you’ll discover I am right”:
I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD
‘Well-meaning suggestions’ alongside the implication that her criticism arises from some emotional reaction rather than from her strong and adverse judgement of its merit:
I’m a little baffled by the emotional intensity here but I’d suggest approaching this as an opportunity to learn about a new neuroimaging method, literally pioneered by your alma mater. :)
[Adding a smiley after something insulting or patronizing doesn’t magically make you the ‘nice guy’ in the conversation, but makes you read like a passive-aggressive ass who is nonetheless too craven for candid confrontation. I’m sure once you reflect on what I said and grow up a bit you’ll improve so your writing inflicts less of a tax on our collective intelligence and good taste. I know you’ll make us proud! :)]
Or just straight-up belittling her knowledge and expertise with varying degrees of passive-aggressiveness.
I understand it may feel significant that you have published work using fMRI, and that you hold a master’s degree in neuroscience.
I’m glad to hear you feel good about your background and are filled with confidence in yourself and your field.
I think this sort of smug and catty talking down would be odious even if the OP really did have much more expertise than their critic: I hope I wouldn’t write similarly in response to criticism (however strident) from someone more junior in my own field.
What makes this kinda amusing, though, is although the OP is trying to set himself up as some guru trying to dismiss his critic with the textual equivalent of patting her on the head, virtually any reasonable third party would judge the balance of expertise to weigh in the other direction. Typically we’d take, “Post-graduate degree, current doctoral student, and relevant publication record” over “Basically nothing I could put on an academic CV, but I’ve written loads of stuff about my grand theory of neuroscience.”
In that context (plus the genders of the participants) I guess you could call it ‘mansplaining’.
Greg, I have incredible respect for you as a thinker, and I don’t have a particularly high opinion of the Qualia Research Institute. However, I find your comment to be unnecessarily mean: every substantive point you raise could have been made more nicely and less personal, in a way more conducive to mutual understanding and more focused on an evaluation of QRI’s research program. Even if you think that Michael was condescending or disrespectful to Abby, I don’t think he deserves to be treated like this.
Hmm I have conflicting feelings about this. I think whenever you add additional roadblocks or other limitations on criticism, or suggestions that criticisms can be improved, you
a) see the apparent result that criticisms that survive the process will on average be better.
b) fail to see the (possibly larger) effect that there’s an invisible graveyard of criticisms that people choose not to voice because it’s not worth the hassle.
At the same time, being told that your life work is approximately useless is never a pleasant feeling, and it’s not always reasonable to expect people to handle it with perfect composure (Thankfully nothing of this magnitude has ever happened to me, but I was pretty upset when an EA Forum draft I wrote in only a few days had to be scrapped or at least rewritten because it assumed a mathematical falsehood). So while I think Mike’s responses to Abby are below a reasonable bar of good forum commenting norms, I think I have more sympathy for his feelings and actions here than Greg seems to.
So I’m pretty conflicted. My own current view is that I endorse Abby’s comments and tone as striking the right balance for the forum, and I endorse Greg’s content but not the tone.
But I think reasonable people can disagree here, and we should also be mindful that when we ask people to rephrase substantive criticisms to meet a certain stylistic bar (see also comments here), we are implicitly making criticisms more onerous, which arguably has pretty undesirable outcomes.
Based on how the main critic Abby was treated, how the OP replies to comments in a way that selectively chooses what content to respond to, and the way they respond to direct questions with jargon, I place serious weight on the possibility that this isn’t a good-faith conversation.
This is not a stylistic issue, in fact it seems to be exactly the opposite: someone is taking the form of EA norms and styles (maintaining a positive tone, being sympathetic) while actively undermining someone odiously.
I have been in several environments where this behavior is common.
At the risk of policing or adding to the noise (I am not willing to read more of this to update myself), I am writing this because I am concerned you and others who are conscientious are being sucked into this.
Hi Charles, I think several people (myself, Abby, and now Greg) were put in some pretty uncomfortable positions across these replies. By posting, I open myself to replies, but I was pretty surprised by some of the energy of the initial comments (as apparently were others; both Abby and I edited some of our comments to be less confrontational, and I’m happy with and appreciate that).
Happy to answer any object level questions you have that haven’t been covered in other replies, but this remark seems rather strange to me.
For the avoidance of doubt, I remain entirely comfortable with the position expressed in my comment: I wholeheartedly and emphatically stand behind everything I said. I am cheerfully reconciled to the prospect some of those replying to or reading my earlier comment judge me adversely for it—I invite these folks to take my endorsement here as reinforcing whatever negative impressions they formed from what I said there.
The only thing I am uncomfortable with is that someone felt they had to be anonymous to criticise something I wrote. I hope the measure I mete out to others makes it clear I am happy for similar to be meted out to me in turn. I also hope reasonable folks like the anonymous commenter are encouraged to be forthright when they think I err—this is something I would be generally grateful to them for, regardless of whether I agree with their admonishment in a particular instance. I regret to whatever degree my behaviour has led others to doubt this is the case.
Your responses here are much more satisfying and comprehensible than your previous statements, it’s a bit of a shame we can’t reset the conversation.
2. Another anonymous commentator (thanks to Linch for posting) highlights that Abby’s line of questioning regarding EEGs ultimately resulted in a response satisfactory to her and which she didn’t have the expertise to further evaluate:
If they had given the response that they gave in one of the final comments of the discussion right at the beginning (assuming Abby would have responded similarly), the response to their exchange might have been very different; i.e., I think people would have concluded that they gave a sensible response and were talking about things that Abby didn’t have the expertise to comment on:
_______
Abby Hoskin: If your answer relies on something about how modularism/functionalism is bad: why is source localization critical for your main neuroimaging analysis of interest? If source localization is not necessary: why can’t you use EEG to measure synchrony of neural oscillations?
Mike Johnson: The harmonic analysis we’re most interested in depends on accurately modeling the active harmonics (eigenmodes) of the brain. EEG doesn’t directly model eigenmodes; to infer eigenmodes we’d need fairly accurate source localization. It could be that there are alternative ways to test STV without modeling brain eigenmodes, ones that EEG could give us. I hope that’s the case, and I hope we find it, since EEG is certainly a lot easier to work with than fMRI.
Abby Hoskin: Ok, I appreciate this concrete response. I don’t know enough about calculating eigenmodes with EEG data to predict how tractable it is.
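For context on what “modeling the eigenmodes” involves: in Atasoy’s published CSHW method, the harmonics are the eigenvectors of the graph Laplacian of the structural connectome, and activity is projected onto them. A minimal sketch of that idea (variable names illustrative):

```python
import numpy as np

def connectome_harmonics(A):
    # A: (n, n) symmetric adjacency matrix of the structural connectome
    # (e.g., from DTI tractography). The columns of the returned
    # eigenvector matrix are the connectome harmonics.
    L = np.diag(A.sum(axis=1)) - A  # graph Laplacian L = D - A
    eigenvalues, eigenmodes = np.linalg.eigh(L)
    return eigenvalues, eigenmodes

def mode_power(eigenmodes, X):
    # X: (n, t) regional activity over time. Project activity onto each
    # harmonic and return per-mode power over time.
    return (eigenmodes.T @ X) ** 2
```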
Thanks, but I’ve already seen them. Presuming the implication here is something like “Given these developments, don’t you think you should walk back what you originally said?”, the answer is “Not really, no”: subsequent responses may be better, but that is irrelevant to whether earlier ones were objectionable; one may be making good points, but one can still behave badly whilst making them.
(Apologies if I mistake what you are trying to say here. If it helps generally, I expect—per my parent comment—to continue to affirm what I’ve said before however the morass of commentary elsewhere on this post shakes out.)
Just want to be clear, the main post isn’t about analyzing eigenmodes with EEG data. It’s very funny that when I am intellectually honest enough to say I don’t know about one specific EEG analysis that doesn’t exist and is not referenced in the main text, people conclude that I don’t have expertise to comment on fMRI data analysis or the nature of neural representations.
Meanwhile QRI does not have expertise to comment on many of the things they discuss, but they are super confident about everything and in the original posts especially did not clearly indicate what is speculation versus what is supported by research.
I continue to be unconvinced with the arguments laid out, but I do think both the tone of the conversation and Mike Johnson’s answers improved after he was criticized. (Correlation? Causation?)
Generally speaking, I agree with the aphorism “You catch more flies with honey than with vinegar.”
For what it’s worth, I interpreted Gregory’s critique as an attempt to blow up the conversation and steer away from the object level, which felt odd. I’m happiest speaking of my research, and fielding specific questions about claims.
Hi Gregory, I’ll own that emoticon. My intent was not to belittle, but to show I’m not upset and I‘m actually enjoying the interaction. To be crystal clear, I have no doubt Hoskin is a sharp scientist and cast no aspersions on her work. Text can be a pretty difficult medium for conveying emotions (things can easily come across as either flat or aggressive).
Hi Abby, to give a little more color on the data: we’re very interested in CSHW as it gives us a way to infer harmonic structure from fMRI, which we’re optimistic is a significant factor in brain self-organization. (This is still a live hypothesis, not established fact; Atasoy is still proving her paradigm, but we really like it.)
We expect this structure to be highly correlated with global valence, and to show strong signatures of symmetry/harmony during high-valence states. The question we’ve been struggling with as we’ve been building this hypothesis is “what is a signature of symmetry/harmony?” — there’s a bit of research from Stanford (Chon) on quantifying consonance in complex waveforms, and some cool music theory based on Helmholtz’s work, but this appears to be an unsolved problem. Our “CDNS” approach basically looks at pairwise relationships between harmonics to quantify the degree to which they’re in consonance or dissonance with each other, as sketched below. We’re at the stage where we have the algorithm, but need to validate it on audio samples first before applying it too confidently to the brain.
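Since our actual CDNS implementation isn’t public yet, I can’t paste it here, but to give a flavor of what “pairwise consonance/dissonance between harmonics” means, here is a minimal generic sketch using the standard Plomp-Levelt roughness curve in Sethares’ parameterization (illustrative only, not our algorithm):

```python
import numpy as np

def pair_dissonance(f1, f2, a1, a2):
    # Plomp-Levelt roughness curve, Sethares' parameterization: peak
    # roughness occurs at a frequency difference proportional to the
    # critical bandwidth around the lower-frequency component.
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)
    d = abs(f2 - f1)
    return min(a1, a2) * (np.exp(-3.5 * s * d) - np.exp(-5.75 * s * d))

def total_dissonance(freqs, amps):
    # Sum roughness over all pairs of spectral components.
    return sum(
        pair_dissonance(freqs[i], freqs[j], amps[i], amps[j])
        for i in range(len(freqs))
        for j in range(i + 1, len(freqs))
    )

# Example: a perfect fifth (3:2) scores far lower than a minor second.
fifth = total_dissonance([220.0, 330.0], [1.0, 1.0])
second = total_dissonance([220.0, 233.1], [1.0, 1.0])
```

The same pairwise scoring can in principle be applied to any set of component frequencies and amplitudes, whether from an audio spectrum or from brain harmonics.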
There’s also a question of what datasets are ideal for the sort of thing we’re interested in. Extreme valence datasets are probably the most promising, states of extreme pleasure or extreme suffering. We prefer datasets involving extreme pleasure, for two reasons:
(1) We viscerally feel better analyzing this sort of data than states of extreme suffering;
(2) fMRI’s time resolution is such that the best results will come from mental states with high structural stability. We expect this structural stability to be much higher during pleasure than suffering.
As such we’ve been focusing on collecting data from meditative jhana states, and from MDMA states. There might be other states that involve reliable good emotion that we can study, but these are the best we’ve found conceptually so far.
Lastly, there’s been the issue of neuroimaging pipelines and CSHW. Atasoy’s work is not open source, so we had to reimplement her core logic (big thanks to Patrick here), and we ended up collaborating with an external group on a project to combine this core logic with a neuroimaging packaging system. I can’t share all the details here as our partner doesn’t want to be public about their involvement yet, but this is thankfully wrapping up soon.
I wish we had a bunch of deeply analyzed data we could send you in direct support of STV! I agree with you that that is the ideal, and you’re correct to ask for it. Sadly we don’t have it at this point, but I’m glad to say a lot of the preliminaries have now been taken care of and things are moving. I hope my various comments here haven’t come across as disrespectful (and I sincerely apologize if they have; not my intention, but if that’s been your interpretation I accept it, sorry!); there’s just a lot of high-context stuff here that’s hard to package up into something neat and tidy, and overall what clarity we’ve been able to find on this topic has been very hard-won.
Hi Abby, to be honest the parallel between free-energy-minimizing systems and dissonance-minimizing systems is a novel idea we’re playing with (or at least I believe it’s novel—my colleague Andrés coined it to my knowledge), and I’m not at full liberty to share all the details before we publish. I think it’s reasonable to doubt this intuition, and we’ll hopefully be assembling more support for it soon.
To the larger question of neural synchrony and STV, a good collection of our argument and some available evidence would be our talk to Robin Carhart-Harris’ lab:
(I realize an hour-long presentation is a big ‘ask’; don’t feel like you need to watch it, but I think this shares what we can share publicly at this time)
>I agree neuroimaging is extremely messy and discouraging, but you’re the one posting about successfully building an fmri analysis pipeline to run this specific analysis to support your theory. I am very annoyed that your response to my multiple requests for any empirical data to support your theory is you basically saying “science is hard”, as opposed to “no experiment, dataset, or analysis is perfect, but here is some empirical evidence that is at least consistent with my theory.”
One of my takeaways from our research is that neuroimaging tooling is in fairly bad shape overall. I’m frankly surprised we had to reimplement an fMRI analysis pipeline in order to start really digging into this question, and I wonder how typical our experience here is.
One of the other takeaways from our work is that it’s really hard to find data that’s suitable for fundamental research into valence; we just got some MDMA fMRI+DTI data that appears very high quality, so we may have more to report soon. I’m happy to talk about what sorts of data are, vs are not, suitable for our research and why; my hands are a bit tied with provisional data at this point (sorry about that, wish I had more to share)
Thanks for adjusting your language to be nicer. I wouldn’t say we’re overwhelmingly confident in our claims, but I am overwhelmingly confident in the value of exploring these topics from first principles. I wish I had knockout evidence for STV to share with you today, but that would be Nobel Prize tier; we’ll have to wait and see what the data brings. For the data we would identify as provisional support, this video is likely the best public resource at this point:
This is in fact the claim of STV, loosely speaking; that there is an identity relationship here. I can see how it would feel like an aggressive claim, but I’d also suggest that positing identity relationships is a very positive thing, as they generally offer clear falsification criteria. Happy to discuss object-level arguments as presented in the linked video.
Hi Mike, I really enjoy your and Andrés’s work, including STV, and I have to say I’m disappointed by how the ideas are presented here, and entirely unsurprised at the reaction they’ve elicited.
There’s a world of a difference between saying “nobody knows what valence is made out of, so we’re trying to see if we can find correlations with symmetries in imaging data” (weird but fascinating) and “There is an identity relationship between suffering and disharmony” (time cube). I know you’re not time cube man, because I’ve read lots of other QRI output over the years, but most folks here will lack that context. This topic is fringe enough that I’d expect everything to be extra-delicately phrased and very well seasoned with ifs and buts.
Again, I’m a big fan of QRI’s mission, but I’d be worried about donating if I got the sense that the organization viewed STV not as something to test, but as something to prove. Statistically speaking, it’s not likely that STV will turn out to be the correct mechanistic grand theory of valence, simply because it’s the first one (of hopefully many to come). I would like to know:
When do you expect to be able to share the first set of empirical results, and what kinds of conclusions do you expect we will be able to draw from them, depending on how they turn out? Tiny studies with limited statistical power are ok; “oh it’s promising so far but we can’t share details” isn’t.
I hope QRI’s fate isn’t tied to STV – if STV can’t be reconciled with the data, then what alternative ideas would you test next?
Hi Seb, I appreciate the honest feedback and kind frame.
I could say that it’s difficult to write a short piece that will please a diverse audience, but that would duck the responsibility of the writer.
You might be interested in my reply to Linch which notes that STV may be useful even if false; I would be surprised if it were false but it wouldn’t be an end to qualia research, merely a new interesting chapter.
I spoke with the team today about data, and we just got a new batch this week that we’re optimistic has exactly the properties we’re looking for (meditative cessations, all 8 jhanas in various orders, DTI along with the fMRI). We have a lot of people on our team page, but to this point QRI has mostly been fueled by volunteer work (I paid myself my first paycheck this month, after nearly five years), so we don’t always have the resources to do everything we want to do as fast as we want to do it. Still, I’m optimistic we’ll have something to at least circulate privately within a few months.
This is in fact the claim of STV, loosely speaking; that there is an identity relationship here. I can see how it would feel like an aggressive claim, but I’d also suggest that positing identity relationships is a very positive thing, as they generally offer clear falsification criteria.
But did you have any reason to posit it? Any evidence that this identity is the case?
Andrés’s STV presentation to Imperial College London’s psychedelics research group is probably the best public resource I can point to on this right now. I can say that after these interactions, it’s much clearer that people hearing these claims are less interested in the detailed structure of the philosophical argument and more interested in the evidence, and in a certain form of evidence. I think this is very reasonable, and it’s something we’re finally in a position to work on directly: we spent the last ~year building the technical capacity to do the sorts of studies we believe will either falsify or directly support STV.
Hi Linch, cool idea.
I’d suggest that 100 citations can be a rather large number for a paper, depending on what reference class you put us in, and 3,000 larger still; here’s an overview of the top-cited papers in neuroscience for what it’s worth: https://www.frontiersin.org/articles/10.3389/fnhum.2017.00363/full
Methods papers tend to be among the most highly cited, and e.g. Selen Atasoy’s original work on CSHW has been cited 208 times, according to Google Scholar. Some more recent papers are at significantly less than 100, though this may climb over time.
Anyway my sense is (1) is possible but depends on future direction, (2) is unlikely, (3) is likely, (4) is unlikely (high confidence).
Perhaps a better measure of success could be expert buy-in. I.e., does QRI get endorsements from distinguished scientists who themselves fit criteria (1) and/or (2)? Likewise, technological usefulness, e.g. has STV directly inspired the creation of some technical device that is available to buy or is used in academic research labs? I’m much more optimistic about these criteria than citation counts, and by some measures we’re already there.
Note that the 2nd question is about total citations rather than citations of one paper, and 3k citations doesn’t seem that high if you’re introducing an entirely new subfield (which is roughly what I’d expect if STV is true). The core paper of Friston’s free energy principle has almost 5,000 citations, for example, and it seems from the outside that STV (if true) ought to be roughly as big a deal as free energy.
For a sense of my prior beliefs about EA-encouraged academic subfields, I think 3k citations in 10 years is an unlikely but not insanely high target for wild animal welfare (maybe 20-30%?), and AI risk is likely already well beyond that (eg >1k citations for Concrete Problems alone).
I’d say that’s a fair assessment — one wrinkle that isn’t a critique of what you wrote, but seems worth mentioning, is that it’s an open question if these are the metrics we should be optimizing for. If we were part of academia, citations would be the de facto target, but we have different incentives (we’re not trying to impress tenure committees). That said, the more citations the better of course.
As you say, if STV is true, it would essentially introduce an entirely new subfield. It would also have implications for items like AI safety, and those may outweigh its academic impact. The question we’re looking at is how to navigate questions of support, utility, and impact here: do we put our (unfortunately rather small) resources toward academic writing, and will that get us to the next step of support? Do we put more visceral real-world impact first (can we substantially improve people’s lives? How much and how many?)? Or do we go all out towards AI safety?
It’s of course possible to be wrong; it’s also possible to be right, but take the wrong strategic path and run out of gas. Basically I’m a little worried that racking up academic metrics like citations is less a panacea than it might appear, and we’re looking to hedge our bets here.
For what it’s worth, we’ve been interfacing with various groups working on emotional wellness neurotech and one internal metric I’m tracking is how useful a framework STV is to these groups; here’s Jay Sanguinetti explaining STV to Shinzen Young (first part of the interview):
https://open.spotify.com/episode/6cI9pZHzT9sV1tVwoxncWP?si=S1RgPs_CTYuYQ4D-adzNnA&dl_branch=1
I think of the metrics I mentioned above as proxies rather than as the underlying targets, which is some combination of:
a) Is STV true?
b) Conditional upon STV being true, is it useful?
What my forecasting questions aimed to do is shed light on a). I agree that academia and citations aren’t the best proxy. They may in some cases have a conservatism bias (I think trusting the apparent academic consensus on AI risk in 2014 would’ve been a mistake for early EAs), but they are also not immune to falsities/crankery (cf. the replication crisis). In addition, standards for truth and usefulness are different within EA circles than academia, partially because we are trying to answer different questions.
This is especially an issue as the areas that QRI is likely to interact with (consciousness, psychedelics) seem from the outside to be more prone than average to falsity and motivated cognition, including within academia.
This is what I was trying to get at with “will Luke Muehlhauser say statements to the effect that the Symmetry Theory of Valence is substantively true?”, because Luke is a non-QRI-affiliated person within EA who is a) respected and b) has thought about concepts adjacent to QRI’s work. Bearing in mind that Luke is very far from a perfect oracle, I would still trust Luke’s judgement on this more than that of an arbitrarily selected academic in an adjacent field.
I think the actual question I’m interested in is something like “In X year, will a panel of well-respected EAs who are a) not affiliated with QRI, b) have very different thoughts from each other, and c) have thought about things adjacent to QRI’s work, have updated to believing STV to be substantively true?”, but I was unable to come up with a clean question operationalization in the relatively brief amount of time I gave myself to come up with this.
People are free to counterpropose and make their own questions.
Hi Linch, that’s very well put. I would also add a third possibility (c), which is “is STV false but generative?” — I explore this a little here, with the core thesis as follows:
I.e., STV could be false in a metaphysical sense, but insofar as the brain is a harmonic computer (a strong reframe of CSHW), it could be performing harmonic gradient descent (a toy sketch follows below). Fully expanded, there would be four cases:
STV true, STHR true
STV true, STHR false
STV false, STHR true
STV false, STHR false
Of course, ‘true and false’ are easier to navigate if we can speak of absolutes; STHR is a model, and ‘all models are wrong; some are useful.’
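As a toy illustration of what “harmonic gradient descent” could look like (my construction for intuition only, not QRI’s published model): a population of identical Kuramoto oscillators literally performs gradient descent on a pairwise phase-disagreement energy, ending in synchrony.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 32, 1.0, 0.05, 400
theta = rng.uniform(0, 2 * np.pi, N)  # oscillator phases

def energy(theta):
    # Pairwise "dissonance": large when phases disagree, minimal at synchrony.
    return -(K / (2 * N)) * np.sum(np.cos(theta[:, None] - theta[None, :]))

for _ in range(steps):
    # Kuramoto update with identical frequencies; this is exactly -dE/dtheta.
    theta += dt * (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)

r = np.abs(np.exp(1j * theta).mean())  # order parameter: ~1 once synchronized
print(f"energy {energy(theta):.3f}, order parameter r = {r:.3f}")
```

Whether anything like this descent happens in cortex, and whether its objective tracks valence, is of course exactly what is in dispute.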
For what it’s worth, I read this comment as constructive rather than non-constructive.
If I write a long report and an expert in the field thinks the entire premise is flawed for specific technical reasons, I’d much rather they point this out than worry about niceness and never get around to mentioning it, causing my report to languish in obscurity without my knowing why (or worse, causing my false research to actually be used!)
I’m a bit hesitant to upvote this comment given how critical it is [was] + how little I know about the field (and thus whether the criticism is deserved), but I’m a bit relieved/interested to see I wasn’t the only one who thought it sounded really confusing/weird. I have somewhat skeptical priors towards big theories of consciousness and suffering (sort of/it’s complicated) + towards theories that rely on lots of complicated methods/jargon/theory (again, sort of/with caveats)—but I also know very little about this field and so I couldn’t really judge. Thus, I’m definitely interested to see the opinions of people with some experience in the field.
Hi Harrison, appreciate the remarks. My response would be more-or-less an open-ended question: do you feel this is a valid scientific mystery? And, what do you feel an answer would/should look like? I.e., correct answers to long-unsolved mysteries might tend to be on the weird side, but there’s “useful generative clever weird” and “bad wrong crazy timecube weird”. How would you tell the difference?
Haha, I certainly wouldn’t label what you described/presented as “timecube weird.” To be honest, I don’t have a very clear-cut set of criteria, and upon reflection it’s probable that my prior is a bit over-influenced by my experiences with some social science research and theory as opposed to hard science research/theory. Additionally, it’s not simply that I’m skeptical of whether the conclusion is true; more generally, my skepticism heuristics for research are about whether whatever is being presented is “A) novel/in contrast with existing theories or intuitions; B) true; and/or C) useful.” For example, some theory might basically rehash what existing research has already come to consensus on, simply worded in a very different way that adds little to existing research (aside from complexity); alternatively, something could just be flat out wrong; alternatively, something could be technically true and novel as explicitly written but not very useful (e.g., tautological definitions), whereas the common interpretation is wrong (but would be useful if it were right).
Still, two of the key features here that contributed to my mental yellow flags were:
The emphasis on jargon and seemingly ambiguous concepts (e.g., “harmony”) vs. a clear, lay-oriented narrative that explains the theory—crucially including how it is different from other plausible theories (in addition to “why should you believe this? / how did we test this?”). STEM jargon definitely seems different from social science jargon in that STEM jargon seems to more often require more knowledge/experience to get a sense of whether something is nonsense strung together or just legitimate-but-complicated analyses, whereas I can much more easily detect nonsense in social science work when it starts equivocating ideas and making broad generalizations.
(To a lesser extent) The emphasis on mathematical analyses and models for something that seemed to call for a broader approach/acceptance of some ambiguity. (Of course, it’s necessary to mathematically represent some things, but I’m a bit skeptical of systems that try to break down such complex concepts as consciousness and affective experience into a mathematical/quantified representation, just like how I’ve been skeptical of many attempts to measure/operationalize complex conceptual variables like “culture” or “polity” in some social sciences, even if I think doing so can be helpful relative to doing nothing—so long as people still are very clear-eyed about the limitations of the quantification)
In the end, I don’t have strong reason to believe that what you are arguing for is wrong, but especially given points like I just mentioned I haven’t updated my beliefs much in any direction after reading this post.
Hi Harrison, that’s very helpful. I think it’s a challenge to package fairly technical and novel research into something that’s both precise and intuitive. Definitely agree that “harmony” is an ambiguous concept.
One of the interesting aspects of this work is it does directly touch on issues of metaphysics and ontology: what are the natural kinds of reality? What concepts ‘carve reality at the joints’? Most sorts of research can avoid dealing with these questions directly, and just speak about observables and predictions. But since part of what we’re doing is to establish valence as a phenomenological natural kind, we have to make certain moves, and these moves may raise certain yellow flags, as you note, since often when these moves are made there’s some philosophical shenanigans going on. That said, I’m happy with the overall direction of our work, which has been steadily more and more empirical.
One takeaway that I do hope I can offer is the deeply philosophically unsatisfactory nature of existing answers in this space. Put simply, no one knows what pleasure and suffering are, or at least no one has definitions that are coherent across all the domains they’d like to define them in. This is an increasing problem as we tackle e.g. problems of digital sentience and fundamental questions of AI alignment. I’m confident in our research program, but even more confident that the questions we’re trying to grapple with are important to try to address directly, and that there’s no good ‘default hypothesis’ at present.
People are asking for object-level justifications for the Symmetry Theory of Valence:
The first thing to mention is that the Symmetry Theory of Valence (STV) is *really easy to strawman*. It really is the case that there are many near enemies of STV that sound exactly like what a naïve researcher who is missing developmental stages (e.g. is a naïve realist about perception) would say. That we like pretty symmetrical shapes of course does not mean that symmetry is at the root of valence; that we enjoy symphonic music does not mean harmony is “inherently pleasant”; that we enjoy nice repeating patterns of tactile stimulation does not mean, well, you get the idea...
The truth of course is that at QRI we really are meta-contrarian intellectual hipsters. So the weird and often dumb-sounding things we say are already taking into account the criticisms people in our people-cluster would make and are taking the conversation one step further. For instance, we think digital computers cannot be conscious, but this belief comes from entirely different arguments than those that justify such beliefs out there. We think that the “energy body” is real and important, except that we interpret it within a physicalist paradigm of dynamic systems. We take seriously the possible positive sum game-theoretical implications of MDMA, but not out of a naïve “why can’t we all love each other?” impression, but rather, based on deep evolutionary arguments. And we take seriously non-standard views of identity, not because “we are all Krishna”, but because the common-sense view of identity turns out to, in retrospect, be based on illusion (cf. Parfit, Kolak, “The Future of Personal Identity”) and a true physicalist theory of consciousness (e.g. Pearce’s theory) has no room for enduring metaphysical egos. This is all to say that straw-manning the paradigms explored at QRI is easy; steelmanning them is what’s hard. Can anyone here make a Titanium Man out of them instead? :-)
Now, I am indeed happy to address any mischaracterization of STV. Sadly, to my knowledge nobody outside of QRI really “gets it”, so I don’t think there is anyone other than us (and possibly Scott Alexander!) who can make a steelman of STV. My promise is that “there is something here” and that to “get it” is not merely to buy into the theory blindly, but rather, it is what happens when you give it enough benefit of the doubt, share a sufficient number of background assumptions, and have a wide enough experience base that it actually becomes a rather obvious “good fit” for all of the data available.
For a bit of history (and properly giving due credit), I should clarify that Michael Johnson is the one who came up with the hypothesis in Principia Qualia (for a brief history see: STV Primer). I started out very skeptical of STV myself, and in fact it took about three years of thinking it through in light of many meditation and exotic high-energy experiences to be viscerally convinced that it’s pointing in the right direction. I’m talking about a process of elimination where, for instance, I checked if what feels good is at the computational level of abstraction (such as prediction error minimization) or if it’s at the implementation level (i.e. dissonance). I then developed a number of technical paradigms for how to translate STV into something we could actually study in neuroscience and ultimately try out empirically with non-invasive neurotech (in our case, light-sound-vibration systems that produce multi-modally coherent high-valence states of consciousness). Quintin Frerichs (who gave a presentation about Neural Annealing to Friston) has since been working hard on the actual neuroscience of it in collaboration with Johns Hopkins University, Daniel Ingram, Imperial College and others. We are currently testing the theory in a number of ways and will publish a large paper based on all this work.
For clarification, I should point out that what is brilliant (IMO) about Mike’s Principia Qualia is that he breaks down the problem of consciousness in such a way that it allows us to divide and conquer the hard problem of consciousness. Indeed, once broken down into his 8 subproblems, calling it the “hard problem of consciousness” sounds as bizarre as it would sound to us to hear about “the hard problem of matter”. We do claim that if we are able to solve each of these subproblems, that indeed the hard problem will dissolve. Not the way illusionists would have it (where the very concept of consciousness is problematic), but rather, in the way that electricity and lightning and magnets all turned out to be explained by just 4 simple equations of electromagnetism. Of course the further question of why do those equations exist and why consciousness follows such laws remains, but even that could IMO be fully explained with the appropriate paradigm (cf. Zero Ontology).
The main point to consider here w.r.t. STV is that symmetry is posited to be connected with valence at the implementation level of analysis. This squarely and clearly distinguishes STV from behaviorist accounts of valence (e.g. “behavioral reinforcement”) and also from algorithmic accounts (e.g. compression drive or prediction error minimization). Indeed, with STV you can have a brain (perhaps a damaged brain, or one in an exotic state of consciousness) where prediction errors are not in fact connected to valence. Rather, the brain evolved to recruit valence gradients in order to make better predictions. Similarly, STV predicts that what makes activation of the pleasure centers feel good is precisely that doing so gives rise to large-scale harmony in brain activity. This is exciting because it means the theory predicts we can actually observe a double dissociation: if we inhibit the pleasure centers while exogenously stimulating large-scale harmonic patterns we expect that to feel good, and we likewise expect that even if you activate the pleasure centers you will not feel good if something inhibits the large-scale harmony that would typically result. Same with prediction errors, behavior, etc.: we predict we can doubly-dissociate valence from those features if we conduct the right experiment. But we won’t be able to dissociate valence from symmetry in the formalism of consciousness.
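To show the logic of that double dissociation with purely hypothetical numbers (a sketch of the analysis, not data; all variable names and values are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20  # hypothetical participants per cell

# Self-reported valence under STV's predicted pattern: valence tracks
# large-scale harmony, not pleasure-center activation per se.
harmony_without_pleasure_centers = rng.normal(7.0, 1.0, n)  # predicted: good
pleasure_centers_without_harmony = rng.normal(3.0, 1.0, n)  # predicted: bad

t, p = stats.ttest_ind(harmony_without_pleasure_centers,
                       pleasure_centers_without_harmony)
print(f"t = {t:.2f}, p = {p:.2g}")  # STV predicts a large difference here
```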
Now, of course we currently can’t see consciousness directly, but we can infer a lot of invariants about it with different “projections”, and so far all are consistent with STV:
Of special note, I’d point you to one of the studies discussed in the 2020 STV talk: The Human Default Consciousness and Its Disruption: Insights From an EEG Study of Buddhist Jhāna Meditation. It shows a very tight correspondence between jhanas and various smoothly-repeating EEG patterns, including seizure-like activity that, unlike normal seizures (of typically bad valence), shows up as having a *harmonic structure*. Here we find a beautiful correspondence between (a) a sense of peace/jhanic bliss, (b) phenomenological descriptions of simplicity and smoothness, (c) valence, and (d) actual neurophysiological data mirroring these phenomenological accounts. At QRI we have observed something quite similar studying the EEG patterns of other ultra-high-valence meditation states (which we will hopefully publish in 2022). I expect this pattern to hold for other exotic high-valence states in one way or another, ranging from quality of orgasm to exogenous opioids.
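As a toy illustration of what “harmonic structure” could mean operationally (my own crude score for intuition, not the method used in that paper): check how much spectral-peak power sits at integer multiples of a detected fundamental.

```python
import numpy as np
from scipy.signal import welch, find_peaks

def harmonicity(x, fs):
    """Crude 0..1 score: fraction of spectral-peak power lying near
    integer multiples of some detected fundamental frequency."""
    freqs, psd = welch(x, fs=fs, nperseg=4 * int(fs))
    idx, props = find_peaks(psd, height=psd.max() * 0.05)
    pf, pp = freqs[idx], props["peak_heights"]
    best = 0.0
    for f0 in pf[pf > 2.0]:  # candidate fundamentals among detected peaks
        ratio = pf / f0
        near = (np.abs(ratio - np.round(ratio)) < 0.05) & (np.round(ratio) >= 1)
        best = max(best, pp[near].sum() / pp.sum())
    return best

fs = 256
t = np.arange(0, 30, 1 / fs)
# Hypothetical trace: 6 Hz fundamental plus overtones, light noise.
x = (np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
     + 0.3 * np.sin(2 * np.pi * 18 * t)
     + 0.2 * np.random.default_rng(2).normal(size=t.size))
print(f"harmonicity: {harmonicity(x, fs):.2f}")  # high for this harmonic stack
```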
Phenomenologically speaking, STV is not only capable of describing and explaining why certain meditation or psychedelic states of consciousness feel good or bad; it can in fact be used as a navigation aid! You can introspect on the ways energy does not flow smoothly, on how blockages and pinch points make it reflect in discordant ways, or zone in on areas of the “energy body” that are out of sync with one another, and then specifically use attention in order to “comb the field of experience”. This approach—the purely secular climbing of the harmony gradient—leads all on its own to amazing high-valence states of consciousness (cf. Buddhist Annealing). I’ll probably make a video series with meditation instructions for people to actually experience this themselves first hand. It doesn’t take very long, actually. Also, STV as a paradigm can be used in order to experience more pleasant trajectories along the “Energy X Complexity landscape” of a DMT trip (something I even talked about at the SSC meetup online!). In a simple quip, I’d say “there are good and bad ways of vibing on DMT, and STV gives you the key to the realms of good vibes” :-)
Another angle: we can find subtle ways of dissociating valence from e.g. chemicals. If you take stimulants but don’t feel the nice buzz that provides a “working frame” for your mental activity, they will not feel good. At the same time, without stimulants you can get that pleasant productivity-enhancing buzz with the right tactile patterns of stimulation. Indeed this “buzz” that characterizes the effects of many euphoric drugs (and the quality of e.g. metta meditation) is precisely a valence effect, one that provides a metronome to self-organize around and which can feel bad when you don’t follow where it takes you. Literally, one of the core reasons why MDMA feels better than LSD, which feels better than DOB, is precisely that the “quality of the buzz” of each of these highs is different. MDMA’s buzz is beautiful and harmonious; DOB’s buzz is harsh and dissonant. What’s more, such a buzz can work as task-specific dissonance guide-rails, if you will: when you do buzz-congruent behaviors you feel a sense of inner harmony, whereas when you do buzz-incongruent behaviors you feel a sense of inner turmoil. Hence what kind of buzz one experiences is deeply consequential! All of this falls rather nicely within STV—IMO other theories need to keep adding epicycles to keep up.
Hopefully this all serves as useful clarification.
It sounds like you’re saying we all need to become more suggestible and just feel like your theory is true before we can understand it. Do you see what poor reasoning that would be?
I take Andrés’s point to be that there’s a decently broad set of people who took a while to see merit in STV, but eventually did. One can say it’s an acquired taste, something that feels strange and likely wrong at first, but is surprisingly parsimonious across a wide set of puzzles. Some of our advisors approached STV with significant initial skepticism, and it took some time for them to come around. That there are at least a few distinguished scientists who like STV isn’t proof it’s correct, but may suggest withholding some forms of judgment.
Thanks Andrés, this helped me get oriented around the phenomenological foundations of what y’all are exploring.
Edit: This comment now makes less sense, given that Abby has revised the language of her comment.
Abby,
I strongly endorse what you say in your last paragraph:
However, I’d like to push back on the tone of your reply. If you’re sorry for posting a negative non-constructive comment, why not try to be a bit more constructive? Why not say something like “I am deeply skeptical of this theory and do not at this moment think it’s worth EAs spending time on. [insert reasons]. I would be willing to change my view if there was evidence.”
Apologies for being pedantic, but I think it’s worth the effort to try and keep the conversation on the forum as constructive as possible!
Hi Jpmos,
I think context is important here. This is not an earnest but misguided post from an undergrad with big ideas and little experience. This is a post from an organization trying to raise hundreds of thousands of dollars. You can check out their website if you want, the front page has a fundraising advertisement.
Further, there are a lot of fancy buzzwords in this post (“connectome!”) and enough jargon that people unfamiliar with the topic might think there is substance here that they just don’t understand (see Harrison’s comment: “I also know very little about this field and so I couldn’t really judge”).
As somebody who knows a lot about this field, I think it’s important that my opinion on these ideas is clearly stated. So I will state it again.
There is no a priori reason to believe any of the claims of STV. There is no empirical evidence to support STV. To an expert, these claims do not sound “interesting and plausible but unproven”, they sound “nonsensical and presented with baffling confidence”.
People have been observing brain oscillations at different frequencies and at different powers for about 100 years. These oscillations have been associated with different patterns of behavior, ranging from sleep stages to memory formation. Nobody has observed asynchrony to be associated with anything like suffering (as far as I’m aware, but please present evidence if I’m mistaken!).
fMRI is a technique that doesn’t measure the firing of neurons (it measures the oxygen consumed over relatively big patches of neurons) and is extremely poorly suited to provide evidence for STV. A better method would be MEG (expensive) or EEG (extremely affordable). If the Qualia Research Institute were a truth-seeking institution, they would have either run the simple experiment I proposed themselves, or had any of the neuroscientists they claim to be collaborating with run it for them.
This is a bad post and it should be called out as such. I would have been more gentle if this was a single misguided researcher and not the head of an organization that publishes a lot of other nonsense too.
This reads to me as insinuating fraud, without much supporting evidence.
I appreciate that in other comments you followed up with more concrete criticisms, but this still feels against the “Keep EA Weird” spirit to me. If we never spend a million or two on something that turns out to be nonsense, we aren’t applying hits-based giving very well.
(Despite the username, I have no affiliation with QRI. I’ll admit to finding the problem worth working on.)
Keeping EA honest and rigorous is much higher priority. Making excuses for incompetence or lack of evidence base is the opposite of EA.
I agree that honesty is more important than weirdness. Maybe I’m being taken, but I see miscommunication and not dishonesty from QRI.
I am not sure what an appropriate standard of rigor is for a preparadigmatic area. I would welcome more qualifiers and softer claims.
At the very least, miscommunication this bad is evidence of serious incompetence at QRI. I think you are mistaken to want to excuse that.
Hi all, I messaged with Holly a bit about this, and what she shared was very helpful. I think a core part of what happened was a mismatch of expectations: I originally wrote this content for my blog and QRI’s website, and the tone and terminology were geared toward “home team content”, not “away team content”. Some people found both the confidence and the somewhat dense terminology off-putting, and I think it’s reasonable of them to raise questions. As a takeaway, I’ve updated that crossposting involves some pitfalls and intend to do things differently next time.
Thanks, valence. I do think the ‘hits-based giving’ frame is important to develop, although I understand it doesn’t have universal support, as some of the implications may be difficult to navigate.
And thanks for appreciating the problem; it’s sometimes hard for me to describe how important the topic feels and all the reasons for working on it.
Edit: probably an unhelpful comment
Hi Mike,
I am comfortable calling myself “somebody who knows a lot about this field”, especially in relation to the average EA Forum reader, our current context.
I respect Karl Friston as well, I’m looking forward to reading his thoughts on your theory. Is there anything you can share?
The CSHW stuff looks potentially cool, but it’s separate from your original theory, so I don’t want to get too deep into it here. The only thing I would say is that I don’t understand why the claims of your original theory cannot be investigated using standard (cheap) EEG techniques. This is important if a major barrier to finding empirical evidence for your theory is funding. Could you explain why standard EEG is insufficient to investigate the synchrony of neuronal firing during suffering?
I was very aggressive with my criticism of your theory, partially because I think it is wrong (again, the basis of your theory, “the symmetry of this representation will encode how pleasant the experience is”, makes no sense to me), but also because of how confidently you describe your theory with no empirical evidence. So I happily accept being called arrogant and would also happily accept being shown how I am wrong. My tone is in reaction to what I feel is your unfounded confidence, and other posts like “I think all neuroscientists, all philosophers, all psychologists, and all psychiatrists should basically drop whatever they’re doing and learn Selen Atasoy’s “connectome-specific harmonic wave” (CSHW) framework.” https://opentheory.net/2018/08/a-future-for-neuroscience/
You link to your other work in this post, and are raising money for your organization (which I think will redirect money from organizations that I think are doing more effective work), so I think it’s fair for my comments to be in reaction to things outside the text of your original post.
I’m glad to hear you feel good about your background and are filled with confidence in yourself and your field. I think the best work often comes from people who don’t at first see all the challenges involved in doing something, because often those are the only people who even try.
At first I was a little taken aback by your tone, but to be honest I’m a little amused by the whole interaction now.
The core problem with EEG is that the most sophisticated analyses depend on source localization (holographic reconstruction of brain activity), and accurate source localization from EEG remains an unsolved problem, at least at the resolution and confidence we’d need. In particular, we’ve looked at various measures of coherence as applied to EEG and found them all wanting in various ways. I notice some backtracking on your criticism of CSHW. ;) It’s a cool method, not without downsides, but it occupies a cool niche. I have no idea what your research is about, but it might be useful for you to learn about for some purposes.
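For concreteness, the phase-locking value is representative of the sensor-space coherence measures I mean (the textbook computation, not our pipeline):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value: 1 = rigidly phase-locked signals, ~0 = unrelated
    phases (for long recordings)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
a = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)
b = np.sin(2 * np.pi * 10 * t + 0.8) + 0.3 * rng.normal(size=t.size)
print(f"PLV = {plv(a, b):.2f}")  # high: shared 10 Hz rhythm, fixed offset
```

The trouble is that at the sensor level a single cortical source smears across many electrodes (volume conduction), so such measures can report “synchrony” that is partly an artifact of mixing; hence the dependence on source localization.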
I’m glad you’re reading more of our ‘back issues’, as it were. We have some talks on our YouTube channel as well (including the NA presentation to Friston), although not all of our work on STV is public yet.
If you share what your research is about, and any published work, I think it’d help me understand where your critiques are coming from a little better. Totally up to you, though.
Hi Jpmos, really appreciate the comments. To address the question of evidence: this is a fairly difficult epistemological situation, but we’re working with high-valence datasets from Daniel Ingram & Harvard and from Imperial College London (jhana data and MDMA data, respectively), and looking for signatures of high harmony.
Neuroimaging is a pretty messy thing, there are no shortcuts to denoising data, and we are highly funding constrained, so I’m afraid we don’t have any peer-reviewed work published on this yet. I can say that initial results seem fairly promising and we hope to have something under review in 6 months. There is a wide range of tacit evidence that stimulation patterns with higher internal harmony produce higher valence than dissonant patterns (basically: music feels good, nails on a chalkboard feels bad), but this is in a sense ‘obvious’ and only circumstantial evidence for STV.
Happy to ‘talk shop’ if you want to dig into details here.
Hi Abby, I’m happy to entertain well-meaning criticism, but it feels like your comment rests fairly heavily on credentialism and does not seem to offer any positive information, nor does it feel like high-level criticism (“their actual theory is also bad”). If your background is as you claim, I’m sure you understand the nuances of “proving” an idea in neuroscience, especially with regard to NCCs (neural correlates of consciousness) — neuroscience is also large enough that “I published a peer-reviewed fMRI paper in a mainstream journal” isn’t a particularly ringing endorsement of domain knowledge in affective neuroscience. If you do have domain knowledge sufficient to take a crack at the question of valence, I’d be glad to hear your ideas.
For a bit of background to theories of valence in neuroscience I’d recommend my forum post here—it goes significantly deeper into the literature than this primer.
Again, I’m not certain you read my piece closely, but as mentioned in my summary, most of our collaboration with British universities has been with Imperial (Robin Carhart-Harris’s lab, though he recently moved to UCSF) rather than Oxford, although Kringelbach has a great research center there and Atasoy (creator of the CSHW reference implementation, which we independently reimplemented) does her research there, so we’re familiar with the scene.
Hi Mike! I appreciate your openness to discussion even though I disagree with you.
Some questions:
1. The most important question: Why would synchrony between different brain areas involved in totally different functions be associated with subjective wellbeing? I fundamentally don’t understand this. For example, asynchrony has been found to be useful in memory as a way of differentiating similar but different memories during encoding/rehearsal/retrieval. Asynchrony doesn’t seem like a bad thing that the brain has a reason to reduce, the way it has reasons to reduce prediction errors. Please link to brain studies that have found asynchrony leads to suffering.
2. If your theory is focused on neural oscillations, why don’t you use EEG to measure the correlation between neural synchrony and subjective experience? Surely EEG is a more accurate method and vastly cheaper than fMRI?
3. If you are funding constrained, why are none of your collaborators willing to run this experiment for you? Running fMRI and EEG experiments at Princeton is free. I see you have multiple Princeton affiliates on your team, and we even have Michael Graziano as a faculty member who is deeply interested in consciousness and understands fMRI.
My advice is to run the experiment I described in my original comment. Put people in an fMRI scanner (or EEG or MEG), ask them to do things that make them feel suffering/feel peaceful, and see how the CDNS changes between conditions. This is an extremely basic experiment and I am confused why you would be so confident in your theory before running this.
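To be concrete, the analysis I am asking for is as simple as this (hypothetical per-subject CDNS scores, purely to show the logic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# One CDNS score per participant per condition (hypothetical numbers).
cdns_peaceful = rng.normal(0.60, 0.08, 24)
cdns_suffering = rng.normal(0.45, 0.08, 24)

# STV predicts higher consonance during peaceful states; a paired test suffices.
t, p = stats.ttest_rel(cdns_peaceful, cdns_suffering)
print(f"paired t = {t:.2f}, p = {p:.2g}")
```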
Hi Abby, thanks for the clear questions. In order:
In brief, asynchrony levies a complexity and homeostatic cost that harmony doesn’t. A simple story here is that dissonant systems shake themselves apart; we can draw a parallel between dissonance in the harmonic frame and free energy in the predictive coding frame.
We work with all the high-quality data we can get our hands on. We do have hd-EEG data of jhana meditation, but EEG data as you may(?) know is very noisy and ‘NCC-style’ research with EEG is a methodological minefield.
We know and like Graziano. I’ll share the idea of using Princeton facilities with the team.
To be direct, years ago I felt as you did about the simplicity of the scientific method in relation to neuroscience; “Just put people in an fMRI, have them do things, analyze the data; how hard can it be?” — experience has cured me of this frame, however. I’ve learned that neuroimaging data pipelines are often held together by proverbial duct tape, neuroimaging is noisy, the neural correlates of consciousness frame is suspect and existing philosophy of mind is rather bonkers, and to even say One True Thing about the connection between brain and mind is very hard (and expensive) indeed. I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD, and I hope you can turn that into determination to refactor the system towards elegance, rather than being progressively discouraged by all the hidden mess.
:)
I appreciate your direct answer to my question, but I do not understand what you are trying to say. I am familiar with Friston and the free-energy principle, so feel free to explain your theory in those terms. All you are doing here is saying that the brain has some reason to reduce “dissonance in the harmonic frame” (a phrase I have other issues with) in a similar way it has reasons to reduce prediction errors. There are good reasons why the brain should reduce prediction errors. You say (but do not clearly explain why) there’s a parallel here where the brain should reduce neural asynchrony/dissonance in the harmonic frame. You posit neural asynchrony is suffering, but you do not explain why in an intelligible way. “Dissonant systems shake themselves apart.” Are you saying dissonant neural networks destroy themselves and we subjectively perceive this as suffering? This makes no sense. Maybe you’re trying to say something else, but I have made my confusion about the link between suffering and asynchrony extremely clear multiple times now, and you have not offered an explanation that I understand.
I agree neuroimaging is extremely messy and discouraging, but you’re the one posting about successfully building an fmri analysis pipeline to run this specific analysis to support your theory. I am very annoyed that your response to my multiple requests for any empirical data to support your theory is you basically saying “science is hard”, as opposed to “no experiment, dataset, or analysis is perfect, but here is some empirical evidence that is at least consistent with my theory.”
I wish you came at this by saying, “Hey I have a cool idea, what do you guys think?” But instead you’re saying “We have a full empirical theory of suffering” with as far as I can tell, nothing to back this up.
I know that this is the EA forum and it’s bad that two people are trading arch emoticons...but I know I’m not the only one enjoying Abby Hoskin’s response to someone explaining her future journey to her.
Inject this into my veins.
Maybe more constructively(?): I think the OP’s responses have updated others in support of Abby’s concerns.
In the past, sometimes I have said things that turned out not to be as helpful as I thought. In those situations, I think I have benefitted from someone I trust reviewing the discussion and offering another perspective to me.
[Own views]
I’m not sure ‘enjoy’ is the right word, but I also noticed the various attempts to patronize Hoskin.
This ranges from the straightforward “I’m sure once you know more about your own subject you’ll discover I am right”:
‘Well-meaning suggestions’ alongside the implication that her criticism arises from some emotional reaction rather than from her strong and adverse judgement of its merit.
[Adding a smiley after something insulting or patronizing doesn’t magically make you the ‘nice guy’ in the conversation, but makes you read like a passive-aggressive ass who is nonetheless too craven for candid confrontation. I’m sure once you reflect on what I said and grow up a bit you’ll improve so your writing inflicts less of a tax on our collective intelligence and good taste. I know you’ll make us proud! :)]
Or just straight-up belittling her knowledge and expertise with varying degrees of passive-aggressiveness.
I think this sort of smug and catty talking down would be odious even if the OP really did have much more expertise than their critic: I hope I wouldn’t write similarly in response to criticism (however strident) from someone more junior in my own field.
What makes this kinda amusing, though, is that although the OP is trying to set himself up as some guru dismissing his critic with the textual equivalent of patting her on the head, virtually any reasonable third party would judge the balance of expertise to weigh in the other direction. Typically we’d take “Post-graduate degree, current doctoral student, and relevant publication record” over “Basically nothing I could put on an academic CV, but I’ve written loads of stuff about my grand theory of neuroscience.”
In that context (plus the genders of the participants) I guess you could call it ‘mansplaining’.
Greg, I have incredible respect for you as a thinker, and I don’t have a particularly high opinion of the Qualia Research Institute. However, I find your comment to be unnecessarily mean: every substantive point you raise could have been made more nicely and less personal, in a way more conducive to mutual understanding and more focused on an evaluation of QRI’s research program. Even if you think that Michael was condescending or disrespectful to Abby, I don’t think he deserves to be treated like this.
Hmm I have conflicting feelings about this. I think whenever you add additional roadblocks or other limitations on criticism, or suggestions that criticisms can be improved, you
a) see the apparent result that criticisms that survive the process will on average be better.
b) fail to see the (possibly larger) effect that there’s an invisible graveyard of criticisms that people choose not to voice because it’s not worth the hassle.
At the same time, being told that your life work is approximately useless is never a pleasant feeling, and it’s not always reasonable to expect people to handle it with perfect composure (Thankfully nothing of this magnitude has ever happened to me, but I was pretty upset when an EA Forum draft I wrote in only a few days had to be scrapped or at least rewritten because it assumed a mathematical falsehood). So while I think Mike’s responses to Abby are below a reasonable bar of good forum commenting norms, I think I have more sympathy for his feelings and actions here than Greg seems to.
So I’m pretty conflicted. My own current view is that I endorse Abby’s comments and tone as striking the right balance for the forum, and I endorse Greg’s content but not the tone.
But I think reasonable people can disagree here, and we should also be mindful that when we ask people to rephrase substantive criticisms to meet a certain stylistic bar (see also comments here), we are implicitly making criticisms more onerous, which arguably has pretty undesirable outcomes.
I want to say something more direct:
Based on how the main critic, Abby, was treated, how the OP replies to comments in a way that selectively chooses what content to respond to, and the way they respond to direct questions with jargon, I place serious weight on the possibility that this isn’t a good-faith conversation.
This is not a stylistic issue, in fact it seems to be exactly the opposite: someone is taking the form of EA norms and styles (maintaining a positive tone, being sympathetic) while actively undermining someone odiously.
I have been in several environments where this behavior is common.
At the risk of policing or adding to the noise (I am not willing to read more of this to update myself), I am writing this because I am concerned you and others who are conscientious are being sucked into this.
Hi Charles, I think several people (myself, Abby, and now Greg) were put in some pretty uncomfortable positions across these replies. By posting, I open myself to replies, but I was pretty surprised by some of the energy of the initial comments (as apparently were others; both Abby and I edited some of our comments to be less confrontational, and I’m happy with and appreciate that).
Happy to answer any object level questions you have that haven’t been covered in other replies, but this remark seems rather strange to me.
For the avoidance of doubt, I remain entirely comfortable with the position expressed in my comment: I wholeheartedly and emphatically stand behind everything I said. I am cheerfully reconciled to the prospect some of those replying to or reading my earlier comment judge me adversely for it—I invite these folks to take my endorsement here as reinforcing whatever negative impressions they formed from what I said there.
The only thing I am uncomfortable with is that someone felt they had to be anonymous to criticise something I wrote. I hope the measure I mete out to others makes it clear I am happy for similar to be meted out to me in turn. I also hope reasonable folks like the anonymous commenter are encouraged to be forthright when they think I err—this is something I would be generally grateful to them for, regardless of whether I agree with their admonishment in a particular instance. I regret to whatever degree my behaviour has led others to doubt this is the case.
Greg, I want to bring two comments that have been posted since your comment above to your attention:
1. Abby said the following to Mike:
2. Another anonymous commentator (thanks to Linch for posting) highlights that Abby’s line of questioning regarding EEGs ultimately resulted in a response satisfactory to her and which she didn’t have the expertise to further evaluate:
Thanks, but I’ve already seen them. Presuming the implication here is something like “Given these developments, don’t you think you should walk back what you originally said?”, the answer is “Not really, no”: subsequent responses may be better, but that is irrelevant to whether earlier ones were objectionable; one may be making good points, but one can still behave badly whilst making them.
(Apologies if I mistake what you are trying to say here. If it helps generally, I expect—per my parent comment—to continue to affirm what I’ve said before however the morass of commentary elsewhere on this post shakes out.)
Gregory, I’ll invite you to join the object-level discussion between Abby and me.
Just want to be clear, the main post isn’t about analyzing eigenmodes with EEG data. It’s very funny that when I am intellectually honest enough to say I don’t know about one specific EEG analysis that doesn’t exist and is not referenced in the main text, people conclude that I don’t have expertise to comment on fMRI data analysis or the nature of neural representations.
Meanwhile QRI does not have expertise to comment on many of the things they discuss, but they are super confident about everything and in the original posts especially did not clearly indicate what is speculation versus what is supported by research.
I continue to be unconvinced with the arguments laid out, but I do think both the tone of the conversation and Mike Johnson’s answers improved after he was criticized. (Correlation? Causation?)
Generally speaking, I agree with the aphorism “You catch more flies with honey than with vinegar.”
For what it’s worth, I interpreted Gregory’s critique as an attempt to blow up the conversation and steer away from the object level, which felt odd. I’m happiest speaking of my research, and fielding specific questions about claims.
Hi Gregory, I’ll own that emoticon. My intent was not to belittle, but to show I’m not upset and I’m actually enjoying the interaction. To be crystal clear, I have no doubt Hoskin is a sharp scientist and cast no aspersions on her work. Text can be a pretty difficult medium for conveying emotions (things can easily come across as either flat or aggressive).
Hi Abby, to give a little more color on the data: we’re very interested in CSHW as it gives us a way to infer harmonic structure from fMRI, which we’re optimistic is a significant factor in brain self-organization. (This is still a live hypothesis, not established fact; Atasoy is still proving her paradigm, but we really like it.)
We expect this structure to be highly correlated with global valence, and to show strong signatures of symmetry/harmony during high-valence states. The question we’ve been struggling with as we’ve been building this hypothesis is “what is a signature of symmetry/harmony?” — there’s a bit of research from Stanford (Chon) on quantifying consonance in complex waveforms, and some cool music theory based on Helmholtz’s work, but this appears to be an unsolved problem. Our “CDNS” approach basically looks at pairwise relationships between harmonics to quantify the degree to which they’re in consonance or dissonance with each other. We’re at the stage where we have the algorithm, but need to validate it on audio samples before applying it too confidently to the brain.
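To make the pairwise idea concrete, here is a toy version using the standard Plomp-Levelt roughness curve from psychoacoustics (a stand-in for illustration; this is not our actual CDNS code):

```python
import numpy as np

def pair_dissonance(f1, f2, a1, a2):
    """Plomp-Levelt roughness of two partials (Sethares' parameterization):
    zero at unison, peaks near a quarter critical bandwidth, then decays."""
    d_f = abs(f2 - f1)
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)
    return a1 * a2 * (np.exp(-3.5 * s * d_f) - np.exp(-5.75 * s * d_f))

def total_dissonance(freqs, amps):
    """Sum roughness over all pairs of spectral components."""
    return sum(pair_dissonance(freqs[i], freqs[j], amps[i], amps[j])
               for i in range(len(freqs)) for j in range(i + 1, len(freqs)))

# A harmonic stack versus a tightly clustered, inharmonic one.
print(total_dissonance([200, 400, 600], [1, 1, 1]))  # small (consonant)
print(total_dissonance([200, 215, 233], [1, 1, 1]))  # large (dissonant)
```

The same pairwise sum can in principle be run over the amplitudes of empirically extracted harmonics rather than audio partials; validating that move is the audio-first step mentioned above.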
There’s also a question of which datasets are ideal for the sort of thing we’re interested in. Extreme-valence datasets are probably the most promising: states of extreme pleasure or extreme suffering. We prefer datasets involving extreme pleasure, for two reasons:
(1) We viscerally feel better analyzing this sort of data than data from states of extreme suffering;
(2) fMRI’s time resolution is such that the best results will come from mental states with high structural stability, and we expect this stability to be much higher during pleasure than during suffering.
As such, we’ve been focusing on collecting data from meditative jhana states and from MDMA states. There may be other states that reliably involve positive emotion that we could study, but these are the best we’ve found conceptually so far.
Lastly, there’s been the issue of neuroimaging pipelines and CSHW. Atasoy’s implementation is not open source, so we had to reimplement her core logic (big thanks to Patrick here), and we ended up collaborating with an external group on a project to combine this core logic with a neuroimaging packaging system. I can’t share all the details, as our partner doesn’t want to be public about their involvement yet, but this is thankfully wrapping up soon.
I wish we had a body of deeply analyzed data we could send you in direct support of STV! I agree that is the ideal, and you’re correct to ask for it. Sadly we don’t have it at this point, but I’m glad to say a lot of the preliminaries have now been taken care of and things are moving. I hope my various comments here haven’t come across as disrespectful (I sincerely apologize if they have; that was not my intention, but if that’s been your interpretation, I accept it and I’m sorry). There’s just a lot of high-context material here that’s hard to package into something neat and tidy, and what clarity we’ve found on this topic has been very hard-won.
Hi Abby, to be honest, the parallel between free-energy-minimizing systems and dissonance-minimizing systems is a novel idea we’re still playing with (or at least I believe it’s novel; to my knowledge my colleague Andrés coined it), and I’m not at full liberty to share the details before we publish. I think it’s reasonable to doubt this intuition, and we hope to assemble more support for it soon.
On the larger question of neural synchrony and STV, a good collection of our arguments and some of the available evidence is our talk to Robin Carhart-Harris’ lab:
(I realize an hour-long presentation is a big ‘ask’; don’t feel like you need to watch it, but I think this shares what we can share publicly at this time)
>I agree neuroimaging is extremely messy and discouraging, but you’re the one posting about successfully building an fMRI analysis pipeline to run this specific analysis to support your theory. I am very annoyed that your response to my multiple requests for any empirical data to support your theory is you basically saying “science is hard”, as opposed to “no experiment, dataset, or analysis is perfect, but here is some empirical evidence that is at least consistent with my theory.”
One of my takeaways from our research is that neuroimaging tooling is in fairly bad shape overall. I’m frankly surprised we had to reimplement an fMRI analysis pipeline in order to really start digging into this question, and I wonder how typical our experience is.
Another takeaway is that it’s really hard to find data suitable for fundamental research into valence. We just received some MDMA fMRI+DTI data that appears very high quality, so we may have more to report soon. I’m happy to talk about which sorts of data are, versus are not, suitable for our research and why; my hands are a bit tied with provisional data at this point (sorry about that; I wish I had more to share).
Thanks for adjusting your language to be nicer. I wouldn’t say we’re overwhelmingly confident in our claims, but I am overwhelmingly confident in the value of exploring these topics from first principles. Although I wish I had knockout evidence for STV to share with you today, that would be Nobel Prize-tier material, and I think we’ll have to wait and see what the data brings. As for the data we would identify as provisional support, this video is likely the best public resource at this point:
This sounds overwhelmingly confident to me, especially since you have no evidence to support either of these claims.
This is in fact the claim of STV, loosely speaking: that there is an identity relationship here. I can see how it would feel like an aggressive claim, but I’d also suggest that positing identity relationships is a very positive thing, as they generally offer clear falsification criteria. Happy to discuss the object-level arguments as presented in the linked video.
Hi Mike, I really enjoy your and Andrés’s work, including STV, and I have to say I’m disappointed by how the ideas are presented here, and entirely unsurprised at the reaction they’ve elicited.
There’s a world of difference between saying “nobody knows what valence is made of, so we’re trying to see if we can find correlations with symmetries in imaging data” (weird but fascinating) and “there is an identity relationship between suffering and disharmony” (time cube). I know you’re not the time cube man, because I’ve read lots of other QRI output over the years, but most folks here will lack that context. This topic is fringe enough that I’d expect everything to be extra-delicately phrased and very well seasoned with ifs and buts.
Again, I’m a big fan of QRI’s mission, but I’d be worried about donating if I got the sense that the organization viewed STV not as something to test, but as something to prove. Statistically speaking, it’s unlikely that STV will turn out to be the correct mechanistic grand theory of valence, simply because it’s the first one (of hopefully many to come). I would like to know:
When do you expect to be able to share the first set of empirical results, and what kinds of conclusions do you expect we will be able to draw from them, depending on how they turn out? Tiny studies with limited statistical power are ok; “oh it’s promising so far but we can’t share details” isn’t.
I hope QRI’s fate isn’t tied to STV – if STV can’t be reconciled with the data, then what alternative ideas would you test next?
Hi Seb, I appreciate the honest feedback and kind frame.
I could say that it’s difficult to write a short piece that will please a diverse audience, but that would be ducking the writer’s responsibility.
You might be interested in my reply to Linch, which notes that STV may be useful even if false; I would be surprised if it were false, but that wouldn’t be the end of qualia research, merely an interesting new chapter.
I spoke with the team today about data, and we just got a new batch this week that we’re optimistic has exactly the properties we’re looking for (meditative cessations, all 8 jhanas in various orders, DTI along with the fMRI). We have a lot of people on our team page, but to this point QRI has mostly been fueled by volunteer work (I paid myself my first paycheck this month, after nearly five years), so we don’t always have the resources to do everything we want as fast as we want to do it. Still, I’m optimistic we’ll have something to at least circulate privately within a few months.
But did you have any reason to posit it? Any evidence that this identity actually holds?
Andrés’s STV presentation to Imperial College London’s psychedelics research group is probably the best public resource I can point to right now. After these interactions, it’s much clearer to me that people hearing these claims are less interested in the detailed structure of the philosophical argument and more interested in the evidence, and in a particular form of evidence. I think this is very reasonable, and it’s something we’re finally in a position to work on directly: we spent the last ~year building the technical capacity to do the sorts of studies we believe will either falsify or directly support STV.