Andrés’s STV presentation to Imperial College London’s psychedelics research group is probably the best public resource I can point to on this right now. I can say after these interactions it’s much clearer to me that people hearing these claims are less interested in the detailed structure of the philosophical argument, and more in the evidence, and in a particular form of evidence. I think this is very reasonable and it’s something we’re finally in a position to work on directly: we spent the last ~year building the technical capacity to do the sorts of studies we believe will either falsify or directly support STV.
MikeJohnson
Hi Holly, I’d say the format of my argument there would be enumeration of claims, not e.g. trying to create a syllogism. I’ll try to expand and restate those claims here:
A very important piece of this is the assumption that consciousness has a formal structure (a formalism). If this is true, STV becomes a lot more probable. If it isn’t, STV can’t be the case.
Integrated Information Theory (IIT) is the most famous framework for determining the formal structure of an experience. It does so by looking at the causal relationships between components of a system; the more a system’s parts demonstrate ‘integration’ (a technical, mathematical term that tries to capture how much a system’s parts interact with one another), the more conscious the system is.
I didn’t make IIT, I don’t know if it’s true, and I actually suspect it might not be true (I devoted a section of Principia Qualia to explaining IIT, and another section to critiques of IIT). But it’s a great example of an attempt to formalize phenomenology, and I think the project or overall frame of IIT (the idea of consciousness being the sort of thing that one can apply formal mathematics to) is correct even if its implementation (integration) isn’t.
You can think of IIT as a program. Put in the details of how a system (such as a brain) is put together, and it gives you some math that tells you what the system is feeling.
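To make the “program” framing concrete, here’s a minimal toy sketch in Python. To be clear, this is my own illustration and not IIT’s actual Φ calculation (which involves partitioning the system and assessing the irreducibility of its cause-effect structure); it just uses total correlation as a crude stand-in for ‘integration’, to show the input/output shape: a description of how the parts relate goes in, a single number comes out.

```python
# Toy illustration only -- NOT actual IIT / Phi. "Integration" here is
# approximated by total correlation: how far the joint distribution over
# the system's parts deviates from the product of its marginals.
import numpy as np
from itertools import product

def total_correlation(joint):
    """Crude 'integration' proxy for a joint distribution over n binary parts."""
    joint = joint / joint.sum()
    n = joint.ndim
    marginals = [joint.sum(axis=tuple(j for j in range(n) if j != i))
                 for i in range(n)]
    tc = 0.0
    for idx in product(*(range(s) for s in joint.shape)):
        p = joint[idx]
        if p > 0:
            q = np.prod([marginals[i][idx[i]] for i in range(n)])
            tc += p * np.log2(p / q)
    return tc

# Two perfectly correlated binary parts: maximal "integration" (~1 bit)
coupled = np.array([[0.5, 0.0], [0.0, 0.5]])
# Two independent binary parts: zero "integration"
independent = np.full((2, 2), 0.25)
print(total_correlation(coupled))      # ~1.0
print(total_correlation(independent))  # ~0.0
```

The point of the toy is only the shape of the computation, not the specific math: plug in how the system’s parts hang together, get back a quantity you can then reason about.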
You can think of STV as a way to analyze this math. STV makes a big jump in that it assumes the symmetry of this mathematical object corresponds to how pleasurable the experience it represents is. This is a huge, huge, huge jump, and cannot be arrived at by deduction; none of my premises force this conclusion. We can call it an educated guess. But it is my best educated guess after thinking about this topic for about 7 years before posting my theory. I can say I’m fully confident the problem is super important, and I’m optimistic this guess is correct for many reasons, though many of those reasons are difficult to put into words. My co-founder Andrés also believes in STV, and his way of describing things is often very different from mine in helpful ways; he recently posted his own description of this, so I also encourage you to read his comment.
Just a quick comment in terms of comment flow: there’s been a large amount of editing of the top comment, and some of the replies that have been posted may not seem to follow the logic of the comment they’re attached to. If there are edits to a comment that you wish me to address, I’d be glad if you made a new comment. (If you don’t, I don’t fault you, but I may not address the edit.)
Hi Charles, I think several people (myself, Abby, and now Greg) were put in some pretty uncomfortable positions across these replies. By posting, I open myself to replies, but I was pretty surprised by some of the energy of the initial comments (as apparently were others; both Abby and I edited some of our comments to be less confrontational, and I’m happy with and appreciate that).
Happy to answer any object level questions you have that haven’t been covered in other replies, but this remark seems rather strange to me.
Hi Michael, I appreciate the kind effortpost, as per usual. I’ll do my best to answer.
This is a very important question. To restate it in several ways: what kind of thing is suffering? What kind of question is ‘what is suffering’? What would a philosophically satisfying definition of suffering look like? How would we know if we saw it? Why does QRI think existing theories of suffering are lacking? Is an answer to this question a matter of defining some essence, or defining causal conditions, or something else?
Our intent is to define phenomenological valence in a fully formal way, with the template being physics: we wish to develop our models such that we can speak of pain and pleasure with all the clarity, precision, and rigor with which we currently describe photons and quarks and fields.
This may sound odd, but physics is a grand success story of formalization, and we essentially wish to apply the things that worked in physics to phenomenology. Importantly, physics has a strong tradition of using symmetry considerations to inform theory. STV borrows squarely from this tradition (see e.g. my write-up on Emmy Noether).
Valence is subjective as you note, but that doesn’t mean it’s arbitrary; there are deep patterns in which conditions and sensations feel good, and which feel bad. We think it’s possible to create a formal system for the subjective. Valence and STV are essentially the pilot project for this system. Others such as James and Husserl have tried to make phenomenological systems, but we believe they didn’t have all the pieces of the puzzle. I’d offer our lineages page for what we identify as ‘the pieces of the puzzle’; these are the shoulders we’re standing on to build our framework.
2. I see the question. Also, thank you for your work on the Happier Lives Institute; we may not interact frequently but I really like what you’re doing.
The significance of a fully rigorous theory of valence might not be fully apparent, even to the people working on it. Faraday and Maxwell formalized electromagnetism; they likely did not foresee their theory being used to build the iPhone. However, I suspect they had deep intuitions that there’s something deeply useful in understanding the structure of nature, and perhaps they wouldn’t be as surprised as their contemporaries. We also hold intuitions as to the applications of a full theory of valence.
The simplest would be, it would unlock novel psychological and psychiatric diagnostics. If there is some difficult-to-diagnose nerve pain, or long-covid-type bodily suffering, or some emotional disturbance that is difficult to verbalize, well, this is directly measurable in principle with STV. This wouldn’t replace economics and psychology, as you say, but it would augment them.
Longer term, I’m reminded of the (adapted) phrase, “what you can measure, you can manage.” If you can reliably measure suffering, you can better design novel interventions for reducing it. I could see a validated STV as the heart of a revolution in psychiatry, and some of our work (Neural Annealing, Wireheading Done Right) is aimed at possible shapes this might take.
3. Aha, an easy question :) I’d point you toward our web glossary.
To your question, “Finally, and perhaps most importantly, I really not sure what it could even mean to represent consciousness/ valence as a mathematic shape“ — this is perhaps an overly-fancy way of saying that we believe consciousness is precisely formalizable. The speed of light is precisely formalizable; the UK tax rate is precisely formalizable; the waveform of an mp3 is precisely formalizable, and all of these formalizations can be said to be different ‘mathematical shapes’. To say something does not have a ‘mathematical shape’ is to say it defies formal analysis.
Thanks again for your clear and helpful questions.
Hi Seb, I appreciate the honest feedback and kind frame.
I can say that it’s difficult to write a short piece that will please a diverse audience, but saying that would be ducking the responsibility of the writer.
You might be interested in my reply to Linch which notes that STV may be useful even if false; I would be surprised if it were false but it wouldn’t be an end to qualia research, merely a new interesting chapter.
I spoke with the team today about data, and we just got a new batch this week that we’re optimistic has exactly the properties we’re looking for (meditative cessations, all 8 jhanas in various orders, DTI along with the fMRI). We have a lot of people on our team page, but to this point QRI has mostly been fueled by volunteer work (I paid myself my first paycheck this month, after nearly five years), so we don’t always have the resources to do everything we want to do as fast as we want to do it. Still, I’m optimistic we’ll have something to at least circulate privately within a few months.
Hi Linch, that’s very well put. I would also add a third possibility (c), which is “STV is false but generative.” I explore this a little here, with the core thesis summarized in this graphic:
I.e., STV could be false in a metaphysical sense, but insofar as the brain is a harmonic computer (a strong reframe of CSHW), it could be performing harmonic gradient descent. Fully expanded, there would be four cases:
STV true, STHR true
STV true, STHR false
STV false, STHR true
STV false, STHR false
Of course, ‘true’ and ‘false’ are easier to navigate if we can speak of absolutes; STHR is a model, and ‘all models are wrong; some are useful.’
This is in fact the claim of STV, loosely speaking; that there is an identity relationship here. I can see how it would feel like an aggressive claim, but I’d also suggest that positing identity relationships is a very positive thing, as they generally offer clear falsification criteria. Happy to discuss object-level arguments as presented in the linked video.
Thanks for adjusting your language to be nicer. I wouldn’t say we’re overwhelmingly confident in our claims, but I am overwhelmingly confident in the value of exploring these topics from first principles, and although I wish I had knockout evidence for STV to share with you today, that would be Nobel Prize tier and I think we’ll have to wait and see what the data brings. For the data we would identify as provisional support, this video is likely the best public resource at this point:
I’d say that’s a fair assessment. One wrinkle that isn’t a critique of what you wrote, but seems worth mentioning, is that it’s an open question whether these are the metrics we should be optimizing for. If we were part of academia, citations would be the de facto target, but we have different incentives (we’re not trying to impress tenure committees). That said, the more citations the better, of course.
As you say, if STV is true, it would essentially introduce an entirely new subfield. It would also have implications for items like AI safety and those may outweigh its academic impact. The question we’re looking at is how to navigate questions of support, utility, and impact here: do we put our (unfortunately rather small) resources toward academic writing and will that get us to the next step of support, or do we put more visceral real-world impact first (can we substantially improve peoples’ lives? How much and how many?), or do we go all out towards AI safety?
It’s of course possible to be wrong; I’m also understanding it’s possible to be right, but take the wrong strategic path and run out of gas. Basically I’m a little worried that racking up academic metrics like citations is less a panacea than it might appear, and we’re looking to hedge our bets here.
For what it’s worth, we’ve been interfacing with various groups working on emotional wellness neurotech and one internal metric I’m tracking is how useful a framework STV is to these groups; here’s Jay Sanguinetti explaining STV to Shinzen Young (first part of the interview):
https://open.spotify.com/episode/6cI9pZHzT9sV1tVwoxncWP?si=S1RgPs_CTYuYQ4D-adzNnA&dl_branch=1
Thanks valence. I do think the ‘hits-based giving’ frame is important to develop, although I understand it doesn’t have universal support, as some of the implications may be difficult to navigate.
And thanks for appreciating the problem; it’s sometimes hard for me to describe how important the topic feels and all the reasons for working on it.
Hi Linch, cool idea.
I’d suggest that 100 citations can be a rather large number for papers, depending on what reference class you put us in, and 3000 larger still; here’s an overview of the top-cited papers in neuroscience, for what it’s worth: https://www.frontiersin.org/articles/10.3389/fnhum.2017.00363/full
Methods papers tend to be among the most highly cited; e.g., Selen Atasoy’s original work on CSHW has been cited 208 times, according to Google Scholar. Some more recent papers sit at significantly fewer than 100 citations, though this may climb over time.
Anyway my sense is (1) is possible but depends on future direction, (2) is unlikely, (3) is likely, (4) is unlikely (high confidence).
Perhaps a better measure of success could be expert buy-in. I.e., does QRI get endorsements from distinguished scientists who themselves fit criteria (1) and/or (2)? Likewise, technological usefulness, e.g. has STV directly inspired the creation of some technical device that is available to buy or is used in academic research labs? I’m much more optimistic about these criteria than citation counts, and by some measures we’re already there.
Hi Abby, to give a little more color on the data: we’re very interested in CSHW as it gives us a way to infer harmonic structure from fMRI, which we’re optimistic is a significant factor in brain self-organization. (This is still a live hypothesis, not established fact; Atasoy is still proving her paradigm, but we really like it.)
We expect this structure to be highly correlated with global valence, and to show strong signatures of symmetry/harmony during high-valence states. The question we’ve been struggling with as we’ve built this hypothesis is “what is a signature of symmetry/harmony?” There’s a bit of research from Stanford (Chon) on quantifying consonance in complex waveforms, and some cool music theory based on Helmholtz’s work, but this appears to be an unsolved problem. Our “CDNS” approach basically looks at pairwise relationships between harmonics to quantify the degree to which they’re in consonance or dissonance with each other. We’re at the stage where we have the algorithm, but need to validate it on audio samples first before applying it too confidently to the brain.
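To give a flavor of what “pairwise consonance/dissonance between harmonics” can look like computationally, here is a hedged toy sketch. It uses a Plomp-Levelt-style roughness curve with Sethares’ parameterization; this is my own illustration of the general idea (function names like `pair_dissonance` are just for this example), not our actual CDNS implementation.

```python
# Toy sketch: score a spectrum's roughness by summing pairwise dissonance
# between its components. Plomp-Levelt-style curve, Sethares' constants.
import numpy as np
from itertools import combinations

def pair_dissonance(f1, a1, f2, a2):
    """Perceptual roughness of two pure tones."""
    f_min = min(f1, f2)
    s = 0.24 / (0.021 * f_min + 19.0)      # scales with critical bandwidth
    d = abs(f2 - f1)
    return min(a1, a2) * (np.exp(-3.5 * s * d) - np.exp(-5.75 * s * d))

def total_dissonance(freqs, amps):
    """Sum dissonance over every pair of spectral components."""
    return sum(pair_dissonance(freqs[i], amps[i], freqs[j], amps[j])
               for i, j in combinations(range(len(freqs)), 2))

# A harmonic stack (consonant) vs. a cluster of nearby partials (rough)
consonant = total_dissonance([220, 440, 660, 880], [1.0, 0.5, 0.33, 0.25])
rough     = total_dissonance([220, 233, 247, 262], [1.0, 1.0, 1.0, 1.0])
print(consonant, rough)   # expect rough >> consonant
```

The open problem we care about is the analogous move for brain harmonics rather than audio spectra, which is why we want to validate any such measure on audio first.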
There’s also a question of what datasets are ideal for the sort of thing we’re interested in. Extreme valence datasets are probably the most promising, states of extreme pleasure or extreme suffering. We prefer datasets involving extreme pleasure, for two reasons:
(1) We viscerally feel better analyzing this sort of data than states of extreme suffering;
(2) fMRI’s time resolution is such that the best results will come from mental states with high structural stability. We expect this structural stability to be much higher during pleasure than suffering.
As such we’ve been focusing on collecting data from meditative jhana states, and from MDMA states. There might be other states that involve reliable good emotion that we can study, but these are the best we’ve found conceptually so far.
Lastly, there’s been the issue of neuroimaging pipelines and CSHW. Atasoy’s work is not open source, so we had to reimplement her core logic (big thanks to Patrick here), and we ended up collaborating with an external group on a project to combine this core logic with a neuroimaging packaging system. I can’t share all the details here, as our partner doesn’t want to be public about their involvement yet, but this is thankfully wrapping up soon.
I wish we had a bunch of deeply analyzed data we could send you in direct support of STV! And I agree with you that this is the ideal and you’re correct to ask for it. Sadly we don’t at this point, but I’m glad to say a lot of the preliminaries have now been taken care of and things are moving. I hope my various comments here haven’t come across as disrespectful (and I sincerely apologize if they have; that wasn’t my intention, but if that’s been your interpretation I accept it, sorry!); there’s just a lot of high-context stuff here that’s hard to package up into something neat and tidy, and overall what clarity we’ve been able to find on this topic has been very hard-won.
Hi Gregory, I’ll own that emoticon. My intent was not to belittle, but to show I’m not upset and I’m actually enjoying the interaction. To be crystal clear, I have no doubt Hoskin is a sharp scientist, and I cast no aspersions on her work. Text can be a pretty difficult medium for conveying emotions (things can easily come across as either flat or aggressive).
Hi Abby, to be honest the parallel between free-energy-minimizing systems and dissonance-minimizing systems is a novel idea we’re playing with (or at least I believe it’s novel; my colleague Andrés coined it, to my knowledge), and I’m not at full liberty to share all the details before we publish it. I think it’s reasonable to doubt this intuition, and we’ll hopefully be assembling more support for it soon.
To the larger question of neural synchrony and STV, a good collection of our argument and some available evidence would be our talk to Robin Carhart-Harris’ lab:
(I realize an hour-long presentation is a big ‘ask’; don’t feel like you need to watch it, but I think this shares what we can share publicly at this time)
>I agree neuroimaging is extremely messy and discouraging, but you’re the one posting about successfully building an fmri analysis pipeline to run this specific analysis to support your theory. I am very annoyed that your response to my multiple requests for any empirical data to support your theory is you basically saying “science is hard”, as opposed to “no experiment, dataset, or analysis is perfect, but here is some empirical evidence that is at least consistent with my theory.”
One of my takeaways from our research is that neuroimaging tooling is in fairly bad shape overall. I’m frankly surprised we had to reimplement an fMRI analysis pipeline in order to start really digging into this question, and I wonder how typical our experience here is.
One of the other takeaways from our work is that it’s really hard to find data that’s suitable for fundamental research into valence; we just got some MDMA fMRI+DTI data that appears very high quality, so we may have more to report soon. I’m happy to talk about what sorts of data are, vs. are not, suitable for our research and why; my hands are a bit tied with provisional data at this point (sorry about that, I wish I had more to share).
Hi Harrison, that’s very helpful. I think it’s a challenge to package fairly technical and novel research into something that’s both precise and intuitive. Definitely agree that “harmony” is an ambiguous concept.
One of the interesting aspects of this work is it does directly touch on issues of metaphysics and ontology: what are the natural kinds of reality? What concepts ‘carve reality at the joints’? Most sorts of research can avoid dealing with these questions directly, and just speak about observables and predictions. But since part of what we’re doing is to establish valence as a phenomenological natural kind, we have to make certain moves, and these moves may raise certain yellow flags, as you note, since often when these moves are made there’s some philosophical shenanigans going on. That said, I’m happy with the overall direction of our work, which has been steadily more and more empirical.
One takeaway that I do hope I can offer is the deeply philosophically unsatisfactory nature of existing answers in this space. Put simply, no one knows what pleasure and suffering are, or at least no one has definitions that are coherent across all the domains in which they’d like to apply them. This is an increasing problem as we tackle e.g. problems of digital sentience and fundamental questions of AI alignment. I’m confident in our research program, but even more confident that the questions we’re trying to grapple with are important to address directly, and that there’s no good ’default hypothesis’ at present.
I’m glad to hear you feel good about your background and are filled with confidence in yourself and your field. I think the best work often comes from people who don’t at first see all the challenges involved in doing something, because often those are the only people who even try.
At first I was a little taken aback by your tone, but to be honest I’m a little amused by the whole interaction now.
The core problem with EEG is that the most sophisticated analyses depend on source localization (holographic reconstruction of brain activity), and accurate source localization from EEG remains an unsolved problem, at least at the resolution and confidence we’d need. In particular, we’ve looked at various measures of coherence as applied to EEG and found them all wanting in various ways. I notice some backtracking on your criticism of CSHW. ;) It’s a cool method; not without downsides, but it occupies a useful niche. I have no idea what your research is about, but it might be useful for you to learn about for some purposes.
I’m glad you’re reading more of our ‘back issues’ as it were. We have some talks on our YouTube channel as well (including the NA presentation to Friston), although not all of our work on STV is public yet.
If you share what your research is about, and any published work, I think it’d help me understand where your critiques are coming from a little better. Totally up to you though.
Hi Abby, thanks for the clear questions. In order:
In brief, asynchrony levies a complexity and homeostatic cost that harmony doesn’t. A simple story here is that dissonant systems shake themselves apart; we can draw a parallel between dissonance in the harmonic frame and free energy in the predictive coding frame.
We work with all the high-quality data we can get our hands on. We do have hd-EEG data of jhana meditation, but EEG data as you may(?) know is very noisy and ‘NCC-style’ research with EEG is a methodological minefield.
We know and like Graziano. I’ll share the idea of using Princeton facilities with the team.
To be direct, years ago I felt as you did about the simplicity of the scientific method in relation to neuroscience; “Just put people in an fMRI, have them do things, analyze the data; how hard can it be?” — experience has cured me of this frame, however. I’ve learned that neuroimaging data pipelines are often held together by proverbial duct tape, neuroimaging is noisy, the neural correlates of consciousness frame is suspect and existing philosophy of mind is rather bonkers, and to even say One True Thing about the connection between brain and mind is very hard (and expensive) indeed. I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD, and I hope you can turn that into determination to refactor the system towards elegance, rather than being progressively discouraged by all the hidden mess.
:)
Edit: probably an unhelpful comment
I take Andrés’s point to be that there’s a decently broad set of people who took a while to see merit in STV, but eventually did. One can say it’s an acquired taste, something that feels strange and likely wrong at first, but is surprisingly parsimonious across a wide set of puzzles. Some of our advisors approached STV with significant initial skepticism, and it took some time for them to come around. That there are at least a few distinguished scientists who like STV isn’t proof it’s correct, but may suggest withholding some forms of judgment.