Hi Mike,
Thanks again for your openness to discussion; I do appreciate you taking the time. Your responses here are much more satisfying and comprehensible than your previous statements; it’s a bit of a shame we can’t reset the conversation.
1a. I am interpreting this as you saying there are certain brain areas that, when activated, are more likely to result in the experience of suffering or pleasure. This is the sort of thing that is plausible and possible to test.
1b. I think you are making a mistake by thinking of the brain like a musical instrument, and I really don’t like how you’re assuming discordant brain oscillations “feel bad” the way discordant chords “sound bad”. (Because as I’ve stated earlier, there’s no evidence linking suffering to dissonance, and as you’ve stated previously, you made a massive jump in reasoning here.) But this is the clearest explanation of your thinking on this question you have given so far, which I do appreciate.
1c. I am confused here. I did not ask whether dissonance in VWFA causes dissonance in FFA. I asked how dissonance between the two regions causes suffering. What does it mean neurologically to have dissonance within a specific brain area? I thought the point of using fMRI instead of EEG was that you needed to measure the differences between specific areas.
1d. You’re saying dissonance in place A could cause dissonance in place B, or both could be caused by dissonance in place C. That sounds super reasonable. But my question is: why would dissonance between A and B cause suffering? It doesn’t really matter what brain areas A and B are. I know I keep hammering at the point of why suffering == dissonance, but this is the most important part of your theory, and your explanation of “This is a huge, huge, huge jump, and cannot be arrived at by deduction” is incredibly unsatisfying to me.
2&3. Ok, I appreciate this concrete response. I don’t know enough about calculating eigenmodes with EEG data to predict how tractable it is.
4. Your current analysis is incompatible with wearable biotech. Moving your body even a millimeter within the fMRI scanner negatively affects data quality. This is part of the reason I am confused about why you are focused so much on fMRI. I appreciate in general the value of accurate biomarkers for wellbeing, but I don’t think symmetry/harmonics is either accurate or useful.
5. The labs I am in (although not me personally) are working on closed-loop fMRI neurofeedback to improve mental health outcomes in depressed patients. I am familiar with the technical challenges in this work, which is partially why I am coming at you so hard on this. Here’s a paper from my primary and secondary academic advisors: https://www.biorxiv.org/content/10.1101/2020.06.07.137943v1.abstract
Hi Abby, I understand. We can just make the best of it.
1a. Yep, definitely. Empirically we know this is true from e.g. Kringelbach and Berridge’s work on hedonic centers of the brain; what we’d be interested in looking into would be whether these areas are special in terms of network control theory.
1c. I may be getting ahead of myself here: the basic approach we intend for testing STV is looking at dissonance in global activity. Dissonance between brain regions likely contributes to this ‘global dissonance’ metric (there’s a rough sketch of what one such pairwise metric could look like after point 5 below). I’m also interested in measuring dissonance within smaller areas of the brain, as I think it could help improve the metric down the line, but we definitely wouldn’t need to at this point.
1d. As a quick aside, STV says that ‘symmetry in the mathematical representation of phenomenology corresponds to pleasure’. We can think of that as ‘core STV’. We’ve then built neuroscience metrics around consonance, dissonance, and noise that we think can be useful for proxying symmetry in this representation; we can think of that as a looser layer of theory around STV, something that doesn’t have the ‘exact truth’ expectation of core STV. When I speak of dissonance corresponding to suffering, it’s part of this looser second layer.
To your question — why would STV be true? — my background is in the philosophy of science, so I’m perhaps more ready to punt to this domain. I understand this may come across as somewhat frustrating or obfuscating from the perspective of a neuroscientist asking for a neuroscientific explanation. But this is a universal thread across philosophy of science: why is such and such true? Why does gravity exist; why is the speed of light what it is? Many things we’ve figured out about reality seem like brute facts. Usually there are some hints of elegance in the structures we’re uncovering, but we’re just not yet knowledgeable enough to see some universal grand plan. Physics deals with this a lot, and I think philosophy of mind is just starting to grapple with this in terms of NCCs (neural correlates of consciousness). Here’s something Frank Wilczek (who won the 2004 Nobel Prize in Physics for helping formalize the strong nuclear force) shared about physics:
>… the idea that there is symmetry at the root of Nature has come to dominate our understanding of physical reality. We are led to a small number of special structures from purely mathematical considerations—considerations of symmetry—and put them forward to Nature, as candidate elements for her design. … In modern physics we have taken this lesson to heart. We have learned to work from symmetry toward truth. Instead of using experiments to infer equations, and then finding (to our delight and astonishment) that the equations have a lot of symmetry, we propose equations with enormous symmetry and then check to see whether Nature uses them. It has been an amazingly successful strategy. (A Beautiful Question, 2015)
So — why would STV be the case? “Because it would be beautiful, and would reflect and extend the flavor of beauty we’ve found to be both true and useful in physics” is probably not the sort of answer you’re looking for, but it’s the answer I have at this point. I do think all the NCC literature is going to have to address this question of ‘why’ at some point.
4. We’re ultimately opportunistic about what exact format of neuroimaging we use to test our hypotheses, but fMRI checks a lot of the boxes (though not all). As you say, fMRI is not a great paradigm for neurotech; we’re looking at e.g. headsets by Kernel and others, and also digging into the TUS (transcranial ultrasound) literature for more options.
5. Cool! I’ve seen some big reported effect sizes and I’m generally pretty bullish on neurofeedback in the long term; Adam Gazzaley’s Neuroscape is doing some cool stuff in this area too.
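To make the sketch I mentioned under 1c concrete: purely as an illustration (this is not our exact pipeline; the constants are the standard Plomp–Levelt/Sethares values and the function names are placeholders), here is one way ‘dissonance between two regions’ could be quantified, by applying a Sethares-style roughness curve to the dominant spectral peaks of each region’s signal:

```python
import numpy as np
from scipy.signal import welch, find_peaks

def roughness(f1, a1, f2, a2):
    """Sethares-style (Plomp-Levelt) roughness between two spectral peaks.
    Constants are the standard textbook values; illustrative only."""
    fmin = min(f1, f2)
    s = 0.24 / (0.021 * fmin + 19.0)
    x = s * abs(f2 - f1)
    return a1 * a2 * (np.exp(-3.5 * x) - np.exp(-5.75 * x))

def pairwise_dissonance(sig_a, sig_b, fs, n_peaks=5):
    """Toy 'dissonance between two regions': total roughness between the
    strongest spectral peaks of the two signals (amplitudes normalized)."""
    peak_sets = []
    for sig in (sig_a, sig_b):
        freqs, pxx = welch(sig, fs=fs, nperseg=min(1024, len(sig)))
        idx, _ = find_peaks(pxx)
        idx = idx[np.argsort(pxx[idx])[-n_peaks:]]  # keep the strongest peaks
        peak_sets.append(list(zip(freqs[idx], pxx[idx] / pxx.max())))
    return sum(roughness(fa, aa, fb, ab)
               for fa, aa in peak_sets[0]
               for fb, ab in peak_sets[1])
```

A ‘global dissonance’ number could then be something like the sum of this over all region pairs; again, this is a sketch of the flavor, not the actual metric.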
Ok, thank you for these thoughts.
Considering how asymmetries can be both pleasing (complex stimuli seem more beautiful to me than perfectly symmetrical spheres) and useful (as Holly Ellmore points out in the domain of information theory, and as the Mosers found with their Nobel Prize-winning work on orthogonal neural coding of similar but distinct memories), I question your intuition that asymmetry needs to be associated with suffering.
Welcome, thanks for the good questions.
Asymmetries in stimuli seem crucial for getting patterns through the “predictive coding gauntlet”: that which can be predicted can be ignored. We demonstrably screen perfect harmony out fairly rapidly.
The crucial context for STV, on the other hand, isn’t symmetries/asymmetries in stimuli, but rather in brain activity. (More specifically, as we’re currently looking at things, in global eigenmodes.)
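To make ‘global eigenmodes’ a bit more concrete: one simple way to get such modes (in the spirit of connectome harmonics) is as eigenvectors of a graph Laplacian built from a region-by-region connectivity matrix, with activity then projected onto them. A toy sketch, purely illustrative rather than our exact procedure:

```python
import numpy as np

def connectome_eigenmodes(conn):
    """Eigenmodes of the symmetric normalized graph Laplacian of a
    region-by-region connectivity matrix (toy 'connectome harmonics')."""
    conn = np.asarray(conn, dtype=float)
    deg = conn.sum(axis=1)
    d = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(conn)) - d @ conn @ d
    eigvals, eigvecs = np.linalg.eigh(lap)  # ascending; columns are the modes
    return eigvals, eigvecs

def mode_timecourses(activity, modes):
    """Project (regions x time) activity onto the modes, giving (modes x time)
    time courses; the lowest modes are the smoothest, most 'global' patterns."""
    return modes.T @ activity
```

These mode time courses are the sort of object one could then assess for consonance/dissonance, per the looser second layer of metrics I described above.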
With a nod back to the predictive coding frame, it’s quite plausible that the stimuli that create the most internal symmetry/harmony are not themselves perfectly symmetrical, but rather have asymmetries crafted to avoid top-down predictive models. I’d expect this to vary quite a bit across different senses though, and depend heavily on internal state.
The brain may also have mechanisms which introduce asymmetries in global eigenmodes, in order to prevent getting ‘trapped’ by pleasure — I think of boredom as fairly sophisticated ‘anti-wireheading technology’ — but if we set aside dynamics, the assertion is that symmetry/harmony in the brain itself is intrinsically coupled with pleasure.
Edit: With respect to the Mosers, that’s a really cool example of this stuff. I can’t say I have answers here, but as a punt, I’d suspect the “orthogonal neural coding of similar but distinct memories” is going to revolve around some pretty complex frequency regimes, and we may not yet be able to say exact things about how ‘consonant’ or ‘dissonant’ these patterns are to each other. My intuition is that this result about the golden mean being the optimal ratio for non-interaction will end up intersecting with the Mosers’ work. That said, I wonder whether STV would assert that some sorts of memories are ‘hedonically incompatible’ due to their encodings being dissonant? Basically, as memories get encoded, the oscillatory patterns they’re encoded with could subtly form a network which determines what sorts of new memories can form and/or which sorts of stimuli we enjoy and which we don’t. But this is pretty hand-wavy speculation…
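One last aside on the golden mean point, since it may not be obvious why that specific ratio would be ‘optimal for non-interaction’: φ is, in a precise sense, the irrational number least well approximated by rationals; its continued fraction expansion is all 1s,

$$\varphi = \frac{1+\sqrt{5}}{2} = 1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cdots}}}$$

so two oscillators whose frequency ratio sits near φ stay as far as possible from every low-order p:q resonance (mode-locking), which is what makes it a natural candidate ratio for minimally interacting oscillatory codes.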