Hi Abby, to give a little more color on the data: we’re very interested in CSHW as it gives us a way to infer harmonic structure from fMRI, which we’re optimistic is a significant factor in brain self-organization. (This is still a live hypothesis, not established fact; Atasoy is still proving her paradigm, but we really like it.)
We expect this structure to be highly correlated with global valence, and to show strong signatures of symmetry/harmony during high-valence states. The question we’ve been struggling with as we’ve built this hypothesis is “what is a signature of symmetry/harmony?” There’s some research from Stanford (Chon) on quantifying consonance in complex waveforms, and some interesting music theory building on Helmholtz’s work, but this appears to be an unsolved problem. Our “CDNS” approach looks at pairwise relationships between harmonics to quantify the degree to which they’re in consonance or dissonance with each other. We’re at the stage where we have the algorithm, but we need to validate it on audio samples before applying it too confidently to the brain.
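To make the “pairwise consonance/dissonance” idea concrete, here’s a minimal sketch of one way such a score could work. This is not the actual CDNS algorithm; it substitutes the well-known Plomp–Levelt roughness curve (in Sethares’ parameterization) for whatever pairwise measure CDNS actually uses, and all function and variable names are mine:

```python
import math

def pair_roughness(f1, a1, f2, a2):
    """Sethares' approximation of sensory dissonance for one pair of
    partials (frequency in Hz, amplitude unitless). Roughness peaks when
    the two frequencies sit about a quarter of a critical band apart."""
    lo, hi = sorted((f1, f2))
    s = 0.24 / (0.021 * lo + 19.0)  # critical-band scaling at the lower partial
    diff = hi - lo
    return a1 * a2 * (math.exp(-3.5 * s * diff) - math.exp(-5.75 * s * diff))

def total_dissonance(partials):
    """Sum roughness over all unordered pairs of (freq, amp) partials --
    the 'pairwise' aggregation step described above."""
    total = 0.0
    for i in range(len(partials)):
        for j in range(i + 1, len(partials)):
            f1, a1 = partials[i]
            f2, a2 = partials[j]
            total += pair_roughness(f1, a1, f2, a2)
    return total

# Sanity check on audio-style input: a perfect fifth (3:2) should score
# lower dissonance than a tritone at equal amplitudes.
fifth = [(440.0, 1.0), (660.0, 1.0)]
tritone = [(440.0, 1.0), (622.25, 1.0)]
print(total_dissonance(fifth) < total_dissonance(tritone))  # prints True
```

The same aggregation would apply whether the partials come from an audio spectrum or from connectome-harmonic amplitudes; validating on audio first, where consonance judgments are well studied, is exactly the kind of check this structure makes easy.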
There’s also a question of what datasets are ideal for the sort of thing we’re interested in. Extreme valence datasets are probably the most promising, states of extreme pleasure or extreme suffering. We prefer datasets involving extreme pleasure, for two reasons:
(1) We viscerally feel better analyzing this sort of data than states of extreme suffering;
(2) fMRI’s time resolution is such that the best results will come from mental states with high structural stability. We expect this structural stability to be much higher during pleasure than suffering.
As such we’ve been focusing on collecting data from meditative jhana states and from MDMA states. There may be other states involving reliably positive emotion that we could study, but these are the best candidates we’ve found so far.
Lastly, there’s been the issue of neuroimaging pipelines and CSHW. Atasoy’s work is not open source, so we had to reimplement her core logic (big thanks to Patrick here), and we ended up collaborating with an external group on a project to combine this core logic with a neuroimaging packaging system. I can’t share all the details, as our partner doesn’t want to be public about their involvement yet, but this is thankfully wrapping up soon.
I wish we had a bunch of deeply analyzed data we could send you in direct support of STV! I agree that’s the ideal, and you’re right to ask for it. Sadly we don’t have it at this point, but I’m glad to say a lot of the preliminaries have now been taken care of and things are moving. I hope my various comments here haven’t come across as disrespectful (and I sincerely apologize if they have; that wasn’t my intention, but if that’s been your interpretation I accept it, sorry!). There’s just a lot of high-context material here that’s hard to package into something neat and tidy, and what clarity we’ve found on this topic has been very hard-won.