Edit: This comment now makes less sense, given that Abby has revised the language of her comment.
Abby,
I strongly endorse what you say in your last paragraph:
Please provide evidence that “dissonance in the brain” as measured by a “Consonance Dissonance Noise Signature” is associated with suffering? … I’m willing to change my skepticism about this theory if you have this evidence.
However, I’d like to push back on the tone of your reply. If you’re sorry for posting a negative non-constructive comment, why not try to be a bit more constructive? Why not say something like “I am deeply skeptical of this theory and do not at this moment think it’s worth EAs spending time on. [insert reasons]. I would be willing to change my view if there was evidence.”
Apologies for being pedantic, but I think it’s worth the effort to try and keep the conversation on the forum as constructive as possible!
Hi Jpmos,
I think context is important here. This is not an earnest but misguided post from an undergrad with big ideas and little experience. This is a post from an organization trying to raise hundreds of thousands of dollars. You can check out their website if you want; the front page has a fundraising advertisement.
Further, there are a lot of fancy buzzwords in this post (“connectome!”) and enough jargon that people unfamiliar with the topic might think there is substance here that they just don’t understand (see Harrison’s comment: “I also know very little about this field and so I couldn’t really judge”).
As somebody who knows a lot about this field, I think it’s important that my opinion on these ideas is clearly stated. So I will state it again.
There is no a priori reason to believe any of the claims of STV. There is no empirical evidence to support STV. To an expert, these claims do not sound “interesting and plausible but unproven”, they sound “nonsensical and presented with baffling confidence”.
People have been observing brain oscillations at different frequencies and at different powers for about 100 years. These oscillations have been associated with different patterns of behavior, ranging from sleep stages to memory formation. Nobody has observed asynchrony to be associated with anything like suffering (as far as I’m aware, but please present evidence if I’m mistaken!).
fMRI is a technique that doesn’t measure the firing of neurons (it measures the oxygen consumed over relatively big patches of neurons) and is extremely poorly suited to provide evidence for STV. A better method would be MEG (expensive) or EEG (extremely affordable). If the Qualia Research Institute was a truth seeking institution, they would have either run the simple experiment I proposed themselves, or had any of the neuroscientists they claim to be collaborating with run it for them.
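To make the shape of such a test concrete, here is a minimal sketch in Python on simulated data. This is an illustrative toy only, not the specific experiment I proposed elsewhere; the conditions, frequency band, and parameters are placeholders. The idea: band-pass filter each channel, extract instantaneous phase with a Hilbert transform, and compare average pairwise phase-locking between an aversive and a neutral condition.

```python
# Illustrative toy: compare pairwise EEG phase-locking (alpha band) between
# an "aversive" and a "neutral" condition. Simulated data stands in for a
# real recording; all parameters are arbitrary.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                              # sampling rate (Hz)
n_channels, n_samples = 8, fs * 60    # 8 channels, 60 s per condition
rng = np.random.default_rng(0)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    return filtfilt(b, a, x, axis=-1)

def mean_plv(data, lo=8, hi=13):
    """Mean phase-locking value over all channel pairs in a frequency band."""
    phase = np.angle(hilbert(bandpass(data, lo, hi, fs), axis=-1))
    plvs = []
    for i in range(data.shape[0]):
        for j in range(i + 1, data.shape[0]):
            plvs.append(np.abs(np.mean(np.exp(1j * (phase[i] - phase[j])))))
    return float(np.mean(plvs))

# Fake "recordings": a shared 10 Hz rhythm plus independent channel noise.
# The neutral condition gets a stronger shared rhythm, purely for illustration.
t = np.arange(n_samples) / fs
alpha = np.sin(2 * np.pi * 10 * t)
aversive = 0.5 * alpha + rng.standard_normal((n_channels, n_samples))
neutral = 1.0 * alpha + rng.standard_normal((n_channels, n_samples))

print("mean PLV, aversive:", round(mean_plv(aversive), 3))
print("mean PLV, neutral: ", round(mean_plv(neutral), 3))
```

If STV were right, one would presumably predict systematically lower phase-locking (or some other synchrony measure) in the aversive condition; on real data you would of course need proper preprocessing, artifact rejection, and statistics.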
This is a bad post and it should be called out as such. I would have been more gentle if this was a single misguided researcher and not the head of an organization that publishes a lot of other nonsense too.
This is a post from an organization trying to raise hundreds of thousands of dollars.
...
If the Qualia Research Institute was a truth seeking institution, they would have either run the simple experiment I proposed themselves, or had any of the neuroscientists they claim to be collaborating with run it for them.
This reads to me as insinuating fraud, without much supporting evidence.
This is a bad post and it should be called out as such. I would have been more gentle if this was a single misguided researcher and not the head of an organization that publishes a lot of other nonsense too.
I appreciate that in other comments you followed up with more concrete criticisms, but this still feels against the “Keep EA Weird” spirit to me. If we never spend a million or two on something that turns out to be nonsense, we aren’t applying hits-based giving very well.
(Despite the username, I have no affiliation with QRI. I’ll admit to finding the problem worth working on.)
Keeping EA honest and rigorous is a much higher priority. Making excuses for incompetence or a missing evidence base is the opposite of EA.
I agree that honesty is more important than weirdness. Maybe I’m being taken, but I see miscommunication and not dishonesty from QRI.
I am not sure what an appropriate standard of rigor is for a preparadigmatic area. I would welcome more qualifiers and softer claims.
At the very least, miscommunication this bad is evidence of serious incompetence at QRI. I think you are mistaken to want to excuse that.
Hi all, I messaged a bit with Holly about this, and what she shared was very helpful. I think a core part of what happened was a mismatch of expectations: I originally wrote this content for my blog and QRI’s website, and the tone and terminology were geared toward “home team content”, not “away team content”. Some people found both the confidence and the somewhat dense terminology off-putting, and I think it’s reasonable for them to raise questions. As a takeaway, I’ve updated that crossposting involves some pitfalls, and I intend to do things differently next time.
Thanks, valence. I do think the ‘hits-based giving’ frame is important to develop, although I understand it doesn’t have universal support, as some of the implications may be difficult to navigate.
And thanks for appreciating the problem; it’s sometimes hard for me to describe how important the topic feels and all the reasons for working on it.
Edit: probably an unhelpful comment
Hi Mike,
I am comfortable calling myself “somebody who knows a lot about this field”, especially in relation to the average EA Forum reader, which is our current context.
I respect Karl Friston as well, and I’m looking forward to reading his thoughts on your theory. Is there anything you can share?
The CSHW stuff looks potentially cool, but it’s separate from your original theory, so I don’t want to get too deep into it here. The only thing I would say is that I don’t understand why the claims of your original theory cannot be investigated using standard (cheap) EEG techniques. This is important if a major barrier to finding empirical evidence for your theory is funding. Could you explain why standard EEG is insufficient to investigate the synchrony of neuronal firing during suffering?
I was very aggressive with my criticism of your theory, partially because I think it is wrong (again, the basis of your theory, “the symmetry of this representation will encode how pleasant the experience is”, makes no sense to me), but also because of how confidently you describe your theory with no empirical evidence. So I happily accept being called arrogant and would also happily accept being shown how I am wrong. My tone is in reaction to what I feel is your unfounded confidence, and other posts like “I think all neuroscientists, all philosophers, all psychologists, and all psychiatrists should basically drop whatever they’re doing and learn Selen Atasoy’s “connectome-specific harmonic wave” (CSHW) framework.” https://opentheory.net/2018/08/a-future-for-neuroscience/
You link to your other work in this post, and you are raising money for your organization (which I think will redirect money from organizations doing more effective work), so I think it’s fair for my comments to be in reaction to things outside the text of your original post.
I’m glad to hear you feel good about your background and are filled with confidence in yourself and your field. I think the best work often comes from people who don’t at first see all the challenges involved in doing something, because often those are the only people who even try.
At first I was a little taken aback by your tone, but to be honest I’m a little amused by the whole interaction now.
The core problem with EEG is that the most sophisticated analyses depend on source localization (holographic reconstruction of brain activity), and accurate source localization from EEG remains an unsolved problem, at least at the resolution and confidence we’d need. In particular, we’ve looked at various measures of coherence as applied to EEG and found them all wanting in various ways. I notice some backtracking on your criticism of CSHW. ;) It’s a cool method; not without downsides, but it occupies a cool niche. I have no idea what your research is about, but CSHW might be useful for you to learn about for some purposes.
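To illustrate one standard failure mode of sensor-level coherence (a textbook volume-conduction toy, not our actual analysis): two electrodes that each pick up scaled copies of the same underlying source show high coherence even though there is only one generator and no genuine interaction between regions.

```python
# Toy volume-conduction demo: two "electrodes" that each record a scaled copy
# of the same underlying source show high spectral coherence even though there
# is only one generator and no interaction between distinct regions at all.
import numpy as np
from scipy.signal import coherence

fs = 250
t = np.arange(fs * 120) / fs
rng = np.random.default_rng(1)

# One underlying "source": a 10 Hz rhythm plus broadband activity.
source = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Each sensor sees the same source (different gains) plus its own sensor noise.
sensor_a = 1.0 * source + 0.5 * rng.standard_normal(t.size)
sensor_b = 0.7 * source + 0.5 * rng.standard_normal(t.size)

f, coh = coherence(sensor_a, sensor_b, fs=fs, nperseg=fs * 4)
print("coherence at 10 Hz:", round(float(coh[np.argmin(np.abs(f - 10))]), 2))
# The high value reflects shared pickup of a single source (volume conduction),
# not two synchronized regions.
```

Spurious coupling of this kind is a large part of why source localization (or connectivity measures built to sidestep it) matters before leaning on EEG coherence.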
I’m glad you’re reading more of our ‘back issues’, as it were. We have some talks on our YouTube channel as well (including the NA presentation to Friston), although not all of our work on STV is public yet.
If you share what your research is about, and any published work, I think it’d help me understand where your critiques are coming from a little better. Totally up to you, though.
Hi Jpmos, really appreciate the comments. To address the question of evidence: this is a fairly difficult epistemological situation, but we’re working with high-valence datasets from Daniel Ingram & Harvard and from Imperial College London (jhana data and MDMA data, respectively), looking for signatures of high harmony.
Neuroimaging is a pretty messy thing, there are no shortcuts to denoising data, and we are highly funding constrained, so I’m afraid we don’t have any peer-reviewed work published on this yet. I can say that initial results seem fairly promising and we hope to have something under review in 6 months. There is a wide range of tacit evidence that stimulation patterns with higher internal harmony produce higher valence than dissonant patterns (basically: music feels good, nails on a chalkboard feels bad), but this is in a sense ‘obvious’ and only circumstantial evidence for STV.
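As a toy illustration of the ‘music vs. nails on a chalkboard’ intuition (emphatically not our CDNS measure or actual pipeline, just a crude roughness proxy): partials that land close enough together to beat count as rough, and a dissonant dyad comes out far rougher than a consonant one.

```python
# Crude "roughness" toy: partial pairs close enough to beat (a few tens of Hz
# apart) count as rough. A stand-in illustration only -- not QRI's CDNS
# measure or actual pipeline.
import numpy as np

def partials(f0, n=6):
    """Harmonic partials of a tone at k*f0 with 1/k amplitude rolloff."""
    return [(k * f0, 1.0 / k) for k in range(1, n + 1)]

def roughness(partial_list):
    """Sum amplitude-weighted 'beating' over all pairs of partials.
    The Gaussian weight peaks near a ~25 Hz frequency difference, a rough
    stand-in for where beating is most salient in this register."""
    total = 0.0
    for i, (f1, a1) in enumerate(partial_list):
        for f2, a2 in partial_list[i + 1:]:
            df = abs(f1 - f2)
            if df < 3:  # coinciding partials fuse rather than beat
                continue
            total += a1 * a2 * np.exp(-((df - 25.0) ** 2) / (2 * 15.0 ** 2))
    return total

f0 = 220.0  # A3
fifth = partials(f0) + partials(f0 * 3 / 2)           # consonant dyad (3:2)
minor_second = partials(f0) + partials(f0 * 16 / 15)  # dissonant dyad (16:15)

print("roughness, perfect fifth:", round(roughness(fifth), 3))
print("roughness, minor second:", round(roughness(minor_second), 3))
```

The open question, of course, is whether anything analogous shows up in neural data, which is what the datasets above are meant to probe.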
Happy to ‘talk shop’ if you want to dig into details here.