I’m a bit hesitant to upvote this comment given how critical it is [was] and how little I know about the field (and thus whether the criticism is deserved), but I’m relieved/interested to see I wasn’t the only one who thought it sounded really confusing/weird. I have somewhat skeptical priors towards big theories of consciousness and suffering (sort of/it’s complicated) and towards theories that rely on lots of complicated methods/jargon/theory (again, sort of/with caveats), but I also know very little about this field, so I couldn’t really judge. Thus, I’m definitely interested to see the opinions of people with some experience in the field.
Hi Harrison, appreciate the remarks. My response would be more or less an open-ended question: do you feel this is a valid scientific mystery? And what do you feel an answer would/should look like? I.e., correct answers to long-unsolved mysteries might tend to be on the weird side, but there’s “useful generative clever weird” and “bad wrong crazy timecube weird”. How would you tell the difference?
Haha, I certainly wouldn’t label what you described/presented as “timecube weird.” To be honest, I don’t have a very clear-cut set of criteria, and on reflection my prior is probably over-influenced by my experiences with some social science research and theory, as opposed to hard science. Additionally, it’s not simply that I’m skeptical of whether the conclusion is true; more generally, my skepticism heuristics for research ask whether whatever is being presented (A) is novel/in contrast with existing theories or intuitions, (B) is true, and/or (C) is useful. For example, a theory might basically rehash what existing research has already reached consensus on, just worded very differently in a way that adds little to existing research (aside from complexity); alternatively, something could be flat-out wrong; or something could be technically true and novel as explicitly written but not very useful (e.g., tautological definitions), while the common interpretation is wrong (but would be useful if it were right).
Still, two of the key features here that contributed to my mental yellow flags were:
The emphasis on jargon and seemingly ambiguous concepts (e.g., “harmony”) vs. a clear, lay-oriented narrative that explains the theory, crucially including how it differs from other plausible theories (in addition to “why should you believe this? / how did we test this?”). STEM jargon seems different from social science jargon in that it more often requires real knowledge/experience to get a sense of whether something is nonsense strung together or a legitimate-but-complicated analysis, whereas I can much more easily detect nonsense in social science work when it starts conflating ideas and making broad generalizations.
(To a lesser extent) The emphasis on mathematical analyses and models for something that seems to call for a broader approach and acceptance of some ambiguity. (Of course, it’s necessary to represent some things mathematically, but I’m a bit skeptical of systems that try to reduce concepts as complex as consciousness and affective experience to a mathematical/quantified representation, just as I’ve been skeptical of many attempts to measure/operationalize complex conceptual variables like “culture” or “polity” in the social sciences, even if I think doing so can be helpful relative to doing nothing, so long as people stay clear-eyed about the limitations of the quantification.)
In the end, I don’t have strong reason to believe that what you’re arguing for is wrong, but especially given the points I just mentioned, I haven’t updated my beliefs much in any direction after reading this post.
Hi Harrison, that’s very helpful. I think it’s a challenge to package fairly technical and novel research into something that’s both precise and intuitive. Definitely agree that “harmony” is an ambiguous concept.
One of the interesting aspects of this work is that it directly touches on issues of metaphysics and ontology: what are the natural kinds of reality? Which concepts ‘carve reality at the joints’? Most research can avoid dealing with these questions directly and just speak about observables and predictions. But since part of what we’re doing is establishing valence as a phenomenological natural kind, we have to make certain moves, and those moves may raise certain yellow flags, as you note, since when such moves are made there are often philosophical shenanigans going on. That said, I’m happy with the overall direction of our work, which has been growing steadily more empirical.
One takeaway I do hope I can offer is the deeply philosophically unsatisfactory nature of existing answers in this space. Put simply, no one knows what pleasure and suffering are, or at least no one has definitions that are coherent across all the domains they’d like to apply them to. This is an increasing problem as we tackle, e.g., problems of digital sentience and fundamental questions of AI alignment. I’m confident in our research program, but even more confident that the questions we’re trying to grapple with are important to address directly, and that there’s no good ‘default hypothesis’ at present.