A review of what affective neuroscience knows about suffering & valence. (TLDR: Affective Neuroscience is very confused about what suffering is.)
A significant fraction of the EA movement is concerned with suffering and, all else being equal, thinks there should be less of it. I think this is an extraordinarily noble goal.
But what *is* suffering? There are roughly as many working definitions of suffering in the EA movement as there are factions in the EA movement. Worryingly, these definitions often implicitly or explicitly conflict, and only the fact that the EA pie is growing relatively rapidly prevents a descent into factional warfare over resources being wasted on ‘incorrect’ understandings of suffering.
Intuitively, one would hope that gradual progress in affective neuroscience will make this problem less pressing: that given enough time, effort, and resources, different approaches to defining suffering will cohere, and this problem will fade away.
I am here to inform you that this is not going to happen: the outside view that “affective neuroscience is slowly settling on a consensus view of suffering” is mistaken, and this hurdle to coordination will not resolve itself. Instead, the more affective neuroscience learns about valence, the more confusing and divergent the picture becomes.
The following is an overview (adapted from my core research) of what affective neuroscience knows about valence.
I’ll front-load some implications for discussion:
There’s lots of philosophical confusion in valence/suffering research. In Kuhnian terms, this would suggest that affective neuroscience is ripe for a paradigm shift. Paradigm shifts often come from outside the field, and usually have unpredictable outcomes (it’s difficult to predict how some future version of affective neuroscience may define suffering).
Organizations that have been using models from affective neuroscience, such as FRI, ACE, and OpenPhil, should be clearer about the caveats involved, and should consider hedging their bets with some ‘basic research’ plays.
The longer we don’t have a good model for what suffering is, the worse off we’ll be with regard to movement coordination.
-------------------------Begin review-------------------------
Why some things feel better than others: the view from neuroscience
Valence research tends to segregate into two buckets: function and anatomy. The former attempts to provide a description of how valence interacts with thought and behavior, whereas the latter attempts to map valence states to the anatomy of the brain. The following are key highlights from each ‘bucket’:
Valence as a functional component of thought & behavior:
One of the most common views of valence is that it’s the way the brain encodes value:
Emotional feelings (affects) are intrinsic values that inform animals how they are faring in the quest to survive. The various positive affects indicate that animals are returning to “comfort zones” that support survival, and negative affects reflect “discomfort zones” that indicate that animals are in situations that may impair survival. They are ancestral tools for living—evolutionary memories of such importance that they were coded into the genome in rough form (as primary brain processes), which are refined by basic learning mechanisms (secondary processes) as well as by higher-order cognitions/thoughts (tertiary processes). (Panksepp 2010a).
Similarly, valence seems to be a mechanism the brain uses to determine or label salience, or phenomena worth paying attention to (Cooper and Knutson 2008), and to drive reinforcement learning (Bischoff-Grethe et al. 2009).
A common thread in these theories is that valence is entangled with, and perhaps caused by, an appraisal of a situation. Frijda describes this idea as the law of situated meaning: ‘‘Input some event with its particular meaning; out comes an emotion of a particular kind’’ (Frijda 1988). Similarly, Clore et al. phrase this in terms of “The Information Principle”, where “[e]motional feelings provide conscious information from unconscious appraisals of situations.” (Clore, Gasper, and Garvin 2001) Within this framework, positive valence is generally modeled as the result of an outcome being better than expected (Schultz 2015), or a surprising decrease in ‘reward prediction errors’ (RPEs) (Joffily and Coricelli 2013).
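To make the RPE framing concrete, here is a minimal sketch using standard temporal-difference learning; the function names and constants are illustrative placeholders of my own, not anything from the cited papers:

```python
# Minimal temporal-difference (TD) value learner. In the appraisal framing
# above, positive valence tracks outcomes that are better than predicted
# (a positive RPE); negative valence tracks worse-than-predicted outcomes.

def td_update(value, state, next_state, reward, alpha=0.1, gamma=0.95):
    """One TD(0) step; returns the updated value table and the RPE."""
    rpe = reward + gamma * value[next_state] - value[state]  # prediction error
    value[state] += alpha * rpe  # shift the prediction toward the outcome
    return value, rpe

# Toy usage: an unexpected reward yields a positive RPE ('pleasant surprise').
values = {0: 0.0, 1: 0.0}
values, rpe = td_update(values, state=0, next_state=1, reward=1.0)
print(f"RPE = {rpe:+.2f}")
```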
Computational affective neuroscience is a relatively new subdiscipline which attempts to formalize this appraisal framework into a unified model of cognitive-emotional-behavioral dynamics. A good example is “Mood as Representation of Momentum” (Eldar et al. 2016), where moods (and valence states) are understood as pre-packaged behavioral and epistemic biases which can be applied to different strategies depending on what kind of ‘reward prediction errors’ are occurring. E.g., if things are going surprisingly well, the brain tries to take advantage of this momentum by shifting into a happier state that is more suited to exploration & exploitation. On the other hand, if things are going surprisingly poorly, the brain shifts into a “hunker-down” mode which conserves resources and options.
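As a cartoon of the momentum idea, here is a minimal sketch assuming mood can be modeled as a running average of recent RPEs that biases how later outcomes are experienced; the class, update rules, and constants are my simplifications, not the published model from Eldar et al. (2016):

```python
# A toy rendering of 'mood as momentum': mood tracks an exponential moving
# average of recent reward prediction errors, and in turn biases how
# subsequent outcomes feel. All parameters here are invented placeholders.

class MoodyAgent:
    def __init__(self, eta=0.3, bias=0.5, lr=0.1):
        self.mood = 0.0         # momentum: roughly an average of recent RPEs
        self.expectation = 0.0  # learned baseline of how good things are
        self.eta, self.bias, self.lr = eta, bias, lr

    def experience(self, reward):
        perceived = reward + self.bias * self.mood   # good mood: things feel better
        rpe = perceived - self.expectation           # surprise relative to baseline
        self.mood += self.eta * (rpe - self.mood)    # mood = EMA of recent RPEs
        self.expectation += self.lr * rpe            # slowly recalibrate baseline
        return rpe

agent = MoodyAgent()
for r in [1.0, 1.0, 1.0]:       # things going surprisingly well...
    agent.experience(r)
print(f"mood after a winning streak: {agent.mood:+.2f}")  # positive 'momentum'
```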
However, while these functional descriptions are intuitive, elegant, and appear to explain quite a lot about valence, they frustratingly fall apart as metaphysically satisfying answers when we look closely at edge cases and the anatomy of pain and pleasure.
Valence as a product of neurochemistry & neuroanatomy:
The available neuroanatomical evidence suggests that the above functional themes merely highlight correlations rather than metaphysical truths, and for every functional story about the role of valence, there exist counter-examples. E.g.:
Valence is not the same as value or salience:
(Berridge and Kringelbach 2013) find that “representation [of value] and causation [of pleasure] may actually reflect somewhat separable neuropsychological functions”. Relatedly, (Jensen et al. 2007) note that salience is also handled by different, non-perfectly-overlapping systems in the brain.
Valence should not be thought of in terms of preferences, or reinforcement learning:
Even more interestingly, (Berridge, Robinson, and Aldridge 2009) find that what we call ‘reward’ has three distinct elements in the brain: ‘wanting’, ‘liking’, and ‘learning’, and the neural systems supporting each are relatively distinct from one another. ‘Wanting’, a.k.a. seeking, seems strongly (though not wholly) dependent upon the mesolimbic dopamine system, whereas ‘liking’, the actual subjective experience of pleasure, seems to depend upon the opioid, endocannabinoid, and GABA-benzodiazepine neurotransmitter systems, but only within the context of a handful of so-called “hedonic hotspots” (elsewhere, their presence seems only to increase ‘wanting’). With the right interventions disabling each system, it looks like brains can exhibit any permutation of these three: ‘wanting and learning without liking’, ‘wanting and liking without learning’, and so on. Likewise with pain, we can roughly separate the sensory/discriminative component from the affective/motivational component, each of which can be modulated independently (Shriver 2016).
These distinctions between components are empirically significant but not necessarily theoretically crisp: (Berridge and Kringelbach 2013) suggest that the dopamine-mediated, novelty-activated seeking state of mind involves at least some small amount of intrinsic pleasure.
A strong theme in the affective neuroscience literature is that pleasure seems highly linked to certain specialized brain regions / types of circuits:
We note the rewarding properties for all pleasures are likely to be generated by hedonic brain circuits that are distinct from the mediation of other features of the same events (e.g., sensory, cognitive). Thus pleasure is never merely a sensation or a thought, but is instead an additional hedonic gloss generated by the brain via dedicated systems. … Analogous to scattered islands that form a single archipelago, hedonic hotspots are anatomically distributed but interact to form a functional integrated circuit. The circuit obeys control rules that are largely hierarchical and organized into brain levels. Top levels function together as a cooperative heterarchy, so that, for example, multiple unanimous ‘votes’ in favor from simultaneously-participating hotspots in the nucleus accumbens and ventral pallidum are required for opioid stimulation in either forebrain site to enhance ‘liking’ above normal. (Kringelbach and Berridge 2009a)
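As a cartoon of the ‘cooperative heterarchy’ control rule just quoted (only a sketch of the unanimity idea, not Kringelbach & Berridge’s actual model), the gating might be rendered like this:

```python
# Toy sketch of the quoted control rule: enhancing 'liking' above normal
# requires unanimous 'votes' from the participating hedonic hotspots, and
# any single dissent vetoes the enhancement. The hotspot names are real
# anatomical regions, but the voting logic here is a cartoon.

def liking_enhanced(hotspot_votes):
    """Unanimity rule: every participating hotspot must vote in favor."""
    return all(hotspot_votes.values())

print(liking_enhanced({"nucleus_accumbens": True, "ventral_pallidum": True}))   # True
print(liking_enhanced({"nucleus_accumbens": True, "ventral_pallidum": False}))  # False
```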
Some of these ‘hedonic hotspots’ are also implicated in pain, and activity in normally-hedonic regions has been shown to produce an aversive effect under certain psychological conditions, e.g., when threatened or satiated (Berridge and Kringelbach 2013). Furthermore, damage to certain regions of the brain (e.g., the ventral pallidum) in rats changes their reaction toward normally-pleasurable things to active ‘disliking’ (Cromwell and Berridge 1993; Smith et al. 2009). Moreover, certain painkillers such as acetaminophen blunt both pain and pleasure (Durso, Luttrell, and Way 2015). By implication, the circuits or activity patterns that cause pain and pleasure may have similarities not shared with ‘hedonically neutral’ circuits. However, pain does seem to be a slightly more ‘distributed’ phenomenon than pleasure, with fewer regions that consistently contribute.
The key takeaway from the neuroanatomical research into valence is this: at this time we don’t have a clue as to what properties are necessary or sufficient to make a given brain region a so-called “pleasure center” or “pain center”. Instead, we just know that some regions of the brain appear to contribute much more to valence than others.
Finally, the core circuitry implicated in emotions in general, and valence in particular, is highly evolutionarily conserved, and all existing brains seem to generate valence in similar ways: “Cross-species affective neuroscience studies confirm that primary-process emotional feelings are organized within primitive subcortical regions of the brain that are anatomically, neurochemically, and functionally homologous in all mammals that have been studied.” (Panksepp 2010b) Others have indicated the opioid-mediated ‘liking’ reaction may be conserved across an incredibly broad range of brains, from the very complex (humans & other mammals) to the very simple (C. elegans, with 302 neurons), and all known data points in between, e.g., vertebrates, molluscs, crustaceans, and insects (D’iakonova 2001). On the other hand, the role of dopamine may be substantially different, and even behaviorally inverted (associated with negative valence and aversion), in certain invertebrates such as insects (Van Swinderen and Andretic 2011) and octopuses.
A taxonomy of valence?
How many types of pain and pleasure are there? While neuroscience doesn’t offer a crisp taxonomy, there are some apparent distinctions we can draw from physiological & phenomenological data:
- There appear to be at least three general types of physical pain, each associated with a certain profile of ion channel activation: thermal (heat, cold, capsaicin), chemical (lactic acid buildup), and mechanical (punctures, abrasions, etc.) (Osteen et al. 2016).
- More speculatively, based on a dimensional analysis of psychoactive substances, there appear to be at least three general types of pleasure: ‘fast’ (cocaine, amphetamines), ‘slow’ (morphine), and ‘spiritual’ (LSD, mescaline, DMT) (Gomez Emilsson 2015).
- Mutations in the gene SCN9A can remove the ability to feel any pain mediated by physical nociception (Marković, Janković, and Veselinović 2015; Drenth and Waxman 2007); however, this does not appear to impact the ability to feel emotional pain (Heckert 2012).
However, these distinctions between different types of pain & pleasure appear substantially artificial:
- Hedonic pleasure, social pleasure, eudaimonic well-being, etc. all seem to be manifestations of the same underlying process. (Kringelbach and Berridge 2009b) note: “The available evidence suggests that brain mechanisms involved in fundamental pleasures (food and sexual pleasures) overlap with those for higher-order pleasures (for example, monetary, artistic, musical, altruistic, and transcendent pleasures).” This seems to express a rough neuroscientific consensus (Kashdan, Robert, and King 2008), albeit with some caveats.
- Likewise in support of lumping emotional & physical valence together, common painkillers such as acetaminophen help with both physical and social pain (Dewall et al. 2010).
A deeper exploration of the taxonomy of valence is hindered by the fact that the physiologies of pain and pleasure are frustrating inverses of each other.
- The core hurdle to understanding pleasure (in contrast to pain) is that there’s no pleasure-specific circuitry analogous to nociceptors: peripheral sensors that would reliably cause pleasure, and whose physiology we could isolate and reverse-engineer.
- The core hurdle to understanding pain (in contrast to pleasure) is that there’s only weak and conflicting evidence for pain-specific circuitry analogous to hedonic hotspots: regions deep in the interior of the nervous system that would centrally coordinate all pain, and whose physiological mechanics & dynamics we could isolate and reverse-engineer.
I.e., pain is easy to cause, but hard to localize in the brain; pleasure has a more definite neural footprint, but is much harder to generate on demand.
Philosophical confusion in valence research:
In spite of the progress affective neuroscience continues to make, our current understanding of valence and consciousness is extremely limited, and I offer that the core hurdle for affective neuroscience is philosophical confusion, not mere lack of data. I.e., perhaps our entire approach deserves to be questioned. Several critiques stand out:
Neuroimaging is a poor tool for gathering data:
Much of what we know about valence in the brain has been informed by functional imaging techniques such as fMRI and PET. But neuroscientist Martha Farah notes that these techniques depend upon a very large set of assumptions, and that there’s a widespread worry in neuroscience “that [functional brain] images are more researcher inventions than researcher observations.” (Farah 2014) Farah notes the following flaws:
- Neuroimaging is built around indirect and imperfect proxies. Blood flow (which fMRI tracks) and metabolic rates (which PET tracks) are correlated with neural activity, but exactly how and to what extent they’re correlated is unclear, and skeptics abound. Psychologist William Uttal suggests that “fMRI is as distant as the galvanic skin response or pulse rate from cognitive processes.” (Uttal 2011)
- The elegant-looking graphics neuroimaging produces are not direct pictures of anything: rather, they involve extensive statistical guesswork and ‘cleaning actions’ by many layers of algorithms. This hidden inferential distance can lead to unwarranted confidence, especially when most models can’t control for differences in brain anatomy. (A toy illustration of this pitfall follows this list.)
- Neuroimaging tools bias us toward the wrong sorts of explanations. As Uttal puts it, neuroimaging encourages hypotheses “at the wrong (macroscopic) level of analysis rather than the (correct) microscopic level. … we are doing what we can do when we cannot do what we should do.” (Uttal 2011)
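As promised above, here is a deliberately crude toy of the statistical-guesswork point (my own example, not Farah’s or Uttal’s): test enough noise-only ‘voxels’ against a task variable at an uncorrected threshold, and some will light up by chance.

```python
# Multiple-comparisons toy: with 10,000 pure-noise 'voxels' and a crude
# uncorrected screen (roughly a p < .05 two-sample test), a few hundred
# voxels will 'activate' even though no signal exists anywhere.
import random

random.seed(0)
N_VOXELS, N_SCANS = 10_000, 20
task = [i % 2 for i in range(N_SCANS)]          # on/off task condition

def mean(xs):
    return sum(xs) / len(xs)

false_positives = 0
for _ in range(N_VOXELS):
    signal = [random.gauss(0, 1) for _ in range(N_SCANS)]   # pure noise
    on = [s for s, t in zip(signal, task) if t]
    off = [s for s, t in zip(signal, task) if not t]
    if abs(mean(on) - mean(off)) > 0.9:          # ~2 standard errors here
        false_positives += 1

print(f"{false_positives} noise voxels 'activated' out of {N_VOXELS}")
```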
Neuroscience’s methods for analyzing data aren’t as good as people think:
There’s a popular belief that if only the above data-gathering problems could be solved, neuroscience would be on firm footing. (Jonas and Kording 2016) attempted to test whether the field is merely data-limited (yet has good methods) in a novel way: by taking a microprocessor (where the ground truth is well-known, and unlimited amounts of arbitrary data can be gathered) and attempting to reverse-engineer it via standard neuroscientific techniques such as lesion studies, whole-processor recordings, pairwise and Granger causality, and dimensionality reduction. This should be an easier task than reverse-engineering brain function, yet when they performed this analysis, they found that “the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current approaches in neuroscience may fall short of producing meaningful models of the brain.” The authors conclude that we don’t understand the brain as well as we think we do, and that we’ll need better theories and methods to get there, not just more data.
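For flavor, here is a much-simplified sketch in the spirit of that experiment (the circuit and naming are mine, not from the paper): run a ‘lesion study’ on a system whose ground truth we fully know, and notice how little the lesion map reveals about the computation.

```python
# Toy 'lesion study' on a known-ground-truth system: knock out each gate of
# a 1-bit full adder and count broken outputs. Every gate turns out to be
# 'necessary', but the lesion map alone doesn't recover what any gate
# computes -- or that the circuit adds at all.

def full_adder(a, b, cin, lesioned=None):
    """1-bit full adder built from named gates; a lesioned gate outputs 0."""
    gate = lambda name, val: 0 if name == lesioned else val
    x1 = gate("xor1", a ^ b)
    s = gate("xor2", x1 ^ cin)
    c1 = gate("and1", a & b)
    c2 = gate("and2", x1 & cin)
    cout = gate("or1", c1 | c2)
    return s, cout

inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
truth = [full_adder(*i) for i in inputs]
for g in ["xor1", "xor2", "and1", "and2", "or1"]:
    broken = sum(full_adder(*i, lesioned=g) != t for i, t in zip(inputs, truth))
    print(f"lesion {g}: breaks {broken}/8 cases")
```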
Subjective experience is hard to study objectively:
Unfortunately, even if we improve our methods for understanding the brain’s computational hierarchy, it will be difficult to translate this into improved knowledge of subjective mental states & properties of experience (such as valence).
In studying consciousness we’ve had to rely on either crude behavioral proxies or subjective reports of what we’re experiencing. These ‘subjective reports of qualia’ are very low-bandwidth, are of unknown reliability, and likely vary in complex, hidden ways across subjects; as (Tsuchiya et al. 2015) note, the methodological challenge of gathering them “has biased much of the neural correlates of consciousness (NCC) research away from consciousness and towards neural correlates of perceptual reports”. I.e., if we ask someone to press a button when they have a certain sensation, then measure their brain activity, we’ll often measure the brain activity associated with pressing buttons, rather than the activity associated with the sensation we’re interested in. We can and do attempt to control for this with the addition of ‘no-report’ paradigms, but these largely rest on the sorts of neuroimaging paradigms critiqued above.
Affective neuroscience has confused goals:
Lisa Barrett (Barrett 2006) goes further and suggests that studying emotions is a particularly hard task for neuroscience, since most emotions are not “natural kinds”, i.e., things whose objective existence makes it possible to discover durable facts about them. Instead, Barrett notes, “the natural-kind view of emotion may be the result of an error of arbitrary aggregation. That is, our perceptual processes lead us to aggregate emotional processing into categories that do not necessarily reveal the causal structure of the emotional processing.” As such, many of the terms we use to speak about emotions have only an ad-hoc, fuzzy pseudo-existence, and this significantly undermines the ability of affective neuroscience to standardize on definitions, methods, and goals.
-----
In summary, affective neuroscience suffers from (1) a lack of tools that gather unbiased and functionally-relevant data about the brain, (2) a lack of formal methods which can reconstruct what the brain’s doing and how it’s doing it, (3) epistemological problems interfacing with the subjective nature of consciousness, and (4) an ill-defined goal, as it’s unclear just what it’s attempting to reverse-engineer in the first place.
Fig 1 summarizes some core implications of current neuroscience and philosophical research. In short: valence in the human brain is a complex phenomenon which defies simple description. Affective neuroscience has been hugely useful at illuminating the shape of this complexity, but is running into steeply diminishing returns within its current paradigm, and offers multiple conflicting models of what valence & suffering could be.
Figure 1: Core takeaways of affective neuroscience on valence.
Citations:
Frijda, Nico H. 1988. “The Laws of Emotion.” The American Psychologist 43 (5): 349–58.
Gomez Emilsson, Andres. 2015. “State-Space of Drug Effects: Results.” Qualiacomputing.com. June 9. https://qualiacomputing.com/2015/06/09/state-space-of-drug-effects-results/.
Heckert, Justin. 2012. “The Hazards of Growing Up Painlessly.” New York Times, November 18.
Jonas, Eric, and Konrad Kording. 2016. “Could a Neuroscientist Understand a Microprocessor?” doi:10.1101/055624.
Shriver, Adam. 2016. “The Unpleasantness of Pain For Humans and Other Animals.” https://www.academia.edu/12621257/The_Unpleasantness_of_Pain_For_Humans_and_Other_Animals.
Uttal, William R. 2011. Mind and Brain: A Critical Appraisal of Cognitive Neuroscience. MIT Press.
-------------------------End review-------------------------
The above review actually understates the challenge of getting a good model of suffering, because it mostly avoids problems relating to consciousness (of which there are many). Still, my intent here isn’t to be discouraging—or even to throw cold water on the idea that someday, EA could have a good, integrative definition of suffering we could confidently use for animal welfare, AI safety, and social interventions alike. It should be clear from my work that I do think that’s possible.
Rather, my point is that EA should be realistic about how bad the current state of knowledge about suffering is, and that this problem isn’t going to solve itself.
Mike Johnson, Qualia Research Institute
Thanks for the summary! Lots of useful info here.
As a functionalist, I’m not at all troubled by these counter-examples. They merely show that the brain is very complicated, and they reinforce my view that crisp definitions of valence don’t work. ;)
As an analogy, suppose you were trying to find the location of “pesticide regulation” in the United States. You might start with the EPA: “Pesticide regulation in the United States is primarily a responsibility of the Environmental Protection Agency.” But you might notice that other federal agencies do work related to pesticides (e.g., the USDA). Moreover, some individual states have their own pesticide regulations. Plus, individual schools, golf courses, and homes decide if and how to apply pesticides; in this sense, they also “regulate” pesticide use. We might try to distinguish “legal regulation” from “individual choices” and note that the two can operate differently. We might question what counts as a pesticide. And so on. All this shows is that there’s a lot of stuff going on that doesn’t cleanly map onto simple constructs.
Actually, your later Barrett (2006) quote says the same thing: “the natural-kind view of emotion may be the result of an error of arbitrary aggregation. That is, our perceptual processes lead us to aggregate emotional processing into categories that do not necessarily reveal the causal structure of the emotional processing.” And you seemed to agree in your conclusion: “valence in the human brain is a complex phenomenon which defies simple description.” I’m puzzled how this squares with your attempt to find a crisp definition for valence.
Likewise, we can debate the necessary and sufficient properties that make something a “pesticide-regulation center”.
Interesting. :) This is part of why I don’t expect whole-brain emulation to come before de-novo AGI. Reverse-engineering of complex systems is often very difficult.
Hi Brian,
Thanks for the thoughts & kind words.
Nominally, this post is simply making the point that affective neuroscience doesn’t have a good definition of either valence or suffering, and based on its current trajectory, isn’t likely to produce one in the foreseeable future. It seems we both agree on that. :) However, you’re quite correct that the subtext to this post is that I believe a crisp definition of valence is possible, and you’re curious how I square this with the above description of the sad state of affective neuroscience.
Essentially, my model is that valence in the human brain is an incredibly complex phenomenon that defies simple description—but valence itself is probably a simple property of conscious systems. This seems entirely consistent with the above facts (Section I of my paper), and also very plausible if consciousness is a physical phenomenon. Here are the next few paragraphs of my paper:
Anything piped through the complexity of the brain will look complex, regardless of how simple or complex it starts out as. Similarly, anything will look irreducibly complex if we’re looking at it from the wrong level of abstraction. So just because affective neuroscience is confused about valence, doesn’t mean that valence is somehow intrinsically confusing.
In this sense, I see valence research as no different than any other physical science: progress will be made by (1) controlling for the messy complexity added by studying valence in messy systems, and (2) finding levels of abstractions that “carve reality at the joints” better. (For instance, “emotions” are not natural kinds, as Barrett notes, but “valence” may be one.)
The real kicker here is whether there exists a cache of predictive knowledge about consciousness to be discovered (similar to how Faraday & Maxwell discovered a cache of predictive knowledge about electromagnetism), or whether consciousness is a linguistic confusion, to be explained away (similar to how élan vital was a linguistic confusion and improper reification).
Fundamental research about suffering looks very, very different depending on which of these is true. Principia Qualia lays out how it would look in the case of the former, and describes a research program that I expect to bear predictive fruit if we ‘turn the crank’ on it.
But there doesn’t seem to be an equivalent document describing what suffering research is if we assume that consciousness should be thought of more as a linguistic confusion than a ‘real’ thing, and that suffering is a leaky reification. Explicitly describing what fundamental research about suffering looks like, and predicting what kinds of knowledge are & aren’t possible, if we assume functionalism (or perhaps ‘computational constructivism’ fits your views?) seems like it could be a particularly worthwhile project for FRI.
p.s. Yes, I quite enjoyed that piece on attempting to reverse-engineer a 6502 microprocessor via standard neuroscientific methods. My favorite paper of 2016 actually!
Fair enough. :) By analogy, even if pesticide regulation looks complex, the molecular structure of a single insecticide molecule is more crisp.
Various of my essays mention examples of my intuitions on the topic, and this piece discusses one framework for thinking about the matter. But I envision this project as more like interpreting the themes and imagery of Shakespeare than like a comprehensive scientific program. It’s subjective, personal, and dependent on one’s emotional whims. Of course, one can choose to make it more formalized if one prefers, like formalized preference utilitarianism does.
Certainly, Barrett makes a strong case that statements about emotions “will not be factual in any deep ontological sense,” because they aren’t natural kinds. My argument is that valence probably is a natural kind, however, and so we can make statements about it that are as factual as statements about the weak nuclear force, if (and only if) we find the right level of abstraction by which to view it.
I would say I’ve undergone the reverse process. :)
Your implication is that questions of consciousness & suffering are relegated to ‘spiritual poetry’ and can only be ‘debated in the moral realm’ (as stated in some of your posts). But I would suggest this is rather euphemistic, and runs into failure modes that are worrying.
The core implication seems to be that there are no crisp facts of the matter about what suffering is, or about which definition is the ‘correct’ one, and so it’s ultimately a subjective choice which definition we use. This leads to insane conclusions: we could use odd definitions of suffering to conclude that animals probably don’t feel pain, or that current chatbots can feel pain, or that the suffering which happens when a cis white man steps on a nail is less than the suffering which happens when a bisexual black female steps on a nail, or vice versa. I find it very likely that there are people making all of these claims today.
Now, I suspect you and I have similar intuitions about these things: we both think animals can feel pain, whereas current chatbots probably can’t, and that race almost certainly doesn’t matter with respect to capacity to suffer. I believe I can support these intuitions from a principled position (as laid out in Principia Qualia). But if one is a functionalist, and especially if our moral intuitions and definitions of suffering are “subjective, personal, and dependent on one’s emotional whims,” then it would seem that your support of these intuitions is in some sense arbitrary: they are your spiritual poetry, but other people can create different spiritual poetry that comes from very different directions.
And so, I fear that if we’re constructivists about suffering, then we should expect a very dark scenario: that society’s definition of suffering, and any institutions we build whose mission is to reduce suffering, will almost certainly be co-opted by future intellectual fashions. Indeed, given enough time and enough Moloch, society’s definition of suffering could invert, and some future Effective Altruism movement may very well work to maximize what we today would call suffering.
I believe I have a way out of this: I think consciousness and suffering(valence) are both ‘real’, and so a crisp definition of each exists, about which one can be correct or incorrect. My challenge to you is to find a way out of this ‘repugnant conclusion’ also. Or to disprove that I’ve found a way out of it, of course. :)
In short, I think we can be constructivists about qualia & suffering, or we can be very concerned about reducing suffering, but I question the extent to which we can do both at the same time while maintaining consistency.
But doing so would amount to shifting the goalpost, which is a way of cheating at arguments whether there’s a single definition of a word or not. :)
It’s similar to arguments over abortion of very early embryos. One side calls a small clump of cells “a human life”, and the other side doesn’t. There’s no correct answer; it just depends what you mean by that phrase. But the disagreement isn’t rendered trivial by the lack of objectivity of a single definition.
If by this you mean society’s prevailing concepts and values, then yes. But everything is at the mercy of those. If reducing your precisely defined version of suffering falls out of fashion, it won’t matter that it has a crisp definition. :)
Hm, that doesn’t seem too likely to me (more likely is that society becomes indifferent to suffering), except if you mean that altruists might, e.g., try to maximize the amount of sentience that exists, which would as a byproduct entail creating tons of suffering (but that statement already describes many EAs right now).
I think your solution, even if true, doesn’t necessarily help with goal drift / Moloch stuff because people still have to care about the kind of suffering you’re talking about. It’s similar to moral realism: even if you find the actual moral truth, you need to get people to care about it, and most people won’t (especially not future beings subject to Darwinian pressures).
Thanks for the thoughts! Here’s my attempt at laying out a strong form of why I don’t think constructivism as applied to ethics & suffering leads to productive areas:
Imagine someone arguing that electromagnetism was purely a matter of definitions: there’s no “correct” definition of electricity, so how one approaches the topic, and which definition one uses, is ultimately a subjective choice.
But now imagine they also want to build a transistor. Transistors are, in fact, possible, and so it turns out that there is a good definition of electricity, by way of quantum theory, and of course many bad ones that don’t ‘carve reality at the joints’.
So I would say, very strongly, that we can’t both hold that electricity is subjective (with everyone free to adopt their own arbitrary poetic definition of what it is and how it works) and also do interesting and useful things with it.
Likewise, my claim is that we can be subjectivists about qualia and suffering, saying that how we define them is rather arbitrary and ultimately subjective, or we can say that some qualia are better than others and that we should work to promote more good qualia and less bad qualia. But I don’t think we can do both at the same time. If someone makes a strong assertion that something is bad and that we should work to reduce its prevalence, then they’re also implying it’s real in a non-trivial sense; if something is not real, then it cannot be bad in an actionable sense.
Imagine that tomorrow I write a strong denouncement of blegblarg on the EA forum. I state that blegblarg is a scourge upon the universe, and we should work to rid ourselves of it, and all right-thinking people should agree with me. People ask me, “Mike…. I thought your post was interesting, but…. what the heck is blegblarg??”—I respond that “Well, blegblarg doesn’t have a crisp definition, it’s more of a you-know-it-when-you-see-it thing where there’s no ‘correct’ definition of blegblarg and we can each use our own moral compass to determine if something is blegblarg, but there’s definitely a lot of it out there and it’s clearly bad so we should definitely work to reduce it!”
This story would have no happy ending. Blegblarg can’t be a good rallying cry, because I can’t explain what it is. I can’t say it’s good or bad in a specific actionable sense, for the same reason. One person’s blegblarg is another person’s blargbleg, you know? :)
I see a strict reading of the constructivist project as essentially claiming similar things about suffering, and ultimately leading to the conclusion that what is, and isn’t, suffering is fundamentally arbitrary, i.e., it leads to post-modern moral nihilism. But you’re clearly not a moral nihilist, and FRI certainly doesn’t see itself as nihilist. In my admittedly biased view of the situation, I see you & FRI circling around moral realism without admitting it. :) Now, perhaps my flavor of moral realism isn’t to your liking; perhaps you might come to a completely different principled conclusion about what qualia & valence are. But I do hope you keep looking.
p.s. I tend to be very direct when speaking about these topics, and my apologies if anything I’ve said comes across as rude. I think we differ in an interesting way and there may be updates in this for both of us.
As long as we’re both using the same equations of physics to describe the phenomenon, it seems that exactly how we define “electricity” may not matter too much. The most popular interpretation of quantum mechanics is “shut up and calculate”.
As another analogy, “life” has a fuzzy, arbitrary boundary, but that doesn’t prevent us from doing biology.
An example I like to use is “justice”. It’s clear to many people that injustice is bad, even though there’s no crisp, physics-based definition of injustice.
Replace “blegblarg” with “obscenity”, and you have an argument that many people suffering from religious viruses would endorse.
I am. :) At least by this definition: “Moral nihilists consider morality to be constructed, a complex set of rules and recommendations that may give a psychological, social, or economical advantage to its adherents, but is otherwise without universal or even relative truth in any sense.”
I was worried about the same in reverse. I didn’t find your comments rude. :)
Good! I’ll charge forward then. :)
That is my favorite QM interpretation! But following this analogy, I’m offering a potential equation for electricity, but you’re saying that electricity doesn’t have an equation because it’s not ‘real’, so it doesn’t seem like you will ever be in a position to calculate.
But that doesn’t address the concern: if you argue that something is bad and we should work to reduce it, but also say there’s no correct definition for it and no wrong definition for it, what are you really saying? You note elsewhere that “We can interpret any piece of matter as being conscious if we want to,” and imply something similar about suffering. I would say that a definition that allows for literally anything is not a definition; an ethics that says something is bad, but notes that it’s impossible to ever tell whether any particular thing is bad, is not an ethics.
This doesn’t seem to match how you use the term ‘suffering’ in practice. E.g., we could claim that “protons oppress electrons” or “there’s injustice in fundamental physics” — but this is obviously nonsense, and from a Wittgensteinian “language game” point of view, what’s happening is that we’re using perfectly good words in contexts where they break down. But you do want to say that there could be suffering in fundamental physics, and potentially the far future. It looks like you want to have your cake and eat it too, and say that (1) “suffering” is a fuzzy linguistic construct, like “injustice” is, but also that (2) we can apply this linguistic construct of “suffering” to arbitrary contexts without it losing meaning. This seems deeply inconsistent.
That definition doesn’t seem to leave much room for ethical behavior (or foundational research!), merely selfish action. This ties into my notion above, that you seem to have one set of stated positions (extreme skepticism & constructivism about qualia & suffering, moral nihilism for the purpose of ‘psychological, social, or economical advantage’), but show different revealed preferences (which seem more altruistic, and seem to assume something close to moral realism).
The challenge in this space of consciousness/valence/suffering research is to be skeptical-yet-generative: to spot and explain the flaws in existing theories, yet also to constantly search for and/or build new theories which have the potential to avoid these flaws.
You have many amazing posts doing the former (I particularly enjoyed this piece), but you seem to have given up on the latter, and at least in these replies, seem comfortable with extreme constructivism and moral nihilism. However, you also seem to implicitly lean on valence realism to avoid biting the bullet on full-out moral nihilism & constructivism: your revealed preferences seem to be that you still want meaning, you want to say suffering is actually bad, and I assume you don’t think it’s 100% arbitrary whether we say something is suffering or not. But these things are not open to a full-blown moral nihilist.
Anyway, perhaps you would have very different interpretations on these things. I would expect so. :) I’m probing your argument to see what you do think. But in general, I agree with the sentiments of Scott Aaronson:
I want a future where we can tell each other to “shut up and calculate”. You may not like my solution for grounding what valence is (though I’m assuming you haven’t read Principia Qualia yet), but I hope you don’t stop looking for a solution.
I haven’t read your main article (sorry!), so I may not be able to engage deeply here. If we’re trying to model brain functioning, then there’s not really any disagreement about what success looks like. Different neuroscientists will use different methods, some more biological, some more algorithmic, and some more mathematical. Insofar as your work is a form of neuroscience, perhaps from a different paradigm, that’s cool. But I think we disagree more fundamentally in some way.
My point is that your objection is not an obstacle to practical implementation of my program, given that, e.g., anti-pornography activism exists.
If you want a more precise specification, you could define suffering as “whatever Brian says is suffering”. See “Brian utilitarianism”.
It’s not nonsense. :) If I cared about justice as my fundamental goal, I would wonder how far to extend it to simpler cases. I discuss an example with scheduling algorithms here. (Search for “justice” in that interview.)
We do lose much of the meaning when applying that concept to fundamental physics. The question is whether there’s enough of the concept left over that our moral sympathies are still (ever so slightly) engaged.
In my interpretation, altruism is part of “psychological advantage”, e.g., helping others because you want to and because it makes you feel better to do so.
I do think it’s 100% arbitrary, depending on how you define “arbitrary”. But of course I deeply want people to care about reducing suffering. There’s no contradiction here.
Quantum field theory is instrumentally useful for any superintelligent agent. Preventing negative valence is not. Even if the knowledge of what valence is remains, caring about it may disappear.
I don’t know what “objectively bad” means.
I’m glad we roughly agree on this factual prediction, even if we interpret “value” differently.
Our definition of electricity may evolve over time, in accordance with new developments in the foundational physics, but we’re unlikely to chuck quantum field theory in favor of some idiosyncratic theory of crystal chakras. If we discover the universe’s equation for valence, we’re unlikely to find our definition of suffering at the mercy of intellectual fads.
I agree that this seems unlikely, but it seems like you grant that such a values inversion is possible, and say that it wouldn’t be a bad thing, because there’s no fundamental moral truth (moral nihilism). But I think that, unambiguously, cats being lit on fire is an objectively bad thing. Even if time and Moloch happen to twist the definition of ‘suffering’ such that future utilitarian EAs want to tile the universe with burning cats, I completely reject that such an intellectual fashion could be right.
I think most people would strongly agree with this moral realist position, rather than the moral nihilist position: that this specific thing actually and unambiguously is bad, and that any definition of suffering that wouldn’t say it’s bad is wrong.
Yeah, I mostly agree with this; Andres covers some of this in this post. I feel great urgency to figure this out while we’re still in the non-Malthusian time Robin Hanson calls the dreamtime. If we don’t figure out what has value and we slide into a highly Darwinian/Malthusian/Molochian context, then I fear that could be the end of value.
The frustrating-inverses point makes me think this is a reflection of the asymmetric payoff structure in the ancestral environment (AE).
Interesting. To attempt to restate your notion: it’s more important to avoid death than to get an easy meal, so pain & aversion should come more easily than pleasure.
I’d agree with this, but perhaps this is overdetermined, in that both evolution and substrate lead us to “pleasure is centralized & highly contextual; pain is distributed & easily caused”.
I.e., I would expect that, given a set of conscious systems with randomized configurations, valence probably doesn’t fall into a symmetric distribution. Rather, my expectation is that high-valence states will be outnumbered by low-valence states… and so, just as it’s easier to destroy value than create it, it’s easier to create negative valence than positive valence. Thus, positive valence requires centralized coordination (hedonic regions) and is easily disrupted by nociceptors (injections of entropy are unlikely to push the system toward positive states, since those are rare).
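To gesture at this with a toy Monte Carlo (entirely my own construction; the ‘valence proxy’ is an invented placeholder, not a real measure): if high valence corresponds to a small ordered region of state space, random perturbations essentially never improve it.

```python
# Toy Monte Carlo for the asymmetry intuition above. Assume, purely for
# illustration, that 'high valence' is one rare, ordered configuration in
# a high-dimensional state space, scored by an invented 'harmony' proxy.
# Random perturbations ('injections of entropy') then almost never raise
# the score: degrading the pattern is easy, improving it is not.
import random

random.seed(0)
DIM, TRIALS = 50, 10_000

def valence_proxy(state):
    return -sum(x * x for x in state)  # peaks only at one rare configuration

ordered = [0.0] * DIM                  # the high-valence configuration
baseline = valence_proxy(ordered)
improved = sum(
    valence_proxy([x + random.gauss(0, 0.1) for x in ordered]) > baseline
    for _ in range(TRIALS)
)
print(f"{improved}/{TRIALS} random perturbations improved the proxy")  # expect 0
```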
One possible explanation why we have nociceptors but not direct pleasure-ceptors is that there’s no stimulus that’s always fitness-enhancing (or is there?), while flames, skin wounds, etc. are always bad. Sugar receptors usually convey pleasure, but not if you’re full, nauseous, etc.
Also, we can’t have simple pleasure-ceptors for beautiful images or music because those stimuli require complex processing by visual or auditory cortices; there’s no “pleasant music molecule” that can stimulate a pleasure-ceptor neuron the way there are pleasant-tasting gustatory molecules.
Yeah, strongly agree.
Additionally, accidentally wireheading oneself had to have been at least a big potential problem during evolution, which would strongly select against anything like a pleasure-ceptor.
Hm, I would think that hedonic adaptation/habituation could be applied to stimuli from pleasure-ceptors fairly easily?
Hmm, I’d suggest that if pleasure-ceptors are easily contextually habituated, they might not be pleasure-ceptors per se.
(Pleasure is easily habituated; pain is not. This is unfortunate but seems adaptive, at least in the AE...)
My intuition is that if an organism did have dedicated pleasure-ceptors, it would probably immediately become its biggest failure-point (internal dynamics breaking down) and attack surface (target for others to exploit in order to manipulate behavior, which wouldn’t trigger fight/flight like most manipulations do).
Arguably, we do see both of these things happen to some degree with regard to “pseudo-pleasure-ceptors” in the pelvis(?).
Coordination being required for pleasure makes a lot of sense if the thing we care about is a fragile, high-dimensional thing, such as the robustness of a pattern over time in a hard-to-predict environment.
Not sure why that is unless you’re just defining things that way, which is fine. :)
BTW, this page says
Yeah, as well as with various other addictions.