I’m pretty sympathetic to suffering ≈ displeasure + involuntary attention to the displeasure, or something similar.
I think wanting is downstream from the combination of displeasure + attention.
I think wanting, or at least the relevant kind here, just is involuntary attention effects, specifically motivational salience. Or, at least, motivational salience is a huge part of what it is. This is how Berridge often uses the terms.[1] Maybe a conscious ‘want’ is just when the effects on our attention are noticeable to us, e.g. captured by our model of our own attention (attention schema), or somehow make it into the global workspace. You can feel the pull of your attention, or resistance against your voluntary attention control. Maybe it also feels different from just strong sensory stimuli (bottom-up, stimulus-driven attention).
Well, when you do think about it, you still immediately want it to stop!
Where I might disagree with “involuntary attention to the displeasure” is that the attention effects could sometimes be to force your attention away from an unpleasant thought, rather than to focus on it. Unpleasant signals reinforce and bias attention towards actions and things that could relieve the unpleasantness, and/or disrupt your focus so that you will find something to relieve it. Sometimes the thing that works could just be forcing your attention away from the thing that seems unpleasant, and your attention will be biased to not think about unpleasant things. Other times, you can’t ignore it well enough, so your attention will force you towards addressing it. Maybe there’s some inherent bias towards focusing on the unpleasant thing.
But maybe suffering just is the kind of thing that can’t be ignored this way. Would we consider an unpleasant thought that’s easily ignored to be suffering?
Berridge and Robinson (2016) distinguish different kinds of wanting/desires, and equate one kind with motivational (incentive) salience:
Ordinarily, cognitive wanting and incentive salience ‘wanting’ go together, so that incentive salience can give heightened urgency to feelings of cognitive desire. But the two forms of wanting vs. ‘wanting’ can sometimes dissociate, so that incentive salience can occur either in opposition to a cognitive desire or even unconsciously in absence of any cognitive desire. Incentive salience ‘wanting’ in opposition to cognitive wanting, for example, occurs when a recovering addict has a genuine cognitive desire to abstain from taking drugs, but still ‘wants’ drugs, so relapses anyway when exposed to drug cues or during vivid imagery about them. Nonconscious ‘wants’ can be triggered in some circumstances by subliminal stimuli, even though the person remains unable to report any change in subjective feelings while motivation increases are revealed in their behavior (Childress et al., 2008; Winkielman, Berridge, & Wilbarger, 2005).
I think wanting, or at least the relevant kind here, just is involuntary attention effects, specifically motivational salience
I think you can have involuntary attention that isn’t particularly related to wanting anything (I’m not sure if you’re denying that). If your watch beeps once every 10 minutes in an otherwise-silent room, each beep will create involuntary attention—the orienting response, a.k.a. startle. But is it associated with wanting? Not necessarily. It depends on what the beep means to you. Maybe it beeps for no reason and is just an annoying distraction from something you’re trying to focus on. Or maybe it’s a reminder to do something you like doing, or something you dislike doing, or maybe it just signifies that you’re continuing to make progress and it has no action-item associated with it. Who knows.
Where I might disagree with “involuntary attention to the displeasure” is that the attention effects could sometimes be to force your attention away from an unpleasant thought, rather than to focus on it.
In my ontology, voluntary actions (both attention actions and motor actions) happen if and only if the idea of doing them is positive-valence, while involuntary actions (again both attention actions and motor actions) can happen regardless of their valence. In other words, if the reinforcement learning system is the reason that something is happening, it’s “voluntary”.
Orienting responses are involuntary (with both involuntary motor aspects and involuntary attention aspects). It doesn’t matter if orienting to a sudden loud sound has led to good things happening in the past, or bad things in the past. You’ll orient to a sudden loud sound either way. By the same token, paying attention to a headache is involuntary. You’re not doing it because doing similar things has worked out well for you in the past. Quite the contrary, paying attention to the headache is negative valence. If it was just reinforcement learning, you simply wouldn’t think about the headache ever, to a first approximation. Anyway, over the course of life experience, you learn habits / strategies that apply (voluntary) attention actions and motor actions towards not thinking about the headache. But those strategies may not work, because meanwhile the brainstem is sending involuntary attention signals that overrule them.
So for example, “ugh fields” are a strategy implemented via voluntary attention to preempt the possibility of triggering the unpleasant involuntary-attention process of anxious rumination.
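To make that ontology concrete, here’s a toy sketch (purely illustrative, with made-up names and numbers, not a claim about actual brain implementation): a learned, valence-based policy that only ever picks positive-valence attention actions, plus a hardwired involuntary channel that fires regardless of valence and overrules the learned policy.

```python
# Purely illustrative sketch of the voluntary/involuntary distinction above.
# All names and numbers are made up; nothing here is meant as a real brain model.

def voluntary_choice(candidate_actions, learned_valence):
    """'Voluntary' selection: pick the highest-valence action, and only act if it's positive."""
    best = max(candidate_actions, key=lambda a: learned_valence.get(a, 0.0))
    return best if learned_valence.get(best, 0.0) > 0 else None

def attend(candidate_actions, learned_valence, involuntary_signals):
    """Involuntary signals (orienting to a bang, a headache's pull) win whenever present,
    regardless of how negative the corresponding thought is."""
    if involuntary_signals:
        return max(involuntary_signals, key=involuntary_signals.get)
    return voluntary_choice(candidate_actions, learned_valence)

valence = {"think_about_work": 0.3, "think_about_headache": -0.8}

# With no involuntary pull, the learned policy never attends to the headache:
print(attend(["think_about_work", "think_about_headache"], valence, {}))
# A strong enough brainstem signal forces attention onto it anyway:
print(attend(["think_about_work", "think_about_headache"], valence,
             {"attend_to_headache": 0.9}))
```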
The thing you wrote is kinda confusing in my ontology. I’m concerned that you’re slipping into a mode where there’s a soul / homunculus “me” that gets manipulated by the exogenous pressures of reinforcement learning. If so, I think that’s a bad ontology—reinforcement learning is not an exogenous pressure on the “me” concept, it is part of how the “me” thing works and why it wants what it wants. Sorry if I’m misunderstanding.
I think you can have involuntary attention that isn’t particularly related to wanting anything (I’m not sure if you’re denying that).
I agree you can, but that’s not motivational salience. The examples you give of the watch beeping and a sudden loud sound are stimulus-driven or bottom-up salience, not motivational salience. There are apparently different underlying brain mechanisms. A summary from Kim et al., 2021:
Traditionally, the allocation of limited attentional resources had been thought to be governed by task goals (Wolfe, Cave, & Franzel, 1989) and physical salience (Theeuwes, 2010). A newer construct, selection history, challenges this dichotomy and suggests previous episodes of attentional orienting are capable of independently biasing attention in a manner that is neither top-down nor bottom-up (Awh, Belopolsky, & Theeuwes, 2012). One component of selection history is reward history. Via associative learning, initially neutral stimuli come to predict reward and thus acquire heightened attentional priority, consequently capturing attention even when non-salient and task-irrelevant (referred to as value-driven attentional capture; e.g., Anderson, Laurent, & Yantis, 2011).
I’d say there is some “innate” motivational salience, e.g. probably for innate drives, physical pains, innate fears and perhaps pleasant sensations, but then reinforcement (when it’s working typically) biases your systems for motivational salience and action towards things associated with those, to get more pleasure and less unpleasantness.
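To illustrate the distinction (a toy sketch only; the weights, names and numbers are my own illustrative assumptions, not from Kim et al. or Berridge): attentional priority can be modelled as the sum of bottom-up salience, top-down task relevance, and a learned value term built up from reward history, and the learned term can come to capture attention even for stimuli that are neither physically salient nor task-relevant.

```python
# Toy sketch of the three-way distinction (bottom-up salience, top-down task relevance,
# and reward/selection history). Names, weights and numbers are illustrative assumptions.

learned_value = {}  # stimulus -> value acquired through past reward pairings

def update_value(stimulus, reward, lr=0.2):
    """Simple associative (delta-rule) update of a stimulus's reward history."""
    v = learned_value.get(stimulus, 0.0)
    learned_value[stimulus] = v + lr * (reward - v)

def attentional_priority(stimulus, physical_salience, task_relevance):
    """Priority isn't just bottom-up + top-down; value-driven capture adds a third term."""
    return physical_salience + task_relevance + learned_value.get(stimulus, 0.0)

# Pair an initially neutral cue with reward many times...
for _ in range(20):
    update_value("drug_cue", reward=1.0)

# ...and it can now outcompete a task-relevant item despite being neither physically
# salient nor task-relevant (value-driven attentional capture).
print(attentional_priority("drug_cue", physical_salience=0.1, task_relevance=0.0))     # ~1.09
print(attentional_priority("spreadsheet", physical_salience=0.1, task_relevance=0.6))  # 0.7
```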
I’ll address two things you said in opposite order.
The thing you wrote is kinda confusing in my ontology. I’m concerned that you’re slipping into a mode where there’s a soul / homunculus “me” that gets manipulated by the exogenous pressures of reinforcement learning. If so, I think that’s a bad ontology—reinforcement learning is not an exogenous pressure on the “me” concept, it is part of how the “me” thing works and why it wants what it wants. Sorry if I’m misunderstanding.
I don’t have in mind anything like a soul / homunculus. I think it’s mostly a moral question, not an empirical one, to what extent we should consider the mechanisms for reinforcement to be a part of “you”, and to what extent your identity persists through reinforcement. Reinforcement basically rewires your brain and changes your desires. I definitely consider your desires, as motivational salience, which have been shaped by past reinforcement, to be part of “you” now and (in my view) morally important.
In my ontology, voluntary actions (both attention actions and motor actions) happen if and only if the idea of doing them is positive-valence, while involuntary actions (again both attention actions and motor actions) can happen regardless of their valence. In other words, if the reinforcement learning system is the reason that something is happening, it’s “voluntary”.
From my understanding of the cognitive (neuro)science literature and its use of terms, attentional and action biases/dispositions caused by reinforcement are not necessarily “voluntary”.
I think they use “voluntary”, “endogenous”, “top-down”, “task-driven/directed” and “goal-driven/directed” (roughly) interchangeably for a type of attentional mechanism. For example, you have a specific task in mind, and then things related to that task become salient and your actions are biased towards actions that support that task. This is what focusing/concentration is. But then other motivationally salient stimuli (pain, hunger, your phone, an attractive person) and intense stimuli or changes in background stimuli (a beeping watch, a sudden loud noise) can get in the way.
My impression is that there is indeed a distinct mechanism describable as voluntary/endogenous/top-down attention, which lets you focus and block irrelevant but otherwise motivationally salient stimuli. It might also recruit motivational salience towards relevant stimuli. It’s an executive function. And I’m inclined to reserve the term “voluntary” for executive functions.
In this way, we can say:
1. a drug addict’s behaviour is often (largely) involuntarily driven, specifically by high motivational salience, like cravings (and perhaps also dysfunction of top-down attention control), and
2. the distractibility of someone with ADHD by their phone or random websites, for example, is involuntary, driven by a dysfunction of top-down attention control, which lets task-irrelevant stimuli, including task-irrelevant motivationally salient stimuli, pull the person’s attention.
In both cases, reinforcement for motivational salience is partly the reason for the behaviour. But they seem less voluntary than when executive/top-down control works better.
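As a crude illustration of what I mean by “less voluntary” (again a toy sketch with made-up parameters, not a model from the literature): if top-down/executive control is treated as a knob that scales down task-irrelevant motivational salience, then the same craving or phone cue wins or loses depending on how strong that control is.

```python
# Crude illustration only; parameter values are made up.

def winning_focus(task_priority, distractor_salience, executive_control):
    """Executive control (0 to 1) scales down task-irrelevant salience before comparing."""
    effective_distractor = distractor_salience * (1.0 - executive_control)
    return "task" if task_priority >= effective_distractor else "distractor"

# Same motivational salience for the distractor, different top-down control:
print(winning_focus(task_priority=0.5, distractor_salience=0.9, executive_control=0.7))  # task
print(winning_focus(task_priority=0.5, distractor_salience=0.9, executive_control=0.1))  # distractor
```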
Motivational salience can also be manipulated in experiments so that it dissociates from remembered, predicted and actual reward (Baumgartner et al., 2021):
These hyper-reactive states of mesolimbic systems can even cause ‘wanting for what hurts’, such as causing a laboratory rat to compulsively seek out electric shocks repeatedly. In such cases, the ‘miswanted’ shock stimulus is remembered to hurt, predicted to hurt, and actually does hurt—yet is still positively sought as a target of incentive motivation.