For eusocial insects like bees in particular, evolution ought to incentivize them to have net positive lives as long as the hive is doing well overall.
There might be a way to salvage what you’re saying, but I think this stuff is tricky.
I don’t think there are objective facts about the net value/disvalue of experiences. (That doesn’t mean all judgments about the topic are equally reasonable, though, so we can still have a discussion about what indicates higher or lower welfare levels on some kind of scale, even if there’s no objective way to place the zero point.)
While I find it plausible that eusocial insects suffer less in certain situations than other animals would (because their genes want them to readily sacrifice themselves for the queen), I think it’s not obvious how to generally link these evolutionary considerations with welfare assessments for the individual:
Most animals don’t have a concept of making changes to their daily routines to avoid low-grade suffering (let alone think about something more drastic like suicide). Even if they were stuck in behavioral loops that involved lots of suffering, they might stay in those loops anyway because they lack the flexibility to consider alternatives. (For instance, I don’t know much about bee behavior, but bees are said to be busy, so maybe they’re restless/anxious all the time about their tasks, but it’s what helps them do well?)
I concede that it is natural/parsimonious to assume that welfare is positive (insofar as that’s a thing) when the individual is doing well in terms of evolutionary fitness because happiness is a motivating factor. However, suffering is a motivating force too, and sometimes suffering is adaptive (even though it often isn’t). In fact, I’d say suffering is more centrally a motivating force than happiness (see here) because we don’t pursue happiness agentically when we’re content in the moment.
Maybe the reason some people succeed a lot in life is that they have an inner drive that subjectively feels like a lot of pressure and not all that much fun? I’m thinking of someone like Musk, who has achieved things that would score highly on some natural-selection scoring function, but has often said that he doesn’t necessarily enjoy being himself, well-being-wise. (And this was before things started to get harder for him in terms of falling out with former friends or becoming a more polarizing or flat-out hated figure.) If people could play a game where they get to live Musk’s life, but it had to involve all the hard parts and not just the fun ones, would they do it for fun? If they’d be unsure/hesitant, then this example helps sketch out the hypothesis that the relationship between evolutionary success and hedonic well-being for the individual isn’t at all straightforward. (Some people maybe don’t want to experience life as someone they consider to lack integrity, but since this example is about a game/an experience machine and not real life, the idea is to try to abstract away this confounder.)
I agree this stuff is very tricky! And I appreciate the detailed reply.
Before I engage further, may I ask if you believe that suffering and pleasure intensity are comparable on the same axis? Iirc, I might’ve read you saying otherwise.
This is not meant as a “gotcha” question, but just to set the parameters of the debate/help decide whether we’re likely to have useful object-level cruxes.
I remember one time a good friend of mine made a crazy (from my perspective) claim about AI consciousness. I was about to debate him, but then remembered that he was an illusionist about experience. Which is a perfectly valid and logical position to hold, but does mean that it’s less likely we’d have useful object-level things to debate on that question, since any object-level intuition differences are downstream of or at least overshadowed by this major meta-level difference.
Before I engage further, may I ask if you believe that suffering and pleasure intensity are comparable on the same axis? Iirc, I might’ve read you saying otherwise.
I think they are not on the same axis. (Good that you asked!)
For one thing, I don’t think all valuable-to-us experiences are of the same type and “intensity” only makes some valuable experiences better, but not others. (I’m not too attached to this point; my view that positive and negative experiences aren’t on the same scale is also based on other considerations.) One of my favorite experiences (easily top ten experience types in my life) is being half asleep cozily in bed knowing that it’s way too early to wake up and I just get to sleep in. I think that experience is “pleasurable” in a way, or at least clearly positive/valuable-to-me, but it doesn’t benefit from added intensity and the point of the experience is more about “everything is just right” rather than “wow this feels so good and I want more of it.”
Sex or eating one’s favorite food has a compulsive element to it: there are arrows of volition pointing at the content of the experience and wanting more of it. By contrast, cozy half-sleep (or hugging one’s life partner in romantic love that is no longer firework-feelings-type love) feels good because the arrows of volition are taking time off. (Or maybe we can say that they loop around and signal that everything is perfect the way it is and our mind gets to rest.)
If all positive experiences resembled each other as “satisfied cravings” the way it works with sex and eating one’s favorite food, then I’d be a bit more open to the idea that positive and negative experiences are on the same scale. However, even then, I’d point out—and that point actually feels a lot stronger to me for compulsive pleasures than it does for “everything is right” types of positive experiences—that “the value of pleasures,” and the great lengths we sometimes go for them behaviorally, seems to be a bit of a trick of the mind, and that suffering arguably plays a more central role in addictive pleasure-seeking tendencies than pleasure itself does.
(The following is based on copy-pasted text snippets from stuff I wrote elsewhere non-publically:)
In Narnia, the witch hands one of the children a piece of candy so pleasurable to eat that the child betrays his siblings for the prospect of a second candy. The child felt internally conflicted during that episode: He would surely have walked through lava for a second piece of candy, but not without an intense sense of despair about how his motivational system had been broken by the evil witch.
We can distinguish between:
1. Walking into lava with reflective consistency.
2. Walking into lava with internal conflict.
By 1. I don’t mean that one would be walking into lava joyously. Even the most ardent personal hedonists are going to feel uneasy before they actually step into the lava. But the people to whom 1. applies endorse the parts of their psychology that make superpleasures viscerally appealing. By contrast, people to whom 2. applies would rather not feel compelled to pursue superpleasures when they lie behind a river of lava. People familiar with addiction can probably relate to the sense of “Why am I doing this?” that befalls someone when they find themselves going through great inconveniences to fuel their addiction.
So, my point is that it’s an added step, an extra decision, to consider pleasures valuable to the degree that experiencing them triggers our visceral and addictive sense of “omg I want more of that.” (People’s vulnerability to addiction also differs. Does that mean addiction-prone individuals experience stronger pleasures, or are their minds merely more susceptible to developing cravings towards certain pleasures? Is there even a difference here for functionalists? If there isn’t, this would illustrate that there’s something problematic about the idea of an objective scale on the value of experiences that’s properly and universally linked to correct human behavior in pleasure-suffering tradeoffs.) I think it’s a perfectly rational stance to never want to get addicted to pleasures enough to walk through lava for the prospect of intense and prolonged (think: centuries of bliss) pleasures. This forms a counterargument to the idea that we can simply measure/elicit via experiments how much a person is willing to trade off pleasure vs. pain behaviorally, to see how they compare on some objective scale.
So far, I spoke of “addictive pleasure-seeking.” I think there’s a second motivational mode where we pursue things not because we feel cravings in the moment, but because we have a sophisticated world model (unlike other animals) and have decided that there are things within that world model that we’d like to pursue even if they may not lead to us having the most pleasure. The interesting thing about that reflection-based (as opposed to cravings-/needs-based) mode of motivation is that it’s very open-ended. People are agentic to different degrees, and they set different types of goals for themselves. Some people don’t pursue personal hedonic pleasures, but they have long-term plans related to existentialist meaning, like the EA mission, or protecting/caring for loved ones. (We can imagine extreme examples where people voluntarily go to prison for a greater cause, disproving the notion that everyone is straightforwardly motivated by personal pleasure.)
There’s an inherent tension in the view that hedonism is the rational approach to living. Part of the appeal of hedonism is that we just want pleasure, but adopting an optimization mindset toward it leads to a kind of instrumentalization of everything “near term.” If you set the life goal of maximizing the number of your happy days, the rational way to go about your life probably implies treating the next decades as “instrumental only.” On a first approximation, the only thing that matters is optimizing the chances of obtaining indefinite life extension (potentially leading to more happy days). Through adopting an outcome-focused optimizing mindset, seemingly self-oriented concerns such as wanting to maximize the number of happiness moments turn into an almost other-regarding endeavor. After all, only one’s far-away future selves get to enjoy the benefits – which can feel essentially like living for someone else.
To be a good hedonist, someone has to disentangle the part of their brain that cares about short-term pleasure from the part of them that does long-term planning. In doing so, they now prove that they’re capable of caring about something other than their pleasure. It is now an open question whether they use this disentanglement capability for maximizing pleasure or for something else that motivates them to act on long-term plans (such as personal meaning like the EA mission, or protecting/caring for loved ones). Relatedly, even if a person decided that they wanted self-oriented happiness, it is an open question whether they go for the rationalist idea of wanting to maximize happy life years, or for something more holistic and down to earth like wanting to make some awesome meaningful memories with loved ones without obsessing over longevity, and considering life “well-lived” if one has finished one’s most important life projects, even if one only makes it into one’s late forties or fifties or sixties, or whatever. (The ending of “The Good Place” comes to mind for me, for those who’ve seen the series, though the people in there have lived longer lives compared to the world’s population at present.)
And, sure, we can say similar things about reducing suffering: it’s perfectly possible for people to give their own suffering comparatively little weight compared to things like achieving a mission they deem sacred. (But there’s always something that seems relevantly bad about suffering, because even in a mind that has accepted suffering as a necessary condition for achieving other goals, there are parts of the mind that brace against the suffering in the moment someone is suffering.) I think suffering is what matters by default/in the absence of other overriding considerations, but when someone decides for themselves that there are things that matter to them more than their own suffering, then that’s something we should definitely respect.
The thing with nonhuman animals like bees is that they lack the capacity to decide those things, which is why it’s under-defined how they would decide if they could think about it. Treating them the suffering-focused way seems safest/most parsimonious to me, but I don’t necessarily think that treating them with hedonist intuitions (and trying to guess at where they would place the hedonic zero point, which is only really a meaningful concept if we grant some of the premises of hedonist axiology) is contradicting something obvious that’s happening inside the bees. Personally, I find it “less parsimonious/less elegant,” but that’s a subjective judgment that’s probably influenced by idiosyncratic features of my psychology (perhaps because I’m particularly fond of “everything is right” types of positive experiences, and not adventure-seeking). I mostly just think “bee values” are under-defined on this topic and that there’s no “point of view of the universe.”
(Btw, for people who haven’t noticed, the Substack itself has more details on the eusociality argument.)