A few random thoughts from a researcher with a background in psychology:
One driver of the preference for LSTs or eudaimonia frameworks for SWB is an intuition that focusing our well-being concerns solely on happiness or affect would lead us to accept happiness wireheading as a complete and final solution, and that’s intuitively wrong for most people.
Because psychologists are empiricists, they don’t spend too much time worrying about whether affect, life satisfaction, or eudaimonia are more important in a philosophical or ethical sense. They are more concerned about how they can measure each of these factors, and how environmental (or behavioral or genetic) factors might be linked to SWB measures. To the extent there is psychological literature on the relative value of SWB measures, I think most of it is simply trying to justify that it is worth measuring and talking about eudaimonia at all, as eudaimonia is probably the least accepted of the three SWB measures.
Working out the relative importance of SWB measures seems to me to be solely a question of values, for moral philosophy and not psychology, so I am glad that you, as a moral philosopher, are considering the question!
Finally, a bit of an aside, but another area where I would like to see more moral philosophers, psychologists, and neuroscientists talking is the relative importance of positive vs. negative affect. From a neuropsychological point of view, positive and negative affect are qualitatively different. Often, for convenience, researchers measure a net difference between them, but I think there are very good empirical reasons to consider them incommensurable. All positive affect shares certain physical neuroscientific characteristics (almost always nucleus accumbens activity, for instance), but negative affect activates different systems. If these really are incommensurable, we again need to look to moral philosophers to think about which is more important. This could matter for questions in moral philosophy (e.g., prior existence vs. total view) and in EA particularly: a strong emphasis on the moral desirability of positive affect might lead us towards a total view (because more people means more total positive affect), whereas balancing negative and positive affect could lead us towards a prior existence view (fewer people means less negative affect but also less positive affect), and a strong focus on avoidance of negative affect could even lead to a preference for the extinction of sentient life.
On your last point about positive and negative affect, I’d also add that we don’t have good reason to believe they’re cardinally measurable, either. If we try to use people’s intuitively preferred tradeoffs, then there’s really no one-size-fits-all answer. Maybe we could ask people to judge relative intensities.
I also think trying to balance positive against negative affect won’t lead to a prior existence view, since that balance is too fragile: just a little higher and we’re net positive; just a little lower and we’re net negative. It will also depend on the population distribution and other factors that are morally irrelevant to the question of how the two should be balanced, some of which we can manipulate, e.g. by improving quality of life.
Just to flag: I’ve nearly finished another paper in which I explore whether measures of subjective states are cardinal and conclude they probably are (at least, on average). Stay tuned.
There are many parts to this topic and I’m not sure whether you’re denying (1) that subjective states are experienced in cardinal units, or (2) that, even if they are experienced in cardinal units, our measures are (for one reason or another) not cardinal. I think you mean the former. But we do think of affect as being experienced in cardinal units, otherwise we wouldn’t say things like “this will hurt you as much as it hurts me”. Asking people to state their preferences doesn’t solve the problem: what we are inquiring about are the intensities of sensations, not what you would choose, so asking about the latter doesn’t address the former.
But we do think of affect as being experienced in cardinal units, otherwise we wouldn’t say things like “this will hurt you as much as it hurts me”
I think this is merely a statement of ordinal ranking (which is of course compatible with a cardinal ranking). The issue is with statements like “X was 2x more intense than Y”. I’m skeptical that these can be grounded. We could take people’s intuitive judgements of relative intensities, but it’s not clear these are reliable and valid, or that they get at anything fundamental.
And even if they are reliable, they may well end up conflicting with most people’s (and animals’) intuitions about what kinds of tradeoffs they’d prefer to make in their own lives. Should moral value be exactly equal to the signed intensity? I guess we have more reason for this on an internalist account (I remember you recommended Hedonism Reconsidered to me).
If we look at brain activity, there won’t be any obviously correct cardinal measure to come out of it, since brain functions are very nonlinear. We can count how many neurons are firing in some region, but there’s no reason to believe intensity scales linearly with that number rather than with its square or square root or anything else.
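To make that point concrete, here is a minimal sketch in Python (the firing counts are invented for illustration, not real data or anyone’s actual proposal): any monotone transform of an activity count preserves the ordering of two states, but the cardinal claim, such as “X is twice as intense as Y”, depends entirely on which transform you pick.

```python
import math

# Hypothetical firing counts for two affective states (invented numbers).
neurons_x = 400
neurons_y = 100

# Three equally defensible monotone mappings from activity to "intensity".
candidate_scales = {
    "linear": lambda n: n,
    "square root": lambda n: math.sqrt(n),
    "square": lambda n: n ** 2,
}

for name, scale in candidate_scales.items():
    x, y = scale(neurons_x), scale(neurons_y)
    assert x > y  # every mapping agrees on the ordinal fact: X is more intense than Y
    print(f"{name}: implied X/Y intensity ratio = {x / y:g}")

# linear: implied X/Y intensity ratio = 4
# square root: implied X/Y intensity ratio = 2
# square: implied X/Y intensity ratio = 16
```

All three mappings fit the same ordinal data, which is the sense in which counting activity alone doesn’t hand us a cardinal scale.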
Looking forward to your next paper! :)

Yes, you’re right.

I will try a slightly different claim that links neuropsychology to moral philosophy, then. If you think maximizing well-being is the key aim of morality, and you cash this out as some balance of positive and negative affect, then I predict that, at least as an empirical matter, where you strike that balance will change your ideal number of people to populate the Earth and other environments with on the total view.
Maybe it’s too obvious: if we’re totally insensitive to negative affect, then adding any number of people who experience any level of positive affect is helpful. If we’re insensitive to positive affect, then the total view would lead to advocating the extinction of conscious life (would Schopenhauer almost have found himself endorsing that view if it were put to him?). And there would be points all along the range in the middle that would lead to varying conclusions about optimal population. It might go some way to making the total view seem less counterintuitive.
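To illustrate the prediction, here is a toy sketch in Python. Everything in it is an assumption made up for illustration: the per-person affect levels, the idea that positive affect per person gets diluted as the population grows, and the simple weighted “total view” sum. The only point is that shifting the weight on negative affect moves the value-maximizing population from “as many as possible” through finite optima down to zero.

```python
# Toy "total view" value function (all numbers and functional forms are
# assumptions made up for illustration, not empirical claims).
P0 = 10.0   # baseline positive affect per person
NEG = 2.0   # negative affect per person (held constant)
K = 1_000   # crowding scale: positive affect is diluted as population grows

def total_value(n_people: int, w_pos: float, w_neg: float) -> float:
    pos_per_person = P0 / (1 + n_people / K)  # assumed dilution of positive affect
    return n_people * (w_pos * pos_per_person - w_neg * NEG)

def optimal_population(w_pos: float, w_neg: float, cap: int = 100_000) -> int:
    # Brute-force search over population sizes; fine for a toy model.
    return max(range(cap + 1), key=lambda n: total_value(n, w_pos, w_neg))

for w_neg in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"weight on negative affect = {w_neg}: "
          f"optimal population = {optimal_population(1.0, w_neg)}")

# weight 0.0 -> the search cap (ignoring negative affect, more people is always better)
# weight 0.5 -> roughly 2,200 people
# weight 1.0 -> roughly 1,200 people
# weight 2.0 -> roughly 600 people
# weight 5.0 -> 0 people (the 'extinction of conscious life' corner)
```

The sketch just restates the qualitative point above: under a total view, where you set the positive/negative balance does most of the work in deciding how large a population looks optimal.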
Some interesting points here, thanks!

[thinking that] focusing our well-being concerns solely on happiness or affect would lead us to accept happiness wireheading as a complete and final solution, and that’s intuitively wrong for most people
Yes, I agree many people are against hedonism because of the (at least initially) counter-intuitive examples about wireheading and experience machines. As a purely sociological observation, I’ve been struck that social scientists I talk to are familiar with the objections to hedonism, but unfamiliar with those to desire theories and the objective list. Theorising doesn’t penetrate too deeply into the social sciences. As you say:
Because psychologists are empiricists, they don’t spend too much time worrying about whether affect, life satisfaction, or eudaimonia are more important in a philosophical or ethical sense
I spend quite a lot of time talking to social scientists and it used to surprise me that they seem to think theorising is pointless (“you philosophers never agree on anything”). I now realise this is largely a selection effect: people who like empirical work more than theoretical work become social scientists instead of philosophers. That social scientists don’t spend too much time theorising is, I think, a bit of a problem. The impetus to write the paper came from the fact that social scientists have developed this notion that life satisfaction is what really matters, and have been running with it for some decades, without really stopping to think about what that view would imply.
Right now, the field is focusing on doing its empirical work better (the “open science” movement). I think that social scientists do engage in what we call “theoretical” work, but it is generally theories about how things work empirically (e.g., if religion is unique in its ability to produce high eudaimonia for a large number of people, how can we conceptualize it as a eudaimonia-producing system? Or: which systems in the brain are responsible for producing the experience of pain, and how is physical pain related to other forms of emotional pain?).
A fair number of us are probably logical positivists to a degree, in that we don’t want to go near a theoretical question with no empirical implications. That is a real shame. But to me, it just seems like theoretical values questions are outside the domain of “social science” and in the domain of the “humanities”. And one good reason to continue specialising/compartmentalizing like that is that many social scientists are just crap at formulating a clearly articulated logical argument (try reading the theory in the latter half of a psychology paper’s introduction, where hypotheses are formulated from the theory, and compare the level of logical rigor and clarity with that of your philosophy papers). Collaborations between philosophers and psychologists are great (have you listened to Very Bad Wizards by Tamler Sommers and David Pizarro? I only cite a podcast because, honestly, I can’t think of actual research project collaborations) and should happen more, but it’s just difficult for me to even conceive of a psychologist trying to answer the question “what really matters more: eudaimonia or net positive and negative affect?”, because it seems to me that at that point they’re doing humanities, not science.
I suppose there’s a whole history to that too: B. F. Skinner’s ‘behavioral turn’ really focused the field on what we can measure, to the exclusion of anything that can’t be measured; it took a few decades just for the field to creep into thinking about things that could in principle be measured, or only indirectly measured (the ‘cognitive turn’), let alone thinking about entirely non-measurable values questions like “what ultimate moral end should we prefer?” Prior to Skinner, there were Freud, Jung, and related theorists who did do theory, but I am not sure it was very good or useful theory.
To focus what I am trying to say: is there something we could gain from social scientists (particularly moral psychologists) theorising more about values that is distinct from, or would add to, what philosophers (particularly moral philosophers) are already doing?