The more I think about value monism, the more confused I get about why some people cling to it so strongly, even though our everyday experience seems to tell us that we are in fact not value monists. We care about many different values, and we also care about what values other people hold. When we ask people who are dying, most of them talk of friendship, love, and regrets. Does all of this count merely instrumentally toward one "super value" such as welfare, or are there some values we hold dear as ends in themselves?
I came up with a short experiment that might act as an intuition pump in this regard. I would be interested in your thoughts!
Thought experiment: What do we care about at the end of time?
We are close to the end of time. Humanity has gained sophisticated technologies that we today can only imagine. Still, only two very old humans remain alive: Alice and Bob. There also remain machines that can predict the effects of medicines and the resulting states of consciousness and lived experience.
It seems the last day for both Alice and Bob has come. Alice is terminally ill and in severe pain; Bob is simply old and feels that he will soon die a peaceful death. They have used up almost all of the medicine that was still around; only one dose of morphine remains.
The medical machines tell them that if Alice takes the morphine, her pain will be soothed, but the effect will be weaker than usual because her particular physiology dampens the drug. Bob, on the other hand, would have a really great time if he took it: his physiology is extremely receptive to morphine, and he would experience unimaginable heights of bliss. The machines are entirely sure that net happiness would be several times higher if Bob took the morphine. If Alice took it, the two would simply have one last conversation and both die peacefully.
How should Alice and Bob decide? What values are important in their decision?
I take the strongest argument for value monism to be something like this: if you have more than one value, you need to trade them off at some point. How, then, do you decide the exchange rate? Either there is no principled exchange rate, in which case there is no principled way to trade the values off, and hence no principled reason to invoke more than one value when making a decision anyway, which defeats the original intuition for recognizing more values. Or there is some commonality between the values that determines the exchange, in which case that commonality turns out to be the true intrinsic value, not either of the values being exchanged against one another. This dilemma applies whenever you trade off more than one value, so the principled solution will always tend toward finding one common value. There are of course various counterarguments, but hopefully this helps explain why people are drawn to the position.
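To make that second horn concrete, here is a toy formalization (the weights $w_i$ are purely illustrative, nothing the argument hinges on): if the exchange between values $v_1, \dots, v_n$ is fixed by constant weights, every decision is effectively made by the single quantity

$$V(x) = \sum_{i=1}^{n} w_i \, v_i(x)$$

and whatever grounds those weights, it is $V$ that does all the evaluative work; the individual $v_i$ start to look instrumental to it.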
I mean, I do get the appeal. But, as you say, it also has pretty huge drawbacks. I am curious how far people are willing to tie themselves to the mast and argue that value monism is actually a tenable position to take as a "life philosophy" despite its drawbacks. How far are you willing to defend your "principles" even when the situation really calls them into question? What would your reply to the thought experiment be?
The scenario given doesn't seem to pump the intuition for value pluralism so much as for prioritarianism. I suppose you could conceptualize prioritarianism as a sort of value pluralism, i.e. the value of helping those worse off plus the value of happiness, but you can also construct a single scale on which all that matters is happiness, while the amount it matters doesn't correspond exactly to the amount of happiness. I at least usually think of it as importantly distinct from most plural-value theories. I'm open to the possibility that this is just semantics, but it does seem to avoid some dilemmas that typical plural-value theories face (though not all).
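For concreteness, that single scale is usually written down roughly like this (a sketch; the particular function $f$ is illustrative):

$$V = \sum_{i} f(h_i), \qquad f \text{ increasing and strictly concave}$$

where $h_i$ is person $i$'s happiness. Only happiness enters the scale, but because $f$ is concave, a unit of happiness counts for more when it goes to someone worse off, which is what distinguishes this from a genuinely plural theory.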
More on the topic of what to do about counterintuitive implications: my approach is fairly controversial, in that I mostly say that if you can't bite the bullet, don't, but don't revise your theory to take the bullet away. In part this just seems like a more principled approach to me as a rule, but there are also important areas of ethics, like aggregation or population axiology, where basically no good answers exist, and this is pretty much provable. That is just the nature of ethics once you get really deep into the weeds. My impression is that most philosophers respond to this by not endorsing complete theories: they endorse certain specific principles that don't come with serious bullets, and put off other questions where they don't see a way to escape the bullets. I don't think this ultimately fixes the problem for topics like these, where the territory of possibilities has been scoured pretty thoroughly, but for what it's worth it seems like a more common approach.
Yeah, I think the intuitions it pumps really depend on the perspective and mindset of the reader. For me, it triggered my desire to exhibit camaraderie and friendship in the last moments of life. I could also adjust the thought experiment so that nobody is hurt and simply ask whether one of them should take the morphine or whether they should die "being there for each other". I really do believe that we are kidding ourselves when we say that we only value "welfare" narrowly construed. But I get that some people may just look at such situations with a different mindset, and thus different intuitions are triggered.
Regarding your approach, I think the important thing to keep in mind is that "the map is not the territory", "theories are not truth", and "every model is wrong but some are useful, some of the time". Thus, there is not necessarily a need to "update" a theory with every challenge one encounters, but it is still important to stay mindful of the limitations a given theory has and to consider alternative viewpoints, so that one doesn't run around with huge blind spots. Moral uncertainty can help here to some degree, but acknowledging that we simply value more things than welfare maximization also seems an important step to guard against oversimplification. Interestingly, Spencer Greenberg made a related (much more eloquent) post today.
I endorse moral uncertainty, but I think one should be careful about treating moral theories as vague, useful models of some feature of the world. I am not a utilitarian because I think there is some "ethics" out there in the world that being utilitarian approximates in many situations; I think the theory is the ethics, and if it isn't, the theory is wrong. What I take myself to be debating when I debate ethics isn't which model "works" best, but which one is actually what I mean by "ethics".
This position seems confusing to me. Either (1) ethics is something "out there", which we can try to learn about and uncover. Then we would tend to treat all our theories and models as approximations to some degree, because issues similar to those in science apply. Or (2) we take ethics as something we define in some way to suit our own goals. Then it is fairly arbitrary which models we come up with; whether they make sense depends mainly on the goals we have in mind.
This roughly mirrors the question of whether a moral theory is to be taken as a standard for judging ethics (1) or as a definition of ethics (2). Even if you opt for (2), the moral theory is still an instrument that should be treated as a useful means to an end-in-view. You want the definition to be convincing by demonstrating that it can actually get you somewhere desirable. Thus, it would be appropriate to acknowledge what this definition can and cannot do, so that people can make appropriate use of it. Whichever road you choose, you still come to the point where you need to debate which model "works" best. That's the beauty of philosophical and ethical discourse.
And turning back to the question of value monism, I think Spencer Greenberg has some interesting discussion for people who are moral anti-realists (people who fall in camp 2 above) and utilitarians. Maybe that's worth checking out.
Because my draft response was getting too long, I'm going to put it as a list of relevant arguments/points rather than in the conventional format; hopefully not much is lost in the process:
-Ethics does take things out there in the world as its subjects, but I don't take the comparison to empirical science to work in this case, because the methods of inquiry are more about discourse than empirical study. Empirical study comes in at the point of implementation, not philosophy. The strong version of this point is rather controversial, but I do endorse it; I will return to it a couple of bullets down to expand on it.
-Even in the empirical sciences, the idea of theories being just rough models is not always relevant. It comes from both uncertainty and the positive view that the actual real answer is far too complicated to model exactly. This is the difference between, say, economics and physics: theories in both will be tentative and accept that they are probably just approximations right now because of uncertainty, but in economics this is not just a matter of historical humility; it is also a positive belief about complexity in the world. Physics theories are both ways of getting good-enough-for-now answers and positive proposals for ways some aspect of reality might actually be, typically held with plurality but not majority credence.
-Fully defining what I mean by ethics is difficult, and of less interest to me than doing the ethics. Maybe this seems a bit strange if you think defining ethics is of supreme importance to doing it, but my feeling of disconnect between the two is probably part of why I'm an anti-realist. I'm not sure there's any definition I could plug into a machine to make an ethics-o-meter whose word I would simply be satisfied to take on an answer (this is where the stronger version of the first bullet comes in). This is related to Brian Tomasik's point that if moral realism were true, and it turned out that the true ethics was just torturing as many squirrels as you can, he would simply have learned that he didn't care about ethics and that it wasn't what he had been doing all along. I feel that part of my caring about ethics is constituted by my understanding of how I got there, more than by extrapolating from exact definitions. I know it when I do it, and it is a project that, as I understand it, I care about deeply right now.
-I don't think this answer quite fits any of Greenberg's proposals exactly, but he is definitely confused, and fair enough, as he is confused about a confusing topic. I just want to note that it is meta-ethics that is confusing, not anti-realism. I think he blows past moral realism somewhat quickly, expecting that what realists who subscribe to theories like these are doing is perfectly understandable, but I think it is still extremely weird. Most initial approaches one can take to moral realism either apparently collapse into normative ethical theories instead, or else require some extremely unlikely empirical assumption. To rescue realist theories, you need to start adopting ideas that are more complicated and that recognize the dilemmas. I originally wrote two example dialogues to get at this point, but they wound up going on too long for a comment, so I just want to posit that, in my experience, this is the case. The obvious first approaches either in some way posit one's normative theory to be what "value" is, despite disagreement from other people who are using the same words, or else explain the disagreement away as coming from some source of irrationality that, if spelled out as an empirical prediction, is probably a bad prediction. Meta-ethics always faces a foundational dilemma in spelling out what exactly moral disagreement is.
-Since this is getting long-winded and it seems like it's pretty much only us here at this point, I was wondering if you wanted to migrate this conversation somewhere; for instance, we could chat more via video call at some point. If not, I'm also fine with that; we could call it here or keep going in the comments. I just thought I would mention that I'm open to it.
Hey Devin,
first of all, thanks for engaging, and for the offer at the end. If you want to continue the discussion, feel free to reach out via PM.
I think there is some confusion about my position and also Spencer Greenberg's. As far as I know, we are both moral anti-realists, and neither of us is suggesting that moral realism is a tenable position. Without presuming to know much about Spencer, I took his stance in the post to be that he did not want to "argue" with realists there: even though he rejects their position, doing so requires a different type of argument than what he was after in that post. He wanted to draw attention to the fact that moral anti-realism and utilitarian value monism don't necessarily and "naturally" go well together. Many of the statements he heard from people in the EA community were confusing to him not because anti-realism is confusing, but because being an anti-realist while steadfastly holding on to value monism is, given that we empirically seem to value many more things than just one "super value" such as "welfare", and given that there is no inherent obligation that we "should" value only one "super value". He elaborates on this in another post as well.
My point was also mainly that we should see moral theories as instruments that can help us get more of what we value. They can help us reach some end-in-view and be evaluated in this regard; anything else is specious.
From my perspective, adopting classic utilitarianism can be very limiting, because it can oversimplify and obscure what we actually care about in a given situation. It is perhaps useful as a guide for considering what should be important, but I am trying not to delude myself that "welfare" must be the only thing I should care about. That would be akin to a premature closure of inquiry into the specific situation at hand. I cannot, and will never be able to, fully anticipate all relevant details and aspects of a real-world situation, so how can I be a priori certain that there is only one value I should care about?
If you are interested in this kind of position, feel free to check out: Ulrich, W. (2006). Critical Pragmatism: A New Approach to Professional and Business Ethics. In Interdisciplinary Yearbook for Business Ethics, Vol. 1. Peter Lang.
I don't think I find this a particularly difficult dilemma or a compelling objection to value monism. If everything is as you stipulate, then Bob should definitely take the morphine. If I were in Alice's position, I would hope that I wouldn't try to deprive Bob of such a special experience in order to experience a bit less pain.