My Meta-Ethics and Possible Implications for EA


EDIT: Following the lively discussion with David Moss, I have clarified my thinking on the points below. I still endorse the conclusions drawn here, but for somewhat different reasons than those outlined in this post; see the back-and-forth with David below. I am in the process of writing up my revised thinking.

EA Forum Preface

I’ve long had the impression that moral language may be on a similar footing to talk of personal identity: neither can be made fully precise, and identity/non-identity (rightness/wrongness) in certain thought experiments is simply under-determined. In this blog post, I look at how our use of moral language attains reference and then examine what implications this has for our more speculative uses of moral language, e.g. in population ethics.

Besides the topics touched on in the post, another point of interest to EA may be its implications for animal welfare. When we take our reactions as primitive and interpret all else as derivative evidence of future reactions, personal experience with animals (and their suffering) becomes particularly significant. Other evidence, from neuron counts, self-awareness experiments, and the like, only has force as mediated through our meta-reactions, such as the ‘call to universality’ discussed in the post.

The Roots and Shoots of Moral Language

Background

I’m writing this post to clarify and share my variant of hybrid expressivism and to explore what implications this meta-ethical position has for issues of interest to effective altruism (EA). I call my position empirical expressivism. In short, empirical expressivism agrees with emotivist and expressivist theories that our use of moral language has its roots in emotional reactions and expressions of (dis)approval. It then goes further, stressing the importance of generalizing from previous reactions to anticipate future ones, and the importance of meta-reactions, e.g. frustration at one’s own inability to empathize. Together, these last two points (generalizing reactions and meta-reactions) make it possible to reconcile the expressivist position with the truth-aptness characteristic of many uses of moral language. Empirical expressivism seeks to characterize moral language as used today; it does not rule out a future use of moral language with some more naturalist or otherwise different basis.

If much of this post is correct, it could have a number of implications which I will briefly enumerate from least to most speculative:

  • We should put more weight on intuitions which are closer to our lived experiences.

  • We should put more weight on our meta-intuitions.

  • We should take disagreement over everyday ethical judgements more seriously in a certain sense.

  • We should disregard much of population ethics.

Let me briefly describe the appeal of the expressivist and emotivist positions in meta-ethics. These positions approach the problem of understanding moral language as a descriptive one, beginning by examining how we use, and learn to use, moral language. The upside of this approach becomes clear when we compare it to a claim unmoored from actual usage. Imagine someone claims, “When you’re talking about ethics, better and worse, good and bad, etc., you are (or should be) doing a utilitarian calculus.” She is making a steep claim: even if utilitarianism is a valuable formalization of some uses of ethical language, she must make the case that no other formalization of our ethical language is plausible. In this essay, by contrast, we will begin by examining what is entailed by correct uses of moral language independent of any formalization, and see where that takes us.

When are ethical intuitions true?

Let’s start by discussing intuitions. In the hands of different authors, ‘intuition’ has been used to refer to different things. I will use ‘intuition’ to refer to cases in which we say that X seems worse than Y without having experienced X or Y. This may arise when discussing legal questions, personal experiences, or thought experiments. Note that the apparent reason for such intuitions may vary. Take, for example: “Is hearing an excruciatingly loud siren for a while worse than the worst toothache most people have in their lives?” When answering this, what goes through your mind? What goes through the typical non-EAer’s mind? There are many possibilities:

  • An immediate answer springs to mind for no apparent reason.

  • You think of times you’ve heard loud noises and compare them to toothaches you’ve had.

  • You think of other people’s reactions you’ve seen to toothaches or loud sounds.

  • You think of some abstract argument for summing pain experience over time and try to do a calculation.

All of the above considerations are similar in that they provide evidence as to what you and others would say had you experienced both X and Y. Or, seen from another angle, consider under what conditions you would agree that you were mistaken about the relation between X and Y. Surely if you one day experience both X and Y and find that the opposite relation holds from the one you expected, you would then agree that you had been mistaken. The truth condition for this kind of moral intuition thus entails the claim that you would make the same judgement after experiencing X and Y.

Now let’s focus on beliefs as used to describe generalizations of intuitions, in other words, claims of the form “For all experiences of kind X, Y: X is worse than Y.” Once again, the apparent reason why you hold a belief of this kind may vary, but the truth condition is a simple quantification of the truth condition for an intuition. Since the truth or falsity of such claims depends on later observations of yourself, I call my meta-ethical position ‘empirical’.

For the simplest instances of this pattern, e.g. “Tripping and falling is worse than not tripping”, there can be no disagreement. Indeed, we are only able to learn the meaning of ‘pain’, ‘bad’, ‘ouch’, etc. because we share a species-wide aversion to such common childhood experiences. For this reason, I call my position ‘expressivist’. In summary, we first learn the use of moral language as shared forms of expressing the unpleasant nature of certain experiences, and then, in generalizing these experiences to novel ones, we arrive at moral intuitions and beliefs.
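To make this explicit, here is a rough formalization of the truth conditions just described. The notation is mine and purely illustrative; read J_{X,Y}(X ≺ Y) as “having experienced both X and Y, I judge X worse than Y”:

```latex
% Truth condition for a single intuition (illustrative notation):
% "X seems worse than Y" is true iff, were I to experience both X and Y,
% I would still judge X worse than Y.
\mathrm{True}(X \prec Y) \iff J_{\{X,Y\}}(X \prec Y)

% A belief generalizing such intuitions is the corresponding quantification:
% for all experiences x of kind X and y of kind Y, x is worse than y.
\forall x \in X \;\; \forall y \in Y : \; \mathrm{True}(x \prec y)
```

The ‘empirical’ character of the position sits on the right-hand side: each instance is checkable, at least in principle, by later experience.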

From intuitions to principles

So far I’ve only addressed a small fraction of our many uses of moral language. Much of everyday moral language involves talk of principles, e.g. “Lying is bad”, “One must not kill”, or “An action’s moral value is determined by its consequences”. To see how these uses fit into the previous picture, let’s first talk about water. Before chemists identified H2O as a molecule there was talk of water, and people learned the meaning of the word exclusively ostensively. Although there was no chemical theory of water, one could still sensibly talk about laws or principles, e.g. “Is all ice water?” Perhaps most ice our imagined speaker had seen was whitish, but now she comes across some transparent ice, and after applying some heat it turns out that this ice is indeed also water. I argue that we are in the same position with respect to morality as this pre-scientific individual was with respect to water. Perhaps it will turn out that morality has some underlying naturalist character, but until then any discussion of the laws and principles of morality must be conducted by more mundane means. We can ‘melt’, but we cannot probe the ‘chemical structure’ of morality.

Melting, in our previous story, was a sort of reductive test: the unknown ice was reduced to a better-known form, water. In the same vein, we may reduce claims about moral principles to a better-known form, the comparative intuitions and beliefs discussed above. From this perspective, a purported moral principle is just an explanatory claim. Take lying, for example: we may read the claim “Lying is always bad” as the claim “In any situation in which I am lied to, I would have preferred not to be lied to.”<sup>1</sup> Another way of putting this is that our use of moral language is defined by the expression of certain shared preferences, and so any downstream use of moral language must naturally have some preference-utilitarian structure.
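On this reading, a purported principle is just the quantified form of the same schema from the previous section. A sketch of the lying case, again in my own illustrative notation:

```latex
% "Lying is always bad" as a universally quantified preference claim:
% for every situation s in which I am lied to, I would prefer the
% counterpart situation in which the lie is absent.
\forall s \; \big( \mathrm{LiedTo}(s) \rightarrow \mathrm{Pref}(s^{\neg\mathrm{lie}},\, s) \big)
% where s^{\neg lie} denotes s altered only by removing the lie.
```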

Aggregating across persons

The cases treated so far involved comparisons affecting a single person. Let’s look at how we learn the use of moral language in the multiple-person case. We see the reactions of groups to different events, e.g. two people experiencing emotional distress (say, a breakup) and a larger group experiencing similar distress (say, a death in a family). When confronted with such suffering we react sympathetically, experiencing sadness within ourselves. This sadness may be attributable to a conscious process of building empathy by imagining the others’ experience, or to an involuntary, immediate reaction resulting from our neural wiring. As children we learn to associate these processes with words like ‘sad’ or ‘terrible’, and eventually we associate the word ‘immoral’ with any action which leads to such consequences. From this perspective, we may probe our usage of these words to check whether it corresponds to an (additive) utilitarian calculus: Do our reactions to tragic events grow linearly stronger as the number of affected people rises? No. If we try to imagine the plight of the affected, are we able to hold in our minds the plight of many? Again, no. It thus seems that our use of moral language is distinctly non-utilitarian. Hence, if we are to justify utilitarianism, it will not be by formalizing our use of moral language as applied to actions.

Up to now, we’ve focused on prototypical examples of actions through which we learn the use of moral language, but we have neglected uses of moral language as applied to dispositions, thoughts, and emotions. I argue that these latter cases explain our interest in utilitarianism. At some point in our development as speakers and thinkers, we come to have meta-emotions: guilt over jealousy, regret over anger, helplessness over implicit bias, etc. Insofar as we feel and say that an inability to empathize with large-scale suffering (war, oppression, global poverty, and the like) is wrong, we also see impartial, closer-to-utilitarian reactions as something to strive for. Note that this still does not justify interpreting moral language in terms of a utilitarian calculus; rather, closeness to utilitarianism is an end in itself. Another common and important emotional reaction is what I’ll term the ‘call to universality’. This class of reaction encompasses our praise for self-sacrifice and the desire of many to align their actions and beliefs with some coherent narrative, usually religion, but more recently utilitarianism or, as the case may be, virtue ethics. Taken together, these impartiality and coherence meta-reactions lend some normative force to utilitarianism, not as a system for making judgements, but rather as a system to which we ought to align our reactions. Notice also that considerable disagreement exists: for those who do not feel guilt or regret over an inability to empathize with large-scale tragedies, utilitarianism does not have the same force.

Notice that the connection between these meta-reactions and utilitarianism is somewhat distant, and likely not precise enough to distinguish between average and total utilitarianism. If so, we should see deciding between average and total utilitarianism as independent of our use of moral language. Moreover, if we understand our moral intuitions as guesses about what we would believe having lived the relevant experiences, it follows that the further the subject of an intuition is from our lived experiences, the less likely the intuition is to be true. Hence, any argument which begins by appealing to an intuition about an alien world, e.g. the repugnant conclusion, should be discounted as unsubstantiated. Returning to our water analogy: in trying to do population ethics, we’re in the same position the pre-scientific person would find themselves in if they asked, “Will water take on a new form when heated to 10,000 degrees?” In both the population-ethics and water cases the question appears meaningful, but the answer is out of reach. We are limited both by our engineering inability (to simulate worlds, to heat water that far) and by definitional vagueness (of morality, of water).

Thoughts on EA

From the above, it follows that there is considerable individual variance in what force EA principles carry. I personally react more strongly to reading about distant causes, existential risks, etc. than others I know, so much of EA carries an emotional force for me in a way that it would not for them. It is perhaps correct to say that many of those who do not see the appeal of EA would see it if they were exposed to a broader set of experiences; but insofar as they do not feel themselves in the wrong for having a limited set of experiences, EA carries no force for them. In the end, these reflections have led me to a more calibrated understanding of the role of EA: EA is important not because it is the only right thing to do, but because our experiences have endowed us with a broader and richer sense of right, one with the potential to play an invaluable role in guiding mankind toward a brighter future.

Footnotes

<sup>1</sup>: Of course, someone saying “Lying is always bad” may intend to make any of a number of other claims, e.g. “If we could achieve the same end without lying, that would be better”, “A law against lying would be desirable”, and so on. My claim is merely that if we want to give motivating force to the statement “Lying is always bad” as an extension of our defining uses of moral language, then we must interpret it as I do.