Consequentialism and Cluelessness

TL;DR: Invisible high stakes don’t undermine ordinary expected value verdicts. And even if they did, that wouldn’t undermine consequentialism because the question of what fundamentally matters is epistemically prior to the question of whether we can reliably track it. Moreover, one cannot plausibly deny that invisible consequences still matter, in principle.

James Lenman’s ‘Consequentialism and Cluelessness’ presents an influential epistemic argument against consequentialism. Roughly:

  1. We’ve no idea what the long-term ramifications of any of our actions will be.

  2. So we’ve no idea what consequentialist reasons for action we have.

  3. But an adequate ethical theory must guide us.

So: (C) Consequentialism is not an adequate ethical theory.

I think each of those premises is probably false (especially the last two).

Indecipherable clues

1. Longtermist Clues

Though I won’t dwell on the point here, longtermists obviously believe that there are at least some high-impact actions that we can be reasonably confident will improve the long-term future. Examples might include (i) working to avert existential risk, (ii) moral circle expansion and other efforts to secure “moral progress” by improving society-wide ethics, and (iii) generally improving civilizational capacities (through education, economic growth, technological breakthroughs, etc.), in ways that don’t directly increase existential risks.

But in what follows, I’ll put such cases aside and focus on ordinary acts (e.g. saving a child’s life) with only short-term foreseeable effects, and unknowable long-term causal ramifications (for familiar reasons to do with the extreme fragility of who ends up being conceived, such that even tiny changes may presumably ripple out and completely transform the future population).

2. Defending the Expected Value Response

The obvious response to cluelessness worries is to move to expectational consequentialism: if we’ve no idea what the long-term consequences will be, then these “invisible” considerations are (given our evidence) simply silent—speaking neither for nor against any particular option. So the visible reasons will trivially win out. For example, saving a child’s life has an expected value of one life saved, and pointing to our long-term ignorance doesn’t change this.
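Schematically (the formalization here is mine, not Lenman’s), the thought is just:

$$EV(\text{save}) \;=\; \underbrace{+1 \text{ life}}_{\text{visible}} \;+\; \underbrace{\mathbb{E}[V_{\text{long-term}}]}_{=\,0 \text{ given symmetric evidence}} \;=\; +1 \text{ life}$$

Since our evidence favors neither positive nor negative long-term ramifications, the invisible term drops out of any comparison between our options, leaving the visible term to settle the verdict.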

Lenman is unimpressed with this response, but the four reasons he offers (on pp. 353–359) strike me as thoroughly confused.

First, he suggests that expectational consequentialists must rely upon some controversial probabilistic indifference principles (coming up with a principled way of partitioning the possibilities, and then assigning equal probability to each one), whereas it seems to me that no work at all is required because no competing reasons have been offered.

Perhaps the thought is that speculative long-term ramifications could be produced to count against the expected value of saving the child. (Like, “What if the child turns out to be an ancestor of future-Hitler?”) In response, the agent may say, “That’s no reason at all unless you can show that the future risk is greater if I perform this act than if I don’t.” Why is the burden on the consequentialist agent to refute such utterly baseless speculation? I don’t think I need to commit to any particular principle of indifference in order to say that I haven’t yet been presented with any compelling reason to revise my expected value estimate of +1 life saved.

[Update: Hilary Greaves offers the stronger response that some restricted principle of indifference seems clearly warranted in these cases, notwithstanding whatever problems might apply to a fully general such principle. Whereas I’ve argued that it’s surely defensible to take EV to be unaffected by simple cluelessness, Greaves argues that it’s plausibly rationally mandatory. It would seem completely crazy to have asymmetric expectations in such cases, after all.]

Second, Lenman assumes that, against the background of astronomical invisible stakes, the visible reason to save a life must be, for consequentialists, “extremely weak”—merely “a drop in the ocean”. But why the focus on relative stakes? In absolute terms, saving a life is incredibly important. The presence of even greater invisible stakes doesn’t change the absolute weight of this reason in the slightest.

Perhaps Lenman is thinking that the strength of a consequentialist reason must be proportionate to the action’s likelihood of serving the ultimate goal of maximizing overall value. Since the value of one life is vanishingly unlikely to sway the scales when comparing the long-term value of each option, to save one life can only be an “extremely weak” reason to pick one option over another. But the assumption here is simply false. The strength of a consequentialist reason is given by its associated (expected) value in absolute terms: the size of the drop, not the size of the ocean.
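To put the slogan formally (again, the notation is mine): the weight $w$ of the consequentialist reason to do $a$ rather than $b$ is given by the absolute difference in expected value, not by that difference as a fraction of the total stakes:

$$w \;\propto\; EV(a) - EV(b), \qquad \text{not} \qquad w \;\propto\; \frac{EV(a) - EV(b)}{\text{total long-term stakes}}$$

Lenman’s “drop in the ocean” imagery presupposes something like the second formulation; nothing in consequentialism commits us to it.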

Third, Lenman objects:

It is surely a sophistry to treat a zero expected value that reflects our knowledge that an act will lack significant consequences as parallel in significance to one that reflects our total ignorance of what such consequences (although we know they will be massive) will be.

Like the separateness of persons objection, this mistakenly assumes that anything significant must result in changes to our verdicts about acts, when often fitting attitudes are better suited to reflect such significance. Consider: we obviously should feel vastly more angst/ambivalence—and strongly wish that more information were available—in the “total uncertainty” case than in the “known zero” case. Why isn’t that a sufficient difference in “significance”? I don’t see any reason here to think that it calls for a different decision to be made (assuming that no feasible investigative options are available; in practice, of course, the astronomical stakes instead motivate at least attempting longtermist investigation).

Fourth and finally, Lenman raises the possibility that some (less significant) acts may avoid having radical causal ramifications, resulting in a non-uniform “scaling down” of our moral reasons, which would be awkward (absurdly yielding stronger consequentialist reasons to do more trivial acts). But again, as stressed in my second point above, there should be no “scaling down” at all—that suggestion rested on a total misunderstanding of the reasons posited by any sensible consequentialism.

Wrapping up: Why trust expected value?

Perhaps the heart of Lenman’s objection can be restated as a challenge: given astronomical invisible stakes, why trust visible expected value in the slightest? There’s vanishingly little reason to think that the EV-maximizing act is also the value-maximizing act, and surely what consequentialists ultimately care about is actual value rather than expected value.

But I think this misses the point of being guided by expected value. As Frank Jackson stressed in his paper on ‘Decision-Theoretic Consequentialism’, in certain risky cases we may know that a “safe” option will not maximize value, yet it may nonetheless maximize expected value (if the alternatives risk disaster), and is for that very reason the prudent and rational choice. In other cases, we may be required to give up a “sure thing” for a slight chance of securing a vastly better outcome—even if the outcome will then be almost certainly worse. So the point of being guided by expected value is not to increase our chance of doing the objectively best thing, nor to make a good result highly likely.
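A Jackson-style case may help to illustrate (the numbers are my own, purely for illustration): suppose drug A is certain to partially cure a patient (value 50), while of drugs B and C, one would fully cure (value 100) and the other would kill (value −1000), and your evidence leaves it equally likely which is which. Then:

$$EV(A) = 50, \qquad EV(B) = EV(C) = 0.5(100) + 0.5(-1000) = -450$$

You know for certain that prescribing A won’t maximize value (whichever of B or C is the full cure would do better), yet A maximizes expected value, and is plainly the rational prescription.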

It’s difficult to express precisely what the point is. But roughly speaking, it’s a way to promote value as best we can given the information available to us (balancing stakes and probabilities). And one important feature of maximizing expected value is that we cannot expect any subjectively-identifiable alternative to do better in the limit (that is, imagining like decisions being repeated a sufficient number of times, across different possible worlds if need be), at least for object-given reasons.[1] After all, if there were an identifiably better alternative, it would maximize expected value to follow it. And if Lenman’s critique were accurate, it would imply not that expected value is untrustworthy, but rather that (contrary to initial appearances) saving a life lacks positive expected value after all.
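One way to cash out the “in the limit” claim (my gloss, on the assumption that the relevant decisions are relevantly alike and independent): let $V_1, V_2, \ldots$ be the values realized across such decisions made under a fixed policy $\pi$. The law of large numbers then gives

$$\frac{1}{n}\sum_{i=1}^{n} V_i \;\longrightarrow\; \mathbb{E}[V \mid \pi] \quad \text{as } n \to \infty$$

and since the EV-maximizing policy maximizes the right-hand side by construction, no subjectively identifiable alternative can be expected to outperform it over the long run.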

Put this way, I don’t think it makes sense to question our trust in expected value. (It’s the practical analogue of asking, “Why believe in accordance with the evidence, when evidence can be misleading?” In either case, the answer to “Why be rational?” is just: it’s the best we can non-accidentally do!) If the question is instead asked, “Why think that saving a life has positive expected value?” then I just point to the prior section of this blog post. (In short: why not? It’s visibly positive, and invisible considerations can hardly be shown to count against it!)

I get that cluelessness in the face of massive invisible long-term stakes can be angst-inducing. It should make us strongly wish for more information, and motivate us to pursue longtermist investigation if at all possible. But if no such investigations prove feasible, we should not mistake this residual feeling of angst for a reason to doubt that we can still be rationally guided by the smaller-scale considerations that we do see. To undermine the latter, it is not enough for the skeptic to gesture at the deep unknown. Unknowns, as such, are not epistemically undermining (greedily gobbling up all else that is known). To undermine an expected value verdict, you need to show that some alternative verdict is epistemically superior. Proponents of the epistemic objection (like skeptics in many other contexts)[2] cannot do this.

3. The Possibility of Moral Cluelessness

Suppose I’m wrong about all of the above, and in fact we have no reason at all to think that saving a child’s life does more good than harm (or is positive in expectation). That would be a sad situation. But it hardly seems kosher to infer from this that doing good isn’t what matters. There’s no metaphysical guarantee that we’re in a position to fruitfully follow moral guidance.

It’s surely conceivable that some agents (in some possible worlds) may be irreparably lost on practical matters. Any agents in the benighted epistemic circumstances (of not having the slightest reason to think that any given action of theirs will be positive or negative on net) are surely amongst the strongest possible candidates for being in this deplorable position. So if we conclude (or stipulate) that we are in those benighted epistemic circumstances, we should similarly conclude that we are among the possible agents who are irreparably practically lost.

To suggest that we instead revise our account of what morally matters, merely to protect our presumed (but unearned) status as not totally at sea, strikes me as a transparently illegitimate use of “reflective equilibrium” methodology—akin to wishfully inferring that causal determinism must be false on the basis of incompatibilism plus a belief in free will.

Sometimes inferences are directionally constrained by considerations of epistemic priority: Against a backdrop of incompatibilism, you can infer “no free will” from causal determinism, but not “no causal determinism” from free will. The question whether causal determinism is true is epistemically prior to the question whether (given incompatibilism) we have free will. In a similar way, I suggest, the question of what morally matters is clearly epistemically prior to the question of whether we have epistemic access to what morally matters. To instead let the latter question settle the former strikes me as plainly perverse.

Ethics and What Matters

So what does matter? To prevent cluelessness from becoming a puzzle for everyone, Lenman suggests that non-consequentialist agents “should ordinarily simply not regard [invisible consequences] as of moral concern.” This seems crazy wrong to me.

Suppose you’re given a magic box from the Gods, and told only that if you open it, one of two things will happen: either (a) it will cause a future holocaust, or (b) it will prevent a future holocaust. Lenman’s view seems to be that you should regard this whole turn of events as a matter of indifference. I think it’s much more plausible that you should care greatly about which outcome eventuates, and so naturally feel immense angst over the whole thing. Given your ineradicable cluelessness about the outcomes, the box doesn’t affect what actions you should perform. But it surely is a matter of concern!

You should, for example, strongly wish that you had more info about which outcome would result from opening the box. Why would this be so, if invisible consequences were “simply not… of moral concern”? I think we should prefer that invisible consequences be rendered visible, precisely because (i) this would help us to bring about better ones, and (ii) we should care about that.

In a confusing passage, Lenman acknowledges that invisible consequences matter, just not morally:

Of course, the invisible consequences of action very plausibly matter too, but there is no clear reason to suppose this mattering to be a matter of moral significance any more than the consequences, visible or otherwise, of earthquakes or meteor impacts (although they may certainly matter enormously) need be matters of, in particular, moral concern. There is nothing particularly implausible here. It is simply to say, for example, that the crimes of Hitler, although they were a terrible thing, are not something we can sensibly raise in discussion of the moral failings or excellences of [someone who saved the life of Hitler’s distant ancestor].

This is a strange use of “moral significance”. Moral agents clearly ought to care about earthquakes, meteor strikes, and future genocidal dictators. (At a minimum, we ought to prefer that there be fewer such things, as part of our beneficent concern for others generally.) An agent who was truly indifferent to these things would not be a virtuous agent: their indifference reveals a callous disregard for future people. So it could certainly constitute a “moral failing” to fail to care about such harmful events.

On the other hand, if Lenman really just means to say that whether unforeseeable consequences eventuate as a matter of fact shouldn’t affect our assessment of a person’s “moral failings or excellences”, then this seems a truism that in no way threatens consequentialism. It’s a familiar point that many forms of agential assessment (e.g. rationality, virtue, etc.) are “internalist”—supervening on the intrinsic properties of the agent, and not on what happens in the external world, beyond their control. While I’ve long been frustrated that other consequentialists tend to downplay or neglect this point, and while I think that saying plausible things here requires going beyond “pure consequentialism” in some respects (we need to make additional claims about fitting attitudes, for example), these additional claims are by no means in conflict with the core claims of pure consequentialism. So there really isn’t any problem here—at least, none that can’t easily be fixed just by saying a bit more.

Conclusion

I’ve argued that the cluelessness objection is deeply misguided. Invisible high stakes don’t undermine ordinary expected value verdicts. And even if they did, that wouldn’t undermine consequentialism because the question of what fundamentally matters is epistemically prior to the question of whether we can reliably track it. Lenman’s non-consequentialist alternative proposal seems vicious, unless interpreted so narrowly that the relevant claim becomes trivial, and compatible with expectational consequentialism all along.[3]

Footnotes

[1] Cf. an evil demon threatening to blow up the world if you use expected value as a decision procedure. We can bracket such “state-given” reasons for present purposes, as they aren’t relevant to the question of whether EV is a rational decision-procedure. The evil demon case is simply one of Parfitian “rational irrationality”.

[2] I raise a similar objection to Sharon Street’s “moral lottery” objection to moral realism.

[3] Thanks to participants in the “Cluelessness” reading group at GPI last week, for helpful discussion.