The Harsanyi-Rawls debate: political philosophy as decision theory under uncertainty

This is a link post with a summary of the paper “The Harsanyi-Rawls debate: political philosophy as decision theory under uncertainty.” https://doi.org/10.1590/0100-6045.2021.V44N2.RP

Why this might interest EAs:

  1. Decision theory: there’s a discussion of decision theory under ignorance and Knightian uncertainty. Personally, I’ve read better treatments of this subject, but I like the way the paper connects it to social decision-making and political philosophy.

  2. Moral philosophy: the best part of the paper is the discussion of Harsanyi’s average utilitarianism and Rawls’s liberalism. Not the way it links each philosopher to a different criterion for decision theory under ignorance (there’s plenty of material on that), but how it argues that these criteria are appealing because of the specific contractualist counterfactual scenarios (the Impartial Observer and the Original Position) in which they are chosen[1]; i.e., they rely on different information sets.

  3. Shared intuitions are Schelling points: the paper conjectures that some of our intuitions (such as the appeal of the difference principle in the original position, or the practice of being highly risk-averse when deciding on behalf of others) derive from something like salient Schelling points we can converge on when shaping social practices. Again, this might explain the general appeal of egalitarian principles in some scenarios; but it also implies that these principles are not justifiably applicable in contexts very different from those used to justify them. This text reminds me to be very careful with philosophical intuitions.

Conflict of interest: I am the author – thus probably not the best person to talk about it.

Abstract

Social decisions are often made under great uncertainty—in situations where political principles, and even standard subjective expected utility theory, do not apply smoothly. In the first section, we argue that the core of this problem lies in decision theory itself—it is about how to act when we do not have an adequate representation of the context of the action and of its possible consequences. Thus, we distinguish two criteria to complement decision theory under ignorance—Laplace’s principle of insufficient reason and Wald’s maximin criterion. After that, we apply this analysis to political philosophy by contrasting Harsanyi’s and Rawls’s theories of justice, based respectively on Laplace’s principle of insufficient reason and Wald’s maximin rule—and we end up highlighting the virtues of Rawls’s principle on practical grounds (it is intuitively attractive because of its computational simplicity, thus providing a salient point for convergence), and we connect this argument to our moral intuitions and to social norms requiring prudence in decisions made for the sake of others.
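(A quick gloss of my own, not from the paper: given a utility $u(a,s)$ for action $a$ in state $s$, and no probability distribution over the set of states $S$, the two criteria rank actions by

$$\text{Laplace:}\quad \frac{1}{|S|}\sum_{s\in S}u(a,s) \qquad\qquad \text{Wald:}\quad \min_{s\in S}u(a,s),$$

each recommending the action that maximizes its score. Harsanyi applies the former across possible social positions; Rawls’s original position, on the paper’s reading, applies the latter.)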

Introduction

How should we act in social contexts of great uncertainty—when we find it hard to apply our standard political principles and face some sort of decision paralysis? Since an action aims at an end, the decision to act is irrational if we cannot justifiably believe that we can achieve the end—or it is self-defeating, if acting in accordance with the decision prevents us from reaching that end.

[…]

The subfield of decision theory that deals with scenarios where there is no probability distribution over possible outcomes is called decision under ignorance; there are four different criteria to complement decision theory under ignorance: Laplace’s principle of insufficient reason (a.k.a. the “principle of indifference”), Wald’s maximin criterion, Savage’s minimax regret, and the Hurwicz criterion. In the first half of this paper, we focus on decision theory: a) we provide an introduction to a canonical model of decision theory, subjective expected utility theory, and to the common obstacles this model faces concerning Knightian uncertainty; b) though we do not equate decision under ignorance with Knightian uncertainty (actually, our intent is to highlight their differences), we explain how Laplace’s principle and Wald’s maximin aim to overcome these obstacles; c) we argue that detaching the notion of risk from subjective probabilities is not a solution to the problem posed by uncertainty—we criticize Pritchard (2015) as an example of this failed proposal. In the second half of the text, we extrapolate this discussion to political philosophy: first, we aim to make clear that this problem is not restricted to consequentialist theories; second, we present and contrast two competing conceptions of theories of justice with contractualist grounds—i.e., Harsanyi’s utilitarianism and Rawls’s difference principle. We show that the former uses Laplace’s principle of indifference to cope with the uncertainty of the contractualist thought experiment, while the latter uses a version of Wald’s maximin rule—leading to the much-debated difference principle. We highlight how framing the original position as a social contract favors the difference principle, on the grounds that it better incentivizes ex post stable cooperation and that, thanks to its simplicity, it works as a salient point; this is consistent with common social practices regarding decisions made for the sake of others, such as norms requiring decision-makers to be prudent, which display high uncertainty-aversion.
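To make the four criteria concrete, here is a minimal sketch (mine, not the paper’s) in Python, with an invented payoff matrix; the numbers are chosen so that Laplace’s and Wald’s rules disagree, mirroring the Harsanyi-Rawls split:

```python
import numpy as np

# Toy payoff matrix: rows are actions, columns are states of the world.
# Under ignorance we have no probability distribution over the columns.
payoffs = np.array([
    [10.0,  9.0, -5.0],   # action A: high average, bad worst case
    [ 4.0,  4.0,  3.0],   # action B: modest but stable
    [ 7.0,  0.0,  1.0],   # action C: middling
])
actions = ["A", "B", "C"]

# Laplace's principle of insufficient reason: treat all states as
# equiprobable and maximize the resulting (uniform) expected payoff.
laplace = payoffs.mean(axis=1)

# Wald's maximin: evaluate each action by its worst-case payoff.
maximin = payoffs.min(axis=1)

# Savage's minimax regret: regret = best payoff attainable in a state
# minus the payoff actually received; pick the action whose maximum
# regret is smallest (negated here so that "larger is better").
regret = payoffs.max(axis=0) - payoffs
minimax_regret = -regret.max(axis=1)

# Hurwicz criterion: a weighted mix of best and worst case,
# with an optimism parameter alpha in [0, 1].
alpha = 0.5
hurwicz = alpha * payoffs.max(axis=1) + (1 - alpha) * payoffs.min(axis=1)

for name, scores in [("Laplace", laplace), ("Maximin", maximin),
                     ("Minimax regret", minimax_regret), ("Hurwicz", hurwicz)]:
    print(f"{name:15s} -> choose action {actions[int(np.argmax(scores))]}")
```

On this matrix, Laplace picks the high-average action A, while maximin and minimax regret pick the stable action B, and Hurwicz with alpha = 0.5 picks C; under ignorance, the choice of criterion does real work.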

[…]

Conclusion

First, we have seen that the problem of uncertainty is pervasive: we cannot escape being ignorant of some consequences of a decision—and a theory that presupposes such knowledge might be inapplicable. We argued that, in situations of social risk, the adoption of the difference principle, according to the maximin criterion, justifies the action to all parties involved; moreover, as noted by Harsanyi himself, this principle is easier to apply than the utilitarian principle because it has lower informational requirements—it is epistemically simpler to identify and avoid the worst outcomes. We showed how this reasoning may explain our moral intuitions and how it is consistent with social norms concerning risk allocation.

We must stress, though, the limitations of our argument: it does not show that maximin is a good criterion for decision theory in general; it does not extrapolate to individual decision-making, nor even to cases where the boundaries of a decision problem can be well defined. We only argued that, in uncertain social contexts, it provides a more acceptable justification for policies than utilitarianism does. It is a decision rule for coping with uncertainty, not a judgement procedure for reducing it; i.e., it is a policy for selecting actions in the face of uncertainty, not a procedure for precisifying our credences when we lack information—so it does not solve the problem of assigning probabilities to different possible states. Finally, we note that we ignored population ethics and intergenerational conflict—i.e., our argument explicitly appeals to the need for stable cooperation among present agents, not future ones.

Rawlsians may dislike this conventional, even naturalistic, account of a theory of justice; it seems to lack the normative ‘flavor’ we usually expect from arguments of principle. However, instead of thinking of this as a reduction of a normative theory of justice to a non-normative theory of conventions, we suggest one should see it as an argument about the conditions under which principles of justice can be applied: even in the absence of a common agreement on which precise norms should be chosen and followed, or on what the best conception of the good is, boundedly rational agents can converge at a meta-level, particularly if they know they need to cooperate with each other. Indeed, we dare to conclude by suggesting that this might be the main function of a normative theory—a theory about how agents should proceed: to provide some guidance for the cooperation of boundedly rational agents under uncertainty. If we could determine a cardinal utility function for each agent, and a corresponding precise probability distribution over outcomes, we would have no need for a normative theory of any kind; game theory would be enough to tell us what decisions would be observed.


[1] I.e., maybe too much ink has been spilled on theoretical arguments over these scenarios: Harsanyi’s principle of utility is a good way of thinking about which society you would like to live in, absent any information except the distribution of utilities (answer: the one with the highest ex ante general utility); the difference principle is what you would choose to regulate the distribution of resources in a society where you would be cooperating with others, given the need to justify this distribution every now and then (answer: ensure no one can complain that their bundle is too small—and that others have too much).