What is so wrong with the “dogmatic” solution to recklessness?

If we try to maximize expected utility with an unbounded utility function, we will sometimes be reckless: we will accept gambles with an arbitrarily small chance of success, provided the payoff is large enough. And it is not just expected utility maximizers who encounter this problem. Beckstead and Thomas have shown that any decision framework will be either reckless, timid (unwilling to take obviously good gambles), or non-transitive. This issue becomes important when considering the case for strong longtermism, which says that protecting the far future is overwhelmingly important because of the small probability that it contains an astronomically huge amount of value. It is also at the heart of the Pascal mugger problem, where someone threatens (or promises) to use alleged supernatural powers to do some astronomically huge amount of harm (or good) unless (or if) we give them our wallet.

But there is a loophole in these arguments, and it allows us to avoid the most problematic implications of recklessness in practice: we can choose to adopt prior probabilities which make it unlikely that our actions would have such large effects. The larger the potential effect of our action in some scenario, the less prior probability we assign to that scenario, in proportion to the size of the effect. We can give zero probability to scenarios that would allow us to influence infinite utility, removing all of the issues that infinite value introduces. I usually see this referred to as the “dogmatic” response to recklessness. We simply adopt extreme confidence that the Pascal mugger is lying to us, or mistaken. If they claim to be able to do infinite harm, then we say, with certainty, that they are lying or mistaken.
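To make this concrete, here is a minimal sketch (my own toy model; the 1/U² decay, the credence values, and the wallet cost are all illustrative assumptions, not anything from the literature). It compares a sceptic who assigns a fixed tiny credence to the mugger’s claim with a dogmatic sceptic whose credence shrinks faster than the claimed utility grows:

```python
# Toy comparison of a "flat" sceptic and a "dogmatic" sceptic facing
# a Pascal mugger. All numbers here are illustrative assumptions.

WALLET_COST = 100.0  # utility we lose by handing over the wallet

def expected_gain_flat(claimed_utility, credence=1e-9):
    """Fixed credence in the mugger's claim: as the claimed utility
    grows, the expected gain grows without bound."""
    return credence * claimed_utility - WALLET_COST

def expected_gain_dogmatic(claimed_utility, c=1e-3):
    """Dogmatic credence decaying like c / U^2: the expected gain is
    c / U, so bigger claims carry *less* expected weight, not more."""
    return (c / claimed_utility**2) * claimed_utility - WALLET_COST

for u in (1e6, 1e12, 1e100):
    print(f"claimed utility {u:.0e}: "
          f"flat {expected_gain_flat(u):+.3g}, "
          f"dogmatic {expected_gain_dogmatic(u):+.3g}")
```

For the flat sceptic, a sufficiently extravagant claim always wins eventually; for the dogmatic sceptic, the more extravagant the claim, the less it is worth taking seriously.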

It seems to me that, given results like Beckstead and Thomas’s, dogmatism is the most promising way to justify the obvious fact that it is not irrational to refuse to hand over your wallet to a Pascal mugger. (If anyone disagrees that this is an obvious fact, please get in touch, and be prepared to hand over lots of cash.)

It is true that penalising large utilities in our priors will not eliminate recklessness entirely. For example, if our credences decay slowly enough, some long-shot gambles will still look attractive. Also, although our prior might severely penalise the possibility of our actions having large effects, we may encounter evidence which causes us to update away from our initial scepticism. But I think a dogmatic prior would eliminate the most problematic forms of reckless behaviour. To convince a dogmatic decision maker to be reckless, you would now need to make an argument of the form:

Here is a good reason to believe that X has this particular small but non-tiny chance of leading to Y, and Y would be extremely good/bad, so you should do this reckless thing.

But a dogmatic decision maker would not be susceptible to arguments of the form:

We can’t rule out that X would lead to Y, so you surely can’t assign it that small a probability, and Y would be extremely good/bad, so you should do this reckless thing.

In other words: we avoid the Pascal mugger problem, as well as, I believe, the strongest form of longtermism. But we continue to allow other reckless conclusions that actually seem perfectly fine, like the expected-utility justification for voting in an election: there is a ~1/N chance of your vote changing the result in a close election, and an impact of order N on total utility if it does, where N is the number of voters, so the expected impact is of order 1.
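Here is a back-of-envelope version of that voting calculation (the per-person benefit is a placeholder parameter of my own; the point is only that the Ns cancel):

```python
# The voting argument from the paragraph above, spelled out.

def expected_impact(n_voters, per_person_benefit=1.0):
    p_decisive = 1.0 / n_voters                    # ~1/N in a close election
    total_benefit = per_person_benefit * n_voters  # impact scales with N
    return p_decisive * total_benefit              # ~1, independent of N

for n in (10_000, 1_000_000, 100_000_000):
    print(f"N = {n:>11,}: expected impact = {expected_impact(n):.2f}")
```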

Given how neat this solution to the problem seems to be, I am confused about why I don’t see it defended more often. I believe it is defended by Holden Karnofsky here. But apart from that, I typically see this solution raised and labelled ‘dogmatic’ (with obvious negative connotations), after which the discussion moves on. I’m not a philosopher though, and I would be interested to read anything anyone can point me to that discusses this in more depth.

In the next section, I’ll try to explain what I think is supposed to be wrong with the dogmatic approach. Then, in the following section, I’ll explain why I don’t actually find this problem to be that bad. I suggest that there could be a close analogy between dogmatism and Occam’s razor: both are assumptions that perhaps cannot be justified in a purely epistemological way, but which should nevertheless be adopted by practical decision makers.

What I think is supposed to be wrong with dogmatism

The obvious objection to dogmatism is that it seems to be a form of motivated reasoning. It tells us to adopt extreme confidence that certain claims are false, for apparently no good reason except that we dislike their consequences. Isn’t this the wrong way for a truth-seeker to behave?

Here is another way of looking at it: the dogmatic approach looks like it has things backwards. Epistemology should come first, and decision theory second. That is, first we should look at the world and form beliefs; then, given those beliefs, we should try to make good decisions. Proposing dogmatic priors as a solution to recklessness seems strange because the argument runs in the other direction: it starts from problems in decision theory and derives conclusions about epistemology from them, which feels wrong.

Holden Karnofsky’s blogpost sidesteps this criticism by attempting to provide a first-principles defence of dogmatic priors. In his account, the prior is not chosen because it resolves the Pascal mugger problem; that is merely a nice consequence. Instead he argues that a dogmatic prior should emerge naturally from our ‘life experience’:

Say that you’ve come to believe – based on life experience – in a “prior distribution” for the value of your actions, with a mean of zero and a standard deviation of 1.

(and he advocates taking a normal, or log-normal, distribution with these parameters).

He then argues that you can refuse the Pascal mugger on this basis.
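One standard way to cash out this kind of normal-prior adjustment is the conjugate normal-normal update. The sketch below is my own construction in that spirit; in particular, the assumption that the noise on an estimate scales with the size of the claim is mine, not Karnofsky’s:

```python
# Bayesian adjustment with a Normal(0, 1) prior over the value of an
# action, given a noisy estimate x of that value.

def posterior_mean(x, sigma_est, prior_mean=0.0, prior_sd=1.0):
    """Conjugate normal-normal update: precision-weighted average of
    the prior mean and the estimate x."""
    w_prior = 1.0 / prior_sd**2
    w_est = 1.0 / sigma_est**2
    return (w_prior * prior_mean + w_est * x) / (w_prior + w_est)

# If the noise on an estimate grows with the size of the claim
# (here, sigma_est = x, an assumption of this sketch), then wilder
# claims move the posterior *less*:
for x in (2.0, 1e3, 1e10):
    print(f"estimated value {x:g} -> "
          f"adjusted value {posterior_mean(x, sigma_est=x):.3g}")
```

With a prior standard deviation of 1, even a claimed value of 10^10 leaves the posterior essentially at zero, which is the sense in which the mugger can be refused.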

But I can see how this argument might not be convincing. Something seems wrong with using your experience of ordinary everyday past decisions to make such a confident judgement about the possibility of this mugger being a sorcerer from another dimension. Drawing these kinds of conclusions from your past life experience seems like it would lead you into an unjustified level of scepticism about ‘black swan’ events. It is also not clear to me how Karnofsky’s argument would handle the possibility of infinite value.

Personally, I think I’d like to make the dogmatic assumption a part of my true prior. That is, the probability distribution I adopt before I have looked at any evidence at all. It is one of my starting assumptions, and not derived from my life experience. But if I do that, then it does look like I might be open to the ‘motivated reasoning’ criticism described above.

Why I think dogmatism isn’t so bad: the analogy with Occam’s razor

In the last section I described how dogmatism feels wrong because its logic seems to go in the wrong direction: from decisions to epistemology, instead of from epistemology to decisions. But in this section I explain why I’m actually not so concerned about this, because I think all of our knowledge about the world ultimately depends on a similarly suspicious argument.

I’ll start by explicitly making the argument for dogmatic priors as best I can, and then I’ll discuss the problem of induction and Occam’s razor, and why the best defence of Occam’s razor probably takes a similar form.

An (outline of a possible) argument for dogmatism:

  • We should make decisions to maximize expected utility (see e.g. the von Neumann–Morgenstern theorem, or Savage’s axioms)

  • There should be no bound on our utility function, at least when we’re doing ethics (if an action affecting N individuals has utility X, the same action affecting 2N individuals should have utility 2X).

  • If there are actions available to us with infinite expected utility, then decision making under an expected utility framework becomes practically impossible (I’m sure you could write many papers on whether this claim is true or not, but here I’m just going to take it for granted).

  • We should therefore adopt dogmatic priors which penalise large utilities. This is the only way to rule out the possibility of actions with infinite expected utility. (Note: the penalising of large finite utilities comes for free, even though we only required that infinite expected value be removed from the theory, because we need to rule out St Petersburg paradox type scenarios; see the sketch after this list).

  • If dogmatic priors are a good model for the world, we make good decisions, and if they’re not, we were always doomed anyway.
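Here is a toy illustration of the fourth bullet (my own; the penalty function is arbitrary and the weights are left unnormalised for simplicity). In a St Petersburg-style gamble, outcome k pays 2^k with probability 2^-k, so every term contributes 1 and the expected value diverges; an extra dogmatic penalty that shrinks faster than the payoff grows makes the series converge:

```python
def partial_expected_value(n_terms, penalty=lambda payoff: 1.0):
    """Sum the first n_terms of a St Petersburg-style series: outcome k
    pays 2**k with probability 2**-k, optionally down-weighted by an
    extra dogmatic penalty on the payoff (left unnormalised here)."""
    total = 0.0
    for k in range(1, n_terms + 1):
        payoff = 2.0**k
        weight = 2.0**-k * penalty(payoff)
        total += weight * payoff
    return total

for n in (10, 100, 1000):
    plain = partial_expected_value(n)                      # grows like n
    dogged = partial_expected_value(n, lambda p: p**-0.5)  # converges
    print(f"{n:>4} terms: plain = {plain:6.1f}, dogmatic = {dogged:.4f}")
```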

This argument might seem unsatisfying, but I think there could be a close analogy between this argument and the basis for Occam’s razor, on which all our knowledge about the world ultimately rests.

The problem of induction concerns the apparent impossibility of learning anything at all about the world from experience. In Machine Learning, it manifests itself as the No Free Lunch theorem. Suppose we see a coin tossed 99 times and it lands heads every time. What can we say about the probability that the next toss will be heads instead of tails? If we adopt the maximum entropy prior over the possible coin toss results, where each of the results is equally likely, and then apply Bayes’ theorem to our observations, we can say nothing. The probability of heads or tails on the 100th toss is still 50/50.

In order to make inferences about the unknown from the known, we need to start by assuming that not all of the possible results are a priori equally likely. We start off believing, based on no evidence, that some possibilities are more likely than others. For example, we might typically assume that the coin has some constant unknown bias, p. But this might be too strong an assumption. What if the coin obeyed a rule where each toss was 99.9% likely to be the same as the immediately preceding toss? That seems a priori possible, and consistent with the observations, but would be ruled out by the constant-unknown-bias model. In general, the best way of describing the approach we actually take in these situations is that we pick a prior over the coin tosses which is consistent with Occam’s razor (and you might try to formalize this using Solomonoff induction). We assume that the world follows rules, and that simpler rules are a priori more likely to be true than more complex ones. Under this assumption, we can be justified in stating that the 100th coin toss is very likely to be heads.
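Here is the coin example made concrete (the Beta(1, 1)/Laplace formula is one standard way of formalising the ‘constant unknown bias’ model; the rest follows the text above):

```python
# Two priors, same data: 99 heads observed out of 99 tosses.

# (a) Maximum-entropy prior: all 2**100 length-100 sequences equally
#     likely. Conditioning on the first 99 tosses being heads leaves
#     exactly half the surviving sequences ending in heads:
p_max_entropy = 1 / 2          # we have learned nothing

# (b) Uniform prior over a constant unknown bias (a Beta(1, 1) prior).
#     Laplace's rule of succession: P(heads next | h heads in n tosses)
#     = (h + 1) / (n + 2).
n, h = 99, 99
p_constant_bias = (h + 1) / (n + 2)   # ~0.99

print(f"max-entropy prior:   P(heads on toss 100) = {p_max_entropy}")
print(f"constant-bias prior: P(heads on toss 100) = {p_constant_bias:.4f}")
```

The two priors see exactly the same evidence and disagree completely about toss 100; the difference lies entirely in what was assumed before seeing anything.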

Occam’s razor seems to be necessary in order to learn anything at all about the world from experience, but it remains an assumption. It is something we must take for granted. It is extremely tempting to try to justify Occam’s razor based on our past experience of the world, but that would be circular. We would be using induction to justify induction.

It is troubling to discover that all of our knowledge potentially rests on an unjustified assumption. It would certainly be convenient for us if the Occam’s razor principle were valid, but is there any other reason for believing in it? Or are we engaging in motivated reasoning here? How could we go about trying to defend ourselves against this charge? I think the best defence would actually look very similar to the defence of dogmatic priors given above:

  • We want to have external reasons for our decisions, so we need learning to be possible.

  • Learning is only possible if the Occam’s razor principle is true (I’m sure you could write many papers on whether this claim is true or not, or perhaps how to re-phrase it slightly so that it becomes true, but I’m just going to take it for granted).

  • We should therefore adopt priors consistent with Occam’s razor.

  • If Occam’s-razor-consistent priors are a good model for the world, we make good decisions, and if they’re not, we were always doomed anyway.

Hopefully the analogy with dogmatism is clear.

Conclusion

I’d be very interested to read a more in-depth discussion of whether the so-called dogmatic approach is a reasonable response to the problem of recklessness or not. To me, it seems like the best candidate for resolving some of the thorny issues associated with Pascal mugger type problems.

On the face of it, dogmatism looks like it might involve irrationally extreme confidence that the world happens to be arranged in a certain way. But I think there could actually be a close analogy between adopting dogmatic priors and adopting priors consistent with Occam’s razor, and everyone happily does the latter already without fretting too much.