# An Epistemic Defense of Rounding Down

This post is part of WIT’s CRAFT sequence. It examines one of the decision theories included in the Portfolio Builder Tool.

**Executive summary**

Expected value maximization (EVM) leads to problems of fanaticism, recommending that you ought to take gambles on actions that have very low probabilities of success if the potential outcomes would be extremely valuable. This has motivated some to adopt alternative decision procedures.

One common method for moderating the fanatical effects of EVM is to ignore very low probability outcomes, rounding them down to 0. Then, one maximizes EV across the remaining set of sufficiently probable outcomes.

We can distinguish between two types of low probabilities that could be candidates for rounding down. A decision-theoretic defense of rounding down states that we should (or are permitted to) round down low objective chances. An epistemic defense states that we should (or are permitted to) round down low subjective credences that reflect uncertainty about how the world really is.

Rounding down faces four key objections:

The choice of a threshold for rounding down (i.e., how low a probability must be before we round it to 0) is arbitrary.

It implies that normative principles change at some probability threshold, which is implausible.

It ignores important outcomes and thus leads to bad decisions.

It either gives no or bad advice about how to make decisions among options under the threshold.

Epistemic rounding down fares much better with respect to these four objections than does decision-theoretic rounding down.

The resolution or specificity of our evidence constrains our ability to distinguish between probabilistic hypotheses. Our evidence does not typically have enough resolution to give us determinate probabilities for very improbable outcomes. In such cases, we sometimes have good reasons for rounding them down to 0.

**1. Intro**

Expected value maximization is the most prominent and well-defended theory about how to make decisions under uncertainty. However, it famously leads to problems of fanaticism: it recommends pursuing actions that have extremely small probabilities of success when the payoffs, if successful, would be astronomically large. Because many people find these recommendations highly implausible, several solutions have been offered that retain many of the attractive features of EVM but rule out fanatical results.

One solution is to dismiss outcomes that have very low probabilities—in effect, rounding them down to 0—and then maximizing EV among the remaining set of sufficiently probable outcomes. This “truncated EVM” strategy yields more intuitive results about what one ought to do in paradigm cases where traditional EVM recommends fanaticism. It also retains many of the virtues of EVM, in that it provides a simple and mathematically tractable way of balancing probabilities and value.

However, rounding down faces four key objections.^{[1]} The first two suggest that rounding down will sometimes keep us from making correct decisions, and the second two present problems of arbitrariness:

Ignores important outcomes: events that have very low probabilities are sometimes important to consider when making decisions.

Disallows decisions under the threshold: every event with a probability below the threshold is ignored. Therefore, rounding down precludes us from making rational decisions about events under the threshold, sometimes leading to violations of Dominance.

Normative arbitrariness: rounding down implies that normative principles governing rational behavior change discontinuously at some cut-off of probability. This is unparsimonious and unmotivated.

Threshold arbitrariness: the choice of a threshold for how improbable an outcome must be to be rounded down is arbitrary.

I will argue that there are two interpretations and defenses of rounding down that fare differently with respect to these four objections. To make this case, we can distinguish between two kinds of uncertainty that arise in decision contexts. The first concerns the objective chances within a model of the world. For example, when I play roulette at a well-regulated casino, my model of the game says there is a 1/38 (2.63%) chance that the wheel will land on 12, an 18/38 (47.37%) chance it will land on red, and so on.

The second concerns subjective credences over models, that is, my subjective uncertainty about what the world is like. In some cases, I may know the objective chances predicted by various models but be unsure which model is true. For example, suppose I play roulette at an unregulated and unfamiliar casino. I’m unsure whether the wheel is fair or biased toward particular outcomes. In this latter case, the probabilities I assign will be a mixture of my subjective uncertainty over models and the objective uncertainties they entail. For example, suppose I think there’s a .5 chance the wheel is fair and a .5 chance it is biased entirely toward black. Now, my credence that the wheel will land on red is my credence in fair times the chance of red if fair plus my credence in biased times the chance of red if biased = .5(.4737) + .5(0) = .2368. In other cases, I might believe that a particular model is true but have uncertainty about the probabilities that it entails. For example, I might only know the probabilities up to a certain level of precision. My belief state here can be modeled as subjective uncertainty over models with varying exact probabilities, so we can treat these two kinds of uncertainty as the same.^{[2]}
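The mixture calculation can be sketched in a few lines of Python (the model names are placeholders; the numbers are the ones from the roulette example):

```python
# Credence over models: fair wheel vs. wheel biased entirely toward black.
models = {"fair": 0.5, "biased_black": 0.5}
# Objective chance of landing on red under each model.
chance_of_red = {"fair": 18 / 38, "biased_black": 0.0}

# Overall credence in red = sum over models of P(model) * P(red | model).
credence_red = sum(p * chance_of_red[m] for m, p in models.items())
print(round(credence_red, 4))  # 0.2368
```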

The list of paradigm cases where EVM recommends fanaticism includes a mixture of cases of low objective and low subjective probabilities. In the St. Petersburg game, you are offered a bet on the flips of a fair coin, and the game ends (and pays off) when the first heads is flipped. The prize is $2^{n}, where *n* is the number of flips it took to land heads. For example, if the coin lands heads on the first flip (an event with probability = .5), you earn $2. If it lands tails on the first and heads on the second (probability = .25), you earn $4. And so on, so that the expected payoff of the game is .5($2) + .25($4) + .125($8) + … = ∞. EVM recommends that you should pay any finite amount of money to play, even though you will probably win $4 or less. Here, the chances are *objective*. For example, you know that the probability of getting the first heads on the 100th toss is 1/2^{100}.^{[3]} The question is whether this extremely low probability event should be taken into account when deliberating about the bet.
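The arithmetic of the game is easy to check (a sketch only; the truncation point `max_flips` is an arbitrary illustration, not part of the game):

```python
# Expected value of the St. Petersburg game, truncated at max_flips:
# each term (1/2**n) * (2**n) contributes exactly $1, so the partial
# sums grow without bound as the truncation point increases.
def partial_ev(max_flips: int) -> float:
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_flips + 1))

print(partial_ev(10))   # 10.0
print(partial_ev(100))  # 100.0

# Yet the game usually pays little: P(win $4 or less) is the chance of
# heads on flip 1 or flip 2.
p_win_4_or_less = 0.5 + 0.25
print(p_win_4_or_less)  # 0.75
```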

In Pascal’s mugging, a person tells you that if you give them your wallet (which contains $1), they will pay you back $1 billion tomorrow. If you assign a credence of greater than 1 in a billion to the proposition that they are telling the truth^{[4]}, then EVM recommends that you should hand over your wallet. Here, the probabilities are *subjective*. It’s not as if you accept a model of the world that says that one out of every billion muggers is telling the truth. Instead, you think that there are some imaginable scenarios in which the mugger is telling the truth, and many, many more in which they are not. Your credence is a reflection of this subjective uncertainty. As Chappell (2023a) puts it, “Your one act doesn’t even make a definite probabilistic difference. It’s a different (more epistemic) sort of gamble.”

Most of the more realistic fanatical scenarios also involve subjective credences over models of the world. For example, should we direct all of our philanthropic donations toward developing technology that could create trillions of digital minds? Should we try to help bacteria or rocks on the small chance that panpsychism is true? Should you give money to avert an AI apocalypse? We’re not claiming that one should assign extremely low probabilities to these outcomes. The claim is that whatever probability we assign will involve subjective credences over different hypotheses about the causal structure of the world.

When evaluating rounding down, we can distinguish between ignoring events whose small probabilities reflect small known *objective chances* and those which reflect low *subjective credences*. Relatedly, we can distinguish between *decision-theoretic* and *epistemic* justifications for rounding down. A *decision-theoretic* defense says that small objective chances should not be taken into account in our decision-making. Even if the chances are known, the best way to get what we want is to ignore outcomes with sufficiently low probabilities. An *epistemic* defense says that there is something epistemically defective about sufficiently low probabilities and either (a) it is irrational to act on them or (b) rationality is more permissive with respect to them.

The four objections listed above are most acute for decision-theoretic defenses of rounding down. In contrast, there are epistemic defenses of rounding down that fare much better with respect to all four. The basic idea is this: in many cases, the very low probabilities we assign are subjective credences over models of the world. These very low credences are often *amorphous*, stemming from the fact that our evidence does not have high enough resolution to distinguish between nearby but distinct probabilities (Monton 2019).^{[5]} Because we lack adequate precision to make decisions involving these low probabilities, we are permitted to ignore them when making decisions (or, at least, rationality is more permissive in this murky zone).

If there is some way to determine the threshold at which our credences become amorphous, relative to our evidence, then the threshold for rounding down will not be arbitrary. Second, we need not posit that the principles of rational decision-making change at this threshold; rather, our ability to adhere to or be guided by these principles falls off at this threshold. Third, while there may be important events that have low probabilities, we will not be able to discern them and incorporate them into our decision-making in a rational way. This also prevents us from making rational comparisons between amorphous, low-probability events.

**2. Rounding down**

Rounding down says that one ought to ignore sufficiently improbable outcomes (or states) when evaluating an action. However, this is somewhat vague. How does one “ignore” these outcomes? It would be foolish to adjust one’s actual credence down to 0 (judging the improbable to be impossible). We need some process for “*treating* the probabilities as zero *for purposes of making the decision at hand*” (Smith 2014, 478). A solution is to construct a bet (a probability distribution over states) that is interchangeable with the one at hand but which assigns probability 0 to the sufficiently low probability outcomes and (somehow) redistributes probability across the other states. Then, one evaluates what maximizes expected utility across these substitute bets.

Kosonen (ms) describes three families of strategies for rounding down.^{[6]} Naïve rounding down constructs the substitute bet by conditionalizing on the supposition that some outcome with probability greater than the threshold value occurred (Smith 2014, 478). As Kosonen points out, this strategy depends on a strategy for individuating outcomes. If we specify them finely enough, then every outcome will be sufficiently improbable.^{[7]}

Stochastic and tail discounting take as their target a slightly different probability: the probability of getting one of the extremely good or bad outcomes. Typically, the EV of a lottery is calculated as a weighted average of the values of its outcomes. Equivalently, one can first rank all of the possible outcomes by value and build the EV up from increments: take the utility of the worst-case positive outcome as a baseline, add the probability of getting at least the second-worst positive outcome multiplied by the difference in value between the two outcomes, and so on up the ranking. Then do the same for the negative values and sum their expectations.

Stochastic discounting states that you do this until the probability of jumping up to the next, more extreme level of utility falls below the threshold. Tail discounting works similarly, in that outcomes are ranked by their values and then the extreme tails of the distribution are ignored. The result is that “the greatest positive and negative utilities (whose utility levels have negligible cumulative probability) have been replaced with the greatest positive or negative utility whose utility level has a non-negligible cumulative probability (respectively for positive and negative utilities)” (Kosonen ms, 21). The chief difference between these two approaches to rounding down is that, in the first, all low-probability states are ignored and, in the second, states of extreme value and low probability are ignored.^{[8]}
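Here is a minimal sketch of stochastic discounting in this tail-sum formulation, restricted to non-negative utilities for simplicity (the lottery and threshold are hypothetical, and this is an illustration of the idea rather than Kosonen's exact formalism):

```python
def stochastic_discount_ev(lottery, threshold):
    """Tail-sum expected value, ignoring utility increments whose
    'jump-up' probability falls below the threshold.

    lottery: list of (probability, utility) pairs, utilities >= 0.
    """
    outcomes = sorted(lottery, key=lambda pu: pu[1])  # rank by value
    ev, prev_u = 0.0, 0.0
    for i, (_, u) in enumerate(outcomes):
        # Probability of reaching at least this utility level.
        tail_p = sum(p for p, _ in outcomes[i:])
        if tail_p < threshold:
            break  # discount: ignore the remaining, more extreme increments
        ev += tail_p * (u - prev_u)
        prev_u = u
    return ev

# A Pascal's-mugging-like lottery: almost surely nothing, with a tiny
# chance of an astronomical payoff.
lottery = [(1 - 1e-9, 0.0), (1e-9, 1e12)]
print(stochastic_discount_ev(lottery, threshold=1e-8))           # 0.0
print(round(stochastic_discount_ev(lottery, threshold=0.0), 6))  # 1000.0
```

With any threshold above the 1-in-a-billion jump-up probability, the astronomical increment is ignored entirely; with no threshold, the ordinary expected value is recovered.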

Rounding down yields intuitively plausible results in paradigm cases in which EVM recommends fanaticism. If the probability of the mugger paying you back falls below one’s threshold—say, 1 in a billion—then all of the states you consider live possibilities are ones where you lose your wallet’s contents and you should not give the mugger your wallet. In the St. Petersburg game, an agent who sets his threshold at 1/10^{30} will value the game at $99 (Schwitzgebel 2017, Monton 2019).
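The $99 valuation can be verified directly: truncating the St. Petersburg sum at the first outcome whose probability falls below 1/10^30 keeps 99 terms worth $1 each (a sketch of the arithmetic only):

```python
threshold = 1 / 10 ** 30

value = 0.0
n = 1
while 0.5 ** n >= threshold:        # keep outcomes at or above the threshold
    value += (0.5 ** n) * (2 ** n)  # each kept term contributes exactly $1
    n += 1

print(value)  # 99.0
```

The probability of the 99th outcome (about 1.6 × 10^-30) is just above the threshold; the 100th (about 7.9 × 10^-31) is just below it.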

## 3. Decision-theoretic rounding down

Suppose that one has good epistemic reasons to believe a particular hypothesis about the world’s structure and the probabilities that it entails. A decision-theoretic defense of rounding down states that even in these scenarios, one ought to (or is permitted to) ignore extremely low probability outcomes when determining the expected value of the bet.

The most compelling decision-theoretic reason for rounding down is that rational decision-makers are often sensitive to more than just the *expected* value of their action. They might also care about the *probability* of success or what will *likely* happen. For example, consider actions with the following probabilities and payoffs:

A: {.001, 1 million; .999, −10}

B: {1, 990}

A has a slightly higher expected utility than B (990.01 vs. 990), but differs in several important respects. Of note here, what will *probably* happen if you do A is that you will incur a loss, whereas you will certainly win something if you do B. If you were able to take bets A and B millions of times, then it would make sense to pick A, for the many losses could be compensated for by the rare gains. However, if you can only take the bet once, then there is no such promise of compensation.
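A quick check of the comparison (numbers as given above):

```python
# Each bet is a list of (probability, payoff) pairs.
A = [(0.001, 1_000_000), (0.999, -10)]
B = [(1.0, 990)]

def ev(bet):
    return sum(p * x for p, x in bet)

def p_loss(bet):
    return sum(p for p, x in bet if x < 0)

print(round(ev(A), 2), ev(B))  # 990.01 990.0
print(p_loss(A), p_loss(B))    # 0.999 0
```

A narrowly wins on expected value, but carries a 99.9% chance of a loss that B avoids entirely.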

In light of this, it might be rational to be sensitive only to things that have some significant chance of occurring.^{[9]} As Monton (2019, 14) puts it:

Maximizing expected utility can put one in a situation where one has a high probability of having one’s life actually go badly. Because one only lives once, one has good reason to avoid choosing an action where one has a high probability of having one’s life go badly, regardless of whether or not the action maximizes expected utility.

Rounding down is one strategy for narrowing one’s focus to those outcomes or states that are sufficiently probable to be action-relevant.^{[10]}

Rounding down might play a role in a comprehensive strategy for investment by placing some threshold on the probability of loss that you’re willing to accept. There is an interesting parallel between the rounding down strategy and p-value significance testing in science. In significance testing, we test a null hypothesis, H, against some data. We define a region of significance, a set of the least probable outcomes for us to observe if H were true. For a significance level p = .05, the region of significance encompasses the least probable 5% of outcomes. If we observe something in the region of significance, we reject H. As a result, we expect to reject true hypotheses with probability p, i.e., 5% of the time.

Defenders of significance testing argue that this is tolerable when we look at the general strategy that involves them. As Neyman and Pearson (1933, 290-291) famously put it: “Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behavior with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong.” One’s rounding down threshold is like a p-value which sets a threshold of likeliness one is willing to consider. By dismissing possible outcomes below this threshold, you know antecedently that you are leaving some potential value (a good outcome, a true hypothesis) on the table. But that might be worth it to you if the general strategy ensures that you get things right most of the time.
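The long-run guarantee can be illustrated with a small simulation (a hypothetical setup: the null hypothesis is a fair coin, and the rejection region collects the most extreme outcomes, which here has probability roughly .057 rather than exactly .05):

```python
import random

random.seed(0)

# Null hypothesis H: the coin is fair. Reject H when the number of
# heads in 100 flips is extreme; |heads - 50| >= 10 has probability
# of roughly .057 under H, playing the role of the significance level.
def rejects_null(flips=100):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    return abs(heads - 50) >= 10

trials = 10_000
false_rejection_rate = sum(rejects_null() for _ in range(trials)) / trials
print(false_rejection_rate)  # roughly 0.057: a true H is rejected ~6% of the time
```

The strategy knowingly rejects some true hypotheses, but at a controlled, predictable rate.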

**4. Objections to decision-theoretic rounding down**

The decision-theoretic defense of rounding down faces four key objections.^{[11]} The first two claim that low-probability events can be action-relevant. The latter two point to problems with a normative decision principle that distinguishes between probabilities above and below a threshold.

### 4.1. Objection 1: Ignores important outcomes

Chappell (2023a) presents the following objection. Suppose that one sets the threshold for rounding down at 1 in 100 million. A killer asteroid is on track to end all life on Earth. Fortunately, a billion distinct, moderately costly actions could each independently reduce the risk of extinction by 1 in a billion. For example, suppose that a billion anti-asteroid missiles could be constructed, each of which has a tiny chance of hitting the asteroid but which would knock it off course if it does (Kosonen 2023). Clearly, it would be worth it for these actions to be performed. However, for each individual, the truncated expected utility of acting is negative (equal to the moderate cost) and no one will act.

Chappell’s example purports to show that rounding down can ignore outcomes that are decision-theoretically relevant. We have a model that states that each person’s action is a possible cause of the asteroid’s diversion, that each has a low objective chance of diverting it, and that these probabilities are independent across agents. Hence, rounding down leads to collective defeat.^{[12]}
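A sketch of the structure of the case (the value of survival and the per-agent cost are hypothetical stand-ins; only the 1-in-a-billion chance and the 1-in-100-million threshold come from the example):

```python
VALUE_OF_SURVIVAL = 1e18   # hypothetical utility of averting the asteroid
COST = 100.0               # hypothetical moderate cost of firing one missile
P_SUCCESS = 1e-9           # each action's chance of diverting the asteroid
THRESHOLD = 1e-8           # rounding-down threshold (1 in 100 million)

# Truncated EU for one agent: the success state is below the threshold,
# so it is rounded to zero and only the cost remains.
p_counted = P_SUCCESS if P_SUCCESS >= THRESHOLD else 0.0
truncated_eu = p_counted * VALUE_OF_SURVIVAL - COST
print(truncated_eu)  # -100.0: no individual acts

# Untruncated EU per agent is hugely positive, and if all billion
# agents acted, the asteroid would very likely be diverted.
print(P_SUCCESS * VALUE_OF_SURVIVAL - COST)   # 999999900.0
print(1 - (1 - P_SUCCESS) ** 1_000_000_000)   # ~0.63 chance of diversion
```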

Chappell contrasts this case with Pascal’s mugging, where he thinks rounding down is permissible. Pascal’s mugging involves low *subjective* probabilities rather than low objective chances in a known causal model. You are uncertain whether your action is a possible cause of a huge payoff. Indeed, your credence that the mugger will pay you back should decrease as the amount of money promised gets bigger.^{[13]} Hence, the low probability of being repaid reflects your belief that you are almost certainly in a world where such repayment is impossible and thus, it should not be taken into consideration when acting.

### 4.2. Objection 2: Neglects decisions under the threshold

Let us continue to suppose that agents set their rounding-down threshold at 1 in 100 million. We can now construct bets where one option is obviously better than the other, but an agent who rounds down will be indifferent between the two options.

First, suppose that we offer the agent two choices. In each, they will purchase and fire an anti-asteroid missile. In option A, if the missile is successful (with probability 1 in a billion), they will destroy the asteroid and humanity will give them $1 billion as a reward. In option B, if the missile is successful (with probability 1 in a billion), they will destroy the asteroid but get no reward. If the missile fails (probability 1 – 1 in a billion), then the payoffs of A and B are identical.

Clearly, the agent should prefer A to B. The probabilities of success and failure are the same in A and B, but the payoff of A if the missile succeeds is much higher than for B. Option A *dominates* option B; it has at least as high a payoff in every state of the world and a higher payoff in at least one state. Very plausibly, it is irrational to select a dominated option. However, since our agent has rounded down the probability of success to 0, the state of the world where A has a higher payoff is irrelevant to them and they will be indifferent between the two actions.
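The indifference result can be made concrete (the missile cost is a hypothetical stand-in; the $1 billion reward and the probabilities are from the example):

```python
THRESHOLD = 1e-8      # round down probabilities below 1 in 100 million
P_HIT = 1e-9          # missile success probability
COST = 1000.0         # hypothetical cost of buying and firing a missile

# (probability, payoff) pairs, listed as (success state, failure state).
option_A = [(P_HIT, 1e9 - COST), (1 - P_HIT, -COST)]  # $1 billion reward
option_B = [(P_HIT, -COST), (1 - P_HIT, -COST)]       # no reward

def truncated_ev(bet, threshold=THRESHOLD):
    # Sub-threshold states are treated as probability 0 and dropped.
    return sum(p * x for p, x in bet if p >= threshold)

print(truncated_ev(option_A) == truncated_ev(option_B))  # True: indifferent

# Dominance reasoning looks at payoffs state by state, ignoring the
# probabilities: A pays at least as much in every state, more in one.
dominates = (all(a >= b for (_, a), (_, b) in zip(option_A, option_B))
             and any(a > b for (_, a), (_, b) in zip(option_A, option_B)))
print(dominates)  # True: A dominates B
```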

One can fix this problem by pairing rounding down with a commitment to dominance reasoning: round down small probabilities, but if you have to compare options under the threshold, don’t pick one that is dominated.^{[14]} Notice that assessing dominance does not require us to delve into the probabilities; we can look at the payoffs alone.

However, this strategy would have to be supplemented to deal with the following kind of case. Suppose that an agent is offered one free missile launcher^{[15]} and they can choose between a model that has a 1 in 2 billion success rate and one that has a 1 in 1 billion success rate. The payoffs of averting the asteroid and the costs of the missile launcher are the same in both cases. Clearly, they should choose the latter. However, since our agent has rounded both of these probabilities down to 0, there is nothing that distinguishes them. In order to use something akin to dominance reasoning (here, stochastic dominance), the agent would have to consult the different probabilities of success. If rounding down loses this information—making the agent genuinely unable to distinguish between the two—then this move is unavailable. On the other hand, if the agent retains this information and finds it relevant for deciding between options A and B, why shouldn’t she also retain and find it relevant for deciding between A, B, and doing nothing (her option in the original example)?

**4.3. Objection 3: Normative arbitrariness**

The previous case can be used to motivate a third objection to rounding down. If we offered our hypothetical agent a choice between a missile launcher with a .5 chance of success versus one with a .49 chance, she would choose the former. Why? Presumably, because she’s committed to something like the following principle: If options A and B have equal payoffs in every state of the world, and option A makes the good outcomes more probable than B, then A is better than B. She would follow this principle when comparing options with .1 vs. .09 chances of success, 1 in 100,000 vs. 1 in 200,000 chances of success, and so on until she reaches the threshold for rounding down. At this point, her normative principles seem to abruptly change!

Here’s another example. Imagine I offer you a lottery, A, that will save 1 life with probability 1. Now, I offer you a bet, B, that will save 10 billion lives with probability .999999. Intuitively, B is far better than A. Now I offer you a bet, C, that will save (10 billion)^{2} lives with probability .999998 (≈ .999999^{2}). Intuitively, C is far better than B or A. This sequence motivates what Wilkinson (2022) calls Minimal Tradeoffs, the claim that there is some ratio of increase in payoffs (here, 10 billion times greater) and decrease in probability (here, squaring the probability) that will always result in a better bet. However, if Transitivity and Minimal Tradeoffs are true, then we can continue this series until we arrive at *some* bet, Z, that has a probability below the threshold yet is better than a bet, Y, that has a probability above the threshold. Rounding down denies this: Z is worse than Y because Z has a probability so low that we round it to 0. Therefore, rounding down violates either Transitivity or Minimal Tradeoffs. To deny Minimal Tradeoffs is to maintain that “there is at least one threshold of probability *p′* between *p* and 0 (which might be unknown or vague) at which we have a discontinuity, such that no value with probability below *p′* is better than any value with probability above *p′*.”
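Under hypothetical numbers, the sequence reaches the threshold surprisingly quickly. The sketch below works in log space because the payoffs overflow ordinary floats within a few steps (the 1-in-10-billion threshold is an arbitrary illustration):

```python
import math

q = 0.999999       # probability of bet B, the first step
threshold = 1e-10  # hypothetical rounding-down threshold (1 in 10 billion)

# At each step the payoff is multiplied by 10 billion and the
# probability is squared: step k saves 10**(10*k) lives with
# probability q**(2**(k - 1)).
k = 1
while math.exp(2 ** (k - 1) * math.log(q)) >= threshold:
    k += 1

print(k)       # 26: the first bet whose probability is under the threshold
print(10 * k)  # 260: log10 of the number of lives that bet would save
```

By Transitivity and Minimal Tradeoffs, this 26th bet is better than every earlier one; rounding down nonetheless values it at 0.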

The problem of normative arbitrariness alleges that if rounding down were correct, the normative laws would change at some arbitrary cut-off. That’s a strange and unparsimonious view, like positing that the laws of nature change at regions of space smaller than a breadbox.

Perhaps one could reply that there’s nothing deeply wrong about laws changing at different scales. It’s possible that this is how the laws of nature actually work, since things behave very differently at the quantum or even nanoscale compared to the scale of ordinary objects (Bursten 2018). If the physical world contains such discontinuities, then maybe we should accept them in the normative realm as well, even if they violate our aesthetic preferences. The obvious rejoinder is that we have good physical explanations of why things behave differently at the nanoscale, and the threshold of the nanoscale is non-arbitrarily specifiable. But if we can’t explain *why *the laws of rationality change at a particular cut-off, then it does seem problematically arbitrary.

### 4.4. Objection 4: The threshold for rounding down is arbitrary

The final objection, then, is that there is no non-arbitrary way of setting a threshold for rounding down. Should I round down probabilities under 1 in a billion? 1 in 10 billion? 1 in 1000?^{[16]} What reason could I give in favor of one of these thresholds over another?

There are a few ways to interpret “arbitrary” here, some of which will not worry defenders of rounding down. Perhaps the objection is that the threshold is subjective—varying from agent to agent—or that it is context-dependent. Since decision theories are ultimately beholden to subjective values (they are, in the end, about getting an agent what they *want*), it’s not too much of a stretch to think that the choice of a threshold should be subjective and context-dependent.^{[17]}

The deeper problem of arbitrariness is that there can be no reason (subjective or otherwise) given for why the threshold is set at one value rather than another. As Monton (2019, 17) points out, even if one cannot derive independent reasons for choosing some threshold, one can work backward from intuitions about what is proper to do in paradigm cases to derive one’s threshold.^{[18]} Monton is willing to pay $50 to play the St. Petersburg game (with no diminishing marginal utility). Therefore, his threshold is somewhere between 1/2^{50} and 1/2^{51} (the probabilities of flipping heads 50 and 51 times in a row, respectively), about 1 in 2 quadrillion. Nevertheless, one might hope there is more to say here about the reasons for having the intuition in the first place.
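Monton's back-calculation is easy to verify (the $50 figure is his):

```python
# Valuing the St. Petersburg game at $50 means keeping exactly the
# first 50 terms of its sum (each worth $1), so the threshold lies
# between the probabilities of the 50th and 51st outcomes.
upper = 0.5 ** 50  # probability of the first heads on the 50th flip
lower = 0.5 ** 51  # probability of the first heads on the 51st flip

print(upper)      # 8.881784197001252e-16
print(1 / lower)  # 2251799813685248.0, i.e., about 1 in 2.25 quadrillion
```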

**5. Amorphous probabilities and the resolution of our evidence**

So far, I’ve been assuming that the probabilities involved in the lotteries are known and determinate. The question was whether rounding down objective chances yields good results and theoretically satisfying explanations about what we ought to do. I’ll leave this as an open question. The question I’ll shift to here is whether rounding down makes sense when the probabilities are not fully known, when we have subjective uncertainty over various models of what the world is like. After all, that is the situation we are in with respect to almost any real-world gamble we might face.

An epistemic defense states that one should (or is permitted to) round down very low probabilities whose precise values one is uncertain about. A separate claim is that in most cases, very low probabilities will be ones about which we are highly uncertain. I will argue that epistemic rounding yields intuitively plausible anti-fanatical results and does a better job of evading the four objections above (though it’s not completely immune to them). To make this case, I first want to round up a cluster of related concepts.

**5.1. Tolerance**

Smith (2014) proposes that rounding down is permissible because we should allow *tolerance *in probability assignments. Every normative prescription about a practical human activity must have some tolerance for error, “a range such that discrepancies from the norm within that range can be ignored” (471). Decision theory is a theory of a practical human activity. Therefore, decision theory must allow for some tolerance for error. This suggests the following:

Rationally negligible probabilities (RNP): For any lottery featuring in any decision problem faced by any agent, there is an ϵ > 0 such that the agent need not consider outcomes of that lottery of probability less than ϵ in coming to a fully rational decision.

Why must a normative principle allow for some acceptable variation? For Smith, the answer is not that infinite precision is impossible (though it often is).^{[19]} It’s that some deviation from the norm simply doesn’t matter for success at the project at hand. There’s some range of error such that having smaller error does not make the result any better. For example, suppose a construction plan says that a board must be 10.5 cm across. Its tolerance will specify the range of widths that will count as close enough for the purposes of constructing the building (say, +/- 2 mm). According to RNP, incorporating very low probability events will lead you to make different decisions than if you had rounded down, but they won’t be *better *(more acceptable, more rational) decisions.

There’s a worry here that RNP simply begs the question: ignoring small probability states is rationally permissible because ignoring these states doesn’t make one’s decisions any less rational.^{[20]} After all, the first objection to rounding down states that ignoring small probability states can cause us to make bad decisions. What we want is some non-arbitrary reason for thinking that there is a range of probability assignments within which rationality becomes more permissive. In the case of the 10.5 cm board, we could supply some reason for setting the tolerance level at, say, +/- 2 mm; perhaps the hardware that we use to fix the board works equally well for widths between 10.3 and 10.7 cm. We need some similar reason for specifying the tolerance range of probabilities within which variation doesn’t make a rational difference.

**5.2. Specificity and Resolution**

A second useful concept is that of the *specificity *of our evidence. Joyce (2005) distinguishes three dimensions of the relationship between one’s total evidence, E, and a hypothesis, H:

Balance: How decisively E tells in favor of H compared to alternative hypotheses, typically reflected in one’s credence in H

Weight: The amount of evidence available in E, typically reflected in how stable or resilient credences are to new information

Specificity: How much E discriminates between H and alternative hypotheses, typically reflected in the spread of the credence values assigned to H

Suppose that you have formed a belief about the width of your board. You take many measurements using different rulers. However, the rulers all have markings in cm increments. You believe the board is almost certainly between 10 and 11 cm wide. From the looks of it, it is probably between 10.25 and 10.75 cm. But you can’t reliably tell whether it is 10.4 or 10.6 cm. Your evidence is just not that specific.^{[21]}

A body of evidence can fail to be specific for several reasons. Above, the hypotheses were at a finer level of grain than the observations. In other cases, the evidence can be incomplete or ambiguous. All else equal, as the differences predicted by hypothesized models get smaller, the same body of evidence will become less specific and the balance between the hypotheses will be indeterminate (Joyce 2005, 171).

The hypotheses that are important for our purposes are models that specify the objective chances of various states of the world. A helpful metaphor here is to think of the *resolution *of the evidence. The resolution of a microscope is the distance apart two things must be in order to see them as distinct. The resolution of our evidence is how far apart two probabilistic models of the world must be in order for our evidence to distinguish between them. Some kinds of evidence have very high resolution. For example, scientists at CERN derive very specific probabilities of decay states which they test against very sensitive observations of actual decay.^{[22]} Modern weather forecasting may be able to distinguish between a .1 and .3 probability of rain but may not be so sensitive as to distinguish between .11 and .12. Our evidence has even less resolution for other kinds of forecasts about the future. How sensitive is our evidence regarding the probability of an existential risk over the next 100 years? Or that AGI will emerge within 10 years?

We could use *specificity* or *resolution* to flesh out a theory of *tolerance*. If our evidence only tells us that the probability of some outcome is *p* +/- ϵ, then we are not rationally required to make decisions in a way that requires finer-grained probabilities within this interval. Let’s say that we “coarse-grain” a probability within *p* +/- ϵ when we treat it as equivalent to *p* for the purposes of decision-making. And note that we can coarse-grain all probabilities, not just low ones; a probability of .50001 might be coarse-grained to .5 for the same reasons that a probability of .00001 might be coarse-grained to 0.
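To make the coarse-graining rule concrete, here is a minimal sketch in Python. The anchor values and the tolerance ϵ are illustrative choices of mine, not anything the theory fixes:

```python
def coarse_grain(p, anchors, epsilon):
    """Snap a probability to the nearest anchor value if it lies
    within the tolerance epsilon; otherwise leave it unchanged."""
    for a in anchors:
        if abs(p - a) <= epsilon:
            return a
    return p

# Illustrative anchors and tolerance (my choices, not the theory's):
anchors = [0.0, 0.5]
epsilon = 0.0001

print(coarse_grain(0.00001, anchors, epsilon))  # .00001 is treated as 0
print(coarse_grain(0.50001, anchors, epsilon))  # .50001 is treated as .5
print(coarse_grain(0.3, anchors, epsilon))      # .3 is left alone
```

The same function handles rounding down (the first call) and mid-scale coarse-graining (the second), which reflects the point above that both are instances of one phenomenon.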

**5.3. Amorphousness**

Monton (2019) argues that very low probabilities are often *amorphous*, and that this matters for decision-making. Consider again the probability that you assign to the proposition that the mugger will pay you a billion dollars (say, 1 in a billion). Now, consider the proposition that if you do not give your money to the mugger, a second mugger will approach you offering two billion dollars. Whether you should give your money to the first mugger will depend on the ratio of the probabilities you assign to these two possibilities (one mugger vs. two). For simplicity, suppose that if the probability of the second mugger is higher than 1 in 2 billion, then you should keep your wallet. Making a rational decision by the lights of EVM depends on your ability to finely discriminate between these very low probabilities.
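The arithmetic behind this case can be sketched as follows, using the numbers above and setting aside the wallet’s own value for simplicity (the payoff figures come from the example; the function names are mine):

```python
# EV of handing over your wallet: the chance the first mugger pays out.
def ev_give(p_first_pays, payout_first):
    return p_first_pays * payout_first

# EV of keeping it: the chance a second mugger appears offering double.
def ev_keep(p_second_mugger, payout_second):
    return p_second_mugger * payout_second

p1, v1 = 1e-9, 1e9   # 1-in-a-billion chance of $1 billion
v2 = 2e9             # the hypothetical second mugger offers $2 billion

# Breakeven: keep the wallet iff p_second > (p1 * v1) / v2 = 1 in 2 billion.
print(ev_keep(6e-10, v2) > ev_give(p1, v1))  # True: keep the wallet
print(ev_keep(4e-10, v2) > ev_give(p1, v1))  # False: hand it over
```

Note that the decision flips between a second-mugger probability of 4-in-10-billion and 6-in-10-billion, exactly the kind of discrimination among tiny probabilities that the text suggests we cannot reliably make.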

In general, the expected value of a bet will involve the payoffs over all possible states of the world. It is possible to come up with many very unlikely states that would have extreme payoffs (positive or negative). Monton suggests that we should ignore these states, given that we have so little epistemic purchase on them:

When it comes to these tiny values, our probability assignments are *amorphous*: As a practical matter, we aren’t cognitively capable of making well-thought-out precise probability assignments for that immense space of remote possibilities… Because we aren’t cognitively capable of taking them all into account, and because the probabilities associated with the possibilities are all very small, the best thing to do is to ignore all such possibilities, by discounting their probabilities to zero. (13)

To put these pieces together, our evidence does not typically have enough resolution to give us determinate probabilities for very improbable events. If the resolution of our evidence sets the tolerance level, then we are permitted (and perhaps ought) to treat very low probabilities the same as 0. The argument is not that these events will never occur and therefore we should dismiss them (this would be a decision-theoretic defense). The argument is that even if these events matter, we are too ignorant for them to matter for rational decision-making.

**6. Why round down?**

The epistemic defense of rounding down states that when we are rationally incapable of distinguishing among models that posit different probabilities for some outcome, then we should or are rationally permitted to coarse-grain those probabilities. We are very often in that situation when it comes to very low probabilities and coarse-graining would (or could) take them to probability 0. Therefore, we should or are rationally permitted to round low probabilities down to 0. Several questions remain.

First, why *ignore *these amorphous probabilities rather than doing something else with them? One thing you could do is incorporate your best estimate of the probability (say, 1 in a billion) into your expected value calculations. As we’ve seen, since tiny probabilities of astronomical value have a strong influence on expected value, doing so can lead to fanaticism. One might round down simply to avoid fanatical conclusions. More pragmatically, bets with high uncertainty may tend to have higher EV as a result of these long tails. Rounding down is a way of guarding oneself against perverse incentives toward favoring bets with high uncertainty.

A more distinctively epistemic reason for rounding down has to do with our reasons against acting on uncertainties. For example, Monton suggests that rounding down amorphous probabilities is justified by a “judgment-stability line of reasoning: we should prefer to make life-changing decisions that we would not call into question upon making tiny adjustments in our probability assignments” (2019, 14). Likewise, someone who is *ambiguity averse *is averse to taking bets where the probabilities are unknown. Rounding down removes highly uncertain probability assignments, such that one’s decisions are determined more fully by those probabilities about which one is more certain.^{[23]} If ambiguity aversion is rationally mandated, then rounding down may be as well. If one is a permissivist about risk attitudes or about ways of dealing with ambiguity, then rounding down will be permissible but not mandatory.

Rounding down embodies one kind of attitude toward ambiguous bets; it is a refusal to consider the uncertain *states* that factor into the overall EV of the bet. Another option is to refuse *bets* that contain uncertainties. The problem here is that the tail ends and fine-grained partitions of any bet will contain uncertainties, so the ambiguity-averse will simply fail to act.

Another option is to perform a Bayesian adjustment to one’s probability distribution over outcomes. A Bayesian’s posterior expectation of some value is a function of both their prior and the evidence that they have for the true value. When there is little evidence, the estimated value will revert to the prior. For example, suppose you are a Bayesian considering the expected value of a speculative technology. If most technologies have moderate cost-effectiveness, then you will need very strong evidence that *this *technology’s cost-effectiveness is very high or very low. When evidence is scarce, Bayesian posterior probability assignments tend to have less variance than the original ambiguous estimate, assigning lower probabilities to extreme payoffs. Rounding down will have a similar effect, pushing the probability mass toward the center of the distribution, though it does so in a less continuous fashion.

A parallel discussion is found in the literature on imprecise probabilities. It is controversial how to make decisions using imprecise probabilities, especially in cases where (i) the probabilities are amorphous and (ii) different ways of disambiguating the probabilities matter for decision-making. Consider Monton’s discussion of Pascal’s mugging. If you assign a probability < 1 in 2 billion to there being a second mugger, then handing over your wallet has higher EV. If you assign a probability greater than that, keeping it has higher EV. Suppose you assign an imprecise probability, ranging from 0 to 1 in 900 million. What does this imprecise probability tell you to do? Are both actions permissible? Neither? Just one?

Some authors argue that any decision rule will have to utilize some estimate of the credence, so the imprecise credence will function, in effect, as a precise one (Elga 2010, Dorr 2010). While some of these decision rules might recommend rounding down, many – such as “use the midpoint” – will not. Rounding down will likely find more support from more permissivist theories. For example, Rinard (2015) argues that it is indeterminate in such cases whether the candidate actions are permissible or impermissible.

Moss (2015) goes further, arguing that acting requires one to identify with a particular credence function within your range of uncertainty, but there is considerable leeway about which of these functions you may rationally choose in any case. On her view, we can choose to identify with a credence function based on practical considerations and these identifications may change from context to context even without a change in evidence (Lennertz 2022). If one of your credence functions assigns probability 0 to a particular state, then you are permitted to identify with this function and ignore that state for the purposes of decision if you have good pragmatic or epistemic reasons for doing so.^{[24]} Lennertz argues that to identify with a credence function is to accept it for the purposes of action. Acceptance can come apart from belief. For example, you could accept an idealization (e.g., assuming a frictionless plane when making calculations in physics) without believing that it is literally true. Other contexts might call for fewer idealizations (e.g., including friction when designing a vehicle). The acceptance model gives a plausible analysis of what it means to treat a state as having probability 0 for the purposes of action.

**7. An epistemic defense of rounding down**

The epistemic defense of rounding down fares better with respect to the four objections raised above. Let’s start with the arbitrariness objections. First, the resolution of our evidence places non-arbitrary constraints on the threshold for rounding down/coarse-graining. There are still choices to be made about how specific our evidence has to be to recommend rounding down. For example, suppose you’re 75% confident that the probability of a second mugger is greater than 1 in 2 billion. Is that enough uncertainty to permit you to round down? The problem of arbitrariness has been pushed back from having no external standard for our rounding-down value to having some arbitrariness about when that external standard applies. Some progress has been made.

Second, the epistemic defense does not hold that the normative laws change at some arbitrary threshold, at least when it comes to first-order principles of rational decision. We need not posit that maximizing expected utility is the right way to assess actions until the probabilities of certain states get too low. Instead, our ability to adhere to or be guided by these principles falls off at this threshold; we can’t reliably or justifiably take actions to maximize expected utility. The unspecificity of your evidence explains why you cannot rationally adhere to what you might even take to be the “true” baseline normative principles.^{[25]}

What about the objection that rounding down ignores important outcomes? Suppose you are considering an investment in a new technology. In the most probable states of the world, this action will have some moderate impact. You are uncertain about what the world is like, so you reserve some tiny probabilities for the cases in which it has extremely good or extremely bad outcomes. For example, suppose you assign a 1 in a billion probability that the technology will backfire in a completely unforeseen way and kill everyone on earth.^{[26]} Your evidence is not very specific, so you entertain a range of models that assign probabilities from 0 to 1 in 900 million. Why round down to zero rather than acting on some positive probability within that interval?

Or consider Chappell’s and Kosonen’s missile case. The threshold for rounding down is 1 in 100 million and the probability, *p*, that a missile is successful is 1 in a billion. Suppose the missile launcher has some cost, *c*, and yields some benefit, *b*, if it is successful. The expected value of the action reduces to *pb* – *c*. This will be positive when *p* > *c/b*. Rounding down *p* to 0 means that buying the missile launcher will always be net negative (and worse than doing nothing). What happens when we introduce uncertainty? First, suppose you are confident of a model with particular values of *b* and *c* but have uncertainty over *p*. Your evidence states that the probability of success is somewhere between 0 and 1 in 900 million. Somewhere in this interval, the EV of buying the missile launcher goes from negative to positive, but you don’t know whether the actual probability falls above or below that point. Should you round down in this case?
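A quick sketch of the arithmetic, with *b* and *c* chosen only for illustration (the post leaves them abstract): the point of the case is that the breakeven probability *c/b* falls inside the evidential interval, so the sign of the EV flips somewhere we cannot see.

```python
def ev_launcher(p, b, c):
    # Expected value of buying the launcher: p*b - c
    return p * b - c

b = 1e12  # illustrative benefit if the missile succeeds (not from the text)
c = 1e3   # illustrative cost of the launcher

breakeven = c / b  # EV is positive exactly when p > c/b = 1e-9 here

# Evidence only locates p somewhere in [0, 1 in 900 million]:
lo, hi = 0.0, 1 / 9e8

print(ev_launcher(lo, b, c))  # negative at the bottom of the interval
print(ev_launcher(hi, b, c))  # positive at the top
```

Because the breakeven point sits strictly inside [lo, hi], no amount of staring at the interval settles whether buying the launcher maximizes EV.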

This seems to me to be the kind of case that casts the most doubt on rounding down. We have an interval of reasonable credences. We could round down to 0, round up to the highest probability, or choose some probability in the middle. Rounding down dismisses what, to our knowledge, is a very real risk (though we don’t know how much of a risk it is). That seems reckless, especially given the potentially catastrophic consequences of doing nothing.

I think the strongest response is this. Suppose we adopt Moss’s account on which we are permitted to identify with any of the credences in our interval and that our reasons for picking a particular credence will be extra-evidential (pragmatic, ethical, etc.). In this case, we have strong reasons for accepting a higher credence for the purposes of action. The consequences of underestimating the probability of success (giving up a chance to avert the asteroid) are worse than the consequences of overestimating it (wasting money on a missile launcher). Therefore, we should take precautionary measures. The key feature of this case is a kind of asymmetry. Among the possibilities we consider, there is just one low probability event (the probability distribution only has a long tail on the right), and the consequences of error are worse for rounding down than rounding up or averaging.

The pragmatic consequences of various kinds of error can thus influence whether we round down and at what threshold. The analogy between rounding down and significance testing is fruitful here. It is commonly accepted that non-epistemic values influence our thresholds for accepting or rejecting scientific hypotheses in cases of inductive risk (Douglas 2000).

Rounding down seems more acceptable when we introduce uncertainty over the costs and benefits as well as the probability of success. Ignoring low-probability events may cause us to disregard important events. The flipside is that by *not *ignoring low-probability events, too many events will be deemed important. Suppose I think there’s a roughly 1 in a billion chance of success, a roughly 1 in 5 billion chance that my missile will be stolen by a terrorist attempting to knock the asteroid onto a collision course toward earth, a 1 in 2 billion chance the future of humanity is full of suffering and an asteroid would put us out of our misery, etc. Figuring out what maximizes expected utility will involve a delicate balance among tiny probability differences on which we have very little purchase. There are equally serious consequences of error in any direction.^{[27]}

The final objection is that rounding down prevents us from making rational decisions under the threshold of probability. If the evidence does not have sufficiently fine resolution at this threshold, then it seems correct to say that we *should *treat all of these probabilities the same. However, suppose we do have some information about two options that both lie under the threshold. If we adopt the acceptance model of rounding down, rounding down in some circumstances does not preclude us from refusing to round down in others.

Suppose you assign a probability of 0 to state s1 for a particular decision. Later, you are faced with a decision with a state s2 that your evidence says has a lower probability than s1 (even though we don’t know what their precise values are). In *this *context, you might want to un-zero s1 so as to compare the two states. Likewise, when the stakes change or when circumstances require you to act on highly ambiguous evidence, perhaps rationality points in favor of doing your best with the amorphous probabilities you have.^{[28]}

**8. Implications for decisions in EA**

I have argued that there are certain epistemic conditions that make rounding down rationally permissible. Whether we ought to round down, round up, or average will depend on pragmatic and ethical facts about the situation, including (a)symmetries among costs and benefits and how many low probability events we consider live possibilities. The upshot is that the choice to round down will be context-sensitive. Is there anything more general we can say about the effect that rounding down would have if adopted?

Some cause areas—particularly existential risk reduction—may derive more of their EV from long tail events. Therefore, rounding down will tend to discount the value of these cause areas. However, my discussion here does not recommend rounding down in all cases, only those where we have little specific evidence about the precise probabilities of long tail outcomes and there are several important low probability events whose probabilities would matter for our decision.

There are two general upshots for cause prioritization exercises, especially in the x-risk domain. First, we should be very cautious when an action’s high EV is due to long-tail events *and *the probability we assign to those events is due to ignorance rather than specific evidence.^{[29]} Second, it is a mistake to focus on some salient low probability events while ignoring others. For example, we might focus on the possibility that a new technology will avert an AI apocalypse but ignore the possibility that the effort will backfire and the probability that it will hasten the AI apocalypse.^{[30]} Doing so will cause significant bias in EV calculations. Rounding down is one way of hedging against our failures of imagination or evidence.

The epistemic defense of rounding down is not so revisionary after all, then. It’s a call to use EV carefully and only where the evidence speaks.

# Acknowledgments

This post was written by Hayley Clatterbuck in her capacity as a researcher on the Worldview Investigation Team at Rethink Priorities. Thanks to Arvo Muñoz Morán, Bob Fischer, David Moss, and Derek Shiller for helpful feedback. If you like our work, please consider subscribing to our newsletter. You can explore our completed public work here.

- ^
See Kosonen (forthcoming) for additional arguments showing that rounding down violates Independence and Continuity and leaves one susceptible to money pumps.

- ^
Imprecise credences are a typical way of representing such uncertainty.

- ^
You might be uncertain that the person making the bet can and will pay any amount that you win. After all, there are some possible winnings that surpass the world’s GDP. But we will put this aside and assume that you are certain that the world is the way that the bet describes.

- ^
Assuming, for the sake of exposition, that money does not have declining utility. You can easily amend the case for decreasing marginal utility by increasing the dollar amount that the mugger offers you.

- ^
As we will see, rounding down will be an instance of a more general issue about the grain of probability estimates. This argument will also show that, in some instances, you should “round off” the differences between other close probability estimates, treating .50000001 as effectively .5, just as you would treat .0000001 as effectively 0.

- ^
Kosonen helpfully considers variants of each strategy, the differences among which I will be ignoring here.

- ^
For example, I’m considering whether to wear my seatbelt on a drive to the grocery store. Suppose that the probability that I will get in an accident is 1/100,000. This is the sum of the probabilities of getting in an accident in the first inch of my drive, the second inch of my drive, etc. My probability of each of these micro-events is smaller than my threshold probability, so rounding down suggests I dismiss them all.

- ^
See Kosonen (ms) for a discussion of how each view fares with respect to conditions like Dominance. Rethink Priorities’ Portfolio Builder rounds down via tail discounting, so to the extent that any criticisms below leave that approach unscathed, they will not pose a problem for our methodology.

- ^
If you are concerned with maximizing the probability of getting something at least as good as some benchmark value, then you should be sensitive to the mode of the outcomes. As Hajek (2021) puts it, “in a case of uncertainty about what to do, an agent might want to maximize the chance of doing what’s objectively right—that is, to be guided by the mode of the relevant distribution.”

- ^
Risk aversion is another. A reasonable agent might “think that guaranteeing themselves something of moderate value is a better way to satisfy their general aim of getting some of the things that they value than is making something of very high value merely possible” (Buchak 2013, 45). For a discussion of how risk aversion and rounding down deal with fanaticism, see Fanaticism, Risk Aversion, and Decision Theory.

- ^
P-value significance testing faces analogous versions of each objection (Sober 2008). For example, if both H and ~H confer a low probability on O, then it recommends that we reject both (though one must be true). This is similar to the claim that rounding down gives implausible recommendations when all options are under the threshold.

- ^
There are plausible responses to make on behalf of decision-theoretic rounding down. One is to deny that the *individual’s* probability of success is the relevant probability to assess. Instead, given that this is a collective action problem, “all the choices faced by different agents should be evaluated collectively, and if the total probability of some event or outcome is above the discounting threshold, then no one should discount” (Kosonen 2023, 26).

- ^
Kosonen (2023) emphasizes the importance of the independence assumption in these two cases: “Suppose that a googolplex agents face Pascal’s Mugging. The probability that at least one of them gets a thousand quadrillion happy days in the Seventh Dimension is still small even if they all pay the mugger because the probability of obtaining the great outcome is not independent for the different agents: Either the mugger has magical powers, or he does not. However, if the probabilities were independent, then Collective Difference-Making would recommend against discounting, provided that the total probability of at least one person obtaining the great outcome is sufficiently high” (25).

- ^
This move is suggested by Monton (2019) and evaluated by Kosonen (ms, 10). The problem of distinguishing between 0 probability states arises outside of the context of rounding down, in the case of infinitesimally small probabilities. Hajek (2014) and Easwaran (2015) discuss dominance reasoning in those contexts.

- ^
Nothing important changes if we assume that the missile launchers have some cost, but making them free makes the point more salient.

- ^
For the history of attempts to provide such a threshold, see Monton (2019) and Hajek (2014).

- ^
Monton (2019) argues that thresholds will be subjective but should be context-invariant for each subject.

- ^
See Hajek (2014, 550) for a skeptical discussion of this move.

- ^
Perhaps this reason for rounding down should not be dismissed so readily. If we cannot assign precise probabilities, then it’s not the case that we ought to. Here, we can distinguish between decision theory as a norm of *ideal* rationality and one that governs responsibility for actual agents (Hajek 2014).

- ^
See Smith (2014) for arguments for why rounding down gives the best account of St. Petersburg cases.

- ^
Imprecise probabilities are a useful tool for capturing specificity, which I will return to in Section 6. As Joyce puts it, “Indefiniteness in the evidence is reflected not in the values of any single credence function, but in the spread of values across the family of all credence functions that the evidence does not exclude. This is why modern Bayesians represent credal states using sets of credence functions. It is not just that sharp degrees of belief are psychologically unrealistic (though they are). Imprecise credences have a clear epistemological motivation: they are the proper response to unspecific evidence” (Joyce 2005, 171)

- ^
Scientists at CERN use a five-sigma significance level which means that the observations had a 0.00003% likelihood of being a mere statistical fluctuation.

- ^
An objection here is that uncertainty about the *tails* of one’s probability distribution also makes one more uncertain about the rest of the probabilities. After all, when rounding down, the remaining probability gets redistributed to the remaining states, thus altering your estimate of their probabilities. A solution is to act on coarse-grained ranges of probabilities, not exact probabilities within the middle of the distribution.

- ^
This defense of rounding down may not work if probability 0 is outside the range of your imprecise credence. Further, one might argue that it is irrational ever to give credence to a model that says the objective chance of some state is 0, as long as that state is a logical possibility. However, deterministic models will assign probability 0 to a great many states, so if these are among the models you consider, 0 will often be in the range of the possible. This raises a further problem: if you always entertain some model with probability 0, and you are permitted to identify with any credence function consistent with your evidence, then rounding down will *always* be rationally permitted. See Lennertz (2022) for a discussion of whether one can identify with a credence value outside of their range of credences and whether there are limits on which credence functions one can identify with.

- ^
See Weatherson (2024) for a related discussion.

- ^
This might be a bad example, since this probability assignment may be reasonable based on past evidence. For example, CFCs nearly caused a potentially catastrophic destruction of the ozone layer, a result that was not anticipated and only discovered decades later. See Chappell (2023b) for a related discussion.

- ^
Ord, Hillerbrand, and Sandberg (2010) argue that when there is symmetry among low probability events, they cancel out and can be ignored.

- ^
Monton (2019) gives the case of an effective altruist who must spend a large sum of money quickly or else the money will go away. They might be justified in making decisions based on amorphous probabilities since doing so is (probably) better than doing nothing.

- ^
A central point made by Karnofsky here.

- ^
Duffy (2023) shows that the probability of backfire can significantly affect one’s evaluation of x-risk mitigation projects.


It seems like we just moved the same problem to somewhere else? Let S be “that external standard” to which you refer. What external standard do we use to decide when S applies? It’s hard to know if this is progress until/unless we can actually define and justify that additional external standard. Maybe we’re heading off into a dead end, or it’s just external standards all the way down.

Ultimately, if there’s a precise number — like the threshold here — that looks arbitrary, eventually, we’re going to have to rely on some precise and (I’d guess) arbitrary-seeming direct intuition about some number.

Doesn’t it still mean the normative laws — as epistemology is also normative — change at some arbitrary threshold? Seems like basically the same problem to me, and equally objectionable.

Likewise, at a first glance (and I’m neither an expert in decision theory nor epistemology), your other responses to the objections in your epistemic defense seem usable for decision-theoretic rounding down. One of your defenses of epistemic rounding down is stakes-sensitive, but then it doesn’t seem so different from risk aversion, ambiguity aversion and their difference-making versions, which are decision-theoretic stances.

In particular

sounds like an explicit endorsement of motivated reasoning to me. What we believe, i.e. the credences we pick, about what will happen shouldn’t depend on ethical considerations, i.e. our (ethical) preferences. If we’re talking about picking credences from a set of imprecise credences to use in practice, then this seems to fall well under decision-theoretic procedures, like ambiguity aversion. So, such a procedure seems better justified to me as decision-theoretic.

Similarly, I don’t see why this wouldn’t be at least as plausible for decision theory:

Thank you, Michael!

To your first point, that we have replaced arbitrariness over the threshold of probabilities with arbitrariness about how uncertain we must be before rounding down: I suppose I’m more inclined to accept that decisions about which metaprinciples to apply will be context-sensitive, vague, and unlikely to be capturable by any simple, idealized decision theory. A non-ideal agent deciding when to round down has to juggle lots of different factors: their epistemic limitations, asymmetries in evidence, costs of being right or wrong, past track records, etc. I doubt that there’s any decision theory that is both stateable and clear on this point. Even if there is a non-arbitrary threshold, I have trouble saying what that is. That is probably not a very satisfying response! I did enjoy Weatherson’s latest that touches on this point.

You suggest that the defenses of rounding down would also bolster decision-theoretic defenses of rounding down. It’s worth thinking what a defense of ambiguity aversion would look like. Indeed, it might turn out to be the same as the epistemic defense given here. I don’t have a favorite formal model of ambiguity aversion, so I’m all ears if you do!

Couldn’t the decision theory just do exactly the same, and follow the same procedures? It could also just be context-sensitive, vague and complex.

How do we draw the line between which parts are epistemic vs decision-theoretic here? Maybe it’s kind of arbitrary? Maybe they can’t be cleanly separated?

I’m inclined to say that when we’re considering the stakes to decide what credences to use, then that’s decision-theoretic, not epistemic, because it seems like motivated reasoning if epistemic. It just seems very wrong to me to say that an outcome is more likely just because it would be worse (or more important) if it happened. If instead under the epistemic approach, we’re *not* saying it’s actually more likely, it’s just something we shouldn’t round down in practical decision-making if morally significant enough, then why is this epistemic rather than decision-theoretic? This seems like a matter of deciding what to do with our credences, a decision procedure, and typically the domain of decision theory.

Maybe it’s harder to defend something on decision-theoretic grounds if it leads to Dutch books or money pumps? The procedure would lead to the same results regardless of which parts we call epistemic or decision-theoretic, but we could avoid blaming the decision theory for the apparent failures of instrumental rationality. But I’m also not sold on actually acknowledging such money pump and Dutch book arguments as proof of failure of instrumental rationality at all.

One response to these objections to rounding down is that similar objections could be raised against treating consciousness, pleasure, unpleasantness and desires sharply if it turns out to be vague whether some systems are capable of them. We wouldn’t stop caring about consciousness, pleasure, unpleasantness or desires just because they turn out to be vague.

And one potential “fix” to avoid these objections is to just put a probability distribution over the threshold, and use something like a (non-fanatical) method for normative uncertainty like a moral parliament over the resulting views. Maybe the threshold is distributed uniformly over the interval [a, b], 0 ≤ a < b ≤ 1.

Now, you might say that this is just a probability distribution over views to which the objections apply, so we can still just object to each view separately as before. However, someone could just consider the normative view that is (extensionally) equivalent to a moral parliament over the views across different thresholds. It’s one view. If we take the interval to just be [0,1], then the view doesn’t ignore important outcomes, it doesn’t neglect decisions under any threshold, and the normative laws don’t change sharply at some arbitrary point.

The specific choice of distribution for the threshold may still seem arbitrary. But this seems like a much weaker objection, because such arbitrariness is much harder to avoid in general: e.g., precise cardinal tradeoffs between pleasures, between displeasures, between desires, and between different kinds of interests could be similarly arbitrary.

This view may seem somewhat ad hoc. However, I do think treating vagueness/imprecision like normative uncertainty is independently plausible. At any rate, if some of the things we care about turn out to be vague and we want to keep caring about them anyway, we'll need a way to deal with vagueness, and whatever that is could be applied here. Treating vagueness like normative uncertainty is just one possibility, which I happen to like.

I haven’t had time to read the whole thing yet, but I disagree that the problem Wilkinson is pointing to with his argument is just that it is hard to know where to put the cut because putting it anywhere is arbitrary. The issue to me seems more like this: for any of the individual pairs in the sequence, looked at in isolation, rejecting the view that the very, very slightly lower probability of the much, MUCH better outcome is preferable seems insane. Why would you ever reject an option with a trillion trillion times better outcome just because it was 1x10^-999999999999999999999999999999999999 less likely to happen than the trillion trillion times worse outcome (assuming that for both options, if you don’t get the prize, the result is neutral)? The fact that it is hard to say where in the sequence it is best to first make that apparently insane choice also seems concerning, but less central to me.

Hi David,

Thanks for the comment. I agree that Wilkinson makes a lot of other (really persuasive) points against drawing some threshold of probability. As you point out, one reason is that the normative principle (Minimal Tradeoffs) seems to be independently justified, regardless of the probabilities involved. If you agree with that, then the arbitrariness point seems secondary. I’m suggesting that the uncertainty that accompanies very low probabilities might mean that applying Minimal Tradeoffs to very low probabilities is a bad idea, and that there’s some non-arbitrary way to say when that will be. I should also note that one doesn’t need to reject Minimal Tradeoffs. You might think that if we did have precise knowledge of the low probabilities (say, in Pascal’s wager), then we should trade them off for greater payoffs.

Thanks, I will think about that.