In defence of fanaticism
Abstract
Consider a decision between: 1) a certainty of a moderately good outcome, such as one additional life saved; 2) a lottery which probably gives a worse outcome, but has a tiny probability of some vastly better outcome (perhaps trillions of additional blissful lives created). Which is morally better? By expected value theory (with a plausible axiology), no matter how tiny its probability of the better outcome, (2) will be better than (1) as long as that better outcome is good enough. But this seems fanatical. So you may be tempted to abandon expected value theory.
But not so fast — denying all such fanatical verdicts brings serious problems. For one, you must reject either the transitivity of moral betterness or even a weak principle of tradeoffs. For two, you must accept that judgements are either inconsistent over structurally identical pairs of lotteries or absurdly sensitive to small differences in probability. For three, you must accept that the practical judgements of agents like us are sensitive to our beliefs about far-off events that are unaffected by our actions. And, for four, you may also be forced to accept judgements which you know you would reject if you simply learned more. Better to accept fanaticism than these implications.
Introduction
Suppose you face the following moral decision.
Dyson’s Wager
You have $2,000 to use for charitable purposes. You can donate it to either of two charities.
The first charity distributes bednets in low-income countries in which malaria is endemic.[1] With an additional $2,000 in their budget this year, they would prevent one additional death from malaria. You are certain of this.
The second charity does speculative research into how to do computations using ‘positronium’ — a form of matter which will be ubiquitous in the far future of our universe. If our universe has the right structure (which it probably does not), then in the distant future we may be able to use positronium to instantiate all of the operations of human minds living blissful lives, and thereby allow morally valuable life to survive indefinitely long into the future.[2][3] From your perspective as a good epistemic agent, there is some tiny, non-zero probability that, with (and only with) your donation, this research would discover a method for stable positronium computation and would be used to bring infinitely many blissful lives into existence.[4]
What ought you do, morally speaking? Which is the better option: saving a life with certainty, or generating a tiny probability of bringing about infinitely many future lives?
A common view in normative decision theory and the ethics of risk — expected value theory — says that it’s better to donate to the speculative research. Why? Each option has some probability of bringing about each of several outcomes, and each of those outcomes has some value, specified by our moral theory. Expected value theory says that one option is better than another if and only if it has the greater probability-weighted sum of value — the greater expected value.[5] Here, the option with the greater expected value is donating to the speculative research (at least on certain theories of value — more on those in a moment). So perhaps that is what you should do.
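In symbols (notation mine, not drawn from the text): if a lottery L yields outcome o_i with probability p_i, and our axiology assigns o_i the value v(o_i), then

$$EV(L) \;=\; \sum_i p_i \, v(o_i),$$

and expected value theory ranks one lottery above another just in case its expected value is greater.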
That verdict of expected value theory is counterintuitive to many. All the more counterintuitive is that it can still be better to donate to speculative research no matter how low the probability is (short of being 0)[6], since there are so many blissful lives at stake. For instance, the odds of your donation actually making the research succeed could be 1 in 10^100. (10^100 is greater than the number of atoms in the observable universe.) The chance that the research yields nothing at all would be 99.99… percent, with another 96 nines after that. And yet expected value theory says that it is better to take the bet, despite it being almost guaranteed that it will actually turn out worse than the alternative; despite the fact that you will almost certainly have let a person die for no actual benefit. Surely not, says my own intuition. On top of that, suppose that $2,000 spent on preventing malaria would save more than one life. Suppose it would save a billion lives, or any enormous finite number. Expected value theory would say that it’s still better to fund the speculative research — expected value theory says that it would be better to sacrifice those billion or more lives for a minuscule chance at the infinitely many blissful lives (and likewise if the number of blissful lives were finite but still sufficiently many). But endorsing that verdict, regardless of how low the probability of success and how high the cost, seems fanatical. Likewise, even without infinite value at stake, it would also seem fanatical to judge a lottery with sufficiently tiny probability of arbitrarily high finite value as better than getting some modest value with certainty.
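For the arithmetic behind the billion-lives variant (a rough illustration, assuming a value scale on which each life saved or created counts for one unit, and a finite number N of blissful lives at stake):

$$EV(\text{research}) = 10^{-100} \times N, \qquad EV(\text{bednets}) = 10^{9},$$

so the research has the greater expected value whenever N > 10^109, and trivially so if the number of blissful lives is infinite.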
Fanatical verdicts depend on more than just our theory of instrumental rationality, expected value theory. They also depend on our theory of (moral) value, or axiology. Various plausible axiologies, in conjunction with expected value theory, deliver that fanatical conclusion. Foremost among them is totalism: that the ranking of outcomes is determined by the total aggregate of value of each outcome; and that this total value increases linearly, without bound, with the sum of value in all lives that ever exist. By totalism, the outcome containing infinitely many blissful lives is indeed a much better one than that in which one life is saved. And, as we increase the number of blissful lives, we can increase how much better it is without bound. No matter how low the probability of those many blissful lives, the expected total value of the speculative research is greater than that of malaria prevention. (Likewise, even if there are only finitely many blissful lives at stake, for any tiny probability there can be sufficiently many of them to make the risky gamble better than saving a life with certainty.) But this problem isn’t unique to totalism. When combined with expected value theory, analogous problems face most competing axiologies, including: averageism, critical-level views, prioritarianism, pure egalitarianism, maximin, maximax, and narrow person-affecting views. Those axiologies each allow possible outcomes to be unboundedly valuable, so it’s easy enough to construct cases like Dyson’s Wager for each.[7] And some — namely, critical-level views and prioritarianism — already deliver the same result as totalism in the original Dyson’s Wager. In this paper, I’ll focus on totalism, both to streamline the discussion and because it seems to me far more plausible than the others.[8] But suffice it to say that just about any plausible axiology can deliver fanatical verdicts when combined with expected value theory.
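As a rough formalisation (in notation of my own, not the author's), totalism ranks outcomes by their total welfare:

$$V_{\text{total}}(o) \;=\; \sum_{i \,\in\, \text{lives}(o)} w_i,$$

where w_i is the lifetime welfare of life i. Since this sum has no upper bound, adding more blissful lives makes the outcome better without limit, which is what lets the expected value of the risky option exceed that of the safe option no matter how small the probability.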
In general, we will sometimes be led to verdicts that seem fanatical if we endorse Fanaticism.[9] Conversely, to succeed in avoiding fanatical verdicts, our theory of instrumental rationality and our axiology must not imply Fanaticism.
Fanaticism: For any tiny (finite) probability ε > 0, and for any finite value v, there is some finite V that is large enough that L_risky is better than L_safe (no matter which scale those cardinal values are represented on).
L_risky: value V with probability ε; value 0 otherwise
L_safe: value v with probability 1
The comparison of lotteries L_risky and L_safe resembles Dyson’s Wager: one option gives a slim chance of a potentially astronomical value V; the other a certainty of some modest value v. But, here, V need not be infinite, in case you think infinite value impossible. And, with some minor assumptions (see Section 3), Fanaticism in this form implies the fanatical verdict in Dyson’s Wager. Likewise, to reject the fanatical verdict in Dyson’s Wager, we must reject Fanaticism.
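To see how expected value theory delivers this (a quick sketch in my own notation, leaving aside the choice-of-scale point discussed just below):

$$EV(L_{\text{risky}}) = \varepsilon V + (1-\varepsilon)\cdot 0 = \varepsilon V, \qquad EV(L_{\text{safe}}) = v,$$

so L_risky has the greater expected value exactly when V > v/ε; and for any fixed ε > 0 there is some finite V that large, which is just what Fanaticism asserts.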
Note that Fanaticism is quite a strong claim. As defined here, it requires that the ranking of L_risky above L_safe holds not only when the numbers of lives in the outcomes are proportional to V, v, and 0. It must hold whenever outcomes can be cardinally represented with those values. Recall that cardinal representations of value[10] are unique only up to positive affine transformations — two outcomes represented by 0 and v on one scale could instead be represented by 0 × a + b and v × a + b (for any positive a and real b). Conversely, an outcome that contains many happy lives might still be represented cardinally with value 0. So Fanaticism doesn’t apply only to risky lotteries in which some possible outcome contains zero valuable lives, or zero value on net. It also applies to lotteries that can be represented as L_risky and L_safe even though every one of their outcomes contains enormous numbers of blissful lives, or enormous amounts of suffering, as long as the differences in value between those outcomes are proportional to 0, v, and V.
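A small illustration of the rescaling point (my own example, not the author's): the very same triple of outcomes might be represented on one scale as (0, v, V) and on another as

$$(0,\; v,\; V) \;\longmapsto\; (b,\; av + b,\; aV + b), \qquad a > 0,\ b \in \mathbb{R},$$

so, if b is large, even the outcome represented by ‘0’ on the first scale may contain enormous numbers of blissful lives. What Fanaticism constrains are the differences in value between the outcomes, and those are preserved (up to the common positive factor a) under any such transformation.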
Given how strong and how counterintuitive Fanaticism is, you might think it easy to reject. And many philosophers and other thinkers have done so, rejecting either Fanaticism or similar principles. For instance, Bostrom (2009) presents a compelling reductio ad absurdum for fanatical verdicts in the prudential context. Bostrom (2011), Beckstead (2013), and Askell (2019) treat even a weak form of (moral) Fanaticism as a reductio for moral theories. Others propose theories of rationality with the express purpose of avoiding fanatical verdicts. For instance, some propose that we simply ignore outcomes with small enough probabilities (e.g., D’Alembert 1761; Buffon 1777; Smith 2014[11]; Monton 2019). Others insist that we maximise not expected moral value but instead the expected utility of outcomes (given by some increasing function of an outcome’s value), and that the correct utility function is bounded above so as to keep the expectation of utility bounded as well (e.g., Arrow 1971: 64).
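To illustrate the first sort of proposal (a schematic reconstruction of my own, not any particular author's formulation): fix a threshold t > 0 and evaluate lotteries by a truncated expectation that ignores outcomes of probability below t,

$$EV_t(L) \;=\; \sum_{i \,:\, p_i \ge t} p_i \, v(o_i),$$

so that whenever ε < t we get EV_t(L_risky) = 0 < v = EV_t(L_safe), and the fanatical ranking never arises, however large V is.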
Meanwhile, there are few defenders of Fanaticism or, more broadly, of fanatical verdicts in cases similar to Dyson’s Wager. Notable examples include Pascal (1669) himself, Parfit (1984: §3.27), and Hájek (2014). And even they most often endorse such verdicts only because they are a consequence of expected value theory, not because they see good independent justification for them. I suspect that even many diehard adherents of expected value theory are uncomfortable with the fanatical verdicts supplied by their theory.
This situation is unfortunate. There are compelling arguments in favour of Fanaticism that do not rely on expected value theory, and so we have good reason to accept it even if we reject that particular theory. If we do not, we face disturbing implications.
The paper proceeds as follows. Section 2 addresses some common motivations for rejecting Fanaticism. Section 3 introduces the necessary formal framework for what follows. Sections 4 through 6 present arguments in favour of Fanaticism, each premised on weaker claims than expected value theory, and each (by my reckoning) more compelling than the last. The first is a basic continuum argument. The second is that, to deny Fanaticism, we must accept either what I’ll call ‘scale dependence’ or an absurd sensitivity to arbitrarily small differences in probability. And the final nails in the coffin are an updated version of Parfit’s classic Egyptology Objection and what I’ll call the Indology Objection, by which those who deny Fanaticism must make judgements which appear deeply irrational. Section 7 is the conclusion.
Read the rest of the paper
I have in mind the Against Malaria Foundation. As of 2019, the charity evaluator GiveWell estimated that the Against Malaria Foundation prevents the death of an additional child under the age of 5 for, on average, every US$3,710 donated (GiveWell 2020). Including other health benefits, a total benefit roughly equivalent to that is produced for, on average, every US$1,690 donated. Of course, in reality, a donor can never be certain that their donation will result in an additional life saved. This assumption of certainty is for the sake of simplicity.
Dyson (1981) was the first to suggest positronium as a medium for computation and information storage. This follows Dyson (1979), wherein it is argued that an infinite duration of computation could be performed with finite energy if the computation hibernates intermittently, and if the universe has a particular structure. Tipler (1986) suggests an alternative method which may work if the universe has a different structure. Sandberg (n.d.) argues that both Dyson and Tipler’s proposals are unlikely to work, as our universe appears to match neither structure. Nonetheless, it is still epistemically possible that the universe has the right structure for Dyson’s proposal. And possibility is sufficient for my purposes.
Would such artificially-instantiated lives hold the same moral value as lives led by flesh-and-blood humans? I assume that they would, if properly implemented. See Chalmers (2010) for arguments supporting this view. And note that, for the purposes of the example, all that’s really needed is that it is epistemically possible that the lives of such simulations hold similar moral value.
I have deliberately chosen a case involving many separate lives rather than a single person’s life containing infinite value. Why? You might think that one individual’s life can contribute only some bounded amount of value to the value of the world as a whole — you might prefer for 100 people to each obtain some finite value than for one person to obtain infinite value. But whether this verdict is correct is orthogonal to the issue at hand, so I’ll focus on large amounts of value spread over many people.
Note that expected value is distinct from the frequently-used notion of expected utility, and expected value theory distinct from expected utility theory. Under expected utility theory, utility is given by some (indeed, any) increasing function of value—perhaps a concave function, such that additional value contributes less and less additional utility. The utility of an outcome may even be bounded, such that arbitrarily large amounts of additional value contribute arbitrarily little additional utility. Where expected value theory says that a lottery is better the higher its expected value, expected utility theory says that it is better the higher its expected utility. And, if the utility function is bounded, then the expected utilities of lotteries will be bounded as well. As a result, expected utility theory can avoid the fanatical verdict described here. But, if it does, it faces the objections raised in Sections 4, 5, and 6. Where relevant, I will indicate in notes how the argument applies to expected utility theory.
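As one concrete, purely illustrative choice of bounded utility function (not one proposed in the text): let

$$u(x) \;=\; B\left(1 - e^{-x/B}\right)$$

for some bound B > 0, which is increasing in value x but never exceeds B. Then, for the lotteries defined in the introduction,

$$EU(L_{\text{risky}}) = \varepsilon \, u(V) < \varepsilon B, \qquad EU(L_{\text{safe}}) = u(v),$$

so once ε < u(v)/B the safe option has the higher expected utility no matter how large V is. That is how a bounded utility function blocks Fanaticism; the cost is that it then faces the objections of Sections 4 through 6.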
I’ll assume throughout that probability takes on only real values from 0 to 1.
For instance, take (standard, welfarist) averageism. A population containing at least one blissful life of infinite (or arbitrarily long) duration will have average value greater than any finite value we choose. And so, to generate an averageist analogue of Dyson’s Wager, we can substitute an outcome containing this population for the outcome of arbitrarily many lives in the original wager.
Each of the other axiologies listed falls prey to devastating objections. See Arrhenius (2000), Huemer (2008), Greaves (2017), and chapters 17-19 of Parfit (1984).
This use of the term ‘fanaticism’ seems to originate with Bostrom (2011) and Beckstead (2013: chapter 6) (however, Beckstead uses the term ‘Fanaticism’ for a similar claim specific to infinite values, and instead uses ‘Recklessness’ for a claim more akin to my version of Fanaticism). My formulation is slightly stronger than each of theirs but also, unlike theirs, applicable even if infinite total value cannot exist. For discussion of whether outcomes with infinite moral value are possible and how we might coherently compare them, see Bostrom 2011; Askell 2018; Wilkinson 2020; Wilkinson n.d.
Equivalently, these are representations of value on interval scales.
Smith’s proposal can be interpreted in two different ways, only one of which rules out Fanaticism. By the other interpretation, which Smith prefers, we still ignore events with probability below some threshold but, in any lottery over finitely many different outcomes, that threshold is set below the probability of the least probable outcome. This is compatible with Fanaticism while still avoiding the problems with which Smith is more concerned: counterintuitive verdicts in the St Petersburg and Pasadena games.