I think the Train to Crazytown is a result of mistaken utilitarian calculations, not an intrinsic flaw in utilitarianism. If we can’t help but make such mistakes, then perhaps utilitarianism would insist we take that risk into account when deciding whether or not to follow through on such calculations.
Take the St. Petersburg Paradox, in its button-pushing form: a 51% chance of doubling the world’s value against a 49% chance of losing it all. A one-off button push has positive expected utility. But no rational gambler would take such a series of bets, even if they’re entirely motivated by money.
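(A quick sanity check: here’s a minimal Python simulation, assuming the 51/49 full-stake reading used below, showing that repeated full-bankroll play is almost certain ruin.)

```python
import random

def survival_rate(trials=100_000, rounds=100, p_win=0.51):
    """Fraction of gamblers still solvent after repeatedly betting
    their entire bankroll on a 51/49 double-or-nothing."""
    survivors = 0
    for _ in range(trials):
        # Betting everything means one loss is ruin, so surviving
        # requires winning every single round.
        if all(random.random() < p_win for _ in range(rounds)):
            survivors += 1
    return survivors / trials

print(survival_rate())  # 0.0 in practice; the true rate is 0.51**100 ≈ 5.7e-30
```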
The Kelly criterion gives us the theoretically optimal bet size for a bet at given odds. The Paradox’s button is exactly such a bet, an even-money one, denominated in expected utility rather than money.
The Paradox proposes sizing the bet at 100% of bankroll. So to find what net odds B (the payout per unit staked) would make that stake optimal, we set the Kelly fraction f = p − q/B equal to 1 and solve for B:
1 = 0.51 − 0.49/B
This gives B = −1. Odds can’t be negative, so no payout, however large, makes staking the entire bankroll optimal; the Kelly criterion says never to take the bet at full stake.
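(Here’s that arithmetic as a minimal Python sketch, under the same 51/49 even-money reading; nothing here is anyone’s canonical formulation.)

```python
p, q = 0.51, 0.49

# Kelly stake as a fraction of bankroll: f* = p - q/b,
# where b is the net odds (profit per unit staked on a win).
def kelly_fraction(b):
    return p - q / b

# Even-money bet (b = 1): Kelly says stake only 2% of bankroll.
print(kelly_fraction(1))  # 0.02 (up to float rounding)

# The Paradox instead fixes f = 1 (stake everything) and asks what
# odds would make that optimal: 1 = p - q/B  =>  B = q / (p - 1).
print(q / (p - 1))  # ≈ -1.0: no positive odds make a 100% stake optimal
```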
The implicit utility function in Kelly (log of bankroll) amounts to rejecting additive aggregation/utilitarianism. It would say that doubling goodness from 100 to 200 has the same decision value as doubling it from 100 billion to 200 billion, even though in the latter case the benefit conferred is a billion times greater.
It also, absurdly, says that the loss goes to infinity as the bankroll goes to zero. So it will reject any finite benefit of any kind to avoid even an infinitesimal chance of going to zero. If you say that the world ending has infinite disutility then of course you won’t press a button with any chance of ending the world, but you’ll also sacrifice everything else to push that probability down, e.g. giving up almost everything good about the world for the last tiny slice of probability.
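(Both properties are easy to verify numerically; a small Python sketch, reusing the 51/49 button from upthread:)

```python
import math

# Equal ratios earn equal log-utility gains: 100 -> 200 scores the
# same as 100 billion -> 200 billion, despite the latter conferring
# a billion times more additive benefit.
print(math.log(200) - math.log(100))      # 0.693... (= ln 2)
print(math.log(200e9) - math.log(100e9))  # 0.693... (= ln 2)

# Log utility is unbounded below at zero, so any nonzero chance of
# total ruin drags the expected log utility of a full-bankroll bet
# down to negative infinity, no matter how large the upside.
p = 0.51
print(p * math.log(2) + (1 - p) * float("-inf"))  # -inf
```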
That’s a helpful correction/clarification, thank you!
I suppose this is why it’s important to be cautious about overapplying a particular utilitarian calculation: you (or in this case, I) might be going about it wrong, even when the right ultimate conclusion would be supported by a correct utilitarian calculus.
I don’t understand the relevance of the Kelly criterion. The Wikipedia page for the Kelly criterion states that “[t]he Kelly bet size is found by maximizing the expected value of the logarithm of wealth,” but that’s not relevant here, is it?
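(That quoted definition is in fact where the formula used upthread comes from: maximizing E[log wealth] over the stake fraction f yields f* = p − q/b. A quick numerical sketch with illustrative values:)

```python
import math

def expected_log_growth(f, p=0.51, b=1.0):
    """Expected log wealth growth per bet when staking
    fraction f of bankroll at net odds b."""
    q = 1 - p
    return p * math.log(1 + f * b) + q * math.log(1 - f)

# Grid-search the maximizer; the Kelly closed form predicts
# f* = p - q/b = 0.51 - 0.49 = 0.02 for this even-money bet.
fs = [i / 10000 for i in range(0, 9999)]  # f in [0, 0.9998]
print(max(fs, key=expected_log_growth))   # 0.02
```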