The implicit utility function in Kelly (log of bankroll) amounts to rejecting additive aggregation/utilitarianism. It says that doubling goodness from 100 to 200 has the same decision value as doubling it from 100 billion to 200 billion, even though in the latter case the benefit conferred is a billion times greater.
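A minimal numeric sketch of that point, using Python and arbitrary example figures: under log utility, any doubling contributes the same amount, log(2), regardless of the starting level.

```python
# Illustrative numbers only: under log utility, a doubling has the same
# decision value whether it starts from 100 or from 100 billion.
import math

gain_small = math.log(200) - math.log(100)        # doubling from 100
gain_large = math.log(200e9) - math.log(100e9)    # doubling from 100 billion

print(gain_small, gain_large)          # both equal log(2) ~= 0.693
print((200e9 - 100e9) / (200 - 100))   # yet the absolute benefit is 1e9 times larger
```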
It also absurdly says that the loss goes to infinity as the bankroll goes to zero, so it will reject any finite benefit of any kind to prevent even an infinitesimal chance of going to zero. If you say that the world ending has infinite disutility, then of course you won't press a button with any chance of ending the world, but you'll also sacrifice everything else to push that probability downward, e.g. taking away almost everything good about the world for the last tiny slice of probability.
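To see how any chance of hitting zero swamps the calculation, here is a short sketch; the expected_log_utility helper and the specific probabilities are illustrative assumptions, not anything from the original comment.

```python
# Sketch: with log utility, any nonzero chance of ending at zero bankroll
# drives expected utility to minus infinity, so the agent turns down
# arbitrarily large finite upside to avoid it. Numbers are made up.
import math

def expected_log_utility(outcomes):
    """outcomes: list of (probability, bankroll) pairs; log(0) treated as -inf."""
    return sum(p * (math.log(b) if b > 0 else float("-inf")) for p, b in outcomes)

# A gamble with enormous upside but a one-in-a-million chance of total ruin.
gamble = [(0.999999, 1e12), (0.000001, 0.0)]
status_quo = [(1.0, 100.0)]

print(expected_log_utility(gamble))      # -inf
print(expected_log_utility(status_quo))  # log(100) ~= 4.61, so the status quo wins
```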
That’s a helpful correction/clarification, thank you!
I suppose this is why it’s important to be cautious about overapplying a particular utilitarian calculation—you (or in this case, I) might be wrong in how you’re going about it, even though the right ultimate conclusion is justified on the basis of a correct utilitarian calculus.