There are a few mistakes/gaps in the quantitative claims:
Continuity: If A ≻ B ≻ C, there's some probability p ∈ (0, 1) where a guaranteed state of the world B is ex ante morally equivalent to "lottery p·A + (1-p)·C" (i.e., p chance of state of the world A, and the rest of the probability mass of C)
This is not quite the same as either property 3 or property 3′ in the Wikipedia article, and it's plausible but unclear to me that you can prove 3′ from it. Property 3 uses "p ∈ [0, 1]" and 3′ has an inequality; it seems like the argument still goes through with 3′ so I'd switch to that, but then you should also say why 3 is unintuitive to you, because VNM only requires 3 OR 3′.
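For readers without the article open, here is how I understand the two properties, restated from memory in the same L ≻ M ≻ N orientation used later in this thread (double-check against the article's exact wording):

```latex
% Property 3 (continuity): an exact indifference point exists,
% with p allowed to be 0 or 1.
\text{(3)}\quad L \succeq M \succeq N \;\Longrightarrow\;
  \exists\, p \in [0,1] :\; pL + (1-p)N \sim M

% Property 3' (Archimedean): strict preference survives sufficiently
% small perturbations; only inequalities, no exact indifference point.
\text{(3}'\text{)}\quad L \succ M \succ N \;\Longrightarrow\;
  \exists\, \varepsilon \in (0,1) :\;
  (1-\varepsilon)L + \varepsilon N \succ M \succ \varepsilon L + (1-\varepsilon)N
```

The salient difference for this discussion: 3 asserts an exact indifference point with p ranging over the closed interval, while 3′ only asserts strict inequalities for some small ε.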
This arbitrariness diminishes somewhat (though, again, not entirely) when viewed through the asymptotic structure. Once we accept that compensation requirements grow without bound as suffering intensifies, some threshold becomes inevitable. The asymptote must diverge somewhere; debates about exactly where are secondary to recognizing the underlying pattern.
"Grow without bound" just means that for any M, we have f(X) > M for sufficiently large X. This is different from there being a vertical asymptote, so a threshold is not inevitable. For instance one could have f(X) = X or f(X) = X^2.
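A tiny numerical illustration of the distinction (the specific function forms are my own illustrative choices, not from the post):

```python
# Two compensation functions: both "grow without bound" as intensity x
# increases, but only the second has a vertical asymptote, i.e. a
# finite threshold T at which required compensation diverges.

def f_unbounded(x):
    # For any bound M, f(x) > M once x is large enough, yet f is
    # finite at every x: no threshold ever appears.
    return x ** 2

def f_asymptote(x, T=10.0):
    # Diverges as x approaches the finite threshold T from below.
    return 1.0 / (T - x)

M = 1e6
assert f_unbounded(2000.0) > M            # eventually exceeds any bound
assert f_unbounded(1e12) < float("inf")   # but is finite everywhere

assert f_asymptote(9.99) > 50             # blows up only near x = T
assert f_asymptote(1.0) < 1               # and is tame far from T
```

So "compensation grows without bound" is strictly weaker than "compensation has a vertical asymptote at some finite intensity".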
To be clear, whether we call this behavior "continuous" depends on mathematical context and convention. In standard calculus, a function that approaches infinity exhibits an infinite discontinuity. [...]
[1] In the extended reals with appropriate topology, such a function can be rigorously called left-continuous.
It would be confusing to call this behavior continuous, because (a) the VNM axiom you reject is called continuity and (b) we are not using any other properties of the extended reals, but we are using real-valued probabilities and x values.
Once you've accepted that some suffering might require a number of flourishing lives that you could not write down, compute, or physically instantiate to morally justify, at least in principle, the additional step to "infinite" is smaller in some important conceptual sense than it might seem prima facie.
This may seem like a nitpick, but "write down", "compute", and "physically instantiate" are wildly different ranges of numbers. The largest number one could "physically instantiate" is something like 10^50 minds; the most one could "write down" the digits of is something like 10^10^10.
Not all large numbers are the same here, because if one thinks the offset ratio for a cluster headache is in the 10^50 range, there are only 50 "levels" of suffering, each of which is 10x worse than the last. If it's over 10^10^10, there are over 10 billion such "levels", it would be impossible to rate cluster headaches on a logarithmic pain scale, and we would happily give everyone on Earth (say) a level 10,000,000,000 cluster headache to prevent one person from having a (slightly worse than average) level 10,000,000,010 cluster headache. Moving from 10^10^10 to infinity, we would then believe that suffering has a threshold t where t + epsilon intensity suffering cannot be offset by removing t - epsilon intensity suffering, and also need to propose some other mechanism like lexicographic order for how to deal with suffering above the infinite badness threshold.
So it's already a huge step to move from numbers we can "physically instantiate" to ones we can barely "write down", and another step from there to infinity; at both steps your treatment of comparisons between different suffering intensities changes significantly, even in thought experiments without an unphysically large number of beings.
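The "everyone on Earth vs one person" comparison above can be checked in log10 space (10^10^10 is far too large to represent directly); the badness model badness(level) = 10^level and the ~8 billion population figure are assumptions for the sketch:

```python
import math

# Each "level" is 10x worse than the last, so a level-L headache has
# badness 10**L. For L = 10**10 that integer is astronomically large,
# so compare log10 of total badness instead.

EARTH_POP = 8e9                 # assumed population, ~8 billion
L = 10_000_000_000              # level 10,000,000,000

# Everyone on Earth gets a level-L headache:
log_total_everyone = math.log10(EARTH_POP) + L    # ~ L + 9.9

# One person gets a slightly worse, level-(L + 10) headache:
log_total_one = L + 10

# The single level-(L + 10) headache is worse in aggregate, which is
# exactly the counterintuitive trade described above.
assert log_total_one > log_total_everyone
```

The point: once levels number in the billions, a 10-level gap between two headaches swamps a factor-of-Earth's-population difference in how many people suffer them.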
Thanks, yeah I may have gotten slightly confused when writing.
1) VNM
Wikipedia screenshot:
Let P be the thing I said in the post:
If A ≻ B ≻ C, there's some probability p ∈ (0, 1) where a guaranteed state of the world B is ex ante morally equivalent to "lottery p·A + (1-p)·C"

or, symbolically,

P ≡ (A ≻ B ≻ C ⇒ ∃ p ∈ (0, 1) [B ∼ pA + (1-p)C])

and let Q be the Archimedean property 3′:

Q ≡ (L ≻ M ≻ N ⇒ ∃ ε ∈ (0, 1) [(1-ε)L + εN ≻ M ≻ εL + (1-ε)N])

I think (P and Independence) ⇒ Q but not P ⇒ Q in general.
So my writing was sloppy. Super good catch (not caught by any of the various LLMs iirc!)
But for the purposes of the argument everything holds together because you need independence axiom for VNM to hold. But still, sloppy.
2) "grows without bound" bit
Me: "This arbitrariness diminishes somewhat (though, again, not entirely) when viewed through the asymptotic structure. Once we accept that compensation requirements grow without bound as suffering intensifies, some threshold becomes inevitable. The asymptote must diverge somewhere; debates about exactly where are secondary to recognizing the underlying pattern."
You:
"Grow without bound" just means that for any M, we have f(X) > M for sufficiently large X. This is different from there being a vertical asymptote, so a threshold is not inevitable. For instance one could have f(X) = X or f(X) = X^2.
Straightforward error by me, I will change the wording. Not sure how that happened
3) "continuity"
It would be confusing to call this behavior continuous, because (a) the VNM axiom you reject is called continuity and (b) we are not using any other properties of the extended reals, but we are using real-valued probabilities and x values.
Yeah, idk, English and math only provide so many words. I could have spent more words driving home and clarifying this point, or invented and defined additional terms. My intuition is that it's clear enough as is (evidently we disagree about this), but if a couple other people say "yeah, this is misleading and confusing" then I'll concede that I made a bad choice about clarity vs brevity as a writing decision.
4) "write down", "compute", and "physically instantiate" till end
Ngl I am pretty confused about everything starting here. I think I'm just reading you wrong somehow. Like the difference in those magnitudes is huge, point taken, but I don't see why that matters for my argument.
Moving from 10^10^10 to infinity, we would then believe that suffering has a threshold t where t + epsilon intensity suffering cannot be offset by removing t - epsilon intensity suffering
Confused here because yeah clearly adding t+epsilon and removing t-epsilon gives you a net change below zero. But I sense you might be getting at the (very substantive and important) cluster of critiques I respond to in this comment (?)
also need to propose some other mechanism like lexicographic order for how to deal with suffering above the infinite badness threshold.
Yeah I'm ~totally agnostic about this in the post. There are many substantively different possibilities about what the moral world might be like when dealing above that threshold, I agree! Could be distinct levels of lexicality, perhaps some literal integer like 13 levels or perhaps arbitrarily many. Probably other solutions/models as well
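As a toy illustration of one of those possibilities, a two-tier lexicographic ordering can be sketched with Python tuples (the two-tier structure and the names here are my illustrative assumptions, not a model from the post):

```python
# Value of a world as (negated above-threshold suffering, ordinary welfare).
# Python compares tuples lexicographically, so any reduction in
# above-threshold suffering dominates any finite change in ordinary welfare.

def world_value(above_threshold_suffering, ordinary_welfare):
    # Less above-threshold suffering is strictly better; ordinary
    # welfare only breaks ties within a suffering tier.
    return (-above_threshold_suffering, ordinary_welfare)

# A world with zero welfare but no above-threshold suffering beats a
# world with astronomical welfare and one unit of such suffering:
assert world_value(0, 0) > world_value(1, 10**100)

# Within a tier, ordinary trade-offs apply as usual:
assert world_value(0, 5) > world_value(0, 3)
```

Arbitrarily many lexical levels would just mean longer tuples, one coordinate per level, still compared left to right.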
Maybe I should just remove/modify the "write down", "compute", and "physically instantiate" bit of rhetorical flourish because it might be doing more harm than good.
(Note that it may take me some time to update the post to reflect sections 1 and 2 in this comment)
Again, sharp eye, thanks for the comment!