I interpret OP’s point about asymptotes to mean that he indeed bites this bullet and believes that the “compensation schedule” is massively higher even when the “instrument” only feels slightly worse?
Great points, both. I agree that the kind of tradeoff/scenario described by @EJT and by @bruce in his comment is the strongest/most important objection to my view (and the thing most likely to make me change my mind).
Let me just quote Bruce to get the relevant info in one place, so this comment can serve as a dual response/update. I think the fundamentals are pretty similar between EJT’s and Bruce’s examples, even though the exact wording/implementation differs:
A) 70 years of non-offsettable suffering, followed by 1 trillion happy human lives and 1 trillion happy pig lives, or
B) [70 years minus 1 hour] of non-offsettable suffering (NOS), followed by 1 trillion unhappy human lives lived at barely offsettable suffering (BOS), followed by 1 trillion pig lives lived at the BOS level,
You would prefer option B here. It’s not at all obvious to me that we should find this deal more acceptable or intuitive than what I understand is basically an extreme form of the Very Repugnant Conclusion; I’m not sure you’ve made a compelling case that we should, or that world B contains less relevant suffering.
to which I replied:
Yeah, not going to lie, this is an important point. I have three semi-competing responses:
1. I’m much more confident about the (positive wellbeing + suffering) vs. neither trade than about intra-suffering trades. It sounds right that something like the tradeoff you describe follows from the most intuitive version of my model, but I’m not actually certain of this; maybe there is a system that fits within the bounds of the thing I’m arguing for that chooses A instead of B (with no money pumps or very implausible conclusions following).
2. Well, the question again is “what would the IHE under experiential totalization do?” Insofar as the answer is “A”, I endorse that. I want to lean on this type of thinking much more strongly than on hyper-systematic quasi-formal inferences about what indirectly follows from my thesis. And I think it’s possible that the answer is just B, because BOS is radically qualitatively different from NOS.
3. Maybe most importantly, I (tentatively?) object to the term “barely” here, because under the asymptotic model I suggest, subtracting an arbitrarily small amount ϵ of the suffering instrument from the NOS state results in no change in moral value at all, because (to quote myself again) “Working in the extended reals, this is left-continuous: lim_{i_s → i_s*⁻} ϕ(i_s) = +∞ = ϕ(i_s*)”. So in order to get BOS, we need to remove something larger than any such ϵ, and it then becomes a quasi-empirical question how different that actually feels from the inside. Plausibly the answer is that “BOS” (scare quotes) doesn’t actually feel “barely” different; it feels extremely and categorically different.
Consider “which of these responses, if any, is correct” a bit of an open question for me.
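To make (3) concrete, here is one functional form consistent with the asymptotic model. My post deliberately leaves the shape unspecified, so the constant k and the asymptote location i_s* below are purely illustrative:

$$
\phi(i_s) =
\begin{cases}
\dfrac{k\, i_s}{\,i_s^{*} - i_s\,} & \text{if } i_s < i_s^{*} \\[6pt]
+\infty & \text{if } i_s \ge i_s^{*}
\end{cases}
$$

Working in the extended reals, $\lim_{i_s \to i_s^{*-}} \phi(i_s) = +\infty = \phi(i_s^{*})$, so $\phi$ is left-continuous at the asymptote; and for any finite $\epsilon > 0$, $\phi(i_s^{*} - \epsilon) = k\,(i_s^{*} - \epsilon)/\epsilon$ is finite but grows without bound as $\epsilon \to 0^{+}$. In the limit of arbitrarily small removals the required compensation is unchanged at $+\infty$, which is the sense in which reaching a genuinely finitely-compensable (BOS) state requires removing something larger than any such ϵ.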
Plausibly I should have figured this out before writing/publishing my piece, but I’ve updated nontrivially (though certainly not all the way) towards just being wrong on the metaphysical claim.
This is in part because, after thinking some more since my reply to Bruce (and chatting with some LLMs), I’ve updated away from my points (1) and (2) above.
I am still struggling with (3), both at:
1. the conceptual level, of whether it could be the case that there are fundamental qualitative discontinuities corresponding to the asymptote location at arbitrarily small but not infinitesimal (!) changes in i_s; and
2. the quasi-empirical level, of whether that’s actually how things are.
Mostly the quasi-empirical level, though, I should add. I think (uncertain/tentative, etc.) that this is conceptually on the table.
So to respond to Ben:
I interpret OP’s point about asymptotes to mean that he indeed bites this bullet and believes that the “compensation schedule” is massively higher even when the “instrument” only feels slightly worse?
I don’t bite the bullet on the most natural reading of this, where very small changes in i_s result in only very small changes in suffering from a subjective, qualitative POV. Insofar as that reading is conceptually and empirically correct, I (tentatively) think it’s a counterexample that more or less disproves my metaphysical claim.
But I feel pretty conflicted right now about whether “small but not infinitesimal change in i_s → subjectively small difference” is true (again, mostly because of quasi-empirical uncertainty).
This is hard to think about largely because my model/view leaves the actual shape of the asymptote unspecified (here’s a new version of the second pic in my post), including all the uncertainty associated with what instrument we are literally or conceptually talking about (since the sole criterion is that it’s monotonic)[1]
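As a quick illustration of how much the unspecified shape matters, here’s a minimal sketch (the functional forms and constants are mine, purely illustrative, not anything argued for in the post) comparing two monotone schedules that share the same asymptote location but diverge at very different rates:

```python
import numpy as np

# Two monotone compensation schedules with the same asymptote at
# i_star = 1.0 but very different behavior near it. Both satisfy the
# model's sole criterion (monotonicity); the forms are illustrative.
i_star = 1.0

def phi_gentle(i_s):
    # Diverges slowly, like -log of the distance to the asymptote.
    return -np.log(i_star - i_s)

def phi_steep(i_s):
    # Diverges fast, like 1 / distance^3.
    return 1.0 / (i_star - i_s) ** 3

for delta in [1e-1, 1e-3, 1e-6]:
    i_s = i_star - delta  # a state sitting delta below the asymptote
    print(f"delta={delta:.0e}  gentle={phi_gentle(i_s):10.2f}  steep={phi_steep(i_s):.2e}")
```

At δ = 1e-6 below the asymptote, the first schedule demands about 14 units of compensation and the second about 1e18, so what a state “just below” the asymptote demands (and plausibly how it feels) is radically underdetermined until the shape and the instrument are pinned down.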
I will add that one reason I think this might be a correct “way out” is that it would just be very strange to me if “the IHE’s preference is to refuse the 70-year torture-and-happiness trade mentioned in the post” logically entailed (maybe with some extremely basic additional assumptions like transitivity) “the IHE gives up divine bliss for a very small subjective amount of suffering mitigation”.
I know that this could just be a failure of cognition and/or imagination on my part. Tbh this is really the thing I’m trying to grok/wrestle with (as of now, i.e. for the last day or so; it’s not in the post).
I also know this is ~motivated reasoning, but idk, I just do think it has some evidential weight. Hard to justify in explicit terms though.
I’m curious if others have different intuitions about how weird/plausible this[2] is from a very abstract POV.
[2] I.e. that “the IHE’s preference is to refuse the 70-year torture-and-happiness trade mentioned in the post” logically entails (maybe with some extremely basic additional assumptions like transitivity) “the IHE gives up divine bliss for a very small subjective amount of suffering mitigation”.
I don’t bite the bullet on the most natural reading of this, where very small changes in i_s result in only very small changes in suffering from a subjective, qualitative POV. Insofar as that reading is conceptually and empirically correct, I (tentatively) think it’s a counterexample that more or less disproves my metaphysical claim.
Going along with “subjective suffering” (which I think is subject to the risks you mention here): to make the claim that the compensation schedule is asymptotic (which is pretty important to your topline claim re: offsettability), I think you can’t only be uncertain about Ben’s claim or “not bite the bullet”; you have to make a positive case for your claim. For example:
I will add that one reason I think this might be a correct “way out” is that it would just be very strange to me if “the IHE’s preference is to refuse the 70-year torture-and-happiness trade mentioned in the post” logically entailed (maybe with some extremely basic additional assumptions like transitivity) “the IHE gives up divine bliss for a very small subjective amount of suffering mitigation”.
Like, is it correct that, absent some categorical lexical property that you can identify, “the way out” is very dependent on you being able to support the claim “near the threshold, a small change in i_s → a large change in subjective experience”?
So I suspect your view is something like: “as i_s increases linearly, subjective experience increases in a non-linear way that approaches infinity at some point, earlier than 70 years of torture”?[1] If so, what’s the reason you think this is the correct view / am I missing something here?
RE: the shape of the asymptote and potential risks of conflating empirical uncertainties
I think this is an interesting graph, and you might feel like you can make some rough progress on this conceptually with your methodology. For example, how many years of bliss would the IHE need to be offered to be indifferent to the equivalent experience of:
1 person boiled alive for an hour at 100°C
Change the time variable to 30 minutes / 10 minutes / 5 minutes / 1 minute / 10 seconds / 1 second of the above experience[2]
Change the exposure variable to different percentages of the body (e.g. just the hand / entire arm / abdomen / chest / back, etc.)
(I would be separately interested in how the IHE would make tradeoffs if making a decision for others, where the choice was about 10 / 10,000 / 1E6 people undergoing all the above time/exposure variations rather than experiencing them themselves, but this is further away from your preferred methodology so I’ll leave it for another time)
And then plot the instrument with different combinations of the time/exposure/temperature variables. This could help you either elucidate the shape of your graph or locate the uncertainties around your time granularity.
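In sketch form (Python here just to make the grid explicit; elicit_bliss_years is a hypothetical stand-in for however the indifference judgment actually gets made, and the levels simply mirror the variations above):

```python
import itertools

# Elicitation grid for the tradeoffs described above. Levels mirror the
# comment; elicit_bliss_years is a hypothetical placeholder for the
# actual IHE judgment, so it returns a dummy value here.
durations_s = [3600, 1800, 600, 300, 60, 10, 1]  # 1 hour down to 1 second
exposures = [0.01, 0.05, 0.20, 0.50, 1.00]       # fraction of body exposed
temps_c = [35, 40, 45, 60, 80, 100]              # water temperature in °C

def elicit_bliss_years(duration_s, exposure, temp_c):
    """Years of bliss at which the IHE is indifferent to undergoing the
    condition. The judgment itself can't be computed; dummy value only."""
    return 0.0

results = [
    (d, e, t, elicit_bliss_years(d, e, t))
    for d, e, t in itertools.product(durations_s, exposures, temps_c)
]

# Holding two variables fixed and plotting bliss-years against the third
# traces out the curve's shape and shows where time-granularity
# uncertainty starts to bite.
```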
The reason I chose this over cluster headaches is partly that you can get more variables here, but if you wanted just a time comparison then cluster headaches might be easier.
But I actually think temperature is an interesting one to consider for multiple additional reasons. For example, it’s a real-life case where you get perceived discontinuities in response to continuous changes in some variable: you might be willing to tolerate 35°C water for a very long time, but as soon as it gets to 40°C+, its tolerability decreases very rapidly in a way that feels like a discontinuity.
But what’s happening here is that heat nociceptors activate at a specific temperature (say, e.g., 40°C). So below that temperature you basically aren’t moving up the suffering instrument at all, and the variables you’d change are “how many nociceptors you activate” and “how frequently they fire” (all of which are modulated by temperature and amount of skin exposed), which rise rapidly as you reach/exceed 40°C.[3]
And so if you naively plot “degrees” or “person-hours” on the bottom axis, you might think subjective suffering is going up exponentially against a linear increase in i_s, when you are actually not accounting for thresholds in i_s activation, or for increased sensitisation or recruitment of nociceptors over time, which might make the relationship look much less asymptotic.[4]
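A toy version of this confound (all constants illustrative; real nociception is far messier):

```python
import numpy as np

# Toy model: the suffering instrument i_s tracks nociceptor drive, which
# is ~zero below the activation threshold and recruits steeply above it.
THRESHOLD_C = 40.0  # approximate heat-nociceptor activation temperature

def nociceptor_drive(temp_c, skin_fraction=1.0):
    """Proxy for i_s: zero below threshold, supra-linear above it,
    scaled by how much skin is exposed."""
    excess = np.maximum(temp_c - THRESHOLD_C, 0.0)
    return skin_fraction * excess**2

for t in [35, 38, 40, 41, 43, 46, 50]:
    print(f"{t}°C -> drive {nociceptor_drive(t):6.1f}")
```

Plot “suffering vs. degrees” on this model and you see a sharp apparent discontinuity at 40°C even if subjective experience were exactly linear in i_s: the nonlinearity lives in the temperature → i_s mapping, not in i_s → experience, which is the conflation I’m worried about.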
And empirical uncertainties about exactly how these kinds of signals work and are processed are, I think, a potentially large limiting factor on being able to strongly support “as i_s increases linearly, subjective experience increases in a non-linear way that approaches infinity at some point”.[5]
I obviously don’t think it’s possible to have all the empirical questions worked out for the post, but I wanted to illustrate these empirical uncertainties because, even if I felt it would be correct for the IHE to reject some weaker version of the torture-bliss trade package[6], it would still be unclear whether this reflected an asymptotic relationship rather than, e.g., a large asymmetry in sensitivity to i_s vs. i_h, or in the maximum amounts of i_s and i_h possible. These possibilities could satisfy the (weaker) IHE thought experiment while yielding lexicality in practice but not in theory. They might also explain why you feel much more confident about lexicality with respect to happiness than about intra-suffering tradeoffs; and if you put the difference between things like 1E10 vs. 1E50 vs. 10^10^10 down to scope insensitivity, I do think this explains a decent portion of your views.
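To see how a bounded schedule can mimic an asymptotic one in practice (exponents and constants entirely made up for illustration), compare:

$$
\phi_{\text{asym}}(i_s) = \frac{i_s}{\,i_s^{*} - i_s\,}
\qquad\text{vs.}\qquad
\phi_{\text{bdd}}(i_s) = 10^{50}\left(\frac{i_s}{i_s^{\max}}\right)^{20}
$$

φ_bdd never goes infinite, but if the largest bliss package anyone could realistically offer is ~1E10 years, then φ_bdd already exceeds that for any i_s above about 1% of i_s^max (since 10^50 · (10^-2)^20 = 10^10): lexicality in practice, but not in theory.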
[1] And indeed 1 hour of cluster headache.
[2] I’m aware that approaching 1 second is getting towards your uncertainty for the time granularity problem, but I think if you do think 1 hour of cluster headache is NOS then these are the kinds of tradeoffs you’d want to be able to make (and back).
[3] There are other heat receptors that activate at higher temperatures, but to a first approximation it’s probably fine to ignore them.
[4] Because of uncertainty around how much i_s there actually is.
[5] Also worth flagging, re: footnote 26, where you say “e.g. 1E10 years, rather than infinity, since I find that pretty implausible and hard to reason about”: you should also expect this to apply to the suffering instrument; there is also some upper bound for all of these variables.
Oops yes, fundamentals between my and Bruce’s cases are very similar. Should have read Bruce’s comment!
The claim we’re discussing—about the possibility of small steps of various kinds—sounds kinda like a claim that gets called ‘Finite Fine-Grainedness’/‘Small Steps’ in the population axiology literature. It seems hard to convincingly argue for, so in this paper I present a problem for lexical views that doesn’t depend on it. I sort of gestured at it above with the point about risk without making it super precise. The one-line summary is that expected welfare levels are finitely fine-grained.