In his examples (R² and Rⁿ lexically ordered) there is no "most intense suffering which can be outweighed" (or "least intense suffering which can't be outweighed"). E.g. in the hyperreals, ∀n₁, n₂ ∈ ℝ: n₁ω > n₂, no matter how small n₁ or how large n₂.
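A toy way to see this ordering (my own sketch, not EJT's formalism): if we encode a hyperreal of the form a·ω + b as the pair (a, b), Python's lexicographic tuple comparison reproduces the claim that n₁ω > n₂ for any positive n₁, however small, and any real n₂, however large:

```python
from fractions import Fraction

# Toy encoding (my own sketch): a hyperreal a*omega + b as the pair (a, b).
# Python compares tuples lexicographically, matching the lexical ordering.
def hyper(a, b):
    return (Fraction(a), Fraction(b))

tiny_n1 = hyper(Fraction(1, 10**12), 0)  # n1*omega with n1 = 10^-12
huge_n2 = hyper(0, 10**12)               # the plain real number 10^12

assert tiny_n1 > huge_n2  # n1*omega > n2, however small n1 / large n2

# And no "least" positive coefficient exists: halving n1 always still works.
assert hyper(tiny_n1[0] / 2, 0) > huge_n2
```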
S* is only a tiny bit worse than S
In his examples, between any S which can't be outweighed and S* which can, there are uncountably many additional levels of suffering! So I don't think it's correct to say it's only a tiny bit worse.
Oh yep, nice point, though note that, e.g., there are uncountably many reals between 1,000,000 and 1,000,001, and yet it still seems correct (at least talking loosely) to say that 1,000,001 is only a tiny bit bigger than 1,000,000.
But in any case, we can modify the argument to say that S* feels only a tiny bit worse than S. Or instead we can modify it so that S is the degrees Celsius of a fire that causes suffering that just about can be outweighed, and S* is the degrees Celsius of a fire that causes suffering that just about can't be outweighed.
I interpret OP's point about asymptotes to mean that he indeed bites this bullet and believes that the "compensation schedule" is massively higher even when the "instrument" only feels slightly worse?
Great points both, and I agree that the kind of tradeoff/scenario described by @EJT and by @bruce in his comment is the strongest/best/most important objection to my view (and the thing most likely to make me change my mind).
Let me just quote Bruce to get the relevant info in one place and so this comment can serve as a dual response/update. I think the fundamentals are pretty similar (between EJT's and Bruce's examples) even though the exact wording/implementation is not:
A) 70 years of non-offsettable suffering, followed by 1 trillion happy human lives and 1 trillion happy pig lives, or
B) [70 years minus 1 hour of non-offsettable suffering (NOS)], followed by 1 trillion unhappy humans living at barely offsettable suffering (BOS), followed by 1 trillion pig lives living at BOS,
You would prefer option B here. And it's not at all obvious to me that we should find this deal more acceptable or intuitive than what I understand is basically an extreme form of the Very Repugnant Conclusion, and I'm not sure you've made a compelling case for this, or that world B contains less relevant suffering.
to which I replied:
Yeah, not going to lie, this is an important point. I have three semi-competing responses:
I'm much more confident about the (positive wellbeing + suffering) vs neither trade than about intra-suffering trades. It sounds right that something like the tradeoff you describe follows from the most intuitive version of my model, but I'm not actually certain of this; maybe there is a system that fits within the bounds of the thing I'm arguing for that chooses A instead of B (with no money pumps/very implausible conclusions following).
Well, the question again is "what would the IHE under experiential totalization do?" Insofar as the answer is "A", I endorse that. I want to lean on this type of thinking much more strongly than on hyper-systematic quasi-formal inferences about what indirectly follows from my thesis.
I think it's possible that the answer is just B, because BOS is just radically qualitatively different from NOS.
Maybe most importantly, I (tentatively?) object to the term "barely" here, because under the asymptotic model I suggest, subtracting an arbitrarily small amount of suffering instrument ε from the NOS state results in no change in moral value at all, because (to quote myself again) "Working in the extended reals, this is left-continuous: lim_{i → i_s⁻} σ(i) = +∞ = σ(i_s)".
So in order to get BOS, we need to remove something larger than ε, and now it's a quasi-empirical question of how different that actually feels from the inside. Plausibly the answer is that "BOS" (scare quotes) doesn't actually feel "barely" different; it feels extremely and categorically different.
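To make the "larger than ε" point concrete, here's a minimal sketch under an assumed toy schedule (the 1/(i* − i) shape and the threshold value 70 are my illustrative choices, not the author's model): the schedule is finite below the threshold and +∞ at or above it, so any real-valued decrement yields a finite (hence offsettable) value, but that value grows without bound as the decrement shrinks:

```python
import math

I_STAR = 70.0  # hypothetical threshold on the suffering instrument

def sigma(i):
    """Toy compensation schedule: finite below the threshold, +inf at it."""
    if i >= I_STAR:
        return math.inf            # NOS: no finite compensation suffices
    return 1.0 / (I_STAR - i)      # finite, but blows up near the threshold

# A real (non-infinitesimal) decrement eps gives a finite value...
assert sigma(I_STAR - 1e-9) < math.inf
# ...which is unboundedly large as eps shrinks, consistent with
# lim_{i -> i_s^-} sigma(i) = +inf = sigma(i_s) in the extended reals.
assert sigma(I_STAR - 1e-9) > 1e8
assert sigma(I_STAR) == math.inf
```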
Consider "which of these responses, if any, is correct" a bit of an open question for me.
Plausibly I should have figured this out before writing/publishing my piece, but I've updated nontrivially (though certainly not all the way) towards just being wrong on the metaphysical claim.
This is in part because, after thinking some more since my reply to Bruce (and chatting with some LLMs), I've updated away from my points (1) and (2) above.
I am still struggling with (3) both at:
the conceptual level of whether it could be the case that there are fundamental qualitative discontinuities corresponding to the asymptote location at arbitrarily small but not infinitesimal (!) changes in i_s; and
the quasi-empirical level of whether thatâs actually how things are
Mostly (2) though, I should add. I think (uncertain/tentative, etc.) that this is conceptually on the table.
So to respond to Ben:
I interpret OP's point about asymptotes to mean that he indeed bites this bullet and believes that the "compensation schedule" is massively higher even when the "instrument" only feels slightly worse?
I don't bite the bullet in the most natural reading of this, where very small changes in i_s do only result in very small changes in subjective suffering from a subjective qualitative POV. Insofar as that is conceptually and empirically correct, I (tentatively) think it's a counterexample that more or less disproves my metaphysical claim (if true/legit).
But I feel pretty conflicted right now about whether the "small but not infinitesimal change in i_s → subjectively small difference" claim is true (again, mostly because of quasi-empirical uncertainty).
This is hard to think about, largely because my model/view leaves the actual shape of the asymptote unspecified (here's a new version of the second pic in my post), and that includes all the uncertainty associated with what instrument we are literally or conceptually talking about (since the sole criterion is that it's monotonic)[1]
I will add that one reason I think this might be a correct "way out" is that it would just be very strange to me if "IHE preference is to refuse the 70-year torture-and-happiness trade mentioned in the post" logically entails (maybe with some extremely basic additional assumptions like transitivity) "IHE gives up divine bliss for a very small subjective amount of suffering mitigation".
I know that this could just be a failure of cognition and/or imagination on my part. Tbh this is really the thing that I'm trying to grok/wrestle with (as of now, like for the last day or so, not in the post).
I also know this is ~motivated reasoning, but idk I just do think it has some evidential weight. Hard to justify in explicit terms though.
I'm curious if others have different intuitions about how weird/plausible this[2] is from a very abstract POV.
"IHE preference is to refuse the 70-year torture-and-happiness trade mentioned in the post" logically entails (maybe with some extremely basic additional assumptions like transitivity) "IHE gives up divine bliss for a very small subjective amount of suffering mitigation"
I don't bite the bullet in the most natural reading of this, where very small changes in i_s do only result in very small changes in subjective suffering from a subjective qualitative POV. Insofar as that is conceptually and empirically correct, I (tentatively) think it's a counterexample that more or less disproves my metaphysical claim (if true/legit).
Going along with "subjective suffering" (which I think is subject to the risks you mention here): to make the claim that the compensation schedule is asymptotic (which is pretty important to your topline claim RE: offsettability), I think you can't only be uncertain about Ben's claim or "not bite the bullet"; you have to make a positive case for your claim. For example:
I will add that one reason I think this might be a correct "way out" is that it would just be very strange to me if "IHE preference is to refuse the 70-year torture-and-happiness trade mentioned in the post" logically entails (maybe with some extremely basic additional assumptions like transitivity) "IHE gives up divine bliss for a very small subjective amount of suffering mitigation"
Like, is it correct that, absent some categorical lexical property that you can identify, "the way out" is very dependent on you being able to support the claim "near the threshold, a small change in i_s → a large change in subjective experience"?
So I suspect your view is something like: "as i_s increases linearly, subjective experience increases in a non-linear way that approaches infinity at some point, earlier than 70 years of torture"?[1] If so, what's the reason you think this is the correct view / am I missing something here?
RE: the shape of the asymptote and potential risks of conflating empirical uncertainties
I think this is an interesting graph, and you might feel like you can make some rough progress on this conceptually with your methodology. For example, how many years of bliss would the IHE need to be offered to be indifferent to the equivalent experience of:
1 person boiled alive for an hour at 100°C
Change the time variable to 30 min / 10 min / 5 min / 1 minute / 10 seconds / 1 second of the above experience[2]
Change the exposure variable to different % of the body (e.g. just the hand / entire arm / abdomen / chest / back, etc.)
(I would be separately interested in how the IHE would make tradeoffs if making a decision for others and the choice was about 10 / 10,000 / 1E6 people having all the above time/exposure variations, rather than experiencing it themselves, but this is further away from your preferred methodology so I'll leave it for another time)
And then plot the instrument graph with different combinations of the time/exposure/temperature variables. This could help you either elucidate the shape of your graph, or locate the uncertainties around your time granularity.
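One way to operationalize this elicitation procedure (a scaffolding sketch of mine; the particular grid values are arbitrary stand-ins, not Bruce's exact proposal): enumerate the time/exposure/temperature combinations, elicit an IHE indifference point for each, then plot slices of the resulting surface:

```python
from itertools import product

# Illustrative grid values (my assumptions, not from the comment)
times_s = [3600, 1800, 600, 300, 60, 10, 1]   # 1 hour down to 1 second
exposure_frac = [0.01, 0.05, 0.2, 0.5, 1.0]   # roughly hand ... whole body
temps_c = [40, 60, 80, 100]

# Each (time, exposure, temperature) cell would get an elicited
# "years of bliss for indifference" value; plotting those against one
# variable with the others held fixed traces a slice of the instrument graph.
grid = list(product(times_s, exposure_frac, temps_c))
assert len(grid) == 7 * 5 * 4  # 140 elicitation scenarios
```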
The reason I chose this over cluster headaches is partly because you get more variables here, but if you wanted just a time comparison then cluster headaches might be easier.
But I actually think temperature is an interesting one to consider for multiple additional reasons. For example, it's interesting as a real-life example where you have perceived discontinuities of response to continuous changes in some variable. You might be willing to tolerate 35°C water for a very long time, but as soon as it gets to 40+ its tolerability decreases very rapidly, in a way that feels like a discontinuity.
But what's happening here is that heat nociceptors activate at a specific temperature (say, 40°C). So you basically just aren't moving up the suffering instrument below that temperature ~at all, and so the variable you'd change is "how many nociceptors do you activate" or "how frequently do they fire" (both of which are modulated by temperature and amount of skin exposed), and that rapidly goes up as you reach/exceed 40°C.[3]
And so if you naively plot "degrees" or "person-hours" on the x-axis, you might think subjective suffering is going up exponentially compared to a linear increase in i_s, but you are not accounting for thresholds in i_s activation, or for increased sensitisation or recruitment of nociceptors over time, which might make the relationship look much less asymptotic.[4]
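The nociceptor-threshold point can be sketched as a toy model (my own illustration; the 40°C threshold and the quadratic recruitment curve are stand-ins, not physiology):

```python
def i_s_from_temp(temp_c, threshold=40.0, gain=2.0):
    """Toy suffering-instrument reading driven by nociceptor recruitment:
    ~zero below the activation threshold, rising steeply above it."""
    if temp_c <= threshold:
        return 0.0                             # nociceptors not yet firing
    return gain * (temp_c - threshold) ** 2    # rapid recruitment past it

# Temperature changes continuously, but the instrument (and hence the
# subjective experience) shows a perceived discontinuity near 40 degC:
assert i_s_from_temp(35.0) == 0.0   # tolerable for a long time
assert i_s_from_temp(39.9) == 0.0   # still ~nothing on the instrument
assert i_s_from_temp(42.0) > 0.0    # sharp rise once activation begins
assert i_s_from_temp(50.0) > i_s_from_temp(42.0)
```

Plotting i_s against temperature here would look discontinuous even though temperature varies smoothly, which is the conflation risk described above.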
And I think empirical uncertainties about exactly how these kinds of signals work and are processed are a potentially large limiting factor for being able to strongly support "as i_s increases linearly, subjective experience increases in a non-linear way that approaches infinity at some point".[5]
I obviously don't think it's possible to have all the empirical Qs worked out for the post, but I wanted to illustrate these empirical uncertainties because, even if I felt it would be correct for the IHE to reject some weaker version of the torture-bliss trade package[6], it would still be unclear whether this reflected an asymptotic relationship rather than just, e.g., a large asymmetry between sensitivity to i_s and i_h, or between the maximum amounts of i_s and i_h possible. I think these possibilities could satisfy the (weaker) IHE thought experiment while potentially satisfying lexicality in practice, but not in theory. It might also explain why you feel much more confident about lexicality WRT happiness but not intra-suffering tradeoffs, and if you put the difference between things like 1E10 vs 1E50 vs 10^10^10 down to scope insensitivity, I do think this explains a decent portion of your views.
I'm aware that approaching 1 second is getting towards your uncertainty for the time granularity problem, but I think if you do think 1 hour of cluster headache is NOS, then these are the kinds of tradeoffs you'd want to be able to make (and back).
Also worth flagging that RE: footnote 26, where you say:
Feasible happiness is bounded: there are only so many neurons that can fire, years beings can live, resources we can marshal. Call this maximum H_max.
You should also expect this to apply to the suffering instrument; there is also some upper bound for all of these variables.
Oops, yes, the fundamentals of my and Bruce's cases are very similar. Should have read Bruce's comment!
The claim we're discussing, about the possibility of small steps of various kinds, sounds kinda like a claim that gets called "Finite Fine-Grainedness"/"Small Steps" in the population axiology literature. It seems hard to convincingly argue for, so in this paper I present a problem for lexical views that doesn't depend on it. I sort of gestured at it above with the point about risk, without making it super precise. The one-line summary is that expected welfare levels are finitely fine-grained.
[2] And indeed, 1 hour of cluster headache.
[3] There are other heat receptors at higher temperatures, but to a first approximation it's probably fine to ignore them.
[4] Because of uncertainty around how much i_s there actually is.
[6] E.g. 1E10 years, rather than infinity, since I find that pretty implausible and hard to reason about.