Tradeoffs like the Very Repugnant Conclusion (VRC) are not only theoretical, because arguments like that of Bostrom (2003) imply that the stakes may be astronomically high in practice. When non-minimalist axiologies find the VRC a worthwhile tradeoff, they would presumably also have similar implications on an arbitrarily large scale. Therefore, we need to have an inclusive discussion about the extent to which the subjective problems (e.g. extreme suffering) of some can be "counterbalanced" by the "greater (intrinsic) good" for others, because this has direct implications for what kind of large-scale space colonization could be called "net positive".
This seems wrong to me, and confusing "finding the VRC counter-intuitive" with "counterbalancing (extreme) bad with good in any circumstance is counterintuitive" (e.g. the linked article on Omelas) is unfortunate, especially as this error has been repeated a few times in and around SFE-land.
First, what is turning the screws in the VRC is primarily the aggregation, not the (severe) suffering. If the block of "positive lives/stuff" in the VRC were high magnitude (say about as much, or even more, above neutral as the block of "negative lives/stuff" lies below it), there is little about this more Omelas-type scenario a classical utilitarian would find troubling. "N terrible lives and k*N wonderful lives is better than N wonderful lives alone" seems plausible for sufficiently high values of k. (Notably, "minimalist" views seem to fare worse, as they hold that no value of k, be it googolplexes, TREE(TREE(3)), 1/P(randomly picking the single "wrong" photon from our light cone a million times consecutively), etc., would be high enough.)
The challenge of the V/RC is the counter-intuitive "nickel and diming" where a great good or bad is outweighed by a vast multitude of small/trivial things. "N terrible lives and c*k*N barely-better-than-nothing lives is better than N wonderful lives alone" remains counter-intuitive to many who accept the first scenario (for some value of k) basically regardless of how large you make c. The natural impulse (at least for me) is to wish to discount trivially positive wellbeing rather than saying it can outweigh severe suffering if provided in sufficiently vast quantity.
If it were just "The VRC says you can counterbalance severe suffering with happiness" simpliciter which was generally counterintuitive, we could skip the rigmarole of A, A+, B etc. and just offer Omelas-type scenarios (as Tomasik does in the linked piece) without stipulating the supposedly outweighing good stuff comprises a lot of trivial well-being.
Second, although scenarios where one may consider counterbalancing (severe) suffering with happiness in general may not be purely theoretical (either now or in the future), the likelihood of something closely analogous to the VRC in particular looks very remote. In terms of "process", the engine of the counter-intuitiveness relies on being able to parcel out good stuff in arbitrarily many arbitrarily small increments rather than in smaller, more substantial portions; in terms of "outcome", one needs a much smaller set of terrible lives outweighed by a truly vast multitude of just-about-better-than-nothing ones. I don't see how either arises on credible stories of the future.
Third, there are other lines classical utilitarians or similar can take in response to the VRC besides biting the bullet (or attempting to undercut our intuitive responses): critical level views, playing with continuity, and other anti-aggregation devices to try and preserve trading-off in general but avoid the nickel-and-diming issues of the VRC in particular. Obviously, these themselves introduce other challenges (so much so that I'm more inclined to accept the costly counter-examples than the costs of, e.g., non-continuity), and surveying all this terrain would be a gargantuan task far beyond the remit of work introducing a related but distinct issue.
But I bring this up because I anticipate the likely moves you will make to avoid the counter-example Shulman and I have brought up will be along the lines of anti-aggregationist moves around lexicality, thresholds, and whatnot. If so, what is good for the goose is good for the gander: it seems better to use similarly adapted versions of total utilitarianism as a "like for like" comparison. "Lexical threshold total utilitarianism", which lexically de-prioritises dis/value below some magnitude, can accept mere addition, accept trading off suffering for sufficient (non-trivial) happiness, but avoid both the RC and VRC. This seems a better point of departure for weighing up minimalism or not, rather than discussing counter-examples to one or the other view which only apply given an (ex hypothesi) mistaken account of how to aggregate harms and benefits.
(Edit: Added a note (*) on minimalist views and the extended VRC of Budolfson & Spears.)
Thanks for highlighting an important section for discussion. Let me try to respond to your points. (I added the underline in them just to unburden the reader's working memory.)
This seems wrong to me,
The quoted passage contained many claims; which one(s) seemed wrong to you?
and confusing "finding the VRC counter-intuitive" with "counterbalancing (extreme) bad with good in any circumstance is counterintuitive" (e.g. the linked article on Omelas) is unfortunate, especially as this error has been repeated a few times in and around SFE-land.
My argument was rather the other way around. Namely, if we accept any kind of counterbalancing of harms with isolated goods, then CU-like views would imply that it is net positive to create space colonies that are at least as good as the hellish + barely positive lives of the VRC. And given arguments like astronomical waste (AW) (Bostrom, 2003), the justified harm could be arbitrarily vast as long as the isolated positive lives are sufficiently numerous. (Tomasik's Omelas article does not depend on the VRC, but speaks of the risk of astronomical harms given the views of Bostrom, which was also my intended focus.)
(To avoid needless polarization and promote fruitful dialogue, I think it might be best to generally avoid using "disjointing" territorial metaphors such as "SFE-land" or "CU-land", not least considering the significant common ground among people in the EA(-adjacent) community.)
First, what is turning the screws in the VRC is primarily the aggregation, not the (severe) suffering.
For minimalist views, there is a very relevant difference between the RC and VRC, which is that the RC can be non-problematic (provided that we assume that the lives "never suffer", cf. footnote 16 here), but minimalist views would always reject the VRC. For minimalist views, the (severe) suffering is, of course, the main concern. My point about the VRC was to highlight how CU can justify astronomical harms even for (supposedly) barely positive isolated lives, and an even bigger commonsensical worry is how much harm it can justify for (supposedly) greatly positive isolated lives.
If the block of "positive lives/stuff" in the VRC were high magnitude (say about as much, or even more, above neutral as the block of "negative lives/stuff" lies below it), there is little about this more Omelas-type scenario a classical utilitarian would find troubling. "N terrible lives and k*N wonderful lives is better than N wonderful lives alone" seems plausible for sufficiently high values of k. (Notably, "minimalist" views seem to fare worse, as they hold that no value of k … would be high enough.)
It seems true that more people would find that more plausible. Even so, this is precisely what minimalists may find worrying about the CU approach to astronomical tradeoffs, namely that astronomical harms can be justified by the creation of sufficiently many instances of isolated goods.
Additionally, I feel like the point above applies more to classical utilitarianism (the view) than to the views of actual classical utilitarians, not to mention people who are mildly sympathetic to CU, which seems a particularly relevant group in this context given that they may represent an even larger number of people in the EA(-adjacent) community.
After all, CU-like views contain a minimalist (sub)component, and probably many self-identified CUs and CU-sympathetic people would thereby be more than a "little" troubled by the implication that astronomical amounts of hellish lives (e.g. vastly more suffering than has occurred on Earth to date) would be a worthwhile tradeoff for (greater) astronomical amounts of wonderful lives (what minimalist views would frame as unproblematic lives), especially given that the alternative was a wonderful (unproblematic) population with no hellish lives.
(For what it's worth, I used to feel drawn to a CU axiology until I became too troubled by the logic of counterbalancing harm for some with isolated good for others. For many people on the fence, the core problem is probably this kind of counterbalancing itself, which is independent of the VRC but of course also clearly illustrated by it.)
If it were just "The VRC says you can counterbalance severe suffering with happiness" simpliciter which was generally counterintuitive, we could skip the rigmarole of A, A+, B etc. and just offer Omelas-type scenarios (as Tomasik does in the linked piece) without stipulating the supposedly outweighing good stuff comprises a lot of trivial well-being.
Of course, minimalist views (as explored here) would deny all counterbalancing of severe problems with isolated goods, independent of the VRC.
The Mere-Addition Paradox, RC, and VRC are often-discussed problems to which minimalist views may provide satisfying answers. The first two were included in the post for many reasons, and not only as a build-up to the VRC. The build-up was also not meant to end with the VRC, but instead to further motivate the question of how much harm can be justified to reduce astronomical waste (AW).
If CU-like views can justify the creation of a lot of hellish lives even for vast amounts of isolated value-containers that have only "barely positive" contents (the VRC), then how many more hellish lives can they supposedly counterbalance once those containers are filled (cf. AW)?
Second, although scenarios where one may consider counterbalancing (severe) suffering with happiness in general may not be purely theoretical (either now or in the future), the likelihood of something closely analogous to the VRC in particular looks very remote. In terms of "process", the engine of the counter-intuitiveness relies on being able to parcel out good stuff in arbitrarily many arbitrarily small increments rather than in smaller, more substantial portions; in terms of "outcome", one needs a much smaller set of terrible lives outweighed by a truly vast multitude of just-about-better-than-nothing ones. I don't see how either arises on credible stories of the future.
MichaelStJules already responded to this in the sibling comment. Additionally, I would again emphasize that the main worry is not so much the practical manifestation of the VRC in particular, but more the extent to which much worse problems might be justified by CU-like views given the creation of supposedly even greater amounts of isolated goods (i.e. reducing AW).
Third, there are other lines classical utilitarians or similar can take in response to the VRC besides biting the bullet (or attempting to undercut our intuitive responses): critical level views, playing with continuity, and other anti-aggregation devices to try and preserve trading-off in general but avoid the nickel and diming issues of the VRC in particular.
MichaelStJules already mentioned an arbitrariness objection to those lines. Additionally, my impressions (based on Budolfson & Spears, 2018) are that "the VRC cannot be avoided by any leading welfarist axiology despite prior consensus in the literature to the contrary" and that "[the extended] VRC cannot be avoided by any other welfarist axiology in the literature."
Their literature did not include minimalist views(*). Did they also omit some CU-like views, or are the VRC-rejecting CU-like views not defended by anyone in the literature?
Obviously, these themselves introduce other challenges (so much so that I'm more inclined to accept the costly counter-examples than the costs of, e.g., non-continuity), and surveying all this terrain would be a gargantuan task far beyond the remit of work introducing a related but distinct issue.
This again leaves me wondering: Are all of the VRC-rejecting CU-like views so arbitrary or counterintuitive that people would just rather accept the VRC? And will even the most attractive of those views still justify astronomical harms for a sufficiently high number of isolated lives that are "taller" than those in the VRC?
This does not ease the worry that CU-like views can justify astronomically large harms in order to create isolated positive lives that never needed to exist in the first place.
But I bring this up because I anticipate the likely moves you will make to avoid the counter-example Shulman and I have brought up will be along the lines of anti-aggregationist moves around lexicality, thresholds, and whatnot.
First, in terms of practical relevance, one could argue that the choice to "prefer hell to prevent an imperfect heaven" is much more speculative and unlikely than is the VRC for CU-like views, not to mention the likelihood of CU justifying astronomical harms for supposedly greater goods regardless of the VRC (i.e. for reducing AW). In other words, the former can much more plausibly be disregarded as practically irrelevant than can the latter.
Second, lexical views do indeed avoid the conclusion in question, but these need not entail abrupt thresholds (per the arguments here and here), and even if they do, the threshold need not be an arbitrary or ad hoc move. For example, one could hold that there is a difference between psychologically consentable and unconsentable suffering, which is normally ignored by the logic of additive aggregationism. Moreover, the OP entails no commitment to additive aggregationism, as it only specifies that the minimalist views in question are monist, impartial, and welfarist.
If so, what is good for the goose is good for the gander: it seems better to use similarly adapted versions of total utilitarianism as a "like for like" comparison. "Lexical threshold total utilitarianism", which lexically de-prioritises dis/value below some magnitude, can accept mere addition, accept trading off suffering for sufficient (non-trivial) happiness, but avoid both the RC and VRC. This seems a better point of departure for weighing up minimalism or not, rather than discussing counter-examples to one or the other view which only apply given an (ex hypothesi) mistaken account of how to aggregate harms and benefits.
First, I am happy to compare like views in this way in my forthcoming post. I would greatly appreciate it if people were to present or refer me to specific such views to be compared.
Second, the point above may seem to imply that there is a symmetry between these lexical adaptations, i.e. that we can "similarly" construct lexical minimalism and lexical symmetric totalism (if you allow the short expression). Yet the fact that we can make formally symmetric constructions for these different views does not imply that the respective plausibility of these constructions is symmetric at the substantive level. In this sense, what is good for the goose may do nothing for the gander. (But again, I'm happy to explore the possibility that it might.)
Specifically, how would one set the threshold(s) on the lexical symmetric view in a non-arbitrary way, and has anyone presented and defended plausible versions of such views?
Furthermore, most people would probably find it much more plausible that some harms cannot be counterbalanced by any amount of isolated goods ("a lexical minimalist component") than that some goods can counterbalance any amount of isolated harms (a similarly lexical positive component). At least I've never heard anyone defend or outline the latter kind of view. (By contrast, beyond examples in academic philosophy, there are numerous examples in literature hinting at "minimalist lexicality".)
Overall, I remain worried about the vast harms that CU-like views could justify for the supposed greater good, also considering that even you feel inclined to rather accept the VRC than deal with the apparently arbitrary or counterintuitive features of the versions of CU-like views that avoid it. (And if one proposes a positive lexical threshold, it seems that above the lexical threshold there is always a higher isolated good that can justify vast harms.)
Lastly, why do we need to "accept trading off suffering for sufficient (non-trivial) [isolated] happiness" in the first place? Would not a relational account of the value of happiness suffice? What seems to be the problem with relational goods, without isolated goods?
(*) A note on minimalist views and the extended VRC of Budolfson & Spears (2018).
Strictly speaking, the extended VRC in the formulation of Budolfson & Spears does not pertain to minimalist views, because they say "u^h > 0" (i.e. strictly greater than zero). So minimalist views fall outside of the domain that they draw conclusions for.
But if we allow the "high-utility lives" to be exactly zero, or even less than zero, then their conclusion would also hold for (continuous, aggregationist) minimalist views. (But the conclusion arguably also becomes much less implausible in the minimalist case compared to the symmetric case, cf. the final point below.)
So it (also) holds for continuous aggregationist minimalist views that there exists a base population "such that it is better to both add to the base population the negative-utility lives and cause [a sufficiently large number of] ε-changes".
But beyond questioning the continuous aggregationist component of these views (indeed a possibility that lies open to many kinds of views with such a component), and beyond questioning the practical relevance of this conclusion for minimalist views versus for symmetric views (as I do above), one may further argue that the conclusion is significantly more plausible in the minimalist case than in the case where we allow torture for the sake of isolated, purported goods that arguably do not need to exist. For in the minimalist case, the overall burden of subjective problems is still lessened (assuming continuous aggregationist minimalism). We are not creating extreme suffering for the mere sake of isolated, "unrelieving" goods.
Thanks for the reply, and with apologies for brevity.
Re. 1 (i.e. "The primary issue with the VRC is aggregation rather than trade-off"). I take it we should care about the plausibility of axiological views with respect to something like "commonsense" intuitions, rather than those a given axiology urges us to adopt. It's at least opaque to me whether commonsense intuitions are more offended by "trade-offy/CU" or "no-trade-offy/NU" intuitions. On the one hand:
"Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things (for some value of k)"
(a fortiori) "N awful things can be better than nothing providing they are counterbalanced by k*N good things (and N can be arbitrarily large, say a trillion awful lives)."
But on the other:
"No amount of good things (no matter how great their magnitude) can compensate for a single awful thing, no matter how astronomical the ratio (e.g. trillions to 1, TREE(3) to 1, whatever)."
(a fortiori) "No amount of great things can compensate for a single bad thing, no matter how small it is (e.g. pinpricks, a minute risk of an awful thing)"
However, I am confident the aggregation views (basically orthogonal to this question) are indeed the main driver for folks finding the V/RC particularly repugnant. Compare:
1. A million great lives vs. a million terrible lives and a quadrillion great lives.
2. A thousand great lives vs. a thousand terrible lives and TREE(3) marginally good lives.
A minimalist view may well be concerned with increasing the amount of aggregate harm in 1 vs. 2, and so worry that (re. 2) if CU was willing to accept this, it would accept a lot more aggregate harm if we increase the upside to more than compensate (e.g. TREE(3) great lives). Yet I aver commonsense intuitions favour 1 over 2, and would find variants of 2 where the downside is increased but the upside is reduced but concentrated (e.g. a trillion great lives) more palatable.
So appeals along the lines of "CU accepts the VRC, and, even worse, would accept even larger downsides if the compensating upside was composed of very- rather than marginally-happy lives" seem misguided, as this adaptation of the VRC aligns it better, not worse, with commonsense (if not minimalist) intuitions.
Re. 3: I've read Budolfson & Spears, and as you note (*) it seems we can construct xVRCs which minimalist views (inc. those which introduce lexical thresholds) are susceptible to. (I also note they agree with me re. 1, e.g. s8: "Whenever aggregation is done over an unbounded space, repugnant outcomes inevitably occur"; their identification of the underlying mechanism for repugnance being the ability to aggregate ε-changes.)
The replies minimalists can make here seem very "as good for the goose as the gander" to me:
One could deny minimalism is susceptible to even xVRCs, as one should drop aggregation/continuity/etc. Yet symmetric views should do the same, so one should explore whether, on the margin of this atypical account of aggregation, minimalist axiologies are a net plus or minus to overall plausibility.
One could urge that we shouldn't dock points from a theory for counter-examples which are impractical/unrealistic, and that the x/VRCs for minimalism fare much better than the x/VRCs for totalism. This would be quite a departure from my understanding of how the discussion proceeds in the literature, where the main concern is the "in principle" determination for scenarios (I don't ever recall, e.g., replies for averagism along the lines of "But there'd never be a realistic scenario where we'd actually find ourselves minded to add net-negative lives to improve average utility"). In any case, a lot of the xVRCs applicable to CU-variants require precisely stipulated "base populations", so they're presumably also "in the clear" by this criterion.
One could accept minimalism entails an xVRC, but this bullet is easier to bite than x/VRCs against symmetric views. Perhaps, but in which case we should probably pick the closest symmetric comparator (e.g. if they can't play with thresholds, you should deal with Shulman-esque pinprick scenarios). I also note the appeals to plausibility made (here and in the comments you link) seem to be mostly re-statements of minimalism itself (e.g. that epsilon changes in misery count but epsilon changes in happiness don't, "subjective perfection" equated to neutrality, etc.). "Conditional on minimalist intuitions, minimalism has no truly counter-intuitive results" is surely true, but also question-begging to folks who don't share them (compare a totalist asserting the VRC is much less counter-intuitive than minimalist xVRCs as, "obviously", wellbeing can be greater than zero, and axiology shouldn't completely discount unbounded amounts of it in evaluation).
[Finally, I'm afraid I can't really see much substantive merit in the "relational goods" approach. Minimalism (like SFE and NU) straightforwardly offends the naive intuition that happiness is indeed "better than nothing", and I don't find relational attempts to undercut this by offering an account of these being roundabout ways/policies of reducing problems either emotionally satisfying (e.g. all the rich relationships between members of a community may make everyone have "lives worth living" in the sense that "without me these other people would be worse off", but minimalism appears still committed to the dispiriting claim that this rich tapestry of relationships is still worse than nothing) or intellectually credible (cf. virtually everyone's expressed and implied preferences suggest non-assent to "no-trade-off" views).
Similarly, I think assessing "isolated" goods as typical population cases do is a good way to dissect out the de/merits of different theories, and noting our evaluation changes as we add in a lot of "practical" considerations seems apt to muddy the issue again (for example, I'd guess various "practical elaborations" of the V/RC would make it appear more palatable, but I don't think this is a persuasive reply).
I focus on the "pure" population ethics as "I don't buy it" is barren ground for discussion.]
Re. 1 (i.e. "The primary issue with the VRC is aggregation rather than trade-off"). I take it we should care about the plausibility of axiological views with respect to something like "commonsense" intuitions, rather than those a given axiology urges us to adopt.
Agreed, and this is also why I focus on the psychological and practical implications of axiological views, and not only on their theoretical implications. Especially in the EA(-adjacent) community, it seems common to me that the plausibility of theoretical views is assessed also based on the plausibility of their practical implications, which tap into further important intuitions beyond those involved at the abstract level.
E.g., people may bite bullets in theory to retain a consistent view, but still never bite those bullets in practice due to some still unarticulated reasons, which may indicate an inconsistency between their explicit and implicit axiology.
It's at least opaque to me whether commonsense intuitions are more offended by "trade-offy/CU" or "no-trade-offy/NU" intuitions.
By "trade-offy" and "no-trade-offy", I'd like to emphasize that we mean trade-offs between isolated things. In other words, the diagrams of population ethics could just as well consist of causally isolated experience machines ("isolated Matrix-lives"), which is plausibly a confounding factor for our practical ("commonsense") intuitions, as our practical intuitions are arguably adapted for trade-offs in an interpersonal ("relational") world.
On the one hand:
"Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things (for some value of k)"
(a fortiori) "N awful things can be better than nothing providing they are counterbalanced by k*N good things (and N can be arbitrarily large, say a trillion awful lives)."
It's very unclear to me how many people actually believe that any arbitrarily awful thing can be counterbalanced by sufficiently many (and/or awesome) isolated Matrix-lives, or other isolated goods. By default, I would assume that most people do not (want to) think about torture, and also do not properly respect the "all else being equal" assumption, and thereby would not count as votes of "informed consent" for those claims. Additionally, in at least one small Mechanical Turk survey about a tradeoff for people themselves, more than 40 percent of respondents said that they would not accept one minute of extreme suffering for any number of happy years added to their lives.
But on the other:
"No amount of good things (no matter how great their magnitude) can compensate for a single awful thing, no matter how astronomical the ratio (e.g. trillions to 1, TREE(3) to 1, whatever)."
(a fortiori) "No amount of great things can compensate for a single bad thing, no matter how small it is (e.g. pinpricks, a minute risk of an awful thing)"
The first claim (i.e. "a lexical minimalist component") is precisely what has been defended in the philosophical (and fictional) literature. And again, this claim might be something that most people have not thought about, because only a minority of people have had first- or even second-person experience of an awful thing that might be defended as being categorically "impossible to compensate for with isolated goods", such as torture.
(The second claim does not strictly follow from the first, which was about "awful" things; e.g. some SFE views hold that sufficiently awful things are lexical bads, but not that all kinds of tiny bads are. This is also relevant for the practical implications of lexical minimalist views with relational goods, on which pinpricks may be practically ignored unless they increase the risk of lexically bad things, whereas anything worthy of the name "great thing" would probably play positive roles to help reduce that risk.)
However, I am confident the aggregation views (basically orthogonal to this question) are indeed the main driver for folks finding the V/RC particularly repugnant. Compare: [...]
So appeals along the lines of "CU accepts the VRC, and, even worse, would accept even larger downsides if the compensating upside was composed of very- rather than marginally-happy lives" seem misguided, as this adaptation of the VRC aligns it better, not worse, with commonsense (if not minimalist) intuitions.
Here I would again note that our commonsense intuitions are arguably not adapted to track the isolated value of lives, and so we should be careful to make it clear that we are comparing e.g. isolated Matrix-lives. By default, I suspect that people may think of the happy populations as consisting of lives like their own or of people they know, which may implicitly involve a lot of effects on other lives.
Of course, the framings of "isolated Matrix-lives" or "experience machines" may themselves bring in connotations that can feel pejorative or dismissive with regard to the actual subjective experience of those lives, but my point is just to drive home the fact that these lives are, by hypothesis, radically devoid of any positive roles for others, or even for their future selves. And if people implicitly have a relational notion of positive value (e.g. if they think of positive value as implying an inverse causal relation to some subjective problems), then they may feel very differently about harms counterbalanced by isolated goods vs. harms counterbalanced by relational goods (of which minimalist views can endorse the latter).
To be clear, the inverse relations include not only subjective problems prevented by social relationships, but also e.g. any desirable effects on wild animals and future s-risks. Admittedly, probably neither of the latter two is a very commonsensical contributor to positive tradeoffs, but I'd also guess that many people would not find it intuitive to counterbalance astronomical harms with ("even greater amounts of") isolated experience machines, or with a single "utility monster". Arguably, all of these cases are also tricky to measure against people's commonsense intuitions, given that not many people have thought about them in the first place.
Re. 3: I've read Budolfson & Spears, and as you note (*) it seems we can construct xVRCs which minimalist views (inc. those which introduce lexical thresholds) are susceptible to. (I also note they agree with me re. 1, e.g. s8: "Whenever aggregation is done over an unbounded space, repugnant outcomes inevitably occur"; their identification of the underlying mechanism for repugnance being the ability to aggregate ε-changes.)
Yeah, we can formally construct xVRCs for minimalist views, including for lexical minimalist views, but my claim is that these are consistently less repugnant in like-for-like comparisons with symmetric views (relative to commonsense or widely shared intuitions). Specifically in the lexical minimalist xVRC (i.e. these comments which you refer to in your point #3 below), the tradeoff results in ever less (and less intense) suffering if followed repeatedly. By comparison, every symmetric xVRC would keep on increasing suffering if scaled up in an analogous way, which is arguably the most repugnant aspect of the VRC.
Additionally, this comment (upstream of the linked ones) points out a source of intra-personal repugnance in the symmetric cases, namely that CU-like views would be fine with the "marginally good" ε-lives being "roller coaster" lives that also contain a lot of extreme suffering:
One way to see that an ε increase could be very repugnant is to recall Portmore's (1999) suggestion that ε lives in the restricted RC could be "roller coaster" lives, in which there is much that is wonderful, but also much terrible suffering, such that the good ever-so-slightly outweighs the bad [according to some symmetric view]. Here, one admitted possibility is that an ε-change could substantially increase the terrible suffering in a life, and also increase good components; such an ε-change is not the only possible ε-change, but it would have the consequence of increasing the total amount of suffering. … Moreover, if ε-changes are of the "roller coaster" form, they could increase deep suffering considerably beyond even the arbitrarily many [u < 0] lives, and in fact could require everyone in the chosen population to experience terrible suffering. [From Budolfson & Spears]
Of course, in some minimalist examples it is arguably repugnant to create extreme suffering to avoid a vast number of mildly problematic states. But I would claim that commonsense (and not only minimalist) intuitions would find even more repugnant the analogous symmetric case, namely to create extreme suffering for a vast number of mildly positive states which are not needed to relieve anyone's burden. (The latter case may appear especially repugnant if the symmetric view in question would allow the mildly positive states to be "roller coaster" lives that are not even themselves free of, but would in fact contain a lot of, extreme suffering.) Consider, for instance, that:
A 2017 survey by FLI (n > 14,000) found that the goal people favored most as the ideal aim of a future civilization was "minimizing suffering". This was the most popular aim by a large margin, ahead of "maximizing positive experiences", and most of the people who favored this goal were probably not suffering while they responded to the survey.
According to some plausible moral views, the alleviation of suffering is more important, morally, than the promotion of happiness. According to other plausible moral views (such as classical utilitarianism), the alleviation of suffering is equally as important, morally, as the promotion of happiness. But there is no reasonable moral view on which the alleviation of suffering is less important than the promotion of happiness. So, under moral uncertainty, it's appropriate to prefer to alleviate suffering rather than to promote happiness more often than the utilitarian would.
The above points do not tip the scales all the way in favor of minimalism over CU-variants, but they do suggest that common intuitions would not necessarily favor "additively aggregationist CU" (even before looking at the respective x/VRCs for these views, let alone after considering the overall direction when we iterate such tradeoffs multiple times).
The replies minimalists can make here seem very "as good for the goose as the gander" to me:
1. One could deny minimalism is susceptible to even xVRCs, as one should drop aggregation/continuity/etc. Yet symmetric views should do the same, so one should explore whether, on the margin of this atypical account of aggregation, minimalist axiologies are a net plus or minus to overall plausibility.
Agreed, although it is unclear whether continuous aggregation is in fact more typical. But since I'm interested in defending lexical minimalism (which many people already hold with a priority for extreme suffering), I'd be curious to hear if anyone has defended an analogous symmetric view, or how that view would be constructed in the first place. E.g., should I compare "priority for the worst-off" with a view that (also) entails "priority for the best-off", even if no one (to my knowledge) defends the latter priority?
2. One could urge that we shouldn't dock points from a theory for counter-examples which are impractical/unrealistic, and that the x/VRCs for minimalism fare much better than the x/VRCs for totalism. This would be quite a departure from my understanding of how the discussion proceeds in the literature, where the main concern is the "in principle" determination for scenarios.
The literature is mostly not written by people trying to figure out whether to prioritize the reduction of AW versus the reduction of s-risks. And once we accept some tradeoff in theory, it becomes relevant to ask if we would plausibly accept similar tradeoffs that could practically occur on an astronomical scale, for which the ε-changes could of course first be "enlarged" so as to make more practical sense. (At least I feel like none of my intended points depend on the ε-changes being tiny, nor on the base populations consisting of lives with mutually equal welfare, so I'm fine with discussing x/VRCs that are in those ways more realistic – especially if we account for the "roller coaster" aspects of more realistic lives.)
In other words, whether we affirm or reject the claim that purported positive goods can outweigh extreme suffering has great relevance for our priorities, whereas the question of whether lexical minimalist views are more plausible than non-lexical minimalist views has limited practical relevance, since the real-life implications (e.g. for ideal population sizes) are roughly convergent for minimalist views.
3. One could accept minimalism entails an xVRC, but this bullet is easier to bite than x/VRCs against symmetric views. Perhaps, but in which case we should probably pick the closest symmetric comparator (e.g. if they can't play with thresholds, you should deal with Shulman-esque pinprick scenarios). I also note the appeals to plausibility made (here and in the comments you link) seem to be mostly re-statements of minimalism itself (e.g. that epsilon changes in misery count but epsilon changes in happiness don't, "subjective perfection" equated to neutrality, etc.).
Again, I'm happy to pick the closest symmetric view to compare with the minimalist priority for extreme suffering, but I'm still unsure what that view might be (and eager to hear if there is anything to be read about such views).
I don't agree that the points about the minimalist xVRCs' comparatively greater plausibility are mostly re-statements of minimalism itself. Rather, I claim that commonsense intuitions would favor the lexical minimalist xVRC – in which suffering is "spread more equally between those who already exist and those who do not" (and eventually minimized if iterated) – over any symmetric xVRC of "expanding hell to help the best-off". (In other words, even if one finds it somewhat plausible that happiness has independent value, or value in isolation, it still seems that the symmetric xVRCs are worse than the minimalist xVRC.)
(For subjective perfection equated with the absence of something, I was thinking of tranquilism as a need-based account of the isolated value of different experiential states, which is centered on cravings to change one's subjective experience.)
Finally, I'm afraid I can't really see much substantive merit in the "relational goods" approach. Minimalism (like SFE and NU) straightforwardly offends the naive intuition that happiness is indeed "better than nothing", and I don't find relational attempts to undercut this – by offering an account of these being roundabout ways/policies of reducing problems – either emotionally satisfying (e.g. all the rich relationships between members of a community may make everyone have "lives worth living" in the sense that "without me these other people would be worse off", but minimalism appears still committed to the dispiriting claim that this rich tapestry of relationships is still worse than nothing) or intellectually credible.
(Strictly speaking, minimalism is a category that contains NU but only overlaps with SFE; some SFE views may recognize isolated positive value even as they prioritize reducing suffering, and e.g. Fehige's view represents a preference-based instead of suffering-focused minimalism.)
About the naive intuition that happiness is indeed "better than nothing", I'm curious if that really applies also for isolated Matrix-lives (for most people). As I've noted in this section, by focusing on isolated value we may often underestimate the relational value of some goods, which may be greater than the amount of intrinsic value we perceive them to have.
About the relational account having dispiriting or emotionally unsatisfying implications, those can also be compared between views (to the extent that they matter for the plausibility of axiological views). E.g., on minimalist views, unlike CU-like views, it's not a tragedy or atrocity if we fail to reduce astronomical waste. In this sense, minimalist views may be less dispiriting than CU-like views. Moreover, I'd practically emphasize that our positive roles need not be limited to the confines of our social communities, but extend all the way to those communities' effects on things like factory farming, wild-animal suffering, and the risks of future suffering (and thus potentially match or even exceed our commonsense feelings about the positive value of many lives, even if this would formally consist of "only" relational instead of independently positive value).
However, we should also be careful to account for our personal emotional responses to the implications of a given axiology. By analogy with empirical claims, we would probably want our views on (e.g.) global catastrophic risks to be unaffected by whether we find them dispiriting or not. Similarly, we should arguably account for such feelings in our axiological considerations of what, if anything, would constitute an axiologically positive life in causal isolation (and, specifically, what would constitute a life capable of counterbalancing the suffering of others without the consent of the latter).
But I bring this up because I anticipate the likely moves you will make to avoid the counter-example Shulman and I have brought up will be along the lines of anti-aggregationist moves around lexicality, thresholds, and whatnot.
Do you mean trivial pains adding up to severe suffering? I can see how if you would accept lexicality or thresholds to prevent this, you could do the same to prevent trivial pleasures outweighing severe suffering or greater joys.
My original comment follows.
I think your first and third points are mostly right, but I would add that minimalist axiologies can avoid the (V)RC without (arbitrary) critical levels, (arbitrary) thresholds, giving up continuity, or giving up additivity/separability, each of which someone might find as counterintuitive as the VRC. Views like these tend to look more arbitrary, or – assuming transitivity, the independence of irrelevant alternatives, and a far larger unaffected population – often reduce to solipsism or recommend totally ignoring value that's (weakly or strongly) lexically dominated in practice. So, if you find the (V)RC and these aggregation tricks or their implications very counterintuitive, then minimalist and person-affecting views will look better than otherwise (not necessarily best), and classical utilitarianism will look worse than otherwise (but potentially still best overall or better than minimalist axiologies, if the other points in favour are strong enough).
Furthermore, the VRC is distinguished from the RC by the addition of severe suffering. Someone might find the VRC far worse than the RC (e.g. the person who named it, adding the "Very" :P), and if they do, that may indeed say something about their views on suffering and bad lives, and not just about the aggregation of the trivial vs. values larger in magnitude. Like you, though, I do suspect that considering Omelas (or tradeoffs between a more even number of good and bad lives) would usually already get at this, but maybe not always.
That being said, personally, I am also separately sympathetic to lexicality (and previously non-additivity, but less so now because of the arguments in the papers I cited above) – not because of the RC or VRC, but because of direct intuitions about torture vs. milder suffering (dust specks or even fairly morally significant suffering). EDIT: I guess this is the kind of "counter-example" you and Shulman have brought up?
On your second point, I don't think something like the VRC is remote, although I wouldn't consider it my best guess for the future. If it turns out that it's more efficient to maximize pleasure (or value generally) in a huge number of tiny systems that produce very little value each, classical utilitarians may be motivated to do so at substantial cost, including sacrificing a much higher average welfare and ignoring s-risks. So, you end up with astronomically many more marginally good lives and a huge number of additional horrible lives (possibly astronomically many, although far fewer than the marginally good lives) and missing out on many very high welfare lives. This is basically the VRC. This seems unlikely unless classical utilitarians have majority control over large contiguous chunks of space in the future.
Do you mean trivial pains adding up to severe suffering? I can see how if you would accept lexicality or thresholds to prevent this, you could do the same to prevent trivial pleasures outweighing severe suffering or greater joys.
Yeah, that's it. As you note, these sorts of moves seem to have costs elsewhere, but if one thinks on balance they nonetheless should be accepted, then the V/RC isn't really a strike against "symmetric axiology" simpliciter, but merely "symmetric axiologies with a mistaken account of aggregation". If instead "straightforward/unadorned" aggregation is the right way to go, then the V/RC is a strike against symmetric views and a strike in favour of minimalist ones; but "straightforward" aggregation can also produce highly counter-intuitive results for minimalist views which symmetric axiologies avoid (e.g. "better N awful lives than TREE(N+3) lives of perfect bliss and a pin-prick").
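The two directions of counter-intuitive "straightforward" aggregation can be made concrete with a toy calculation. All welfare numbers here are my own illustrative assumptions, and M merely stands in for an astronomically large count like TREE(N+3); the blissful lives each contain one pinprick, so the pinpricks aggregate under additive minimalism.

```python
def symmetric_total(happiness, suffering):
    # CU-style total: happiness and suffering trade off additively.
    return happiness - suffering

def minimalist_total(happiness, suffering):
    # Additive minimalist total: happiness carries no independent weight.
    return -suffering

N = 1_000          # awful lives
M = 10**12         # stand-in for a TREE(N+3)-sized population

# World A: N awful lives (much suffering, no happiness).
a_happy, a_suffer = 0.0, N * 100.0
# World B: M blissful lives, each containing one tiny pinprick.
b_happy, b_suffer = M * 1_000.0, M * 1e-6

# The symmetric view vastly prefers B; additive minimalism prefers A,
# since M pinpricks aggregate to more suffering than N awful lives.
assert symmetric_total(b_happy, b_suffer) > symmetric_total(a_happy, a_suffer)
assert minimalist_total(a_happy, a_suffer) > minimalist_total(b_happy, b_suffer)
```

The sketch only shows that each view's counter-intuitive verdict is an arithmetic consequence of its additive aggregation, not a claim about which verdict is right.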
Hence (per 3) I feel the OP would be trying to have it both ways if they donât discuss argumentative resources which could defend a rival theory from objections they mount against it, yet subsequently rely upon those same resources to respond to objections to their preferred theory.
(Re. 2, perhaps it depends on the value of "tiny" – my intuition is the dynamic range of (e.g.) human happiness is much smaller than that for future beings, so "very small" on this scale would still typically be greatly above the "marginally good" range by the lights of classical util. If (e.g.) commonsensically happy human lives/experiences are 10, joyful future beings could go up to 1000, and "marginally good" is anything <1, we'd be surprised to find the optimal average for the maximal aggregate is in the marginally good range. Adding in the "V" bit to this RC adds a further penalty.)
With respect to 2, I'm thinking something on the order of insect brains. There are reasons to expect pleasure to scale sublinearly with brain size even in artificial brains optimized for pleasure, e.g. a lot of unnecessary connections that don't produce additional value, greater complexity in building larger brains without getting things wrong, or even giving weight to the belief that integrating minds actually reduces value, say because of bottlenecks in some of the relevant circuits/functions. Smaller brains are easier/faster to run in parallel.
This is assuming the probability of consciousness doesn't dominate. There may also be scale efficiencies, since the brains need containers and to be connected to things (even digitally?) or there may be some other overhead.
So, I don't think it would be too surprising to find the optimal average in the marginally good range.
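The sublinear-scaling point can be illustrated with a toy resource-allocation model. The functional form, exponent, and overhead below are purely illustrative assumptions: value per system grows like size**alpha with alpha < 1, each system carries a fixed per-system overhead, and a fixed budget is split among as many systems as it affords.

```python
def total_value(size, budget=1e9, alpha=0.5, overhead=1e-4):
    """Aggregate value from spending a fixed budget on systems of a given size."""
    n = budget / (size + overhead)   # how many systems the budget affords
    return n * size ** alpha         # each system yields size**alpha value

sizes = [10 ** e for e in range(-6, 3)]  # candidate per-system sizes
best = max(sizes, key=total_value)
# With sublinear value and small overhead, the optimum lands at a very
# small per-system size rather than a large one.
```

Analytically, with this form the maximum sits where size equals the overhead, so cheap-to-run tiny systems win – echoing the point that the optimal average welfare could fall in the marginally good range.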
I think it's useful to have a thought experiment to refer to other than Omelas to capture the intuition of "a perfect, arbitrarily large utopia is better than a world with arbitrarily many miserable lives supposedly counterbalanced by sufficiently many good lives." Because:
The "arbitrarily many" quantifiers show just how extreme this can get, and indeed the sort of axiology that endorses the VRC is committed to judging the VRC as better the more you multiply the scale, which seems backwards to my intuitions.
The first option is a utopia, whereas the Omelas story doesn't say that there's some other civilization that is smaller yet still awesome and has no suffering.
Omelas as such is confounded by deontological intuitions, and the alternative postulated in the story is "walking away", not preventing the existence of such a world in the first place. I've frequently found that people get hung up on the counterproductiveness of walking away, which is true, but irrelevant to the axiological point I want to make. The VRC is purely axiological, so more effective at conveying this.
So while I agree that aggregation is an important part of the VRC, I also disagree that the "nickel and diming" is at the heart of this. To my intuitions, the VRC is still horrible and borderline unacceptable if we replace the just-barely-worth-living lives with lives that have sufficiently intense happiness, intense enough to cross any positive lexical threshold you want to stipulate. In fact, muzak and potatoes lives as Parfit originally formulated them (i.e., with no suffering) seem much better than lots of lives with both lexically negative and lexically "positive" experiences. I'll eagerly accept Parfit's version of the RC. (If you want to say this is contrary to common sense intuitions, that's fine, since I don't put much stock in common sense when it comes to ethics; there seem to be myriad forces pushing our default intuitions in directions that make evolutionary sense but are disturbing to me upon reflection.)
This seems wrong to me, and confusing "finding the VRC counter-intuitive" with "counterbalancing (extreme) bad with good in any circumstance is counterintuitive" (e.g. the linked article on Omelas) is unfortunate – especially as this error has been repeated a few times in and around SFE-land.
First, what is turning the screws in the VRC is primarily the aggregation, not the (severe) suffering. If the block of "positive lives/stuff" in the VRC was high magnitude – say about as much (or even more) above neutral as the block of "negative lives/stuff" lies below it – there is little about this more Omelas-type scenario a classical utilitarian would find troubling. "N terrible lives and k*N wonderful lives is better than N wonderful lives alone" seems plausible for sufficiently high values of k. (Notably, "minimalist" views seem to fare worse, as they hold that no value of k – googolplexes, TREE(TREE(3)), 1/P(randomly picking the single "wrong" photon from our light cone a million times consecutively), etc. – would be high enough.)
The challenge of the V/RC is the counter-intuitive "nickel and diming" where a great good or bad is outweighed by a vast multitude of small/trivial things. "N terrible lives and c*k*N barely-better-than-nothing lives is better than N wonderful lives alone" remains counter-intuitive to many who accept the first scenario (for some value of k) basically regardless of how large you make c. The natural impulse (at least for me) is to wish to discount trivially positive wellbeing rather than saying it can outweigh severe suffering if provided in sufficiently vast quantity.
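The arithmetic behind this nickel and diming can be sketched directly. The magnitudes N, k, the per-life welfare levels, and the barely-positive increment eps are all illustrative assumptions; the point is only that under additive totalism there is always some multiplier c at which the tradeoff flips to "acceptable".

```python
N, k = 100, 1_000
wonderful, terrible, eps = 100.0, -100.0, 0.001

def total_a():
    # N wonderful lives alone.
    return N * wonderful

def total_b(c):
    # N terrible lives plus c*k*N barely-better-than-nothing lives.
    return N * terrible + c * k * N * eps

# The tradeoff is rejected for a modest c but accepted once c is vast enough;
# the nickel-and-diming objection is that no c should make it acceptable.
assert total_b(1) < total_a()
assert total_b(10**6) > total_a()
```

Nothing here depends on the particular numbers: for any eps > 0, total_b(c) grows without bound in c, which is exactly the feature being objected to.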
If it were just "the VRC says you can counterbalance severe suffering with happiness" simpliciter which was generally counterintuitive, we could skip the rigmarole of A, A+, B etc. and just offer Omelas-type scenarios (as Tomasik does in the linked piece) without stipulating the supposedly outweighing good stuff comprises a lot of trivial well-being.
Second, although scenarios where one may consider counterbalancing (severe) suffering with happiness in general may not be purely theoretical (either now or in the future), the likelihood of something closely analogous to the VRC in particular looks very remote. In terms of "process", the engine of the counter-intuitiveness relies on being able to parcel out good stuff in arbitrarily many arbitrarily small increments rather than in smaller, more substantial portions; in terms of "outcome", one needs a much smaller set of terrible lives outweighed by a truly vast multitude of just-about-better-than-nothing ones. I don't see how either arises on credible stories of the future.
Third, there are other lines classical utilitarians or similar can take in response to the VRC besides biting the bullet (or attempting to undercut our intuitive responses): critical level views, playing with continuity, and other anti-aggregation devices to try and preserve trading-off in general but avoid the nickel-and-diming issues of the VRC in particular. Obviously, these themselves introduce other challenges (so much so that I'm more inclined to accept the costly counter-examples than the costs of (e.g.) non-continuity), and surveying all this terrain would be a gargantuan task far beyond the remit of work introducing a related but distinct issue.
But I bring this up because I anticipate the likely moves you will make to avoid the counter-example Shulman and I have brought up will be along the lines of anti-aggregationist moves around lexicality, thresholds, and whatnot. If so, what is good for the goose is good for the gander: it seems better to use similarly adapted versions of total utilitarianism as a "like for like" comparison. "Lexical threshold total utilitarianism", which lexically de-prioritises dis/value below some magnitude, can accept mere addition, accept trading off suffering for sufficient (non-trivial) happiness, but avoid both the RC and VRC. This seems a better point of departure for weighing up minimalism or not, rather than discussing counter-examples to one or the other view which only apply given an (ex hypothesi) mistaken account of how to aggregate harms and benefits.
(Edit: Added a note (*) on minimalist views and the extended VRC of Budolfson & Spears.)
Thanks for highlighting an important section for discussion. Let me try to respond to your points. (I added the underline in them just to unburden the reader's working memory.)
The quoted passage contained many claims; which one(s) seemed wrong to you?
My argument was rather the other way around. Namely, if we accept any kind of counterbalancing of harms with isolated goods, then CU-like views would imply that it is net positive to create space colonies that are at least as good as the hellish + barely positive lives of the VRC. And given arguments like astronomical waste (AW) (Bostrom, 2003), the justified harm could be arbitrarily vast as long as the isolated positive lives are sufficiently numerous. (Tomasik's Omelas article does not depend on the VRC, but speaks of the risk of astronomical harms given the views of Bostrom, which was also my intended focus.)
(To avoid needless polarization and promote fruitful dialogue, I think it might be best to generally avoid using "disjointing" territorial metaphors such as "SFE-land" or "CU-land", not least considering the significant common ground among people in the EA(-adjacent) community.)
For minimalist views, there is a very relevant difference between the RC and VRC, which is that the RC can be non-problematic (provided that we assume that the lives "never suffer", cf. footnote 16 here), but minimalist views would always reject the VRC. For minimalist views, the (severe) suffering is, of course, the main concern. My point about the VRC was to highlight how CU can justify astronomical harms even for (supposedly) barely positive isolated lives, and an even bigger commonsensical worry is how much harm it can justify for (supposedly) greatly positive isolated lives.
It seems true that more people would find that more plausible. Even so, this is precisely what minimalists may find worrying about the CU approach to astronomical tradeoffs, namely that astronomical harms can be justified by the creation of sufficiently many instances of isolated goods.
Additionally, I feel like the point above applies more to classical utilitarianism (the view) rather than to the views of actual classical utilitarians, not to mention people who are mildly sympathetic to CU, which seems a particularly relevant group in this context given that they may represent an even larger number of people in the EA(-adjacent) community.
After all, CU-like views contain a minimalist (sub)component, and probably many self-identified CUs and CU-sympathetic people would thereby be at least more than a "little" troubled by the implication that astronomical amounts of hellish lives – e.g. vastly more suffering than what has occurred on Earth to date – would be a worthwhile tradeoff for (greater) astronomical amounts of wonderful lives (what minimalist views would frame as unproblematic lives), especially given that the alternative was a wonderful (unproblematic) population with no hellish lives.
(For what it's worth, I used to feel drawn to a CU axiology until I became too troubled by the logic of counterbalancing harm for some with isolated good for others. For many people on the fence, the core problem is probably this kind of counterbalancing itself, which is independent of the VRC but of course also clearly illustrated by it.)
Of course, minimalist views (as explored here) would deny all counterbalancing of severe problems with isolated goods, independent of the VRC.
The Mere-Addition Paradox, RC, and VRC are often-discussed problems to which minimalist views may provide satisfying answers. The first two were included in the post for many reasons, and not only as a build-up to the VRC. The build-up was also not meant to end with the VRC, but instead to further motivate the question of how much harm can be justified to reduce astronomical waste (AW).
If CU-like views can justify the creation of a lot of hellish lives even for vast amounts of isolated value-containers that have only "barely positive" contents (the VRC), then how much more hellish lives can they supposedly counterbalance once those containers are filled (cf. AW)?
MichaelStJules already responded to this in the sibling comment. Additionally, I would again emphasize that the main worry is not so much the practical manifestation of the VRC in particular, but more the extent to which much worse problems might be justified by CU-like views given the creation of supposedly even greater amounts of isolated goods (i.e. reducing AW).
MichaelStJules already mentioned an arbitrariness objection to those lines. Additionally, my impressions (based on Budolfson & Spears, 2018) are that "the VRC cannot be avoided by any leading welfarist axiology despite prior consensus in the literature to the contrary" and that "[the extended] VRC cannot be avoided by any other welfarist axiology in the literature."
Their literature did not include minimalist views (*). Did they also omit some CU-like views, or are the VRC-rejecting CU-like views not defended by anyone in the literature?
This again leaves me wondering: Are all of the VRC-rejecting CU-like views so arbitrary or counterintuitive that people will just rather accept the VRC? And will even the most attractive of those views still justify astronomical harms for a sufficiently high amount of isolated lives that are "taller" than those in the VRC?
This does not ease the worry that CU-like views can justify astronomically large harms in order to create isolated positive lives that never needed to exist in the first place.
First, in terms of practical relevance, one could argue that the choice to "prefer hell to prevent an imperfect heaven" is much more speculative and unlikely than is the VRC for CU-like views, not to mention the likelihood of CU justifying astronomical harms for supposedly greater goods regardless of the VRC (i.e. for reducing AW). In other words, the former can much more plausibly be disregarded as practically irrelevant than can the latter.
Second, lexical views do indeed avoid the conclusion in question, but these need not entail abrupt thresholds (per the arguments here and here), and even if they do, the threshold need not be an arbitrary or ad hoc move. For example, one could hold that there is a difference between psychologically consentable and unconsentable suffering, which is normally ignored by the logic of additive aggregationism. Moreover, the OP entails no commitment to additive aggregationism, as it only specifies that the minimalist views in question are monist, impartial, and welfarist.
First, I am happy to compare like views in this way in my forthcoming post. I would greatly appreciate it if people were to present or refer me to specific such views to be compared.
Second, the point above may seem to imply that there is a symmetry between these lexical adaptations, i.e. that we can "similarly" construct lexical minimalism and lexical symmetric totalism (if you allow the short expression). Yet the fact that we can make formally symmetric constructions for these different views does not imply that the respective plausibility of these constructions is symmetric at the substantive level. In this sense, what is good for the goose may do nothing for the gander. (But again, I'm happy to explore the possibility that it might.)
Specifically, how would one set the threshold(s) on the lexical symmetric view in a non-arbitrary way, and has anyone presented and defended plausible versions of such views?
Furthermore, most people would probably find it much more plausible that some harms cannot be counterbalanced by any amount of isolated goods ("a lexical minimalist component"), than that some goods can counterbalance any amount of isolated harms (a similarly lexical positive component). At least I've never heard anyone defend or outline the latter kind of view. (By contrast, beyond examples in academic philosophy, there are numerous examples in literature hinting at "minimalist lexicality".)
Overall, I remain worried about the vast harms that CU-like views could justify for the supposed greater good, also considering that even you feel inclined to rather accept the VRC than deal with the apparently arbitrary or counterintuitive features of the versions of CU-like views that avoid it. (And if one proposes a positive lexical threshold, it seems that above the lexical threshold there is always a higher isolated good that can justify vast harms.)
Lastly, why do we need to "accept trading off suffering for sufficient (non-trivial) [isolated] happiness" in the first place? Would not a relational account of the value of happiness suffice? What seems to be the problem with relational goods, without isolated goods?
(*) A note on minimalist views and the extended VRC of Budolfson & Spears (2018).
Strictly speaking, the extended VRC in the formulation of Budolfson & Spears does not pertain to minimalist views, because they say "u^h > 0" (i.e. strictly greater than zero). So minimalist views fall outside of the domain that they draw conclusions for.
But if we allow the "high-utility lives" to be exactly zero, or even less than zero, then their conclusion would also hold for (continuous, aggregationist) minimalist views. (But the conclusion arguably also becomes much less implausible in the minimalist case compared to the symmetric case, cf. the final point below.)
So it (also) holds for continuous aggregationist minimalist views that there exists a base population "such that it is better to both add to the base population the negative-utility lives and cause [a sufficiently large number of] ε-changes".
But beyond questioning the continuous aggregationist component of these views (indeed a possibility that lies open to many kinds of views with such a component), and beyond questioning the practical relevance of this conclusion for minimalist views versus for symmetric views (as I do above), one may further argue that the conclusion is significantly more plausible in the minimalist case than in the case where we allow torture for the sake of isolated, purported goods that arguably do not need to exist. For in the minimalist case, the overall burden of subjective problems is still lessened (assuming continuous aggregationist minimalism). We are not creating extreme suffering for the mere sake of isolated, "unrelieving" goods.
Thanks for the reply, and with apologies for brevity.
Re. 1 (i.e. "The primary issue with the VRC is aggregation rather than trade-off"). I take it we should care about the plausibility of axiological views with respect to something like "commonsense" intuitions, rather than those a given axiology urges us to adopt. It's at least opaque to me whether commonsense intuitions are more offended by "trade-offy/CU" or "no-trade-offy/NU" intuitions. On the one hand:
"Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things (for some value of k)."
(a fortiori) "N awful things can be better than nothing providing they are counterbalanced by k*N good things (and N can be arbitrarily large, say a trillion awful lives)."
But on the other:
"No amount of good things (no matter how great their magnitude) can compensate for a single awful thing, no matter how astronomical the ratio (e.g. trillions to 1, TREE(3) to 1, whatever)."
(a fortiori) "No amount of great things can compensate for a single bad thing, no matter how small it is (e.g. pinpricks, a minute risk of an awful thing)."
However, I am confident the aggregation views (basically orthogonal to this question) are indeed the main driver for folks finding the V/RC particularly repugnant. Compare:
1. 1 million great lives vs. 1 million terrible lives and a quadrillion great lives.
2. 1 thousand great lives vs. 1 thousand terrible lives and TREE(3) marginally good lives.
A minimalist view may well be concerned with increasing the amount of aggregate harm in 1 vs. 2, and so worry that (re. 2) if CU was willing to accept this, it would accept a lot more aggregate harm if we increase the upside to more than compensate (e.g. TREE(3) great lives). Yet I aver commonsense intuitions favour 1 over 2, and would find variants of 2 where the downside is increased but the upside is reduced but concentrated (e.g. a trillion great lives) more palatable.
So appeals along the lines of "CU accepts the VRC, and (even worse) would accept even larger downsides if the compensating upside was composed of very- rather than marginally-happy lives" seem misguided, as this adaptation of the VRC aligns it better, not worse, with commonsense (if not minimalist) intuitions.
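For concreteness, the additive arithmetic driving comparisons like these can be sketched in a few lines of Python. The welfare numbers here are made up purely for illustration (nothing in the discussion fixes them); the point is only that under straightforward additive aggregation, a sufficiently vast block of marginally good lives outweighs both a utopia and a block of terrible lives.

```python
# Toy illustration of additive aggregation (all numbers are hypothetical).

def total_welfare(*groups):
    """Sum welfare over groups given as (count, welfare_per_life) pairs."""
    return sum(count * welfare for count, welfare in groups)

# Scenario A: 1,000 great lives (welfare +100 each).
utopia = total_welfare((1_000, 100))

# Scenario B: 1,000 terrible lives (-100 each) plus n marginally good
# lives (+0.01 each), i.e. a VRC-style world.
def vrc_world(n_marginal):
    return total_welfare((1_000, -100), (n_marginal, 0.01))

# With few marginal lives, the utopia wins; with enough, the VRC world does.
assert vrc_world(10**6) < utopia
assert vrc_world(10**9) > utopia
```

The crossover point scales linearly with the size of the terrible block, which is why the "nickel and diming" never stops: making the downside worse just raises the required number of marginal lives, and additive aggregation always supplies one.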
Re. 3: I've read Budolfson & Spears, and as you note (*) it seems we can construct xVRCs which minimalist views (inc. those which introduce lexical thresholds) are susceptible to. (I also note they agree with me re. 1, e.g. s8: "Whenever aggregation is done over an unbounded space, repugnant outcomes inevitably occur"; they identify the underlying mechanism for repugnance as the ability to aggregate e-changes.)
The replies minimalists can make here seem very "as good for the goose as the gander" to me:
One could deny minimalism is susceptible to even xVRCs as one should drop aggregation/continuity/etc. Yet symmetric views should do the same, so one should explore whether on the margin of this atypical account of aggregation minimalist axiologies are a net plus or minus to overall plausibility.
One could urge we shouldn't dock points to a theory for counter-examples which are impractical/unrealistic, as the x/VRCs for minimalism fare much better than the x/VRCs for totalism. This would be quite a departure from my understanding of how the discussion proceeds in the literature, where the main concern is the "in principle" determination for scenarios (I don't ever recall, e.g., replies for averagism along the lines of "But there'd never be a realistic scenario where we'd actually find ourselves minded to add net-negative lives to improve average utility"). In any case, a lot of the xVRCs applicable to CU-variants require precisely stipulated "base populations", so they're presumably also "in the clear" by this criterion.
One could accept minimalism entails an xVRC, but this bullet is easier to bite than x/VRCs against symmetric views. Perhaps, but in which case we should probably pick the closest symmetric comparator (e.g. if they can't play with thresholds, you should deal with Shulman-esque pinprick scenarios). I also note the appeals to plausibility made (here and in the comments you link) seem to be mostly re-statements of minimalism itself (e.g. that epsilon changes in misery count but epsilon changes in happiness don't, "subjective perfection" equated to neutrality, etc.). "Conditional on minimalist intuitions, minimalism has no truly counter-intuitive results" is surely true, but also question-begging to folks who don't share them (compare a totalist asserting the VRC is much less counter-intuitive than minimalist-xVRCs as, "obviously", wellbeing can be greater than zero, and axiology shouldn't completely discount unbounded amounts of it in evaluation).
[Finally, I'm afraid I can't really see much substantive merit in the "relational goods" approach. Minimalism (like SFE and NU) straightforwardly offends the naive intuition that happiness is indeed "better than nothing", and I don't find relational attempts to undercut this by offering an account of these being roundabout ways/policies of reducing problems either emotionally satisfying (e.g. all the rich relationships between members of a community may make everyone have "lives worth living" in the sense that "without me these other people would be worse off", but minimalism appears still committed to the dispiriting claim that this rich tapestry of relationships is still worse than nothing) or intellectually credible (cf. virtually everyone's expressed and implied preferences suggest non-assent to "no-trade-off" views).
Similarly, I think assessing "isolated" goods as typical population cases do is a good way to dissect out the de/merits of different theories, and noting our evaluation changes as we add in a lot of "practical" considerations seems apt to muddy the issue again (for example, I'd guess various "practical elaborations" of the V/RC would make it appear more palatable, but I don't think this is a persuasive reply).
I focus on the "pure" population ethics as "I don't buy it" is barren ground for discussion.]
Thanks for the reply!
Agreed, and this is also why I also focus on the psychological and practical implications of axiological views, and not only on their theoretical implications. Especially in the EA(-adjacent) community, it seems common to me that the plausibility of theoretical views is assessed partly based on the plausibility of their practical implications, which tap into further important intuitions beyond those involved at the purely abstract level.
E.g., people may bite bullets in theory to retain a consistent view, but still never bite those bullets in practice due to some still unarticulated reasons, which may indicate an inconsistency between their explicit and implicit axiology.
By "trade-offy" and "no-trade-offy", I'd like to emphasize that we mean trade-offs between isolated things. In other words, the diagrams of population ethics could just as well consist of causally isolated experience machines ("isolated Matrix-lives"), which is plausibly a confounding factor for our practical ("commonsense") intuitions, as our practical intuitions are arguably adapted for trade-offs in an interpersonal ("relational") world.
It's very unclear to me how many people actually believe that any arbitrarily awful thing can be counterbalanced by sufficiently many (and/or awesome) isolated Matrix-lives, or other isolated goods. By default, I would assume that most people do not (want to) think about torture, and also do not properly respect the "all else being equal" assumption, and thereby would not count as votes of "informed consent" for those claims. Additionally, in at least one small Mechanical Turk survey about a tradeoff for people themselves, more than 40 percent of people said that they would not accept one minute of extreme suffering for any number of happy years added to their lives.
The first claim (i.e. "a lexical minimalist component") is precisely what has been defended in the philosophical (and fictional) literature. And again, this claim might be something that most people have not thought about, because only a minority of people have had first- or even second-person experience of an awful thing that might be defended as being categorically "impossible to compensate for with isolated goods", such as torture.
(The second claim does not strictly follow from the first, which was about "awful" things; e.g. some SFE views hold that sufficiently awful things are lexical bads, but not that all kinds of tiny bads are. This is also relevant for the practical implications of lexical minimalist views with relational goods, on which pinpricks may be practically ignored unless they increase the risk of lexically bad things, whereas anything worthy of the name "great thing" would probably play positive roles to help reduce that risk.)
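A lexical view of this shape can be sketched computationally. The threshold value and the functional form below are my own illustrative assumptions (no view in this discussion specifies them); the sketch just shows how a ranking can let severe suffering dominate lexically while still aggregating milder (dis)value as usual.

```python
# A sketch of a lexical-threshold ranking (hypothetical threshold and numbers).
# Worlds are compared first by total severe suffering; everything milder is
# aggregated additively only as a tiebreaker.

SEVERE = -50  # hypothetical lexical threshold: welfare at or below this is lexically bad

def rank_key(welfares):
    """Higher key = better world; the severe-suffering total dominates lexically."""
    severe = sum(w for w in welfares if w <= SEVERE)  # always <= 0
    mild = sum(w for w in welfares if w > SEVERE)
    return (severe, mild)  # tuple comparison: severe totals first, then the rest

# One tortured life cannot be outweighed by any number of mild pleasures...
hell_plus_bliss = [-100] + [1] * 10**6
pinpricks = [-0.1] * 10**6
assert rank_key(hell_plus_bliss) < rank_key(pinpricks)
# ...but among worlds with equal severe suffering, aggregation applies normally.
assert rank_key([5, 5]) > rank_key([5])
```

Note that on a *minimalist* lexical view the `mild` term would count only problems (negative values); the version above keeps mild positives so the same structure can also model the symmetric lexical comparators discussed below.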
Here I would again note that our commonsense intuitions are arguably not adapted to track the isolated value of lives, and so we should be careful to make it clear that we are comparing e.g. isolated Matrix-lives. By default, I suspect that people may think of the happy populations as consisting of lives like their own or of people they know, which may implicitly involve a lot of effects on other lives.
Of course, the framings of "isolated Matrix-lives" or "experience machines" may themselves bring in connotations that can feel pejorative or dismissive with regard to the actual subjective experience of those lives, but my point is just to drive home the fact that these lives are, by hypothesis, radically devoid of any positive roles for others, or even for their future selves. And if people implicitly have a relational notion of positive value (e.g. if they think of positive value as implying an inverse causal relation to some subjective problems), then they may feel very differently about harms counterbalanced by isolated goods vs. harms counterbalanced by relational goods (minimalist views can endorse the latter).
To be clear, the inverse relations include not only subjective problems prevented by social relationships, but also e.g. any desirable effects on wild animals and future s-risks. Admittedly, probably neither of the latter two is a very commonsensical contributor to positive tradeoffs, but I'd guess that neither would many people find it intuitive to counterbalance astronomical harms with ("even greater amounts of") isolated experience machines, or with a single "utility monster". Arguably, all of these cases are also tricky to measure against people's commonsense intuitions, given that not many people have thought about them in the first place.
Yeah, we can formally construct xVRCs for minimalist views, including for lexical minimalist views, but my claim is that these are consistently less repugnant in like-like comparisons with symmetric views (relative to commonsense or widely shared intuitions). Specifically in the lexical minimalist xVRC (i.e. these comments which you refer to in your point #3 below), the tradeoff results in ever less (and less intense) suffering if followed repeatedly. By comparison, every symmetric xVRC would keep on increasing suffering if scaled up in an analogous way, which is arguably the most repugnant aspect of the VRC.
Additionally, this comment (upstream of the linked ones) points out a source of intra-personal repugnance in the symmetric cases, namely that CU-like views would be fine with the "marginally good" Δ-lives being "roller coaster" lives that also contain a lot of extreme suffering:
Of course, in some minimalist examples it is arguably repugnant to create extreme suffering to avoid a vast number of mildly problematic states. But I would claim that commonsense (and not only minimalist) intuitions would find even more repugnant the analogous symmetric case, namely to create extreme suffering for a vast number of mildly positive states which are not needed to relieve anyone's burden. (The latter case may appear especially repugnant if the symmetric view in question would allow the mildly positive states to be "roller coaster" lives that are not even themselves free of, but would in fact contain a lot of, extreme suffering.) Consider, for instance, that:
A 2017 survey by FLI (n > 14,000) found that the goal people favored most as the ideal aim of a future civilization was "minimizing suffering". This was the most popular aim by a large margin, ahead of "maximizing positive experiences", and most of the people who favored this goal were probably not suffering while they responded to the survey.
The authors of Moral Uncertainty write (p. 185):
The above points do not tip the scales all the way in favor of minimalism over CU-variants, but they do suggest that common intuitions would not necessarily favor "additively aggregationist CU" (even before looking at the respective x/VRCs for these views, let alone after considering the overall direction when we iterate such tradeoffs multiple times).
Agreed, although it is unclear whether continuous aggregation is in fact more typical. But since I'm interested in defending lexical minimalism (which many people already hold with a priority for extreme suffering), I'd be curious to hear if anyone has defended an analogous symmetric view, or how that view would be constructed in the first place. E.g., should I compare "priority for the worst-off" with a view that (also) entails "priority for the best-off", even if no one (to my knowledge) defends the latter priority?
The literature is mostly not written by people trying to figure out whether to prioritize the reduction of AW versus the reduction of s-risks. And once we accept some tradeoff in theory, it becomes relevant to ask if we would plausibly accept similar tradeoffs that could practically occur on an astronomical scale, for which the e-changes could of course first be "enlarged" so as to make more practical sense. (At least I feel like none of my intended points depend on the e-changes being tiny, nor on the base populations consisting of lives with mutually equal welfare, so I'm fine with discussing x/VRCs that are in those ways more realistic, especially if we account for the "roller coaster" aspects of more realistic lives.)
In other words, whether we affirm or reject the claim that purported positive goods can outweigh extreme suffering has great relevance for our priorities, whereas the question of whether lexical minimalist views are more plausible than non-lexical minimalist views has limited practical relevance, since the real-life implications (e.g. for ideal population sizes) are roughly convergent for minimalist views.
Again, I'm happy to pick the closest symmetric view to compare with the minimalist priority for extreme suffering, but I'm still unsure what that view might be (and eager to hear if there is anything to be read about such views).
I don't agree that the points about the minimalist xVRCs' comparatively greater plausibility are mostly re-statements of minimalism itself. Rather, I claim that commonsense intuitions would favor the lexical minimalist xVRC, in which suffering is "spread more equally between those who already exist and those who do not" (and eventually minimized if iterated), over any symmetric xVRC of "expanding hell to help the best-off". (In other words, even if one finds it somewhat plausible that happiness has independent value, or value in isolation, it still seems that the symmetric xVRCs are worse than the minimalist xVRC.)
(For subjective perfection equated with the absence of something, I was thinking of tranquilism as a need-based account of the isolated value of different experiential states, which is centered on cravings to change one's subjective experience.)
(Strictly speaking, minimalism is a category that contains NU but only overlaps with SFE; some SFE views may recognize isolated positive value even as they prioritize reducing suffering, and e.g. Fehige's view represents a preference-based instead of suffering-focused minimalism.)
About the naive intuition that happiness is indeed "better than nothing", I'm curious if that really applies also for isolated Matrix-lives (for most people). As I've noted in this section, by focusing on isolated value we may often underestimate the relational value of some goods, which may be greater than the amount of intrinsic value we perceive them to have.
About the relational account having dispiriting or emotionally unsatisfying implications, those can also be compared between views (to the extent that they matter for the plausibility of axiological views). E.g., on minimalist views, unlike CU-like views, it's not a tragedy or atrocity if we fail to reduce astronomical waste. In this sense, minimalist views may be less dispiriting than CU-like views. Moreover, I'd practically emphasize that our positive roles need not be limited to the confines of our social communities, but extend all the way to those communities' effects on things like factory farming, wild-animal suffering, and the risks of future suffering (and thus potentially match or even exceed our commonsense feelings about the positive value of many lives, even if this would formally consist of "only" relational instead of independently positive value).
However, we should also be careful to account for our personal emotional responses to the implications of a given axiology. By analogy with empirical claims, we would probably want our views on (e.g.) global catastrophic risks to be unaffected by whether we find them dispiriting or not. Similarly, we should arguably account for such feelings in our axiological considerations of what, if anything, would constitute an axiologically positive life in causal isolation (and, specifically, what would constitute a life capable of counterbalancing the suffering of others without the consent of the latter).
EDIT:
Do you mean trivial pains adding up to severe suffering? I can see how if you would accept lexicality or thresholds to prevent this, you could do the same to prevent trivial pleasures outweighing severe suffering or greater joys.
My original comment follows.
I think your first and third points are mostly right, but I would add that minimalist axiologies can avoid the (V)RC without (arbitrary) critical levels, (arbitrary) thresholds, giving up continuity, or giving up additivity/separability, which someone might find as counterintuitive as the VRC. Views like these tend to look more arbitrary, or (assuming transitivity, the independence of irrelevant alternatives, and a far larger unaffected population) often reduce to solipsism or recommend totally ignoring value that's (weakly or strongly) lexically dominated in practice. So, if you find the (V)RC and these aggregation tricks or their implications very counterintuitive, then minimalist and person-affecting views will look better than otherwise (not necessarily best), and classical utilitarianism will look worse than otherwise (but potentially still best overall or better than minimalist axiologies, if the other points in favour are strong enough).
Furthermore, the VRC is distinguished from the RC by the addition of severe suffering. Someone might find the VRC far worse than the RC (e.g. the person who named it, adding the "Very" :P), and if they do, that may indeed say something about their views on suffering and bad lives, and not just about the aggregation of the trivial vs. values larger in magnitude. I do suspect like you that considering Omelas (or tradeoffs between a more even number of good and bad lives) would usually already get at this, though, but maybe not always.
That being said, personally, I am also separately sympathetic to lexicality (and previously non-additivity, but less so now because of the arguments in the papers I cited above), but not because of the RC or VRC, but because of direct intuitions about torture vs. milder suffering (dust specks or even fairly morally significant suffering). EDIT: I guess this is the kind of "counter-example" you and Shulman have brought up?
On your second point, I don't think something like the VRC is remote, although I wouldn't consider it my best guess for the future. If it turns out that it's more efficient to maximize pleasure (or value generally) in a huge number of tiny systems that produce very little value each, classical utilitarians may be motivated to do so at substantial cost, including sacrificing a much higher average welfare and ignoring s-risks. So, you end up with astronomically many more marginally good lives and a huge number of additional horrible lives (possibly astronomically many, although far fewer than the marginally good lives) and missing out on many very high welfare lives. This is basically the VRC. This seems unlikely unless classical utilitarians have majority control over large contiguous chunks of space in the future.
Yeah, that's it. As you note these sorts of moves seem to have costs elsewhere, but if one thinks on balance they nonetheless should be accepted, then the V/RC isn't really a strike against "symmetric axiology" simpliciter, but merely "symmetric axiologies with a mistaken account of aggregation". If instead "straightforward/unadorned" aggregation is the right way to go, then the V/RC is a strike against symmetric views and a strike in favour of minimalist ones; but "straightforward" aggregation can also produce highly counter-intuitive results for minimalist views which symmetric axiologies avoid (e.g. "better N awful lives than TREE(N+3) lives of perfect bliss and a pin-prick").
Hence (per 3) I feel the OP would be trying to have it both ways if they don't discuss argumentative resources which could defend a rival theory from objections they mount against it, yet subsequently rely upon those same resources to respond to objections to their preferred theory.
(Re. 2, perhaps it depends on the value of "tiny": my intuition is the dynamic range of (e.g.) human happiness is much smaller than that for future beings, so "very small" on this scale would still typically be greatly above the "marginally good" range by the lights of classical util. If (e.g.) commonsensically happy human lives/experiences are 10, joyful future beings could go up to 1000, and "marginally good" is anything <1, we'd be surprised to find the optimal average for the maximal aggregate is in the marginally good range. Adding in the "V" bit to this RC adds a further penalty).
That all seems fair to me.
With respect to 2, I'm thinking something on the order of insect brains. There are reasons to expect pleasure to scale sublinearly with brain size even in artificial brains optimized for pleasure, e.g. a lot of unnecessary connections that don't produce additional value, greater complexity in building larger brains without getting things wrong, or even giving weight to the belief that integrating minds actually reduces value, say because of bottlenecks in some of the relevant circuits/functions. Smaller brains are easier/faster to run in parallel.
This is assuming the probability of consciousness doesn't dominate. There may also be scale efficiencies, since the brains need containers and to be connected to things (even digitally?) or there may be some other overhead.
So, I don't think it would be too surprising to find the optimal average in the marginally good range.
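As a toy check on this reasoning, here is a minimal sketch with made-up functional forms (square-root pleasure scaling and a fixed per-brain overhead, neither of which the comment commits to): under a fixed resource budget, total value peaks at small brains, roughly at the scale of the overhead, rather than at large ones.

```python
# Toy model: sublinear pleasure scaling + per-brain overhead (all hypothetical).

def total_value(brain_size, budget=1_000_000, overhead=10):
    """Number of brains = budget / (size + overhead); value per brain ~ size**0.5."""
    n_brains = budget / (brain_size + overhead)
    return n_brains * brain_size ** 0.5

# Total value across candidate brain sizes under the same budget.
sizes = [1, 10, 100, 1_000, 10_000]
best = max(sizes, key=total_value)
assert best == 10  # the optimum sits near the overhead scale, not at large brains
```

Analytically, maximizing size**0.5 / (size + overhead) puts the optimum exactly at size = overhead, so the model locates the most efficient minds at small (but not arbitrarily small) scales, each producing very little value, which is the shape of world the comment describes.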
I think it's useful to have a thought experiment to refer to other than Omelas to capture the intuition of "a perfect, arbitrarily large utopia is better than a world with arbitrarily many miserable lives supposedly counterbalanced by sufficiently many good lives." Because:
The "arbitrarily many" quantifiers show just how extreme this can get, and indeed the sort of axiology that endorses the VRC is committed to judging the VRC as better the more you multiply the scale, which seems backwards to my intuitions.
The first option is a utopia, whereas the Omelas story doesn't say that there's some other civilization that is smaller yet still awesome and has no suffering.
Omelas as such is confounded by deontological intuitions, and the alternative postulated in the story is "walking away," not preventing the existence of such a world in the first place. I've frequently found that people get hung up on the counterproductiveness of walking away, which is true, but irrelevant to the axiological point I want to make. The VRC is purely axiological, so more effective at conveying this.
So while I agree that aggregation is an important part of the VRC, I also disagree that the "nickel and diming" is at the heart of this. To my intuitions, the VRC is still horrible and borderline unacceptable if we replace the just-barely-worth-living lives with lives that have sufficiently intense happiness, intense enough to cross any positive lexical threshold you want to stipulate. In fact, muzak and potatoes lives as Parfit originally formulated them (i.e., with no suffering) seem much better than lots of lives with both lexically negative and lexically "positive" experiences. I'll eagerly accept Parfit's version of the RC. (If you want to say this is contrary to common sense intuitions, that's fine, since I don't put much stock in common sense when it comes to ethics; there seem to be myriad forces pushing our default intuitions in directions that make evolutionary sense but are disturbing to me upon reflection.)
[edited for some clarifications]