(Edit: Added a note(*) on minimalist views and the extended VRC of Budolfson & Spears.)
Thanks for highlighting an important section for discussion. Let me try to respond to your points. (I added the underline in them just to unburden the reader’s working memory.)
This seems wrong to me,
The quoted passage contained many claims; which one(s) seemed wrong to you?
and confusing ‘finding the VRC counter-intuitive’ with ‘counterbalancing (/extreme) bad with good in any circumstance is counterintuitive’ (e.g. the linked article to Omelas) is unfortunate—especially as this error has been repeated a few times in and around SFE-land.
My argument was rather the other way around. Namely, if we accept any kind of counterbalancing of harms with isolated goods, then CU-like views would imply that it is net positive to create space colonies that are at least as good as the hellish + barely positive lives of the VRC. And given arguments like astronomical waste (AW) (Bostrom, 2003), the justified harm could be arbitrarily vast as long as the isolated positive lives are sufficiently numerous. (Tomasik’s Omelas article does not depend on the VRC, but speaks of the risk of astronomical harms given the views of Bostrom, which was also my intended focus.)
(To avoid needless polarization and promote fruitful dialogue, I think it might be best to generally avoid using “disjointing” territorial metaphors such as “SFE-land” or “CU-land”, not least considering the significant common ground among people in the EA(-adjacent) community.)
First, what is turning the screws in the VRC is primarily the aggregation, not the (severe/) suffering.
For minimalist views, there is a very relevant difference between the RC and VRC, which is that the RC can be non-problematic (provided that we assume that the lives “never suffer”, cf. footnote 16 here), but minimalist views would always reject the VRC. For minimalist views, the (severe) suffering is, of course, the main concern. My point about the VRC was to highlight how CU can justify astronomical harms even for (supposedly) barely positive isolated lives, and an even bigger commonsensical worry is how much harm it can justify for (supposedly) greatly positive isolated lives.
If the block of ‘positive lives/stuff’ in the VRC were of high magnitude—say about as much (or even more) above neutral as the block of ‘negative lives/stuff’ lies below it—there is little about this more Omelas-type scenario a classical utilitarian would find troubling. “N terrible lives and k*N wonderful lives is better than N wonderful lives alone” seems plausible for sufficiently high values of k. (Notably, ‘Minimalist’ views seem to fare worse as they urge no value of k … would be high enough.)
It seems true that more people would find that more plausible. Even so, this is precisely what minimalists may find worrying about the CU approach to astronomical tradeoffs, namely that astronomical harms can be justified by the creation of sufficiently many instances of isolated goods.
Additionally, I feel like the point above applies more to classical utilitarianism (the view) than to the views of actual classical utilitarians, not to mention people who are mildly sympathetic to CU, which seems a particularly relevant group in this context given that they may represent an even larger number of people in the EA(-adjacent) community.
After all, CU-like views contain a minimalist (sub)component, and probably many self-identified CUs and CU-sympathetic people would thereby be at least more than a “little” troubled by the implication that astronomical amounts of hellish lives — e.g. vastly more suffering than what has occurred on Earth to date — would be a worthwhile tradeoff for (greater) astronomical amounts of wonderful lives (what minimalist views would frame as unproblematic lives), especially given that the alternative was a wonderful (unproblematic) population with no hellish lives.
(For what it’s worth, I used to feel drawn to a CU axiology until I became too troubled by the logic of counterbalancing harm for some with isolated good for others. For many people on the fence, the core problem is probably this kind of counterbalancing itself, which is independent of the VRC but of course also clearly illustrated by it.)
If it were just ‘The VRC says you can counterbalance severe suffering with happiness’ simpliciter which was generally counterintuitive, we could skip the rigmarole of A, A+, B etc. and just offer Omelas-type scenarios (as Tomasik does in the linked piece) without stipulating the supposedly outweighing good stuff comprises a lot of trivial well-being.
Of course, minimalist views (as explored here) would deny all counterbalancing of severe problems with isolated goods, independent of the VRC.
The Mere-Addition Paradox, RC, and VRC are often-discussed problems to which minimalist views may provide satisfying answers. The first two were included in the post for many reasons, and not only as a build-up to the VRC. The build-up was also not meant to end with the VRC, but instead to further motivate the question of how much harm can be justified to reduce astronomical waste (AW).
If CU-like views can justify the creation of a lot of hellish lives even for vast amounts of isolated value-containers that have only “barely positive” contents (the VRC), then how many more hellish lives can they supposedly counterbalance once those containers are filled (cf. AW)?
Second, although scenarios where one may consider counterbalancing (/severe) suffering with happiness in general may not be purely theoretical (either now or in the future), the likelihood of something closely analogous to the VRC in particular looks very remote. In terms of ‘process’ the engine of the counter-intuitiveness relies on being able to parcel out good stuff in arbitrarily many arbitrarily small increments rather than in smaller more substantial portions; in terms of ‘outcome’ one needs a much smaller set of terrible lives outweighed by a truly vast multitude of just-about-better-than-nothing ones. I don’t see how either arises on credible stories of the future.
MichaelStJules already responded to this in the sibling comment. Additionally, I would again emphasize that the main worry is not so much the practical manifestation of the VRC in particular, but more the extent to which much worse problems might be justified by CU-like views given the creation of supposedly even greater amounts of isolated goods (i.e. reducing AW).
Third, there are other lines classical utilitarians or similar can take in response to the VRC besides biting the bullet (or attempting to undercut our intuitive responses): critical level views, playing with continuity, and other anti-aggregation devices to try and preserve trading-off in general but avoid the nickel and diming issues of the VRC in particular.
MichaelStJules already mentioned an arbitrariness objection to those lines. Additionally, my impressions (based on Budolfson & Spears, 2018) are that “the VRC cannot be avoided by any leading welfarist axiology despite prior consensus in the literature to the contrary” and that “[the extended] VRC cannot be avoided by any other welfarist axiology in the literature.”
Their literature did not include minimalist views(*). Did they also omit some CU-like views, or are the VRC-rejecting CU-like views not defended by anyone in the literature?
Obviously, these themselves introduce other challenges (so much so I’m more inclined to accept the costly counter-examples than the costs of (e.g.) non-continuity) and surveying all this terrain would be a gargantuan task far beyond the remit of work introducing a related but distinct issue.
This again leaves me wondering: Are all of the VRC-rejecting CU-like views so arbitrary or counterintuitive that people will just rather accept the VRC? And will even the most attractive of those views still justify astronomical harms for a sufficiently high number of isolated lives that are “taller” than those in the VRC?
This does not ease the worry that CU-like views can justify astronomically large harms in order to create isolated positive lives that never needed to exist in the first place.
But I bring this up because I anticipate the likely moves you will make to avoid the counter-example Shulman and I have brought up will be along the lines of anti-aggregationist moves around lexicality, thresholds, and whatnot.
First, in terms of practical relevance, one could argue that the choice to “prefer hell to prevent an imperfect heaven” is much more speculative and unlikely than is the VRC for CU-like views, not to mention the likelihood of CU justifying astronomical harms for supposedly greater goods regardless of the VRC (i.e. for reducing AW). In other words, the former can much more plausibly be disregarded as practically irrelevant than can the latter.
Second, lexical views do indeed avoid the conclusion in question, but these need not entail abrupt thresholds (per the arguments here and here), and even if they do, the threshold need not be an arbitrary or ad hoc move. For example, one could hold that there is a difference between psychologically consentable and unconsentable suffering, which is normally ignored by the logic of additive aggregationism. Moreover, the OP entails no commitment to additive aggregationism, as it only specifies that the minimalist views in question are monist, impartial, and welfarist.
If so, what is good for the goose is good for the gander: it seems better to use similarly adapted versions of total utilitarianism as a ‘like for like’ comparison. ‘Lexical threshold total utilitarianism’, which lexically de-prioritises dis/value below some magnitude, can accept mere addition, accept trading off suffering for sufficient (non-trivial) happiness, but avoid both the RC and VRC. This seems a better point of departure for weighing up minimalism or not, rather than discussing counter-examples to one or the other view which only apply given an (ex hypothesi) mistaken account of how to aggregate harms and benefits.
First, I am happy to compare like views in this way in my forthcoming post. I would greatly appreciate it if people were to present or refer me to specific such views to be compared.
Second, the point above may seem to imply that there is a symmetry between these lexical adaptations, i.e. that we can “similarly” construct lexical minimalism and lexical symmetric totalism (if you allow the short expression). Yet the fact that we can make formally symmetric constructions for these different views does not imply that the respective plausibility of these constructions is symmetric at the substantive level. In this sense, what is good for the goose may do nothing for the gander. (But again, I’m happy to explore the possibility that it might.)
Specifically, how would one set the threshold(s) on the lexical symmetric view in a non-arbitrary way, and has anyone presented and defended plausible versions of such views?
Furthermore, most people would probably find it much more plausible that some harms cannot be counterbalanced by any amount of isolated goods (“a lexical minimalist component”), than that some goods can counterbalance any amount of isolated harms (a similarly lexical positive component). At least I’ve never heard anyone defend or outline the latter kind of view. (By contrast, beyond examples in academic philosophy, there are numerous examples in literature hinting at “minimalist lexicality”.)
Overall, I remain worried about the vast harms that CU-like views could justify for the supposed greater good, also considering that even you feel inclined to rather accept the VRC than deal with the apparently arbitrary or counterintuitive features of the versions of CU-like views that avoid it. (And if one proposes a positive lexical threshold, it seems that above the lexical threshold there is always a higher isolated good that can justify vast harms.)
Lastly, why do we need to “accept trading off suffering for sufficient (non-trivial) [isolated] happiness” in the first place? Would not a relational account of the value of happiness suffice? What seems to be the problem with relational goods, without isolated goods?
(*) A note on minimalist views and the extended VRC of Budolfson & Spears (2018).
Strictly speaking, the extended VRC in the formulation of Budolfson & Spears does not pertain to minimalist views, because they say “u^h>0” (i.e. strictly greater than zero). So minimalist views fall outside of the domain that they draw conclusions for.
But if we allow the “high-utility lives” to be exactly zero, or even less than zero, then their conclusion would also hold for (continuous, aggregationist) minimalist views. (But the conclusion arguably also becomes much less implausible in the minimalist case compared to the symmetric case, cf. the final point below.)
So it (also) holds for continuous aggregationist minimalist views that there exists a base population “such that it is better to both add to the base population the negative-utility lives and cause [a sufficiently large number of] ε-changes”.
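To illustrate the claim above, here is a rough arithmetical sketch (the notation is my own simplification, not Budolfson & Spears’s exact formalism):

```latex
% Assume a continuous aggregationist minimalist axiology:
% only suffering counts, and it aggregates additively.
% Adding N lives at negative welfare -h (with h > 0) worsens the total by N*h.
% Each epsilon-change relieves suffering by some small amount \varepsilon > 0
% somewhere in a sufficiently large base population, so M such changes
% relieve M*\varepsilon in total. Then:
\[
  \text{net change} \;=\; M\varepsilon \;-\; N h \;>\; 0
  \quad\Longleftrightarrow\quad
  M \;>\; \frac{N h}{\varepsilon},
\]
% so for any fixed N and h, a large enough number M of epsilon-changes
% makes the combined addition count as an improvement on such a view.
```

This is just the generic unbounded-aggregation mechanism; as noted below, one can question the continuous aggregationist component itself rather than accept the conclusion.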
But beyond questioning the continuous aggregationist component of these views (indeed a possibility that lies open to many kinds of views with such a component), and beyond questioning the practical relevance of this conclusion for minimalist views versus for symmetric views (as I do above), one may further argue that the conclusion is significantly more plausible in the minimalist case than in the case where we allow torture for the sake of isolated, purported goods that arguably do not need to exist. For in the minimalist case, the overall burden of subjective problems is still lessened (assuming continuous aggregationist minimalism). We are not creating extreme suffering for the mere sake of isolated, “unrelieving” goods.
Thanks for the reply, and with apologies for brevity.
Re. 1 (ie. “The primary issue with the VRC is aggregation rather than trade-off”). I take it we should care about plausibility of axiological views with respect to something like ‘commonsense’ intuitions, rather than those a given axiology urges us to adopt. It’s at least opaque to me whether commonsense intuitions are more offended by ‘trade-offy/CU’ or ‘no-trade-offy/NU’ intuitions. On the one hand:
“Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things (for some value of k)”
(a fortiori) “N awful things can be better than nothing providing they are counterbalanced by k*N good things (and N can be arbitrarily large, say a trillion awful lives).”
But on the other:
“No amount of good things (no matter how great their magnitude) can compensate for a single awful thing, no matter how astronomical the ratio (e.g. trillions to 1, TREE(3) to 1, whatever).”
(a fortiori) “No amount of great things can compensate for a single bad thing, no matter how small it is (e.g. pinpricks, a minute risk of an awful thing)”
However, I am confident the aggregation views—basically orthogonal to this question—are indeed the main driver for folks finding the V/RC particularly repugnant. Compare:
1. 1 million great lives vs. 1 million terrible lives and a Quadrillion great lives.
2. 1 thousand great lives vs. 1 thousand terrible lives and TREE(3) marginally good lives.
A minimalist view may well be concerned with increasing the amount of aggregate harm in 1 vs. 2, and so worry that (re. 2) if CU was willing to accept this, it would accept a lot more aggregate harm if we increase the upside to more than compensate (e.g. TREE(3) great lives). Yet I aver commonsense intuitions favour 1 over 2, and would find variants of 2 where the downside is increased but the upside is reduced but concentrated (e.g. a trillion great lives) more palatable.
So appeals along the lines of “CU accepts the VRC, and—even worse—would accept even larger downsides if the compensating upside was composed of very- rather than marginally- happy lives” seem misguided, as this adaptation of the VRC aligns it better, not worse, with commonsense (if not minimalist) intuitions.
Re. 3 I’ve read Budolfson & Spears, and as you note (*) it seems we can construct xVRCs which minimalist views (inc. those which introduce lexical thresholds) are susceptible to. (I also note they agree with me re. 1, e.g. s8: “Whenever aggregation is done over an unbounded space, repugnant outcomes inevitably occur”; they identify the underlying mechanism of repugnance as the ability to aggregate ε-changes.)
The replies minimalists can make here seem very ‘as good for the goose as the gander’ to me:
One could deny minimalism is susceptible to even xVRCs as one should drop aggregation/continuity/etc. Yet symmetric views should do the same, so one should explore whether on the margin of this atypical account of aggregation, minimalist axiologies are a net plus or minus to overall plausibility.
One could urge we shouldn’t dock points to a theory for counter-examples which are impractical/unrealistic, as the x/VRCs for minimalism fare much better than the x/VRCs for totalism. This would be quite a departure from my understanding of how the discussion proceeds in the literature, where the main concern is the ‘in principle’ determination for scenarios (I don’t ever recall, e.g., replies for averagism along the lines of “But there’d never be a realistic scenario where we’d actually find ourselves minded to add net-negative lives to improve average utility”). In any case, a lot of the xVRCs applicable to CU-variants require precisely stipulated ‘base populations’, so they’re presumably also ‘in the clear’ by this criterion.
One could accept minimalism entails an xVRC, but this bullet is easier to bite than x/VRCs against symmetric views. Perhaps, but in which case we should probably pick the closest symmetric comparator (e.g. if they can’t play with thresholds, you should deal with Shulman-esque pinprick scenarios). I also note the appeals to plausibility made (here and in the comments you link) seem to be mostly re-statements of minimalism itself (e.g. that epsilon changes in misery count but epsilon changes in happiness don’t, ‘subjective perfection’ equated to neutrality, etc.). “Conditional on minimalist intuitions, minimalism has no truly counter-intuitive results” is surely true, but also question-begging to folks who don’t share them (compare a totalist asserting the VRC is much less counter-intuitive than minimalist-xVRCs as, ‘obviously’, wellbeing can be greater than zero, and axiology shouldn’t completely discount unbounded amounts of it in evaluation).
[Finally, I’m afraid I can’t really see much substantive merit in the ‘relational goods’ approach. Minimalism (like SFE and NU) straightforwardly offends the naive intuition that happiness is indeed ‘better than nothing’, and I don’t find relational attempts to undercut this by offering an account of these being roundabout ways/policies of reducing problems either emotionally satisfying (e.g. All the rich relationships between members of a community may make everyone have ‘lives worth living’ in the sense that ‘without me these other people would be worse off’, but minimalism appears still committed to the dispiriting claim that this rich tapestry of relationships is still worse than nothing) or intellectually credible (cf. virtually everyone’s expressed and implied preferences suggest non-assent to ‘no-trade-off’ views).
Similarly, I think assessing ‘isolated’ goods as typical population cases do is a good way to dissect out the de/merits of different theories, and noting our evaluation changes as we add in a lot of ‘practical’ considerations seems apt to muddy the issue again (for example, I’d guess various ‘practical elaborations’ of the V/RC would make it appear more palatable, but I don’t think this is a persuasive reply).
I focus on the ‘pure’ population ethics as “I don’t buy it” is barren ground for discussion.]
Re. 1 (ie. “The primary issue with the VRC is aggregation rather than trade-off”). I take it we should care about plausibility of axiological views with respect to something like ‘commonsense’ intuitions, rather than those a given axiology urges us to adopt.
Agreed, and this is also why I focus not only on the theoretical implications of axiological views but also on their psychological and practical implications. Especially in the EA(-adjacent) community, it seems common to assess the plausibility of theoretical views partly based on the plausibility of their practical implications, which tap into important intuitions beyond those engaged at the purely abstract level.
E.g., people may bite bullets in theory to retain a consistent view, but still never bite those bullets in practice due to some still unarticulated reasons, which may indicate an inconsistency between their explicit and implicit axiology.
It’s at least opaque to me whether commonsense intuitions are more offended by ‘trade-offy/CU’ or ‘no-trade-offy/NU’ intuitions.
By ‘trade-offy’ and ‘no-trade-offy’, I’d like to emphasize that we mean trade-offs between isolated things. In other words, the diagrams of population ethics could just as well consist of causally isolated experience machines (“isolated Matrix-lives”), which is plausibly a confounding factor for our practical (“commonsense”) intuitions, as our practical intuitions are arguably adapted for trade-offs in an interpersonal (“relational”) world.
On the one hand:
“Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things (for some value of k)”
(a fortiori) “N awful things can be better than nothing providing they are counterbalanced by k*N good things (and N can be arbitrarily large, say a trillion awful lives).”
It’s very unclear to me how many people actually believe that any arbitrarily awful thing can be counterbalanced by sufficiently many (and/or awesome) isolated Matrix-lives, or other isolated goods. By default, I would assume that most people do not (want to) think about torture, and also do not properly respect the “all else being equal” assumption, and thereby would not count as votes of “informed consent” for those claims. Additionally, in at least one small Mechanical Turk survey about a tradeoff for people themselves, more than 40 percent of people said that they would not accept one minute of extreme suffering for any number of happy years added to their lives.
But on the other:
“No amount of good things (no matter how great their magnitude) can compensate for a single awful thing, no matter how astronomical the ratio (e.g. trillions to 1, TREE(3) to 1, whatever).”
(a fortiori) “No amount of great things can compensate for a single bad thing, no matter how small it is (e.g. pinpricks, a minute risk of an awful thing)”
The first claim (i.e. “a lexical minimalist component”) is precisely what has been defended in the philosophical (and fictional) literature. And again, this claim might be something that most people have not thought about, because only a minority of people have had first- or even second-person experience of an awful thing that might be defended as being categorically “impossible to compensate for with isolated goods”, such as torture.
(The second claim does not strictly follow from the first, which was about “awful” things; e.g. some SFE views hold that sufficiently awful things are lexical bads, but not that all kinds of tiny bads are. This is also relevant for the practical implications of lexical minimalist views with relational goods, on which pinpricks may be practically ignored unless they increase the risk of lexically bad things, whereas anything worthy of the name “great thing” would probably play positive roles to help reduce that risk.)
However, I am confident the aggregation views—basically orthogonal to this question—are indeed the main driver for folks finding the V/RC particularly repugnant. Compare: [...]
So appeals along the lines of “CU accepts the VRC, and—even worse—would accept even larger downsides if the compensating upside was composed of very- rather than marginally- happy lives” seem misguided, as this adaptation of the VRC aligns it better, not worse, with commonsense (if not minimalist) intuitions.
Here I would again note that our commonsense intuitions are arguably not adapted to track the isolated value of lives, and so we should be careful to make it clear that we are comparing e.g. isolated Matrix-lives. By default, I suspect that people may think of the happy populations as consisting of lives like their own or of people they know, which may implicitly involve a lot of effects on other lives.
Of course, the framings of “isolated Matrix-lives” or “experience machines” may themselves bring in connotations that can feel pejorative or dismissive with regard to the actual subjective experience of those lives, but my point is just to drive home the fact that these lives are, by hypothesis, radically devoid of any positive roles for others, or even for their future selves. And if people implicitly have a relational notion of positive value (e.g. if they think of positive value as implying an inverse causal relation to some subjective problems), then they may feel very differently about harms counterbalanced by isolated goods vs. harms counterbalanced by relational goods (of which minimalist views can endorse the latter).
To be clear, the inverse relations include not only subjective problems prevented by social relationships, but also e.g. any desirable effects on wild animals and future s-risks. Admittedly, probably neither of the latter two is a very commonsensical contributor to positive tradeoffs, but I’d guess that many people would likewise not find it intuitive to counterbalance astronomical harms with (“even greater amounts of”) isolated experience machines, or with a single “utility monster”. Arguably, all of these cases are also tricky to measure against people’s commonsense intuitions, given that not many people have thought about them in the first place.
Re. 3 I’ve read Budolfson & Spears, and as you note (*) it seems we can construct xVRCs which minimalist views (inc. those which introduce lexical thresholds) are susceptible to. (I also note they agree with me re. 1, e.g. s8: “Whenever aggregation is done over an unbounded space, repugnant outcomes inevitably occur”; they identify the underlying mechanism of repugnance as the ability to aggregate ε-changes.)
Yeah, we can formally construct xVRCs for minimalist views, including for lexical minimalist views, but my claim is that these are consistently less repugnant in like-like comparisons with symmetric views (relative to commonsense or widely shared intuitions). Specifically in the lexical minimalist xVRC — i.e. these comments which you refer to in your point #3 below — the tradeoff results in ever less (and less intense) suffering if followed repeatedly. By comparison, every symmetric xVRC would keep on increasing suffering if scaled up in an analogous way, which is arguably the most repugnant aspect of the VRC.
Additionally, this comment (upstream of the linked ones) points out a source of intra-personal repugnance in the symmetric cases, namely that CU-like views would be fine with the “marginally good” ε-lives being “roller coaster” lives that also contain a lot of extreme suffering:
One way to see that an ε increase could be very repugnant is to recall Portmore’s (1999) suggestion that ε lives in the restricted RC could be “roller coaster” lives, in which there is much that is wonderful, but also much terrible suffering, such that the good ever-so-slightly outweighs the bad [according to some symmetric view]. Here, one admitted possibility is that an ε-change could substantially increase the terrible suffering in a life, and also increase good components; such an ε-change is not the only possible ε-change, but it would have the consequence of increasing the total amount of suffering. … Moreover, if ε-changes are of the “roller coaster” form, they could increase deep suffering considerably beyond even the arbitrarily many [u < 0] lives, and in fact could require everyone in the chosen population to experience terrible suffering. [From Budolfson & Spears]
Of course, in some minimalist examples it is arguably repugnant to create extreme suffering to avoid a vast number of mildly problematic states. But I would claim that commonsense (and not only minimalist) intuitions would find even more repugnant the analogous symmetric case, namely to create extreme suffering for a vast number of mildly positive states which are not needed to relieve anyone’s burden. (The latter case may appear especially repugnant if the symmetric view in question would allow the mildly positive states to be “roller coaster” lives that are not even themselves free of, but would in fact contain a lot of, extreme suffering.) Consider, for instance, that:
A 2017 survey by FLI (n > 14,000) found that the goal people favored most as the ideal aim of a future civilization was “minimizing suffering”. This was the most popular aim by a large margin, ahead of “maximizing positive experiences”, and most of the people who favored this goal were probably not suffering while they responded to the survey.
According to some plausible moral views, the alleviation of suffering is more important, morally, than the promotion of happiness. According to other plausible moral views (such as classical utilitarianism), the alleviation of suffering is equally as important, morally, as the promotion of happiness. But there is no reasonable moral view on which the alleviation of suffering is less important than the promotion of happiness. So, under moral uncertainty, it’s appropriate to prefer to alleviate suffering rather than to promote happiness more often than the utilitarian would.
The above points do not tip the scales all the way in favor of minimalism over CU-variants, but they do suggest that common intuitions would not necessarily favor ‘additively aggregationist CU’ (even before looking at the respective x/VRCs for these views, let alone after considering the overall direction when we iterate such tradeoffs multiple times).
The replies minimalists can make here seem very ‘as good for the goose as the gander’ to me:
1. One could deny minimalism is susceptible to even xVRCs as one should drop aggregation/continuity/etc. Yet symmetric views should do the same, so one should explore whether on the margin of this atypical account of aggregation, minimalist axiologies are a net plus or minus to overall plausibility.
Agreed, although it is unclear whether continuous aggregation is in fact more typical. But since I’m interested in defending lexical minimalism (which many people already hold with a priority for extreme suffering), I’d be curious to hear if anyone has defended an analogous symmetric view, or how that view would be constructed in the first place. E.g., should I compare “priority for the worst-off” with a view that (also) entails “priority for the best-off”, even if no one (to my knowledge) defends the latter priority?
2. One could urge we shouldn’t dock points to a theory for counter-examples which are impractical/unrealistic, as the x/VRCs for minimalism fare much better than the x/VRCs for totalism. This would be quite a departure from my understanding of how the discussion proceeds in the literature, where the main concern is the ‘in principle’ determination for scenarios
The literature is mostly not written by people trying to figure out whether to prioritize the reduction of AW versus the reduction of s-risks. And once we accept some tradeoff in theory, it becomes relevant to ask if we would plausibly accept similar tradeoffs that could practically occur on an astronomical scale, for which the ε-changes could of course first be “enlarged” so as to make more practical sense. (At least I feel like none of my intended points depend on the ε-changes being tiny, nor on the base populations consisting of lives with mutually equal welfare, so I’m fine with discussing x/VRCs that are in those ways more realistic — especially if we account for the “roller coaster” aspects of more realistic lives.)
In other words, whether we affirm or reject the claim that purported positive goods can outweigh extreme suffering has great relevance for our priorities, whereas the question of whether lexical minimalist views are more plausible than non-lexical minimalist views has limited practical relevance, since the real-life implications (e.g. for ideal population sizes) are roughly convergent for minimalist views.
3. One could accept minimalism entails an xVRC, but this bullet is easier to bite than x/VRCs against symmetric views. Perhaps, but in which case we should probably pick the closest symmetric comparator (e.g. if they can’t play with thresholds, you should deal with Shulman-esque pinprick scenarios). I also note the appeals to plausibility made (here and in the comments you link) seem to be mostly re-statements of minimalism itself (e.g. that epsilon changes in misery count but epsilon changes in happiness don’t, ‘subjective perfection’ equated to neutrality, etc.)
Again, I’m happy to pick the closest symmetric view to compare with the minimalist priority for extreme suffering, but I’m still unsure what that view might be (and eager to hear if there is anything to be read about such views).
I don’t agree that the points about the minimalist xVRCs’ comparatively greater plausibility are mostly re-statements of minimalism itself. Rather, I claim that commonsense intuitions would favor the lexical minimalist xVRC — in which suffering is “spread more equally between those who already exist and those who do not” (and eventually minimized if iterated) — over any symmetric xVRC of “expanding hell to help the best-off”. (In other words, even if one finds it somewhat plausible that happiness has independent value, or value in isolation, it still seems that the symmetric xVRCs are worse than the minimalist xVRC.)
(For subjective perfection equated with the absence of something, I was thinking of tranquilism as a need-based account of the isolated value of different experiential states, which is centered on cravings to change one’s subjective experience.)
Finally, I’m afraid I can’t really see much substantive merit in the ‘relational goods’ approach. Minimalism (like SFE and NU) straightforwardly offends the naive intuition that happiness is indeed ‘better than nothing’, and I don’t find relational attempts to undercut this by offering an account of these being roundabout ways/policies of reducing problems either emotionally satisfying (e.g. All the rich relationships between members of a community may make everyone have ‘lives worth living’ in the sense that ‘without me these other people would be worse off’, but minimalism appears still committed to the dispiriting claim that this rich tapestry of relationships is still worse than nothing) or intellectually credible
(Strictly speaking, minimalism is a category that contains NU but only overlaps with SFE; some SFE views may recognize isolated positive value even as they prioritize reducing suffering, and e.g. Fehige’s view represents a preference-based instead of suffering-focused minimalism.)
About the naive intuition that happiness is indeed ‘better than nothing’, I’m curious if that really applies also for isolated Matrix-lives (for most people). As I’ve noted in this section, by focusing on isolated value we may often underestimate the relational value of some goods, which may be greater than the amount of intrinsic value we perceive them to have.
About the relational account having dispiriting or emotionally unsatisfying implications, those can also be compared between views (to the extent that they matter for the plausibility of axiological views). E.g., on minimalist views, unlike CU-like views, it’s not a tragedy or atrocity if we fail to reduce astronomical waste. In this sense, minimalist views may be less dispiriting than CU-like views. Moreover, I’d practically emphasize that our positive roles need not be limited to the confines of our social communities, but extend all the way to those communities’ effects on things like factory farming, wild-animal suffering, and the risks of future suffering (and thus potentially match or even exceed our commonsense feelings about the positive value of many lives, even if this would formally consist of “only” relational instead of independently positive value).
However, we should also be careful to account for our personal emotional responses to the implications of a given axiology. By analogy with empirical claims, we would probably want our views on (e.g.) global catastrophic risks to be unaffected by whether we find them dispiriting or not. Similarly, we should arguably account for such feelings in our axiological considerations of what, if anything, would constitute an axiologically positive life in causal isolation (and, specifically, what would constitute a life capable of counterbalancing the suffering of others without the consent of the latter).
(Edit: Added a note(*) on minimalist views and the extended VRC of Budolfson & Spears.)
Thanks for highlighting an important section for discussion. Let me try to respond to your points. (I added the underline in them just to unburden the reader’s working memory.)
The quoted passage contained many claims; which one(s) seemed wrong to you?
My argument was rather the other way around. Namely, if we accept any kind of counterbalancing of harms with isolated goods, then CU-like views would imply that it is net positive to create space colonies that are at least as good as the hellish + barely positive lives of the VRC. And given arguments like astronomical waste (AW) (Bostrom, 2003), the justified harm could be arbitrarily vast as long as the isolated positive lives are sufficiently numerous. (Tomasik’s Omelas article does not depend on the VRC, but speaks of the risk of astronomical harms given the views of Bostrom, which was also my intended focus.)
(To avoid needless polarization and promote fruitful dialogue, I think it might be best to generally avoid using “disjointing” territorial metaphors such as “SFE-land” or “CU-land”, not least considering the significant common ground among people in the EA(-adjacent) community.)
For minimalist views, there is a very relevant difference between the RC and VRC, which is that the RC can be non-problematic (provided that we assume that the lives “never suffer”, cf. footnote 16 here), but minimalist views would always reject the VRC. For minimalist views, the (severe) suffering is, of course, the main concern. My point about the VRC was to highlight how CU can justify astronomical harms even for (supposedly) barely positive isolated lives, and an even bigger commonsensical worry is how much harm it can justify for (supposedly) greatly positive isolated lives.
It seems true that more people would find that more plausible. Even so, this is precisely what minimalists may find worrying about the CU approach to astronomical tradeoffs, namely that astronomical harms can be justified by the creation of sufficiently many instances of isolated goods.
Additionally, I feel like the point above applies more to classical utilitarianism (the view) than to the views of actual classical utilitarians, not to mention people who are mildly sympathetic to CU, which seems a particularly relevant group in this context given that they may represent an even larger number of people in the EA(-adjacent) community.
After all, CU-like views contain a minimalist (sub)component, and probably many self-identified CUs and CU-sympathetic people would thereby be at least more than a “little” troubled by the implication that astronomical amounts of hellish lives — e.g. vastly more suffering than what has occurred on Earth to date — would be a worthwhile tradeoff for (greater) astronomical amounts of wonderful lives (what minimalist views would frame as unproblematic lives), especially given that the alternative was a wonderful (unproblematic) population with no hellish lives.
(For what it’s worth, I used to feel drawn to a CU axiology until I became too troubled by the logic of counterbalancing harm for some with isolated good for others. For many people on the fence, the core problem is probably this kind of counterbalancing itself, which is independent of the VRC but of course also clearly illustrated by it.)
Of course, minimalist views (as explored here) would deny all counterbalancing of severe problems with isolated goods, independent of the VRC.
The Mere-Addition Paradox, RC, and VRC are often-discussed problems to which minimalist views may provide satisfying answers. The first two were included in the post for many reasons, and not only as a build-up to the VRC. The build-up was also not meant to end with the VRC, but instead to further motivate the question of how much harm can be justified to reduce astronomical waste (AW).
If CU-like views can justify the creation of a lot of hellish lives even for vast amounts of isolated value-containers that have only “barely positive” contents (the VRC), then how much more hellish lives can they supposedly counterbalance once those containers are filled (cf. AW)?
MichaelStJules already responded to this in the sibling comment. Additionally, I would again emphasize that the main worry is not so much the practical manifestation of the VRC in particular, but more the extent to which much worse problems might be justified by CU-like views given the creation of supposedly even greater amounts of isolated goods (i.e. reducing AW).
MichaelStJules already mentioned an arbitrariness objection along those lines. Additionally, my impressions (based on Budolfson & Spears, 2018) are that “the VRC cannot be avoided by any leading welfarist axiology despite prior consensus in the literature to the contrary” and that “[the extended] VRC cannot be avoided by any other welfarist axiology in the literature.”
The literature they surveyed did not include minimalist views(*). Did they also omit some CU-like views, or are the VRC-rejecting CU-like views not defended by anyone in the literature?
This again leaves me wondering: Are all of the VRC-rejecting CU-like views so arbitrary or counterintuitive that people will just rather accept the VRC? And will even the most attractive of those views still justify astronomical harms for a sufficiently high amount of isolated lives that are “taller” than those in the VRC?
This does not ease the worry that CU-like views can justify astronomically large harms in order to create isolated positive lives that never needed to exist in the first place.
First, in terms of practical relevance, one could argue that the choice to “prefer hell to prevent an imperfect heaven” is much more speculative and unlikely than is the VRC for CU-like views, not to mention the likelihood of CU justifying astronomical harms for supposedly greater goods regardless of the VRC (i.e. for reducing AW). In other words, the former can much more plausibly be disregarded as practically irrelevant than can the latter.
Second, lexical views do indeed avoid the conclusion in question, but these need not entail abrupt thresholds (per the arguments here and here), and even if they do, the threshold need not be an arbitrary or ad hoc move. For example, one could hold that there is a difference between psychologically consentable and unconsentable suffering, which is normally ignored by the logic of additive aggregationism. Moreover, the OP entails no commitment to additive aggregationism, as it only specifies that the minimalist views in question are monist, impartial, and welfarist.
First, I am happy to compare like views in this way in my forthcoming post. I would greatly appreciate it if people were to present or refer me to specific such views to be compared.
Second, the point above may seem to imply that there is a symmetry between these lexical adaptations, i.e. that we can “similarly” construct lexical minimalism and lexical symmetric totalism (if you allow the short expression). Yet the fact that we can make formally symmetric constructions for these different views does not imply that the respective plausibility of these constructions is symmetric at the substantive level. In this sense, what is good for the goose may do nothing for the gander. (But again, I’m happy to explore the possibility that it might.)
Specifically, how would one set the threshold(s) on the lexical symmetric view in a non-arbitrary way, and has anyone presented and defended plausible versions of such views?
Furthermore, most people would probably find it much more plausible that some harms cannot be counterbalanced by any amount of isolated goods (“a lexical minimalist component”), than that some goods can counterbalance any amount of isolated harms (a similarly lexical positive component). At least I’ve never heard anyone defend or outline the latter kind of view. (By contrast, beyond examples in academic philosophy, there are numerous examples in literature hinting at “minimalist lexicality”.)
Overall, I remain worried about the vast harms that CU-like views could justify for the supposed greater good, also considering that even you feel inclined to rather accept the VRC than deal with the apparently arbitrary or counterintuitive features of the versions of CU-like views that avoid it. (And if one proposes a positive lexical threshold, it seems that above the lexical threshold there is always a higher isolated good that can justify vast harms.)
Lastly, why do we need to “accept trading off suffering for sufficient (non-trivial) [isolated] happiness” in the first place? Would not a relational account of the value of happiness suffice? What seems to be the problem with relational goods, without isolated goods?
(*) A note on minimalist views and the extended VRC of Budolfson & Spears (2018).
Strictly speaking, the extended VRC in the formulation of Budolfson & Spears does not pertain to minimalist views, because they say “u^h>0” (i.e. strictly greater than zero). So minimalist views fall outside of the domain that they draw conclusions for.
But if we allow the “high-utility lives” to be exactly zero, or even less than zero, then their conclusion would also hold for (continuous, aggregationist) minimalist views. (But the conclusion arguably also becomes much less implausible in the minimalist case compared to the symmetric case, cf. the final point below.)
So it (also) holds for continuous aggregationist minimalist views that there exists a base population “such that it is better to both add to the base population the negative-utility lives and cause [a sufficiently large number of] ε-changes”.
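The arithmetic behind this conclusion can be sketched as follows (a simplified illustration in my own notation, not Budolfson & Spears’ exact formulation):

```latex
% Assume continuous aggregationist minimalism: aggregate welfare is
%   W = -\sum_i s_i ,
% where s_i \ge 0 is the suffering in life i (isolated goods carry no weight).
% Add k hellish lives with suffering H each, and give an \varepsilon-change
% (a reduction of suffering by \varepsilon > 0) to each of n lives in the
% base population. The net change is
\Delta W = -kH + n\varepsilon > 0
\quad\text{whenever}\quad n > \frac{kH}{\varepsilon},
% so for any k and H, a sufficiently large base population makes the
% combined addition better on this view.
```

Note that even in this sketch, the ε-changes still reduce the overall burden of suffering; the compensating lives do not add isolated goods that relieve no one’s burden.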
But beyond questioning the continuous aggregationist component of these views (indeed a possibility that lies open to many kinds of views with such a component), and beyond questioning the practical relevance of this conclusion for minimalist views versus for symmetric views (as I do above), one may further argue that the conclusion is significantly more plausible in the minimalist case than in the case where we allow torture for the sake of isolated, purported goods that arguably do not need to exist. For in the minimalist case, the overall burden of subjective problems is still lessened (assuming continuous aggregationist minimalism). We are not creating extreme suffering for the mere sake of isolated, “unrelieving” goods.
Thanks for the reply, and with apologies for brevity.
Re. 1 (i.e. “The primary issue with the VRC is aggregation rather than trade-off”). I take it we should care about the plausibility of axiological views with respect to something like ‘commonsense’ intuitions, rather than those a given axiology urges us to adopt. It’s at least opaque to me whether commonsense intuitions are more offended by ‘trade-offy/CU’ or ‘no-trade-offy/NU’ claims. On the one hand:
“Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things (for some value of k)”
(a fortiori) “N awful things can be better than nothing providing they are counterbalanced by k*N good things (and N can be arbitrarily large, say a trillion awful lives).”
But on the other:
“No amount of good things (no matter how great their magnitude) can compensate for a single awful thing, no matter how astronomical the ratio (e.g. trillions to 1, TREE(3) to 1, whatever).”
(a fortiori) “No amount of great things can compensate for a single bad thing, no matter how small it is (e.g. pinpricks, a minute risk of an awful thing)”
However, I am confident the aggregation views—basically orthogonal to this question—are indeed the main driver for folks finding the V/RC particularly repugnant. Compare:
1. 1 million great lives vs. 1 million terrible lives and a quadrillion great lives.
2. 1 thousand great lives vs. 1 thousand terrible lives and TREE(3) marginally good lives.
A minimalist view may well be concerned with increasing the amount of aggregate harm in 1 vs. 2, and so worry that (re. 2) if CU was willing to accept this, it would accept a lot more aggregate harm if we increase the upside to more than compensate (e.g. TREE(3) great lives). Yet I aver commonsense intuitions favour 1 over 2, and would find more palatable those variants of 2 where the downside is increased but the upside is smaller and more concentrated (e.g. a trillion great lives).
So appeals along the lines of “CU accepts the VRC, and—even worse—would accept even larger downsides if the compensating upside was composed of very- rather than marginally- happy lives” seems misguided, as this adaptation of the VRC aligns it better, not worse, with commonsense (if not minimalist) intuitions.
Re. 3 I’ve read Budolfson & Spears, and as you note (*) it seems we can construct xVRCs which minimalist views (inc. those which introduce lexical thresholds) are susceptible to. (I also note they agree with me re. 1 - e.g. s8: “Whenever aggregation is done over an unbounded space, repugnant outcomes inevitably occur”; their identification with the underlying mechanism for repugnance being able to aggregate e-changes.)
The replies minimalists can make here seem very ‘as good for the goose as the gander’ to me:
1. One could deny minimalism is susceptible to even xVRCs, as one should drop aggregation/continuity/etc. Yet symmetric views should do the same, so one should explore whether, on the margin of this atypical account of aggregation, minimalist axiologies are a net plus or minus to overall plausibility.
2. One could urge that we shouldn’t dock points to a theory for counter-examples which are impractical/unrealistic, and that the x/VRCs for minimalism fare much better than the x/VRCs for totalism. This would be quite a departure from my understanding of how the discussion proceeds in the literature, where the main concern is the ‘in principle’ determination for scenarios (I don’t ever recall, e.g., replies for averagism along the lines of “But there’d never be a realistic scenario where we’d actually find ourselves minded to add net-negative lives to improve average utility”). In any case, a lot of the xVRCs applicable to CU-variants require precisely stipulated ‘base populations’, so they’re presumably also ‘in the clear’ by this criterion.
3. One could accept that minimalism entails an xVRC, but hold that this bullet is easier to bite than the x/VRCs against symmetric views. Perhaps, but in which case we should probably pick the closest symmetric comparator (e.g. if they can’t play with thresholds, you should deal with Shulman-esque pinprick scenarios). I also note the appeals to plausibility made (here and in the comments you link) seem to be mostly re-statements of minimalism itself (e.g. that epsilon changes in misery count but epsilon changes in happiness don’t, ‘subjective perfection’ equated to neutrality, etc.). “Conditional on minimalist intuitions, minimalism has no truly counter-intuitive results” is surely true, but also question-begging to folks who don’t share them (compare a totalist asserting the VRC is much less counter-intuitive than minimalist-xVRCs since, ‘obviously’, wellbeing can be greater than zero, and axiology shouldn’t completely discount unbounded amounts of it in evaluation).
[Finally, I’m afraid I can’t really see much substantive merit in the ‘relational goods’ approach. Minimalism (like SFE and NU) straightforwardly offends the naive intuition that happiness is indeed ‘better than nothing’, and I don’t find relational attempts to undercut this by offering an account of these being roundabout ways/policies of reducing problems either emotionally satisfying (e.g. All the rich relationships between members of a community may make everyone have ‘lives worth living’ in the sense that ‘without me these other people would be worse off’, but minimalism appears still committed to the dispiriting claim that this rich tapestry of relationships is still worse than nothing) or intellectually credible (cf. virtually everyone’s expressed and implied preferences suggest non-assent to ‘no-trade-off’ views).
Similarly, I think assessing ‘isolated’ goods as typical population cases do is a good way to dissect out the de/merits of different theories, and noting our evaluation changes as we add in a lot of ‘practical’ considerations seems apt to muddy the issue again (for example, I’d guess various ‘practical elaborations’ of the V/RC would make it appear more palatable, but I don’t think this is a persuasive reply).
I focus on the ‘pure’ population ethics as “I don’t buy it” is barren ground for discussion.]
Thanks for the reply!
Agreed, and this is also why I focus on the psychological and practical implications of axiological views, and not only on their theoretical implications. Especially in the EA(-adjacent) community, it seems common to me that the plausibility of theoretical views is assessed partly based on the plausibility of their practical implications, which tap into important intuitions beyond those involved at the purely abstract level.
E.g., people may bite bullets in theory to retain a consistent view, but still never bite those bullets in practice due to some still unarticulated reasons, which may indicate an inconsistency between their explicit and implicit axiology.
By ‘trade-offy’ and ‘no-trade-offy’, I’d like to emphasize that we mean trade-offs between isolated things. In other words, the diagrams of population ethics could just as well consist of causally isolated experience machines (“isolated Matrix-lives”), which is plausibly a confounding factor for our practical (“commonsense”) intuitions, as our practical intuitions are arguably adapted for trade-offs in an interpersonal (“relational”) world.
It’s very unclear to me how many people actually believe that any arbitrarily awful thing can be counterbalanced by sufficiently many (and/or awesome) isolated Matrix-lives, or other isolated goods. By default, I would assume that most people do not (want to) think about torture, and also do not properly respect the “all else being equal” assumption, and thereby would not count as votes of “informed consent” for those claims. Additionally, in at least one small Mechanical Turk survey about a tradeoff for people themselves, more than 40 percent of people said that they would not accept one minute of extreme suffering for any number of happy years added to their lives.
The first claim (i.e. “a lexical minimalist component”) is precisely what has been defended in the philosophical (and fictional) literature. And again, this claim might be something that most people have not thought about, because only a minority of people have had first- or even second-person experience of an awful thing that might be defended as being categorically “impossible to compensate for with isolated goods”, such as torture.
(The second claim does not strictly follow from the first, which was about “awful” things; e.g. some SFE views hold that sufficiently awful things are lexical bads, but not that all kinds of tiny bads are. This is also relevant for the practical implications of lexical minimalist views with relational goods, on which pinpricks may be practically ignored unless they increase the risk of lexically bad things, whereas anything worthy of the name “great thing” would probably play positive roles to help reduce that risk.)
Here I would again note that our commonsense intuitions are arguably not adapted to track the isolated value of lives, and so we should be careful to make it clear that we are comparing e.g. isolated Matrix-lives. By default, I suspect that people may think of the happy populations as consisting of lives like their own or of people they know, which may implicitly involve a lot of effects on other lives.
Of course, the framings of “isolated Matrix-lives” or “experience machines” may themselves bring in connotations that can feel pejorative or dismissive with regard to the actual subjective experience of those lives, but my point is just to drive home the fact that these lives are, by hypothesis, radically devoid of any positive roles for others, or even for their future selves. And if people implicitly have a relational notion of positive value (e.g. if they think of positive value as implying an inverse causal relation to some subjective problems), then they may feel very differently about harms counterbalanced by isolated goods vs. harms counterbalanced by relational goods (of which minimalist views can endorse the latter).
To be clear, the inverse relations include not only subjective problems prevented by social relationships, but also e.g. any desirable effects on wild animals and future s-risks. Admittedly, probably neither of the latter two is a very commonsensical contributor to positive tradeoffs, but I’d guess that many people would likewise not find it intuitive to counterbalance astronomical harms with (“even greater amounts of”) isolated experience machines, or with a single “utility monster”. Arguably, all of these cases are also tricky to measure against people’s commonsense intuitions, given that not many people have thought about them in the first place.
Yeah, we can formally construct xVRCs for minimalist views, including for lexical minimalist views, but my claim is that these are consistently less repugnant in like-like comparisons with symmetric views (relative to commonsense or widely shared intuitions). Specifically in the lexical minimalist xVRC — i.e. these comments which you refer to in your point #3 below — the tradeoff results in ever less (and less intense) suffering if followed repeatedly. By comparison, every symmetric xVRC would keep on increasing suffering if scaled up in an analogous way, which is arguably the most repugnant aspect of the VRC.
Additionally, this comment (upstream of the linked ones) points out a source of intra-personal repugnance in the symmetric cases, namely that CU-like views would be fine with the “marginally good” ε-lives being “roller coaster” lives that also contain a lot of extreme suffering:
Of course, in some minimalist examples it is arguably repugnant to create extreme suffering to avoid a vast number of mildly problematic states. But I would claim that commonsense (and not only minimalist) intuitions would find even more repugnant the analogous symmetric case, namely to create extreme suffering for a vast number of mildly positive states which are not needed to relieve anyone’s burden. (The latter case may appear especially repugnant if the symmetric view in question would allow the mildly positive states to be “roller coaster” lives that are not even themselves free of, but would in fact contain a lot of, extreme suffering.) Consider, for instance, that:
A 2017 survey by FLI (n > 14,000) found that the goal people favored most as the ideal aim of a future civilization was “minimizing suffering”. This was the most popular aim by a large margin, ahead of “maximizing positive experiences”, and most of the people who favored this goal were probably not suffering while they responded to the survey.
The authors of Moral Uncertainty write (p. 185):
The above points do not tip the scales all the way in favor of minimalism over CU-variants, but they do suggest that common intuitions would not necessarily favor ‘additively aggregationist CU’ (even before looking at the respective x/VRCs for these views, let alone after considering the overall direction when we iterate such tradeoffs multiple times).
Agreed, although it is unclear whether continuous aggregation is in fact more typical. But since I’m interested in defending lexical minimalism (which many people already hold with a priority for extreme suffering), I’d be curious to hear if anyone has defended an analogous symmetric view, or how that view would be constructed in the first place. E.g., should I compare “priority for the worst-off” with a view that (also) entails “priority for the best-off”, even if no one (to my knowledge) defends the latter priority?
The literature is mostly not written by people trying to figure out whether to prioritize the reduction of AW versus the reduction of s-risks. And once we accept some tradeoff in theory, it becomes relevant to ask if we would plausibly accept similar tradeoffs that could practically occur on an astronomical scale, for which the e-changes could of course first be “enlarged” so as to make more practical sense. (At least I feel like none of my intended points depend on the e-changes being tiny, nor on the base populations consisting of lives with mutually equal welfare, so I’m fine with discussing x/VRCs that are in those ways more realistic — especially if we account for the “roller coaster” aspects of more realistic lives.)
In other words, whether we affirm or reject the claim that purported positive goods can outweigh extreme suffering has great relevance for our priorities, whereas the question of whether lexical minimalist views are more plausible than non-lexical minimalist views has limited practical relevance, since the real-life implications (e.g. for ideal population sizes) are roughly convergent for minimalist views.
Again, I’m happy to pick the closest symmetric view to compare with the minimalist priority for extreme suffering, but I’m still unsure what that view might be (and eager to hear if there is anything to be read about such views).
I don’t agree that the points about the minimalist xVRCs’ comparatively greater plausibility are mostly re-statements of minimalism itself. Rather, I claim that commonsense intuitions would favor the lexical minimalist xVRC — in which suffering is “spread more equally between those who already exist and those who do not” (and eventually minimized if iterated) — over any symmetric xVRC of “expanding hell to help the best-off”. (In other words, even if one finds it somewhat plausible that happiness has independent value, or value in isolation, it still seems that the symmetric xVRCs are worse than the minimalist xVRC.)
(For subjective perfection equated with the absence of something, I was thinking of tranquilism as a need-based account of the isolated value of different experiential states, which is centered on cravings to change one’s subjective experience.)
(Strictly speaking, minimalism is a category that contains NU but only overlaps with SFE; some SFE views may recognize isolated positive value even as they prioritize reducing suffering, and e.g. Fehige’s view represents a preference-based instead of suffering-focused minimalism.)
About the naive intuition that happiness is indeed ‘better than nothing’, I’m curious whether, for most people, it really applies to isolated Matrix-lives as well. As I’ve noted in this section, by focusing on isolated value we may often underestimate the relational value of some goods, which may be greater than the amount of intrinsic value we perceive them to have.
About the relational account having dispiriting or emotionally unsatisfying implications, those can also be compared between views (to the extent that they matter for the plausibility of axiological views). E.g., on minimalist views, unlike CU-like views, it’s not a tragedy or atrocity if we fail to reduce astronomical waste. In this sense, minimalist views may be less dispiriting than CU-like views. Moreover, in practice I’d emphasize that our positive roles need not be limited to the confines of our social communities, but extend all the way to those communities’ effects on things like factory farming, wild-animal suffering, and the risks of future suffering (and thus potentially match or even exceed our commonsense feelings about the positive value of many lives, even if this would formally consist of “only” relational instead of independently positive value).
However, we should also be careful to account for our personal emotional responses to the implications of a given axiology. By analogy with empirical claims, we would probably want our views on (e.g.) global catastrophic risks to be unaffected by whether we find them dispiriting or not. Similarly, we should arguably account for such feelings in our axiological considerations of what, if anything, would constitute an axiologically positive life in causal isolation (and, specifically, what would constitute a life capable of counterbalancing the suffering of others without the consent of the latter).