Thanks for the reply, and with apologies for brevity.
Re. 1 (i.e. "The primary issue with the VRC is aggregation rather than trade-off"). I take it we should care about the plausibility of axiological views with respect to something like "commonsense" intuitions, rather than those a given axiology urges us to adopt. It's at least opaque to me whether commonsense intuitions are more offended by "trade-offy/CU" or "no-trade-offy/NU" intuitions. On the one hand:
"Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things (for some value of k)"
(a fortiori) "N awful things can be better than nothing providing they are counterbalanced by k*N good things (and N can be arbitrarily large, say a trillion awful lives)."
But on the other:
"No amount of good things (no matter how great their magnitude) can compensate for a single awful thing, no matter how astronomical the ratio (e.g. trillions to 1, TREE(3) to 1, whatever)."
(a fortiori) "No amount of great things can compensate for a single bad thing, no matter how small it is (e.g. pinpricks, a minute risk of an awful thing)."
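To put the two schemas in symbols (a toy formalization; the welfare magnitudes g, b > 0 and the ratio k are placeholders, not part of the original claims): under additive CU, N awful things at welfare -b plus kN good things at welfare +g sum to

$$W = kN \cdot g - N \cdot b = N(kg - b),$$

which is positive for every b once k > b/g, so some finite ratio always suffices. The lexical "no-trade-off" schema instead ranks outcomes first by their count of awful things and only then by everything else, so no value of k, however astronomical, reverses the comparison.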
However, I am confident the aggregation views – basically orthogonal to this question – are indeed the main driver for folks finding the V/RC particularly repugnant. Compare:
1. 1 million great lives vs. 1 million terrible lives and a quadrillion great lives.
2. 1 thousand great lives vs. 1 thousand terrible lives and TREE(3) marginally good lives.
A minimalist view may well be concerned with increasing the amount of aggregate harm in 1 vs. 2, and so worry that (re. 2) if CU were willing to accept this, it would accept a lot more aggregate harm if we increase the upside to more than compensate (e.g. TREE(3) great lives). Yet I aver commonsense intuitions favour 1 over 2, and would find variants of 2 where the downside is increased but the upside is smaller yet concentrated (e.g. a trillion great lives) more palatable.
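A toy numerical sketch of these comparisons (the welfare levels are illustrative assumptions of mine, and TREE(3) is stood in by a mere 10^30, since it has no computable literal):

```python
# Toy comparison of the two cases under additive total utilitarianism (CU)
# versus a simple minimalist tally that counts only aggregate harm.
# Welfare levels are illustrative assumptions: great = +100, terrible = -100,
# marginally good = +0.001. TREE(3) is stood in by a huge placeholder.

GREAT, TERRIBLE, MARGINAL = 100.0, -100.0, 0.001
TREE3 = 1e30  # placeholder; the real TREE(3) is unimaginably larger

def cu_total(groups):
    """Total welfare under CU: sum over (count, welfare level) pairs."""
    return sum(n * w for n, w in groups)

def aggregate_harm(groups):
    """Minimalist tally: sum of negative welfare only (closer to 0 is better)."""
    return sum(n * w for n, w in groups if w < 0)

cases = {
    "Case 1": ([(1e6, GREAT)], [(1e6, TERRIBLE), (1e15, GREAT)]),
    "Case 2": ([(1e3, GREAT)], [(1e3, TERRIBLE), (TREE3, MARGINAL)]),
}
for label, (a, b) in cases.items():
    cu = "B" if cu_total(b) > cu_total(a) else "A"
    mini = "B" if aggregate_harm(b) > aggregate_harm(a) else "A"
    print(f"{label}: CU prefers {cu}, minimalist tally prefers {mini}")
```

On these stipulations CU endorses the trade in both cases and the minimalist tally rejects it in both, i.e. neither axiology by itself distinguishes 1 from 2 – consistent with the thought that the aggregation pattern (marginal vs. great lives), rather than the bare trade-off, is what commonsense intuitions react to.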
So appeals along the lines of "CU accepts the VRC, and – even worse – would accept even larger downsides if the compensating upside was composed of very- rather than marginally-happy lives" seem misguided, as this adaptation of the VRC aligns it better, not worse, with commonsense (if not minimalist) intuitions.
Re. 3 I've read Budolfson & Spears, and as you note (*) it seems we can construct xVRCs which minimalist views (inc. those which introduce lexical thresholds) are susceptible to. (I also note they agree with me re. 1, e.g. s8: "Whenever aggregation is done over an unbounded space, repugnant outcomes inevitably occur", and their identification of the underlying mechanism of repugnance as the ability to aggregate ε-changes.)
The replies minimalists can make here seem very "as good for the goose as the gander" to me:
1. One could deny minimalism is susceptible to even xVRCs, as one should drop aggregation/continuity/etc. Yet symmetric views should do the same, so one should explore whether, on the margin of this atypical account of aggregation, minimalist axiologies are a net plus or minus to overall plausibility.
2. One could urge we shouldn't dock points to a theory for counter-examples which are impractical/unrealistic, and that the x/VRCs for minimalism fare much better than the x/VRCs for totalism on this score. This would be quite a departure from my understanding of how the discussion proceeds in the literature, where the main concern is the "in principle" determination for scenarios (I don't ever recall, e.g., replies for averagism along the lines of "But there'd never be a realistic scenario where we'd actually find ourselves minded to add net-negative lives to improve average utility"). In any case, a lot of the xVRCs applicable to CU-variants require precisely stipulated "base populations", so they're presumably also "in the clear" by this criterion.
3. One could accept minimalism entails an xVRC, but hold this bullet is easier to bite than the x/VRCs against symmetric views. Perhaps, but in which case we should probably pick the closest symmetric comparator (e.g. if they can't play with thresholds, you should deal with Shulman-esque pinprick scenarios). I also note the appeals to plausibility made (here and in the comments you link) seem to be mostly re-statements of minimalism itself (e.g. that epsilon changes in misery count but epsilon changes in happiness don't, "subjective perfection" equated to neutrality, etc.). "Conditional on minimalist intuitions, minimalism has no truly counter-intuitive results" is surely true, but also question-begging to folks who don't share them (compare a totalist asserting the VRC is much less counter-intuitive than minimalist xVRCs as, "obviously", wellbeing can be greater than zero, and axiology shouldn't completely discount unbounded amounts of it in evaluation).
[Finally, I'm afraid I can't really see much substantive merit in the "relational goods" approach. Minimalism (like SFE and NU) straightforwardly offends the naive intuition that happiness is indeed "better than nothing", and I don't find relational attempts to undercut this by offering an account of these being roundabout ways/policies of reducing problems either emotionally satisfying (e.g. all the rich relationships between members of a community may make everyone have "lives worth living" in the sense that "without me these other people would be worse off", but minimalism appears still committed to the dispiriting claim that this rich tapestry of relationships is still worse than nothing) or intellectually credible (cf. virtually everyone's expressed and implied preferences suggest non-assent to "no-trade-off" views).
Similarly, I think assessing "isolated" goods as typical population cases do is a good way to dissect out the de/merits of different theories, and noting our evaluation changes as we add in a lot of "practical" considerations seems apt to muddy the issue again (for example, I'd guess various "practical elaborations" of the V/RC would make it appear more palatable, but I don't think this is a persuasive reply).
I focus on the "pure" population ethics as "I don't buy it" is barren ground for discussion.]
Thanks for the reply!

> Re. 1 (i.e. "The primary issue with the VRC is aggregation rather than trade-off"). I take it we should care about the plausibility of axiological views with respect to something like "commonsense" intuitions, rather than those a given axiology urges us to adopt.
Agreed, and this is also why I focus on the psychological and practical implications of axiological views, and not only on their theoretical implications. Especially in the EA(-adjacent) community, it seems common for the plausibility of theoretical views to be assessed partly based on the plausibility of their practical implications, which tap into further important intuitions beyond what may be involved at the purely abstract level.
E.g., people may bite bullets in theory to retain a consistent view, yet never bite those bullets in practice for reasons still unarticulated, which may indicate an inconsistency between their explicit and implicit axiology.
> It's at least opaque to me whether commonsense intuitions are more offended by "trade-offy/CU" or "no-trade-offy/NU" intuitions.
By "trade-offy" and "no-trade-offy", I'd like to emphasize that we mean trade-offs between isolated things. In other words, the diagrams of population ethics could just as well consist of causally isolated experience machines ("isolated Matrix-lives"), which is plausibly a confounding factor for our practical ("commonsense") intuitions, as our practical intuitions are arguably adapted for trade-offs in an interpersonal ("relational") world.
> On the one hand:
> "Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things (for some value of k)"
> (a fortiori) "N awful things can be better than nothing providing they are counterbalanced by k*N good things (and N can be arbitrarily large, say a trillion awful lives)."
It's very unclear to me how many people actually believe that any arbitrarily awful thing can be counterbalanced by sufficiently many (and/or awesome) isolated Matrix-lives, or other isolated goods. By default, I would assume that most people do not (want to) think about torture, and also do not properly respect the "all else being equal" assumption, and thereby would not count as votes of "informed consent" for those claims. Additionally, in at least one small Mechanical Turk survey about a tradeoff for people themselves, more than 40 percent of people said that they would not accept one minute of extreme suffering for any number of happy years added to their lives.
> But on the other:
> "No amount of good things (no matter how great their magnitude) can compensate for a single awful thing, no matter how astronomical the ratio (e.g. trillions to 1, TREE(3) to 1, whatever)."
> (a fortiori) "No amount of great things can compensate for a single bad thing, no matter how small it is (e.g. pinpricks, a minute risk of an awful thing)."
The first claim (i.e. "a lexical minimalist component") is precisely what has been defended in the philosophical (and fictional) literature. And again, this claim might be something that most people have not thought about, because only a minority of people have had first- or even second-person experience of an awful thing that might be defended as being categorically "impossible to compensate for with isolated goods", such as torture.
(The second claim does not strictly follow from the first, which was about "awful" things; e.g. some SFE views hold that sufficiently awful things are lexical bads, but not that all kinds of tiny bads are. This is also relevant for the practical implications of lexical minimalist views with relational goods, on which pinpricks may be practically ignored unless they increase the risk of lexically bad things, whereas anything worthy of the name "great thing" would probably play positive roles to help reduce that risk.)
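For concreteness, here is a minimal sketch of how such a two-tier (lexical-threshold) comparison can be structured; the threshold value, the welfare numbers, and the ordinary summation of the non-lexical tier are all assumptions of mine for illustration, not features of any particular SFE view:

```python
# Minimal sketch of a lexical-threshold comparison (an illustrative
# construction, not any particular published view). Welfare below
# LEXICAL_THRESHOLD counts as a lexical bad; everything else is aggregated
# ordinarily. Outcomes compare lexicographically: first by total lexical bad,
# then by the remainder.

LEXICAL_THRESHOLD = -90.0  # assumed cutoff for "sufficiently awful"

def lexical_key(lives):
    """Ranking key: (total welfare below threshold, total remaining welfare).
    Python compares tuples element by element, mirroring lexical priority."""
    lexical_bad = sum(w for w in lives if w <= LEXICAL_THRESHOLD)
    rest = sum(w for w in lives if w > LEXICAL_THRESHOLD)
    return (lexical_bad, rest)

def at_least_as_good(a, b):
    """True if outcome a ranks at least as high as outcome b."""
    return lexical_key(a) >= lexical_key(b)

# A pinprick (-0.01) trades off ordinarily against modest goods...
print(at_least_as_good([-0.01, 5.0], [0.0]))              # True
# ...but no pile of goods outweighs a single life below the threshold.
print(at_least_as_good([-100.0] + [10.0] * 10**6, [0.0])) # False
```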
> However, I am confident the aggregation views – basically orthogonal to this question – are indeed the main driver for folks finding the V/RC particularly repugnant. Compare: [...]
> So appeals along the lines of "CU accepts the VRC, and – even worse – would accept even larger downsides if the compensating upside was composed of very- rather than marginally-happy lives" seem misguided, as this adaptation of the VRC aligns it better, not worse, with commonsense (if not minimalist) intuitions.
Here I would again note that our commonsense intuitions are arguably not adapted to track the isolated value of lives, and so we should be careful to make it clear that we are comparing e.g. isolated Matrix-lives. By default, I suspect that people may think of the happy populations as consisting of lives like their own or of people they know, which may implicitly involve a lot of effects on other lives.
Of course, the framings of "isolated Matrix-lives" or "experience machines" may themselves bring in connotations that can feel pejorative or dismissive with regard to the actual subjective experience of those lives, but my point is just to drive home the fact that these lives are, by hypothesis, radically devoid of any positive roles for others, or even for their future selves. And if people implicitly have a relational notion of positive value (e.g. if they think of positive value as implying an inverse causal relation to some subjective problems), then they may feel very differently about harms counterbalanced by isolated goods vs. harms counterbalanced by relational goods (and minimalist views can endorse the latter).
To be clear, the inverse relations include not only subjective problems prevented by social relationships, but also e.g. any desirable effects on wild animals and future s-risks. Admittedly, neither of the latter two is a very commonsensical contributor to positive tradeoffs, but I'd guess that many people would likewise not find it intuitive to counterbalance astronomical harms with ("even greater amounts of") isolated experience machines, or with a single "utility monster". Arguably, all of these cases are also tricky to measure against people's commonsense intuitions, given that not many people have thought about them in the first place.
> Re. 3 I've read Budolfson & Spears, and as you note (*) it seems we can construct xVRCs which minimalist views (inc. those which introduce lexical thresholds) are susceptible to. (I also note they agree with me re. 1, e.g. s8: "Whenever aggregation is done over an unbounded space, repugnant outcomes inevitably occur", and their identification of the underlying mechanism of repugnance as the ability to aggregate ε-changes.)
Yeah, we can formally construct xVRCs for minimalist views, including for lexical minimalist views, but my claim is that these are consistently less repugnant in like-for-like comparisons with symmetric views (relative to commonsense or widely shared intuitions). Specifically, in the lexical minimalist xVRC – i.e. these comments which you refer to in your point #3 below – the tradeoff results in ever less (and less intense) suffering if followed repeatedly. By comparison, every symmetric xVRC would keep on increasing suffering if scaled up in an analogous way, which is arguably the most repugnant aspect of the VRC.
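To illustrate this direction-of-travel claim with a toy iteration (the rates below are purely illustrative assumptions of mine):

```python
# Toy illustration (with made-up rates) of where iterated trade-offs lead.
# Minimalist xVRC: extreme suffering is accepted only to relieve a larger
# amount of milder suffering, so the running total shrinks per iteration.
# Symmetric xVRC: suffering is added for the sake of offsetting goods,
# so the running total grows per iteration.

total = 1000.0
for step in range(1, 6):
    relieved, created = 0.3 * total, 0.1 * total  # assumed per-step amounts
    total = total - relieved + created
    print(f"minimalist xVRC, step {step}: total suffering = {total:.1f}")

total = 1000.0
for step in range(1, 6):
    total += 0.2 * total  # new suffering, offset by even more new goods
    print(f"symmetric xVRC, step {step}: total suffering = {total:.1f}")
```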
Additionally, this comment (upstream of the linked ones) points out a source of intra-personal repugnance in the symmetric cases, namely that CU-like views would be fine with the "marginally good" ε-lives being "roller coaster" lives that also contain a lot of extreme suffering:
> One way to see that an ε increase could be very repugnant is to recall Portmore's (1999) suggestion that ε lives in the restricted RC could be "roller coaster" lives, in which there is much that is wonderful, but also much terrible suffering, such that the good ever-so-slightly outweighs the bad [according to some symmetric view]. Here, one admitted possibility is that an ε-change could substantially increase the terrible suffering in a life, and also increase good components; such an ε-change is not the only possible ε-change, but it would have the consequence of increasing the total amount of suffering. … Moreover, if ε-changes are of the "roller coaster" form, they could increase deep suffering considerably beyond even the arbitrarily many [u < 0] lives, and in fact could require everyone in the chosen population to experience terrible suffering. [From Budolfson & Spears]
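The arithmetic of a "roller coaster" ε-life is worth spelling out (with made-up numbers of mine):

```python
# Made-up numbers for a "roller coaster" life under symmetric summation:
# a life containing a great deal of extreme suffering still counts as
# marginally good whenever the good barely outweighs the bad.
wonderful, terrible = 1000.0, -999.0
net = wonderful + terrible
print(net)  # 1.0: "marginally good" overall, despite the suffering it contains
```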
Of course, in some minimalist examples it is arguably repugnant to create extreme suffering to avoid a vast number of mildly problematic states. But I would claim that commonsense (and not only minimalist) intuitions would find even more repugnant the analogous symmetric case, namely to create extreme suffering for a vast number of mildly positive states which are not needed to relieve anyone's burden. (The latter case may appear especially repugnant if the symmetric view in question would allow the mildly positive states to be "roller coaster" lives that are not even themselves free of, but would in fact contain a lot of, extreme suffering.) Consider, for instance, that:
> A 2017 survey by FLI (n > 14,000) found that the goal people favored most as the ideal aim of a future civilization was "minimizing suffering". This was the most popular aim by a large margin, ahead of "maximizing positive experiences", and most of the people who favored this goal were probably not suffering while they responded to the survey.
The authors of Moral Uncertainty write (p. 185):

> According to some plausible moral views, the alleviation of suffering is more important, morally, than the promotion of happiness. According to other plausible moral views (such as classical utilitarianism), the alleviation of suffering is equally as important, morally, as the promotion of happiness. But there is no reasonable moral view on which the alleviation of suffering is less important than the promotion of happiness. So, under moral uncertainty, it's appropriate to prefer to alleviate suffering rather than to promote happiness more often than the utilitarian would.
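As a toy rendering of that dominance argument (the credences and weights below are arbitrary placeholders of mine, not from the book):

```python
# Toy expected-weight calculation for the quoted dominance argument: if every
# view in one's credence set weights alleviating suffering at least as much as
# promoting happiness, the credence-weighted average favors alleviation
# whenever any view weights it strictly more. All numbers are placeholders.

credences = {"suffering-focused": 0.3, "classical utilitarian": 0.7}
weights = {  # (weight on alleviating suffering, weight on promoting happiness)
    "suffering-focused": (2.0, 1.0),
    "classical utilitarian": (1.0, 1.0),
}

alleviation = sum(c * weights[v][0] for v, c in credences.items())
promotion = sum(c * weights[v][1] for v, c in credences.items())
print(f"{alleviation:.2f} vs {promotion:.2f}")  # 1.30 vs 1.00 -> favor alleviation
```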
The above points do not tip the scales all the way in favor of minimalism over CU-variants, but they do suggest that common intuitions would not necessarily favor "additively aggregationist CU" (even before looking at the respective x/VRCs for these views, let alone after considering the overall direction when we iterate such tradeoffs multiple times).
> The replies minimalists can make here seem very "as good for the goose as the gander" to me:
> 1. One could deny minimalism is susceptible to even xVRCs, as one should drop aggregation/continuity/etc. Yet symmetric views should do the same, so one should explore whether, on the margin of this atypical account of aggregation, minimalist axiologies are a net plus or minus to overall plausibility.
Agreed, although it is unclear whether continuous aggregation is in fact more typical. But since I'm interested in defending lexical minimalism (which many people already hold with a priority for extreme suffering), I'd be curious to hear if anyone has defended an analogous symmetric view, or how that view would be constructed in the first place. E.g., should I compare "priority for the worst-off" with a view that (also) entails "priority for the best-off", even if no one (to my knowledge) defends the latter priority?
> 2. One could urge we shouldn't dock points to a theory for counter-examples which are impractical/unrealistic, and that the x/VRCs for minimalism fare much better than the x/VRCs for totalism on this score. This would be quite a departure from my understanding of how the discussion proceeds in the literature, where the main concern is the "in principle" determination for scenarios [...]
The literature is mostly not written by people trying to figure out whether to prioritize the reduction of AW versus the reduction of s-risks. And once we accept some tradeoff in theory, it becomes relevant to ask if we would plausibly accept similar tradeoffs that could practically occur on an astronomical scale, for which the ε-changes could of course first be "enlarged" so as to make more practical sense. (At least I feel like none of my intended points depend on the ε-changes being tiny, nor on the base populations consisting of lives with mutually equal welfare, so I'm fine with discussing x/VRCs that are in those ways more realistic – especially if we account for the "roller coaster" aspects of more realistic lives.)
In other words, whether we affirm or reject the claim that purported positive goods can outweigh extreme suffering has great relevance for our priorities, whereas the question of whether lexical minimalist views are more plausible than non-lexical minimalist views has limited practical relevance, since the real-life implications (e.g. for ideal population sizes) are roughly convergent for minimalist views.
> 3. One could accept minimalism entails an xVRC, but hold this bullet is easier to bite than the x/VRCs against symmetric views. Perhaps, but in which case we should probably pick the closest symmetric comparator (e.g. if they can't play with thresholds, you should deal with Shulman-esque pinprick scenarios). I also note the appeals to plausibility made (here and in the comments you link) seem to be mostly re-statements of minimalism itself (e.g. that epsilon changes in misery count but epsilon changes in happiness don't, "subjective perfection" equated to neutrality, etc.) [...]
Again, I'm happy to pick the closest symmetric view to compare with the minimalist priority for extreme suffering, but I'm still unsure what that view might be (and eager to hear if there is anything to be read about such views).
I don't agree that the points about the minimalist xVRCs' comparatively greater plausibility are mostly re-statements of minimalism itself. Rather, I claim that commonsense intuitions would favor the lexical minimalist xVRC – in which suffering is "spread more equally between those who already exist and those who do not" (and eventually minimized if iterated) – over any symmetric xVRC of "expanding hell to help the best-off". (In other words, even if one finds it somewhat plausible that happiness has independent value, or value in isolation, it still seems that the symmetric xVRCs are worse than the minimalist xVRC.)
(For subjective perfection equated with the absence of something, I was thinking of tranquilism as a need-based account of the isolated value of different experiential states, which is centered on cravings to change one's subjective experience.)
> Finally, I'm afraid I can't really see much substantive merit in the "relational goods" approach. Minimalism (like SFE and NU) straightforwardly offends the naive intuition that happiness is indeed "better than nothing", and I don't find relational attempts to undercut this by offering an account of these being roundabout ways/policies of reducing problems either emotionally satisfying (e.g. all the rich relationships between members of a community may make everyone have "lives worth living" in the sense that "without me these other people would be worse off", but minimalism appears still committed to the dispiriting claim that this rich tapestry of relationships is still worse than nothing) or intellectually credible [...]
(Strictly speaking, minimalism is a category that contains NU but only overlaps with SFE; some SFE views may recognize isolated positive value even as they prioritize reducing suffering, and e.g. Fehige's view represents a preference-based instead of suffering-focused minimalism.)
About the naive intuition that happiness is indeed "better than nothing", I'm curious whether that really applies to isolated Matrix-lives as well (for most people). As I've noted in this section, by focusing on isolated value we may often underestimate the relational value of some goods, which may be greater than the amount of intrinsic value we perceive them to have.
About the relational account having dispiriting or emotionally unsatisfying implications, those can also be compared between views (to the extent that they matter for the plausibility of axiological views). E.g., on minimalist views, unlike CU-like views, it's not a tragedy or atrocity if we fail to reduce astronomical waste. In this sense, minimalist views may be less dispiriting than CU-like views. Moreover, I'd practically emphasize that our positive roles need not be limited to the confines of our social communities, but extend all the way to those communities' effects on things like factory farming, wild-animal suffering, and the risks of future suffering (and thus potentially match or even exceed our commonsense feelings about the positive value of many lives, even if this would formally consist of "only" relational instead of independently positive value).
However, we should also be careful to correct for our personal emotional responses to the implications of a given axiology. By analogy with empirical claims, we would probably want our views on (e.g.) global catastrophic risks to be unaffected by whether we find them dispiriting or not. Similarly, we should arguably adjust for such feelings in our axiological considerations of what, if anything, would constitute an axiologically positive life in causal isolation (and, specifically, what would constitute a life capable of counterbalancing the suffering of others without the consent of the latter).