Re. 1 (i.e. “The primary issue with the VRC is aggregation rather than trade-off”). I take it we should care about the plausibility of axiological views with respect to something like ‘commonsense’ intuitions, rather than the intuitions a given axiology urges us to adopt.
Agreed, and this is also why I focus on the psychological and practical implications of axiological views, and not only on their theoretical implications. Especially in the EA(-adjacent) community, it seems common to me that the plausibility of theoretical views is assessed partly based on the plausibility of their practical implications, which tap into important intuitions beyond those engaged at the purely abstract level.
E.g., people may bite bullets in theory to retain a consistent view, yet never bite those bullets in practice, for reasons that remain unarticulated; this may indicate an inconsistency between their explicit and implicit axiology.
It’s at least opaque to me whether commonsense intuitions are more offended by ‘trade-offy/CU’ or ‘no-trade-offy/NU’ intuitions.
By ‘trade-offy’ and ‘no-trade-offy’, I’d like to emphasize that we mean trade-offs between isolated things. In other words, the diagrams of population ethics could just as well consist of causally isolated experience machines (“isolated Matrix-lives”), which is plausibly a confounding factor for our practical (“commonsense”) intuitions, as our practical intuitions are arguably adapted for trade-offs in an interpersonal (“relational”) world.
On the one hand:
“Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things (for some value of k)”
(a fortiori) “N awful things can be better than nothing providing they are counterbalanced by k*N good things (and N can be arbitrarily large, say a trillion awful lives).”
It’s very unclear to me how many people actually believe that any arbitrarily awful thing can be counterbalanced by sufficiently many (and/or sufficiently awesome) isolated Matrix-lives, or other isolated goods. By default, I would assume that most people do not (want to) think about torture, and also do not properly respect the “all else being equal” assumption, and thereby would not count as votes of “informed consent” for those claims. Additionally, in at least one small Mechanical Turk survey about such a tradeoff applied to respondents’ own lives, more than 40 percent said that they would not accept one minute of extreme suffering for any number of happy years added to their lives.
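For concreteness, under a simple additive total view the counterbalancing claims above are just arithmetic. Here is a toy sketch; the welfare magnitudes are assumptions chosen purely for illustration, not anyone’s considered axiology:

```python
# Toy additive ("total") aggregation: an outcome counts as better
# than nothing iff the summed welfare of its lives is positive.

def total_welfare(lives):
    """Sum per-life welfare values; the empty outcome totals 0."""
    return sum(lives)

# One awful life (welfare -1000) plus k marginally good isolated
# lives (welfare +1 each): "better than nothing" once k > 1000.
awful, k = -1000, 1001
print(total_welfare([awful] + [1] * k))  # 1, i.e. just above 0

# A fortiori: N awful lives offset by k*N good lives scale the same
# way, however large N is (computed arithmetically, not with lists).
N = 10**12
print(N * awful + (k * N) * 1 > 0)  # True on this toy view
```

The point of the sketch is only that, once welfare is summed this way, the size of the awful component imposes no limit in principle: any negative term can be outvoted by enough positive ones.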
But on the other:
“No amount of good things (no matter how great their magnitude) can compensate for a single awful thing, no matter how astronomical the ratio (e.g. trillions to 1, TREE(3) to 1, whatever).”
(a fortiori) “No amount of great things can compensate for a single bad thing, no matter how small it is (e.g. pinpricks, a minute risk of an awful thing)”
The first claim (i.e. “a lexical minimalist component”) is precisely what has been defended in the philosophical (and fictional) literature. And again, this claim might be something that most people have not thought about, because only a minority of people have had first- or even second-person experience of an awful thing that might be defended as being categorically “impossible to compensate for with isolated goods”, such as torture.
(The second claim does not strictly follow from the first, which was about “awful” things; e.g. some SFE views hold that sufficiently awful things are lexical bads, but not that all kinds of tiny bads are. This is also relevant for the practical implications of lexical minimalist views with relational goods, on which pinpricks may be practically ignored unless they increase the risk of lexically bad things, whereas anything worthy of the name “great thing” would probably play positive roles to help reduce that risk.)
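A lexical threshold of this kind can be sketched as a two-level ordering. This is a hypothetical toy model: the threshold value and what counts as “awful” are my assumptions for illustration, not a reconstruction of any specific SFE view:

```python
# Toy lexical-minimalist ordering: outcomes are compared first by the
# amount of suffering at or above a severity threshold ("awful"
# suffering), and only then by the sub-threshold bads. Isolated goods
# play no role in the comparison.

AWFUL_THRESHOLD = 100  # hypothetical severity cutoff

def badness_key(sufferings):
    """Lower key = better outcome; sufferings lists per-life intensities."""
    awful = sum(s for s in sufferings if s >= AWFUL_THRESHOLD)
    mild = sum(s for s in sufferings if s < AWFUL_THRESHOLD)
    return (awful, mild)  # Python tuple comparison is lexicographic

# One above-threshold life is worse than any number of pinpricks...
print(badness_key([200]) > badness_key([1] * 10**6))  # True: worse

# ...but among sub-threshold outcomes, amounts still aggregate.
print(badness_key([1] * 10) > badness_key([1] * 5))  # True: worse
```

This makes the distinction in the parenthetical above explicit: the view is lexical only across the threshold, while tiny bads below it still trade off against each other in the ordinary aggregative way.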
However, I am confident the aggregation views—basically orthogonal to this question—are indeed the main driver for folks finding the V/RC particularly repugnant. Compare: [...]
So appeals along the lines of “CU accepts the VRC, and—even worse—would accept even larger downsides if the compensating upside were composed of very rather than marginally happy lives” seem misguided, as this adaptation of the VRC aligns it better, not worse, with commonsense (if not minimalist) intuitions.
Here I would again note that our commonsense intuitions are arguably not adapted to track the isolated value of lives, and so we should be careful to make it clear that we are comparing e.g. isolated Matrix-lives. By default, I suspect that people may think of the happy populations as consisting of lives like their own or of people they know, which may implicitly involve a lot of effects on other lives.
Of course, the framings of “isolated Matrix-lives” or “experience machines” may themselves bring in connotations that can feel pejorative or dismissive with regard to the actual subjective experience of those lives, but my point is just to drive home the fact that these lives are, by hypothesis, radically devoid of any positive roles for others, or even for their future selves. And if people implicitly have a relational notion of positive value (e.g. if they think of positive value as implying an inverse causal relation to some subjective problems), then they may feel very differently about harms counterbalanced by isolated goods vs. harms counterbalanced by relational goods (of which minimalist views can endorse the latter).
To be clear, the inverse relations include not only subjective problems prevented by social relationships, but also e.g. any desirable effects on wild animals and future s-risks. Admittedly, neither of the latter two is a very commonsensical contributor to positive tradeoffs, but I’d guess that many people would likewise not find it intuitive to counterbalance astronomical harms with (“even greater amounts of”) isolated experience machines, or with a single “utility monster”. Arguably, all of these cases are also tricky to measure against people’s commonsense intuitions, given that not many people have thought about them in the first place.
Re. 3 I’ve read Budolfson & Spears, and as you note (*) it seems we can construct xVRCs which minimalist views (inc. those which introduce lexical thresholds) are susceptible to. (I also note they agree with me re. 1 - e.g. s8: “Whenever aggregation is done over an unbounded space, repugnant outcomes inevitably occur”; they identify the underlying mechanism of repugnance as the ability to aggregate ε-changes.)
Yeah, we can formally construct xVRCs for minimalist views, including for lexical minimalist views, but my claim is that these are consistently less repugnant in like-like comparisons with symmetric views (relative to commonsense or widely shared intuitions). Specifically in the lexical minimalist xVRC — i.e. these comments, which you refer to in your point #3 below — the tradeoff results in ever less (and less intense) suffering if followed repeatedly. By comparison, every symmetric xVRC would keep on increasing suffering if scaled up in an analogous way, which is arguably the most repugnant aspect of the VRC.
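The direction-under-iteration contrast can be made concrete with a toy simulation. All magnitudes here are assumptions chosen only to show the sign of the trend, not to model any specific xVRC:

```python
# Toy iteration of the two tradeoffs. On the lexical minimalist side,
# each accepted trade replaces some extreme suffering with less of it,
# so the running total falls. On the symmetric side, each trade adds
# extreme suffering whenever enough offsetting happiness is added,
# so the running total grows without bound.

minimalist_total = symmetric_total = 1000.0
for step in range(10):
    minimalist_total *= 0.5   # assumed: each trade halves the remainder
    symmetric_total += 100.0  # assumed: each trade adds a fixed increment

print(round(minimalist_total, 3))  # 0.977: heading toward zero
print(symmetric_total)             # 2000.0: still climbing
```

The specific rates (halving vs. a fixed increment) are arbitrary; the claim in the text depends only on the minimalist sequence being decreasing and the symmetric one increasing.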
Additionally, this comment (upstream of the linked ones) points out a source of intra-personal repugnance in the symmetric cases, namely that CU-like views would be fine with the “marginally good” ε-lives being “roller coaster” lives that also contain a lot of extreme suffering:
One way to see that an ε increase could be very repugnant is to recall Portmore’s (1999) suggestion that ε lives in the restricted RC could be “roller coaster” lives, in which there is much that is wonderful, but also much terrible suffering, such that the good ever-so-slightly outweighs the bad [according to some symmetric view]. Here, one admitted possibility is that an ε-change could substantially increase the terrible suffering in a life, and also increase good components; such an ε-change is not the only possible ε-change, but it would have the consequence of increasing the total amount of suffering. … Moreover, if ε-changes are of the “roller coaster” form, they could increase deep suffering considerably beyond even the arbitrarily many [u < 0] lives, and in fact could require everyone in the chosen population to experience terrible suffering. [From Budolfson & Spears]
Of course, in some minimalist examples it is arguably repugnant to create extreme suffering to avoid a vast number of mildly problematic states. But I would claim that commonsense (and not only minimalist) intuitions would find even more repugnant the analogous symmetric case, namely to create extreme suffering for a vast number of mildly positive states which are not needed to relieve anyone’s burden. (The latter case may appear especially repugnant if the symmetric view in question would allow the mildly positive states to be “roller coaster” lives that are not even themselves free of, but would in fact contain a lot of, extreme suffering.) Consider, for instance, that:
A 2017 survey by FLI (n > 14,000) found that the goal people favored most as the ideal aim of a future civilization was “minimizing suffering”. This was the most popular aim by a large margin, ahead of “maximizing positive experiences”, and most of the people who favored this goal were probably not suffering while they responded to the survey.
The authors of Moral Uncertainty write (p. 185):
According to some plausible moral views, the alleviation of suffering is more important, morally, than the promotion of happiness. According to other plausible moral views (such as classical utilitarianism), the alleviation of suffering is equally as important, morally, as the promotion of happiness. But there is no reasonable moral view on which the alleviation of suffering is less important than the promotion of happiness. So, under moral uncertainty, it’s appropriate to prefer to alleviate suffering rather than to promote happiness more often than the utilitarian would.
The above points do not tip the scales all the way in favor of minimalism over CU-variants, but they do suggest that common intuitions would not necessarily favor ‘additively aggregationist CU’ (even before looking at the respective x/VRCs for these views, let alone after considering the overall direction when we iterate such tradeoffs multiple times).
The replies minimalists can make here seem very ‘as good for the goose as the gander’ to me:
1. One could deny minimalism is susceptible to even xVRCs, as one should drop aggregation/continuity/etc. Yet symmetric views should do the same, so one should explore whether, on the margin of this atypical account of aggregation, minimalist axiologies are a net plus or minus to overall plausibility.
Agreed, although it is unclear whether continuous aggregation is in fact more typical. But since I’m interested in defending lexical minimalism (which many people already hold with a priority for extreme suffering), I’d be curious to hear if anyone has defended an analogous symmetric view, or how that view would be constructed in the first place. E.g., should I compare “priority for the worst-off” with a view that (also) entails “priority for the best-off”, even if no one (to my knowledge) defends the latter priority?
2. One could urge we shouldn’t dock points from a theory for counter-examples which are impractical/unrealistic: the x/VRCs for minimalism fare much better than the x/VRCs for totalism. This would be quite a departure from my understanding of how the discussion proceeds in the literature, where the main concern is the ‘in principle’ determination for scenarios.
The literature is mostly not written by people trying to figure out whether to prioritize the reduction of AW versus the reduction of s-risks. And once we accept some tradeoff in theory, it becomes relevant to ask if we would plausibly accept similar tradeoffs that could practically occur on an astronomical scale, for which the ε-changes could of course first be “enlarged” so as to make more practical sense. (At least I feel like none of my intended points depend on the ε-changes being tiny, nor on the base populations consisting of lives with mutually equal welfare, so I’m fine with discussing x/VRCs that are in those ways more realistic — especially if we account for the “roller coaster” aspects of more realistic lives.)
In other words, whether we affirm or reject the claim that purported positive goods can outweigh extreme suffering has great relevance for our priorities, whereas the question of whether lexical minimalist views are more plausible than non-lexical minimalist views has limited practical relevance, since the real-life implications (e.g. for ideal population sizes) are roughly convergent for minimalist views.
3. One could accept minimalism entails an xVRC, but hold this bullet is easier to bite than the x/VRCs against symmetric views. Perhaps, but in that case we should probably pick the closest symmetric comparator (e.g. if they can’t play with thresholds, you should deal with Shulman-esque pinprick scenarios). I also note the appeals to plausibility made (here and in the comments you link) seem to be mostly re-statements of minimalism itself (e.g. that ε-changes in misery count but ε-changes in happiness don’t, ‘subjective perfection’ equated to neutrality, etc.).
Again, I’m happy to pick the closest symmetric view to compare with the minimalist priority for extreme suffering, but I’m still unsure what that view might be (and eager to hear if there is anything to be read about such views).
I don’t agree that the points about the minimalist xVRCs’ comparatively greater plausibility are mostly re-statements of minimalism itself. Rather, I claim that commonsense intuitions would favor the lexical minimalist xVRC — in which suffering is “spread more equally between those who already exist and those who do not” (and eventually minimized if iterated) — over any symmetric xVRC of “expanding hell to help the best-off”. (In other words, even if one finds it somewhat plausible that happiness has independent value, or value in isolation, it still seems that the symmetric xVRCs are worse than the minimalist xVRC.)
(Regarding ‘subjective perfection’ equated with the absence of something, I was thinking of tranquilism as a need-based account of the isolated value of different experiential states, centered on cravings to change one’s subjective experience.)
Finally, I’m afraid I can’t really see much substantive merit in the ‘relational goods’ approach. Minimalism (like SFE and NU) straightforwardly offends the naive intuition that happiness is indeed ‘better than nothing’, and I don’t find relational attempts to undercut this intuition, by recasting such goods as roundabout ways/policies of reducing problems, either emotionally satisfying or intellectually credible. (E.g. all the rich relationships between members of a community may make everyone have ‘lives worth living’ in the sense that ‘without me these other people would be worse off’, but minimalism still appears committed to the dispiriting claim that this rich tapestry of relationships is worse than nothing.)
(Strictly speaking, minimalism is a category that contains NU but only overlaps with SFE; some SFE views may recognize isolated positive value even as they prioritize reducing suffering, and e.g. Fehige’s view represents a preference-based instead of suffering-focused minimalism.)
About the naive intuition that happiness is indeed ‘better than nothing’, I’m curious whether, for most people, it really applies also to isolated Matrix-lives. As I’ve noted in this section, by focusing on isolated value we may often underestimate the relational value of some goods, which may be greater than the amount of intrinsic value we perceive them to have.
About the relational account having dispiriting or emotionally unsatisfying implications, those implications can also be compared between views (to the extent that they matter for the plausibility of axiological views). E.g., on minimalist views, unlike CU-like views, it’s not a tragedy or atrocity if we fail to reduce astronomical waste; in this sense, minimalist views may be less dispiriting than CU-like views. Moreover, in practice I’d emphasize that our positive roles need not be limited to the confines of our social communities, but extend all the way to those communities’ effects on things like factory farming, wild-animal suffering, and the risks of future suffering (and may thus match or even exceed our commonsense feelings about the positive value of many lives, even if this would formally consist of “only” relational instead of independently positive value).
However, we should also be careful about our personal emotional responses to the implications of a given axiology. By analogy with empirical claims, we would probably want our views on (e.g.) global catastrophic risks to be unaffected by whether we find them dispiriting or not. Similarly, we should arguably be wary of letting such feelings sway our axiological considerations of what, if anything, would constitute an axiologically positive life in causal isolation (and, specifically, what would constitute a life capable of counterbalancing the suffering of others without the consent of the latter).