But I bring this up because I anticipate the likely moves you will make to avoid the counter-example Shulman and I have brought up will be along the lines of anti-aggregationist moves around lexicality, thresholds, and whatnot.
Do you mean trivial pains adding up to severe suffering? I can see how if you would accept lexicality or thresholds to prevent this, you could do the same to prevent trivial pleasures outweighing severe suffering or greater joys.
My original comment follows.
I think your first and third points are mostly right, but I would add that minimalist axiologies can avoid the (V)RC without (arbitrary) critical levels, (arbitrary) thresholds, giving up continuity, or giving up additivity/separability, which someone might find as counterintuitive as the VRC. Views like these tend to look more arbitrary, or, assuming transitivity, the independence of irrelevant alternatives, and a far larger unaffected population, they often reduce to solipsism or recommend totally ignoring value that's (weakly or strongly) lexically dominated in practice. So, if you find both the (V)RC and these aggregation tricks (or their implications) very counterintuitive, then minimalist and person-affecting views will look better than otherwise (not necessarily best), and classical utilitarianism will look worse than otherwise (but potentially still best overall, or better than minimalist axiologies, if the other points in favour are strong enough).
Furthermore, the VRC is distinguished from the RC by the addition of severe suffering. Someone might find the VRC far worse than the RC (e.g. the person who named it, adding the 'Very' :P), and if they do, that may indeed say something about their views on suffering and bad lives, and not just about the aggregation of trivial values vs. values larger in magnitude. Like you, I suspect that considering Omelas (or tradeoffs between a more even number of good and bad lives) would usually already get at this, though maybe not always.
That being said, I am personally also separately sympathetic to lexicality (and previously to non-additivity, though less so now because of the arguments in the papers I cited above), not because of the RC or VRC, but because of direct intuitions about torture vs. milder suffering (dust specks, or even fairly morally significant suffering). EDIT: I guess this is the kind of 'counter-example' you and Shulman have brought up?
On your second point, I don't think something like the VRC is remote, although I wouldn't consider it my best guess for the future. If it turns out that it's more efficient to maximize pleasure (or value generally) in a huge number of tiny systems that each produce very little value, classical utilitarians may be motivated to do so at substantial cost, including sacrificing a much higher average welfare and ignoring s-risks. So, you end up with astronomically many more marginally good lives and a huge number of additional horrible lives (possibly astronomically many, although far fewer than the marginally good lives), while missing out on many very high welfare lives. This is basically the VRC. This seems unlikely unless classical utilitarians have majority control over large contiguous chunks of space in the future.
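To make the structure of that comparison concrete, here's a minimal toy calculation under symmetric, additive totalism; all of the population sizes and welfare levels below are made-up assumptions, chosen only to illustrate the ranking, not estimates of anything:

```python
# Toy VRC-style comparison under symmetric, additive totalism.
# Every number here is hypothetical and chosen purely for illustration.

# World A: fewer lives, all at very high welfare.
world_a = 1e12 * 100            # 10^12 lives at welfare +100 each -> total 1e14

# World B: astronomically many marginally good lives, plus some horrible ones.
marginal = 1e18 * 0.01          # 10^18 lives at welfare +0.01 each -> total 1e16
horrible = 1e9 * -1000          # 10^9 lives at welfare -1000 each -> total -1e12
world_b = marginal + horrible   # roughly 9.999e15

print(world_a, world_b, world_b > world_a)
# Straightforward addition ranks World B above World A despite the horrible
# lives and the far lower average welfare: essentially the VRC.
```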
Do you mean trivial pains adding up to severe suffering? I can see how if you would accept lexicality or thresholds to prevent this, you could do the same to prevent trivial pleasures outweighing severe suffering or greater joys.
Yeah, that's it. As you note, these sorts of moves seem to have costs elsewhere, but if one thinks they should nonetheless be accepted on balance, then the V/RC isn't really a strike against 'symmetric axiology' simpliciter, but merely against 'symmetric axiologies with a mistaken account of aggregation'. If instead 'straightforward/unadorned' aggregation is the right way to go, then the V/RC is a strike against symmetric views and a strike in favour of minimalist ones; but 'straightforward' aggregation can also produce highly counter-intuitive results for minimalist views which symmetric axiologies avoid (e.g. 'better N awful lives than TREE(N+3) lives of perfect bliss and a pin-prick').
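For the mirror-image case in that parenthetical, here's an equally minimal sketch of why a purely minimalist axiology with the same straightforward aggregation ranks the N awful lives higher: bliss contributes nothing positive, so only the pin-pricks and the awful lives register. TREE(N+3) is far too large to compute, so a merely astronomical stand-in population is used, and every number is a made-up assumption:

```python
# Mirror-image problem for a minimalist (suffering-only), additive axiology.
# Bliss counts for nothing positive; only the disvalue terms are summed.

N = 1_000
awful_life = -1_000_000    # hypothetical disvalue of one awful life
pin_prick = -1e-9          # hypothetical disvalue of one pin-prick
bliss_pop = 1e30           # stand-in for TREE(N+3), which dwarfs any such number

awful_world = N * awful_life         # -1e9
bliss_world = bliss_pop * pin_prick  # -1e21

print(awful_world > bliss_world)
# True: the N awful lives come out better, because the pin-pricks, summed over
# a large enough population, outweigh them.
```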
Hence (per 3) I feel the OP would be trying to have it both ways if they don't discuss argumentative resources which could defend a rival theory from objections they mount against it, yet subsequently rely upon those same resources to respond to objections to their preferred theory.
(Re. 2, perhaps it depends on the value of 'tiny': my intuition is that the dynamic range of (e.g.) human happiness is much smaller than that for future beings, so 'very small' on this scale would still typically be greatly above the 'marginally good' range by the lights of classical util. If (e.g.) commonsensically happy human lives/experiences are 10, joyful future beings could go up to 1000, and 'marginally good' is anything <1, we'd be surprised to find that the optimal average for the maximal aggregate is in the marginally good range. Adding in the 'V' bit to this RC adds a further penalty).
With respect to 2, I'm thinking something on the order of insect brains. There are reasons to expect pleasure to scale sublinearly with brain size even in artificial brains optimized for pleasure, e.g. a lot of unnecessary connections that don't produce additional value, the greater difficulty of building larger brains without getting things wrong, or even giving weight to the belief that integrating minds actually reduces value, say because of bottlenecks in some of the relevant circuits/functions. Smaller brains are also easier/faster to run in parallel.
This is assuming the probability of consciousness doesn't dominate. There may also be scale efficiencies, since the brains need containers and to be connected to things (even digitally?), or there may be some other overhead.
So, I don't think it would be too surprising to find the optimal average in the marginally good range.
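A rough sketch of the trade-off we're both gesturing at, in case it helps; the budget, the range of population sizes, and both welfare-scaling functions below are pure assumptions, chosen only to show how sublinear vs. superlinear scaling of pleasure with brain size pushes the resource-optimal design toward many tiny minds or toward one large one:

```python
import numpy as np

# Toy model: a fixed resource budget is split evenly among n identical minds,
# each of "size" budget / n. Per-mind welfare is an assumed function of size,
# and total welfare is per-mind welfare times the number of minds.

budget = 1e9  # arbitrary resource units

def welfare_sublinear(size):
    return size ** 0.5   # pleasure grows sublinearly with brain size

def welfare_superlinear(size):
    return size ** 1.5   # pleasure grows superlinearly with brain size

n = np.logspace(0, 9, 10)  # from 1 enormous mind up to 10^9 tiny ones

for f, label in [(welfare_sublinear, "sublinear"), (welfare_superlinear, "superlinear")]:
    total = n * f(budget / n)
    best_n = n[np.argmax(total)]
    print(f"{label}: total welfare peaks at ~{best_n:.0e} minds")

# Sublinear scaling favours very many small, marginally good minds (the VRC-ish
# outcome), while superlinear scaling favours a few large, very happy ones.
```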
That all seems fair to me.