Do you mean trivial pains adding up to severe suffering? I can see how, if you would accept lexicality or thresholds to prevent this, you could do the same to prevent trivial pleasures from outweighing severe suffering or greater joys.
Yeah, that's it. As you note, these sorts of moves seem to have costs elsewhere, but if one thinks on balance they should nonetheless be accepted, then the V/RC isn't really a strike against "symmetric axiology" simpliciter, but merely against "symmetric axiologies with a mistaken account of aggregation". If instead "straightforward/unadorned" aggregation is the right way to go, then the V/RC is a strike against symmetric views and a strike in favour of minimalist ones; but "straightforward" aggregation can also produce highly counter-intuitive results for minimalist views which symmetric axiologies avoid (e.g. "better N awful lives than TREE(N+3) lives of perfect bliss and a pin-prick").
Hence (per 3) I feel the OP would be trying to have it both ways if they don't discuss argumentative resources which could defend a rival theory from objections they mount against it, yet subsequently rely upon those same resources to respond to objections to their preferred theory.
(Re. 2, perhaps it depends on the value of "tiny": my intuition is that the dynamic range of (e.g.) human happiness is much smaller than that for future beings, so "very small" on this scale would still typically be greatly above the "marginally good" range by the lights of classical util. If (e.g.) commonsensically happy human lives/experiences are 10, joyful future beings could go up to 1000, and "marginally good" is anything <1, we'd be surprised to find the optimal average for the maximal aggregate is in the marginally good range. Adding in the "V" bit to this RC adds a further penalty.)
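A rough sketch of the arithmetic here, using the comment's toy welfare levels (10, 1000, and <1) together with an assumed resource budget and cost exponent (`BUDGET` and `beta` are my own placeholders, not anything claimed in the thread):

```python
# Toy illustration only: welfare levels 0.5, 10 and 1000 are the comment's
# example numbers; BUDGET and the cost exponent beta are assumptions.

BUDGET = 1_000_000.0  # arbitrary units of resources

def total_value(w, beta):
    """Total welfare if the whole budget funds lives at per-life welfare w,
    assuming each such life costs w**beta resource units."""
    n_lives = BUDGET / w ** beta
    return n_lives * w

candidates = [0.5, 10, 1000]  # marginally good, happy human, joyful future being

for beta in (0.8, 1.2):
    best = max(candidates, key=lambda w: total_value(w, beta))
    print(f"beta={beta}: welfare level maximising the aggregate = {best}")
# If welfare is cheap to scale up (beta < 1), the optimum sits at the top of
# the dynamic range; only if it gets steeply more expensive (beta > 1) does
# the maximal aggregate land in the marginally good range.
```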
That all seems fair to me.
With respect to 2, I'm thinking something on the order of insect brains. There are reasons to expect pleasure to scale sublinearly with brain size even in artificial brains optimized for pleasure, e.g. a lot of unnecessary connections that don't produce additional value, the greater difficulty of building larger brains without getting things wrong, or even giving weight to the belief that integrating minds actually reduces value, say because of bottlenecks in some of the relevant circuits/functions. Smaller brains are also easier/faster to run in parallel.
This is assuming the probability of consciousness doesn't dominate. There may also be scale efficiencies, since the brains need containers and to be connected to things (even digitally?), or there may be some other overhead.
So, I don't think it would be too surprising to find the optimal average in the marginally good range.
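A minimal sketch of the trade-off described in this comment, under made-up assumptions (a `size**alpha` value function, a fixed per-brain `overhead`, and an arbitrary compute `BUDGET`, none of which come from the thread): sublinear value scaling with negligible overhead pushes the optimum toward many tiny, marginally good minds, while a larger overhead or linear scaling pushes it back toward fewer, bigger brains.

```python
# Toy model, not a claim about real minds: per-brain value = size**alpha
# (sublinear when alpha < 1), per-brain cost = size + overhead, and a fixed
# compute BUDGET buys as many identical brains as it can. All numbers assumed.

BUDGET = 10_000.0

def total_value(size, alpha, overhead):
    n_brains = BUDGET / (size + overhead)
    return n_brains * size ** alpha

sizes = [2 ** k for k in range(11)]  # candidate brain sizes 1, 2, 4, ..., 1024

for alpha, overhead in [(0.5, 0.0), (0.5, 5.0), (1.0, 5.0)]:
    best = max(sizes, key=lambda s: total_value(s, alpha, overhead))
    print(f"alpha={alpha}, overhead={overhead}: optimal brain size = {best}")
# Sublinear value with negligible overhead favours many tiny brains (size 1);
# adding a per-brain overhead, or making value linear in size, pushes the
# optimum toward larger brains.
```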