Do you mean trivial pains adding up to severe suffering? I can see how if you would accept lexicality or thresholds to prevent this, you could do the same to prevent trivial pleasures outweighing severe suffering or greater joys.
Yeah, that’s it. As you note these sorts of moves seem to have costs elsewhere, but if one thinks on balance they nonetheless should be accepted, then the V/RC isn’t really a strike against ‘symmetric axiology’ simpliciter, but merely ‘symmetric axiologies with a mistaken account of aggregation’. If instead ‘straightforward/unadorned’ aggregation is the right way to go, then the V/RC is a strike against symmetric views and a strike in favour of minimalist ones; but ‘straightforward’ aggregation can also produce highly counter-intuitive results for minimalist views which symmetric axiologies avoid (e.g. “better N awful lives than TREE(N+3) lives of perfect bliss and a pin-prick”).
Hence (per 3) I feel the OP would be trying to have it both ways if they don’t discuss argumentative resources which could defend a rival theory from objections they mount against it, yet subsequently rely upon those same resources to respond to objections to their preferred theory.
(Re. 2, perhaps it depends on the value of “tiny”—my intuition is the dynamic range of (e.g.) human happiness is much smaller than that for future beings, so ‘very small’ on this scale would still typically be greatly above the ‘marginally good’ range by the lights of classical util. If (e.g.) commonsensically happy human lives/experiences are 10, joyful future beings could go up to 1000, and ‘marginally good’ is anything <1, we’d be surprised to find the optimal average for the maximal aggregate is in the marginally good range. Adding in the ‘V’ bit to this RC adds a further penalty).
With respect to 2, I’m thinking something on the order of insect brains. There are reasons to expect pleasure to scale sublinearly with brain size even in artificial brains optimized for pleasure: e.g. many unnecessary connections that don’t produce additional value, the greater difficulty of building larger brains without getting things wrong, or even giving some weight to the belief that integrating minds actually reduces value, say because of bottlenecks in some of the relevant circuits/functions. Smaller brains are also easier/faster to run in parallel.
This is assuming the probability of consciousness doesn’t dominate. There may also be scale efficiencies working the other way, since brains need containers and connections to other things (even digital ones?), or there may be some other fixed overhead per brain.
So, I don’t think it would be too surprising to find the optimal average in the marginally good range.
That all seems fair to me.