How many different plausible definitions of flourishing that differ significantly enough from one another do you expect there to be?
One potential solution would be to divide future spacetime (not necessarily into contiguous blocks) among the views in proportion to our credences in them (or evenly), and optimize separately for the corresponding view within each share. With equal weights, each of n views could get at least about 1/n of what it would get if it had 100% weight (taking ratios of expected values), assuming there isn’t costly conflict between the views and no view (significantly) negatively values what another finds near-optimal in practice. They could potentially do much better with some moral trades and/or if there’s enough overlap in what they value positively. One view going for a larger share would lead to zero-sum work and deadweight loss as the others respond to it.
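To make the 1/n arithmetic concrete, here is a minimal toy sketch with hypothetical numbers (the view names, credences, and the 1% cross-valuation figure are illustrative assumptions, not claims from the comment). It assumes value under each view scales roughly linearly with the share of resources optimized for it and that no view negatively values the other views' shares:

```python
# Toy sketch of splitting resources across moral views in proportion to credence.
# Assumptions (illustrative only): value under each view scales roughly linearly
# with the share of resources optimized for it, and no view assigns negative
# value to the other views' shares.

views = ["complex flourishing", "preference/desire", "hedonism"]
credences = {"complex flourishing": 0.4, "preference/desire": 0.3, "hedonism": 0.3}

# V_full[v]: value view v would realize if it controlled 100% of resources
# (normalized to 1 for each view). cross_value[v][w]: value view v assigns to a
# unit of resources optimized for view w; the "<1% of each other's optima" guess
# corresponds to small off-diagonal entries.
V_full = {v: 1.0 for v in views}
cross_value = {v: {w: (1.0 if v == w else 0.01) for w in views} for v in views}

def value_under_split(shares):
    """Value each view realizes when resources are split according to `shares`."""
    return {
        v: sum(shares[w] * cross_value[v][w] * V_full[v] for w in views)
        for v in views
    }

# Credence-proportional split: each view gets at least roughly its credence share
# of its full value (plus a little from the off-diagonal overlap).
print(value_under_split(credences))

# Even split across n views: each gets at least about 1/n of its full value.
n = len(views)
print(value_under_split({v: 1.0 / n for v in views}))
```

With larger off-diagonal entries (more overlap in what the views value positively) or with moral trades that reshape each share, every view's total rises above the 1/n floor.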
I would indeed guess that a complex theory of flourishing (“complexity of value”, objective list theories, maybe), a preference/desire view, and hedonism would each assign <1% value to the others’ (practical) optima compared to their own. I think there could be substantial agreement between different complex theories of flourishing, though, since I expect them generally to overlap a lot in their requirements. I could also see hedonism and preference views overlapping considerably and having good moral trades, if most of the resource usage is just to sustain consciousness (and not to instantiate preference satisfaction or pleasure in particular) and most of the resulting consciousness-sustaining structures/activity can be shared without much loss on either view. However, this could just be false.