Inside-view, some possible tangles this model could run into:
Some theories care about the morality of actions rather than states. But I guess you can incorporate that into ‘states’ if the history of your actions is included in the world-state—it just makes things a bit harder to compute in practice, and means you need to track “which actions I’ve taken that might be morally meaningful-in-themselves according to some of my moral theories.” (Which doesn’t sound crazy, actually!)
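To make that concrete, here's a minimal sketch (my own, with made-up names and predicates, not anything from the formalism itself) of a world-state that remembers its action history, so an action-focused theory can still be written as a predicate over states:

```python
from dataclasses import dataclass, field


@dataclass
class WorldState:
    facts: dict                                           # ordinary features of the world
    action_history: list = field(default_factory=list)    # actions I've taken so far


def take(state: WorldState, action: str, effects: dict) -> WorldState:
    """Return the successor state, remembering the action that produced it."""
    return WorldState(facts={**state.facts, **effects},
                      action_history=state.action_history + [action])


# A consequentialist-style theory only looks at the facts...
def enough_welfare(state: WorldState) -> bool:
    return state.facts.get("welfare", 0) >= 5


# ...while a deontology-style theory can also reject states whose history
# contains actions it deems wrong in themselves.
def no_lying(state: WorldState) -> bool:
    return "lie" not in state.action_history


s0 = WorldState(facts={"welfare": 3})
s1 = take(s0, "lie", {"welfare": 6})
print(enough_welfare(s1), no_lying(s1))   # True False
```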
The obvious one: setting boundaries on “okay” states is non-obvious, and is basically arbitrary for some moral theories. Depending on where the boundaries are set for each theory, theories could gain or lose influence over one’s actions. How should we think about okayness boundaries?
One potential desideratum is something like “honest bargaining.” Imagine each moral theory as an agent that sets its “okayness level” independently of the others, and acts to maximize good from its own point of view. Then our formalism should give each agent an incentive to report its true views. (I think this is a useful goal in practice, since I often do something like weighing considerations by taking turns inhabiting different moral views.)
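As a toy illustration of why honest reporting isn’t automatic (my own construction, with made-up numbers): suppose the agent picks whichever action maximizes the credence-weighted probability that each theory’s reported okayness threshold is met. A theory whose true threshold is easily satisfied gets ignored, so it can gain influence by exaggerating:

```python
CREDENCES = {"A": 0.4, "B": 0.6}

# Each action deterministically yields a per-theory "value" in this toy example.
ACTIONS = {
    "act1": {"A": 6, "B": 4},
    "act2": {"A": 4, "B": 9},
}


def choose(thresholds):
    """Pick the action with the highest credence-weighted okayness score."""
    def score(outcome):
        return sum(CREDENCES[t] for t, bar in thresholds.items() if outcome[t] >= bar)
    return max(ACTIONS, key=lambda a: score(ACTIONS[a]))


# Truthful thresholds: B is satisfied either way, so A's preference decides.
print(choose({"A": 5, "B": 3}))   # -> act1

# If B exaggerates its threshold, only its favourite action counts as okay for it,
# and B's larger credence now swings the decision.
print(choose({"A": 5, "B": 8}))   # -> act2
```

So a naive version of the mechanism rewards strategic exaggeration, which is exactly the failure mode honest bargaining is supposed to rule out.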
I think this kind of thinking naturally leads to moral parliament models—I haven’t actually read the relevant FHI work, but I imagine it says a bunch of useful things, e.g. about using some equivalent of quadratic voting between theories.
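For concreteness, here’s what a generic quadratic-voting scheme between theories might look like. This is just standard quadratic voting applied to moral theories; I’m not claiming it matches the FHI proposal, and all the names and numbers are invented:

```python
import math

# Each theory gets voice credits proportional to my credence in it and spends them
# across options; the effective votes from a theory grow only as the square root of
# credits spent, so no single theory can cheaply dominate every decision.

CREDITS = {"util": 60, "deont": 40}          # credits proportional to credence

# How each theory chooses to spread its budget over the options (assumed inputs).
SPENDING = {
    "util":  {"fund_bednets": 50, "keep_promise": 10},
    "deont": {"fund_bednets": 5,  "keep_promise": 35},
}


def tally(spending):
    totals = {}
    for theory, allocation in spending.items():
        assert sum(allocation.values()) <= CREDITS[theory], "over budget"
        for option, credits in allocation.items():
            totals[option] = totals.get(option, 0.0) + math.sqrt(credits)
    return totals


print(tally(SPENDING))
# util's big spend on bednets buys ~7.1 effective votes rather than 50, so the
# minority theory's strong preference for promise-keeping still carries real weight.
```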
I think there’s an unfortunate tradeoff here: either you accept arbitrary okayness levels, or you take on all the complexity of nuanced evaluations. But in practice, maybe success maximization could function as the lower-level heuristic (or a middle level, between easier heuristics and pure act-utilitarianism) of a multi-level utilitarianism approach.