(1) The post attempts to steer a course between being completely non-technical and being very technical. It’s unclear whether it succeeds.
(2) The technical claim is mostly that argmax(actions) is a dumb decision procedure for boundedly rational agents acting in the real world, unless the actions themselves are quite “meta”.
Softmax is one of the more principled alternative choices (see, e.g., here).
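To make the contrast with argmax concrete, here is a minimal sketch in Python (the utility estimates and the inverse temperature are invented for illustration):

```python
import numpy as np

def argmax_choice(utilities):
    """Always take the single highest-scoring action (brittle under noisy estimates)."""
    return int(np.argmax(utilities))

def softmax_choice(utilities, beta=1.0, rng=None):
    """Sample an action with probability proportional to exp(beta * utility).

    beta -> infinity recovers argmax; beta -> 0 recovers uniform random choice.
    """
    rng = rng or np.random.default_rng()
    u = np.asarray(utilities, dtype=float)
    p = np.exp(beta * (u - u.max()))  # subtract max for numerical stability
    p /= p.sum()
    return int(rng.choice(len(u), p=p))

# With noisy utility estimates, argmax latches onto estimation error;
# softmax hedges across near-ties.
estimates = [1.00, 0.98, 0.30]
print(argmax_choice(estimates))           # always action 0, even if 1 is truly best
print(softmax_choice(estimates, beta=5))  # mostly 0 or 1, rarely 2
```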
(3) That argmax(actions) is not optimal for boundedly rational agents is perhaps best illuminated by information-theoretic bounded rationality.
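Concretely, in that framework the bounded agent maximizes expected utility minus an information cost for deviating from a default policy (a sketch of the standard objective from that literature; p0 is the default policy and beta a resource/inverse-temperature parameter), and the optimum is exactly a softmax:

```latex
% Sketch of the standard information-theoretic bounded rationality objective:
% trade expected utility against the KL cost of deviating from a default policy p0.
\[
  p^{\ast} = \arg\max_{p} \ \sum_{a} p(a)\, U(a) \;-\; \tfrac{1}{\beta}\, D_{\mathrm{KL}}(p \,\|\, p_0)
\]
% The solution is a softmax over utilities, recovering argmax only as beta -> infinity:
\[
  p^{\ast}(a) = \frac{p_0(a)\, e^{\beta U(a)}}{\sum_{a'} p_0(a')\, e^{\beta U(a')}}
\]
```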
In my view, the technical research useful for developing a good theory of moral uncertainty for bounded agents in the real world is currently mostly located in other fields (ML, decision theory, AI safety, social choice theory, mechanism design, etc.), so I would not expect the lack of something in the moral uncertainty literature to be evidence of anything.
E.g., the internal bargaining you link is mostly just OCB and HG applying bargaining theory to bargaining between moral theories.
We say worldview diversification is less ad hoc than the other approaches: worldview diversification is mostly Thompson sampling.
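To gesture at the analogy, a toy sketch (the worldviews, Beta priors, and payoff probabilities are all invented): each round, sample one plausible payoff per worldview from your current credences, allocate that round’s resources to whichever worldview comes out on top, then update on what you observe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical worldviews, each modeled as a Bernoulli "does acting on it
# pay off?" arm with a Beta posterior stored as [successes + 1, failures + 1].
posteriors = {"worldview_A": [1, 1], "worldview_B": [1, 1], "worldview_C": [1, 1]}
true_payoff = {"worldview_A": 0.7, "worldview_B": 0.5, "worldview_C": 0.2}  # unknown to the agent

for _ in range(1000):
    # Thompson sampling: draw one plausible payoff per worldview from its posterior...
    draws = {w: rng.beta(a, b) for w, (a, b) in posteriors.items()}
    # ...and give this round's resources to the worldview whose draw is highest.
    chosen = max(draws, key=draws.get)
    # Observe an outcome and update that worldview's posterior.
    success = rng.random() < true_payoff[chosen]
    posteriors[chosen][0 if success else 1] += 1

# Resources end up split across worldviews roughly in proportion to the
# credence that each is the one worth acting on -- i.e., worldview diversification.
print({w: a + b - 2 for w, (a, b) in posteriors.items()})
```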
(4) You can often “rescue” some functional form if you really want. Love argmax()? Well, do argmax(ways to choose actions) or something similar. Really attached to the label of utilitarianism, but in practice want to do something closer to virtue ethics? Well, do utilitarianism, but just over actions of the type “select your next self” or similar.
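A toy illustration of that lifting move (every name, utility, and the “deliberation cost” below is made up):

```python
# Toy "rescue" of argmax by applying it one level up, to ways of choosing actions.

actions = {"a": 1.0, "b": 0.9, "c": 0.1}  # action -> estimated utility

def pick_greedy(acts):
    """Exhaustively take the top-scoring action."""
    return max(acts, key=acts.get)

def pick_satisficing(acts):
    """Take the first action that clears an aspiration threshold."""
    return next(a for a, u in acts.items() if u >= 0.8)

def value_of_adopting(procedure):
    """Score a whole decision procedure: the utility of the action it yields,
    minus a made-up deliberation cost for exhaustive search."""
    cost = 0.3 if procedure is pick_greedy else 0.0
    return actions[procedure(actions)] - cost

# argmax(ways to choose actions) rather than argmax(actions):
best_procedure = max([pick_greedy, pick_satisficing], key=value_of_adopting)
print(best_procedure.__name__)  # -> pick_satisficing, under these made-up numbers
```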