A couple of comments.
(1) I found this post quite hard to understand; it is heavy on jargon.
(2) I’d have appreciated it if you’d located this in what you take to be the relevant literature. I’m not sure whether you’re arguing (A) that you might want to diversify resources across various causes even if you are certain of some moral view (for instance, because of diminishing marginal returns: you fund option X up to some point and then switch to Y), or (B) that you might want to diversify because you are morally uncertain.
(3) Because of (2), I’m not sure what your objection to ‘argmax’ is. You say ‘naive argmax’ doesn’t work, but isn’t that a reason to do ‘non-naive argmax’ rather than something else? Cf. debates where people object to consequentialism by claiming it implies you ought to kill people and harvest their organs, and the consequentialist replies that this is naive and not actually what consequentialism would recommend.
Fwiw, the standard approaches to moral uncertainty (‘my favourite theory’ and ‘maximise expected choiceworthiness’) provide no justification in themselves for splitting your resources. In contrast, the ‘worldview diversification’ approach does. You say that worldview diversification is ad hoc, but I think it can be justified by a non-standard approach to moral uncertainty, one I call ‘internal bargaining’ and have written about here.
(1) The post tries to steer a course between being completely non-technical and being very technical; it’s unclear whether it succeeds.
(2) The technical claim is mostly that argmax(actions) is a dumb decision procedure for boundedly rational agents in the real world, at least when the actions are not very ‘meta’.
Softmax is one of the more principled alternatives (see, e.g., here).
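To illustrate the contrast, here is a minimal sketch in Python (not from the post; the estimates and temperature are made-up numbers for illustration): argmax commits everything to the single highest-estimate option, while softmax spreads resources in proportion to exp(estimate / temperature), so a close runner-up is not wiped out by a small difference in noisy estimates.

```python
import numpy as np

# Hypothetical expected-value estimates for three options (illustrative numbers only).
estimates = np.array([1.00, 0.97, 0.40])

# Naive argmax: commit all resources to the single highest estimate.
argmax_allocation = np.zeros_like(estimates)
argmax_allocation[np.argmax(estimates)] = 1.0

# Softmax: allocate in proportion to exp(estimate / temperature);
# the temperature controls how sharply the allocation concentrates on the leader.
temperature = 0.1
weights = np.exp(estimates / temperature)
softmax_allocation = weights / weights.sum()

print(argmax_allocation)   # [1. 0. 0.]
print(softmax_allocation)  # roughly [0.57 0.43 0.00] - the close runner-up still gets resources
```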
(3) That argmax(actions) is not the optimal thing to do for boundedly rational agents is perhaps best illuminated by information-theoretic bounded rationality.
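For readers who don’t know that framework, the core result (stated roughly here; this is the standard free-energy formulation of information-theoretic bounded rationality, not something spelled out in the post) is that a bounded agent with prior policy $p_0$ and an information-processing budget governed by $\beta$ solves

$$\max_{p}\; \mathbb{E}_{p}[U(a)] \;-\; \frac{1}{\beta}\, D_{\mathrm{KL}}\!\left(p \,\|\, p_0\right), \qquad p^{*}(a) \;=\; \frac{p_0(a)\, e^{\beta U(a)}}{\sum_{a'} p_0(a')\, e^{\beta U(a')}},$$

i.e. the optimal bounded-rational policy is a softmax over utilities, and argmax is recovered only in the limit $\beta \to \infty$, where deliberation is free.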
In my view, the technical research useful for developing a good theory of moral uncertainty for bounded agents in the real world is currently mostly located in other fields (ML, decision theory, AI safety, social choice theory, mechanism design, etc.), so I would not expect the absence of something from the moral uncertainty literature to be evidence of much.
E.g., the internal bargaining you link to is essentially OCB and HG applying bargaining theory to bargaining between moral theories.
We say worldview diversification is less ad hoc than the alternatives: worldview diversification is, roughly, Thompson sampling.
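To make the analogy concrete, here is a minimal Thompson-sampling sketch (the worldview names and Beta parameters are purely illustrative assumptions, not anything from the post): each unit of resources goes to whichever worldview wins a posterior draw, so over many rounds the split tracks the probability that each worldview’s favoured option is actually best.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely illustrative: represent uncertainty about how good each worldview's
# favoured option is with a Beta distribution (names and parameters made up).
posteriors = {
    "worldview_a": (8, 4),   # Beta(alpha, beta)
    "worldview_b": (5, 5),
    "worldview_c": (3, 2),
}

def thompson_allocation(posteriors, n_rounds=10_000):
    """Give each unit of resources to whichever worldview wins a posterior draw.

    Over many rounds this splits resources roughly in proportion to the
    probability that each worldview's favoured option is in fact the best one.
    """
    counts = dict.fromkeys(posteriors, 0)
    for _ in range(n_rounds):
        draws = {name: rng.beta(a, b) for name, (a, b) in posteriors.items()}
        counts[max(draws, key=draws.get)] += 1
    return {name: c / n_rounds for name, c in counts.items()}

print(thompson_allocation(posteriors))
# The highest-confidence worldview gets the largest share, but the others
# still receive substantial resources rather than being argmax'd away.
```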
(4) You can often “rescue” some functional form if you really want to. Love argmax()? Well, do argmax over ways of choosing actions, or something like that. Really attached to the label of utilitarianism, but in practice want to do something closer to virtues? Well, do utilitarianism, but only over actions of the type “select your next self” or similar.