Yes, I think understanding the microfoundations would be desirable. This need not take the form of a proof of optimality, but could come in a different flavour, as you said.
Some concepts that would be interesting to explore further, having thought about this a little bit more (mostly notes to future self):
* Unwillingness to let “noise” be the tie-breaker between exactly equally good options (where expected utility maximisation is indifferent): how does this translate to merely “almost equally good” options? This is related to assigning some value to “ambiguity aversion”: I can prefer to diversify as much as possible between equally good options without violating expected utility maximisation, but as soon as there are slight differences I need to trade off optimality against ambiguity aversion (see the first sketch after this list).
* More general considerations around non-commutativity: splitting funds between reasonable agents first and letting each optimise is not the same as having the agents vote first and then optimising based on the outcome of the vote (see the second sketch after this list). I seem to prefer the former, which seems non-utilitarian but more robust.
* Cleaner separation between moral uncertainty and epistemic & empirical uncertainty.
* Understand if and how this ties in with bargaining theory [1], as you said. In particular, is there a case for extending bargaining-theoretic or (more likely) “parliamentary” [2] style approaches beyond moral uncertainty to epistemic uncertainty?
* How does this interact with “robustness to adverse selection” as opposed to mere “noise”? E.g., is there some kind of optimality condition assuming my E[u|data] is in the worst case biased in an adversarial way by whoever gives me data? How does this tie in with robust optimisation? Does it lead to a maximin solution? (See the third sketch after this list.)
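First sketch, on tie-breaking: a minimal toy model, assuming a crude quadratic diversification bonus as one possible stand-in for ambiguity aversion (the function `best_split`, the utilities and the weight `lam` are all hypothetical, purely to make the trade-off concrete):

```python
# Toy sketch: allocate a budget between two options with expected utilities
# u1, u2, trading off expected utility against a diversification bonus.

import numpy as np

def best_split(u1, u2, lam, grid=10001):
    """Maximise x*u1 + (1-x)*u2 - lam*(x**2 + (1-x)**2) over x in [0, 1].

    The penalty term is smallest at the 50/50 split, so lam > 0 pulls the
    allocation towards maximal diversification.
    """
    x = np.linspace(0.0, 1.0, grid)
    objective = x * u1 + (1 - x) * u2 - lam * (x**2 + (1 - x) ** 2)
    return x[np.argmax(objective)]

# Exactly equal options: every split maximises expected utility, and any
# lam > 0 breaks the tie in favour of the 50/50 split rather than noise.
print(best_split(1.0, 1.0, lam=0.1))   # -> 0.5

# Almost equal options: the diversification term now costs optimality, so
# the split moves towards, but not all the way to, the better option.
print(best_split(1.01, 1.0, lam=0.1))  # -> 0.525
```

With exactly equal options the tie is broken for free; with a small gap the allocation only moves partway towards the better option, which is exactly the optimality vs ambiguity-aversion trade-off in the bullet above.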
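Second sketch, on non-commutativity: a toy illustration (hypothetical agents, options and numbers) that “split funds, then let each agent optimise” and “aggregate views, then optimise” can give different allocations:

```python
# Two agents, three options. Agent 1 favours A, agent 2 favours C, and both
# think B is almost as good as their favourite.

options = ["A", "B", "C"]
agent_utils = {
    "agent_1": {"A": 1.0, "B": 0.9, "C": 0.0},
    "agent_2": {"A": 0.0, "B": 0.9, "C": 1.0},
}

def argmax(utils):
    return max(utils, key=utils.get)

# Order 1: split the budget 50/50, each agent funds their own top option.
split_first = {opt: 0.0 for opt in options}
for utils in agent_utils.values():
    split_first[argmax(utils)] += 0.5

# Order 2: average the agents' utilities (a crude "vote"), then optimise.
mean_utils = {
    opt: sum(u[opt] for u in agent_utils.values()) / len(agent_utils)
    for opt in options
}
vote_first = {opt: 0.0 for opt in options}
vote_first[argmax(mean_utils)] = 1.0

print(split_first)  # {'A': 0.5, 'B': 0.0, 'C': 0.5} - diversified
print(vote_first)   # {'A': 0.0, 'B': 1.0, 'C': 0.0} - compromise option
```

Splitting first yields a diversified portfolio of each agent's favourite; aggregating first concentrates everything on the compromise option, so the two orders genuinely do not commute.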
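Third sketch, on adversarial bias: one way to formalise the worst-case question, assuming (my construction, not anything from the thread) an adversary who can inflate the reported estimates with a total bias budget `B` spread across options; `u_hat`, `B` and the grid search are all hypothetical:

```python
# Robust-optimisation toy: if the adversary's total bias budget is B (an L1
# constraint on the bias vector), the worst case for an allocation x is
# x @ u_hat - B * max(x), since the adversary dumps all the bias on whichever
# option we fund most. The maximin allocation then spreads funding out even
# though the naive estimates are not tied.

import numpy as np
from itertools import product

u_hat = np.array([1.0, 0.95, 0.9])  # reported E[u|data] per option
B = 0.3                             # adversary's total bias budget

def worst_case_value(x):
    # min over bias delta with sum(|delta|) <= B of x @ (u_hat - delta)
    return x @ u_hat - B * x.max()

# Grid search over allocations on the simplex.
grid = np.linspace(0, 1, 101)
best_x, best_v = None, -np.inf
for a, b in product(grid, grid):
    if a + b <= 1:
        x = np.array([a, b, 1 - a - b])
        v = worst_case_value(x)
        if v > best_v:
            best_x, best_v = x, v

print(best_x)  # roughly the equal split; naive argmax puts 100% on option 1
```

In this particular model the maximin solution is diversification, which suggests the “robustness to adverse selection” motivation and the diversification preference in the first bullet may be two faces of the same thing, though whether this survives a more careful setup is exactly the open question.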
[1] https://philpapers.org/rec/GREABA-8
[2] https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf