Maximizing a linear objective always leads to a corner solution. So to get an optimal interior allocation, you need to introduce nonlinearity somehow. Different approaches to this problem differ mainly in how they introduce and justify nonlinear utility functions. I can’t see where the nonlinearity is introduced in your framework. That makes me suspect the credence-weighted allocation you derive is not actually the optimal allocation, even under model uncertainty. Am I missing something?
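To make the corner-solution point concrete, here is a toy sketch (my own construction, not from the thread): maximising a linear objective c·x over the budget simplex puts all weight on the option with the largest coefficient, no matter how close the runners-up are.

```python
import numpy as np

def maximise_linear(c):
    """Optimal allocation for a linear objective over the simplex
    {x >= 0, sum(x) = 1}: a corner solution on argmax(c)."""
    x = np.zeros(len(c))
    x[np.argmax(c)] = 1.0
    return x

# Hypothetical per-unit expected utilities; option 1 wins by a hair.
c = np.array([0.30, 0.31, 0.29])
print(maximise_linear(c))  # → [0. 1. 0.]: everything on option 1
```

A tiny perturbation of the coefficients can flip the entire allocation to a different corner, which is the sensitivity the rest of the thread is about.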
Yes, agree with all your points. The reason I get a different allocation is indeed because I ultimately don’t maximise—the outermost step is just averaging. This is hard to justify philosophically, but the intuition is sort of “if my maximiser is extremely sensitive to ~noise, I throw out the maximiser and just average over plausible optimal solutions”, which I think is in fact what people often do in different domains. (Where “noise” does a lot of work—of course I am very vague about what part of the probability distribution I’m happy to integrate out before the optimisation and which part I keep.)
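The “average over plausible optimal solutions” rule described above can be sketched as follows (names and numbers are mine, purely illustrative): under each model, compute the corner solution; then average those allocations by credence, rather than maximising the credence-weighted objective.

```python
import numpy as np

def corner(c):
    """Corner solution: all weight on the option with the largest coefficient."""
    x = np.zeros(len(c))
    x[np.argmax(c)] = 1.0
    return x

# Hypothetical expected utilities per unit of funding under three models.
models = np.array([[0.32, 0.30, 0.29],   # model A
                   [0.29, 0.33, 0.30],   # model B
                   [0.28, 0.30, 0.31]])  # model C
credences = np.array([0.5, 0.3, 0.2])

# Maximise-then-average: an interior, credence-weighted allocation.
avg_of_maximisers = credences @ np.array([corner(c) for c in models])

# Average-then-maximise: standard EU maximisation, again a corner solution.
eu_maximiser = corner(credences @ models)

print(avg_of_maximisers)  # → [0.5 0.3 0.2]
print(eu_maximiser)       # → [0. 1. 0.]
```

The two orderings disagree here, which also illustrates the non-commutativity point raised further down: the averaging step is done on allocations in the first case and on the objective in the second.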
Just to add: This is similar to taking the average over what many rational utility maximising agents with slightly different models/world views would do, so in some sense if many people followed this rule the aggregate outcome might be very similar to everyone optimising.
You’re certainly right that this describes why people diversify, but I think the interesting challenge is to understand under what conditions this behaviour is optimal.
You’re hinting at a bargaining microfoundation, where diversification can be justified as the solution arrived at by a group of agents bargaining over how to spend a shared pot of money. I think that’s fascinating and I would explore that more.
Yes, I think understanding the microfoundations would be desirable. This need not necessarily be in the form of a proof of optimality, but could come in a different flavour, as you said.
Some concepts that would be interesting to explore further, having thought about this a little bit more (mostly notes to future self):
* Unwillingness to let “noise” be the tie-breaker between exactly equally good options (where expected utility maximisation is indifferent): how does this translate to merely “almost equally good” options? This is related to placing some value on “ambiguity aversion”: I can prefer to diversify as much as possible between equally good options without violating utility maximisation, but as soon as there are slight differences I would need to trade off optimality against ambiguity aversion.
* More general considerations around the non-commutativity between splitting funds between reasonable agents first and letting them optimise, versus letting reasonable agents vote first and then optimising based on the outcome of the vote. I seem to prefer the first, which seems non-utilitarian but more robust.
* Cleaner separation between moral uncertainty and epistemic & empirical uncertainty.
* Understand if and how this ties in with bargaining theory [1], as you said; in particular, is there a case for extending bargaining-theoretic or (more likely) “parliamentary” [2] style approaches beyond moral uncertainty to epistemic uncertainty?
* How does this interact with “robustness to adverse selection” as opposed to mere “noise”? E.g. is there some kind of optimality condition assuming my E[u|data] is, in the worst case, biased in an adversarial way by whoever gives me the data? How does this tie in with robust optimisation? Does this lead to a maximin solution?
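The maximin question in the last bullet can be sketched with a minimal robust-optimisation example (my own construction, with made-up numbers): pick the allocation, on a coarse grid, that maximises the worst-case expected utility across models.

```python
import numpy as np

# Hypothetical per-unit utilities for two options under two models;
# each model favours a different option.
models = np.array([[0.4, 0.1],   # model A
                   [0.1, 0.4]])  # model B

best_x, best_worst = None, -np.inf
for k in range(101):
    x = np.array([k / 100, 1 - k / 100])  # candidate split of the budget
    worst = min(models @ x)               # adversary picks the worst model
    if worst > best_worst:
        best_x, best_worst = x, worst

print(best_x)  # → [0.5 0.5]: the maximin solution is interior
```

Unlike plain EU maximisation, the worst case over models is a piecewise-linear concave function of the allocation, so the maximin solution can (and here does) land in the interior rather than at a corner.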
[1] https://philpapers.org/rec/GREABA-8
[2] https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf