If we don’t know the mean of F, is the problem simply intractable? Should we resort to maxmin utility?
It’s possible in a given situation that we’re willing to commit to a range of probabilities, e.g. p ∈ [a, b] (without committing to E[p] = (a+b)/2 or any other number), so that we can check the recommendations for each value of p (sensitivity analysis).
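As a minimal sketch of that kind of sensitivity analysis (all payoffs and the range [a, b] are invented for illustration): sweep p over the committed range and see whether the recommended action changes.

```python
# Hypothetical sensitivity analysis: commit only to p in [a, b] and check
# which action maximizes expected utility at each value of p in that range.
import numpy as np

# Made-up payoffs: utility of each action if the event occurs (prob p)
# or doesn't (prob 1 - p).
def eu_action_A(p):
    return p * 10 + (1 - p) * 0   # high payoff only if the event occurs

def eu_action_B(p):
    return p * 4 + (1 - p) * 3    # moderate payoff either way

a, b = 0.2, 0.6                   # the range we're willing to commit to
grid = np.linspace(a, b, 9)
recommendations = ["A" if eu_action_A(p) > eu_action_B(p) else "B" for p in grid]

# If the recommendation agrees across the whole grid, the decision is robust
# to our uncertainty about p; if it flips (as here, around p = 1/3), the
# choice is genuinely sensitive to where p falls in [a, b].
print(dict(zip(np.round(grid, 2), recommendations)))
```

Here the answer flips within the range, so committing to p ∈ [a, b] alone doesn’t settle the choice.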
I don’t think maxmin utility follows, but it’s one approach we can take.
What if we have a hyperprior over the mean of F? Do we just take another level of expectations, and end up with the same solution?
I’m not sure specifically, but I’d expect it to be more permissive and often allow multiple options for a given setup. I think the approach in that paper effectively assumes we only know the aggregate (not individual) utility function up to monotonic transformations, not even linear transformations, so that any action which is permissible under some degree of risk aversion with respect to aggregate utility is permissible generally. (We could also have uncertainty about individual utility/welfare functions, which makes things more complicated.)
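To see why knowing aggregate utility only up to monotonic transformations widens the permissible set, here is a toy example (numbers invented): a 50/50 gamble over aggregate utility versus a sure thing. A linear (risk-neutral) reading prefers the gamble; a concave (risk-averse) transformation prefers the sure thing, so both options come out permissible.

```python
# Toy illustration: rankings of risky options over aggregate utility can
# flip under monotone transformations, so more than one option is
# permissible if the utility scale is only known up to such transformations.
import math

lottery = [(0.5, 0.0), (0.5, 10.0)]   # 50/50 gamble over aggregate utility
sure = 4.0                            # a sure aggregate utility level

def expected(transform):
    return sum(p * transform(x) for p, x in lottery)

identity = lambda x: x   # risk-neutral reading of the aggregate scale
sqrt = math.sqrt         # one risk-averse (concave) monotone transformation

print(expected(identity), "vs", identity(sure))  # gamble wins: 5.0 > 4.0
print(expected(sqrt), "vs", sqrt(sure))          # sure thing wins: ~1.58 < 2.0
```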
I think we can justify ruling out all options the maximality rule rules out, although it’s very permissive. Maybe we can put more structure on our uncertainty than it assumes. For example, we can talk about distributional properties of p without specifying an actual distribution for p, e.g. p is more likely to be between 0.8 and 0.9 than between 0.1 and 0.2, although I won’t commit to a probability for either.
Yes, I think so: with a hyperprior over the mean of F, we’d just take another level of expectations and end up with the same kind of solution.
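The maximality rule mentioned above can be sketched concretely. Assuming (for illustration) that our imprecise credence is just a set of admissible values for p, an option is ruled out iff some other option has strictly higher expected utility at every admissible p; everything else stays permissible. All options and numbers below are invented.

```python
# Hedged sketch of the maximality rule over a made-up set of admissible
# values for p (no single distribution is committed to).
import numpy as np

admissible_p = np.linspace(0.1, 0.9, 17)   # assumed set of credences for p

# Illustrative expected utilities as functions of p (values invented):
options = {
    "A": lambda p: 10 * p,   # pays off only if the event is likely
    "B": lambda p: 3 + p,    # safe middle option
    "C": lambda p: 2,        # dominated: B beats C at every admissible p
}

def maximal(options, ps):
    names = list(options)
    eus = {n: np.array([options[n](p) for p in ps]) for n in names}
    # keep n unless some other option m strictly dominates it at every p
    return [n for n in names
            if not any(np.all(eus[m] > eus[n]) for m in names if m != n)]

print(maximal(options, admissible_p))
```

C is ruled out (B beats it everywhere), but A and B both survive because neither dominates the other across the whole range, which is exactly the permissiveness being discussed.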