What are the comparative statics for how uncertainty affects decisionmaking? How does a decisionmaker’s behavior differ under some uncertainty compared to no uncertainty?
Consider a social planner problem where we make transfers to maximize total utility, given idiosyncratic shocks to endowments. There are two agents, A and B, with endowments e_A = 5 (with probability 1) and e_B = 0 with probability p, 10 with probability 1 − p. So B either gets nothing or twice as much as A.
We choose a transfer T to solve:
$$\max_{T}\; u(5-T) + p \cdot u(0+T) + (1-p) \cdot u(10+T) \quad \text{s.t.} \quad 0 \le T \le 5$$
For a baseline, consider p = 0.5 and u = ln. Then we get an optimal transfer of T* ≈ 1.83. Intuitively, as p → 0, T* → 0 (if B gets 10 for sure, don’t make any transfer from A to B), and as p → 1, T* → 2.5 (if B gets 0 for sure, split A’s endowment equally).
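A quick numerical check (a sketch of mine; the solver choice and tolerances are arbitrary):

```python
# Minimal sketch: solve max_T ln(5 - T) + p*ln(T) + (1 - p)*ln(10 + T)
# over 0 <= T <= 5 for a given p.
import numpy as np
from scipy.optimize import minimize_scalar

def optimal_transfer(p):
    # Negate the planner's objective, since minimize_scalar minimizes.
    def neg_objective(T):
        return -(np.log(5 - T) + p * np.log(T) + (1 - p) * np.log(10 + T))
    # Shrink the bounds slightly so ln() stays finite at the endpoints.
    return minimize_scalar(neg_objective, bounds=(1e-9, 5 - 1e-9),
                           method="bounded").x

print(optimal_transfer(0.5))  # ~1.83, the baseline T*
print(optimal_transfer(0.0))  # ~0: B gets 10 for sure, so no transfer
print(optimal_transfer(1.0))  # ~2.5: B gets 0 for sure, so split A's endowment
```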
So that’s a scenario with risk (known probabilities), but not uncertainty (unknown probabilities). What if we’re uncertain about the value of p?
Suppose we think p ∼ F, for some distribution F over [0, 1]. If we maximize expected utility, the problem becomes:
$$\max_{T}\; \mathbb{E}\big[\,u(5-T) + p \cdot u(0+T) + (1-p) \cdot u(10+T)\,\big] \quad \text{s.t.} \quad 0 \le T \le 5$$
Since the objective function is linear in probabilities, we end up with the same problem as before, except with E[p] instead of p. If we know the mean of F, we plug it in and solve as before.
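To see the collapse concretely, here’s a sketch where I pick a particular F (a Beta(2, 2), so E[p] = 0.5, which is my illustrative choice) and compare the Monte Carlo optimum against the plug-in optimum:

```python
# Sketch: with p ~ Beta(2, 2), the Monte Carlo objective and the plug-in
# objective pick out the same transfer, because the objective is linear in p.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
p_draws = rng.beta(2, 2, size=100_000)  # E[p] = 0.5

def objective(T, p):
    return np.log(5 - T) + p * np.log(T) + (1 - p) * np.log(10 + T)

bounds = (1e-9, 5 - 1e-9)
# Maximize the sample average of the objective over the draws of p...
mc = minimize_scalar(lambda T: -objective(T, p_draws).mean(),
                     bounds=bounds, method="bounded")
# ...and compare against plugging the sample mean of p in directly.
plug = minimize_scalar(lambda T: -objective(T, p_draws.mean()),
                       bounds=bounds, method="bounded")
print(mc.x, plug.x)  # both ~1.83: only E[p] enters the problem
```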
So it turns out that this form of uncertainty doesn’t change the problem very much.
Questions:
- If we don’t know the mean of F, is the problem simply intractable? Should we resort to maxmin utility?
- What if we have a hyperprior over the mean of F? Do we just take another level of expectations, and end up with the same solution?
- How does a stochastic dominance decision theory work here?
> If we don’t know the mean of F, is the problem simply intractable? Should we resort to maxmin utility?

It’s possible in a given situation that we’re willing to commit to a range of probabilities, e.g. p ∈ [a, b] (without committing to E[p] = (a+b)/2 or any other number), so that we can check the recommendations for each value of p (sensitivity analysis).
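For example (the range [0.3, 0.7] and the grid are just illustrative choices of mine), we can trace T*(p) across the committed range and see how far the recommendation actually moves:

```python
# Sensitivity-analysis sketch: recompute T* across a committed range of p.
import numpy as np
from scipy.optimize import minimize_scalar

def optimal_transfer(p):
    def neg_objective(T):
        return -(np.log(5 - T) + p * np.log(T) + (1 - p) * np.log(10 + T))
    return minimize_scalar(neg_objective, bounds=(1e-9, 5 - 1e-9),
                           method="bounded").x

transfers = [optimal_transfer(p) for p in np.linspace(0.3, 0.7, 41)]
print(min(transfers), max(transfers))  # T* ranges over roughly [1.39, 2.15]
```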
I don’t think maxmin utility follows, but it’s one approach we can take.
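Here is what maxmin looks like on the same committed range (again with my illustrative parameters); note what it effectively does in this setup:

```python
# Maxmin sketch over p in [a, b]: pick T to maximize the worst-case objective.
# Since ln(T) < ln(10 + T), expected utility falls as p rises, so the worst
# case is always p = b: maxmin here just plans for the highest chance B gets 0.
import numpy as np
from scipy.optimize import minimize_scalar

def objective(T, p):
    return np.log(5 - T) + p * np.log(T) + (1 - p) * np.log(10 + T)

def worst_case(T, a=0.3, b=0.7):
    # Linear in p, so the minimum over [a, b] sits at an endpoint.
    return min(objective(T, a), objective(T, b))

res = minimize_scalar(lambda T: -worst_case(T), bounds=(1e-9, 5 - 1e-9),
                      method="bounded")
print(res.x)  # ~2.15, i.e. the p = 0.7 recommendation
```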
> What if we have a hyperprior over the mean of F? Do we just take another level of expectations, and end up with the same solution?

Yes, I think so.
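A one-line check, in my own notation: write m for the mean of F and H for the hyperprior over m. The objective only touches the uncertainty through E[p], and by the law of iterated expectations

$$\mathbb{E}[p] \;=\; \mathbb{E}_{m \sim H}\!\big[\,\mathbb{E}[p \mid m]\,\big] \;=\; \mathbb{E}_{m \sim H}[m],$$

so the extra level just averages out: we plug in the hyperprior’s mean of the mean and solve as before.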
> How does a stochastic dominance decision theory work here?

I’m not sure specifically, but I’d expect it to be more permissive and often allow multiple options for a given setup. I think the specific approach in that paper roughly amounts to assuming that we only know the aggregate (not individual) utility function up to monotonic transformations, not even linear transformations, so that any action which is permissible under some degree of risk aversion with respect to aggregate utility is permissible generally. (We could also have uncertainty about individual utility/welfare functions, which makes things more complicated.)
I think we can justify ruling out all the options the maximality rule rules out, although it’s very permissive. Maybe we can put more structure on our uncertainty than it assumes. For example, we can talk about distributional properties for p without specifying an actual distribution for p, e.g. p is more likely to be between 0.8 and 0.9 than between 0.1 and 0.2, although I won’t commit to a probability for either.
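To illustrate how permissive that kind of dominance reasoning is here (my construction, not any particular formalization of the maximality rule): expected utility is linear in p, so one transfer beats another under every p in (0, 1) exactly when it is weakly better in both of B’s endowment states, and strictly better in one. A grid check leaves everything up to 2.5 standing:

```python
# Sketch of a maximality-style dominance check. With transfer T, aggregate
# utility is ln(5-T) + ln(T) if B's endowment is 0 (prob p), and
# ln(5-T) + ln(10+T) if it is 10 (prob 1-p). Expected utility is linear in p,
# so T1 beats T2 for every p in (0, 1) iff it is weakly better in both states
# and strictly better in one.
import numpy as np

def outcomes(T):
    return np.log(5 - T) + np.log(T), np.log(5 - T) + np.log(10 + T)

def dominates(T1, T2):
    b1, g1 = outcomes(T1)
    b2, g2 = outcomes(T2)
    return b1 >= b2 and g1 >= g2 and (b1 > b2 or g1 > g2)

grid = np.linspace(0.01, 4.99, 499)
undominated = [T for T in grid if not any(dominates(S, T) for S in grid)]
print(min(undominated), max(undominated))  # ~(0.01, 2.5): only T > 2.5 is ruled out
```

Raising T toward 2.5 helps the state where B gets nothing and hurts the state where B gets 10, so no transfer in (0, 2.5] dominates any other; only overshooting past 2.5 gets ruled out.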