Thanks Karthik. I think we might be talking past each other a bit, but replying in order on your first four replies:
My key issue with higher etas isn’t philosophical disagreement; it’s their use as guidance for practical decision-making. If I had taken your post at face value and used eta=1.5 to value UK GDP relative to other ways we could spend money, I think I would have predictably destroyed a lot of value for the global poor by failing to account for the full set of spillovers (because I think doing so is somewhere between very difficult and impossible). Even within low-income countries there are pervasive tax, pecuniary, and other externalities from high-income spending/consumption on lower-income co-nationals, and these are closer to linear than logarithmic in $s. None of this is to deny the possibility, or likelihood, that for a totally abstract, pure notion of consumption with no externalities at all (truly final personal consumption), a log or steeper eta would be appropriate. It’s to say that that is a predictably bad approximation of our world, and accordingly a bad decision rule given the actual data that we have. I think the main reply here has to be a defense of the feasibility of explicitly accounting for all relevant spillovers; having made multiple (admittedly weak!) stabs in that direction, I’m personally pessimistic, but I’d certainly love to see others’ attempts.
In the blog post I linked in my #2 above, we explicitly consider the set point implied by the IDInsight survey data, and we think it’s consistent with what we’re doing. We’re open to the argument for using a higher fixed constant on being alive, but instead of making you focus more on redistribution of income, the first-order consequence of that decision would be to focus more on saving poor people’s lives (which is in fact what we predominantly do). It’s also worth noting that as your weight there gets high, it gets increasingly out of line with people’s revealed preferences and the VSL literature (and it’s not obvious to me why you’d take those revealed preferences less seriously than the revealed preferences around eta).
“I think almost everyone would agree that 10% income increase is worth much more to a poor person than a rich person”—I don’t think that’s right as a descriptive claim, but even if it were, the point I’m making in #1 above still holds: if your income measure is imperfect as a measure of purely private consumption without any externalities (and I think they all are), then even small positive externalities that are ~linear in $ will dominate the effective utility calculation as eta reaches or exceeds 1. I think there are many such externalities (taxes, philanthropy, aid, R&D, trade…), such that very high etas will lead to predictably bad policy advice.
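To make the arithmetic behind this concrete, here is a toy sketch (the spillover coefficient alpha and the income levels are made-up illustrations, not claims about real magnitudes): with isoelastic utility, private marginal utility falls like c^-eta, while an externality that is linear in $ contributes a constant value per marginal dollar, so their ratio grows like c^eta and the linear term dominates at high incomes once eta is at or above 1.

```python
# Toy illustration only: alpha is a hypothetical linear spillover
# value per marginal dollar of consumption.
alpha = 1e-4
eta = 1.5

for c in (1_000, 100_000, 1_000_000):
    private_mu = c ** -eta       # isoelastic marginal utility of own consumption
    ratio = alpha / private_mu   # spillover value relative to private value
    print(f"c=${c:>9,}  spillover/private = {ratio:,.1f}")
```

At $1,000 the linear spillover is already a few times the private marginal utility here, and by $1,000,000 it is larger by orders of magnitude, which is the sense in which mismeasured linear externalities swamp the calculation at high etas.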
You can add a constant normalizing term and it doesn’t change my original point—maybe it’s worth checking the Weitzman paper I linked to get an intuition? There’s genuinely more “at stake” in higher incomes when you have a lower eta vs a higher eta, so if you’re trying to make the correct utilitarian decision under true uncertainty, you don’t want to take an unweighted mean of eta and then run with it; you want to run your scenarios over different etas and weight by the stakes to get the best aggregate outcome. (I think how you specify the units might matter for the conclusion here, though, a la the two envelope problem; I’m not sure.)
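A minimal sketch of the difference (the consumption numbers, the three eta scenarios, and the equal scenario weights are all my toy assumptions): averaging eta first and then evaluating once gives a very different answer than evaluating the gain under each eta and then averaging, because the low-eta scenario has far more at stake and dominates the scenario average.

```python
import math

def iso_u(c, eta):
    """Isoelastic (CRRA) utility; the log form is the eta = 1 case."""
    return math.log(c) if eta == 1 else (c ** (1 - eta) - 1) / (1 - eta)

etas = [0.5, 1.0, 1.5]   # scenarios we are uncertain over (toy values)
weights = [1 / 3] * 3    # equal credence in each (toy assumption)

# A toy intervention raising someone's consumption from $500 to $550.
# Approach 1: take the unweighted mean of eta, then evaluate once.
eta_bar = sum(w * e for w, e in zip(weights, etas))
mean_eta_gain = iso_u(550, eta_bar) - iso_u(500, eta_bar)

# Approach 2 (averaging over scenarios): evaluate the gain under each
# eta separately, then take the credence-weighted average.
scenario_gain = sum(w * (iso_u(550, e) - iso_u(500, e))
                    for w, e in zip(weights, etas))

print(mean_eta_gain, scenario_gain)
```

In this toy case the scenario average is several times the mean-eta answer, driven almost entirely by the eta=0.5 scenario, which is the “weight by the stakes” point.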
Got it, I think I misunderstood that point the first time. Yes, I am convinced that this is an issue that is worth choosing log over isoelastic for.
Yes, I agree with the first-order consequence of focusing more on saving lives. The purpose of this exercise is just to compare different approaches that only increase income, and I was suggesting that a high set point is a sufficient way to avoid that spilling over into unappealing implications for saving lives. It is true that a very high set point is inconsistent with revealed-preference VSLs, though. I don’t have a good way to resolve that. I have an intuition that low VSLs are a problem and we shouldn’t respect them, but it’s not one I can defend, so I think you’re right on this.
Agreed
I’m on board with the idea of averaging over scenarios a la Weitzman; my original thinking was that a normalizing constant would shrink the scale of differences between the scenarios and thus reduce the effect of outlier etas. But I was confusing two different concepts: a high normalizing constant would reduce the % difference between the scenarios, but not the absolute difference between them, which is the important quantity for expected value.
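Concretely (toy numbers, and the additive constant k is just an illustrative normalization): adding k to every scenario’s utility shrinks the percentage gap between scenarios but leaves the absolute gap, which is what expected value responds to, exactly unchanged.

```python
import math

def iso_u(c, eta, k=0.0):
    """Isoelastic utility plus an additive normalizing constant k (toy)."""
    base = math.log(c) if eta == 1 else (c ** (1 - eta) - 1) / (1 - eta)
    return base + k

c = 550  # toy consumption level
for k in (0.0, 1_000.0):
    low_eta = iso_u(c, 0.5, k)    # low-eta scenario: much more at stake
    high_eta = iso_u(c, 1.5, k)   # high-eta scenario
    abs_gap = low_eta - high_eta  # identical for every k
    pct_gap = abs_gap / high_eta  # shrinks as k grows
    print(f"k={k:>7}: absolute gap {abs_gap:.2f}, % gap {pct_gap:.1%}")
```

The absolute gap between scenarios is the same at k=0 and k=1,000, while the percentage gap collapses, which is exactly the confusion described above.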
Thanks, appreciate it! FWIW, I sympathize with “I have an intuition that low VSLs are a problem and we shouldn’t respect them” for some definition of “low”, but I think it’s just a question of what the relevant “low” is.