Someone emailed me this and asked for thoughts, so I thought I’d share some cleaned up reactions here. Full disclosure—I work at Open Phil on some related issues:
Thanks for the post—I think it’s helpful, and I agree that I would like to see the EA community engage more with Lant’s arguments.
If we’re focused primarily on near-term human welfare (which seems to be the frame for the post), I think it’s really important to think (and do back-of-the-envelope calculations) more explicitly in terms of utility rather than in terms of absolute dollars. In the post, you allude to the need to adjust for this (“It should be noted that the later stages of the growth accelerations affect progressively richer people, so produce less utility from additional consumption.”), but I think it’s actually first-order. In general, I think true humanitarian welfare is distributed much more linearly than exponentially, and that Jones and Klenow’s welfare concept doesn’t map very well to how I think about utility. I don’t have any knock-down arguments here, but life satisfaction survey data and lifespan data both suggest the relevant metric is much closer to log(GDP) than pure GDP: many people in rich countries are 100x richer than in poor countries, but I don’t think their lives are 100x (or even 10x) better, and they live <2x longer on average. I could propose some thought experiments here (willingness-to-pay to save different lives, how many lives would you live in one place in exchange for just one in another), but I think the intuition is pretty straightforward. Some other thoughts on using GDP $ instead of something like average(log($)) or total(log($)) as your unit:
Using GDP $ ignores distribution, which is key to, e.g., the GiveDirectly case, which explicitly isn’t aiming at global GDP. More generally, growth accelerations typically increase inequality (at least for a while), and in the quick data I googled for India, the median income is less than half of the average; just adjusting for GDP/capita will miss some of the median income dynamics you recognize as important.
Using raw GDP makes you more likely to focus on rich countries: for instance, if I thought that relaxing zoning constraints would increase US GDP by 5% (example source), a perpetuity of that increase would be worth 6 times Lant’s calculation of the Indian growth episodes combined, even though it seems far less morally valuable to me. Log($)-type considerations are, I think, a lot of what motivates a focus on developing countries in the first place, and would push towards more attention to poorer countries and domestic inequality within those countries, relative to a GDP-first framework.
Logging $ to get to utility terms generally removes the compounding dynamic in absolute $ that I think partially attracts people to growth arguments; I tend to think that’s good and correct, but am not sure, and would be interested in reading more on this if people have pointers.
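The log($)-vs-raw-$ points above can be made concrete with a toy back-of-the-envelope calculation. A minimal sketch (the incomes are hypothetical round numbers chosen only for illustration, not real data):

```python
import math

# Hypothetical round per-capita incomes (USD/year), for illustration only.
poor, rich = 2_000, 60_000

# (1) Under log utility, the same absolute dollar gain is worth far more
# to the poorer person, because it is a much larger share of their income.
gain = 1_000
poor_log_gain = math.log(poor + gain) - math.log(poor)   # a +50% income change
rich_log_gain = math.log(rich + gain) - math.log(rich)   # a +1.7% income change
print(poor_log_gain / rich_log_gain)   # roughly 25x

# (2) The same *proportional* gain (e.g., a 5% growth episode) produces an
# identical log-utility gain at any income level, even though the dollar
# gains differ 30x between these two incomes.
growth = 0.05
poor_prop = math.log(poor * (1 + growth)) - math.log(poor)
rich_prop = math.log(rich * (1 + growth)) - math.log(rich)
print(rich_prop / poor_prop)   # ≈ 1.0
```

The second ratio is essentially the point about rich-country bias: a growth episode of a given percentage scores identically in log terms wherever it happens, so a log($) unit stops the comparison from automatically favoring richer countries, while a raw-$ unit scales it up by their income level.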
One part of the case for a focus on GDP that I think might be right, but that I’m uncertain about and would be interested in seeing quantified more, is that growth itself causes other benefits (like health, education, etc.) that should be counted separately from the more direct economic/subjective wellbeing benefits of growth. That seems like an obvious way that a log(GDP) utility function could be understating the value of growth. My intuition is that it would be surprising if more of the humanitarian impact (according to my values) of growth ran through second-order impacts on health than through the direct impact on income/SWB, but I’m not sure how the causal magnitudes would pencil out. I think Carl Shulman’s old posts on log income and the right proxies for measuring long-run flow-through effects are interesting on this.
I totally grant that GDP is important and tightly correlated with lots of other good things, but I think using it as your comparison unit biases the calculation towards growth, since the randomistas are explicitly not aiming to increase growth, while both groups are aiming to increase welfare all things considered.
Overall, I do find it likely that some form of policy intervention (maybe focused on growth, maybe not) will pencil out better than the current GW top charities, but I think measuring impacts in terms of raw GDP is likely to be more distracting than beneficial on that path.
If people are interested in taking up the mantle here and pushing these arguments further, I would be interested to see more of a focus on (a) detailed historical cases where outside efforts to improve policy to accelerate growth have worked—I think it’s unfortunate that we don’t have more concrete evidence on the amount of funding or role of the Indian think tank Lant cites; and (b) concrete proposed interventions/things to do. I agree that the “there’s nothing to do” argument is not a show-stopper, but I do think a real weakness of this overall direction of argument at the moment is the lack of an answer to that question. And in general, I expect an inverse relationship between “clarity/convincingness of policy impact/benefit” and “tractability/policy feasibility” (it seems to me that the clearest growth prescription is more immigration to rich countries, and ~equally clear that we shouldn’t expect major policy changes there), so I think getting the argument down into the weeds of what has worked in the past and where opportunities exist now might be more productive.
FWIW, I agree with the comment from @cole_haus speculating that part of the reason these arguments haven’t gotten traction in EA is that it seems most people who are willing to bite the “high uncertainty, high upside” bullet tend to go further, towards animals or the far future, rather than stop at “advocate for policies to promote growth.”
Again, just want to reiterate that I think this is an interesting and worthwhile question and that I’m sympathetic to the case that EAs should focus more on policy interventions writ large.
(Also, not sure I got formatting right, so let me know if links don’t work and I can try to edit—thanks!)
Thanks for these comments, Alex. I agree that it would be best to look at how growth translates into subjective wellbeing, and I am planning to do this or to get someone else to do it soon. However, I’m not sure that this defeats our main claim, which is that research on and advocacy for growth are likely to be better than GW top charities. There are a few arguments for this.
(1) GW estimates that deworming is the best way to improve economic outcomes for the extreme poor, in expectation. This seems to me very unlikely to be true since deworming explains almost none of the variance in economic outcomes across the world today, and research on and advocacy for growth looks a much better bet unless you endorse extreme scepticism about growth economics, which no EA has yet argued for. On the welfare metrics endorsed by GiveWell’s staff, deworming is roughly as good as their top charities. It is therefore very unlikely that GW’s top charities are better than research and advocacy for growth.
(2) The cost-effectiveness argument. Many of the huge growth episodes analysed by Lant occurred in countries that were extremely poor before those growth episodes. Looking to the past, it seems unreasonable to deny that funding research on and advocacy for growth is better than the best that one could do with a randomista intervention. The Chinese experience alone seems to me to clearly make this case. Looking to the future, our conjecture is that a 4-person-year research effort will show that research and advocacy targeted at LMICs is better than the best GW charities. This takes account of the diminishing marginal utility of money. The case for this claim is unproven, but I think our argument provides strong support for it.
On the ‘risk-lovers would work on animals/long-termism’ point, I don’t think I agree. To me it seems that people work on these causes because of ethical assumptions about the moral weight of animals and future beings, rather than because of attitudes to risk.
I agree that getting into the weeds is important for our predictive conjecture: the aim of our piece was precisely to motivate getting into these weeds. Moreover, someone needed to make these general arguments at some point as they had been around for many years without response.