The lives vs. life-years distinction shouldn’t change our answer much. I would also not choose extending the lives of 30 dogs by 1 year over extending a human life by 1 year, and honestly the 1⁄100 conversion rate I mentioned is too high for me as well; I just used it as an example of how the comparison changes with a different conversion rate.
This seems to fall under the general confusion and difficulty of evaluating wild animal suffering, and I don’t envy anyone who has to do that.
Got it, I think I misunderstood that point the first time. Yes, I am convinced that this is an issue that is worth choosing log over isoelastic for.
Yes, I agree with the first-order consequence of focusing more on saving lives. The purpose of this exercise is just to compare different approaches that only increase income, and I was suggesting that a high set point is sufficient to keep that from spilling over into unappealing implications for saving lives. It is true that a very high set point is inconsistent with revealed-preference VSLs, though. I don’t have a good way to resolve that. I have an intuition that low VSLs are a problem and we shouldn’t respect them, but it’s not one I can defend, so I think you’re right on this.
I’m on board with the idea of averaging over scenarios à la Weitzman. My original thinking was that a normalizing constant would shrink the scale of differences between the scenarios and thus reduce the effect of outlier etas. But I was confusing two different concepts—a high normalizing constant would reduce the % difference between them, but not the absolute difference, which is the quantity that matters for expected value.
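To make that concrete, here is a minimal sketch with hypothetical utilities (none of these numbers are from the actual model):

```python
# Adding a large normalizing constant C to every scenario shrinks
# percentage differences but leaves absolute differences, and hence
# expected-value comparisons, unchanged.
u_low_eta, u_high_eta = 40.0, 10.0   # hypothetical utilities under two etas
C = 1000.0                           # large normalizing constant

abs_diff = u_low_eta - u_high_eta                       # 30.0
abs_diff_shifted = (u_low_eta + C) - (u_high_eta + C)   # still 30.0
pct_diff = abs_diff / u_high_eta                        # 3.0, i.e. 300%
pct_diff_shifted = abs_diff_shifted / (u_high_eta + C)  # ~0.03, i.e. 3%

print(abs_diff, abs_diff_shifted, pct_diff, pct_diff_shifted)
```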
You… are absolutely right. That’s a very good catch. I think your calculation is correct, as the utility translation only happens twice—utility from productivity growth, which I adjusted, and utility from cash transfers, which I did not. Everything else is unchanged from the original framework.
You’re definitely right that it matters whether this is the global average/median/poverty level. I think the issue stems from using productivity A as the input to the utility function rather than income. This is not an issue for log utility if income is directly proportional to A, since the proportionality constant cancels out, but it is probably better to redo this with income statistics/income growth and see how that changes things.
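As a one-line check of that cancellation, assuming income is directly proportional to productivity, $y = kA$:

$$\log(kA_2) - \log(kA_1) = \log\!\left(\frac{A_2}{A_1}\right)$$

so the constant $k$ drops out of any change in log utility, whereas under isoelastic utility with $\eta \neq 1$ the change in utility scales by $k^{1-\eta}$, and the substitution is no longer harmless.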
I’ll make a note about this at the top of the post and update it with a more substantive change to the conclusion when I’ve dug into it further.
Great post. You should submit it for a cause exploration prize for a small chance at Open Phil being convinced by this!
You should know that economics PhDs do not require a master’s in general, and definitely not for people who did their undergrad in the US (which I’m assuming you did if you are considering the US military!). They are also (usually) fully funded and pay a liveable stipend.
Moreover, it is most common for people to do a two-year predoctoral research position before applying to PhDs. Those jobs pay $50-60k a year, so they also let you save a bit or pay down some debt before you start grad school. So an economics PhD should not cost you anything.
If you want to talk more about this I’m happy to chat if you DM me.
Great post. This is really enlightening and I want to see more like it.
I think the bigger problem is that a large fraction of the people who EtG was targeted at were people who were risk-averse about their career path. EA motivating people to make crazy career shifts is a very new phenomenon, and “to do the most good, you should start a company with a 90% chance of failure” would not have been a winning message imo.
Thanks for the points; I should have done more due diligence into the arguments for each framework. That said, I don’t see these as fatal flaws:
I don’t know if I see this as a problem. I think it’s good for considerations about policy with international spillovers to be dominated by their effect on low-income countries. For example, I think that the welfare effects of US tariffs should be primarily judged by their impact on exporters in low-income countries, and that economic growth in the US is valuable primarily because of spillovers to the rest of the world. Insofar as log utility brackets this effect away, it doesn’t seem like the right reasoning process.
* Even if you’re uncomfortable with that philosophical commitment, you can still use high etas to evaluate policies that focus on low-income countries, such as growth advocacy. That is considerably narrower than I would like, because I think we should make that philosophical commitment, but it’s still a useful set of scenarios.
Sharpening the tradeoff between life and income is a much bigger problem for me, as I agree that it would be unattractive to place a low value on life. But I don’t think that high etas intrinsically imply a low total welfare; utility functions are not pinned to any particular scale. We can introduce a large constant for the baseline welfare of being alive, as is done in this framework, which has a subsistence welfare s. A high value of s would increase the value of life relative to income while still preserving the intuition that each doubling of income is worth less than the last. That s would also be irrelevant for monetary comparisons, since it cancels out when looking at the change in utility. Moreover, I think it should be possible to estimate s from IDinsight’s work on beneficiary preferences, which keeps the approach tractable.
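To illustrate, here is a minimal sketch in my own notation with hypothetical parameter values (not the linked framework’s exact functional form):

```python
# Utility of living at consumption c, with a baseline-welfare constant s;
# death/nonexistence is normalized to 0. All numbers here are hypothetical.
eta = 1.5   # inequality-aversion parameter (eta = 1 would be the log case)
s = 5.0     # baseline welfare of being alive

def u(c):
    return s + (c ** (1 - eta) - 1) / (1 - eta)

# s cancels out of any income comparison...
delta_income = u(2.0) - u(1.0)      # same value for any s
# ...but it scales the value of a life-year relative to nonexistence (0),
# so a high s raises the value of saving lives without changing how
# income doublings are valued.
value_of_life_year = u(1.0) - 0.0   # increases one-for-one with s
print(delta_income, value_of_life_year)
```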
I have to admit that I did not scrutinize the studies, and I am very open to them being flawed. But I think almost everyone would agree that a 10% income increase is worth much more to a poor person than to a rich person. The median economist in the Dropp survey might disagree, but I don’t place much weight on a survey of economists, who are a) very attached to log utility as a tractable model, and thus incentivized to justify it post hoc by saying eta = 1, and b) not the arbiters of people’s utility functions.
I don’t think that aggregating over implied welfare levels is necessarily the right approach either, since isoelastic utility functions with reasonable values of η are inherently smaller in magnitude than log utility functions. If we arbitrarily squared all welfare levels before averaging them (preferences are invariant to strictly increasing transformations of utility), we would place a lot more weight on low η, even though nothing has intrinsically changed. More generally, the fact that isoelastic utilities give small numbers is not morally meaningful, because it can be changed with a normalizing constant.
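A minimal sketch of that point with hypothetical numbers (squaring is strictly increasing on the positive values used here):

```python
import math

# The same income evaluated under log (eta = 1) and isoelastic (eta = 2)
# utility, before and after an arbitrary strictly increasing transformation
# (squaring). The transformation changes how much each eta contributes to
# an average, even though it represents the same preferences.
c = 10.0                               # income, in arbitrary units
u_log = math.log(c)                    # eta = 1: ~2.30
u_iso = (c ** (1 - 2) - 1) / (1 - 2)   # eta = 2: 1 - 1/c = 0.90

ratio_before = u_log / u_iso           # ~2.6
ratio_after = u_log ** 2 / u_iso ** 2  # ~6.5: low eta now dominates more
print(ratio_before, ratio_after)
```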
The simplicity of η=1 may be useful when we don’t know the whole income distribution, but an exercise like this one, done where we do know the distribution, can produce discount factors that we can reuse even when we don’t. So I don’t think that higher η sacrifices much tractability.
Tl;dr: I think most of the features of high η that you identify can be addressed by a high baseline-welfare component of the utility function, and the others are not problems.
I missed MEER while looking for nonprofits, but it looks very exciting! I would love to see an RCT evaluation of their interventions. I’ll reach out to the folks there to ask them for more details about it.
Good point! That’s definitely an oversight. I can’t find any more specifics about the adaptation financing, except the sector breakdown in Figure 1.9: half of it went to water/sanitation and agriculture/forestry/fishing. I’ll try to dig into their data sources to see what concrete programs they are going to, and whether those are impactful.
I don’t think the answers are illuminating if the question is “conditional on AGI happening, would it be good or bad”—that doesn’t yield very meaningful answers from people who believe that AGI in the agentic sense is vanishingly unlikely. Or rather, it is a meaningful question, but for those people AGI occurs with near-zero probability, so even if it were very bad it might not be a priority.
That’s fascinating! But I don’t know if that is the same notion of AGI and AI risk that we talk about in EA. It’s very possible to believe that AI will automate most jobs and still not believe that AI will become agentic/misaligned. That’s the notion of AGI that I was referring to.
For what it’s worth, the Open Phil framework (with R&D discount factors removed) looks at the effect of global growth, not growth in rich countries. That should attenuate the gap between their results and the results of modelling this just in LMICs. And I don’t know how big a “big difference” is, but taking it from my final estimate of 12X to 1000X would require growth promotion in LMICs to be over 80 times more cost-effective than global growth promotion, which seems like a lot.
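Spelling out the arithmetic behind “over 80 times”, assuming the cost-effectiveness multipliers compare directly:

$$\frac{1000\text{X}}{12\text{X}} \approx 83$$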
It’s fair to say that I’m not rigorously comparing these two approaches. What I am doing is showing that one has a 90% lower value than estimated, while the other is unaffected. In general, that should lead you to update in favor of targeted interventions—hence saying that they look better. The strength of that update may not be enough to overcome your prior. But I’m not litigating the entire growth vs RD debate here; the argument is just “inequality is a big problem for growth”.
Thanks! I hadn’t thought about it and frankly don’t know if this is substantive criticism/red teaming, but I’ll think about it.
I’m not really interested in dismissing growth as a cause area. (I am annoyed at how rarely EAs spell out the mechanism beyond “advocate for policies --> ??? --> growth”, but I’m going to write that up soon!) I wrote this because I think people who advocate for growth largely ignore inequality and should discount growth heavily because of it. If growth still beats targeted interventions after that heavy discounting, then so be it.
I loved it. Really interesting piece and I felt I learnt a lot from it even as a highly engaged EA.
In RCTs we generally worry about “spillovers”, i.e. the control group being affected by the treatment. Usually this runs in the opposite direction: for example, in an RCT of cash transfers, we might worry that control households will benefit from the spending of treatment households. This violates one of the core assumptions of RCTs and means that we can’t estimate the true treatment effect.
But I have not seen the opposite effect (the control group suffering from the treatment group’s advantages), and I don’t think development economists think about it much. Usually this is not an issue, because experiments should be designed to minimize spillovers of any kind, positive or negative—for example, by randomizing at the village level so that treatment and control villages have essentially separate economies.
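To illustrate the usual worry, a minimal simulation with made-up numbers: positive spillovers to control households attenuate the estimated treatment effect.

```python
import random

# Hypothetical cash-transfer RCT: treated households gain the true effect,
# but some benefit also leaks to control households, so the treatment-
# control comparison understates the true effect.
random.seed(0)
n = 10_000
true_effect = 1.0
spillover = 0.3   # hypothetical benefit leaking to controls

baseline = [random.gauss(0, 1) for _ in range(2 * n)]
treated = [b + true_effect for b in baseline[:n]]
control = [b + spillover for b in baseline[n:]]   # contaminated controls

estimate = sum(treated) / n - sum(control) / n    # ~0.7 instead of 1.0
print(estimate)
```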
Pritchett’s argument is about the correlation between average income and poverty rates. My argument is about the welfare that people experience from any given level of growth. I’m claiming that conventional evaluations of growth overestimate its value because they weight the income growth of middle-income and rich people too heavily. Once you adjust for that, the population welfare from economic growth is driven mostly by increases in the incomes of poor people, and it is much lower than before (90% lower).
If you wanted to value growth solely based on its ability to reduce poverty, an isoelastic utility function does that as well. In the spreadsheet calculations I did, the isoelastic utility penalizes inequality less (24% vs 36%), because the bottom 50%’s income growth of 50% is almost as good on its own as the whole population’s income growing 90%.
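For concreteness, here is one way such an inequality penalty could be computed. This is my reconstruction with placeholder numbers, not the spreadsheet’s actual data or definitions, so it will not reproduce the 24% and 36% figures exactly:

```python
import math

# Penalty = share of the welfare gain lost because growth accrued unevenly,
# relative to the same average growth spread evenly across the distribution.
def u(c, eta):
    return math.log(c) if eta == 1 else (c ** (1 - eta) - 1) / (1 - eta)

def penalty(before, after, weights, eta):
    mean = lambda xs: sum(w * x for w, x in zip(weights, xs))
    g = mean(after) / mean(before)   # average growth factor
    gain_actual = mean([u(a, eta) for a in after]) - mean([u(b, eta) for b in before])
    gain_neutral = mean([u(b * g, eta) for b in before]) - mean([u(b, eta) for b in before])
    return 1 - gain_actual / gain_neutral

# Placeholder economy: bottom 50% and middle 49% each grow 50%, while the
# top 1% captures the rest of a 90% rise in average income.
weights = [0.50, 0.49, 0.01]
before = [1.0, 5.0, 50.0]
after = [1.5, 7.5, 213.0]
for eta in (1, 2):
    print(eta, round(penalty(before, after, weights, eta), 3))
```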
Separately, I don’t interpret Pritchett’s observation as meaning “and therefore the best way to minimize poverty is to maximize median consumption”. That doesn’t follow at all from a cross-country correlation. For one thing, correlation is not causation and this correlation does not prove that increasing median consumption will decrease poverty. For another thing, we have to consider the costs as well: increasing median consumption through growth could be much more expensive than giving all that money to poor people directly.
This is fascinating. It strikes me that you can’t avoid the population ethics question here. It’s not obvious to me that the lives of stray dogs are net negative, so focusing just on the WALYs lost from pup mortality ignores the WALYs gained from pups being born and living at all. If stray dogs have net-positive lives, then the WALY costs from pup mortality actually become negative (compared to sterilizing the dogs), and the conclusion changes dramatically.
> FRD dogs often lead lives filled with neglect, abuse and suffering, with the only solace being that they often die young.
I don’t want to generalize from my experience, but since you appealed to the experience of anyone who has lived in India, I will say that this is not my experience! Stray dogs in my neighborhood basically coexisted with humans who mostly ignored them, and to my knowledge they never bit anyone.
Analytically, the relationship between stray dogs and humans seems highly path-dependent. If there was one incident in the past of a dog biting a person, that could spiral into humans antagonizing the dogs and dogs antagonizing the humans. But if there hasn’t been such an incident, stray dogs are basically a benign feature of the environment that you can occasionally pet, and they won’t bite because they get fed. It also surely depends on whether the dogs are regularly fed meat (which anecdotally makes them more aggressive). I would love to see more systematic evidence on stray dogs’ living conditions, or anything you know of that suggests a widespread experience of suffering.
Also, I care a lot about animal welfare and I love dogs, but a 1⁄30 conversion rate between WALYs and DALYs seems insanely high to me. I would save one human over 30 dogs, and I suspect almost everyone else would as well. Given that most of the DALYs in your estimate come from WALYs, a more conservative conversion rate like 1⁄100 would bring the importance of this cause area down a lot, from 8 million DALYs to 3.3 million DALYs.
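Back-solving from those two totals (my arithmetic, holding the non-WALY DALYs $D$ fixed):

$$D + \frac{W}{30} = 8\text{M}, \qquad D + \frac{W}{100} = 3.3\text{M} \;\Rightarrow\; W \approx 201\text{M WALYs}, \quad D \approx 1.3\text{M},$$

so roughly 6.7 of the original 8 million DALYs come from the WALY component, which is consistent with most of the DALYs coming from WALYs.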