As one data point: I’m very positive about CES, and think they’re one of the best marginal uses of funding right now. (Note that Aaron didn’t ask me to write this.)
(Ties: I’ve recommended a grant to CES from Open Phil before, and a further grant is under consideration at OP right now; even given this possible grant, CES would still need further funding in the coming years.)
I second Julia in her apology. In hindsight, once I’d seen that you didn’t want the post shared I should have simply ignored it, and ensured you knew that it had been accidentally shared with me.
When it was shared with me, the damage had already been done, so I thought it made sense to start prepping a response. I didn’t think your post would change significantly, and at the time I thought it would be good for me to start going through your critique to see if there were indeed grave mistakes in DGB, and offer a speedy response for a more fruitful discussion. I’m sorry that I therefore misrepresented you. As you know, the draft you sent to Julia was quite a bit more hostile than the published version; I can only say that as a result of this I felt under attack, and that clouded my judgment.
I agree with all the points you make here, including on the suggested upvote/downvote distribution, and on the nature of DGB. FWIW, my (current, defeasible) plan for any future trade books I write is that they’d be more highbrow (and more caveated, and therefore drier) than DGB.
I think that’s the right approach for me, at the moment. But presumably at some point the best thing to do (for some people) will be wider advocacy (wider than DGB), which will inevitably involve simplification of ideas. So we’ll have to figure out what epistemic standards are appropriate in that context (given that GiveWell-level detail is off the table).
Some preliminary thoughts on heuristics for this (these are suggestions only):
Standards we’d want to keep as high as ever:
Is the broad-brush picture of what is being conveyed accurate? Is there any easy way it could have been made more accurate?
Are the sentences being used to support this broad-brush picture warranted by the evidence?
Is this the way of communicating the core message about as caveated and detailed as one can reasonably manage?
Standards we’d need to relax:
Does this communicate as much detail as possible with respect to the relevant claims?
Does this communicate all the strongest possible counterarguments to the key claim?
Does this include every reasonable caveat?
I think that a blogpost that does very well with respect to the above, without compromising on the clarity of the core message, is Max Roser’s recent post: ‘The world is much better; The world is awful; The world can be much better’.
I appreciate that you’ve taken the time to consider what I’ve said in the book at such length. However, I do think that there’s quite a lot that’s wrong in your post, and I’ll describe some of that below. Though I think you have noticed a couple of mistakes in the book, I think that most of the alleged errors are not errors.
I’ll just focus on what I take to be the main issues you highlight, and I won’t address the ‘dishonesty’ allegations, as I anticipate it wouldn’t be productive to do so; I’ll leave that charge for others to assess.
Of the main issues you refer to, I think you’ve identified two mistakes in the book: I left out a caveat in my summary of the Baird et al (2016) paper, and I conflated overheads costs and CEO pay in a way that, on the latter aspect, was unfair to Charity Navigator.
In neither case are these errors egregious in the way you suggest. I think that: (i) claiming that the Baird et al (2016) study should cause us to believe that there is ‘no effect’ on wages is a misrepresentation of that paper; (ii) my core argument against Charity Navigator, regarding their focus on ‘financial efficiency’ metrics like overhead costs, is both successful and accurately depicts Charity Navigator.
I don’t think that the rest of the alleged major errors are errors. In particular: (i) GiveWell were able to review the manuscript before publication and were happy with how I presented their research; the quotes you give generally conflate how to think about GiveWell’s estimates with how to think about DCP2’s estimates; (ii) There are many lines of evidence supporting the 100x multiplier, and I don’t rely at all on the DCP2 estimates, as you imply.
(Also, caveating up front: for reasons of time limitations, I’m going to have to precommit to this being my last comment on this thread.)
(Also, Alexey’s post keeps changing, so if it looks like I’m responding to something that’s no longer there, that’s why.)
Since the book came out, there has been much more debate about the efficacy of deworming. As I’ve continued to learn about the state and quality of the empirical evidence around deworming, I’ve become less happy with my presentation of the evidence around deworming in Doing Good Better; this fact has been reflected on the errata page on my website for the last two years. On your particular points, however:
Deworming vs textbooks
If textbooks have a positive effect, it’s via how much children learn in school, rather than via incentivising them to spend more time in school. So the fact that there doesn’t seem to be good evidence for textbooks increasing test scores is pretty bad.
If deworming has a positive effect, it could be via a number of mechanisms, including increased school attendance or via learning more in school, or direct health impacts, etc. If there are big gains on any of these dimensions, then deworming looks promising. I agree that more days in school certainly aren’t good in themselves, however, so the better evidence is about the long-run effects.
Deworming’s long-run effects
Here’s how GiveWell describes the study on which I base my discussion of the long-run effects of deworming:
“10-year follow-up: Baird et al. 2016 compared the first two groups of schools to receive deworming (as treatment group) to the final group (as control); the treatment group was assigned 2.41 extra years of deworming on average. The study’s headline effect is that as adults, those in the treatment group worked and earned substantially more, with increased earnings driven largely by a shift into the manufacturing sector.” Then, later: “We have done a variety of analyses to assess the robustness of the core findings from Baird et al. 2016, including reanalyzing the data and code underlying the study, and the results have held up to our scrutiny.”
You are correct that my description of the findings of the Baird et al paper was not fully accurate. When I wrote, “Moreover, when Kremer’s colleagues followed up with the children ten years later, those who had been dewormed were working an extra 3.4 hours per week and earning an extra 20 percent of income compared to those who had not been dewormed,” I should have included the caveat “among non-students with wage employment.” I’m sorry about that, and I’m updating my errata page to reflect this.
As for how much we should update on the basis of the Baird et al paper — that’s a really big discussion, and I’m not going to be able to add anything above what GiveWell have already written (here, here and here). I’ll just note that:
(i) Your gloss on the paper seems misleading to me. If you include people with zero earnings, of course it’s going to be harder to get a statistically significant effect. And the data from those who do have an income but who aren’t in wage employment are noisier, so it’s harder to get a statistically significant effect there too. In particular, see here from the 2015 version of the paper: “The data on [non-agricultural] self-employment profits are likely measured with somewhat more noise. Monthly profits are 22% larger in the treatment group, but the difference is not significant (Table 4, Panel C), in part due to large standard errors created by a few male outliers reporting extremely high profits. In a version of the profit data that trims the top 5% of observations, the difference is 28% (P < 0.10).”
(ii) GiveWell finds the Baird et al paper to be an important part of the evidence behind their support of deworming. If you disagree with that, then you’re engaged in a substantive disagreement with GiveWell’s views; it seems wrong to me to class that as a simple misrepresentation.
2. Cost-effectiveness estimates
Given the previous debate that had occurred between us on how to think and talk about cost-effectiveness estimates, and the mistakes I had made in this regard, I wanted to be sure that I was presenting these estimates in a way that those at GiveWell would be happy with. So I asked an employee of GiveWell to look over the relevant parts of the manuscript of DGB before it was published; in the end five employees did so, and they were happy with how I presented GiveWell’s views and research.
How can that fact be reconciled with the quotes you give in your blog post? It’s because, in your discussion, you conflate two quite different issues: (i) how to represent the cost-effectiveness estimates provided by DCP2, or by single studies; (ii) how to represent the (in my view much more rigorous) cost-effectiveness estimates provided by GiveWell. Almost all the quotes from Holden that you give are about (i). But the quotes you criticise me for are about (ii). So, for example, when I say ‘these estimates’ are order-of-magnitude estimates, that’s referring to (i), not to (ii).
There’s a really big difference between (i) and (ii). I acknowledge that back in 2010 I was badly wrong about the reliability of DCP2 and individual studies, and that GWWC was far too slow to update its web pages after the unreliability of these estimates came to light. But the level of time, care and rigour that have gone into the GiveWell estimates are much greater than those that have gone into the DCP2 estimates. It’s still the case that there’s a huge amount of uncertainty surrounding the GiveWell estimates, but describing them as “the most rigorous estimates” we have seems reasonable to me.
More broadly: Do I really think that you do as much good or more in expectation from donating $3500 to AMF as saving a child’s life? Yes. GiveWell’s estimate of the direct benefits might be optimistic or pessimistic (though it has stayed relatively stable over many years now — the median GiveWell estimate for ‘cost for outcome as good as averting the death of an individual under 5’ is currently $1932), but I really don’t have a view on which is more likely. And, what’s more important, the biggest consideration that’s missing from GiveWell’s analysis is the long-run effects of saving a life. While of course it’s a thorny issue, I personally find it plausible that the long-run expected benefits from a donation to AMF are considerably larger than the short-run benefits — you speed up economic progress just a little bit, in expectation making those in the future just a little bit better off than they would have otherwise been. Because the future is so vast in expectation, that effect is very large. (There’s *plenty* more to discuss on this issue of long-run effects — Might those effects be negative? How should you discount future consumption? etc — but that would take us too far afield.)
3. Charity Navigator
Let’s distinguish: (i) the use of overhead ratio as a metric in assessing charities; (ii) the use of CEO pay as a metric in assessing charities. Evaluating charities on overheads and evaluating them on CEO pay are often run together in public discussion, and both are wrong for similar reasons, so I bundled them together in my discussion.
Regarding (ii): CN-of-2014 did talk a lot about CEO pay: they featured CEO pay, in both absolute terms and as a proportion of expenditure, prominently on their charity evaluation pages (see, e.g. their page on Books for Africa), they had top-ten lists like, “10 highly-rated charities with low paid CEOs”, and “10 highly paid CEOs at low-rated charities” (and no lists of “10 highly-rated charities with high paid CEOs” or “10 low-rated charities with low paid CEOs”). However, it is true that CEO pay was not a part of CN’s rating system. And, rereading the relevant passages of DGB, I can see how the reader would have come away with the wrong impression on that score. So I’m sorry about that. (Perhaps I was subconsciously still ornery from their spectacularly hostile hit piece on EA that came out while I was writing DGB, and was therefore less careful than I should have been.) I’ve updated my errata page to make that clear.
Regarding (i): CN’s two key metrics for charities are (a) financial health and (b) accountability and transparency. (a) is in very significant part about the charities’ overheads ratios (in several different forms), where they give a charity a higher score the lower its overheads are, breaking the scores into five broad buckets: see here for more detail. The doughnuts for police officers example shows that a really bad charity could score extremely highly on CN’s metrics, which shows that CN’s metrics must be wrong. Similarly for Books for Africa, which gets a near-perfect score from CN, and features in its ‘ten top-notch charities’ list, in significant part because of its very low overheads, despite having no good evidence to support its program.
I represent CN fairly, and make a fair criticism of its approach to assessing charities. In the extended quote you give, they caveat that very low overheads are not make-or-break for a charity. But, on their charity rating methodology, all other things being equal they give a charity a higher score the lower the charity’s overheads. If that scoring method is a bad one, which it is, then my criticism is justified.
4. Life satisfaction and income and the hundredfold multiplier
The hundredfold multiplier
You make two objections to my 100x multiplier claim: that the DCP2 deworming estimate was off by 100x, and that the Stevenson and Wolfers paper does not support it.
But there are very many lines of evidence in favour of the 100x multiplier, which I reference in Doing Good Better. I mention that there are many independent justifications for thinking that there is a logarithmic (or even more concave) relationship between income and happiness on p.25, and in the endnotes on p.261-2 (all references are to the British paperback edition—yellow cover). In addition to the Stevenson and Wolfers lifetime satisfaction approach (which I discuss later), here are some reasons for thinking that the hundredfold multiplier obtains:
The experience sampling method of assessing happiness. I mention this in the endnote on p.262, pointing out that my argument would be stronger on this method, because the relationship between income and wellbeing is more concave than logarithmic, and is in fact bounded above.
Imputed utility functions from the market behaviour of private individuals and the actions of government. It’s absolutely mainstream economic thought that utility varies with log of income (that is, eta=1 in an isoelastic utility function) or something more concave (eta>1). I reference a paper that takes this approach on p.261, Groom and Maddison (2013); they estimate eta to be 1.5.
Estimates of cost to save a life. I discuss this in ch.2; I note that this is another strand of supporting evidence prior to my discussion of Stevenson and Wolfers on p.25: “It’s a basic rule of economics that money is less valuable to you the more you have of it. We should therefore expect $1 to provide a larger benefit for an extremely poor Indian farmer than it would for you or me. But how much larger? Economists have sought to answer this question through a variety of methods. We’ll look at some of these in the next chapter, but for now I’ll just discuss one [the Stevenson and Wolfers approach].” Again, you find a discrepancy of 100x or more in the cost to save a life between rich and poor countries.
Estimate of cost to provide one QALY. As with the previous bullet point.
Note, crucially, that the developing world estimates for cost to provide one QALY or cost to save a life come from GiveWell, not — as you imply — from DCP2 or any individual study.
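To make the utility-function point concrete, here’s a quick numerical sketch (mine, not from the book; the income figures are purely illustrative): under log utility, the marginal value of a dollar scales inversely with income, so a 100x income gap gives a 100x multiplier, and with eta = 1.5 it gives 1000x.

```python
def marginal_utility(income, eta):
    """Marginal utility of an extra dollar under an isoelastic utility
    function u(c) = c**(1 - eta) / (1 - eta), which reduces to log(c)
    when eta == 1. In all cases, marginal utility is u'(c) = c**(-eta)."""
    return income ** (-eta)

rich_income = 70_000   # illustrative rich-country annual income (USD)
poor_income = 700      # illustrative extreme-poverty annual consumption (USD)

# Log utility (eta = 1): the benefit of a marginal dollar scales inversely
# with income, so a 100x income gap yields a 100x multiplier.
ratio_log = marginal_utility(poor_income, 1) / marginal_utility(rich_income, 1)
print(ratio_log)       # 100.0

# eta = 1.5 (the Groom and Maddison estimate): the function is more
# concave, so the multiplier grows to 100**1.5 = 1000x.
ratio_15 = marginal_utility(poor_income, 1.5) / marginal_utility(rich_income, 1.5)
print(round(ratio_15)) # 1000
```

So the 100x figure is, if anything, conservative relative to the eta = 1.5 estimate.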
Is there a causal relationship from income to wellbeing?
It’s true that Stevenson and Wolfers only shows a correlation between income and wellbeing. But that there is a causal relationship, from income to wellbeing, is beyond doubt. It’s perfectly obvious that, over the scales we’re talking about, higher income enables you to have more wellbeing (you can buy analgesics, healthcare, shelter, eat more and better food, etc).
It’s true that we don’t know exactly the strength of the causal relationship. Understanding this could make my argument stronger or weaker. To illustrate, here’s a quote from another Stevenson and Wolfers paper, with the numerals in square brackets added in by me:
“Although our analysis provides a useful measurement of the bivariate relationship between income and well-being both within and between countries, there are good reasons to doubt that this corresponds to the causal effect of income on well-being. It seems plausible (perhaps even likely) that [i] the within-country well-being-income gradient may be biased upward by reverse causation, as happiness may well be a productive trait in some occupations, raising income. A different perspective, offered by Kahneman, et al. (2006), suggests that [ii] within-country comparisons overstate the true relationship between subjective well-being and income because of a “focusing illusion”: the very nature of asking about life satisfaction leads people to assess their life relative to others, and they thus focus on where they fall relative to others in regard to concrete measures such as income. Although these specific biases may have a more important impact on within-country comparisons, it seems likely that [iii] the bivariate well-being-GDP relationship may also reflect the influence of third factors, such as democracy, the quality of national laws or government, health, or even favorable weather conditions, and many of these factors raise both GDP per capita and well-being (Kenny, 1999). [iv] Other factors, such as increased savings, reduced leisure, or even increasingly materialist values may raise GDP per capita at the expense of subjective well-being. At this stage we cannot address these shortcomings in any detail, although, given our reassessment of the stylized facts, we would suggest an urgent need for research identifying these causal parameters.”
To the extent to which (i), (ii) or (iv) are true, the case for the 100x multiplier becomes stronger. To the extent to which (iii) is true, the case for the 100x multiplier becomes weaker. We don’t know, at the moment, which of these are the most important factors. But, given that the wide variety of different strands of evidence listed in the previous section all point in the same direction, I think that estimating a 100x multiplier as a causal matter is reasonable. (Final point: noting again that all these estimates do not factor in the long-run benefits of donations, which would shift the ratio of benefits to others versus benefits to yourself even further in the direction of benefits to others.)
On the Stevenson and Wolfers data, is the relationship between income and happiness weaker for poor countries than for rich countries?
If it were the case that money does less to buy happiness (for any given income level) in poor countries than in rich countries, then that would be one counterargument to mine.
However, it doesn’t seem to me that this is true of the Stevenson and Wolfers data. In particular, it’s highly cherry-picked to compare Nigeria and the USA as you do, because Nigeria is a clear outlier in terms of how flat the slope is. I’m only eyeballing the graph, but it seems to me that, of the poorest countries represented (PHL, BGD, EGY, CHN, IND, PAK, NGA, ZAF, IDN), only NGA and ZAF have flatter slopes than USA (and even for ZAF, that’s only true for incomes less than $6000 or so); all the rest have slopes that are similar to or steeper than that of USA (IND, PAK, BGD, CHN, EGY, IDN all seem steeper than USA to me). Given that Nigeria is such an outlier, I’m inclined not to give it too much weight. The average trend across countries, rich and poor, is pretty clear.
Re Etg buy-out—yes, you’re right. For people who think that CEA is a top donation target, hopefully we could just come to agreement as a trade wouldn’t be possible, or would be prohibitively costly (if there were only slight differences in our views on which places were best to fund).
Re local group activities: These are just examples of some of the things I’d be excited about local groups doing, and I know that at least some local groups are funding constrained (e.g. someone is running them part-time, unpaid, and will otherwise need to get a job).
Re AI safety fellowship at ASI—as I understand it, that is currently funding constrained (they had great applicants who wanted to take the fellowship but ASI couldn’t fund it). For other applications (e.g. Google Brain) it could involve, say, spending some amount of time during or after a physics or math PhD in order to learn some machine learning and be more competitive.
Re anthropogenic existential risks—ah, I had thought that it was only in presentation form. In which case: that paper is exactly the sort of thing I’d love to see more of.
It is a successor to EA Ventures, though EA Grants already has funding, and is more focused on individuals than start-up projects.
Yes, the money is raised; we have a pot of £500,000 in the first instance.
“However, $700/year (= $1.91/day, =€1.80/day, =£1.53 /day) (without gifts or handouts) is not a sufficient amount of money to be alive in the west. You would be homeless. You would starve to death. In many places, you would die of exposure in the winter without shelter.”
One could live on that amount of money per day in the West. You’d live in a second-hand tent, you’d scavenge food from bins (which would count towards your ‘expenditure’, because we’re talking about consumption expenditure, but wouldn’t count that much). Your life expectancy would be considerably lower than others in the West, but probably not lower than the 55 years which is the life expectancy in Burkina Faso (as an example comparison, bear in mind that number includes infant mortality). Your life would suck very badly, but you wouldn’t die, and it wouldn’t be that dissimilar to the lives of the millions of people who live in makeshift slums or shanty towns and scavenge from dumps to make a living. (Such people aren’t representative of all extremely poor people, but they are a notable fraction.)
“counts as an xrisk (and therefore as a GCR)”
My understanding: GCR = (something like) risk of major catastrophe that kills 100mn+ people
(I think the GCR book defines it as risk of 10mn+ deaths, but that seemed too low to me).
So, as I was using the term, something being an x-risk does not entail it being a GCR. I’d count ‘Humanity’s moral progress stagnates or we otherwise end up with the wrong values’ as an x-risk but not a GCR.
Interesting (/worrying!) how we’re understanding widely-used terms so differently.
Mea culpa that I switched from “impact on beings alive today” to “benefits over the next 50 years” without noticing.
That’s reasonable, though if the aim is just “benefits over the next 50 years” I think that campaigns against factory farming seem like the stronger comparison:
“We’ve estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent.”
“One could, of course, value chickens while valuing humans more. If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x) [So $30-ish per equivalent life saved].”
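For what it’s worth, the arithmetic in those quotes is easy to check. Here’s a quick sketch; the figures come from the quotes, except the 30-year length of a saved life, which is my own illustrative assumption used only to show how a “$30-ish per equivalent life” figure falls out at the 100x valuation:

```python
# Figures from the quoted corporate-campaign estimates.
hens_per_dollar = 200     # hens spared cage confinement per $1
years_per_hen = 2         # years of improved life per hen
improvement = 0.25        # each year is a 25%-improved life

# 200 * 2 * 0.25 = 100 hen-life-year equivalents per dollar,
# i.e. one hen-life-year for every $0.01 spent.
hen_life_years_per_dollar = hens_per_dollar * years_per_hen * improvement
print(hen_life_years_per_dollar)   # 100.0

human_to_hen_value = 100  # valuing humans 100x as much as hens
years_per_saved_life = 30 # my illustrative assumption, not from the quote

# Cost of 30 human-quality life-years, where one human-life-year
# costs as much as 100 hen-life-years ($1 at the rate above).
cost_per_equivalent_life = (human_to_hen_value * years_per_saved_life
                            / hen_life_years_per_dollar)
print(cost_per_equivalent_life)    # 30.0
```

On those assumptions the campaigns come out around $30 per equivalent life, roughly 100x better than a ~$3,000-per-life global health estimate, which matches the quoted 100-1,000x range.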
And to clarify my first comment, “unlikely to be optimal” = I think it’s a contender, but the base rate for “X is an optimal intervention” is really low.
Agree that GCRs are a within-our-lifetime problem. But in my view mitigating GCRs is unlikely to be the optimal donation target if you are only considering the impact on beings alive today. Do you know of any sources that make the opposite case?
And it’s framed as long-run future because we think that there are potentially lots of things that could have a huge positive on the value of the long-run future which aren’t GCRs—like humanity having the right values, for example.
In my previous post I wrote: “The existence of this would bring us into alignment with other societies, which usually have some document that describes the principles that the society stands for, and have some mechanism for ensuring that those who choose to represent themselves as part of that society abide by those principles.” I now think that’s an incorrect statement. EA, currently, is all of the following: an idea/movement, a community, and a small group of organisations. On the ‘movement’ understanding of EA, analogies of EA don’t have a community panel similar to what I suggested, and only some have ‘guiding principles’. (Though communities and organisations, or groups of organisations, often do.)
Julia created a list of potential analogies here:
The closest analogy to what we want to do is given by the open source community: many but not all of the organisations within the open source community created their own codes of conduct, many of them very similar to each other.
One thing to note, re diversification (which I do think is an important point in general) is that it’s easy to think of Open Phil as a single agent, rather than a collection of agents; and because Open Phil is a collective entity, there are gains from diversification even with the funds.
For example, there might be a grant that a program officer wants to make, but there’s internal disagreement about it, and the program officer doesn’t have time (given opportunity cost) to convince others at Open Phil why it’s a good idea. (This has been historically true for, say, the EA Giving Fund). Having a separate pool of money would allow them to fund things like that.
Thanks so much for this, Luke! If someone who dedicates half their working time to philanthropy, as you do, says “There is a limit to how much high quality due diligence one could do. It takes time to build relationships, analyse opportunities and monitor them”—that’s pretty useful information!
Thanks! That’s really helpful to know. The Funds are potentially solving a number of problems at once, and we know there’s some demand for each of these problems to be solved, but not how much demand, so comments like this are very useful.