Thanks! I also do favour a tapering-in system, if I had to guess now. And I think that surrogacy voting is pretty interesting, too.
Thanks! I hadn’t seen that before!
Cool! Could you send me a link to the study?
First of all I think using live political examples like this is not a great idea.
I don’t think that a blanket ban on live political examples is a great norm. There are definite risks of tribalism in doing so, but we also just have a lot more information with which to test our views (compared to, say, how age-weighted voting would have affected the French Revolution). If we’re worried about tribalism, we should just call tribalism out directly, rather than ban certain topics.
In this particular case, I found thinking about Brexit and the Scottish Independence Referendum helpful for testing my starting intuitions. In particular, it somewhat weakened my adherence to my starting assumption that voters’ political positions reflect rational self-interest—I don’t really see the age-related discrepancies in people’s votes on Brexit and Scottish Independence as being well explained by whether the position involves short-term benefits for long-term harms. (Rather than, say, by how much weight one puts on national sovereignty, which is a political view that might just go in and out of fashion.)
(I personally think that I’m better at picking policies at 30 than 20, and expect to be better still at 40.)
Again, see comments to Holly and Larks about where the median voting age ends up. I’m going to add that point as an edit into the main post.
Under your proposal the change happens when the next generation turns 18-37, but doesn’t seem to be lessened. For example, the Brexit inconsistency would have been between 20 years ago and today rather than between today and 20 years from now, but it would have been just as large.
This is a good point, and my post overstates the case on this. There is still an important difference, though, which is that if there’s a difference between the views of 60 year olds and 30 year olds, we can foresee there will be an intertemporal inconsistency and can choose to avoid it. Whereas if there’s a difference between the views of 30 year olds and 0 year olds we (presumably) don’t know about it and can’t do anything about it.
There’s another intertemporal inconsistency consideration: If we assume rational self-interest and risk-aversion (just in the sense of consumption having diminishing utility), we should expect that earlier on in life, people will prefer more redistributive policies (e.g. progressive tax and redistribution, social safety net for disabilities, weighing costs to prisoners of harsh penalties against benefits of lower crime rate). This is because they have uncertainty about how much they are going to earn, whether they are going to end up disabled, whether they’ll commit a crime. Whereas older people know how things have turned out for them, and face much less risk: those who are wealthier will no longer support redistributive policies; those who know they aren’t going to jail will prefer harsh on crime policies. The early age-weighting is therefore one way to hold people to the decisions they’d make ex ante. I think it’s up for debate how much that matters, but it’s appealing to me—I’m generally attracted to veil of ignorance arguments, and this makes political decision-making slightly more veil-of-ignorance-y.
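To make the diminishing-utility point concrete, here’s a minimal sketch (in Python, with entirely made-up incomes and a 50/50 income lottery—illustrative only, not a model of any real tax system):

```python
import math

# Hypothetical: before knowing how life turns out, a person faces a
# 50/50 chance of earning 20k or 100k. Utility is log(income), so
# consumption has diminishing marginal utility.
outcomes = [20_000, 100_000]

def expected_log_utility(transfer):
    # A redistribution scheme taxes the high earner `transfer`
    # and gives it to the low earner.
    adjusted = [outcomes[0] + transfer, outcomes[1] - transfer]
    return sum(math.log(c) for c in adjusted) / 2

# Ex ante (behind the veil), redistribution raises expected utility...
assert expected_log_utility(20_000) > expected_log_utility(0)
# ...but ex post, the realized high earner loses from it:
assert math.log(100_000 - 20_000) < math.log(100_000)
```

The asymmetry between the two assertions is the whole point: the policy everyone would choose before the uncertainty resolves is not the policy the winners would choose afterwards.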
I mentioned this in response to Larks too, but one thing to bear in mind is that even using the weighting scheme I suggested in the post—which seemingly strongly favors young people—the median voter (in the US) would move from age 55 to age 40. So, at least assuming the median voter theorem is approximately accurate in this context, the key epistocratic question is about 40-year-olds vs 55-year-olds.
And if I had to choose now, I would also prefer a tapering system, where vote-weight starts off lower, then increases, and then decreases again. A benefit of that system is that you could make the ‘voting age’ a gradual progression rather than an immediate jump. Perhaps 12yr olds get a very weak vote, which scales up until 25, then scales down after 35.
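To picture what such a taper could look like, here’s a sketch in Python—the breakpoints (12, 25, 35) come from the suggestion above, but the ramp shapes and floor value are purely illustrative, not a worked-out proposal:

```python
def vote_weight(age):
    """Hypothetical tapering vote weight: ramps in from age 12 to 25,
    holds at full weight until 35, then declines gradually to a floor."""
    if age < 12:
        return 0.0
    if age < 25:
        return (age - 12) / (25 - 12)          # linear ramp-in
    if age <= 35:
        return 1.0                              # peak weight
    return max(1.0 - 0.01 * (age - 35), 0.1)    # gentle taper, floored

# The ‘voting age’ becomes a gradual progression, not a jump:
assert vote_weight(11) == 0.0
assert 0.0 < vote_weight(18) < 1.0
assert vote_weight(30) == 1.0
assert vote_weight(60) < vote_weight(30)
```

A function like this makes the trade-offs explicit: the slopes and the floor encode exactly how much extra weight the 25-35 cohort gets relative to everyone else.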
I really like this proposal! And agree it’s radically more tractable than such a major change to voting systems.
Hi, thanks so much for doing this! This is really interesting.
Something I think wasn’t sufficiently clear from the post itself: even using the weighting scheme I suggested in the post, the median voter (in the US) would move from age 55 to age 40. (H/T Zach Groff for these numbers. Note this doesn’t account for incentive effects—younger people being more likely to go out to vote—which could lower the median age to a little under 40.) And under reasonable assumptions (with the most controversial being single-peaked preferences), the median voter is decisive. So it’s not like 20-year-olds are now deciding what happens. On the epistocratic question, then, we should be asking whether we think 40-year-olds will make better decisions than 55-year-olds; not whether 20-year-olds make better decisions than 60-year-olds. I’d need to dig into the studies a lot more to determine whether 40-year-olds discount more steeply than 55-year-olds.
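As a toy illustration of the mechanism (not Zach’s actual calculation—the age distribution and weighting function here are made up), here is how a weighted median can be computed:

```python
# Toy electorate: one voter per age from 18 to 90, and a hypothetical
# vote weight that declines linearly with age. Real US demographics,
# turnout, and the scheme proposed in the post would give different
# numbers; this only shows the direction of the effect.
ages = list(range(18, 91))

def weighted_median(ages, weight):
    total = sum(weight(a) for a in ages)
    cumulative = 0.0
    for a in sorted(ages):
        cumulative += weight(a)
        if cumulative >= total / 2:
            return a

unweighted = weighted_median(ages, lambda a: 1.0)
age_weighted = weighted_median(ages, lambda a: max(91 - a, 0))

# Weighting votes toward the young pulls the median voter's age down:
assert age_weighted < unweighted
```

Even under this aggressive linear down-weighting, the weighted median voter is middle-aged, not young—which is the point: the decisive voter shifts by a couple of decades, not to age 20.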
And then, I’ve only done a quick scan of the studies you link to, but I don’t think the discounting literature you’re pointing to is actually all that relevant, because the timescales they are looking at are so short: 90 days in one case; up to 6 months in another. Whereas the time horizons for the impact of political decisions, especially the most important ones, are on the order of years or decades—over such timescales, discounting due to risk of death becomes a much bigger factor than discounting due to impulsiveness / impatience.
Usually such a brief perusal of the literature would not give me a huge amount of confidence in the core claims; however in this case the conclusion should seem prima facie very plausible to anyone who has ever met a young boy.
Again, I think this depends on what timescales we’re talking about. Sure, it seems prima facie plausible that someone who is 21 is more likely to prefer $5 today to $10 in a month’s time than a 60 year old is. But (on the assumption of self-interest) I’d strongly wager that a 21 year old is more likely to prefer $100 in 40 years’ time over $10 in a month’s time than a 60 year old is, because the 21 year old is so much more likely to be around and be able to enjoy the benefits.
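The survival-probability point can be shown with some back-of-the-envelope arithmetic—the mortality rates below are rough stand-ins I’ve made up to be directionally plausible, not real life-table values:

```python
# Crude stand-in mortality model (illustrative only, not a real life
# table): low annual mortality before 60, rising steadily after.
def survival_prob(age, years):
    p = 1.0
    for y in range(years):
        current = age + y
        annual_mortality = 0.001 if current < 60 else 0.02 + 0.002 * (current - 60)
        p *= 1 - annual_mortality
    return p

# Probability of still being alive in 40 years to enjoy the $100:
young = survival_prob(21, 40)   # a 21-year-old, reaching age 61
old = survival_prob(60, 40)     # a 60-year-old, reaching age 100

# The self-interested value of the far-future payoff is scaled by
# survival probability, so it is worth far more to the 21-year-old:
assert young > 0.5 > old
```

So even if the 21-year-old is more impatient over 90-day horizons, survival-weighting alone can reverse the comparison over 40-year horizons.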
The altruism and age discussion is interesting, and I agree that if it were borne out it could form part of an epistocratic argument for the age-weighting going the other way around.
No I don’t think so. Moral realism vs anti-realism is orthogonal to whether one thinks we have a duty or merely an opportunity to be an effective altruist.
For example: a non-cognitivist would interpret my statement, ‘You have a duty to give 10% of your income to charity’ as an expression of the sentiment ‘Hooray to giving away 10% of your income to charity’ or ‘Boo to not-giving away 10% of your income to charity’. Alternatively, a subjectivist (who is sometimes classed as a moral realist, but of a ‘non-robust’ type) would interpret my statement, ‘You have a duty to give 10% of your income to charity’ as made true, in some sense, by the fact that I want you to give away 10% of your income to charity. Similarly a relativist could claim it’s true, but only relative to some standard of assessment.
I am talking about obligations in this Introduction (rather than ‘opportunities’). But I’m not claiming that effective altruism is, by definition, about obligations to do good. I’m arguing that we have an obligation to use at least a significant proportion of our resources to do as much good as we can—i.e. we have an obligation to be partial effective altruists.
1. Yes, it’s definitely taken seriously but it’s currently widely misunderstood—associated very closely with Peter Singer’s views.
2. I think that Larry himself is more sympathetic to what EA is doing after my and others’ conversations with him, or at least has a more nuanced view. But in terms of bystanders—yes, from my impressions at the lectures I think the audience came out more EA-sympathetic than when they went in. And especially at the graduate level there’s a lot of recent interest, driven primarily by GPI, and for that purpose it’s important to engage with critiques, especially if they are high-profile.
3. Honestly, not really. Outsiders usually have some straw man perception of EA, and so the critiques aren’t that helpful. The best critiques I’ve found have tended to come from insiders, but I’m hoping that will change as more unsympathetic academics better understand what EA is and isn’t claiming. I do find engaging with philosophers who have very different views of morality (e.g. that there’s just no such thing as ‘the good’) very helpful though.
As one data point: I’m very positive about CES, and think they’re one of the best marginal uses of funding right now. (Note that Aaron didn’t ask me to write this.)
(Ties: I’ve recommended a grant to CES from Open Phil before, and a further grant is under consideration at OP right now; even given this possible grant, CES would have need for further funding for the coming years.)
I second Julia in her apology. In hindsight, once I’d seen that you didn’t want the post shared I should have simply ignored it, and ensured you knew that it had been accidentally shared with me.
When it was shared with me, the damage had already been done, so I thought it made sense to start prepping a response. I didn’t think your post would change significantly, and at the time I thought it would be good for me to start going through your critique to see if there were indeed grave mistakes in DGB, and offer a speedy response for a more fruitful discussion. I’m sorry that I therefore misrepresented you. As you know, the draft you sent to Julia was quite a bit more hostile than the published version; I can only say that as a result of this I felt under attack, and that clouded my judgment.
I agree with all the points you make here, including on the suggested upvote/downvote distribution, and on the nature of DGB. FWIW, my (current, defeasible) plan for any future trade books I write is that they’d be more highbrow (and more caveated, and therefore drier) than DGB.
I think that’s the right approach for me, at the moment. But presumably at some point the best thing to do (for some people) will be wider advocacy (wider than DGB), which will inevitably involve simplification of ideas. So we’ll have to figure out what epistemic standards are appropriate in that context (given that GiveWell-level detail is off the table).
Some preliminary thoughts on heuristics for this (these are suggestions only):
Standards we’d want to keep as high as ever:
Is the broad brush strokes picture of what is being conveyed accurate? Is there any easy way the broad brush of what is conveyed could have been made more accurate?
Are the sentences being used to support this broad brush strokes picture warranted by the evidence?
Is this the way of communicating the core message about as caveated and detailed as one can reasonably manage?
Standards we’d need to relax:
Does this communicate as much detail as possible with respect to the relevant claims?
Does this communicate all the strongest possible counterarguments to the key claim?
Does this include every reasonable caveat?
I think that a blogpost that does very well with respect to the above, without compromising on the clarity of the core message, is Max Roser’s recent post: ‘The world is much better; The world is awful; The world can be much better’.
I appreciate that you’ve taken the time to consider what I’ve said in the book at such length. However, I do think that there’s quite a lot that’s wrong in your post, and I’ll describe some of that below. Though I think you have noticed a couple of mistakes in the book, I think that most of the alleged errors are not errors.
I’ll just focus on what I take to be the main issues you highlight, and I won’t address the ‘dishonesty’ allegations, as I anticipate it wouldn’t be productive to do so; I’ll leave that charge for others to assess.
Of the main issues you refer to, I think you’ve identified two mistakes in the book: I left out a caveat in my summary of the Baird et al (2016) paper, and I conflated overheads costs and CEO pay in a way that, on the latter aspect, was unfair to Charity Navigator.
In neither case are these errors egregious in the way you suggest. I think that: (i) claiming that the Baird et al (2016) paper should cause us to believe that there is ‘no effect’ on wages is a misrepresentation of that paper; (ii) my core argument against Charity Navigator, regarding their focus on ‘financial efficiency’ metrics like overhead costs, is both successful and accurately depicts Charity Navigator.
I don’t think that the rest of the alleged major errors are errors. In particular: (i) GiveWell were able to review the manuscript before publication and were happy with how I presented their research; the quotes you give generally conflate how to think about GiveWell’s estimates with how to think about DCP2’s estimates; (ii) There are many lines of evidence supporting the 100x multiplier, and I don’t rely at all on the DCP2 estimates, as you imply.
(Also, caveating up front: for reasons of time limitations, I’m going to have to precommit to this being my last comment on this thread.)
(Also, Alexey’s post keeps changing, so if it looks like I’m responding to something that’s no longer there, that’s why.)
Since the book came out, there has been much more debate about the efficacy of deworming. As I’ve continued to learn about the state and quality of the empirical evidence around deworming, I’ve become less happy with my presentation of the evidence around deworming in Doing Good Better; this fact has been reflected on the errata page on my website for the last two years. On your particular points, however:
Deworming vs textbooks
If textbooks have a positive effect, it’s via how much children learn in school, rather than via incentivizing them to spend more time in school. So the fact that there doesn’t seem to be good evidence for textbooks increasing test scores is pretty bad.
If deworming has a positive effect, it could be via a number of mechanisms, including increased school attendance or via learning more in school, or direct health impacts, etc. If there are big gains on any of these dimensions, then deworming looks promising. I agree that more days in school certainly aren’t good in themselves, however, so the better evidence is about the long-run effects.
Deworming’s long-run effects
Here’s how GiveWell describes the study on which I base my discussion of the long-run effects of deworming:
“10-year follow-up: Baird et al. 2016 compared the first two groups of schools to receive deworming (as treatment group) to the final group (as control); the treatment group was assigned 2.41 extra years of deworming on average. The study’s headline effect is that as adults, those in the treatment group worked and earned substantially more, with increased earnings driven largely by a shift into the manufacturing sector.” Then, later: “We have done a variety of analyses to assess the robustness of the core findings from Baird et al. 2016, including reanalyzing the data and code underlying the study, and the results have held up to our scrutiny.”
You are correct that my description of the findings of the Baird et al paper was not fully accurate. When I wrote, “Moreover, when Kremer’s colleagues followed up with the children ten years later, those who had been dewormed were working an extra 3.4 hours per week and earning an extra 20 percent of income compared to those who had not been dewormed,” I should have included the caveat “among non-students with wage employment.” I’m sorry about that, and I’m updating my errata page to reflect this.
As for how much we should update on the basis of the Baird et al paper — that’s a really big discussion, and I’m not going to be able to add anything above what GiveWell have already written (here, here and here). I’ll just note that:
(i) Your gloss on the paper seems misleading to me. If you include people with zero earnings, of course it’s going to be harder to get a statistically significant effect. And the data from those who do have an income but who aren’t in wage employment are noisier, so it’s harder to get a statistically significant effect there too. In particular, see here from the 2015 version of the paper: “The data on [non-agricultural] self-employment profits are likely measured with somewhat more noise. Monthly profits are 22% larger in the treatment group, but the difference is not significant (Table 4, Panel C), in part due to large standard errors created by a few male outliers reporting extremely high profits. In a version of the profit data that trims the top 5% of observations, the difference is 28% (P < 0.10).”
(ii) GiveWell finds the Baird et al paper to be an important part of the evidence behind their support of deworming. If you disagree with that, then you’re engaged in a substantive disagreement with GiveWell’s views; it seems wrong to me to class that as a simple misrepresentation.
2. Cost-effectiveness estimates
Given the previous debate that had occurred between us on how to think and talk about cost-effectiveness estimates, and the mistakes I had made in this regard, I wanted to be sure that I was presenting these estimates in a way that those at GiveWell would be happy with. So I asked an employee of GiveWell to look over the relevant parts of the manuscript of DGB before it was published; in the end five employees did so, and they were happy with how I presented GiveWell’s views and research.
How can that fact be reconciled with the quotes you give in your blog post? It’s because, in your discussion, you conflate two quite different issues: (i) how to represent the cost-effectiveness estimates provided by DCP2, or by single studies; (ii) how to represent the (in my view much more rigorous) cost-effectiveness estimates provided by GiveWell. Almost all the quotes from Holden that you give are about (i). But the quotes you criticise me for are about (ii). So, for example, when I say ‘these estimates’ are order of magnitude estimates that’s referring to (i), not to (ii).
There’s a really big difference between (i) and (ii). I acknowledge that back in 2010 I was badly wrong about the reliability of DCP2 and individual studies, and that GWWC was far too slow to update its web pages after the unreliability of these estimates came to light. But the level of time, care and rigour that has gone into the GiveWell estimates is much greater than that behind the DCP2 estimates. It’s still the case that there’s a huge amount of uncertainty surrounding the GiveWell estimates, but describing them as “the most rigorous estimates” we have seems reasonable to me.
More broadly: Do I really think that you do as much good or more in expectation from donating $3500 to AMF as saving a child’s life? Yes. GiveWell’s estimate of the direct benefits might be optimistic or pessimistic (though it has stayed relatively stable over many years now — the median GiveWell estimate for ‘cost for outcome as good as averting the death of an individual under 5’ is currently $1932), but I really don’t have a view on which is more likely. And, what’s more important, the biggest consideration that’s missing from GiveWell’s analysis is the long-run effects of saving a life. While of course it’s a thorny issue, I personally find it plausible that the long-run expected benefits from a donation to AMF are considerably larger than the short-run benefits — you speed up economic progress just a little bit, in expectation making those in the future just a little bit better off than they would have otherwise been. Because the future is so vast in expectation, that effect is very large. (There’s *plenty* more to discuss on this issue of long-run effects — Might those effects be negative? How should you discount future consumption? etc — but that would take us too far afield.)
3. Charity Navigator
Let’s distinguish: (i) the use of overhead ratio as a metric in assessing charities; (ii) the use of CEO pay as a metric in assessing charities. The ideas of evaluating charities on overheads and on the basis of CEO pay are often run together in public discussion, and both are wrong for similar reasons, so I bundled them together in my discussion.
Regarding (ii): CN-of-2014 did talk a lot about CEO pay: they featured CEO pay, in both absolute terms and as a proportion of expenditure, prominently on their charity evaluation pages (see, e.g. their page on Books for Africa), they had top-ten lists like, “10 highly-rated charities with low paid CEOs”, and “10 highly paid CEOs at low-rated charities” (and no lists of “10 highly-rated charities with high paid CEOs” or “10 low-rated charities with low paid CEOs”). However, it is true that CEO pay was not a part of CN’s rating system. And, rereading the relevant passages of DGB, I can see how the reader would have come away with the wrong impression on that score. So I’m sorry about that. (Perhaps I was subconsciously still ornery from their spectacularly hostile hit piece on EA that came out while I was writing DGB, and was therefore less careful than I should have been.) I’ve updated my errata page to make that clear.
Regarding (i): CN’s two key metrics for charities are (a) financial health and (b) accountability and transparency. (a) is in very significant part about the charities’ overheads ratios (in several different forms), where they give a charity a higher score the lower its overheads are, breaking the scores into five broad buckets: see here for more detail. The doughnuts for police officers example shows that a really bad charity could score extremely highly on CN’s metrics, which shows that CN’s metrics must be wrong. Similarly for Books for Africa, which gets a near-perfect score from CN, and features in its ‘ten top-notch charities’ list, in significant part because of its very low overheads, despite having no good evidence to support its program.
I represent CN fairly, and make a fair criticism of its approach to assessing charities. In the extended quote you give, they caveat that very low overheads are not make-or-break for a charity. But, on their charity rating methodology, all other things being equal they give a charity a higher score the lower the charity’s overheads. If that scoring method is a bad one, which it is, then my criticism is justified.
4. Life satisfaction and income and the hundredfold multiplier
The hundredfold multiplier
You make two objections to my 100x multiplier claim: that the DCP2 deworming estimate was off by 100x, and that the Stevenson and Wolfers paper does not support it.
But there are very many lines of evidence in favour of the 100x multiplier, which I reference in Doing Good Better. I mention that there are many independent justifications for thinking that there is a logarithmic (or even more concave) relationship between income and happiness on p.25, and in the endnotes on p.261-2 (all references are to the British paperback edition—yellow cover). In addition to the Stevenson and Wolfers lifetime satisfaction approach (which I discuss later), here are some reasons for thinking that the hundredfold multiplier obtains:
The experiential sampling method of assessing happiness. I mention this in the endnote on p.262, pointing out that, on this method, my argument would be stronger, because on this method the relationship between income and wellbeing is more concave than logarithmic, and is in fact bounded above.
Imputed utility functions from the market behaviour of private individuals and the actions of government. It’s absolutely mainstream economic thought that utility varies with the log of income (that is, eta = 1 in an isoelastic utility function) or something more concave (eta > 1). I reference a paper that takes this approach on p.261: Groom and Maddison (2013), who estimate eta to be 1.5.
Estimates of cost to save a life. I discuss this in ch.2; I note that this is another strand of supporting evidence prior to my discussion of Stevenson and Wolfers on p.25: “It’s a basic rule of economics that money is less valuable to you the more you have of it. We should therefore expect $1 to provide a larger benefit for an extremely poor Indian farmer than it would for you or me. But how much larger? Economists have sought to answer this question through a variety of methods. We’ll look at some of these in the next chapter, but for now I’ll just discuss one [the Stevenson and Wolfers approach].” Again, you find 100x or more discrepancy in the cost to save a life in rich or poor countries.
Estimate of cost to provide one QALY. As with the previous bullet point.
Note, crucially, that the developing world estimates for cost to provide one QALY or cost to save a life come from GiveWell, not — as you imply — from DCP2 or any individual study.
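The isoelastic-utility strand of the argument reduces to simple arithmetic. Under log utility (eta = 1), the marginal utility of a dollar is inversely proportional to income, so a 100x income gap implies a 100x difference in the benefit of a marginal dollar; with eta = 1.5 the multiplier is larger still. A sketch, with illustrative income figures of my choosing:

```python
# Isoelastic marginal utility: u'(c) = c**(-eta).
# With eta = 1 (log utility), marginal utility is simply 1/c.
def marginal_utility(income, eta):
    return income ** (-eta)

rich = 50_000   # illustrative rich-country income
poor = 500      # illustrative extreme-poverty income (100x lower)

# Under log utility, a 100x income gap gives exactly a 100x multiplier:
ratio_log = marginal_utility(poor, 1) / marginal_utility(rich, 1)
assert round(ratio_log) == 100

# With eta = 1.5 (the Groom and Maddison estimate), it's 100**1.5 = 1000x:
ratio_15 = marginal_utility(poor, 1.5) / marginal_utility(rich, 1.5)
assert round(ratio_15) == 1000
```

In general, with an isoelastic utility function the multiplier is (income ratio)**eta, which is why more concave utility (higher eta) strengthens rather than weakens the 100x claim.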
Is there a causal relationship from income to wellbeing?
It’s true that Stevenson and Wolfers only shows a correlation between income and wellbeing. But that there is a causal relationship, from income to wellbeing, is beyond doubt. It’s perfectly obvious that, over the scales we’re talking about, higher income enables you to have more wellbeing (you can buy analgesics, healthcare, shelter, eat more and better food, etc.).
It’s true that we don’t know exactly the strength of the causal relationship. Understanding this could make my argument stronger or weaker. To illustrate, here’s a quote from another Stevenson and Wolfers paper, with the numerals in square brackets added in by me:
“Although our analysis provides a useful measurement of the bivariate relationship between income and well-being both within and between countries, there are good reasons to doubt that this corresponds to the causal effect of income on well-being. It seems plausible (perhaps even likely) that [i] the within-country well-being-income gradient may be biased upward by reverse causation, as happiness may well be a productive trait in some occupations, raising income. A different perspective, offered by Kahneman, et al. (2006), suggests that [ii] within-country comparisons overstate the true relationship between subjective well-being and income because of a “focusing illusion”: the very nature of asking about life satisfaction leads people to assess their life relative to others, and they thus focus on where they fall relative to others in regard to concrete measures such as income. Although these specific biases may have a more important impact on within-country comparisons, it seems likely that [iii] the bivariate well-being-GDP relationship may also reflect the influence of third factors, such as democracy, the quality of national laws or government, health, or even favorable weather conditions, and many of these factors raise both GDP per capita and well-being (Kenny, 1999). [iv] Other factors, such as increased savings, reduced leisure, or even increasingly materialist values may raise GDP per capita at the expense of subjective well-being. At this stage we cannot address these shortcomings in any detail, although, given our reassessment of the stylized facts, we would suggest an urgent need for research identifying these causal parameters.”
To the extent to which (i), (ii) or (iv) are true, the case for the 100x multiplier becomes stronger. To the extent to which (iii) is true, the case for the 100x multiplier becomes weaker. We don’t know, at the moment, which of these are the most important factors. But, given that the wide variety of different strands of evidence listed in the previous section all point in the same direction, I think that estimating a 100x multiplier as a causal matter is reasonable. (Final point: noting again that all these estimates do not factor in the long-run benefits of donations, which would increase the ratio of benefits to others to benefits to yourself even further in the direction of benefits to others.)
On the Stevenson and Wolfers data, is the relationship between income and happiness weaker for poor countries than for rich countries?
If it were the case that money does less to buy happiness (for any given income level) in poor countries than in rich countries, then that would be one counterargument to my claim.
However, it doesn’t seem to me that this is true of the Stevenson and Wolfers data. In particular, it’s highly cherry-picked to compare Nigeria and the USA as you do, because Nigeria is a clear outlier in terms of how flat the slope is. I’m only eyeballing the graph, but it seems to me that, of the poorest countries represented (PHL, BGD, EGY, CHN, IND, PAK, NGA, ZAF, IDN), only NGA and ZAF have flatter slopes than USA (and even for ZAF, that’s only true for incomes less than $6000 or so); all the rest have slopes that are similar to or steeper than that of USA (IND, PAK, BGD, CHN, EGY, IDN all seem steeper than USA to me). Given that Nigeria is such an outlier, I’m inclined not to give it too much weight. The average trend across countries, rich and poor, is pretty clear.
Re Etg buy-out—yes, you’re right. For people who think that CEA is a top donation target, hopefully we could just come to agreement as a trade wouldn’t be possible, or would be prohibitively costly (if there were only slight differences in our views on which places were best to fund).
Re local group activities: These are just examples of some of the things I’d be excited about local groups doing, and I know that at least some local groups are funding constrained (e.g. someone is running them part-time, unpaid, and will otherwise need to get a job).
Re AI safety fellowship at ASI—as I understand it, that is currently funding constrained (they had great applicants who wanted to take the fellowship but ASI couldn’t fund it). For other applications (e.g. Google Brain) it could involve, say, spending some amount of time during or after a physics or math PhD in order to learn some machine learning and be more competitive.
Re anthropogenic existential risks—ah, I had thought that it was only in presentation form. In which case: that paper is exactly the sort of thing I’d love to see more of.