Hey Alex, thanks for the response! To clarify, I didn’t mean to ask whether no case has been made, or imply that they’ve “never been looked at”, but rather ask whether a compelling case has been made—which I interpret as arguments which seem strong enough to justify the claims made about Givewell charities, as understood by the donors influenced by EA.
I think that the 100x multiplier is a powerful intuition, but that there’s a similarly powerful intuition going the other way: that wealthy countries are many times more influential than developing countries (e.g. as measured in technological progress), which is reason to think that interventions in wealthy countries can do comparable amounts of good overall.
On the specific links you gave: the one on climate change (Global development interventions are generally more effective than climate change interventions) starts as follows:
Previously titled “Climate change interventions are generally more effective than global development interventions”. Because of an error, the conclusions have significantly changed. I have extended the analysis and now provide a more detailed spreadsheet model below. In the comments below, Benjamin_Todd uses a different Guesstimate model and finds that climate change comes out ~80x better than global health (even though the point estimate found that global health is better).
I haven’t read the full thing, but based on this, it seems like there’s still a lot of uncertainty about the overall conclusion reached, even when the model is focused on direct quantifiable effects, rather than broader effects like movement-building, etc. Meanwhile the 80k article says that “when political campaigns are the best use of someone’s charitable giving is beyond the scope of this article”. I appreciate that there’s more work on these questions which might make the case much more strongly. But given that Givewell is moving over $100M a year from a wide range of people, and that one of the most common criticisms EA receives is that it doesn’t account enough for systemic change, my overall expectation is still that EA’s case against donating to mainstream systemic-change interventions is not strong enough to justify the set of claims that people understand us to be making.
I suspect that our disagreement might be less about what research exists, and more about what standard to apply for justification. Some reasons I think that we should have a pretty high threshold for thinking that claims about Givewell top charities doing the most good are justified:
If we think of EA as an ethical claim (you should care about doing a lot of good) and an empirical claim (if you care about that, then listening to us increases your ability to do so), then the empirical claim should be evaluated against the donations made by people who want to do a lot of good but aren’t familiar with EA. My guess is that climate change and politics are fairly central examples of such donations.
(As mentioned in a reply to Denise): “Doing the most good per dollar” and “doing the most good that can be verified using a certain class of methodologies” can be very different claims. And the more different that class of methodologies is from most people’s intuitive conception of how to evaluate things, the more important it is to clarify that point. Yet it seems like the types of evidence that we have for these charities are very different from the types of evidence that most people rely on to form judgements about e.g. how good it would be if a given political party got elected, judgements which often rely on effects that are much harder to quantify.
Givewell charities are still (I think) the main way that most outsiders perceive EA. We’re now a sizeable movement with many full-time researchers. So I expect that outsiders overestimate how much research backs up the claims they hear about doing the most good per dollar, especially with respect to the comparisons I mentioned. I expect they also underestimate the level of internal disagreement within EA about how much good these charities do.
EA funds a lot of internal movement-building that is hard to quantify. So when our evaluations of other causes exclude factors that we consider important when funding ourselves, we should be very careful.
I didn’t mean to ask whether no case has been made, or imply that they’ve “never been looked at”, but rather ask whether a compelling case has been made
I’m not quite sure what you’re trying to get at here. In some trivial sense we can see that many people were compelled, hence I didn’t bother to distinguish between ‘case’ and ‘compelling case’. I wonder whether by ‘compelling case’ you really mean ‘case I would find convincing’? In which case, I don’t know whether that case was ever made. I’d be happy to chat more offline and try to compel you :)
there’s a similarly powerful intuition going the other way: that wealthy countries are many times more influential than developing countries
I don’t think this intuition is similarly powerful at all, but more importantly I don’t think it ‘goes the other way’, or perhaps I don’t understand what you mean by that phrase. Concretely, if we treat GDP-per-capita as a proxy for influentialness-per-person (not perfect, but seems like the right ballpark), and assume that how much we can influence people with $x also scales linearly with GDP-per-capita (i.e. it takes Y months’ wages to influence people by amount Z), that would suggest that interventions aimed at influencing worldwide events have comparable impact anywhere, rather than actively favouring developed countries by anything like the 100x margin.
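To spell that scaling argument out (a rough sketch under the two linearity assumptions just stated, writing $g$ for GDP-per-capita):

$$\text{influence per dollar} \;\propto\; \frac{\text{influence per person}}{\text{cost to shift one person by } Z} \;\propto\; \frac{g}{g} \;=\; \text{constant}.$$

So, formalised this way, the intuition predicts rough parity across countries for influence-focused interventions; it cancels the 100x consumption multiplier rather than reversing it.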
I suspect that our disagreement might be less about what research exists, and more about what standard to apply for justification.
I agree. I think the appropriate standard is basically the ‘do you buy your own bullshit’ standard. I.e. if I am donating to Givewell charities over climate change (CC) charities, that is very likely revealing that I truly think those opportunities are better all things considered, not just better according to some narrow criteria. At that point, I could be just plain wrong in expressing that opinion to others, but I’m not being dishonest. By contrast, if I give to CC charities over Givewell charities, I largely don’t think I should evangelise on behalf of Givewell charities, regardless of whether they score better on some specific criteria, unless I am very confident that the person I am talking to cares about those specific criteria (even then I’d want to add ‘I don’t support this personally’ caveats).
My impression is that EA broadly meets this standard, and I would be disappointed to hear of a case where an individual or group had pushed Givewell charities while having no interest in them for their personal or group-influenced donations.
the empirical claim should be evaluated against the donations made by people who want to do a lot of good, but aren’t familiar with EA. My guess is that climate change and politics are fairly central examples of such donations.
I’m happy to evaluate against these examples regardless, but (a) I doubt these are central (though not with high confidence; I’d be happy to see data), and (b) I’m not sure evaluating against typical-for-that-group donations makes a whole lot of sense when for most people donations are a sideshow in their altruistic endeavours. The counterfactual where I don’t get involved with EA doesn’t look like me donating to climate change instead; it looks like me becoming a teacher rather than a trader and simply earning far less, or becoming a trader and retiring at 30 followed by doing volunteer work. On a quick scan of my relatively-altruistic non-EA friends (who skew economically-privileged and very highly educated, so YMMV), doing good in this kind of direct-but-local way looks like a far more typical approach than making large (say >5% of income) donations to favoured non-EA areas.
Givewell charities are still (I think) the main way that most outsiders perceive EA.
Communicating the fact that many core EA organisations have a firmly longtermist focus is something I am strongly in favour of. 80k has been doing a ton of work here to try and shift perceptions of what EA is about.
That said, in this venue I think it’s easy to overestimate the disconnect. 80k/CEA/EA forum/etc. are only one part of the movement, and skew heavily longtermist relative to the whole. Put plainly, if outsiders perceive EA heavily through the lens of Givewell charities because most self-identified EAs’ donations mostly go to Givewell charities, that seems fine, in the sense that perceptions match reality, regardless of what we oddballs are doing. If outsiders perceive this because it used to be the case but no longer is, and there’s a lag, then I’m in favour of doing things to try to reduce that lag, as in the previous paragraph’s example.