1) I think you’re missing one important way in which GiveWell and OpenPhil have demonstrated their credibility, which is by showing us many of the outputs of their decision-making processes and letting us judge their quality.
Having evidence that GiveWell’s recommendations had a track record of high impact would give us an absolute recommendation: if you follow their advice, you can expect to do this well. Having evidence that they are good at making decisions (by whatever standard you subscribe to) gives you a relative recommendation: if you follow their advice, you can expect to do better than you would do yourself.
In this sense, GiveWell’s confidence is not “loaned”; it has been earned by continuing to provide evidence of (what the community thinks is) good decision-making.
Of course, how well this works depends on how well we can recognize good decision-making. Only judging recommendations by whether they seem sensible to the community renders us vulnerable to groupthink, and untethers us from evidence. Good retrospectives on past recommendations would help us judge whether the decisions that are being made really are good, as well as being indicative of good tendencies within these organizations. So I think it would be great to do more of those (and, indeed, having the resources to run such retrospectives could be one of the advantages of having slightly more “centralised” institutions).
2) I do think that the pre-eminence of GiveWell and OpenPhil in the EA research space is a little unfortunate. Diversity of opinion is good, and in an ideal world I’d like to see several large institutions critiquing and evaluating each other’s work. This is one of the reasons I was sad that GWWC stopped doing charity evaluation research. Even if they think that GiveWell simply does it better, having an independent set of opinions is quite valuable.
3) I don’t quite see what now makes EA “self-recommending”. Previously we said “give your money to these charities”, now we say “give your money to this fund, and we’ll give it to these charities”. I don’t see a significant difference there: in both cases we’re claiming greater expertise than the donors, and asking them to defer to our judgement. It’s just that one of them is more systematized.
What would be worrying is if we were advertising a fund as “the most effective way to donate” and then channeling all the money to EA orgs. That looks like a scam. But the EA Community fund is clearly separate from the others. If you donate to the Global Development fund, your money will be spent on Global Development.
4) It’s good to keep us on our toes about how we sell things. It’s always tempting to oversell, particularly with the recent increasing focus on outreach. But I think we can and should do better than that, so thanks for bringing this stuff up!
It also seems to me that the time to complain about this sort of process is while the results are still plausibly good. If we wait for things to be clearly bad, it’ll be too late to recover the relevant social trust. This way involves some amount of complaining about bad governance used to good ends, but the better the ends, the more compatible they should be with good governance.
Yes, in case it wasn’t clear, I think I agree with many of your concrete suggestions, but I think the current situation is not too bad.
On (1), I agree that GiveWell’s done a huge public service by making many parts of its decision-making process public, letting us track down what their sources are, etc. But making it really easy for an outsider to audit GiveWell’s work, while an admirable behavior, does not imply that GiveWell has done a satisfactory audit of its own work. It seems to me like a lot of people are inferring the latter from the former, and I hope by now it’s clear what reasons there are to be skeptical of this.
On (3), here’s why I’m worried about increasing overt reliance on the argument from “believe me”:
The difference between making a direct argument for X, and arguing for “trust me” and then doing X, is that in the direct case, you’re making it easy for people to evaluate your assumptions about X and disagree with you on the object level. In the “trust me” case, you’re making it about who you are rather than what is to be done. I can seriously consider someone’s arguments without trusting them so much that I’d like to give them my money with no strings attached.
“Most effective way to donate” is vanishingly unlikely to be generically true for all donors, and the aggressive pitching of these funds turns the supposed test of whether there’s underlying demand for EA Funds into a test of whether people believe CEA’s assurances that EA Funds is the right way to give.
The point I was trying to make is that while GiveWell may not have acted “satisfactorily”, they are still well ahead of many of us. I hadn’t “inferred” that GiveWell had audited themselves thoroughly—it hadn’t even occurred to me to ask, which is a sign of just how bad my own epistemics are. And I don’t think I’m unusual in that respect. So GiveWell gets a lot of credit from me for doing “quite well” at their epistemics, even if they could do better (and it’s good to hold them to a high standard!).
I think that making the final decision on where to donate yourself often offers only an illusion of control. If you’re getting all your information from one source you might as well just be giving them your money. But it does at least keep more things out in the open, which is good.
Re-reading your post, I think I may have been misinterpreting you—am I right in thinking that you mainly object to the marketing of the EA Funds as the “default choice”, rather than to their existence for people who want that kind of instrument? I agree that the marketing is perhaps over-selling at the moment.
Yep! I think it’s fine for them to exist in principle, but the aggressive marketing of them is problematic. I’ve seen attempts to correct specific problems when they are pointed out (e.g. exaggerated claims), but there are so many things pointing in the same direction that it really seems like a mindset problem.
I tried to write more directly about the mindset problem here:
http://benjaminrosshoffman.com/humility-argument-honesty/
http://effective-altruism.com/ea/13w/matchingdonation_fundraisers_can_be_harmfully/
http://benjaminrosshoffman.com/against-responsibility/
Do you think “trust me” arguments are inherently invalid, or that in this case sufficient evidence hasn’t been presented?
I think sufficient evidence hasn’t been presented, in large part because the argument has been tacit rather than overt.