If someone thinks concentrated decisionmaking is better, they should be overtly making the case for concentrated decisionmaking. When I talk with EA leaders about this they generally do not try to sell me on concentrated decisionmaking, they just note that everyone seems eager to trust them so they may as well try to put that resource to good use. Often they say they’d be happy if alternatives emerged.
BenHoffman
Effective altruism is self-recommending
Matching-donation fundraisers can be harmfully dishonest
Cash transfers are not necessarily wealth transfers
GiveWell and the problem of partial funding
On (1) I agree that GiveWell’s done a huge public service by making many parts of its decisionmaking process public, letting us track down what their sources are, etc. But making it really easy for an outsider to audit GiveWell’s work, while an admirable behavior, does not imply that GiveWell has done a satisfactory audit of its own work. It seems to me like a lot of people are inferring the latter from the former, and I hope by now it’s clear what reasons there are to be skeptical of this.
On (3), here’s why I’m worried about increasing overt reliance on the argument from “believe me”:
The difference between making a direct argument for X, and arguing for “trust me” and then doing X, is that in the direct case, you’re making it easy for people to evaluate your assumptions about X and disagree with you on the object level. In the “trust me” case, you’re making it about who you are rather than what is to be done. I can seriously consider someone’s arguments without trusting them so much that I’d like to give them my money with no strings attached.
“Most effective way to donate” is vanishingly unlikely to be generically true for all donors, and the aggressive pitching of these funds turns the supposed test of whether there’s underlying demand for EA Funds into a test of whether people believe CEA’s assurances that EA Funds is the right way to give.
Seems worth establishing the fact that bad actors exist, will try to join our community, and will engage in this pattern of almost-plausibly-deniable, shamelessly bad behavior. I think EAs often have a mental block around admitting that in most of the world, lying is a cheap and effective strategy for personal gain; I think we make wrong judgments because we’re missing this key fact about how the world works. I think we should generalize from this incident, and having a clear record is helpful for doing so.
Seems a little odd to solve that problem by setting up an “independent” funding source also controlled by Open Phil staff, though of course as mentioned elsewhere that may change later.
My thoughts on this are too long for a comment, but I’ve written them up here—posting a link in the spirit of making this forum post a comprehensive roundup: http://benjaminrosshoffman.com/honesty-and-perjury/
For some balance, see Kelsey Piper’s comments here—it looks like empirically, the picture we get from GiveDirectly is encouraging.
EffectiveAltruism.org’s Introduction to Effective Altruism allocates most of its words to what’s effectively an explanation of global poverty EA: a focus on empirical validation, explicit measurement and quantification, and the power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.
Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.
If you click “Donate Effectively,” you end up on the EA Funds site, which presents the four Fund categories as generic products you might want to allocate a portfolio between. Two of the four products are in effect just letting Nick Beckstead do what he thinks is sensible with the money, which as I’ve said above is a good idea but a very large leap from the anti-Playpump pitch. “Trust friendly, sensible-seeming agents and empower them to do what they think is sensible” is a very, very different method than “check everything because it’s easy to spend money on nice-sounding things of no value.”
The GWWC site and Facebook page have a similar dynamic. I mentioned in this post that the page What We Can Achieve mainly references global poverty (though I’ve been advised that this is an old page pending an update). The GWWC Facebook page seems like it’s mostly global poverty stuff, and some promotion of other CEA brands.
It’s very plausible to me that in-person EA groups often don’t have this problem because individuals don’t feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.
I would guess that $300k simply isn’t worth Elie’s time to distribute in small grants, given the enormous funds available via GoodVentures and even GiveWell direct and directed donations.
This is consistent with the optionality story in the beta launch post:
If the EA Funds raises little money, they can spend little additional time allocating the EA Funds’ money but still utilize their deep subject-matter expertise in making the allocation. This reduces the chance that the EA Funds causes fund managers to use their time ineffectively and it means that the lower bound of the quality of the donations is likely to be high enough to justify donations even without knowing the eventual size of the fund.
However, I do think this suggests that—to the extent to which GiveWell is already a known and trusted institution—for global poverty in particular it’s more important to get the fund manager with the most unique relevant expertise than the fund manager with the most expertise overall.
In principle, if there’s unmet demand for these things, then it’s a great idea to set up such funds. Overall this infrastructure seems plausibly helpful.
But I’m confused about why, if this is a good idea, Open Phil hasn’t already funded it. I wouldn’t make such a claim about any possible fund set up in this way—that way leads to playing the Defectbot strategy in the iterated prisoner’s dilemma. But in this particular case, I’d expect Open Phil to have much more reason than outside donors do to trust Elie’s, Lewis’s, and Nick’s judgment and value-alignment. Though per Kerry’s “minimum viable product” comment below, perhaps this info asymmetry argument will be less true in the future.
I suspect that Open Phil is actually making a mistake by not empowering individuals more to make unaccountable discretionary decisions, so this seems good to try in its current form anyhow. I weakly expect it to outperform just giving the money to Open Phil or the GiveWell top charities. I’m looking forward to seeing what happens.
I haven’t yet seen a formal approach I find satisfying and compelling for questions like “How should I behave when I perceive a significant risk that I’m badly misguided in a fundamental way?”
Seems like the obvious thing would be to frontload testing your hypotheses, try things that break quickly and perceptibly if a key belief is wrong, minimize the extent to which you try to control the behavior of other agents in ways other than sharing information, and share resources when you happen to be extraordinarily lucky. In other words, behave like you’d like other agents who might be badly misguided in a fundamental way to behave.
For what it’s worth, your comment helped me clarify my position, and I wish I’d been able to express myself that clearly earlier.
Also, somewhat embarrassingly, I am also Benquo (I think I accidentally signed up once via mobile, forgot, and signed up again via desktop.) Hopefully I’ll remember to just use this login going forward.
That’s good to hear. But I didn’t think you were saying that criticism is generally harmful—I thought you were saying that failing to check in with GWWC first is harmful in expectation. If so, I’m curious what the most important scenarios are in which it could cause harm to start this sort of conversation in public rather than in private. If not, when do you think this advice does help?
It additionally seemed like you thought that this advice should be applied, not just to criticism of GWWC’s own conduct, but to criticism of the idea of the pledge itself—which is already public, and not entirely specific to GWWC, as organizations like The Life You Can Save and REG promote similar pledges. I got this impression because Alyssa’s post is limited to discussion of the public pledge itself.
Do you disagree with the first bullet point? Or do you disagree with the second? Or do you disagree that they jointly imply something like the bit you quoted?
The series is long and boring precisely because it tried to address pretty much every claim like that at once. In this case GiveWell’s on record as not wanting their cost per life saved numbers to be held to the standard of “literally true” (one side of that disjunction) so I don’t see the point in going through that whole argument again.
To support a claim that this applies in “virtually all” cases, I’d want to see more engagement with pragmatic problems applying modesty, including:
Identifying experts is far from free epistemically.
Epistemic majoritarianism in practice assumes that no one else is an epistemic majoritarian. Your first guess should be that nearly everyone else is one iff you are, in which case you should expect information cascades due to the occasional overconfident person. If other people are not majoritarians because they’re too stupid to notice the considerations for it, then it seems a bit silly to defer to them. On the other hand, if they’re not majoritarians because they’re smarter than you are… well, you mention this, but this objection seems to me to be obviously fatal, and the only thing left is to explain why the wisdom of the majority disagrees with the epistemically modest.
The vast majority of information available about other people’s opinions does not differentiate clearly between their impressions and their beliefs after adjusting for their knowledge about others’ beliefs.
People lie to maintain socially desirable opinions.
Control over others’ opinions is a valuable social commodity, and apparent expertise gives one some control.
In particular, the last two factors (different sorts of dishonesty) are much bigger deals if most uninformed people copy the opinions of apparently informed people instead of saying “I have no idea”.
Overall, I agree that when one has a verified-independent, verified-honest opinion from a peer, one should weight it equally to one’s own, and defer to one’s verified epistemic superiors—but this has little to do with real life, in which we rarely have that opportunity!
It also seems to me that the time to complain about this sort of process is while the results are still plausibly good. If we wait for things to be clearly bad, it’ll be too late to recover the relevant social trust. This way involves some amount of complaining about bad governance used to good ends, but the better the ends, the more compatible they should be with good governance.