If someone thinks concentrated decisionmaking is better, they should be overtly making the case for concentrated decisionmaking. When I talk with EA leaders about this they generally do not try to sell me on concentrated decisionmaking, they just note that everyone seems eager to trust them so they may as well try to put that resource to good use. Often they say they’d be happy if alternatives emerged.
On (1) I agree that GiveWell’s done a huge public service by making many parts of its decisionmaking process public, letting us track down what their sources are, etc. But making it really easy for an outsider to audit GiveWell’s work, while an admirable behavior, does not imply that GiveWell has done a satisfactory audit of its own work. It seems to me like a lot of people are inferring the latter from the former, and I hope by now it’s clear what reasons there are to be skeptical of this.
On (3), here’s why I’m worried about increasing overt reliance on the argument from “believe me”:
The difference between making a direct argument for X, and arguing for “trust me” and then doing X, is that in the direct case, you’re making it easy for people to evaluate your assumptions about X and disagree with you on the object level. In the “trust me” case, you’re making it about who you are rather than what is to be done. I can seriously consider someone’s arguments without trusting them so much that I’d like to give them my money with no strings attached.
“Most effective way to donate” is vanishingly unlikely to be generically true for all donors, and the aggressive pitching of these funds turns the supposed test of whether there’s underlying demand for EA Funds into a test of whether people believe CEA’s assurances that EA Funds is the right way to give.
Seems worth establishing the fact that bad actors exist, will try to join our community, and will engage in this pattern of almost-plausibly-deniable, shamelessly bad behavior. I think EAs often have a mental block around admitting that in most of the world, lying is a cheap and effective strategy for personal gain; I think we make wrong judgments because we’re missing this key fact about how the world works. I think we should generalize from this incident, and having a clear record is helpful for doing so.
Seems a little odd to solve that problem by setting up an “independent” funding source also controlled by Open Phil staff, though of course as mentioned elsewhere that may change later.
My thoughts on this are too long for a comment, but I’ve written them up here—posting a link in the spirit of making this forum post a comprehensive roundup: http://benjaminrosshoffman.com/honesty-and-perjury/
For some balance, see Kelsey Piper’s comments here—it looks like empirically, the picture we get from GiveDirectly is encouraging.
EffectiveAltruism.org’s Introduction to Effective Altruism allocates most of its words to what’s effectively an explanation of global poverty EA: a focus on empirical validation, explicit measurement and quantification, and the power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.
Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.
If you click “Donate Effectively,” you end up on the EA Funds site, which presents the four Fund categories as generic products you might want to allocate a portfolio between. Two of the four products are in effect just letting Nick Beckstead do what he thinks is sensible with the money, which as I’ve said above is a good idea but a very large leap from the anti-Playpump pitch. “Trust friendly, sensible-seeming agents and empower them to do what they think is sensible” is a very, very different method than “check everything because it’s easy to spend money on nice-sounding things of no value.”
The GWWC site and Facebook page have a similar dynamic. I mentioned in this post that the page What We Can Achieve mainly references global poverty (though I’ve been advised that this is an old page pending an update). The GWWC Facebook page seems like it’s mostly global poverty stuff, and some promotion of other CEA brands.
It’s very plausible to me that in-person EA groups often don’t have this problem because individuals don’t feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.
I would guess that $300k simply isn’t worth Elie’s time to distribute in small grants, given the enormous funds available via Good Ventures and even GiveWell’s direct and directed donations.
This is consistent with the optionality story in the beta launch post:
If the EA Funds raises little money, they can spend little additional time allocating the EA Funds’ money but still utilize their deep subject-matter expertise in making the allocation. This reduces the chance that the EA Funds causes fund managers to use their time ineffectively and it means that the lower bound of the quality of the donations is likely to be high enough to justify donations even without knowing the eventual size of the fund.
However, I do think this suggests that—to the extent to which GiveWell is already a known and trusted institution—for global poverty in particular it matters more that the fund manager have uniquely relevant expertise than that they have the most expertise overall.
In principle, if there’s unmet demand for these things, then it’s a great idea to set up such funds. Overall this infrastructure seems plausibly helpful.
But I’m confused about why, if this is a good idea, Open Phil hasn’t already funded it. I wouldn’t make such a claim about any possible fund set up in this way—that way leads to playing the Defectbot strategy in the iterated prisoner’s dilemma. But in this particular case, I’d expect Open Phil to have much more reason than outside donors do to trust Elie’s, Lewis’s, and Nick’s judgment and value-alignment. Though per Kerry’s “minimum viable product” comment below, perhaps this info asymmetry argument will be less true in the future.
I suspect that Open Phil is actually making a mistake by not empowering individuals more to make unaccountable discretionary decisions, so this seems good to try in its current form anyhow. I weakly expect it to outperform just giving the money to Open Phil or the GiveWell top charities. I’m looking forward to seeing what happens.
I haven’t yet seen a formal approach I find satisfying and compelling for questions like “How should I behave when I perceive a significant risk that I’m badly misguided in a fundamental way?”
Seems like the obvious thing would be to frontload testing your hypotheses, try things that break quickly and perceptibly if a key belief is wrong, minimize the extent to which you try to control the behavior of other agents in ways other than sharing information, and share resources when you happen to be extraordinarily lucky. In other words, behave like you’d like other agents who might be badly misguided in a fundamental way to behave.
For what it’s worth, your comment helped me clarify my position, and I wish I’d been able to express myself that clearly earlier.
Also, somewhat embarrassingly, I am also Benquo (I think I accidentally signed up once via mobile, forgot, and signed up again via desktop.) Hopefully I’ll remember to just use this login going forward.
That’s good to hear. But I didn’t think you were saying that criticism is generally harmful—I thought you were saying that failing to check in with GWWC first is harmful in expectation. If so, I’m curious what the most important scenarios are in which it could cause harm to start this sort of conversation in public rather than in private. If not, when do you think this advice does help?
It additionally seemed like you thought that this advice should be applied, not just to criticism of GWWC’s own conduct, but to criticism of the idea of the pledge itself—which is already public, and not entirely specific to GWWC, as organizations like The Life You Can Save and REG promote similar pledges. I got this impression because Alyssa’s post is limited to discussion of the public pledge itself.
Do you disagree with the first bullet point? Or do you disagree with the second? Or do you disagree that they jointly imply something like the bit you quoted?
The series is long and boring precisely because it tried to address pretty much every claim like that at once. In this case GiveWell’s on record as not wanting their cost per life saved numbers to be held to the standard of “literally true” (one side of that disjunction) so I don’t see the point in going through that whole argument again.
To support a claim that this applies in “virtually all” cases, I’d want to see more engagement with pragmatic problems applying modesty, including:
Identifying experts is far from free epistemically.
Epistemic majoritarianism in practice assumes that no one else is an epistemic majoritarian. Your first guess should be that nearly everyone else is iff you are, in which case you should expect information cascades due to the occasional overconfident person. If other people are not majoritarians because they’re too stupid to notice the considerations for it, then it seems a bit silly to defer to them. On the other hand, if they’re not majoritarians because they’re smarter than you are… well, you mention this, but this objection seems to me to be obviously fatal, and the only thing left is to explain why the wisdom of the majority disagrees with the epistemically modest.
The vast majority of information available about other people’s opinions does not differentiate clearly between their impressions and their beliefs after adjusting for their knowledge about others’ beliefs.
People lie to maintain socially desirable opinions.
Control over others’ opinions is a valuable social commodity, and apparent expertise gives one some control.
In particular, the last two factors (different sorts of dishonesty) are much bigger deals if most uninformed people copy the opinions of apparently informed people instead of saying “I have no idea”.
Overall, I agree that when you have a verified-independent, verified-honest opinion from a peer, you should weight it equally with your own, and defer to your verified epistemic superiors—but this has little to do with real life, in which we rarely have that opportunity!
Kerry,
I think that in a writeup for the two funds Nick is managing, CEA has done a fine job making it clear what’s going on. The launch post here on the Forum was also very clear.
My worry is that this isn’t at all what someone attracted by EA’s public image would be expecting, since so much of the material is about experimental validation and audit.
I think that there’s an opportunity here to figure out how to effectively pitch far-future stuff directly, instead of grafting it onto existing global-poverty messaging. There’s a potential pitch centered around: “Future people are morally relevant, neglected, and extremely numerous. Saving the world isn’t just a high-minded phrase—here are some specific ways you could steer the course of the future a lot.” A lot of Nick Bostrom’s early public writing is like this, and a lot of people were persuaded by this sort of thing to try to do something about x-risk. I think there’s a lot of potential value in figuring out how to bring more of those sorts of people together, and—when there are promising things in that domain to fund—help them coordinate to fund those things.
In the meantime, it does make sense to offer a fund oriented around the far future, since many EAs do share those preferences. I’m one of them, and think that Nick’s first grant was a promising one. It just seems off to me to aggressively market it as an obvious, natural thing for someone who’s just been through the GWWC or CEA intro material to put money into. I suspect that many of them would have valid objections that are being rhetorically steamrollered, and a strategy of explicit persuasion has a better chance of actually encountering those objections, and maybe learning from them.
I recognize that I’m recommending a substantial strategy change, and it would be entirely appropriate for CEA to take a while to think about it.
Consider paternalism as a proxy for model error rather than an intrinsic dispreference. We should wonder whether the things we do to people are more likely to cause hidden harm, or to lack their supposed benefits, than the things they do for themselves.
Deworming is an especially stark example. The mass drug administration program goes into schools and makes all the children, whether sick or healthy, swallow giant poisonous pills that give them bellyaches, because we hope that killing the worms in this way buys big improvements in life outcomes. GiveWell estimates the effect at about 1.5% of what the studies say, but the expected value is still high. This could also involve a lot of needless harm via unnecessary treatments.
By contrast, the less paternalistic Living Goods (a recent GiveWell “standout charity”) sells deworming pills (at or near cost), so we should expect better targeting of kids who are actually sick with worms, and repeat business is more likely only if the pills seem helpful.
I wrote a bit about this here: http://benjaminrosshoffman.com/effective-altruism-not-no-brainer/
Yep! I think it’s fine for them to exist in principle, but the aggressive marketing of them is problematic. I’ve seen attempts to correct specific problems that are pointed out (e.g. exaggerated claims), but there are so many things pointing in the same direction that it really seems like a mindset problem.
I tried to write more directly about the mindset problem here:
http://benjaminrosshoffman.com/humility-argument-honesty/
http://effective-altruism.com/ea/13w/matchingdonation_fundraisers_can_be_harmfully/
http://benjaminrosshoffman.com/against-responsibility/
Thanks for the detailed response! I wanted to quickly point out something you did here that I think is good practice, and wish more people did:
“Access via size” and “Independence via many funders” were not part of our reasoning.
Marking which parts of someone’s argument you think are relevant and which you think aren’t—and, relatedly, which branches of a disjunction you accept and which you reject—is an important part of how arguments can lead to shared models. A lot of people neglect this sort of thing, because it’s not a clear way to score points for their side. You took care to address it here. Thanks.
(More to follow when I’ve had time to take this in.)
It also seems to me that the time to complain about this sort of process is while the results are still plausibly good. If we wait for things to be clearly bad, it’ll be too late to recover the relevant social trust. This way involves some amount of complaining about bad governance used to good ends, but the better the ends, the more compatible they should be with good governance.