I think an underappreciated part of castlegate is that it fairly easily puts people in an impossible bind.
EA is a complicated morass, but there are a few tenets that are prominent, especially early on. These may be further simplified, especially by people using EA as a treatment for their scrupulosity issues. For most of this post I’m going to take that simplified point of view (I’ll mark when we return to my own beliefs).
Two major, major tenets brought up very early in EA are:
1. You should donate your money to the most impactful possible cause.
    - Some people will additionally internalize “the most impactful in expectation.”
2. GiveWell and OpenPhil have very good judgment.
The natural conclusion of which is that donating to GiveWell- or OpenPhil-certified causes is a safe and easy way to fulfill your moral duty.
If you’re operating under those assumptions and OpenPhil funds something without making their reasoning legible, there are two possibilities:
1. The opportunity is bad, which at best means OpenPhil is bad, and at worst means the EA ecosystem is trying to fleece you.
2. The opportunity is good but you’re not allowed to donate to it, which leaves you in violation of tenet #1.
Both of which are upsetting, and neither of which really got addressed by the discourse.
I don’t think these tenets are correct, or at least they aren’t complete. I think goodharting on a simplified “most possible impact” metric leads to very bad places. And I think that OpenPhil isn’t even trying to have “good judgment” in the sense that tenet #2 means it. Even if they weren’t composed of fallible humans, they’re executing a hits-based strategy, which means you shouldn’t expect every opportunity to be immediately, legibly good. That’s one reason they don’t ask for money from small donors. Which means OpenPhil funding things that aren’t legibly good doesn’t put me in any sort of bind.
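To make the hits-based point concrete, here’s a minimal sketch in Python with entirely made-up numbers: a portfolio of long-shot grants can beat a portfolio of legible, reliable ones in expectation even though most of the long shots fail. The probabilities and impact figures below are hypothetical illustrations, not OpenPhil’s actual numbers.

```python
# Hypothetical hits-based grant portfolio: most grants achieve little,
# a few succeed enormously. All numbers are invented for illustration.

long_shot_grants = [
    # (probability of success, impact if it succeeds, impact if it fails)
    (0.05, 1000, 0),
] * 10  # ten long-shot grants

safe_grants = [
    (0.95, 3, 0),
] * 10  # ten legible, reliable grants

def expected_impact(portfolio):
    """Sum of p * impact_on_success + (1 - p) * impact_on_failure."""
    return sum(p * win + (1 - p) * lose for p, win, lose in portfolio)

print(expected_impact(long_shot_grants))  # 10 * 0.05 * 1000 = 500
print(expected_impact(safe_grants))       # 10 * 0.95 * 3    = 28.5
```

Under these invented numbers, the long-shot portfolio dominates in expectation even though each individual grant fails 95% of the time, so you should expect most of its grants to look bad in isolation.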
I think it would be harmful to force all of EA to fit the constraints imposed by these two tenets. But I think enough people are under the impression that it should that it rises to the level of a problem worth addressing, probably through better messaging.
Where does the “you’re not allowed to donate to it” part of #2 come from?
Because it’s not legible, and willingness to donate to illegible things opens you up to scams.
OpenPhil also discourages small donations, I believe specifically because they don’t want to have to justify their decisions to the public, but I think they will accept them.
Saying you’re not allowed to donate to the projects is much stronger than either of these things, though. E.g. re your second point, nothing is stopping someone from giving top-up funding to projects/people that have received OpenPhil funding, and I’m not sure anyone feels like they’re being told they shouldn’t? E.g. the Nonlinear Fund was doing exactly this kind of marginal funding.
I agree they’re allowed to seek out frontier donations, or for that matter give to Open Phil. I believe that this doesn’t feel available/acceptable, on an emotional level, to a meaningful portion of the EA population, who have a strong need for both impact and certainty.