[This is my best attempt at summarizing a reasonable outsider’s view of the current state of affairs. Before publication, I had this sanity checked (though not necessarily endorsed) by an EA researcher with more context. Apologies in advance if it misrepresents the actual state of affairs, but that’s precisely the thing I’m trying to clarify for myself and others.]
At GiveWell, the standard of evidence is relatively well understood. We can all see the Cost Effectiveness Analysis spreadsheet (even if it isn’t taken 100% literally), compare QALYs and see that some charities are likely much more effective than others.
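For concreteness, here is a toy sketch of what that shared standard boils down to. The charities and numbers below are invented placeholders, not GiveWell's actual figures; the point is only that the comparison reduces to a legible cost-per-QALY division.

```python
# Toy GiveWell-style cost-effectiveness comparison.
# All names and numbers are invented placeholders, not GiveWell figures.

charities = {
    "Charity A": {"cost_per_treatment": 5.0, "qalys_per_treatment": 0.05},
    "Charity B": {"cost_per_treatment": 50.0, "qalys_per_treatment": 0.02},
}

for name, c in charities.items():
    cost_per_qaly = c["cost_per_treatment"] / c["qalys_per_treatment"]
    print(f"{name}: ${cost_per_qaly:,.0f} per QALY")

# Charity A comes out at $100/QALY and Charity B at $2,500/QALY: a 25x
# gap that anyone can inspect, even without taking the point estimates
# 100% literally.
```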
In contrast, Open Philanthropy is purposefully opaque. As Holden describes in “Anti-principles” for hits-based giving:
We don’t: require a strong evidence base before funding something. Quality evidence is hard to come by, and usually requires a sustained and well-resourced effort. Requiring quality evidence would therefore be at odds with our interest in neglectedness.
And:
We don’t: expect to be able to fully justify ourselves in writing… Process-wise, we’ve been trying to separate our decision-making process from our public writeup process. Typically, staffers recommend grants via internal writeups. Late in our process, after decision-makers have approved the basic ideas behind the grant, other staff take over and “translate” the internal writeups into writeups that are suitable to post publicly. One reason I’ve been eager to set up our process this way is that I believe it allows people to focus on making the best grants possible, without worrying at the same time about how the grants will be explained.
These are reasonable anti-principles. I’m not here to bemoan obfuscation or question the quality of evidence.
(Also note this recent post which clarifies a distinction within Open Phil between “causes focused on maximizing verifiable impact within our lifetimes” and “causes directly aimed at affecting the very long-run future”. I’m primarily asking about the latter, which could be thought of as HoldenOpenPhil in contrast to the former AlexOpenPhil.)
My question is really: Given that so much of the decision making process for these causes is private, what are we actually debating when we talk about them on the EA Forum?
Of course there are specific points that could be made. Someone could, in relative isolation, estimate the cost of an intervention, or do some work towards estimating its impact.
But when it comes to actually arguing that X is a high priority cause, or even suggesting that it might be, it’s totally unclear to me both:
What level of evidence is required.
What level of estimated impact is required.
To give some more specific examples, it’s unclear to me how someone outside of Open Philanthropy could go about advocating for the importance of an organization like New Science or Qualia Research Institute.
Or in the more recent back-and-forth between Rethink Priorities and Mark Lutter on Charter Cities, Linch (of RP) wrote that:
I don’t get why analyses from all sides [keep] skipping over detailed analysis of indirect effects.* To me by far the strongest argument for charter cities is the experimentation value/“laboratories of governance” angle, such that even if individual charter cities are in expectation negative, we’d still see outsized returns from studying and partially generalizing from the outsized successful charter cities that can be replicated elsewhere, host country or otherwise (I mean that’s the whole selling point of the Shenzhen stylized example after all!).
At least, I think this is the best/strongest argument. Informally, I feel like this argument is practically received wisdom among EAs who think about growth. Yet it’s pretty suspicious that nobody (to the best of my knowledge) has made this argument concrete and formal in a numeric way and thus exposed it to stress-testing.
I agree that this is a strong argument for charter cities. My (loose) impression is that it’s been neglected precisely because it’s harder to express in a formal and numeric way than the existing debate (from both sides) over economic growth rates and subsequent increases to time-discounted log consumption.
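To make that concrete, here is a minimal Monte Carlo sketch of the experimentation-value argument, with every parameter (success probability, direct values, replication value) an invented placeholder rather than an estimate. It shows the shape of the claim: a portfolio of charter cities can be negative in direct expected value while the option to generalize from a single outlier success makes the total positive. Note that the conclusion hinges entirely on the indirect-effect parameters that, as Linch notes, nobody has pinned down.

```python
import random

# Toy Monte Carlo sketch of the "experimentation value" argument for
# charter cities. All parameters are hypothetical illustrations.

N_CITIES = 10                # charter cities funded in one "experiment"
P_BIG_SUCCESS = 0.05         # chance a city is a Shenzhen-style outlier
DIRECT_VALUE_FAIL = -1.0     # direct net value of a typical (failed) city
DIRECT_VALUE_SUCCESS = 5.0   # direct net value of an outlier success
REPLICATION_VALUE = 30.0     # value of lessons replicated elsewhere,
                             # unlocked if at least one city succeeds

def simulate_portfolio():
    successes = sum(random.random() < P_BIG_SUCCESS for _ in range(N_CITIES))
    direct = (successes * DIRECT_VALUE_SUCCESS
              + (N_CITIES - successes) * DIRECT_VALUE_FAIL)
    # The experimentation argument: one observable success generalizes.
    indirect = REPLICATION_VALUE if successes > 0 else 0.0
    return direct, indirect

trials = [simulate_portfolio() for _ in range(100_000)]
mean_direct = sum(d for d, _ in trials) / len(trials)
mean_total = sum(d + i for d, i in trials) / len(trials)
print(f"mean direct value: {mean_direct:+.2f}")  # about -7.0: negative
print(f"mean total value:  {mean_total:+.2f}")   # about +5.0: positive
```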
Again, I’m not here to complain about streetlight effects or express worry that EA tries too hard to quantify things. I understand the value of that approach. I’m specifically asking: in the Holden Open Phil world, which is expressly (as I understand it) more speculative, risk-neutral, and non-transparent than some other EA grantmakers, what is the role of public EA Forum discussion?
Some possibilities:
Public discussion is meant to settle specific questions, but not address broader questions of grant-worthiness.
Even in public, discussions can be productive as long as participants have sufficient context on Holden Open Phil priorities, either through informal channels or by interfacing with HOP directly (perhaps as a consultancy).
Absent that context, EA Forum serves more like training grounds on the path to work within a formal EA organization.
I was really confused by your post because it seemed to ask for normative rules against talking about philanthropy and grants to EA causes, which doesn’t seem reasonable.
Now, after reading your comments, I think what you meant is closer to:
“It seems unworkably hard to talk about grants in the new cause areas. What do we do?”
I’m still not sure if this is what you want, but since no one has really answered, I want to try to give thoughts that might serve your purposes.
From your comment:
I don’t understand the statement that “these are not the kinds of issue we are (or should be) discussing”.
To be specific:
This is a cause area question, and cause area questions seem totally up for discussion.
For example, someone could criticize a cause area by pointing to a substantial period of time, say 3 or 5 years, where progress has been low or stagnant, or by citing experts who say so, or by arguing that the area is plausibly already funded or solved.
(This seems possible but very difficult, both because of moral and epistemic uncertainty and because cause areas are non-zero-sum games.)
On the positive side, people can post new cause areas and discuss why they are important.
This seems much more productive, and there may even be strong demand for this.
It seems unlikely that an EA Forum discussion alone will establish a new cause area, but such a discussion seems like an extremely valuable use of the forum.
It seems reasonable to suggest that existing advisors provide low value or that new advisors could be added. This can be done diplomatically:
“EA has really benefited from the growth of the longtermist community; I wonder if the pool of Open Phil’s advisors has been expanded to match?”
“Here is a list of experts who are consistently highly valued by the community. Has Open Phil considered adding them as advisors?”
“I see that Person A was an advisor for this grant. I understand that Person B, who is also an expert, holds beliefs [plausible for these reasons] that seem to suggest a different view of this intervention.”
It seems easy to unduly pick holes in new orgs, but there are situations where things are very defective and the outlook is bad, and it’s very reasonable to point this out, again diplomatically:
“I think this org had several CEOs over a 2-year period. This is different from what I’ve seen in other EA orgs, and clarification about [issues with tangible output] would be useful.”
“I heard the founder talk at Stanford. During the talk, person A pointed out that X and Y were true. I think person A is an expert and their concerns weren’t addressed. Here is a summary of them...”
(Note that I think I have examples of most of the above that actually occurred. I don’t think it’s that productive or becoming to link them all.)
In the above, I tried to focus on criticism, because that is harder.
I think your post might be asking for more positive ways to communicate meta issues—this seems sort of easy (?).
To be clear:
I think a red herring here is that the wording in the “Case for the grant” sections is very terse. But I don’t think this terseness is a norm outside of grant descriptions, or necessarily the only way to talk about or signal the value of organizations.
For example, a post a few pages long offering a perspective on New Science that points out useful and interesting things would certainly be well received (the org does seem extremely interesting!). It could mention tangible projects and researchers, and otherwise tell truthful narratives suggesting the org is attracting and influencing talent or otherwise improving the life sciences ecosystem.
I might have more to say but I am worried I still “don’t get” your question.
Okay, that’s helpful to hear.
A lot of this question is inspired by the recent Charter Cities debate. For context:
Charter Cities Institute released a short paper a while back arguing that charter cities could be as good as top GiveWell charities
Rethink Priorities more recently shared a longer report, concluding that they were likely not as good as GiveWell charities
Mark Lutter (who runs CCI) replied, arguing that more optimistic model parameters are reasonable
This all makes sense within the GiveWell-style of philanthropy where we’re making cost-effectiveness estimates on short-run goods like increased consumption or decreased mortality.
But in the HoldenOpenPhil model, where we’re debating things like:
Is this an important cause area?
Does the organization seem well run?
Are there trusted expert advisors who endorse the organization?
I’m unclear on what kind of EA Forum post is:
Appropriate (meaning it isn’t just shilling for a charity: it reads as substantial analysis, not gossip over which organizations and causes certain groups find exciting)
Useful (meaning it carries some weight with respect to grant decisions by funders, specifically HoldenOpenPhil in this case, though not necessarily that it’s decisive or sufficient)
But isn’t GiveWell-style philanthropy exactly what doesn’t apply to your example of charter cities?
My sense is that the case for charter cities relies on some macro/systems process that is hard to measure (which is why it is only now emerging as a cause area and why the debate exists).
I specifically didn’t want to pull out examples, but if it’s helpful, here’s another example of a debate over an intervention that relies on difficult-to-measure outcomes and involves hard-to-untangle, divergent worldviews between the respective proponents.
(This is somewhat of a tangent, but honestly, your important question is inherently complex and there is a lot going on, so clarity from smoothing out some of the points seems valuable.)
I don’t understand why my answer in the previous post above, or these debates, aren’t object-level responses to how you could discuss the value of these interventions.
I’m worried I’m talking past you and not being helpful.
Now, trying more vigorously / speculatively here:
Maybe one answer is that you are right: it is hard to influence direct granting, and furthermore this means that directly influencing granting is not what we should be focused on in the forum.
At the risk of being prescriptive (which I dislike), I think this is a reasonable attitude on the forum, in the sense that “policing grants” should organically be a very low priority for most people, and that learning, communicating, and a “scout mindset” are ultimately more productive. But such discussions cannot be proscribed, and even a tacit norm against them would be bad.
Maybe you mean that this level of difficulty is “wrong” in some sense. For example, we should respond by paying special, unique attention to the HOP grants or expect them to be communicated and discussed actively. This seems not implausible.
I could see how HOP areas are harder, but as in my first comment, I think it’s inherently hard for anyone to criticize any well-researched grant, especially if you account for social factors, as you do.
However, I think there are ways to indicate comfort or discomfort with grants or even major EA orgs.
There are specific examples of this, where people have individually started posts that were influential and drew enormous attention to their concerns.
If you go to the all posts page and select “Yearly” and “Top”, you will find a remarkable example in the top 10 for 2021 (please do not link this post in any reply).
I’m pretty sure that some of these posts do influence Open Phil (but maybe not quite in the way we want).
Again, I chose a negative example because it’s harder.
There are abundant posts that promote or talk about work in a cause. In some sense, all blog posts from orgs promote their orgs, and pretty much any such honest writing from an EA is welcome.
I think there’s a causal chain where this can influence Open Phil or specific grant makers.
By the way, I’m 90% sure that two or more Open Phil members have read or will read your post already.
It seems to me that the problem isn’t just with Open Phil-funded speculative orgs, but with all speculative orgs.
I think it’s just as unclear how someone inside Open Phil could advocate for those. Open Phil might have access to some private information, but that won’t help much with something like estimating the EV of a highly speculative nonprofit.
I also don’t know for sure, but this example might be illustrative:
Ought General Support:
And:
So it’s not really a big expected value calculation. It’s more like:
We consider AI Safety to be very important
A trusted advisor is excited
Everything checks out at the operational level
It might not follow point-by-point, but I can imagine how a similar framework might apply to New Science / QRI / Charter Cities.
Returning to the original point: As far as I can tell, these are not the kinds of issue we are (or should be) discussing on EA Forum. I could be wrong, but it’s hard to imagine endorsing a norm where many top EA Forum posts are of the form “I talked to Alexey Guzey from New Science, it seems exciting” or worse “I talked to Adam Marblestone about New Science, and he seems excited about it”.
Full disclosure, I did talk to Alexey about New Science and it did seem exciting. I also talked to Andrew at QRI and Mark at Charter Cities, and they all seemed exciting! But precisely the point of this question is to figure out how I’m supposed to frame that endorsement in a way that is both appropriate and useful.