I think there are various reasons for not making such a list public:
It will (literally) create a first tier, second tier, etc. of organisations in the effective altruism community, which feels bad/confusing.
People will associate the organisation the grant is given to with the tier, even though it was that specific grant that was evaluated.
The information provided publicly for a given grant is likely only a small subset of the information used to decide the tier, but people just looking through the list won’t know or acknowledge that, leading to confusion about the actual bar.
If an organisation submits a funding request containing different activities, Open Phil will fund all those above the bar, but the different activities can be in different tiers, so what should be done in this case?
Organisations will likely want more information about why their grant is in a specific tier, which might lead to additional work for lots of people.
Several of the above points might lead to confusion among people trying to understand what the funding bar is.
I’m also slightly confused about the advantages you mention:
Those of us who are creating new projects would have a much better understanding of what OpenPhil would fund and be able to create projects better aligned with OpenPhil’s goals. The EA community lacks a strong longtermist incubator, and I expect this is one of the challenges.
Isn’t this already possible to a large extent, since OpenPhil publishes the grants they have made? (I acknowledge that we are currently in a period of maybe a year or so where this is not really the case because the bar changed, and maybe a list would help for this period, but not in general.)
Other funders could fill gaps that they believe OpenPhil has missed, or otherwise use OpenPhil’s tiers in their decision making.
I don’t understand the first point; I think this would only work if OpenPhil also published the grant requests that they don’t fund. The second point might be true, but it could also be a disadvantage.
It allows OpenPhil to receive useful constructive feedback or critiques.
That’s true, but it could also lead to non-constructive feedback and critiques, or non-constructive discussions in the community.
I’m not saying that OpenPhil definitively shouldn’t publish the list, but I think there would be a lot of points for and against to weigh up.
Yeah, I somewhat agree this would be a challenge; there is a trade-off between the time needed to do this well and carefully (as it would need to be done well and carefully) and other things that could be done.
I would be quite surprised if the various issues were insurmountable. I am not an expert in how to publish public evaluations of organisations without upsetting those organisations or misleading people, but connected orgs like GiveWell do this frequently enough and must have learnt a thing or two about it in the past few years. To take one of the concerns you raise: if you are worried about people reading too much into the list and judging the organisations that requested the grants rather than the specific grants, you could publish the list in a pseudonymised way, removing the names of organisations and the exact amounts of funding. Sure, people could connect the dots, but it would help prevent misunderstanding and make it clearer that the judgement is of grants, not organisations.
Anyway, to answer your questions:
On creating new projects – it is easier for the Charity Entrepreneurship research team to assess funding availability and the bar to beat for global health projects than for biosecurity projects. Sure, we can look at where OpenPhil has given, but there is no detail there. It is hard to know how much they base their decisions on different factors, such as how trusted the people running the project are, versus some bar of expected effectiveness, versus something else. Ultimately this can make us more hesitant to try to start new organisations that would aim to get funding from OpenPhil’s longtermist teams than we are to start new organisations that would aim to get funding from GiveWell (or other very transparent organisations). This uncertainty about future funding is also a barrier we see in potential entrepreneurs, and more clarity feels useful.
On other funders filling gaps that they believe OpenPhil has missed – I recently wrote a critique of the Long-Term Future Fund pointing out that they have ignored policy work. This has led some other funders to look into the space. This was only possible because their grants and grant evaluations are public. (It did require having inside knowledge of the space about who was looking for funding.) Honestly, OpenPhil is already pretty good at this: you can see all their grants, identify gaps (for example, I believe no longtermist team at OpenPhil has ever given to any policy work outside the US), and then direct funds to fill those gaps. It is unclear to me how much more useful the tiers would be, but I expect the lower tiers would highlight areas OpenPhil is unlikely to fund in the future, and other funders could look at what they think is valuable in that space and fund it.
(All views my own; not speaking for any org or for Charity Entrepreneurship, etc.)
There’s a lot of policy work, it’s just not getting identified.
In biorisk, OpenPhil funds the Center for Health Security, NTI, and the Council on Strategic Risks. In AI, they fund GovAI, CNAS, Carnegie, and others. Those are all very policy-heavy.
The OP biosecurity and PP team just made a grant recently for health security policy work in Australia, albeit a smaller one.
Great! It’s good to see things changing :-) Thank you for the update!
And without minimizing all the effort that went into the list, it was compiled fairly quickly with a specific purpose in mind. I’d expect OP to devote more of the limited time available to classifying grants near where it expected the new bars to be; ensuring high accuracy between tier 1 vs 2 vs 3 (maybe even vs. high tier 4) probably wasn’t at the top of the priority list. So it would probably be safer to view the assigned tiers as +/- 1 tier, which significantly limits their usefulness.
Also, unless OP released a ranked list, we wouldn’t know where within a tier a grant fell. My guess is that there isn’t that much difference in absolute quality between the bottom of tier 4 and the top of tier 5, and that line could move based on market conditions, cause area allocation, etc.
I do think that at least grantees should be told.