Thanks for the useful post, Holden.
I think it would be great to see the full published tiered list.
In global health and development, funders (e.g. Open Phil and GiveWell) are very specific about the funding bar and exactly who they think is above it and who they think is below it. Recently, global development funders (well, GiveWell) have even actively invited open constructive criticism of and debate about their decision making. It would be great to have the same level of transparency (and openness to challenge) for longtermist grantmaking.
Is there a plan to publish the full tiered list? If not, what is the reason, or the best case against making it public?
To flag some of the advantages:
Those of us who are creating new projects would have a much better understanding of what Open Phil would fund, and would be able to create projects better aligned with Open Phil's goals. The EA community lacks a strong longtermist incubator, and I expect this is one of the challenges.
Other funders could fill gaps they believe Open Phil has missed, or otherwise use Open Phil's tiers in their own decision making.
It allows Open Phil to receive useful constructive feedback and critiques.
I think there are various reasons for not making such a list public:
It would (literally) create a first tier, second tier, etc. of organisations in the effective altruism community, which feels bad/confusing.
People will associate the organisation that received the grant with the tier, when it was actually that specific grant that was evaluated.
The information provided publicly for a given grant is likely only a small subset of the information used to decide the tier, but people just looking through the list won't know or acknowledge that, leading to confusion about the actual bar.
If an organisation submits a funding request containing different activities, Open Phil will fund all those above the bar, but the different activities can fall into different tiers, so what should be done in that case?
Organisations will likely want more information about why their grant is in a specific tier, which might lead to additional work for lots of people.
Several of the above points might simply lead to confusion among people trying to understand what the funding bar is.
I'm also slightly confused about the advantages you mention:
Those of us who are creating new projects would have a much better understanding of what Open Phil would fund, and would be able to create projects better aligned with Open Phil's goals. The EA community lacks a strong longtermist incubator, and I expect this is one of the challenges.
Isn't this to a large extent already possible, given that Open Phil publishes the grants it makes? (I acknowledge there is a period of maybe a year or so, which we are in now, where this is not really the case because the bar changed; perhaps publishing would help for this period, but not in general.)
Other funders could fill gaps they believe Open Phil has missed, or otherwise use Open Phil's tiers in their own decision making.
I don't understand the first point; I think this would only work if Open Phil also published the grant requests that they don't fund(?). The second point might be true, but could also be a disadvantage.
It allows Open Phil to receive useful constructive feedback and critiques.
That's true, but it could also lead to non-constructive feedback and critiques, or non-constructive discussions in the community.
I'm not saying that Open Phil definitively shouldn't publish the list, but I think there would be a lot of points for and against to weigh up.
Yeah, I somewhat agree this would be a challenge, and there is a trade-off between the time needed to do this well and carefully (as it would need to be done well and carefully) and other things that could be done.
I think it would surprise me a lot if the various issues were insurmountable. I am not an expert in how to publish public evaluations of organisations without upsetting those organisations or misleading people, but connected orgs like GiveWell do this frequently enough and must have learnt a thing or two about it in the past few years. To take one of the concerns you raise: if you are worried about people reading too much into the list and judging the organisations that requested the grants rather than the specific grants, you could publish the list in a pseudonymised way, removing the names of organisations and the exact funding amounts. Sure, people could connect the dots, but it would help prevent misunderstanding and make it clearer that the judgement is of grants, not organisations.
Anyway to answer your questions:
On creating new projects: it is easier for the Charity Entrepreneurship research team to assess funding availability and the bar to beat for global health projects than for biosecurity projects. Sure, we can look at where Open Phil has given, but there is no detail there. It is hard to know how much they base their decisions on different factors, such as how trusted the people running the project are, versus some bar of expected effectiveness, versus something else. Ultimately this can make us more hesitant to try to start new organisations that would aim to get funding from Open Phil's longtermist teams than we are to start new organisations that would aim to get funding from GiveWell (or other very transparent organisations). This uncertainty about future funding is also a barrier we see in potential entrepreneurs, and more clarity feels useful.
On other funders filling gaps they believe Open Phil has missed: I recently wrote a critique of the Long-Term Future Fund pointing out that it has ignored policy work. This has led to some other funders looking into the space. This was only possible because their grants and grant evaluations are public. (It did also require inside knowledge of the space about who was looking for funding.) Honestly, Open Phil is already pretty good at this: you can see all their grants and identify gaps (for example, I believe no longtermist team at Open Phil has ever given to any policy work outside the US) and then direct funds to fill those gaps. It is unclear to me how much more useful the tiers would be, but I expect the lower tiers would highlight areas Open Phil is unlikely to fund in the future, and other funders could look at what they think is valuable in that space and fund it.
(All views my own; not speaking for any org, or for Charity Entrepreneurship, etc.)
There's a lot of policy work; it's just not being identified as such.
In biorisk, Open Phil funds the Center for Health Security, NTI, and the Council on Strategic Risks. In AI, they fund GovAI, CNAS, Carnegie, and others. Those are all very policy-heavy.
The Open Phil biosecurity and pandemic preparedness team also just gave a grant recently for health security policy work in Australia, albeit a smaller one.
Great! It's good to see things changing :-) Thank you for the update!
And without minimizing all the effort that went into the list, it was compiled fairly quickly with a specific purpose in mind. I'd expect OP to devote more of the limited time available to classifying grants near where it expected the new bar to be; ensuring high accuracy in tier 1 vs 2 vs 3 (maybe even vs high 4) probably wasn't at the top of the priority list. So it would probably be safer to view the determined tiers as accurate to within +/- 1 tier, which significantly limits their usefulness.
Also, unless OP released a ranked list, we wouldn't know where in a tier a grant fell. My guess is that there isn't much difference in absolute quality between the bottom of tier 4 and the top of tier 5, and that line could move based on market conditions, cause area allocation, etc.
I do think that at least grantees should be told.
If grantee concerns are a reason against doing this, you could allow grantees to opt into having their tiers shared publicly. Even an incomplete list could be useful.
I’d personally happily opt in with the Atlas Fellowship, even if the tier wasn’t very good.
If a concern is that the community would read too much into the tiers, some disclaimers and encouragement for independent thinking might help counteract that.
I'd happily opt in with regard to Rethink Priorities, even if the tier wasn't very good.
Same for Lightcone.
They made ~142 grants in that 18-month period. Assuming some grantees received multiple grants, that's still maybe 100-120 grantees to contact about whether they want to opt in. Presumably most grantees will want to see, if not dispute, their tiered ranking before they opt in to publishing it. This will all take a fair amount of time, and perhaps time at a senior level: e.g. the relevant relationship-holder (presumably the Program Officer) will need to contact the grantees, and then the CEO of the grantee will want to see the ranking and perhaps dispute it. It also runs a fair risk of damaging relationships with grantees.
So I would not be surprised if Open Phil did not release the full tiered ranking. What they could do is release the list of grants they considered (or confirm whether I or others are correct in our attempted replication). Then we would at least know the 'universe of cases' they considered.
I'd think that getting half a dozen individual data points would be sufficient for 90+% of the value, and we're at least a third of the way there in this thread alone.
Same for QURI (assuming OP ever evaluates/funds QURI).
I retracted my comment. I still think it would be useful for the Atlas Fellowship to know its tier, and I’d be happy for others to learn about Atlas’s tier even if it was bad.
But I think people would have all kinds of incorrect interpretations of the tiers, it would produce further low-quality discussion on the Forum (where quality already seems pretty low, especially as far as Open Phil critiques go), and it could be a hassle for Open Phil. Basically, I agree with this comment, and I don't trust the broader EA community to correctly interpret the tier numbers.
Oh, and I also don't know whether publishing the tiers would be straightforwardly good. But just in case anyone is thinking about making any kind of tier list, including Open Phil ranking orgs, feel free to include Lightcone in it.
Similar. I think I’m happy for QURI to be listed if it’s deemed useful.
Also, though, I think that sharing information is generally a good thing, this type included.
More transparency here seems pretty good to me. That said, I get that some people really hate public rankings, especially in the early stages of them.
I happily opt in with regards to any future organization I found, but only if the tier is pretty good.
It would also be useful for organizations to at least privately know the tiers of past grants to them, to have a better idea of how likely they are to be funded in the future. (Edit: Sanjay said this.)
If organisations were privately informed of their tier, then the additional work of asking (even in the same email) whether they would want to opt into sharing it would be low/negligible.
Of course, people may dispute their tier or only be happy to share if they are in a high tier, but this should at least slightly weaken the argument that asking people for consent for a public list would be a lot of additional work.