If grantee concerns are a reason against doing this, you could allow grantees to opt into having their tiers shared publicly. Even an incomplete list could be useful.
I’d personally happily opt in with the Atlas Fellowship, even if the tier wasn’t very good.
If a concern is that the community would read too much into the tiers, some disclaimers and encouragement for independent thinking might help counteract that.
They made ~142 grants in that 18-month period. Assuming some grantees received multiple grants, that still leaves maybe 100-120 grantees to contact and ask whether they want to opt in or not. Presumably most grantees will want to see, if not dispute, their tiered ranking before they opt in to publishing it. This will all take a fair amount of time, perhaps time at a senior level: e.g. the relevant relationship-holder (presumably the Program Officer) will need to contact the grantees, and then the CEO of the grantee organisation will want to see the ranking and perhaps dispute it. It also runs a fair risk of damaging relationships with grantees.
So I would not be surprised if Open Phil did not release the full tiered ranking. What they could do is release the list of grants they considered (or confirm whether I or others are correct in our attempted replication). Then we would at least know the 'universe of cases' they considered.
I’d think that getting a half dozen individual data points would be sufficient for 90+% of the value, and we’re at least a third of the way there in this thread alone.
I retracted my comment. I still think it would be useful for the Atlas Fellowship to know its tier, and I’d be happy for others to learn about Atlas’s tier even if it was bad.
But I think people would have all kinds of incorrect interpretations of the tiers, and it would produce further low-quality discussion on the Forum (which already seems pretty low, especially as far as Open Phil critiques go), and it could be a hassle for Open Phil. Basically I agree with this comment, and I don’t trust the broader EA community to correctly interpret the tier numbers.
Oh, I also don’t know whether publishing the tiers would be straightforwardly good. Just in case anyone is thinking about making any kind of tier list, including Open Phil ranking orgs, feel free to include Lightcone in it.
I happily opt in with regard to Rethink Priorities, even if the tier wasn’t very good.
Same for Lightcone.
Same for QURI (assuming OP ever evaluates/funds QURI).
Similar. I think I’m happy for QURI to be listed if it’s deemed useful.
More broadly, though, I think that sharing information is generally a good thing, and this type is no exception.
More transparency here seems pretty good to me. That said, I get that some people really hate public rankings, especially in the early stages of them.
I happily opt in with regard to any future organization I found, but only if the tier is pretty good.