We’ve been considering an effort like this on Manifund’s side, and will likely publish some (very rudimentary) results soon!
Here are some of my guesses as to why this hasn't happened already:
- As others have mentioned, longtermism/x-risk work has long feedback loops, and the impact of different kinds of work is very sensitive to background assumptions.
- AI safety is newer as a field. It's more like early-stage venture funding (which is about speculating on unproven teams and ideas) or academic research than public equities (where there's lots of data for analysts to pore over).
- AI safety is also a tight-knit field, so impressions travel by word of mouth rather than through public analyses.
- It takes a special kind of person to do GiveWell-style analyses well; grantmaking skill is rare. It then takes some thick skin to publish work that's critical of people in a tight-knit field.
- OpenPhil and Longview don't have much incentive to publish their own analyses (as opposed to just showing them to their own donors): they'll get funded either way, and on the flip side, publishing their work exposes them to downside risk.