Agreed. GiveWell has revised its estimates numerous times based on public feedback, including dropping entire programmes after evidence emerged that its initial reasons for funding were excessively optimistic, and it is nevertheless generally well regarded, including outside EA. Most people understand its analysis will not be bug-free.
OpenPhil’s decision to fund Wytham Abbey, on the other hand, was hotly debated before they’d published even a paragraph summary. I don’t think declining to make any metrics available except the price tag increased people’s confidence in the decision-making process, and participants appear to admit that with hindsight they would have been better off doing more research and/or giving more consideration to external opinion. If the intent is to shield leadership from criticism, it isn’t working.
Obviously GiveWell exists to advise the public, so sharing detail is its raison d’être, whereas OpenPhil exists to advise Dustin Moskovitz and Cari Tuna, who will have access to all the detail they need to decide on a recommendation. But I think there are wider considerations in favour of publicising more about projects and the rationale behind decisions, even if OpenPhil doesn’t expect corrections to its calculations to be useful.
Increased clarity about funding criteria would reduce time spent (on both sides) on proposals for projects OpenPhil would be highly unlikely to fund, and probably improve the relevance and quality of the average submission.
There are a lot of other funders out there, and many OpenPhil-supported causes have room for additional funding.
Publicly shared OpenPhil analysis could help other donors conclude particular organisations are worth funding (just as I imagine OpenPhil itself is happy to use assessments by organisations it trusts), ultimately leading to its favoured causes having more funds at their disposal.
Or EA methodologies could in theory be adopted by other grantmakers doing their own analysis. It seems private foundations are much happier borrowing more recent methodological ideas from MacKenzie Scott, but generally have a negative perception of EA. Adoption of TBF might be mainly down to its relative simplicity, but you don’t exactly make the case for the virtues of the ITN framework by hiding the analysis...
Lastly, whilst OpenPhil’s primary purpose is to help Dustin and Cari give their money away, it’s also the flagship grantmaker of EA, so the signals it sends about effectiveness, rigour, transparency and willingness to update have an outsized effect on whether people believe the movement overall is living up to its own hype. I think that alone is a bigger reputational issue than a grantmaker using a disputed figure or getting its sums wrong.
The non-reputational costs matter too, and it’d be unreasonable to expect enormously time-consuming GiveWell- and CE-style analysis for every grant, especially with the grants already made and recipients sometimes not even considering additional funding sources. But there’s a happy medium between elaborate reasoning/spreadsheets and a single paragraph. Even publishing sections from the original application (essentially zero additional work) would be an improvement in transparency.