I think it's a travesty that so many valuable analyses are never publicly shared, but due to unreasonable external expectations it's currently hard for any single organization to become more transparent without incurring enormous costs.
If Open Phil actually were to start publishing their internal analyses behind each grant, I will bet you at good odds that the following scenario is going to play out on the EA Forum:
1. Somebody digs deep into a specific analysis. It turns out Open Phil's analysis has several factual errors that any domain expert could have alerted them to; additionally, they entirely failed to consider some important aspect which might change the conclusion.
2. Somebody in the comments accuses Open Phil of shoddy and irresponsible work. That they are making such large donation decisions based on work filled with errors proves their irresponsibility. Moreover, why have they still not responded to the criticism?
3. A new meta-post argues that the EA movement needs reform, and uses the above as one of several examples showing the incompetence of "EA leadership".
Several things would be true about the above hypothetical example:
1. Open Phil's analysis did, in fact, have errors.
2. It would have been better for Open Phil's work not to have those errors.
3. The errors were only found because they chose to make the analysis public.
4. The costs for Open Phil to reduce the error rate of analyses would not be worth the benefits.
5. These mistakes were found at no cost (outside of reputation) to the organization.
6. Criticism shouldn't have to warrant a response if it takes time away from work which is more important.
The internal analyses from Open Phil I've been privileged to see were pretty good. They were also made by humans, who make errors all the time.
In my ideal world, every one of these analyses would be open to the public. As with open-source programming, people would be able to contribute to every analysis, fixing bugs, adding new insights, and updating old analyses as new evidence comes out.
But like an open-source programming project, there has to be an understanding that no repository is ever going to be bug-free or have every feature.
If Open Phil shared all their analyses and nobody was able to discover important omissions or errors, my main conclusion would be that they are spending far too much time on each analysis.
Some EA organizations are held to impossibly high standards. Whenever somebody points this out, a common response is: "But the EA community should be held to a higher standard!" I'm not so sure! The bar is where it's at because it takes significant effort to raise it. EA organizations are subject to the same constraints as the rest of the world.
More openness requires a lowering of expectations. We should strive for a culture that is high in criticism, but low in judgement.
I think you are placing far too little faith in the power of the truth. None of the events you list above are bad. It's implied that they are bad because they will cause someone to unfairly judge Open Phil poorly. But why presume that more information will lead to worse judgment? It may lead to better judgment.
As an example, GiveWell publishes detailed cost-effectiveness spreadsheets and analyses, which definitely make me take their judgment way more seriously than I would otherwise. They also provide fertile ground for criticism (a popular recent magazine article and essay did just that, nitpicking various elements of the analyses that it thought were insufficient). The idea that GiveWell's audience would then think worse of them in the end because of the existence of such criticism is not credible to me.
Agreed. GiveWell has revised their estimates numerous times based on public feedback, including dropping entire programmes after evidence emerged that their initial reasons for funding were excessively optimistic, and is nevertheless generally well regarded, including outside EA. Most people understand its analysis will not be bug-free.
OpenPhil's decision to fund Wytham Abbey, on the other hand, was hotly debated before they'd published even the paragraph summary. I don't think declining to make any metrics available except the price tag increased people's confidence in the decision-making process, and participants in it appear to admit that with hindsight they would have been better off doing more research and/or more consideration of external opinion. If the intent is to shield leadership from criticism, it isn't working.
Obviously GiveWell exists to advise the public, so sharing detail is their raison d'être, whereas OpenPhil exists to advise Dustin Moskovitz and Cari Tuna, who will have access to all the detail they need to decide on a recommendation. But I think there are wider considerations in favour of publicising more about projects and the rationale behind decisions, even if OpenPhil doesn't expect to find corrections to its calculations useful:
Increased clarity about funding criteria would reduce time spent (on both sides) on proposals for projects OpenPhil would be highly unlikely to fund, and probably improve the relevance and quality of the average submission.
There are a lot of other funders out there and many OpenPhil-supported causes have room for additional funding.
Publicly shared OpenPhil analysis could help other donors conclude particular organizations are worth funding (just as I imagine OpenPhil itself is happy to use assessments by organizations it trusts), ultimately leading to its favoured causes having more funds at their disposal.
Or EA methodologies could in theory be adopted by other grantmakers doing their own analysis. It seems private foundations are much happier borrowing more recent methodological ideas from MacKenzie Scott, but generally have a negative perception of EA. Adoption of TBF might be mainly down to its relative simplicity, but you don't exactly make a case for the virtues of the ITN framework by hiding the analysis...
Lastly, whilst OpenPhil's primary purpose is to help Dustin and Cari give their money away, it's also the flagship grantmaker of EA, so the signals it sends about effectiveness, rigour, transparency and willingness to update have an outsized effect on whether people believe the movement overall is living up to its own hype. I think that alone is a bigger reputational issue than a grantmaker using a disputed figure or getting their sums wrong.
The non-reputational costs matter too, and it'd be unreasonable to expect enormously time-consuming GiveWell- and CE-style analysis for every grant, especially with the grants already made and recipients sometimes not even considering additional funding sources. But there's a happy medium between elaborate reasoning/spreadsheets and a single paragraph. Even publishing sections from the original application (essentially zero additional work) would be an improvement in transparency.
As a critic of many institutions and organizations in EA, I agree with the above dynamic and would like people to be less nitpicky about this kind of thing (and I tried to live up to that virtue by publishing my own quite rough grant evaluations in my old Long Term Future Fund writeups).
Thanks for the thoughtful reply, Mathias!
I think it's a travesty that so many valuable analyses are never publicly shared, but due to unreasonable external expectations it's currently hard for any single organization to become more transparent without incurring enormous costs.
I think this applies to organisations with uncertain funding, but not Open Philanthropy, which is essentially funded by a billionaire quite aligned with their strategy?
The internal analyses from Open Phil I've been privileged to see were pretty good. They were also made by humans, who make errors all the time.
Even if the analyses do not contain errors per se, it would be nice to get clarity on morals. I wonder whether Open Philanthropy's prioritisation among human and animal welfare interventions in their global health and wellbeing (GHW) portfolio considers 1 unit of welfare in humans as valuable as 1 unit of welfare in animals. It does not look like it, as I estimate the cost-effectiveness of corporate campaigns for chicken welfare is 680 times Open Philanthropy's GHW bar.
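To make the arithmetic concrete, here is a minimal sketch of how such a cross-species comparison could be computed. Every number and variable name below is a placeholder I am using for illustration, not Open Philanthropy's actual figures or my estimate's actual inputs:

```python
# Hypothetical sketch of a cross-species cost-effectiveness comparison.
# All figures below are made-up placeholders, not real estimates.

# Funder's human-welfare bar: welfare units bought per dollar at the margin.
human_bar_welfare_per_dollar = 1.0

# Placeholder inputs for a corporate chicken-welfare campaign.
chicken_years_improved_per_dollar = 15.0  # chicken-years affected per dollar
welfare_gain_per_chicken_year = 0.5       # welfare units gained per chicken-year
moral_weight_chicken = 0.33               # value of 1 chicken welfare unit vs 1 human unit

campaign_welfare_per_dollar = (
    chicken_years_improved_per_dollar
    * welfare_gain_per_chicken_year
    * moral_weight_chicken
)

# A ratio above 1 means the campaign beats the human-welfare bar
# under these (entirely illustrative) assumptions.
ratio = campaign_welfare_per_dollar / human_bar_welfare_per_dollar
print(f"Campaign cost-effectiveness is {ratio:.1f}x the GHW bar")
```

The point of the sketch is that the final ratio is driven as much by the assumed moral weight as by the empirical inputs, which is why clarity on the moral side matters.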
The costs for Open Phil to reduce the error rate of analyses would not be worth the benefits.
[...]
Criticism shouldn't have to warrant a response if it takes time away from work which is more important.
On the one hand, I agree it is important to be mindful of the time it would take to improve decisions. On the other, I think it would be quite worth it for Open Philanthropy to make the main text of the write-ups of its million-dollar grants longer than 1 paragraph, and to explain how they prioritise between human and animal interventions. Hundreds of millions of dollars are at stake in these decisions. Open Philanthropy also has great researchers who could (relatively) quickly provide adequate context for their decisions. My sense is that transparency is not among Open Philanthropy's priorities.
Transparency also facilitates productive criticism.
There's a lot of room between publishing more than ~1 paragraph and "publishing their internal analyses." I didn't read Vasco as suggesting publication of the full analyses.
Assertion 4 -- "The costs for Open Phil to reduce the error rate of analyses would not be worth the benefits" -- seems to be doing a lot of work in your model here. But it seems to be based on assumptions about the nature and magnitude of errors that would be detected. If a number of errors were material (in the sense that correcting them would have changed the grant/no-grant decision, or would have seriously changed the funding level), I don't think it would take many errors for assertion 4 to be incorrect.
Moreover, if an error were found in, e.g., a five-paragraph summary of a grant rationale, the odds of the identified error being material/important would seem higher than for the average error found in (say) a 30-page writeup. Presumably the facts and conclusions that made it into the short writeup would be ~the more important ones.
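As a toy model of this intuition (one I am adding purely for illustration; none of these numbers come from the thread): suppose claims appear in a writeup in rough order of importance, with the i-th claim having importance 1/i, and a discovered error is equally likely to sit in any published claim. A randomly found error in a 5-claim summary then has a much higher expected importance than one in a 30-claim report:

```python
# Toy model (all assumptions invented for illustration): claims are ranked
# by importance, claim i has importance 1/i, and a discovered error is
# equally likely to be in any of the top-n claims that were published.

def mean_error_importance(n_claims: int) -> float:
    """Expected importance of an error found among the top n claims."""
    return sum(1 / i for i in range(1, n_claims + 1)) / n_claims

print(f"5-claim summary: {mean_error_importance(5):.2f}")   # ~0.46
print(f"30-claim report: {mean_error_importance(30):.2f}")  # ~0.13
```

Under this toy ranking, an error surfaced in the short summary is roughly three times as important in expectation, which is the point made above.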
What you say is true. One thing to keep in mind is that academic data, analyses, and papers are usually all made public these days. Yes, with OpenPhil, funding rather than just academic rigor is involved, but I feel like we should aim to have at least the same level of transparency as academia...
What if, instead of releasing very long reports about decisions that were already made, there were a steady stream of small analyses on specific proposals, or even parts of proposals, to enlist others to aid error detection before each decision?