I previously gave a fair bit of feedback to this document. I wanted to quickly give my take on a few things.
Overall, I found the analysis interesting and useful. However, I have a somewhat different take than Nuno does.
On OP:
- Aaron Gertler / OP were given a previous version of this that was less carefully worded. To my surprise, he recommended going forward with publishing it for the sake of community discourse; I’m really thankful for that.
- This analysis didn’t change my mind much about Open Philanthropy. I thought fairly highly of them before reading it and still do, and I expect that many others who have been around would think similarly. I think they’re a fair bit away from being an “idealized utilitarian agent” (in part because they explicitly claim not to be one), but still much better than most charitable foundations and the like.
On this particular issue:
- My guess is that in the case of criminal justice reform, there were some key facts of the decision-making process that aren’t public and are unlikely to ever be public. It’s very common in large organizations for compromises to be made for various political or social reasons, for example. I’ve previously written a bit about similar things [here](https://twitter.com/ozziegooen/status/1456992079326978052).
- I think Nuno’s quantitative estimates were pretty interesting, but I wouldn’t be too surprised if other smart people would come up with numbers that are fairly different. For those reading this, I’d take the quantitative estimates with a lot of uncertainty.
- My guess is that a “highly intelligent idealized utilitarian agent” probably would have invested a fair bit less in criminal justice reform than OP did, if at all.
On evaluation, more broadly:
- I’ve found OP to be a very intimidating target of critique or evaluation, mainly because of their position. Many of us are likely to want funding from them in the future (or from people who listen to them), so the risk of upsetting people at OP is very high. From a cost-benefit perspective, publicly critiquing OP (or other high-status EA organizations) seems pretty risky. This is obviously unfortunate; these groups are often appreciative of feedback, and of course, they are some of the most useful groups to give feedback to. (Sometimes prestigious EAs complain about getting too little feedback; I think this is one reason why.)
- I really would hate for this post to be taken as “ammunition” by people with agendas against OP. I’m fairly paranoid about this. That wasn’t the point of this piece at all. If future evaluations are mainly used as “ammunition” by “groups with grudges”, then that makes it far more hazardous and costly to publish them. If we want lots of great evaluations, we’ll need an environment that doesn’t weaponize them.
- Similarly to the above point, I prefer these sorts of analyses, and the resulting discussions, to be fairly dispassionate and rational. When dealing with significant charity decisions, I think it’s easy for some people to get emotional (“$200M could have saved X lives!”). But in the scheme of things, there are many decisions like this to make, and there will definitely be large mistakes. Our main goal should be to learn quickly and continue to improve our decisions going forward.
- One huge set of missing information is OP’s internal judgements of specific grants. I’m sure they’re very critical now of some groups they’ve previously funded (in all causes, not just criminal justice). However, it would likely be very awkward and unprofessional to actually release this information publicly.
- For many of the reasons mentioned above, I think we can rarely fully trust the public reasons for large actions by large institutions. When a CEO leaves to “spend more time with family”, there’s almost always another good explanation. I think OP is much better than most organizations at being honest, but I’d expect that they still face this issue to an extent. As such, I think we shouldn’t be too surprised when some decisions they make seem strange when evaluating them based on their given public explanations.
I really appreciate this comment; it feels like it’s drawing from models deeper than my own.
It’s interesting that you say that, given what is, in my eyes, a low amount of content in this comment. What is a model, or model-extracted part, that you liked in this comment?
Some of my models feel like they have a mix of reasonable stuff and wanton speculation, and this comment makes it a bit clearer which parts of the wanton speculation are more reasonable, and which are more on the deep end.
For instance:
> in the case of criminal justice reform, there were some key facts of the decision-making process that aren’t public and are unlikely to ever be public

> My guess is that a “highly intelligent idealized utilitarian agent” probably would have invested a fair bit less in criminal justice reform than OP did, if at all.

> I think we can rarely fully trust the public reasons for large actions by large institutions. When a CEO leaves to “spend more time with family”, there’s almost always another good explanation. I think OP is much better than most organizations at being honest, but I’d expect that they still face this issue to an extent. As such, I think we shouldn’t be too surprised when some decisions they make seem strange when evaluating them based on their given public explanations.
Well, this is still confusing to me.
> in the case of criminal justice reform, there were some key facts of the decision-making process that aren’t public and are unlikely to ever be public

This seems obviously true; in fact, a continued premise of your post is that there are key facts absent that could explain (or fail to explain) one decision or another. But is this particularly true of criminal justice reform? Compared to, I don’t know, orgs like AMF (which are hyper-transparent by design), maybe; compared to work around AI risk, I think not.
> My guess is that a “highly intelligent idealized utilitarian agent” probably would have invested a fair bit less in criminal justice reform than OP did, if at all.

This is basically the same thesis as your post, so it doesn’t actually convey much information (it’s what I assume anyone would have already guessed Ozzie thought).
> I think we can rarely fully trust the public reasons for large actions by large institutions. When a CEO leaves to “spend more time with family”, there’s almost always another good explanation. I think OP is much better than most organizations at being honest, but I’d expect that they still face this issue to an extent. As such, I think we shouldn’t be too surprised when some decisions they make seem strange when evaluating them based on their given public explanations.

Yeah, I mean, no kidding. But it’s called Open Philanthropy. It’s easy to imagine there exists a niche for a meta-charity with high transparency and visibility, and it seems clear that Open Philanthropy advertises itself as a fulfillment of this niche as much as possible, and that donors do want this. So when their behavior seems strange in a cause area and the amount of transparency about it is very low, I think this is notable, even if the norm among orgs is to obfuscate internal phenomena. So I don’t really endorse any normative takeaway from this point about how orgs usually obfuscate information.
> It’s easy to imagine there exists a niche for a meta-charity with high transparency and visibility, and it seems clear that Open Philanthropy advertises itself as a fulfillment of this niche as much as possible, and that donors do want this.

I don’t understand this point. Can you spell it out?
From my perspective, Open Phil’s main legible contribution is a) identifying great donation opportunities, b) recommending Cari Tuna and Dustin Moskovitz to donate to such opportunities, and c) building up an apparatus to do so at scale.
Their donors are specific people, not hypothetical “donors who want transparency.” I assume Open Phil is quite candid/transparent with their actual donors, though of course I don’t have visibility here.
In fairness, the situation is a bit confusing. Open Phil came out of GiveWell, which is meant for external donors. By contrast, as Linch mentioned, Open Phil mainly recommends donations just to Good Ventures (Cari Tuna and Dustin Moskovitz). My impression is that OP’s main concern is directly making good grants, not recommending good grants to other funders. Given that, a large amount of public research is not particularly crucial.
I think the name is probably not quite ideal for this purpose. I think of it more like “Highly Effective Philanthropy”; it seems their comparative advantage / unique attribute is much more their choices of focus and their talent pool, than it is their openness, at this point.
If there is frustration here, it seems like the frustration is a bit more “it would be nice if they could change their name to be more reflective of their current focus”, than “they should change their work to reflect the previous title they chose”.
Sorry, I did not realize that OP doesn’t solicit donations from non-megadonors. I agree this recontextualizes how we should interpret their transparency.
Given the lack of donor diversity, though, I am confused about why their cause areas are so diverse.
How do you balance your high opinion of Open Phil with the assumption that there’s information that cannot be made public, and which tips the scales in important decisions? How can you judge Open Phil’s decisions in this case?
> How do you balance your high opinion of Open Phil with the assumption that there’s information that cannot be made public, and which tips the scales in important decisions?

This is almost always the case for large organizations. All CEOs and government officials have a lot of private information that influences their decision-making.
This private information does make it much more difficult for external evaluators to assess them. However, there’s often still a lot that can be inferred. It’s really important that these evaluators stay humble about their analyses, given that there’s a lot of private information, but it’s also important that they still try with the information available.