Something I did not see mentioned here as a potential critique: Open Phil’s work on macro policy seemed to be motivated by a (questionable?) assumption of differing values from those who promote tighter policy. Here is Holden Karnofsky with Ezra Klein [I think the last sentence got transcribed poorly, but the point is clear]:
And so, this is not about Holden going and learning all about macroeconomic policy and then going and explaining to the Federal Reserve that they’ve got it wrong. That’s not what happened. We funded groups that have their own expertise, that are part of the debate going on. There are experts on both sides. But we funded a particular set of values that says full employment is very important if you kind of value all people equally and you care a lot about how the working class is doing and what their bargaining power is.
And historically, the Federal Reserve has often had a bit of an obsession with controlling inflation that may be very related to their professional incentives. And so we do have a point of view on when there’s a debate among experts, which ones are taking the position they’re taking, because that’s what you would do if you were valuing everyone and trying to help everyone the most, versus which you’re taking position for some other reason.
This struck me as a bit odd, because I think if you asked individuals more hawkish than Open Phil whether they cared impartially about all Americans, they would answer yes. I suspect Open Phil may have overestimated the degree to which genuine technocratic disagreement was in fact a difference of values.
Maybe OP is/was right, but it would take significant technical expertise to identify which side of the debate is substantively correct, so as to conclude one side must not be motivated by impartial welfare.
I also think it’s prudent to assume that even hawkish central bankers are broadly impartial and welfarist (but maybe just place more emphasis on long-term growth).
But I think maybe OpenPhil’s general theory of change and reasoning is still plausibly correct for 2008-2020, and people systematically undervalued how bad unemployment was for wellbeing and were perhaps too worried about inflation for political reasons.
Holden and Open Philanthropy’s thinking about this is so bad.
“Learning all about macroeconomic policy” is obviously what they should have been doing. Making high-risk grants usefully requires doing the intellectual work to understand the arguments for each and every one of them yourself. Funding “a particular set of values that says full employment is very important if you kind of value all people equally and you care a lot about how the working class is doing and what their bargaining power is” is…
Oh dear.
Open Phil does do the intellectual work in some other areas. That Holden states they didn’t here, and tries to frame the fact that they didn’t as a good thing, indicates to me they are biased and should stay away from this area.
I don’t understand what you think Holden / OpenPhil’s bias is. I can see why they might have happened to be wrong, but I don’t see what in their process makes them systematically wrong in a particular way.
I also think it’s generally reasonable to form expectations about who in an expert disagreement is correct using heuristics that don’t directly engage with the content of the arguments. Such heuristics, again, can go wrong, but I think they still carry information, and I think we often have to ultimately rely on them when there are just too many issues to investigate them all.
I don’t understand what you think Holden / OpenPhil’s bias is.
It’s not the kind of bias you’re thinking of; not a cognitive or epistemic bias, that is. It’s dovish bias, as in a bias to favor expansionary policy. The non-biased alternative would be a nondiscretionary target that does not systematically favor either expansionary or contractionary policy.
(If we want to talk about epistemic bias, and if I allow myself to be more provocative, there could also be a different kind of bias, social desirability: “you kind of value all people equally and you care a lot about how the working class is doing and what their bargaining power is” sounds good and is the kind of language you expect to find in a political party platform. This was in an interview and in a response prompted by Ezra Klein, but just seeing language like that used could be a red flag.)
I also think it’s generally reasonable to form expectations about who in an expert disagreement is correct using heuristics that don’t directly engage with the content of the arguments.
Yes, but:
- Not when making high-risk grants, where the value comes from your inside-view evaluations of the arguments for each grant (or category of grants, if you’re funding multiple people working on the same or similar things but you have evaluated these things for yourself in sufficient detail to be confident that the grants are overall worth doing).
- Not as a substitute for directly engaging with the content of the arguments, but in addition to doing that and as a way to guide your engagement with the arguments (to help you see the context and know what arguments to look at). Unless you really don’t have the time to engage with the arguments, but there are a lot of hours in a year and this is kind of Open Philanthropy’s job.
- Never while framing as a good thing the fact that you’re deferring to experts instead of engaging with the arguments yourself, never while implying that there would be something wrong about engaging with the arguments yourself instead of (or in addition to) deferring to experts.