> early critiques of GiveWell were basically “Who are you, with no background in global development or in traditional philanthropy, to think you can provide good charity evaluations?”
That seems like a perfectly reasonable, fair challenge to put to GiveWell. That’s the right question for people to ask!
I agree with this if you read the challenge literally, but the actual challenges were usually closer to a reflexive dismissal without actually engaging with GiveWell’s work.
Also, I disagree that the only way we were able to build trust in GiveWell was through this:
> only when people become properly educated (i.e., via formal education or a process approximating formal education) or credentialed in a subject.
We can often just look at object-level work, study research & responses to the research, and make up our mind. Credentials are often useful to navigate this, but not always necessary.
> but the actual challenges were usually closer to a reflexive dismissal
I don’t know the specific, actual criticisms of GiveWell you’re referring to, so I can’t comment on them — how fair or reasonable they were.
My point is more abstract: just that, in general, it is fair to challenge non-experts who are trying to do serious work in an area outside of their expertise. It is a challenge that anyone in the position of the GiveWell founders should gladly and willingly accept, or else they're not up to the job.
Reputation, trust, and credibility in an area where you are a neophyte are not a right owed to you automatically. They're something you earn by providing evidence that you are trustworthy, credible, and deserve a good reputation.
> We can often just look at object-level work, study research & responses to the research, and make up our mind. Credentials are often useful to navigate this, but not always necessary.
This is hazy and general, so I don’t know what you specifically mean by it. But there are all kinds of reasons that non-experts are, in general, not competent to assess the research on a topic. For example, they might be unacquainted with the nuances of statistics, experimental designs, and theories of underlying mechanisms involved in studies on a certain topic. Errors or caveats that an expert would catch might be missed by an amateur. And so on.
I am extremely skeptical of any claim that an individual or a group is competent at assessing research in any and all extant fields of study, since this would seem to imply that the individual or group possesses preternatural abilities that just aren't realistic given what we know about human limitations. I think the sort of Tony Stark or Sherlock Holmes general-purpose geniuses of fiction are only fictional. But even if they existed, we would know who they are, and they would have a litany of objectively impressive accomplishments.
> In cases where there is an established science or academic field or mainstream expert community, the default stance of people in EA should be nearly complete deference to expert opinion, with deference moderately decreasing only when people become properly educated (i.e., via formal education or a process approximating formal education) or credentialed in a subject.
If you took this seriously, in 2011 you’d have had no basis to trust GiveWell (quite new to charity evaluation, not strongly connected to the field, no credentials) over Charity Navigator (10 years of existence, considered mainstream experts, CEO with 30 years of experience in charity sector).
But, you could have just looked at their website (GiveWell, Charity Navigator) and tried to figure out yourself whether one of these organisations is better at evaluating charities.
> I am extremely skeptical of any claim that an individual or a group is competent at assessing research in any and all extant fields of study, since this would seem to imply that the individual or group possesses preternatural abilities that just aren't realistic given what we know about human limitations.
This feels like a Motte ("skeptical of any claim that an individual or a group is competent at assessing research in any and all extant fields of study") and Bailey (almost complete deference, with deference only decreasing with formal education or credentials). GiveWell obviously never claimed to be experts in much beyond GHW charity evaluation.
> If you took this seriously, in 2011 you'd have had no basis to trust GiveWell (quite new to charity evaluation, not strongly connected to the field, no credentials) over Charity Navigator (10 years of existence, considered mainstream experts, CEO with 30 years of experience in charity sector).
Well, no. Because I did hold that view very seriously (as I still do) in the late 2000s and early 2010s, and I came to trust GiveWell.
Charity Navigator doesn’t even claim to evaluate cost-effectiveness; they don’t do cost-effectiveness estimates.
Even prior to GiveWell, there were similar ideas kicking around. A clunky early term that was used was ‘philanthrocapitalism’ (which is a mouthful and also ambiguous). It meant that charities should seek an ROI in terms of impact like businesses do in terms of profit.
Back in the day, I read the development economist William Easterly’s blog Aid Watch (a project of NYU’s Development Research Institute) and he called it something like the smart aid movement, or the smart giving movement.
The old blog is still there in the Wayback Machine, but the Wayback Machine doesn’t allow for keyword search, so it’s hard to track down specific posts.
I had forgotten until I just went spelunking in the archive that William Easterly and Peter Singer had a debate in 2009 about global poverty, foreign aid, and charity effectiveness. The blog post summary says that even though it was a debate and they disagreed on things, they agreed on recommendations to donate to some specific charities.
My point here is that charity effectiveness had been a public conversation involving aid experts like Easterly going back a long time. You never would have taken away from this public conversation that you should pay attention to something like Charity Navigator rather than something like GiveWell.
In the late 2000s and early 2010s, what international development experts would have told you to look at Charity Navigator?
> This feels like a Motte ("skeptical of any claim that an individual or a group is competent at assessing research in any and all extant fields of study") and Bailey (almost complete deference, with deference only decreasing with formal education or credentials). GiveWell obviously never claimed to be experts in much beyond GHW charity evaluation.
I might have done a poor job getting across what I’m trying to say. Let me try again.
What I mean is that, in order for a person or a group of people to avoid deferring to experts in a field, they would have to be competent at assessing research in that field. And maybe they are for one or a few fields, but not all fields. So, at some point, they have to defer to experts on some things — on many things, actually.
What I said about this wasn’t intended as a commentary on GiveWell — sorry for the confusion. I think GiveWell’s approach was sensible. They realized that competently assessing the relevant research on global poverty/global health would be a full-time job, and they would need to learn a lot, and get a lot of input from experts — and still probably make some big mistakes. I think that’s an admirable approach, and the right way to do it.
I think this is quite different from spending a few weeks researching covid and trying to second-guess expert communities, rather than just trying to find out what the consensus views among expert communities are. If some people in EA had decided in, say, 2018 to start focusing full-time on epidemiology and public health, and then started weighing in on covid-19 in 2020 — while actively seeking input from experts — that would have been closer to the GiveWell approach.