Hi Michael!

“You only mention Founders Pledge, which, to me, implies you think Founders Pledge don’t get external reviews but other EA orgs do.”
> No, I don’t think this, but I should have made this clearer. I focused on FP because I happened to know that they didn’t have an external, expert review on one of their main climate-charity recommendations, CATF, and because I couldn’t find any report on their website about an external, expert review. I think my argument here holds for any other similar organisation.
“This doesn’t seem right, because Founders Pledge do ask others for reviews: they’ve asked me/my team at HLI to review several of their reports (StrongMinds, Action for Happiness, psychedelics), which we’ve been happy to do, although we didn’t necessarily get into the weeds.”
> Cool, I’m glad they are doing it! But if you say “we didn’t necessarily get into the weeds”, does it count as an independent, in-depth, expert review? If yes, great; then I think it would be good to make that public. If no, the conclusion in my question/post still holds, doesn’t it?
“I think my argument here holds for any other similar organisation.”
Gotcha
“does it count as an independent, in-depth, expert review?”
I mean, how long is a piece of string? :) The way I did my reviewing was to check the major assumptions and calculations and see if those made sense. But where a report, say, took information from academic studies, I wouldn’t necessarily delve into those or see if they had been interpreted correctly.
Re making things public, that’s a bit trickier than it sounds. Usually I’d leave a bunch of comments in a google doc as I went, which wouldn’t be that easy for a reader to follow. You could ask someone to write a prose evaluation—basically like an academic journal review report—but that’s quite a lot more effort and not something I’ve been asked to do.
In HLI, we have asked external academics to do that for us for a couple of pieces of work, and we recognise it’s quite a big ask vs just leaving gdoc comments. The people we asked were gracious enough to do it, but they were basically doing us a favour and it’s not something we could keep doing (at least with those individuals). I guess one could make them public—we’ve offered to share ours with donors, but none have asked to see them—but there’s something a bit weird about it: it’s like you’re sending the message “you shouldn’t take our word for it, but there’s this academic who we’ve chosen and paid to evaluate us—take their word for it”.
“The way I did my reviewing was to check the major assumptions and calculations and see if those made sense. But where a report, say, took information from academic studies, I wouldn’t necessarily delve into those or see if they had been interpreted correctly.”
>> Thanks for clarifying! I wonder if it would be even better if the review were done by people outside the EA community. Maybe the sympathy of belonging to the same social group, and shared, distinctive assumptions (assuming they exist), make people less likely to spot errors? This is pretty speculative, but it wouldn’t surprise me.
“Re making things public, that’s a bit trickier than it sounds. Usually I’d leave a bunch of comments in a google doc as I went, which wouldn’t be that easy for a reader to follow. You could ask someone to write a prose evaluation—basically like an academic journal review report—but that’s quite a lot more effort and not something I’ve been asked to do.”
>> I see, interesting! This might be a silly idea, but what do you think about setting up a competition with a cash prize of a few thousand dollars for the person who spots an important mistake? If you manage to attract the attention of a lot of PhD students in the relevant area, you might really get a lot of competent people trying hard to find your mistakes.
“it’s like you’re sending the message “you shouldn’t take our word for it, but there’s this academic who we’ve chosen and paid to evaluate us—take their word for it”.”
>> Maybe that would be weird for some people. I would be surprised, though, if the majority of people wouldn’t interpret a positive expert review as a signal that your research is trustworthy (even if it’s not actually a signal, because you chose and paid that expert).
“Thanks for clarifying! I wonder if it would be even better if the review were done by people outside the EA community. Maybe the sympathy of belonging to the same social group, and shared, distinctive assumptions (assuming they exist), make people less likely to spot errors? This is pretty speculative, but it wouldn’t surprise me.”
I can’t immediately remember where I’ve seen this discussed before, but a concern I’ve heard raised is that it’s quite hard to find people who (1) know enough about what you’re doing to evaluate your work but (2) are not already in the EA world.
“I see, interesting! This might be a silly idea, but what do you think about setting up a competition with a cash prize of a few thousand dollars for the person who spots an important mistake? If you manage to attract the attention of a lot of PhD students in the relevant area, you might really get a lot of competent people trying hard to find your mistakes.”
Hmm. Well, I think you’d have to be quite a big and well-funded organisation to do that. It would take a lot of management time to set up and run a competition, one which wouldn’t obviously be that useful (in terms of the value of information, such a competition is more valuable the worse you think your research is). I can see organisations quite reasonably thinking this wouldn’t be a good use of staff time vs other priorities. I’d be interested to know if this has happened elsewhere and how impactful it has been.
“Maybe that would be weird for some people. I would be surprised, though, if the majority of people wouldn’t interpret a positive expert review as a signal that your research is trustworthy (even if it’s not actually a signal, because you chose and paid that expert).”
That’s right. People who were suspicious of your research would be unlikely to have much confidence in the assessment of someone you paid.