Thanks for asking for feedback, and for all your reviews! I very much encourage you to reach out to organisations before you publish your reviews. However, I also think reviews that the organisations themselves have not reviewed before publication are much better than no reviews.
Risk 1: Charities could alter, conceal, fabricate and/or destroy evidence to cover their tracks.
I do not recall this having happened with organisations aligned with effective altruism.
Prior to releasing our reviews, we have always taken screen recordings of important evidence. This was done in case one of the above scenarios were to ever happen. In the future, we also plan to create internet archives for the important evidence we use. However, we remain concerned that in the case of a dispute, we would be accused of creating fake screen recordings/archives.
I would be surprised if organisations aligned with effective altruism claimed the above type of evidence had been fabricated by you.
Risk 2: Unconscious biases from interacting with charity staff.
You can try to correct for this by updating less towards a positive view of the organisation than the feedback you received from its staff would naively warrant. In addition, you can avoid meeting with people from the organisation, restricting yourself to sharing a doc with your review and discussing it via messages.
Reason 1: Organizations should be held accountable for serious mistakes.
Organisations would be held accountable if risk 1 were mitigated.
Reason 2: Charities should be incentivized to provide sufficient and publicly stated evidence to justify their important publicly stated claims.
I agree with the points you make in this section, although I think sharing the reviews is still worth it overall.
> Risk 1: Charities could alter, conceal, fabricate and/or destroy evidence to cover their tracks.

> I do not recall this having happened with organisations aligned with effective altruism.

(FWIW, it happened with Leverage Research at multiple points in time, with active efforts to remove various pieces of evidence from all available web archives. My best guess is that it also happened at early CEA while I worked there, because many Leverage members worked at CEA at the time and they considered this relatively common practice. My best guess is you can find many other instances.)
At one point CEA released a doctored EAG photo with a “Leverage Research” sign edited to be bizarrely blank. (Archive page with doctored photo, original photo.) I assume this was an effort to bury their Leverage association after the fact.
To the extent that you update against an org, among currently existing orgs that would be 80k, not CEA. At the time this happened, the current CEA and the current 80k were both independently managed projects under the umbrella organization then known as CEA and now known as EV (more).
Separately, I agree this editing was bad, but doing it in the context of a review would be much worse.
The current version of CEA employs Julia Wise, your wife. Previously Alexey Guzey sent Wise a draft of a post critical of her superior Will MacAskill and a request for confidentiality. Wise accidentally (or “accidentally”) leaked the draft to MacAskill, who then used it to prepare an adversarial public response to the upcoming post rather than to give Guzey feedback ahead of publication as he’d requested. Neither Wise nor MacAskill disclosed this until after the leak was caught because MacAskill publicly responded to parts of the draft which were removed before publication. Wise remains in her role as CEA’s community liaison, where she is the point person for confidential information from people who worry that leaks would provoke adversarial action from powerful community insiders.
I’m confused why you’re posting this?
Are you trying to say I should have included some sort of disclosure in my comment? Or trying to give this as an example of the kind of thing VettedCauses is worried about with sharing reviews before publication? Something else?
You pointed out the lack of staff continuity between the present CEA and the subset of then-CEA-now-EV which posted the doctored image, to argue that their behavior does not reflect on the present CEA, so that we have no particular reason to expect sketchy or adversarial comms from the present CEA.
Your argument about lack of staff continuity is valid as a local counterpoint which carries some weight (IMO not an extreme amount of weight, given the social and institutional links between the different orgs siloed under then-CEA-now-EV, but others might reasonably disagree). Nevertheless I object to your conclusion about present CEA, largely because of a separate incident involving present CEA staff. So, I brought up this other incident to explain why.
It’s true that this is also an example of the kind of thing VettedCauses is worried about, but that’s not what made me think of it here.
I don’t think I gave any conclusion about CEA? I was pointing out that 80k’s past actions are primarily evidence about what we should expect from 80k in the future.
I think your comment is still pretty misleading: “CEA released …” would be much clearer as “80k released …” or perhaps “80k, at the time a sibling project of CEA, released …”.
> a separate incident involving present CEA staff

FYI I’m not getting into the separate incident because, as you point out, it involves my partner.
Thanks, Sarah! I have checked the links, and I agree that is a clear example of faking evidence. I assume this is an exception, and encourage CEA to disclose any similar instance where they have clearly faked evidence.
(I think this level of brazenness is an exception, but the broader phenomenon has, I think, occurred many dozens of times. My best guess, though I know of no specific example, is that as a result of the FTX stuff, many EA organizations changed their websites and made requests to delete references from archives, in order to downplay their association with FTX.)
Which EA organizations do you know have made requests to delete references from archives?
Were they successful in getting evidence removed from web archives?
Yes, many of my links over the years broke, and I haven’t been able to get any working copy.
That sort of “it’s hard to archive things reliably long-term” seems less relevant in the context of a review, where there’s a pretty short time between sharing the doc with the charity and making the review public.
To be clear, many of my links were to archive.is and archive.org and stuff, and they still broke. I do agree I could have taken full offline copies, and the basic problem here seems surmountable (though it requires at least a small amount of web-development expertise and understanding).
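For what it's worth, taking a full offline copy of evidence, together with a cryptographic fingerprint, doesn't require much web-development expertise. Below is a minimal sketch in Python; `archive_evidence` is a hypothetical helper name, and the idea is simply that if the SHA-256 hash of a page is published (or sent to a neutral third party) at the time the review is shared, a later accusation of having faked the saved copy can be settled by re-hashing it.

```python
import hashlib
import json
import time
from pathlib import Path

def archive_evidence(content: bytes, label: str, out_dir: str = "evidence") -> dict:
    """Save an offline copy of a piece of evidence (e.g. fetched HTML)
    alongside a SHA-256 fingerprint and a timestamp, so a later dispute
    about tampering can be checked by re-hashing the stored copy."""
    digest = hashlib.sha256(content).hexdigest()
    directory = Path(out_dir)
    directory.mkdir(parents=True, exist_ok=True)

    # Store the raw bytes exactly as captured.
    copy_path = directory / f"{label}.bin"
    copy_path.write_bytes(content)

    # Record the hash and capture time next to the copy; publishing the
    # hash at review time is what makes the copy tamper-evident.
    record = {
        "label": label,
        "sha256": digest,
        "saved_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    (directory / f"{label}.json").write_text(json.dumps(record, indent=2))
    return record
```

This doesn't solve link rot on its own (the copy still has to be kept somewhere durable), but unlike a bare archive.org link, the copy plus a pre-published hash stays verifiable even if every public archive of the page disappears.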
Thanks for sharing, Habryka! If VettedCauses reviewed a random organisation recommended by Animal Charity Evaluators, and shared their review before publication, I guess there would only be a 20% chance they would come to regret having shared the review specifically due to risk 1. What would be your guess?
Thanks for the feedback, Vasco!
> Risk 2: Unconscious biases from interacting with charity staff.

These are fair points. We agree the risk of bias is likely minimal if this is how you handle it.
Thanks!
> We agree the risk of bias is likely minimal if this is how you handle it.

I think there is still some significant risk. My reviews of organisations very often become more positive after I integrate their feedback, although I try to update the reviews based on what makes sense to me rather than what appeals to the organisations. I try to counter the selection bias naturally present in the feedback I receive by actively searching for evidence against the organisations after receiving their feedback, especially if I updated significantly as a result. I search for contrary evidence not only in the updated parts of the review, but also elsewhere. In addition, you can ask for feedback from people who you think are pessimistic about the organisations or their interventions.