I’d also be curious about whether evaluators generally should or shouldn’t give the people and organizations being evaluated the chance to respond before publication.
My experience is that it is generally good to share a draft, because organisations can be very touchy about irrelevant details that you don’t really care much about and are happy to correct. If you don’t give them this opportunity they will be annoyed and your credibility will be reduced when the truth comes out, even if it doesn’t have any real logical bearing on your conclusions. This doesn’t protect you against different people in the org having different views on the draft, and some objecting afterwards, but it should get you most of the way there.
On the other hand it is a little harder if you want to be anonymous, perhaps because you are afraid of retribution, and you’re definitely right that it adds a lot of time cost.
I don’t think there’s any obligation to print their response in the main text, however. If you think their objections are valid, you should adjust your conclusions; if they are specious, let them duke it out in the comment section. You could include them inline, but I wouldn’t feel obliged to quote verbatim. Something like this would seem perfectly responsible to me:
Organisation X said they were going to research ways to prevent famines using new crop varieties, but seem to lack scientific expertise. In an email they disputed this, pointing to their head of research, Dr Wesley Stadtler, but all his publications are in low-quality journals and unrelated fields.
This allows readers to see the other POV, assuming you summarise it fairly, without giving them excessive space on the page or the last word.
I agree that any organisation that is soliciting funds or similar from the public is fair game. It’s unclear to me to what extent this also applies to those which solicit money from a fund like the LTFF, which is itself primarily dependent on soliciting money from the public.
My experience is that it is generally good to share a draft, because organisations can be very touchy about irrelevant details that you don’t really care much about and are happy to correct. If you don’t give them this opportunity they will be annoyed and your credibility will be reduced when the truth comes out, even if it doesn’t have any real logical bearing on your conclusions.
To defend the side of the organizations a little, one reason for this is that they may have fairly different threat models from you/evaluators.
A concrete recent example in our community is the Scott Alexander/New York Times kerfuffle, where the seemingly irrelevant detail of Scott’s real last name was actually critical (in a way that the NYT journalist didn’t understand, or chose not to understand) to his keeping his job as a psychiatrist within an institution. There was a similar example with Naomi Wu, if I recall correctly.
A much more minor example is that I noticed Peter (and others) usually being somewhat touchy and quick to correct people about any misrepresentations related to how much they pay employees, e.g. see here. I don’t think his correction at all altered Ben_West’s core point, but from the perspective of leading a growing organization, having correct public numbers on how much new employees are paid may be pretty important for hiring.
Just brainstorming: I imagine we could eventually have infrastructures for dealing with such situations better.
Right now this sort of work requires:
Figuring out who in the organization is a good fit for asking about this.
Finding their email address.
Emailing them.
If they don’t respond, trying to figure out how long you should wait until you post anyway.
If they do respond and it becomes a thread, figuring out where to cut things off.
If you’re anonymous, setting up an extra email account.
Ideally it might be nice to have policies and infrastructure for such work. For example:
Codified practices and norms for responses. Organizations can specify which person is responsible and what their email address is, and commit to responding within some timeframe.
Services for responses. Maybe there’s a middleman who knows the people at the orgs and could help do some of the grunt work of routing signals back and forth.
I think some of the problems you point to (though not all) could be fixed by simple tweaks to the initial email: state when you plan to post if you don’t get a response (and put that in bold), and say something to indicate how much back-and-forth you’re OK with / how much time you’re willing to invest, to set expectations.
I think you could also email anyone in the org whose email address you can quickly find and whose role/position sounds somewhat appropriate, and ask them to forward it to someone better placed if that would help.
...your credibility will be reduced when the truth comes out, even if it doesn’t have any real logical bearing on your conclusions.
I’ve had this happen to me before, and it was annoying...
...but I still think that it’s appropriate for people to reduce their trust in my conclusions if I’m getting “irrelevant details” wrong. If an author makes errors that I happen to notice, I’m going to raise my estimate for how many errors they’ve made that I didn’t notice, or wouldn’t be capable of noticing. (If a statistics paper gets enough basic facts wrong, I’m going to be more suspicious of the math, even if I lack the skills to fact-check that part.)
This extends to the author’s conclusion; the irrelevant details aren’t discrediting, but they are credibility-reducing.
(For what it’s worth, if someone finds that I’ve gotten several details wrong in something I’ve written, that’s probably a sign that I wrote it too quickly, didn’t check it with other people, or was in some other condition that also reduced the strength of my reasoning.)
...but I still think that it’s appropriate for people to reduce their trust in my conclusions if I’m getting “irrelevant details” wrong. If an author makes errors that I happen to notice, I’m going to raise my estimate for how many errors they’ve made that I didn’t notice...
This makes sense, but I don’t think this is bad. In particular, I’m unsure about my own error rate, and maybe I do want to let people estimate my unknown-error rate as a function of my “irrelevant details” error rate.
...having correct public numbers on how much new employees are paid may be pretty important for hiring.
Yup, I agree with that, and am typically happy to make such requested changes.
This makes sense, but I don’t think this is bad. In particular, I’m unsure about my own error rate, and maybe I do want to let people estimate my unknown-error rate as a function of my “irrelevant details” error rate.
I also don’t think it’s bad. Did I imply that I thought it was bad for people to update in this way? (I might be misunderstanding what you meant.)
Reading it again, you didn’t.