Another possible (but less realistic?) way to make this happen:
Organisations/researchers do something like encouraging red teaming of their own output, setting up a bounty/prize for high-quality instances of that, or similar
An example of something roughly like this is a post on the GiveWell blog that says at the start: “This is a guest post by David Barry, a GiveWell supporter. He emailed us at the end of December to point out some mistakes and issues in our cost-effectiveness calculations for deworming, and we asked him to write up his thoughts to share here. We made minor wording and organizational suggestions but have otherwise published as is; we have not vetted his sources or his modifications to our spreadsheet for comparing deworming and cash. Note that since receiving his initial email, we have discussed the possibility of paying him to do more work like this in the future.”
But I think GiveWell haven’t done that since then?
It seems like this might make sense and be mutually beneficial
Orgs/researchers presumably want more ways to increase the accuracy of their claims and conclusions
A good red teaming of their work might also highlight additional directions for further research, and might surface someone who'd be a good employee for that org or a good collaborator for that researcher
Red teaming of that work might provide a way for people to build skills and test fit for work on precisely the topics that the org/researcher presumably considers important and wants more people working on
But I’d guess that this is unlikely to happen in this form
I think this is mainly due to inertia plus people feeling averse to the idea
But there may also be good arguments against
This post is probably relevant: https://forum.effectivealtruism.org/posts/gTaDDJFDzqe7jnTWG/some-thoughts-on-public-discourse
Another argument against is that, for actually directly improving the accuracy of some piece of work, it’s probably more effective to pay people who are already known to be good at relevant work to do reviewing / red-teaming prior to publication
Yeah, I think this is key. I’m much more optimistic about getting trainees to do this as a training intervention than as a “directly improve research quality” intervention. There are some related arguments for why you’d want to pay people who are either a) already good at the relevant work or b) specialized reviewers/red-teamers:
Paying people to criticize your work would risk creating a weird power dynamic, and more experienced reviewers would be better at navigating this.
For example, trainees may be afraid of criticizing you too harshly.
Also, if the critique is in fact bad, you may be placed in a somewhat awkward position when deciding whether to publish/publicize it.