Thanks for writing this up. It seems like a good idea, and you address what I view as the main risks. I think that (contingent on a program like this going well) there is a pretty good chance it would generate useful insights (Why #3). This seems particularly important to me for a couple of reasons:
- Better ideas and higher-quality scrutiny are good in themselves.
- Relatively new EAs who do a project like this and have their work received as meaningful/valuable would probably feel much more accepted/wanted in the community.
I would therefore add some structure, with the goal of increasing the chances that a project like this generates useful insights. In your Desiderata you mention:
“Red-teaming targets should ideally be actual problems from EA researchers who would like to have an idea/approach/model/conclusion/… red-teamed against.”
I propose a stronger version: topics are chosen in conjunction with EA researchers or community members who want a specific idea/approach/model/conclusion/… red-teamed and who agree to provide feedback at the end. Setting up this relationship from the beginning seems important if you actually want the right people to read the report. With a less structured format, I worry that folks might construct decent arguments or concerns in their red-team write-ups, but that nobody (or not the right people) reads them, making the effort useless.
Note 1: researchers may be really busy, so in practice this could mean “I will provide feedback on a 2-page summary.”
Note 2: asking people what they want red-teamed is a little ironic when one of the goals is good epistemic norms. This makes me quite uncertain that it's a useful approach, though it may be that researchers are happy to provide feedback on anything. Still, it seems like one way of increasing the chances that projects like this have actual impact.
This idea makes me really excited; I would love to do this myself!
I agree that this gets around most of the issues with paying program participants.