I think a charitable reading of the extra virtues is that they are simply more actionable than “change our mind”. I recently wrote an entry for the Cause Exploration Prizes, and I struggled with how to frame it given that I had never written anything similar. The sample they provided as a good model really helped me understand, in much richer detail, how I could make my points. Did it bias me towards a certain way of doing things that might not be optimal? Perhaps, but there was a countervailing benefit of reducing the barrier to entry. It made it much easier to grasp what I could do and reduced my self-doubt about whether what I was saying was important.
That contest is not the same as the red teaming contest, and I agree it’s much more important to question foundational assumptions here. But I think most people like myself are very uncertain of their ability to critique EA well at all. To the extent that some extra criteria can make it easier to do that, they certainly have some benefits to go with the costs you describe.
Edit: to be concrete about why more guidance is needed, see how vague “change our mind” is. There’s no clarity on what beliefs the evaluator even holds! I just saw an entry about why Fermi estimates are better than ITN evaluation. Suppose an evaluator reads it and already held that belief. Then the entry can’t possibly be evaluated on whether it changes their mind. The guidelines they lay out are more objective and independent of who is evaluating them, which is quite important in my opinion.
I think Zvi mentions this at some point, but an alternative would be to frame the ‘criteria’ as very loose suggestions rather than requirements.