It might be helpful if you elaborated on what you mean by ‘aim for neutrality’. What actions would that entail in the real world, if you did that yourself?
I meant picking someone with no stake whatsoever in the outcome. Someone who, though exposed to arguments about AI risk, has no strong opinions one way or the other. In other words, someone without a strong prior on AI risk as a cause area. Naturally, we all have priors, even if they are not explicit, so I am not proposing this as a disqualifying standard, just a goal worth shooting for.
An even broader selection criterion worth considering alongside this is simply “people who know about AI risk,” but that is basically the same as Rob’s original point of “have some association with the general rationality or AI community.”