I don’t disagree that someone who thinks there is a “negligible probability” of AI causing extinction is unsuited for the task. That’s why I said to aim for neutrality.
But I think we may be disagreeing over whether “thinks AI risk is an important cause” is too close to “is broadly positive towards AI risk as a cause area.” I think so. You think not?
Are there alternatives to a person like this? It doesn’t seem to me like there are.
“Is broadly positive towards AI risk as a cause area” could mean “believes that there should exist effective organizations working on mitigating AI risk”, or could mean “automatically gives more credence to the effectiveness of organizations that are attempting to mitigate AI risk.”
It might be helpful if you elaborated more on what you mean by ‘aim for neutrality’. What actions would that entail, if you did that, in the real world, yourself? What does hiring the ideal survey supervisor look like in your mind if you can’t use the words “neutral” or “neutrality” or any clever rephrasings thereof?
I meant picking someone with no stake whatsoever in the outcome. Someone who, though exposed to arguments about AI risk, has no strong opinions one way or another. In other words, someone without a strong prior on AI risk as a cause area. Naturally, we all have priors, even if they are not explicit, so I am not proposing this as a disqualifying standard, just a goal worth shooting for.
An even broader selection criterion worth considering alongside this is simply “people who know about AI risk,” but that’s basically the same as Rob’s original point about having “some association with the general rationality or AI community.”