Hmm, I’m imagining that someone who has not been exposed to AI-risk arguments could be fairly confused by this survey. You don’t actually explain why such a proposal is being considered. I would advise adding a little context about what x-risk concerns are, and then maybe giving respondents a chance to say whether they agree or disagree with those concerns.
I am concerned that only people who are already very familiar with AI risk will answer the survey, which would bias the results. This could be mitigated with questions at the end about whether respondents were familiar with AI x-risk arguments and whether they agreed with them prior to the survey. You want to make sure that someone who thinks x-risk is completely ridiculous will still complete the survey and find the questions reasonable.
It’s a good idea though; keep it up.
I did write the survey assuming AI researchers have at least been exposed to these ideas, even if they were completely unconvinced by them, since that matches my personal experience with AI researchers who don’t care about alignment. But if my experience doesn’t generalize, I agree that more explanation is necessary.