“Why should the person overseeing the survey think AI risk is an important cause?”
Because someone who believes it’s a real risk has strong personal incentives to make the survey informative and to report the results correctly (i.e. they don’t want to die). Someone who believes it’s a dumb cause would be tempted to discredit it by making MIRI look bad (or, at the very least, wouldn’t be as trusted by prospective MIRI donors).
Such personal incentives are important, but, again, I didn’t advocate getting someone hostile to AI risk; I proposed aiming for someone neutral. I know no one is “truly” neutral, but you have to weigh the potential positive personal incentives of someone invested against the potential for motivated thinking (or, more accurately in this case, “motivated selection”).
Someone who was simply neutral on the cause area would probably be fine, but I think there are few such people, as it’s a divisive issue, and they probably wouldn’t be very motivated to do the work.