Such personal incentives are important, but again, I didn’t advocate getting someone hostile to AI risk; I proposed aiming for someone neutral. I know no one is “truly” neutral, but you have to weigh the potential positive personal incentives of someone invested against the risk of motivated thinking (or, more accurately in this case, “motivated selection”).
Someone who was simply neutral on the cause area would probably be fine, but I think there are few such people, as it’s a divisive issue, and they probably wouldn’t be very motivated to do the work.