‘AI Emergency Eject Criteria’ Survey
I recently posted that I’d like AI researchers to establish a consensus (≥70%) opinion on this question: What properties would a hypothetical AI system need to demonstrate for you to agree that we should completely halt AI development?
So in the spirit of proactivity, I’ve created a short Google Form to collect researchers’ opinions: https://docs.google.com/forms/d/e/1FAIpQLScD2NbeWT7uF70irTagPsTEzYx7q5yCOy7Qtb0RcgNjX7JZng/viewform
I’d welcome feedback on how to make this form even better, and I’d also appreciate it if you’d forward it to an X-risk-skeptical AI researcher in your network. Thanks!