ARC & Open Philanthropy state in a press release: “In a sane world, all AGI progress should stop. If we don’t, there’s more than a 10% chance we will all die.”
People working on the safety teams at AGI labs echo this message publicly.
Genuine question: Do the majority of people at Open Phil, or on the safety teams at AGI labs, actually believe the statement above?
I’m all for honesty, but an alternative explanation for many people not making the statement above is that they don’t believe it’s true. I worry that setting this kind of standard will just lead to accusations that genuine disagreement is secret dishonesty.
Yes, it would be good to hear more from them directly. I’m disappointed that Open Phil has not given any public update on its credences on the two main questions its AI Worldviews Contest sought to address.
Your question reminded me of the following quote:
Maybe here we are talking about an alternative version of this: