Excellent suggestion. I think the main benefit of asking your question of leading AI researchers (‘What properties would a hypothetical AI system need to demonstrate for you to agree that we should completely halt AI development?’) would be that many of them would say ‘There are no AI properties that would make me advocate for halting AI development’. (For example, I can’t imagine Yann LeCun or any hard-core AI accelerationists arguing for a halt under almost any conditions, given their recent rhetoric on Twitter.)
It would be valuable for ordinary citizens to see such responses, because it would clarify for them that, for many AI advocates, the AI itself is the goal, and any impacts on humanity are considered trivial, tangential, or transient. In other words, the AI accelerationists would reveal themselves as ideologues who view humanity as a disposable bridge to superintelligence, and ordinary folks would be horrified, and galvanized to advocate sooner for stronger pauses and/or halts.