I’ve noticed a decline in the quality and accuracy of communication among people and organizations advocating pro-safety views in the AI policy space. More often than not, I see people adopt the least charitable interpretation of claims made by AI leaders.
Arguments are increasingly looking like soldiers to me.
Take the following Twitter thread from Dr. Peter S. Clark describing his new paper co-authored with Max Tegmark.
The authors use game theory to justify a slew of normative claims that don’t follow from it. Their choice of language makes refutation difficult and pollutes the epistemic commons. For example, they choose terms such as ‘pro-human’ and include parameters such as ‘naivety’.
These are rhetorical sleights of hand. Arguing that automation benefits consumers is now anti-human! You wouldn’t want to be naive and anti-human, would you?
I don’t intend this as an attack on those who oppose further AI development. PauseAI is a great example of what open and honest advocacy can look like. Being a vocal advocate for a cause is fine; disguising opinion as fact is not.