I think the difference between me and Yudkowsky has less to do with social effects on our speech and more to do with differing epistemic practices, i.e., how confident one can reasonably be about the effects of poorly understood future technologies emerging in equally poorly understood future circumstances.
This isn’t expressing disagreement, but I think it’s also important to consider the social effects of speaking in line with different epistemic practices, e.g.:

When someone says “AI will kill us all,” do people understand us as expressing 100% confidence in extinction, or do they interpret it as mere hyperbole and rhetoric, and infer that what we actually mean is that AI will potentially kill us all or have other drastic effects?

When someone says “There’s a high risk AI kills us all or disempowers us,” do people understand this as expressing very high confidence that it will kill us all, or as saying it almost certainly won’t?