I think some number of crying-wolf-adjacent incidents in the future are inevitable, as I said in the post. That doesn’t mean we can’t at least try to make it harder for people to weaponise them against us, by hedging, acknowledging uncertainty, etc.
Like I said, this is just my opinion. I’m open to arguments for why signalling confidence is actually the right move, even at the risk of lost credibility.
I think it’s very hard to get urgent political action if all communication about the issue is hedged and emphasises uncertainty, i.e. the kind of language that AI Safety, EA and LW people here are used to, rather than the kind of language that is used in everyday politics, let alone the kind of language that is typically used to convey the need for urgent evasive action.
I think the credibility lost by signalling too much confidence is really only credibility in the eyes of technical AI people, not the general public or government policymakers and regulators, who are the people that matter now.
To be clear, I’m not saying that all nuance should be lost. As with anything, detailed, nuanced information and opinion will always be there for people to read should they wish to dive deeper. But it’s fine to signal confidence in short public-facing comms, given the stakes (likely short timelines and high p(doom)).