I gave an argument for why I don’t think the crying-wolf effects would be as large as one might think in World A. As far as I can tell, your comment doesn’t engage with that argument.
I’m not sure what you’re trying to say with your comment about World B. If we manage to permanently solve the risks relating to AI, then we’ve solved the problem. Whether some people will then be accused of having cried wolf seems far less important by comparison.
You’re right: my comment addresses an additional problem. (So perhaps I should’ve made it a standalone comment.)
As for your second point: that’s true, unless we face the risk again (and possibly a greater one) at a later point. I agree that crying-wolf effects matter less, or not at all, when a problem is solved once and for all, unless they damage the credibility of a community that is simultaneously working on other, still-unsolved problems, as is probably true of the EA community.