Related to my other comment, I have some vague sense that I’d have preferred the post to be clearer that some of the AI’s claims can’t yet be known to be true with the level of confidence the AI implies. I think that would’ve increased my sense that this post was steel-manning (alongside critiquing) my current mindset, or “giving it a fair shot”.
For example, the AI says:
[Even once we/humans know everything,] There will always remain judgment calls still to make. Different human experts will come down on different sides of those judgment calls.
I’d guess that those claims are likely to be true. And perhaps a superintelligent AI could be highly confident about those claims without having to see humans who know everything, so maybe it’s ok for that line to be in this dialogue. But it seems to me that we currently lack clear empirical evidence on this, as we’ve never had a situation where some humans knew everything. And I don’t know of a reason to be extremely confident on the matter. (There might be a reason I don’t know of. Also, I might not say this if we were talking about any arbitrary agents who know everything, rather than just about humans.)
Somewhat similarly, I’d have preferred the phrase “make sense out of apparent nonsense” to the phrase “make sense out of nonsense”, given that we’re talking about there being a 0.2% chance that the “nonsense” somehow isn’t actually nonsense.

(Maybe this is just nit-picking.)
Yeah, I made the AI really confident to sharpen the implications of the dialogue. I want to be clear that I don’t think the AI’s arguments are obviously true.
(Maybe I should flag this more clearly in the dialogue itself, or at least the introduction. But I think this is at least implicitly explained in the current wording.)