Related to my other comment, I have some vague sense that I'd have preferred the post to be clearer that some of the AI's claims can't yet be known to be true with the level of confidence the AI implies. I think that would've increased my sense that this post was steel-manning (alongside critiquing) my current mindset, or "giving it a fair shot".
For example, the AI says:
[Even once we/humans know everything,] There will always remain judgment calls still to make. Different human experts will come down on different sides of those judgment calls.
I'd guess that those claims are likely to be true. And perhaps a superintelligent AI could be highly confident about those claims without having to see humans who know everything, so maybe it's ok for that line to be in this dialogue. But it seems to me that we currently lack clear empirical evidence of this, as we've never had a situation where some humans knew everything. And I don't know of a reason to be extremely confident on the matter. (There might be a reason I don't know of. Also, I might not say this if we were talking about any arbitrary agents who know everything, rather than just about humans.)
Somewhat similarly, I'd have preferred the phrase "make sense out of apparent nonsense" to the phrase "make sense out of nonsense", given that we're talking about there being a 0.2% chance that the "nonsense" somehow isn't actually nonsense.
(Maybe this is just nit-picking.)
Yeah, I made the AI really confident for purposes of sharpening the implications of the dialogue. I want to be clear that I don't think the AI's arguments are obviously true.
(Maybe I should flag this more clearly in the dialogue itself, or at least in the introduction. But I think this is at least implicitly explained in the current wording.)