This article is extremely well written and I really appreciated how well he supported his positions with facts.
However, the article suggests that he doesn’t quite understand the argument for making alignment the priority. This is understandable, as that argument is rarely articulated clearly. The core limitation of differential tech development/d/acc/coceleration is that these kinds of imperfect defenses only buy time (a judgment supported by the very sources he cites). An aligned ASI, if it were possible, would be capable of a degree of reliability beyond that of human institutions. This would give us a stable long-term solution. Plans that involve less powerful AIs, or a more limited degree of alignment, mostly do not.
Answering on the LW thread.