Indeed, 4 and 5 are the weakest parts of the AI risk argument. They often seem to rest on an overly magical view of what computation/intelligence can achieve, and they neglect the fact that all intelligences are fallible. There is too heavy a reliance on inventing science-fiction scenarios without any serious effort to show that those scenarios are likely, or even possible (see Yudkowsky's absurd "mixing proteins to make nanobots that kill everyone" scenario).
I’m working on a post that elaborates on this in more depth, drawing on my experience as a computational physicist.
Thanks for your comment, which helps me to zoom in on claims 4 and 5 in my own thinking.
I was thinking of another point about the fallibility of intelligence: whether intelligence really allows an AGI to fully shape the future to its will. I had in mind Laplace's Demon, which poses the question: if a demon knew the position and momentum of every atom in the universe, could it predict (and hence shape) the future? It is not clear that it could. In fact, Heisenberg's uncertainty principle suggests that it could not, at least at the quantum level, since position and momentum cannot both be known to arbitrary precision. Similarly, it is not clear that an AGI could shape the future at will even with complete knowledge of everything.
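As a concrete toy illustration of why "complete knowledge" is a slippery notion (my own sketch, not drawn from the AI risk literature): in a chaotic system, an initial-condition error far smaller than any physically attainable precision, never mind the quantum limit, grows exponentially and ruins prediction within a few dozen steps.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), a textbook chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two initial states agreeing to 15 decimal places, a precision far
# beyond anything physically measurable, let alone the quantum limit.
a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-15)

# The gap roughly doubles each step (the Lyapunov exponent at r=4 is ln 2),
# so the two trajectories decorrelate completely within ~50 steps.
for step in (10, 30, 50):
    print(f"step {step}: |a - b| = {abs(a[step] - b[step]):.3e}")
```

The point of the sketch is that a Laplacean predictor needs not just complete but *infinitely precise* knowledge; any finite uncertainty floor, quantum or otherwise, gets amplified until the forecast is worthless.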
Happy to comment on your post before/when you publish it!
I encourage you to publish that post. I also feel that the AI safety argument leans too heavily on the DNA sequences → diamondoid nanobots scenario.
Consider entering your post in this competition: https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize