I haven’t read the whole paper yet, so forgive me if I miss some of the major points by just commenting on this post.
The image seems to imply that non-aligned AI would only extinguish human life on Earth. How do you figure that? It seems that an AI could extinguish all the rest of life on Earth too, even including itself in the process. [edit: this has since been corrected in the blog post]
For example, you could have an AI system that has the objective of performing some task X, before time Y, without leaving Earth, and then harvests all locally available resources in order to perform that task, before eventually running out of energy and switching off. This would seem to extinguish all life on Earth by any definition.
We could also discuss whether AI might extinguish all civilizations in the visible universe. This also seems possible. One reason for this is that humans might be the only civilization in the universe.
It is hard to encapsulate all of this in a simple scale, but we wanted to recognize that false vacuum decay, which would destroy the Universe at light speed, would be worse than bad AI, at least if you think the future will be net positive. Bad AI could be constrained by a more powerful civilization.
No, in the paper we clearly said that non-aligned AI is a risk to the whole universe in the worst-case scenario.
Also, the image above indicates that AI would likely destroy all life on Earth, not only human life.
In the article, AI destroys all life on Earth, but in the previous version of the image in this blog post the image was somewhat redesigned for better visibility, and the AI risk jumped to "kill all humans." I have now corrected the image so that it matches the one in the article, so the previous comment was valid.
Whether the AI will be able to destroy other civilizations in the universe depends on whether those civilizations create their own AI before the intelligence-explosion wave from us arrives at them.
So AI would kill only potential and young civilizations in the universe, not mature ones.
But this is not the case for a false-vacuum-decay wave, which would kill everything (according to our current understanding of AI and the vacuum).
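To make the race more concrete, here is a rough sketch (my own illustration, not a calculation from the paper; the wave speed $v$ is an assumption, and cosmological expansion is ignored). If our intelligence-explosion wave is launched at time $t_0$ and expands at speed $v \le c$, it reaches a civilization at distance $d$ at time $t_0 + d/v$. That civilization is vulnerable only if its own AI arrives later than the wave:

$$t_{\mathrm{AI}} > t_0 + \frac{d}{v}$$

For a false-vacuum-decay bubble, by contrast, the wall moves at essentially $c$ and cannot be stopped by any technology, so the condition for destruction is simply being inside the bubble's future light cone, independent of $t_{\mathrm{AI}}$.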