[Question] Can AI safely exist at all?
The recent OpenAI announcement of research progress has put artificial intelligence safety at the forefront of many of our minds. The prospect of this new technology “remaking society” in my lifetime is something I never thought I would live to see, and that organization has explicitly stated that its goal is to create an artificial general intelligence (AGI) and raise it to a superintelligent level (ASI), with the overall aim, to paraphrase former and current CEO Sam Altman, of making human lives easier and happier.
As I am sure all of you know, AI safety and existential risk are two of the main issues that Effective Altruism (EA) seeks to address. As someone brand new to the EA community and its ideas, I was convinced to join by seeing the way EAs apply reason, logic, and science to moral questions. This approach is refreshing in a world where morality is so often said to be rooted in the unfalsifiable, and where moral debates are often highly emotional and self-referential. To put it another way, EA has made progress (in morality, value, even hope) in a field long considered by many to be beyond the realm of reason. The EA perspective has been like opening a window to greater understanding, and it comes with an amazing community as well.
So my question for those of you in the EA community who consider yourselves in favor of the existence of AI on Earth in any sense (whether you support governance, regulation, safety, alignment, or “acceleration”): how do you plan to mitigate the existential risk to a negligible level?
Say that research, alignment efforts, and AI regulation lead to an AGI being released to the public in ten years' time. Say it had a 99 percent chance of being aligned with the good of humanity and of the other animal species on Earth, and a one percent chance of unpredictable, potentially hazardous outcomes. Would that level of existential risk be acceptable to you? Keep in mind that we would never accept a one percent chance that an airplane would crash, given the obviously tragic consequences of aviation accidents. We would not even accept a 0.01 percent chance of a crash. We test, re-test, and regulate air travel so intensely precisely because the consequences of a mistake are so severe. For comparison, the 2022 Expert Survey on Progress in AI (ESPAI) put the chance of AGI eliminating all life on Earth at ten percent.
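To make the scale of that comparison concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption of mine (world population, passengers per flight), not a number taken from the survey, and the point is only to show how expected harm compares across the two risks discussed above.

```python
# Illustrative expected-value comparison. All figures are assumptions
# for the sake of argument, not data from the ESPAI survey.

WORLD_POPULATION = 8_000_000_000   # assumed number of people at stake
P_AGI_CATASTROPHE = 0.01           # the hypothetical 1% misalignment chance
PASSENGERS_PER_FLIGHT = 200        # assumed average airliner load
P_CRASH_REJECTED = 0.0001          # the 0.01% crash chance we would still refuse to accept

# Expected deaths from a single "deployment" under each scenario.
expected_deaths_agi = P_AGI_CATASTROPHE * WORLD_POPULATION
expected_deaths_flight = P_CRASH_REJECTED * PASSENGERS_PER_FLIGHT

print(f"Expected deaths, one 1%-risky AGI release:  {expected_deaths_agi:,.0f}")
print(f"Expected deaths, one 0.01%-risky flight:    {expected_deaths_flight:,.4f}")
print(f"Ratio: {expected_deaths_agi / expected_deaths_flight:,.0f}x")
```

Even before counting non-human sentient beings or future generations, the expected harm differs by many orders of magnitude, which is the intuition behind treating existential risk very differently from ordinary accident risk.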
An unsafe AGI could kill far, far more than even the worst air accident. It could kill more conscious beings than train crashes, shipwrecks, terror attacks, pandemics, and even nuclear wars combined. It could kill every sentient being on Earth and render the planet permanently uninhabitable for any biological lifeform. AI (and more specifically AGI/ASI) could also find a way to leave Earth and eventually consume other sentient beings in other star systems, even in the absence of superluminal travel. And some experts estimate that there is a significant chance this will happen before the end of the 21st century.
So my question is: can AGI/ASI safely exist at all? And if so, what level of existential risk are you willing to accept?
As I see it, the only acceptable level of existential risk is zero. Therefore all AI research and development should be permanently suspended and all existing AIs shut down.