I never said “indefinite pause” was bad or net-negative. Normally I’d say it’s good but I think it depends on the precise definition and maybe you’re using the term in a way such that it’s actually bad.
Clearly sometimes advocacy for a bad thing can be good. I’m just trying to model the world correctly.
Zach, in a hypothetical world that pauses AI development, how many years do you think it would take medical science, at the current rate of progress, which is close to zero, to find:
(1) treatments for aging
(2) treatments for all forms of dementia
And once treatments are found, what about the practical difficulty of actually administering them? Manipulating the human body is extremely dangerous and risky. Ultimately all ICUs fail: their patients will always eventually enter a complex failure state that current doctors don't have the tools or knowledge to stop. ("Always fail" in the sense that if you discharge ICU patients, wait a few years, and they come back, eventually they will die there.)
It is possible that certain hypothetical medical procedures, like a series of transplants to replace an entire body, or editing adult genes across entire organs, are impossible for human physicians to perform without an unacceptable mortality rate. In the same way, there are aircraft that human pilots can't actually fly unaided; it takes automation and algorithms to fly them at all.
What I am trying to say is that a world free of aging and death is possible, but perhaps it's 50-100 years away with ASI, and 1000+ years away in AI pause worlds. (Possibly quite a bit longer than 1000 years; see the historical suppression of technology in China.)
It seems like if your mental discount rate gives non-negligible weight to people who will exist more than 1000 years from now, you could support an AI pause. Is this the crux of it? If a human alive today is worth 1.0, what is the worth of someone who might exist in 1000 years?
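To make the question above concrete, here is a minimal sketch (not from the original discussion) of how much weight a person 1000 years out receives under simple exponential discounting, for a few hypothetical annual discount rates:

```python
# Illustrative sketch: weight of a future person relative to 1.0 today,
# assuming simple exponential discounting at a fixed annual rate.
# The rates chosen below are hypothetical examples, not anyone's stated view.

def present_weight(annual_discount_rate: float, years: int) -> float:
    """Weight of a person `years` from now, relative to 1.0 today."""
    return (1.0 - annual_discount_rate) ** years

for rate in (0.0, 0.0001, 0.001, 0.01):
    print(f"rate={rate:.4%}: weight after 1000 years = "
          f"{present_weight(rate, 1000):.6f}")
```

Even a tiny rate like 1% per year drives the weight of someone 1000 years out to nearly zero, while a rate of exactly zero keeps it at 1.0; which end of that range you sit on plausibly determines how the pause trade-off nets out.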
> I never said “indefinite pause” was bad or net-negative. Normally I’d say it’s good but I think it depends on the precise definition and maybe you’re using the term in a way such that it’s actually bad.
In that case, I do think the arguments in the post probably address your beliefs. I think the downsides of doing an indefinite pause seem large. I’m curious if you have any direct reply to these arguments, even if you think that we are extremely unlikely to do an indefinite pause.
> Clearly sometimes advocacy for a bad thing can be good.
I agree, but as a general rule, I think EAs should be very suspicious of arguments that assert X is bad while advocating for X is good.