I agree with you that some alternatives to “pause” or “indefinite pause” are better
I’m agnostic on what advocacy folks should advocate for; I think advocating indefinite pause is net-positive
I disagree on P(global totalitarianism for AI pause); I think it is extremely unlikely
I disagree with some vibes, like your focus on the downsides of totalitarianism (rather than its probability) and your “presumption in favor of innovation” even for predictably dangerous AI; they don’t seem to be load-bearing for your precise argument but I think they’re likely to mislead incautious readers
I agree with you that some alternatives to “pause” or “indefinite pause” are better
Thanks for clarifying. Assuming those alternative policies compete for attention and trade off against each other in some non-trivial way, I think that’s a pretty big deal.
I think advocating indefinite pause is net-positive
I find it interesting that you seem to think that advocacy for X is good even if X is bad, in this case. Maybe this is a crux for me? I think EAs shouldn’t advocate bad things just because we think we’ll fail at getting them, and will get some separate good thing instead.
I never said “indefinite pause” was bad or net-negative. Normally I’d say it’s good but I think it depends on the precise definition and maybe you’re using the term in a way such that it’s actually bad.
Clearly sometimes advocacy for a bad thing can be good. I’m just trying to model the world correctly.
Zach, in a hypothetical world that pauses AI development, how many years do you think it would take medical science, at its current rate of progress (which is close to zero), to find
(1) treatments for aging
(2) treatments for all forms of dementia
And once treatments are found, what about the practicalities of actually carrying them out? Manipulating the human body is extremely dangerous and risky. Ultimately all ICUs fail: their patients will always eventually enter a complex failure state that current doctors don’t have the tools or knowledge to stop. (“Always fail” in the sense that if you discharge ICU patients and wait a few years, they come back, and eventually they die there.)
It is possible that certain hypothetical medical procedures, like a series of transplants to replace an entire body, or editing adult genes across entire organs, are impossible for human physicians to perform without an unacceptable mortality rate, in the same way that there are aircraft human pilots can’t actually fly: it takes automation and algorithms to do it at all.
What I am trying to say is a world free of aging and death is possible, but perhaps it’s 50-100 years away with ASI, and 1000+ years away in AI pause worlds. (Possibly quite a bit longer than 1000 years, see the repression of technology in China.)
It seems like if your mental discount rate counts people who will exist past 1000 years from now with non-negligible weight, you could support an AI pause. Is this the crux of it? If a human alive today is worth 1.0, what is the worth of someone who might exist in 1000 years?
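To make the stakes of that question concrete, here is a rough illustration (my sketch, not part of the original discussion) of how sensitive the 1000-year weight is to a constant annual discount rate r, where a person t years out is weighted (1 − r)^t:

```python
# Weight assigned to a person 1000 years from now under a
# constant annual discount rate r: weight = (1 - r) ** 1000.
# r = 0 treats future people as equal to people alive today;
# even a tiny positive r makes the far future nearly worthless.
for r in (0.0, 0.001, 0.01, 0.05):
    weight = (1 - r) ** 1000
    print(f"r = {r:.3f}: weight after 1000 years = {weight:.6g}")
```

At r = 0 the weight is 1.0; at r = 0.1% per year it is roughly 0.37; at 1% per year it is on the order of 10⁻⁵, effectively zero. So the "crux" above largely reduces to whether one's discount rate is exactly zero or merely small.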
I never said “indefinite pause” was bad or net-negative. Normally I’d say it’s good but I think it depends on the precise definition and maybe you’re using the term in a way such that it’s actually bad.
In that case, I do think the arguments in the post probably address your beliefs. I think the downsides of doing an indefinite pause seem large. I’m curious if you have any direct reply to these arguments, even if you think that we are extremely unlikely to do an indefinite pause.
Clearly sometimes advocacy for a bad thing can be good.
I agree, but as a general rule, I think EAs should be very suspicious of arguments that assert X is bad while advocating for X is good.