I appreciate your post and think it presents some good arguments. I also just think my post has a different focus. I’m talking about an indefinite AI pause, which is an explicit policy that at least 4 major EA leaders seem to have argued for in the past. I think it’s reasonable to talk about this proposal without needing to respond to all the more modest proposals that others have given before.
Who are the 4 major EA leaders?

From my post:

Eliezer Yudkowsky, perhaps the most influential person in the AI risk community, has already demanded an “indefinite and worldwide” moratorium on large training runs. This sentiment isn’t exactly new. Some effective altruists, such as Toby Ord, have argued that humanity should engage in a “long reflection” before embarking on ambitious and irreversible technological projects, including AGI. William MacAskill suggested that this pause should perhaps last “a million years”. Two decades ago, Nick Bostrom considered the ethics of delaying new technologies in a utilitarian framework and concluded that a delay of “over 10 million years” may be justified if it reduces existential risk by a single percentage point.
Thanks. Unfortunately, only Yudkowsky is loudly and publicly saying that we need to pause (or Stop / Shut Down, in his words). I hope more of the major EA leaders start being more vocal about this soon.