Thanks. Unfortunately, only Yudkowsky is loudly and publicly saying that we need to pause (or Stop / Shut Down, in his words). I hope more of the major EA leaders start being more vocal about this soon.
From my post:

Eliezer Yudkowsky, perhaps the most influential person in the AI risk community, has already demanded an “indefinite and worldwide” moratorium on large training runs. This sentiment isn’t exactly new. Some effective altruists, such as Toby Ord, have argued that humanity should engage in a “long reflection” before embarking on ambitious and irreversible technological projects, including AGI. William MacAskill suggested that this pause should perhaps last “a million years”. Two decades ago, Nick Bostrom considered the ethics of delaying new technologies in a utilitarian framework and concluded a delay of “over 10 million years” may be justified if it reduces existential risk by a single percentage point.