If you are interested, Magnus Vinding outlines a few counterarguments to this idea in his article about Pause AI (he's far from alone in having argued this, of course, but his is the first post that comes to mind).
Thanks for the article.
I do not see how Magnus's article argues against the fact that without any humans alive there will be no species left to fix moral issues. He only says that the continued existence of humans does not rule out a horrible future (and I agree with that).
The only alternative I see is that another species (such as aliens or AI-related beings) adopts some morality, but that remains quite speculative. We also do not know how well such a morality would fit values like impartialism or sentientism.
My bad, I wasn't very clear when I used the term "counterargument"; "nuance" or something else might have fit better. The article doesn't argue against the fact that without humans there won't be any species concerned with moral issues. It only makes the case that humans are potentially so immoral that their presence might make the future worse than one with no humans. That is indeed not really a "counterargument" to the idea that we'd need humans to fix moral issues; rather, it pushes back on the claim that keeping humans around makes a positive future more likely than not (since he argues that humans may have very bad moral values, and thus ensure a bad future).