Intuitively, I don’t see the point of perpetuating humanity if it means lives full of suffering. After reading arguments on the other side, though, I feel much more uncertain. Indeed, it would be hard to fix value issues without any humans, given that we are the only species that thinks about moral issues.
If you are interested, Magnus Vinding outlines a few counterarguments to this idea in his article about Pause AI (though of course he’s far from alone in having argued this; it’s just the first post that comes to mind).
Thanks for the article. I do not see how Magnus’s article argues against the fact that without any humans alive there will be no species left to fix moral issues. He just says that the continued existence of humans doesn’t rule out a horrible future (and I agree with that). The only alternative I see is that another species (such as aliens or AI-derived beings) adopts some morality, but that remains quite speculative. We also do not know how well such a morality would fit values like impartiality or sentientism.
My bad, I wasn’t very clear when I used the term “counterargument”; “nuance” or something similar might have fit better. The article doesn’t argue against the fact that without humans there won’t be any species concerned with moral issues; it only makes the case that humans are potentially so immoral that their presence might make the future worse than one with no humans. That is indeed not really a “counterargument” to the idea that we’d need humans to fix moral issues, but it does push back on the inference that this makes the future more likely positive than not (since he argues that humans may have very bad moral values, and thus ensure a bad future).