Thanks for the article. I do not see how Magnus's article argues against the fact that without any humans alive, there will be no species left to fix moral issues. He only says that the mere survival of humans does not rule out a horrible future (and I agree with that). The only alternative I see is that another species (such as aliens or AI-related beings) adopts some morality, but that remains quite speculative. We also do not know whether such a morality would fit values like impartialism or sentientism.
My bad, I wasn't very clear when I used the term "counterargument"; "nuance" or something else might have fit better. It doesn't argue against the fact that without humans, there won't be any species concerned with moral issues. It only makes the case that humans are potentially so immoral that their presence might make the future worse than one without humans. That is indeed not really a "counterargument" to the idea that we'd need humans to fix moral issues; rather, it argues against the claim that this need makes a positive future more likely than not (since he argues that humans may have very bad moral values, and thus ensure a bad future).