PhD Student in Philosophy at the London School of Economics, researching Moral Progress, Moral Circle Expansion, and the causes that drive it. Previously, I did an MA in Philosophy at King’s College London and an MA in Political Philosophy at Pompeu Fabra University (Spain). More information about my research is available at my personal website: https://www.rafaelruizdelira.com/
When I have the time, I also run https://futurosophia.com/, a website and nonprofit aimed at promoting the ideas of Effective Altruism in Spanish.
You might also know me from EA Twitter. :)
“Is it possibly good for humans to go extinct before ASI is created, because otherwise humans would cause astronomical amounts of suffering? Or might it be good for ASI to exterminate humans because ASI is better at avoiding astronomical waste?”
These questions really depend on whether you think humans can “turn things around” and create net positive, rather than net negative, welfare for other sentient beings. Currently, we create massive amounts of suffering through factory farming and environmental destruction. Depending on how you weigh those things, you might conclude that humanity is currently net negative for the world. So a lot turns on whether you think the future of humanity will be deeply egoistic and harmful, or whether you think we can improve substantially. Some key considerations are discussed in the post The Future Might Not Be So Great by Jacy Reese Anthis: https://forum.effectivealtruism.org/posts/WebLP36BYDbMAKoa5/the-future-might-not-be-so-great
“Why is it reasonable to assume that humans must treat potentially lower sentient AIs or lower sentient organic lifeforms more kindly than sentient ASIs that have exterminated humans?”
I’m not sure I fully understand this paragraph, but let me reply to the best of my ability based on what I gathered.
I haven’t really touched on ASIs in my post at all. And, of course, no ASI has killed any humans so far, since we don’t have ASIs yet. They might also help us flourish, if we manage to align them.
I’m not saying we must treat less-sentient AIs more kindly. If anything, it’s the opposite! The more sentient a being is, the more moral worth it has, since it will have stronger experiences of pleasure and pain. I think we should promote the welfare of beings in proportion to their capacity for welfare. That said, it might turn out, as an empirical matter, that we should prioritize the welfare of simpler beings over more complex ones, because they are easier and cheaper to copy, reproduce, and help. There might also be more sentience, and thus more moral worth, per unit of energy spent on them.
“Yes, such ASIs extinguish humans by definition, but humans have clearly extinguished a very large number of other beings, including some human subspecies as well.”
We have already driven many other species to extinction through environmental destruction and climate change. I think this is morally bad and wrong, since it ranges from possible (e.g. invertebrates) to probable (e.g. vertebrates) that these animals were sentient.
I tend to think in terms of individuals rather than species. By which I mean: imagine you faced a moral dilemma where you had to either fully exterminate a species by killing its last 100 members, or kill 100,000 individuals of a very similar species without driving it extinct. I tend to think of harm in terms of the individuals killed or their thwarted potential. In such a scenario, we might prefer that some species go extinct, since what we care about is promoting overall welfare. (Though second-order effects on biodiversity make these things very hard to predict.)
I hope that clarifies some things a little. Sorry if I misunderstood your points in that last paragraph.