Is it possibly good for humans to go extinct before ASI is created, because otherwise humans would cause astronomical amounts of suffering? Or might it be good for ASI to exterminate humans because ASI is better at avoiding astronomical waste?
Why is it reasonable to assume that humans must treat potentially less sentient AIs or less sentient organic lifeforms more kindly than sentient ASIs that have exterminated humans? Yes, such ASIs extinguish humans by definition, but humans have clearly extinguished a very large number of other beings, including some human subspecies as well. From this perspective, whether or not humans go extinct, and whether or not ASIs will exterminate them, may be irrelevant, as both kinds of (good or bad) astronomical impacts seem equally likely, or actually detrimental to the existence of humans.
“Is it possibly good for humans to go extinct before ASI is created, because otherwise humans would cause astronomical amounts of suffering? Or might it be good for ASI to exterminate humans because ASI is better at avoiding astronomical waste?”
These questions really depend on whether you think humans can “turn things around” and create net positive welfare for other sentient beings, rather than net negative. Currently, we create massive amounts of suffering through factory farming and environmental destruction. Depending on how you weigh those things, you might conclude that humans are currently net negative for the world. So a lot turns on whether you think the future of humanity will be deeply egoistic and harmful, or whether you think we can improve substantially. There are some key considerations you might want to look into in the post The Future Might Not Be So Great by Jacy Reese Anthis: https://forum.effectivealtruism.org/posts/WebLP36BYDbMAKoa5/the-future-might-not-be-so-great
“Why is it reasonable to assume that humans must treat potentially less sentient AIs or less sentient organic lifeforms more kindly than sentient ASIs that have exterminated humans?”
I’m not sure I fully understand this paragraph, but let me reply to the best of my ability, based on what I gathered.
I haven’t really touched on ASIs in my post at all. And, of course, no ASIs have killed any humans, since ASIs don’t exist yet. They might also help us flourish, if we manage to align them.
I’m not saying we must treat less sentient AIs more kindly. If anything, it’s the opposite! The more sentient a being is, the more moral worth it has, since it will have stronger experiences of pleasure and pain. I think we should promote the welfare of beings in proportion to their capacities for welfare. But it might be an empirical fact that we would want to promote the welfare of simpler beings rather than more complex ones, because they are easier and cheaper to copy, reproduce, and help. There might also be more sentience, and thus more moral worth, per unit of energy spent on them, as in the rough sketch below.
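To make the “per unit of energy” point concrete, here is a minimal sketch with entirely made-up numbers. The welfare_capacity and energy_cost values are hypothetical placeholders, not estimates I’m defending; the only point is how the comparison works once you divide welfare by energy spent.

```python
# Toy illustration (purely hypothetical numbers): "welfare produced per unit
# of energy" for a simple mind vs. a complex one. None of these parameters
# are empirical claims; they exist only to show the shape of the argument.

def welfare_per_joule(welfare_capacity: float, energy_cost: float) -> float:
    """Welfare units generated per joule spent supporting one being."""
    return welfare_capacity / energy_cost

simple_mind = welfare_per_joule(welfare_capacity=1.0, energy_cost=0.1)      # small, cheap-to-run mind
complex_mind = welfare_per_joule(welfare_capacity=50.0, energy_cost=100.0)  # large, expensive mind

print(f"simple mind:  {simple_mind:.1f} welfare units per joule")
print(f"complex mind: {complex_mind:.1f} welfare units per joule")
# With these made-up numbers the simpler mind yields 10.0 units/joule vs. 0.5,
# which is the sense in which cheaper-to-help beings could dominate per unit of energy.
```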
“Yes, such ASIs extinguish humans by definition, but humans have clearly extinguished a very large number of other beings, including some human subspecies as well.”
We have already driven many other species to extinction through environmental destruction and climate change. I think this is morally bad and wrong, since it ranges from possible (e.g. for invertebrates) to probable (e.g. for vertebrates) that these animals were sentient.
I tend to think in terms of individuals rather than species. By which I mean: imagine you faced a moral dilemma where you had to either fully exterminate a species by killing its last 100 members, or kill 100,000 individuals of a very similar species without making it extinct. I tend to think of harm in terms of the individuals killed or their thwarted potential. In such a scenario, we might well prefer that some species become extinct, because what we care about is promoting overall welfare, as in the toy comparison below. (Though second-order effects on biodiversity make these things very hard to predict.)
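As a toy comparison of the dilemma above, assuming (hypothetically) that individuals of the two species have similar welfare capacities and ignoring the biodiversity effects just mentioned:

```python
# Toy comparison: counting harm per individual killed rather than per species lost.
# harm_per_individual is a hypothetical placeholder assuming the two species'
# members have roughly equal welfare capacity; second-order effects are ignored.

harm_per_individual = 1.0

harm_extinction = 100 * harm_per_individual        # kill the last 100 members of species A
harm_mass_killing = 100_000 * harm_per_individual  # kill 100,000 members of species B (not extinct)

print(f"extinction option:   {harm_extinction:,.0f} units of harm")
print(f"mass-killing option: {harm_mass_killing:,.0f} units of harm")
# On a purely individual-counting view, the extinction option involves far less
# direct harm, which is why species membership per se does no extra work here.
```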
I hope that clarifies some things a little. Sorry if I misunderstood your points in that last paragraph.
I’m not talking about the positive or negative sign of humans’ net contribution, but rather about the expectation that the sign of the net contribution produced by a sentient ASI should be similar to that of humans. Coupled with the premise that ASI alone is more likely to carry out full-scale cosmic colonization faster and better than humans, this means that either a sentient ASI should destroy humans to avoid astronomical waste, or that humans should be destroyed prior to the creation of sentient ASI or cosmic colonization, to avoid further destruction of the Earth and the rest of the universe by humans. On this view, humans being (properly) destroyed is not a bad thing; instead, it is more likely to be better than humans continuing to exist.
Alternatively, ASI could be created with the purpose of maximizing perpetually happy sentient low-level AI/artificial life rather than manufacturing paperclips, in which case humans would either have to accept being part of this system or be destroyed, since keeping them around is not conducive to maximizing average or total hedonism. This is probably the best way to maximize the hedonics of sentient life in the universe, i.e. utility-monster maximizers rather than paperclip maximizers.
I am not misunderstanding what you are saying, but pointing out that these marvelous thought experiments may lead to even more counterintuitive conclusions.