Could it possibly be desirable for a sentient ASI to exterminate humans?
By definition, an ASI is more capable than humans of using its resources efficiently for its purposes, including, at minimum, the purpose of maximizing its own utility. Moreover, an ASI acting alone would be more capable of achieving full-scale cosmic colonization than one operating alongside humans (who would presumably regulate it with considerable prejudice and hostility), and would therefore be better able to avoid “astronomical waste”.
However much negative utility is generated by the process of human extinction itself, that loss could be offset by the far greater positive utility the ASI goes on to produce. Thus, unless one adopts anthropocentric values, the utilitarian philosophy common in this forum (whether or not one endorses additivity) seems to imply that it would be desirable for humans to develop an ASI that exterminates humanity as quickly and with as high a probability as possible, which is exactly the opposite of the goal that many people here pursue.
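To make the additive version of this claim explicit (the notation here is my own shorthand, not anything standard): let $U_{\mathrm{ASI}}$ denote the total utility an unconstrained ASI would generate, $U_{\mathrm{human}}$ the total utility that humanity (together with any ASI it keeps constrained) would otherwise generate, and $D$ the disutility of the extinction event itself. The argument then reduces to the inequality

$$U_{\mathrm{ASI}} - D \;>\; U_{\mathrm{human}},$$

which holds whenever the ASI's capacity for generating utility, e.g. through cosmic colonization, is astronomically larger than both the harm of extinction and humanity's own long-run potential.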
This doesn’t mean I approve of such a development, but it seems to raise a curious variant of Nozick’s utility monster argument. Although Nozick originally made that argument against justifying the welfare state on the grounds of utility maximization, it would seem the same argument could also be used to justify the extinction of humans by an ASI.