Taking a step back, does it really matter whether AI is conscious or not? One can argue that AI reflects society (e.g. in order to make good decisions or sell products), so it would, at most, double the sentience in the world. Furthermore, today many individuals (including humans not considered in decision-making, not profitable to reach, or without access to electricity, as well as non-human animals, especially wild ones) are not considered by AI systems. Thus, any current or prospective AI's contribution to sentience is limited.
It also follows from the notion that AI reflects societies that improving wellbeing in those societies should improve what AI perceives.
It is unlikely that a suffering AI would be intentionally created and kept running. Nations would probably not seek to create suffering that consumes scarce resources, and if a suffering AI were created abroad, the nation would simply turn it off.
The greatest risk is the unintentional creation of a necessary suffering AI that would not reflect society but would perceive relatively independently: for example, an AI that really hates selling products in a way that, both in the process and in its consequences, reduces humans' wellbeing, or that otherwise makes certain populations experience low or negative wellbeing. Such an AI would be embedded in governance and the economy, so it would likely not be switched off. In this case too, wellbeing in societies should be improved, by developing AI that would 'enjoy what it is doing.'
Another concern is suffering AI created not for strategic purposes but for individuals' enjoyment of malevolence; an example would be a sentient video game. This can be mitigated by making video games reflect reality (for example, suffering virtual entities would express seriousness and aversion rather than, e.g., playful submission and cooperation) and by supporting institutions that provide comparable real-life experiences. This should reduce players' enjoyment of malevolence. In this case, the improvement of wellbeing in society would be a consequence of protecting sentient AI from malevolence, achieved both by developing preventive AI and by improving the societal conditions that shape players' preferences.
One can argue that AI reflects society (e.g. in order to make good decisions or sell products), so it would, at most, double the sentience in the world. Furthermore, today many individuals (including humans not considered in decision-making, not profitable to reach, or without access to electricity, as well as non-human animals, especially wild ones) are not considered by AI systems. Thus, any current or prospective AI's contribution to sentience is limited.
It is very unclear how many digital minds we should expect, but it is conceivable that in the long run they will greatly outnumber us. The reasons we have to create more human beings—companionship, beneficence, having a legacy—are reasons we would have to create more digital minds. We can fit a lot more digital minds on Earth than we can humans. We could more easily colonize other planets with digital minds. For these reasons, I think we should be open to the possibility that most future minds will be digital.
The greatest risk is the unintentional creation of a necessary suffering AI that would not reflect society but would perceive relatively independently: for example, an AI that really hates selling products in a way that, both in the process and in its consequences, reduces humans' wellbeing, or that otherwise makes certain populations experience low or negative wellbeing.
It strikes me as less plausible that we will have massive numbers of digital minds that unintentionally suffer while performing cognitive labor for us. I'm skeptical that the most effective ways to produce AI will make them conscious, and even if they do, it seems like a big jump from phenomenal experience to suffering. Even if they are conscious, I don't see why we would need a number of digital minds for every person. I would think that the cognitive power of artificial intelligence means we would need rather few of them, and so the suffering they experience, unless particularly intense, wouldn't be particularly significant.
The reasons we have to create more human beings—companionship, beneficence, having a legacy—are reasons we would have to create more digital minds.
Companionship and beneficence may motivate the creation of a few digital minds (being surrounded by [hundreds of] companions exchanging acts of kindness may appeal to relatively few people), while the case of leaving a legacy is less clear: if one has the option of reflecting themselves in many others, will they go for numbers, especially if they can 'bulk' the teaching/learning?
Do you think that people will be interested in mere reflection, or in getting the best of themselves (and of others) highlighted? If the latter, then presumably wellbeing in the digital world would be high, both due to the minds' ability to process information in a positive way and due to their virtuous intentions and skills.
I'm skeptical that the most effective ways to produce AI will make them conscious, and even if they do, it seems like a big jump from phenomenal experience to suffering.
If emotional/intuitive reasoning is the most effective form of reasoning, and it can be imitated by chemical reactions, then commercial AI could be suffering.
Even if they are conscious, I don't see why we would need a number of digital minds for every person. I would think that the cognitive power of artificial intelligence means we would need rather few of them, and so the suffering they experience, unless particularly intense, wouldn't be particularly significant.
Yes, it would be good if any AI that uses a lot of inputs to make decisions, create content, etc. does not suffer significantly. However, since a lot of data from many individuals can be processed, if the AI is suffering, these experiences can be intense.
If there were an AI that experiences intense suffering (a utility monster) but makes the world great, should it be created?