The reasons we have to create more human beings—companionship, beneficence, having a legacy—are reasons we would have to create more digital minds.
Companionship and beneficence may motivate the creation of only a few digital minds, since relatively few people would prefer to be surrounded by hundreds of companions exchanging acts of kindness. Legacy is less clear: if one has the option to reflect oneself in many others, will one go for numbers, especially if teaching and learning can be done in bulk?
Do you think that people will be interested in mere reflection, or in having the best of themselves (and of others) highlighted? If the latter, then presumably wellbeing in the digital world would be high, both because of the minds' ability to process information in a positive way and because of their virtuous intentions and skills.
I’m skeptical that the most effective ways of producing AI will make it conscious, and even if they do, it seems like a big jump from phenomenal experience to suffering.
If emotional/intuitive reasoning turns out to be the most effective kind, and it can be imitated by chemical reactions, then commercial AI could be suffering.
Even if they are conscious, I don’t see why we would need multiple digital minds for every person. I would think that the cognitive power of artificial intelligence means we would need rather few of them, and so the suffering they experience, unless particularly intense, wouldn’t be particularly significant.
Yes, it would be good if any AI that uses many inputs to make decisions, create content, etc. does not suffer significantly. However, since the data of many individuals can be processed, if the AI is suffering, these experiences could be intense.
If there is an AI that experiences intense suffering (a utility monster) but makes the world great, should it be created?