I expect, with around 75% confidence, that rapid and unregulated growth of AI partners will deal a huge blow to society, on a scale comparable to the blow dealt by unregulated social media.
Isn’t social media approximately not a problem at all, at least on the scale of other EA causes? There are some disputed findings that it may cause increased anxiety, depression, or suicide in some demographic groups (e.g. Jonathan Haidt claims it is responsible for mental illness in teenage girls, and there is an ongoing scientific debate about this). But even if these findings are all true, this seems very low priority compared to neglected diseases, and nowhere near the scale of other problems to do with digital minds if they have equal moral value to people and you don’t discount lives in the far future.
I worry about the effect that AI friends and partners could have on values. It seems plausible that most people could come to have a good AI friend in the coming decades. Our AI friends might always be there for us. They might get us. They might be funny and insightful and eloquent. How would it play out if their opinions are crafted by tech companies, or the government, or are even reflections of what we want our friends to think? Maybe AI will develop fast enough and be powerful enough that it won’t matter what individuals think or value, but I see reasons for concern potentially much greater than the individual harms of social media.
Harris and Raskin talked about the risk that AI partners will be used for “product placement” or political manipulation here, but I’m sceptical about this. These AI partners will surely have a subscription business model rather than a freemium model, and, given that user trust will be extremely important for these businesses, I don’t think they will try to manipulate users in this way.
More broadly speaking, values will surely change; there is no doubt about that. The very value of “human connection” and “human relationships” is eroded by definition if people are in AI relationships. A priori, I don’t think value drift is a bad thing. But in this particular case, this value change will inevitably go along with a reduction in population, which is a bad thing (according to my ethics, and the ethics of most other people, I believe).
Maybe I’m Haidt- and Humane Tech-pilled, but to me, the widespread addiction of new generations to present-form social media is a massive problem which could contribute substantially to how the AI transition eventually plays out. Social media directly affects social cohesion, i.e., the ability of society to work out responses to the big questions concerning AI (Should we build AGI at all? Should we try to build conscious AIs that are moral subjects? What should the post-scarcity economy look like?), and, indeed, the level of interest and engagement of people in these questions at all.
The “meh” attitude of the EA community towards the issues surrounding social media, digital addiction, and AI romance still surprises me. I still don’t understand the underlying factors or deeply held disagreements which elicit such different responses to these issues from me (for example) and from most EAs. Note that this is not because I’m a “conservative who doesn’t understand new things”: for example, I think much more favourably of AR and VR, and I mostly agree with Chalmers’ “Reality Plus”.
“nowhere near the scale of other problems to do with digital minds if they have equal moral value to people and you don’t discount lives in the far future.”
I agree with this, but by this token, most issues EAs concern themselves with are nowhere near the scale of S-risks and other potential problems to do with future digital minds. Also, these problems only become relevant if we decide to build conscious AIs and there is no widespread legal and cultural opposition to that, which is a big “if”.