I expect, with around 75% confidence, that the rapid and unregulated development of AI partners will deal a major blow to society, on a scale comparable to the blow dealt by unregulated social media.
Isn't social media approximately not a problem at all, at least on the scale of other EA causes? There are some disputed findings that it may cause increased anxiety, depression, or suicide among some demographic groups (e.g. Jonathan Haidt claims it is responsible for mental illness in teenage girls, and there is an ongoing scientific debate on this), but even if these are all true, this seems very low priority compared to neglected diseases, and nowhere near the scale of other problems to do with digital minds if they have equal moral value to people and you don't discount lives in the far future.
I worry about the effect that AI friends and partners could have on values. It seems plausible that most people could come to have a good AI friend in the coming decades. Our AI friends might always be there for us. They might get us. They might be funny and insightful and eloquent. How would it play out if their opinions are crafted by tech companies, or the government, or are even just reflections of what we want our friends to think? Maybe AI will develop fast enough and be powerful enough that it won't matter what individuals think or value, but I see reasons for concern potentially much greater than the individual harms of social media.
Harris and Raskin talked about the risk that AI partners will be used for "product placement" or political manipulation here, but I'm sceptical about this. These AI partners will surely have a subscription business model rather than a freemium model, and, given that user trust will be extremely important for these businesses, I don't think they will try to manipulate users in this way.
More broadly speaking, values will surely change; there is no doubt about that. The very value of "human connection" and "human relationships" is eroded by definition if people are in AI relationships. A priori, I don't think value drift is a bad thing. But in this particular case, this value change will inevitably go along with a reduction in population, which is a bad thing (according to my ethics, and the ethics of most other people, I believe).
Maybe I'm Haidt- and Humane Tech-pilled, but to me, the widespread addiction of new generations to present-form social media is a massive problem which could contribute substantially to how the AI transition eventually plays out. Social media directly affects social cohesion, i.e., the ability of society to work out responses to the big questions concerning AI (such as: should we build AGI at all? Should we try to build conscious AIs that are moral subjects? What should the post-scarcity economy look like?), and, indeed, whether people are interested in and engaged with these questions at all.
The "meh" attitude of the EA community towards the issues surrounding social media, digital addiction, and AI romance still surprises me; I still don't understand the underlying factors or deeply held disagreements that elicit such different responses to these issues in me (for example) and in most EAs. Note that this is not because I'm a "conservative who doesn't understand new things": for example, I think much more favourably of AR and VR, I mostly agree with Chalmers' "Reality Plus", etc.
"nowhere near the scale of other problems to do with digital minds if they have equal moral value to people and you don't discount lives in the far future."
I agree with this, but by this token, most issues that EAs concern themselves with are nowhere near the scale of S-risks and other potential problems to do with future digital minds. Also, these problems only become relevant if we decide to build conscious AIs and there is no widespread legal and cultural opposition to that, which is a big "if".