Great post! I’m gonna throw out two spicy takes.
Firstly, I don’t think it’s so much that people don’t care about AI Safety; I think it’s largely that who cares about a threat is highly related to who it affects. Natural disasters etc. affect everyone relatively (though not exactly) equally, whereas AI harms overwhelmingly affect the underprivileged and vulnerable: people who are vastly underrepresented in both EA and in wider STEM/academia, who are less able to collate and utilise resources, and who are less able to raise alarms. As a result, AI Safety is a field where many of the current and future threats are hidden from view.
Secondly, AGI Safety as a field tends to isolate itself from other areas of AI Safety as if the two weren’t massively related, and goes off on a majorly theoretical angle as a result. As a consequence, AGI/ASI Safety folk are seen by both the public and people within AI as living in something of a fantasy world of their own making, compared to lots of other areas of AI risk. I don’t personally agree with this, but it’s something I hear a lot in AI research.
The first argument seems suspect on a few levels.
No argument about AGI risk that I’ve seen argues that it affects the underprivileged most. In fact, arguments emphasize how every single one of us is vulnerable to AI and that AI takeover would be a catastrophe for all of humanity. There is no story in which misaligned AI only hurts poor/vulnerable people.
The representation argument doesn’t make sense as it would imply that EA, as a pretty undiverse space, would not care about AGI risk. That is not the case. Moreover it would imply that there are many suppressed advocates for AI safety among social activists and leaders of underprivileged groups. That is definitely not the case.
You’re misunderstanding something about why many people are not concerned with AGI risks despite being sympathetic to various aspects of AI ethics. No one concerned with AGI x-risk is arguing it will disproportionately harm the underprivileged. But current AI harms come from things like discriminatory criminal sentencing algorithms, so a lot of the AI ethics discourse involves fairness and privilege. People concerned with those issues don’t fully appreciate that misaligned AGI 1) hurts everyone, and 2) is a real thing that very well might happen within 20 years, not just some imaginary sci-fi story made up by overprivileged white nerds.
There is some discourse around technological unemployment putting low-skilled employees out of work, but this is a niche political argument that I’ve mostly heard from proponents of UBI. I think it’s less critical than x-risk, and if artificial intelligence gains the ability to do diverse tasks as well as humans can, I’ll be just as unemployed as a computer programmer as anyone else will be as a coal miner.
This is the opposite of the point made in the parent comment, and I agree with it.
You raise some fair points, but there are others I would disagree with. Just because there isn’t a popular argument that AGI risk affects underprivileged people the most doesn’t make it untrue. I can’t think of a transformative technology in human history that didn’t impact people more the lower down the social strata you go, and AI thus far has not only followed this trend but greatly exacerbated it. Current AI harms are overwhelmingly targeted at these groups, and I can’t think of any reason why much more powerful AI such as AGI would buck this trend. Obviously if we only focused on existential risk this might not be the case, but even a marginally misaligned AGI would amplify current AI harms, particularly in suffering-ethics cases.
People are concerned about AGI because it could lead to human extinction or civilizational collapse. That really seems like it affects everyone. It’s more analogous to nuclear war: if there were a full-scale global nuclear war, being privileged would not help you very much.
Besides, if you’re going to make the point that AI is just like every other issue in affecting the most vulnerable, then you haven’t explained why people don’t care about AI risk. That is, you haven’t identified something unique about AI. You could apply the same argument to climate change, to pandemic risk, to inequality. All of these issues disproportionately affect the poor, yet all of them occupy substantially more public discussion than AI. What makes AI different?