This is a really great, high-quality look at this area. Thank you all so much for writing it, especially in such an easy-to-read way that doesn’t sacrifice any detail.
One bit I really like about this is it addresses a major blind spot in AI safety:
“AGI is not necessary for AI to have long-term impacts. Many long-term impacts we consider could happen with “merely” comprehensive AI services, or plausibly also with non-comprehensive AI services (e.g. Sections 3.2 and 5.2).”
I feel that Section 4 covers an area that current AI Safety research really neglects, which is odd because, in my opinion, it is perhaps the most vital area of AI Safety, both for preventing suffering and for preventing long-term risk.
I’ve been doing a lot of research in this area within criminal justice, and we recently published a report for the UK Cabinet Office and the Centre for Data Ethics and Innovation on their trials of a new Algorithmic Transparency Standard, which is somewhat down this avenue of investigation. The standard’s aim is to mitigate some of these risks, and our work evaluated how it would play out in real-world settings. You can read it here if you’re interested: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4155549
For my PhD research I am looking at how AI is used within criminal justice, both as intelligence and as evidence. This area has a really big impact on AI Safety, and yet very few people (especially within EA) are working on it, so it was really refreshing to see you cover issues such as facial recognition in your post. The issues around how the technology is developed, sold, and used have many parallels with the wider AI Safety debate, in addition to being directly relevant from an authoritarian-risk standpoint.
I really enjoyed your coverage in Section 6.2, and have considered for some time writing up an in-depth primer on many of the issues in this area. There are a lot of factors at play shaping the (very rapid) rollout of automation and AI in policing and crime, and the dynamic between developers and purchasers is particularly crucial to the wider AI Safety field. Many of AI Safety’s biggest fears about AI a century from now are already becoming a reality at small scale, but the nature of criminal justice is that it often plays out behind closed doors, affects people who can’t publicise what has happened to them, and is opaque to those outside the criminal justice system. When an AI sends someone to prison on its say-so, that doesn’t end up in a research paper; the person just disappears into the void. Hence many researchers are unaware that it’s happening. I’m fairly convinced that this is an important cause area for exploration, so if anyone else wants to work on projects like this, feel free to drop me a message.
Circling back to the report above, one quote in particular stood out to me:
“Most algorithms that have been designed in policing aren’t tested for bias. They’re not designed with any transparency. Mostly the people who built them don’t know what they do, let alone anybody else.” (Interview Subject C5)
This has strong parallels to what you were saying about how AI, particularly when it comes to power and society, doesn’t have to be AGI to be harmful. We spend a lot of time worrying about super-labs creating superintelligence, but thoughtlessly built software in the wrong place, with a lot of power at the code’s “fingertips”, can cause a great deal of harm very quickly, even if it is only a fancy decision tree.
Your point in the conclusion is also really important:
“Power and inequality: there are a lot of pathways through which AI seems likely to increase power concentration and inequality, though there is little analysis of the potential long-term impacts of these pathways. Nonetheless, AI precipitating more extreme power concentration and inequality than exists today seems a real possibility on current trends.”
I think a lot of the reason there is little analysis in this area is that:
1. There is, for reasons I am unsure of, often a great dislike in EA of anything that remotely smells like ‘social justice’. No-one is ever clear about their reasoning, but I’ve often heard that people greatly fear EA becoming ‘infiltrated by SJWs’ or whatever, when in reality social justice is a major part of Longtermism.
2. Most EA people tend to come from the kinds of backgrounds that this power inequality won’t affect. This means that AI can be causing lots of harm, but to the kinds of people who are underrepresented in EA. That’s why increasing the range of people we attract to the community is vital: it’s not EDI box-ticking, it’s a matter of intellectual diversity. I know CEA are making some progress on this, but I’d like to see more formal effort put into it, because I think it has a significant knock-on impact on the kind of AI (and other) research we see in this area.
All in all, great read and I can’t wait to see more surveys from you all.
Thanks, I’m glad this was helpful to you!