Most of my current PhD research focuses on a particular AI harm: the current AI clustermess in criminal law, particularly the concepts of reliability and fairness in intelligence and evidence. I spend a lot of time looking at AI-generated evidence used against people, and most of it is deeply flawed. What I find time and time again is that companies and organisations deploy unsafe, unstable AI systems that cause real damage, not because they don't know better or because there's no technical solution, but because it's easy.
Are you talking about things like recidivism scores? If so, it’s a bit of a stretch to describe logistic regression as AI.
A whole range of things, from the ‘fancy spreadsheet’ side (recidivism scores, PredPol) to the more complex evidential tools. I'm aware none of these are close to AGI; no current AI is, given its hyper-specialism. But the point of that paragraph isn't the AI itself, it's how humans and organisations have been shown to use AI (or automation software, if you're more comfortable with that phrase). When the first actual AGI is developed, it will likely be in a very well-funded lab, one probably controlled by an organisation that is no exception to the usual weaknesses of capitalistic behaviour. After all, for many people the entire point of developing general intelligence isn't scientific endeavour but commercial gain.
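To make the ‘fancy spreadsheet’ point concrete, here's a minimal sketch of what a recidivism score typically amounts to under the hood. This is not the code of any real deployed tool; the features and labels are entirely hypothetical. It's just logistic regression over a handful of case-file features, with the output probability usually binned into risk bands before it reaches a courtroom.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [prior_arrests, age_at_first_arrest, open_cases]
X = np.array([
    [0, 34, 0],
    [2, 19, 1],
    [5, 17, 2],
    [1, 28, 0],
    [4, 21, 1],
    [0, 45, 0],
])
# Hypothetical labels: "reoffended within 2 years" (1) or not (0)
y = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# "Score" a new defendant: the output is just a probability,
# typically mapped to low/medium/high risk bands for the court.
defendant = np.array([[3, 20, 1]])
print(model.predict_proba(defendant)[0, 1])
```

The maths is trivial; the contested part is what goes into X and how the output ends up being used, which is exactly the human-and-organisational failure I'm describing.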
Maybe humanity ends up with one AGI or one ASI, but if we end up with dozens or hundreds of systems, we can't rely on people to do the ‘right thing’ to prevent misalignment problems. There needs to be an actual system of governance ready to keep them in check, so it's not just a technical problem.
Let me know if that clarifies things at all.