Thank you for this post Matthew, it is just as thoughtful and detailed as your last one. I am excited to see more posts from you in future!
I have some thoughts and comments as someone with experience in this area. Apologies in advance if this comment ends up being long—I prefer to mirror the effort of the original post creator in my replies and you have set a very high bar!
Risk Assessments – How should frontier AI organisations design their risk assessment procedures in order to sufficiently acknowledge – and prepare for – the breadth, severity and complexity of risks associated with developing frontier AI models?
This is a really great first area of focus and, if I may shamelessly indulge in a self-plug, I recently posted something along this specific theme here. Clearly it has been field-changing, achieving a whopping 3 karma in the month since posting. I am truly a beacon of our field!
Jesting aside, I agree this is an important area and one that is hugely neglected. A major issue is that academia is not good at understanding how this actually works in practice. Much more industry-academia partnership is needed, but that can be difficult to arrange where it really counts, a point you rightly allude to in your post.
Senior leadership of firms operate with limited information. Members of senior management of large companies themselves cannot know of everything that goes on in the firm. Therefore, strong communication channels and systems of oversight are needed to effectively manage risks.
This is a fantastic point, and one that is frequently a problem. Not long ago I was chatting with the head of a major government organisation who quite confidently stated that his department did not use a specific type of AI system. I had the uncomfortable moral duty to inform him that it did, because I had helped advise on risk mitigation for that very system only some weeks earlier. It’s a fun story, but the higher up the chain you are in a large organisation, the harder it can be to know what is actually going on. Another good, recent example is Nottinghamshire Police publicly claiming, in response to an FOI request, that they do not use and do not plan to use AFR (automated facial recognition), seemingly unaware that their force had revealed a new AFR tool to the media earlier that week.
Although much can be learned from practices in other industries, there are a number of unique challenges in implementing good corporate governance in AI firms. One such challenge is the immaturity of the field and technology. This makes it difficult currently to define standardised risk frameworks for the development and deployment of systems. It also means that many of the firms conducting cutting edge research are still relatively small; even the largest still in many ways operate with “start-up” cultures. These are cultures that are fantastic for innovation, but terrible for safety and careful action.
This is such a fantastic point, and to back it up, I reckon it accounts for about 75% of the risk scenarios I’ve advised on in the past year. I don’t think ‘AI firms’ is the best framing, since many major corporations are building AI as part of their wider business without themselves being “AI firms”, but your point still stands well in the face of the evidence. A major problem right now is AI startups selling immature, untested, ungoverned tools to major organisations that don’t know better and don’t know how to question what they’re buying. This isn’t just a problem with corporations but with government, too. It’s a huge risk vector.
For Sections 2 and 3, engineering and energy are fantastic industries to draw from in terms of their processes for risk and incident reporting. They’re certainly amongst the strictest I’ve had experience of working alongside.
Ethics committees take a key role in decision making that may have particularly large negative impacts on society. For frontier AI labs, such committees will have their work cut out for them. Work should be done to consider the full list of processes ethics committees should have input in, but it will likely include decisions around:
- Model training, including
  - Appropriate data usage
  - The dangers of expected capabilities
- Model deployments
- Research approval
This is an area that’s seen a lot of really good outcomes for AI in high-risk industries. I would advise reading this research, which covers a fantastic use-case in detail. There are also some really good examples currently in the process of getting the correct approvals, which I’m not entirely sure I can post here yet; if you want to be kept updated, send me a message and I’ll keep you informed.
The challenge for frontier AI firms by comparison is that many of the severe risks posed by AI are of a more esoteric nature, with much current uncertainty about how failure modes may present themselves. One potential area of study is the development of more general forms of risk awareness training, e.g. training for developing a “scout mindset” or to improve awareness of black swan events.
This is actually one of the few sections I disagree with you on. Of all the high-risk AI systems I’ve worked with in a governance capacity, exceptionally few have had esoteric risks. More often than not, AI systems interact with the world via existing processes whose risks are themselves fairly well scoped. The exception is if you meant far-future AI systems, which would obviously be unpredictable at present. For contemporary and near-future AI systems, though, the risk landscape is quite well explored.
7 – Open Research Questions
These are fantastic questions, and I’m glad to see that some of them are covered by a recent grant application I made. Hopefully the grant decision-makers read these forums! I actually have something of a research group forming in this precise area, so feel free to drop me a message if there’s likely to be any overlap, and I’m happy to share research directions etc. :)
There are huge technical research questions that must be answered to avoid tragedy, including important advancements in technical AI safety, evaluations and regulation. It is the author’s opinion that corporate governance should sit alongside these fields, with a few questions requiring particular priority and focus:
One final point of input that may be valuable: in most of my experience of hiring people for risk management / compliance / governance roles around high-risk AI systems, the best candidates in the long run tend to be people with an interdisciplinary STEM and social studies background. It is tremendously hard to find these people. There needs to be much, much more effort put towards sharing skills and knowledge between the socio-legal and STEM spheres, though a glance at my profile might show a bit of bias in this statement! Still, for these types of roles that kind of balance is important. I understand that many European universities now offer such interdisciplinary courses, but no degrees yet. Perhaps the winds will change.
Apologies if this comment was overly long! This is a very important area of AI governance and it was worth taking the time to put some thoughts on your fantastic post together. Looking forward to seeing your future posts—particularly in this area!