I am a software engineer who transitioned to tech/AI policy/governance. I strongly agree with the overall message (or at least the title) of this article: that AI governance needs technical people and technical work, especially when it comes to making regulation enforceable.
However, in the ‘types of technical work’ you lay out, I see some gaping governance gaps. You outline various tools that could be built to improve the capability of actors in the governance space, but there are many such actors, and tools are by their nature dual use: where is the piece on who these tools would be wielded by, and how they can be used responsibly? I would be more excited to see new initiatives in this space that clearly set out which actors they will work with, for which kinds of policy issues, which they will not, and why. There is also a big gap around conflicts of interest, and plenty of unavoidable legal issues crop up once you need to actually use such tools in any context beyond a voluntary initiative of a single company (which gives far weaker guarantees than measures that apply to all current and future companies, such as regulations or, to some extent, standards). There is, and will increasingly be, huge demand for companies with practical AI auditing expertise; this is a big opportunity to start filling that gap.
I think the section on ‘advising on the above’ could be fleshed out a whole lot more. At least in my experience, because this area is so new, there is a lot of talking to lots of different people, and a lot of translation, before you get to actually do these things. It helps to be the kind of technical person who is willing to learn how to communicate with a non-technical audience, to learn from people with other backgrounds about the constraints and complexities of the policymaking world, and who derives satisfaction from doing so. I think this is hugely worthwhile, though, and if you’re that kind of person and are looking for work in the area, do get in touch, as I have some opportunities (in the UK).
Finally, I’ll highlight more explicitly the risk of technical people being used for the aims of others in this space (aims that may or may not lead to good outcomes). In my view, if you really want to work at this intersection, you should be asking the above questions about anything you build: who will use this thing and how, what are the risks, and can I reduce them? And when you advise powerful actors, bringing your technical knowledge and expertise, don’t be afraid to also give decision-makers your opinion on which actions might lead to which real-world outcomes, to ask questions about the aims of the application, and to push to improve those aims.
Thanks for the comment! I agree these are important considerations and that there’s plenty my post doesn’t cover. (Part of that is because I assumed the target audience of this post—technical readers of this forum—would have limited interest in governance issues and would already be inclined to think about the impacts of their work. Though maybe I’m being too optimistic with the latter assumption.)
Were there any specific misuse risks involving the tools discussed in the post that stood out to you as being especially important to consider?