For “empirical research”
The thing I have found most useful is the work of the UK’s Institute for Government, both their reports and their podcasts. I often pick up useful ideas on system design from them. For example, their empirical work on prisons suggests that a mix of private and public services may be better than 100% of one or the other, because you can compare the two, see which is working better, and take best practice from both. The caveat is that if you are not into UK policy there may be too much context to wade through before you reach the interesting conclusions, but it is worth a look.
Also, when looking into ideal governance structures for AI companies, I found it very useful to look at the nuclear system. Civil nuclear risk is surprisingly well managed (compared to other areas of policy I have experience of) at the international level, the regulatory level (in the UK), and the company level. And it is a hard topic, because the aim is to stop the one very bad and very unlikely scenario of a major meltdown. Nuclear is obviously better understood than AI alignment, but it is interesting nevertheless. I am not sure of the best reading on this, but perhaps start with guidance notes from the IAEA or the ONR.
[I have thoughts to add on brainstorming but might have to add that at another time]