Thanks for doing this! I’m a mid-career policy person currently in what I consider to be a relatively high-impact role in a non-AI cause area. I am good at my job but keep thinking about AI and whether I should be pivoting to an AI policy role. I’ve been reading books about AI and I’ve listened to lots of 80,000 Hours, Dwarkesh, and other podcasts on the topic. I have a strong sense of various threat models from AI, but I still don’t have developed views on what good AI policy would look like. What should I be reading/listening to in order to develop these views? Do you think it makes sense to apply to AI policy roles before my views on optimal policy are well-developed?
Hi, thanks for the question!
I’m not sure anyone is confident what “optimal policy” is! Developing your own views on optimal policy can take some time; in the meantime, you could check out what some researchers and think tanks have to say, e.g.:
CLTR’s policy proposals
CAIS’s policy proposals
ARI’s recommendations
Some longer reads:
International AI safety report
MIRI’s AI governance research agenda
OpenPhil’s AI Governance RFP might have some pointers to some reasonable ideas
Finally, for U.S. citizens, Emerging Tech Policy Careers is great! Check out their AI policy resources.
In general, applying is a great way to get feedback and get calibrated on what you bring to the table and what you need to work on, as mentioned elsewhere in these comments. So yes, I think you should be biased towards applying to things. There are some nuances to this, including being aware of when a rejection is likely to result in a ‘cooling off period’ during which you may not be able to reapply for 6-12 months.