There is something I would really like to know, although it is only tangentially related to the above: how is taking a postdoctoral position at FHI seen comparatively with other “standard academia” paths? How could it affect future research career options? I am personally more interested from the technical side, but feel free to comment on whatever you feel is interesting.
And since you mention it: what EU AI policy should we push for? Through what mechanisms do you see EU AI policy having a positive impact, compared with the US for example? And what ways do you see for technical people to influence AI governance?

Many thanks in advance!
Thanks, Pablo. Excellent questions!
how is taking a postdoctoral position at FHI seen comparatively with other “standard academia” paths? How could it affect future research career options?
My guess is that for folks who are planning on working on FHI-esque topics in the long term, FHI is a great option. Even if you treat the role as a postdoc, staying for, say, 2 years, I think you could be well set up to continue doing important research at other institutions. Examples of this model include Owain Evans, Jan Leike, and Miles Brundage. Though all of them went on to do research in non-academic orgs, I think their time at FHI would have set them up decently for that.
The other option is planning on staying at FHI for the longer term. For people who end up being a good fit, I think that’s often an excellent option. Even though these roles are fixed term for 2 years, we’re able to extend contracts beyond that if the person ends up being a good fit, provided we have the funding (which I expect us to do for the next couple of years).
What EU AI policy should we push for?
My overall answer is that I’m not quite sure, but that it seems important to figure out! More reasoning below.
Through what mechanisms do you see EU AI policy having a positive impact, compared with the US for example?
Earlier this year, the EU Commission published its White Paper on AI, laying out a legislative agenda that's likely to be put into practice over the next couple of years. There's reason to think this legislation will be very influential globally, due to it plausibly being subject to the Brussels Effect. The Brussels Effect (Wikipedia, recent excellent book) is the phenomenon of EU legislation having a large effect beyond the EU, as it's adopted by companies and/or governments globally. We've seen this effect at work with regard to GDPR, food safety, etc. In brief, the reason this happens is that the EU has a large domestic market that is immovable, which means companies are strongly incentivised to operate in that market. Further, the EU tends to put in place stricter legislation than other jurisdictions. As such, companies need to comply with the legislation at least in the EU market. In some cases, it won't make sense for companies to provide different products to different markets, and so they adhere to the EU standard globally. Once this occurs, there's pressure for those companies to push for EU-level standards in other markets, and legislators in other jurisdictions may be tempted to put in place the EU-level standard. However, there are a number of ways one could imagine the Brussels Effect not being at work in the case of AI, which I'd be very keen to see someone start researching.
As such, I believe that the EU is much more likely than the US to determine what legislation tech/AI companies need to adhere to. The US could do the same thing, but it probably won't, because US legislators have a much smaller appetite for legislation than their EU counterparts.
What ways do you see for technical people to influence AI governance?
There are lots! I think technical folks can help solve technical problems that will help AI governance, for example following the Cooperative AI agenda, working on verification of AI systems, or otherwise solving bits of the technical AI safety problem (which would make the governance problem easier). They can also be helpful by working more directly on AI governance/policy issues. A technical background will be very useful as a credential (policy makers will take you more seriously) and it seems likely to improve your ability to think through the problems. I also think technical folks are uniquely positioned to help steer the AI researcher community in positive directions, shaping norms in favour of safe AI development and deployment.
Your paragraph on the Brussels effect was remarkably similar to the main research proposal in my FHI research scholar application that I hastily wrote, but didn’t finish before the deadline.
The Brussels effect strikes me as one of the best levers available to Europeans looking to influence global AI governance. It seems to me that better understanding how international law such as the Geneva Conventions came to be will shed light on the importance of diplomatic third parties in negotiations between superpowers.
I have been pursuing this project on my own time, figuring that if I didn’t, nobody would. How can I make my output the most useful to someone at FHI wanting to know about this?
That’s exciting to hear! Is your plan still to head into EU politics for this reason? (not sure I’m remembering correctly!)
To make it maximally helpful, you'd work with someone at FHI in putting it together. You could consider applying for the GovAI Fellowship once we open up applications. If that's not possible (we do get a lot more good applications than we're able to take on), getting plenty of steer/feedback seems helpful (you can feel free to send it past me). I would recommend spending a significant amount of time making sure the piece is clearly written, such that someone can quickly grasp what you're saying and whether it will be relevant to their interests.
In addition to Markus’ suggestion that you could consider applying to the GovAI Fellowship, you could also consider applying for a researcher role at GovAI. The deadline is October 19th.
(I don’t mean to imply that the only way to do this is to be at FHI. I don’t believe that that’s the case. I just wanted to mention that option, since Markus had mentioned a different position but not that one.)