Hi Markus, this sounds really promising. I’ve been wanting to ask—does GovAI have any opportunities available for very early-career EAs (e.g., undergrads), and if not, do you plan to offer some in the future? I’ve been interested in AI policy/governance for a while, but I’m not quite sure where to begin working and engaging with the field.
Hi Lexley, Good question. Kirsten’s suggestions are all great. To that, I’d add:
Try to work as a research assistant to someone whose work you find interesting. More so than other roles, RA positions are often not advertised and are set up on a more ad hoc basis. Perhaps the best route in is to read someone’s work and then reach out to them directly.
Another thing you could do is take a stab at some important-seeming question independently. You could, e.g., pick a research question hinted at in a paper or piece (some have a section specifically suggesting further work), mentioned in a research agenda (e.g., Dafoe 2018), or included in lists of research ideas (GovAI collated one here, and Michael Aird, I think, sporadically updates this collection of lists of EA-relevant research questions).
My impression is that you can join the AGI Safety Fundamentals as an undergrad.
You could also look into the various “ERIs”: SERI, CHERI, CERI, and so on.
As for GovAI, we have in the past engaged undergrads as research assistants, and I could imagine us taking on particularly promising undergrads for the GovAI Fellowship. However, overall, I expect our comparative advantage will be working with folks who either have significant context on AI governance or who have relevant experience from some other domain. It may also lie in producing writing that can help people navigate the field.
One other option: My AI Governance and Strategy team at Rethink Priorities offers 3–5 month fellowships and permanent research assistant roles, either of which can be done at anywhere from 20 to 40 hours per week, depending on the candidate’s preference. We hire almost entirely based on performance in our work tests and interviews rather than on credentials/experience (though of course experience often helps people succeed in those), and we have sometimes hired people during or right after their undergraduate degrees.
We aren’t currently actively hiring, but people can express interest here.
(I just happened to read this post because I’m interested in GovAI, and then realised my team’s roles seem relevant to this thread—I didn’t originally come here to recruit :)
Also, I’m really excited about GovAI’s work and about them getting great hires, and I’d suggest people typically apply to many orgs/roles and see what happens rather than trying to just choose one or a few to apply to.)
Yeah, I update that whenever I learn of a new relevant collection of research questions.
That said, fwiw, I’d generally recommend that people interested in getting into research in some area:
Focus mostly on things like applying to jobs, expressing interest in working with some mentor, or applying to research training programs like the ERIs.
See independent research as only (a) a “cheap test of fit” that you spend a few days on (on weekends and such) rather than a few months, or (b) a backup option if applying to lots of roles isn’t yet working, or a thing you do while waiting to hear back.
Some people/situations would be exceptions to that general advice, but generally I think having more structure, mentorship, feedback, etc. is better.
Thank you so much for taking the time to reply! There are so many available resources, and most advice doesn’t seem to be aimed at people at my current career level, so these are really helpful in nudging me in the right direction :D
Hi Lexley, I’m sure Markus will come back with an answer, but I thought I’d suggest some other ways an undergraduate or new grad could build their knowledge and credibility:
a) Write a relevant essay or do a project for one of your classes. For example, if you’re taking a political science or economics class, you could write an essay about “Does [major theory we’ve studied] explain what we’re seeing in the current governance of AI?” You could share your essay for feedback on the Facebook group “Effective Altruism Editing and Review” and potentially even post it here, or post a summary.
b) Take an internship or job somewhere that you can learn about government or governance. For example, working in local or national government; working for a regulator; working for a corporate governance body like “fair trade” or “organic”; working for a tech company or lobbyist, especially if you can get a job taking notes for their boards or something like that. Pay attention to who’s making decisions, and who the decision-makers pay attention to—who has the power in different situations?
c) Read papers and articles in the area you’re interested in, and leave polite comments or questions. If a professor at your university has written a paper you think might be relevant, go to their office hours or ask to meet them and ask them some questions about how their work could be applied to AI governance. Consider starting a blog writing summaries or reviews of relevant papers and/or introducing some of your own thoughts. Consider going on Twitter, following people you admire, and replying to them occasionally.
I hope these ideas are useful and please let me know if you try them! I’m @Kirsten3531 on Twitter if you decide to go the Twitter route :)
Hi Kirsten, thank you so much for this write-up!! :D This is really the sort of guidance I’ve been searching for, since most advice seems to be primarily aimed at those in their mid-career or those who have already held senior positions. Will follow you on Twitter if that’s okay! (Also just realized we interacted earlier today, I’m @doseofzero :> )
Just popping in to say you might find this post (of mine) useful: Interested in EA/longtermist research careers? Here are my top recommended resources. Also this comment I left on it:
“Resources that are only relevant to people interested in AI governance and (to some extent) technical AI safety
You could participate in the AGI Safety Fundamentals course’s Governance track, or—when the course isn’t running—work through all or part of the curriculum independently. This seems like an unusually good way for most people to learn about AI risk and AI governance (from a longtermist or existential-risk-focused perspective).
Description of some organizations relevant to long-term AI governance (non-exhaustive) (2021) collects and gives an overview of some organizations you might be interested in applying to. (This link is from week 7 of the AGI Safety Fundamentals course’s Governance track.)
I think Some AI Governance Research Ideas would be my top recommendation for a public list of AI governance research ideas.
But I’d suggest being discerning with this list, as I also think that some of those ideas are relatively low-priority and that the arguments presented for prioritizing those particular ideas are relatively weak, at least from a longtermist/existential-risk-focused perspective.”
“I’d suggest being discerning with this list”
Definitely agree with this!