Announcing the Cambridge ERA:AI Fellowship 2024
The Cambridge ERA:AI Fellowship is excited to announce that applications for our eight-week, paid summer research internship in Cambridge, UK are now open.
This year, ERA (formerly the Existential Risks Alliance) will focus on AI safety and governance research, working in collaboration with several research centres at the University of Cambridge, including the Centre for the Study of Existential Risk (CSER), the Leverhulme Centre for the Future of Intelligence (CFI), and the Krueger AI Safety Lab. Fellows in this programme will research essential aspects of AI safety, including the technical foundations, design principles, and governance frameworks needed to ensure that increasingly capable AI systems are safe, secure, and reflect human values.
We invite early-career researchers from around the globe, including undergraduate students, to join us from July 1 to August 23 in Cambridge, UK. This is an exceptional opportunity to help steer the rapid progress of transformative AI through safety research and responsible governance.
During the fellowship, participants will receive:
Full funding: Fellows receive a salary equivalent to £34,125 per year, prorated to the duration of the Fellowship (see the illustrative calculation after this list). On top of this, our fellows receive complimentary accommodation, meals during working hours, visa support, and travel expense coverage.
Expert mentorship: Fellows will work closely with a mentor on their research agenda for the summer. See our Mentors page to learn about previous mentors.
Research support: Many of our alumni have gone on to publish their research in top journals and conferences, and we provide dedicated research management support to help you become a strong researcher or policymaker in the field.
Community: Fellows are immersed in a living-learning environment. They will have dedicated desk space at our office in central Cambridge and will be housed together at Emmanuel College, Cambridge.
Networking and learning opportunities: We assist fellows in developing the necessary skills, expertise, and networks to thrive in an AI safety or policy career. We offer introductions to pertinent professionals and organisations, including in Oxford and London. In special cases, we also provide extra financial assistance to support impactful career transitions.
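As a rough illustration of the pro-rated salary (assuming simple pro-rating by week over a 52-week year; the exact method may differ), the eight-week Fellowship works out to approximately:

$$\pounds 34{,}125 \times \frac{8}{52} \approx \pounds 5{,}250$$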
Our Research
The rapid advancement of artificial intelligence in recent years has brought about transformative changes across many domains. As AI systems become more sophisticated and autonomous, their potential impact on society grows correspondingly. With this increased capability comes a heightened responsibility to ensure that these systems are developed and deployed in a safe, secure, and reliable manner.
As part of the Cambridge ERA:AI Fellowship, fellows will spend eight weeks working on a research project related to AI safety. Based on four categories of possible risk — malicious use, AI race, organisational risk, and rogue AIs — we have outlined some ways to address these risks and avenues for further research. This list is far from exhaustive — instead, we hope it serves as inspiration and guidance for the types of projects we expect to see over the summer.[1]
Who Can Apply?
Anyone! We are looking to support fellows from a wide range of subject areas who are committed to reducing risks posed by advances in AI.
However, we expect the Cambridge ERA:AI Fellowship might be most useful to students (from undergraduates to postgraduates) and to people early in their careers who are looking for opportunities to conduct short research projects on topics related to AI safety and governance. Note that we are currently unable to accept applicants who will be under the age of 18 on July 1, 2024.
The Application Process
We review applications on a rolling basis and urge candidates to apply early, as offers will be extended promptly once suitable candidates are identified. Please note that the application deadline is April 5, 2024, at 23:59 US Eastern Daylight Time.
The first stage consists of essay-style questions. Applicants who progress to the next stage will be invited to interview. Successful applicants will be notified by May, and afterwards, we will work with accepted fellows to develop their project ideas and pair them with relevant mentors.
If you know someone who would excel in this opportunity, we strongly encourage you to recommend that they apply. A personal recommendation can significantly increase the likelihood that someone applies, even if they are already aware of the opportunity. Additionally, if you lead or are involved in relevant community spaces, please consider sharing an announcement about the fellowship, including a link to our site.
To apply and learn more, please visit the ERA website.
If you have questions about anything not covered on our website or in our FAQs, please email us at hello@erafellowship.org.
* Please note that in previous years, ERA’s research focus was broadly on existential risks, including biosecurity, climate change, nuclear warfare, AI safety, and meta topics. This year, we are focusing on AI safety to direct our resources and attention toward an increasingly capable emerging technology. If you are interested in doing research on X-risk/GCRs outside of AI safety and governance, you may consider applying for UChicago’s Summer Fellowship on existential risk.
[1] The four categories mentioned here are from the Center for AI Safety’s report, An Overview of Catastrophic AI Risks (2023).
AI was—in your words—already “an increasingly capable emerging technology” in 2023. Can you share more information on what made you prioritize it to the exclusion of all other existential risk cause areas (bio, nuclear, etc.) this year?
[Disclaimer: I previously worked for ERA as the Research Manager for AI Governance and—briefly—as Associate Director.]
Sure, this is a very reasonable question. The decision to prioritize AI this year stems largely from our comparative advantage and ERA’s established track record.
The Cambridge community has really exceptional AI talent, and we’ve taken advantage of this by partnering closely with the Leverhulme Centre for the Future of Intelligence and the Krueger AI Safety Lab within the Cambridge University Engineering Department (alongside AI researchers at CSER). Furthermore, the Meridian Office, the base for the ERA team and Fellowship, is also the site of the Cambridge AI Safety Hub (CAISH) and various independent AI safety projects. This is an ideal ecosystem for finding outstanding mentors and research managers with AI expertise.
A more important factor in our focus on AI is the success of ERA alumni, particularly those from the AI safety and governance track, who have gone on to conduct significant research and build impactful careers. For instance, 4 out of the 6 alumni stories highlighted here involve fellows engaged in AI safety projects beyond the fellowship. This is not a comment on fellows from other cause areas; rather, it suggests a unique opportunity for early-career researchers to make a significant impact in AI-related organizations — a feat that appears more challenging in well-established fields like nuclear or climate change.
Given the importance of AI safety and the timely opportunity to influence its development both technically and in policy-making, focusing our resources on AI appears strategically sound, especially with the aforementioned strong AI community. It is worth adding a disclaimer: our emphasis on AI does not diminish the importance of other X-risk or GCR research areas. It simply reflects our comparative strengths and track records, suggesting that our AI focus is likely to be the most effective use of resources.
As another former fellow and research manager (climate change), I find this a somewhat strange justification.
The infrastructure is here—similar to Moritz’s point: whilst Cambridge clearly has very strong AI infrastructure, the comparative advantage of Cambridge over other locations would, at least to my mind, be that it has always been a place of collaboration across different cause areas, with attention to the intersections and synergies involved (i.e. through CSER). It strikes me that other locales, such as London (which probably has one of the highest concentrations of AI governance talent in the world), may have been a better location than Cambridge. The idea that Cambridge is best suited to a purely AI focus seems surprising, given that many fellows (me included) commented on the usefulness of having people from lots of different cause areas around, and the events we managed to organise (largely thanks to the Cambridge location) were mostly non-AI yet got good attendance across the cause areas.
Success of AI-safety alumni—similar to Moritz, I remain skeptical of this point (though there is a closely related point which I probably do endorse, and which I discuss later). It doesn’t seem obvious that, when accounting for career level and whether participants were currently in education, AI safety actually scores better. Firstly, you have the problem of differing sample sizes. Take climate change, for example: there have only been 7 climate change fellows (5 of whom were last summer), and of those, depending on how you judge it, only 3 have been available for job opportunities for more than 3 months after the fellowship, so the sample size is much smaller than for AI safety and governance (and they have achieved a lot in that time). It’s also, ironically, not clear that the AI safety and governance cause areas have been more successful on the metric of ‘engaging in AI safety projects’; for example, 75% of one of the non-AI cause areas’ fellows from 2022 are currently employed in, or have offers for PhDs in, AI x-risk-related projects, which seems a similar rate of success to AI in 2022.
I think the bigger thing that acts in favour of making it AI-focused is that it is much easier for junior people to get jobs or internships in AI safety and governance than in x-risk-focused work in some other cause areas; there are simply more roles available for talented junior people that are clearly x-risk related. This might well be one reason to make ERA about AI. However, whilst I mostly buy this argument, it’s not 100% clear to me that this means the counterfactual impact is higher. Many of the people entering the AI safety part of the programme may have gone on to fill these roles anyway (I know of something similar being the case with a few rejected applicants), or the person they got the role over may have been only marginally worse. Whereas, for some of the other cause areas, the participants leaned less x-risk-y by background, so ERA’s counterfactual impact may be stronger, although it may also be higher variance. On balance, this does seem to support the AI switch, but I am by no means sure of it.
Thanks for the detailed reply. I completely understand the felt need to seize on windows of opportunity to contribute to AI Safety—I myself have changed my focus somewhat radically over the past 12 months.
I remain skeptical on a few of the points you mention, in descending order of importance to your argument (correct me if I’m wrong):
“ERA’s ex-AI-Fellows have a stronger track record”: I believe we are dealing with confounding factors here. Most importantly, AI Fellows were (if I recall correctly) significantly more senior on average than other fellows. Some had multiple years of work experience. Naturally, I would expect them to score higher on your metric of “engaging in AI Safety projects” (and we could debate how good a metric that is). [The problem here, I suspect, is the uneven recruitment across cause areas, which limits comparability.] There were also simply a lot more of them (since you mention absolute numbers). I would also think that a lot more AI opportunities have opened up compared to e.g. nuclear or climate in the last year, so it shouldn’t surprise us if more Fellows found work and/or funding more easily. (This is somewhat balanced out by the high influx of talent into the space.) Don’t get me wrong: I am incredibly proud of what the Fellows I managed have gone on to do, and helping some of them find roles after the Fellowship may easily have been the most impactful thing I did during my time at ERA. I just don’t think it’s a solid argument in the context in which you bring it up.
“The infrastructure is here”: this strikes me as a weird argument, at the very least. First of all, the infrastructure (Leverhulme etc.) has long been there (and AFAIK, the Meridian Office has always been the home of CERI/ERA), so is this a realisation you only came to now? Also: if “the infrastructure is here” is an argument, would the conclusion “you should focus on a broader set of risks because CSER is a good partner nearby” seem right to you?
“It doesn’t diminish the importance of other x-risks or GCR research areas”: it may not be what you intended, but there is something interesting about an organisation that used to be called the “Existential Risk Alliance” pivoting like this. Would I be right in assuming we can expect a new ToC alongside the change in scope? (https://forum.effectivealtruism.org/posts/9tG7daTLzyxArfQev/era-s-theory-of-change)
CHERI is also planning to run this year, I believe, for anyone looking to do non-AI projects (I am not involved with CHERI).