As another former fellow and research manager (climate change), I find this a somewhat strange justification.
The infrastructure is here—similar to Moritz’s point, whilst Cambridge clearly has very strong AI infrastructure, Cambridge’s comparative advantage over any other location is, at least to my mind, that it has always been a place of collaboration across different cause areas and of thinking about the intersections and synergies between them (e.g. through CSER). It strikes me that other locales, such as London (which probably has one of the highest concentrations of AI Governance talent in the world), may in fact have been a better location than Cambridge. The idea that Cambridge is best suited to a purely AI programme seems surprising, given that many fellows (me included) commented on the usefulness of having people from lots of different cause areas around, and that the events we managed to organise (largely thanks to the Cambridge location) were mostly non-AI yet drew good attendance across the cause areas.
Success of AI-safety alumni—similar to Moritz, I remain skeptical of this point (there is a closely related point which I probably do endorse, and which I will discuss later). It doesn’t seem obvious that, once you account for career level and whether participants were still in education, AI safety actually scores better. Firstly, there is the problem of differing sample sizes. Take climate change: there have only been 7 climate change fellows (5 of them last summer), and of those, depending on how you judge it, only 3 have been available for job opportunities for more than 3 months after the fellowship, so the sample is much smaller than for AI Safety and Governance (and those fellows have achieved a lot in that time). It’s also, ironically, not clear that the AI Safety and Governance cause areas have been more successful on the metric of ‘engaging in AI safety projects’; for example, 75% of one non-AI cause area’s 2022 fellows are currently employed in, or hold PhD offers for, AI XRisk related projects, which seems a similar rate of success to AI in 2022.
I think the bigger thing that acts in favour of making it AI focused is that it is much easier for junior people to get jobs or internships in AI Safety and Governance than in XRisk-focused work in some other cause areas; there are simply more roles available for talented junior people that are clearly XRisk related. This may well be one reason to make ERA about AI. However, whilst I mostly buy this argument, it’s not 100% clear to me that it means the counterfactual impact is higher. Many of the people entering the AI safety part of the programme may have gone on to fill these roles anyway (I know of something like this being the case with a few rejected applicants), or the person they beat to the role may have been only marginally worse. Whereas for some of the other cause areas, the participants leaned less XRisk-y by background, so ERA’s counterfactual impact there may be stronger, although it may also be higher variance. On balance, I think this does support the AI switch, but I am by no means sure of it.