I direct the AI: Futures and Responsibility Programme (https://www.ai-far.org/) at the University of Cambridge, which works on AI strategy, safety and governance. I also work on global catastrophic risks with the Centre for the Study of Existential Risk and on AI strategy/policy with the Centre for the Future of Intelligence.
Sean_o_h
These will still be massive, and massively expensive, training runs though—big operations that will constitute very big strategic decisions only available to the best-resourced actors.
This is great! Also, I very much hope that the series on skill-building happens.
I’m not taking a position on the question of whether Nick should stay on as Director, and as noted in the post I’m on record as having been unhappy with his apology (which remains my position)*, but for balance and completeness I’d like to provide a perspective on the importance of Nick’s leadership, at least in the past.
I worked closely with Nick at FHI from 2011 to 2015. While I’ve not been at FHI much in recent years (due to busyness elsewhere), I remember the FHI of that time being a truly unique-in-academia place: devoted to letting and helping brilliant people think about important challenges in unusual ways. That was in very large part down to Nick—he is visionary, and remarkably stubborn and difficult—with the benefits and drawbacks this comes with. It is difficult to overstate the degree of pressure in academia to pull you away from doing something unique and visionary and to instead do more generic things, put time into impressing committees, keep everyone happy, etc.** It’s that stubbornness (combined with the vision), in my view, that allowed FHI to come into being and thrive (at least for a time). It is (in my view) the same stubbornness and difficultness that contributes to other issues noted in the post.
Whether Nick was the right leader at that time isn’t a question to me—FHI couldn’t have happened under anyone else. And the great work done by multiple people there (not just Nick), and a fairly remarkable range of FHI alumni post-FHI, stands as testament to that vision. Whether a different leader would be able to keep the positive aspects of the vision—and fight for them—while also being able to address the problems: maybe, I don’t know.
One model FHI might consider is a meaningful, and properly empowered, co-directorship model. I felt I had a good relationship with Nick at the time, and was able to regularly shut down ideas I thought foolish or unnecessarily annoying to the university (although it was stressful). I was also able to put time into maintaining university relationships for FHI, which seemed to keep things on the rails. But that required me being pretty stubborn too, and it seems like others may have had less success in this regard later on (although I know little of the details). It may be possible to make such a model work, with a properly empowered fellow director (e.g. an exec director / research director model).
* I am not taking a position on issues raised in the post such as whether Nick’s brand is too damaged, etc. This may be the case. For whatever it’s worth I never saw/heard racist views during my time at FHI (if I had, I would have left). I do recall initiatives, enthusiastically initiated by Nick, to engage and support scholars from under-represented regions like South America, and to encourage intellectual hubs outside of Europe/North America.
** I’ve spent a lot of time trying to navigate these things in academia, and have the scar tissue to show for it.
Reasons I would disagree:
(1) Bing is not going to make us ‘not alive’ on a coming-year time scale. It’s (in my view) a useful and large-scale manifestation of problems with LLMs that can certainly be used to push ideas and memes around safety etc, but it’s not a direct global threat.
(2) The people best-placed to deal with EA ‘scandal’ issues are unlikely to perfectly overlap with the people best-placed to deal with the opportunities/challenges Bing poses.
(3) I think it’s bad practice for a community to justify backburnering pressing community issues with an external issue, unless the case for the external issue is strong; it’s a norm that can easily become self-serving.
Thanks for putting this together, very helpful given the growth of activities in the UK!
Strong agree. I’ve been part of other communities/projects that withered away in this way.
Rees has also written multiple blurbs for Will MacAskill, Nick Bostrom et al.
Great to see such a detailed, focused, and well-researched analysis of this topic, thank you. I haven’t yet read beyond the executive summary, other than a skim of the longer report, but I’m looking forward to doing so.
A clarification that CSER gets some EA funds (a combination of SFF, SoGive, BERI in-kind support, and individual LTFF projects), but likely 1/3 or less of its budget at any given time. The overall point (all of these are a small fraction of overall EA funds) is not affected.
7.4% actually seems quite high to me (for a university without a long-time established intellectual hub, etc); I would have predicted lower in advance.
An early output from this project: Research Agenda (pre-review)
Lessons from COVID-19 for GCR governance: a research agenda
The Lessons from COVID-19 Research Agenda offers a structure to study the COVID-19 pandemic and the pandemic response from a Global Catastrophic Risk (GCR) perspective. The agenda sets out the aim of our study, which is to investigate the key decisions and actions (or failures to decide or to act) that significantly altered the course of the pandemic, with the aim of improving disaster preparedness and response in the future. It also asks how we can transfer these lessons to other areas of (potential) global catastrophic risk management, such as extreme climate change, radical loss of biodiversity, and the governance of extreme risks posed by new technologies.
Our study aims to identify key moments (‘inflection points’) that significantly shaped the catastrophic trajectory of COVID-19. To that end, this Research Agenda has identified four broad clusters where such inflection points are likely to exist: pandemic preparedness, early action, vaccines, and non-pharmaceutical interventions. The aim is to drill down into each of these clusters to ascertain whether and how the course of the pandemic might have gone differently, at both the national and the global level, using counterfactual analysis. Four aspects are used to assess candidate inflection points within each cluster: 1. the information available at the time; 2. the decision-making processes used; 3. the capacity and ability to implement different courses of action; and 4. the communication of information and decisions to different publics. The Research Agenda identifies crucial questions in each cluster for all four aspects, which should enable the identification of the key lessons from COVID-19 and the pandemic response.
At least these ones involve very different cause areas, so should be obvious from context (as contrasted with two organisations that work on long-term risk where AI risk is a focus).
Also, have some pity for the Partnership on AI and the Global Partnership on AI.
[disclaimer: acting director of CSER, but writing in personal capacity]. I’d also like to add my strongest endorsement of Carrick—as ASB says, a rare and remarkable combination of intellectual brilliance, drive, and tremendous compassion. It was a privilege to work with him at Oxford for a few years. It would be wonderful to see more people like Carrick succeeding in politics; I believe it would make for a better world.
Seán Ó hÉigeartaigh here. Since I have been named specifically, I would like to make it clear that when I write here, I do so under Sean_o_h, and have only ever done so. I am not Rubi, and I don’t know who Rubi is. I ask that the moderators check IP addresses, and reach out to me for any information that can help confirm this.
I am on leave and have not read the rest of this discussion, or the current paper (which I imagine is greatly improved from the draft I saw), so I will not participate further in this discussion at this time.
I note the rider says it’s not directed at regular forum users/people necessarily familiar with longtermism.
The Torres critiques are getting attention in non-longtermist contexts, especially among people not very familiar with the source material being critiqued. I expect to find myself linking to this post regularly when discussing it with academic colleagues who have come across the Torres critiques; several sections (the “missing context/selective quotations” section in particular) effectively demonstrate places in which the critiques do not represent the source material entirely fairly.
Thanks for this article. Just to add another project in this space: CSER’s Haydn Belfield and collaborator Shin-Shin Hua are working on a series of papers relating to corporate governance of AI, looking at topics including how to resolve tensions between competition law and cooperation on e.g. AI safety. This work is motivated by similar reasoning as captured in this post.
The first output (in the Yale Journal of Law and Technology) is here:
https://yjolt.org/ai-antitrust-reconciling-tensions-between-competition-law-and-cooperative-ai-development
We have given policy advice to and provided connections and support to various people and groups in the policy space. This includes UK civil servants, CSER staff, the Centre for Long-Term Resilience (CLTR), and the UN.
I’d like to confirm that the APPGFG’s advice/connections/support has been very helpful to various of us at CSER. I also think that the APPG has done really good work this year—to Sam, Caroline and Natasha’s great credit. Moreover, I think there is a lot to be learned from the very successful and effective policy engagement network that has grown up in the UK in recent years; which includes the APPGFG, the Centre for Long-Term Resilience, and (often with the support and guidance of the former two) input from various of the academic orgs. I think all this is likely to have played a significant role in the UK government’s present level of active engagement with issues around GCR/Xrisk and long-term issues.
For those interested in the ‘epistemic security’ topic, the most relevant report is here; it’s an area we (provisionally) plan to do more on.
https://www.repository.cam.ac.uk/handle/1810/317073
Or a brief overview by the lead author is here:
https://www.bbc.com/future/article/20210209-the-greatest-security-threat-of-the-post-truth-age
Yes, I think this is plausible-to-likely, and is a strong counter-argument to the concern I raise here.