Centre for the Study of Existential Risk Update: Summer 2016
Summary: Update on CSER’s progress over the 12 months since it secured its grant funding in 2015. Focuses on our hires, activities, research community-building efforts, upcoming plans, and funding situation and priorities.
1. Operational
In the past year CSER has mainly focused on hiring and getting projects set up. From September 2015 to August 2016, we have gone from 1 researcher to 8, with a couple more to come.
Our team now consists of Shahar Avin (currently working on a classification framework for global catastrophic risk scenarios), Yang Liu (decision theory for advanced AI), Bonnie Wintle (horizon-scanning for catastrophic risk), Catherine Rhodes (biorisk, biotech governance, and academic project management), Julius Weitzdorfer (law, governance, and catastrophic risk), Simon Beard (population ethics, future generations, and alternatives to cost-benefit analysis), Adrian Currie (extreme risk and the culture of science), as well as Huw Price and Seán Ó hÉigeartaigh.
Tatsuya Amano, focusing on catastrophic environmental risk, starts in September, and we will shortly be interviewing for an administrator and a biorisk postdoc. We also hope to recruit an academic project manager soon, as we are heavily ops-constrained. We are also delighted to be forming a bigger family of undergrads, PhDs, postdocs, and senior staff from related departments who take part in our workshops and discussion groups, and who occasionally attend outside workshops on our behalf. We have spun off one major side project, the AI-focused Centre for the Future of Intelligence, which launches in October and will work in partnership with CSER on AI safety and policy.
2. Overall early aims as a new Centre
In addition to specific research, our overarching early aim has been to grow the community of scholars, industry partners, and policy partners working to identify and mitigate global catastrophic and existential risks. This involves reaching out to form partnerships with relevant outside communities, both to gain insights from relevant areas of expertise and to raise the level of knowledge of, and priority given to, GCR and X-risk within those communities. We are also continuing to work to raise the reputability of X-risk concerns internationally.
3. Implementation of (2)
At Cambridge, this has taken the form of joint discussions, partnerships, and collaborative projects with:
the Synthetic Biology Strategic Research Initiative (biorisk/biotech governance)
the Global Food Security Strategic Research Initiative (biorisk/agri risk/systemic risk)
the Machine Learning Group (technical AI safety, AI impacts)
the Centre for Science and Policy (policy engagement)
the Centre for Risk Studies (policy engagement)
the Conservation Research Initiative (environmental risk)
the Lauterpacht Centre for International Law (international law/governance and GCR)
the Dept of Politics and International Relations (governance)
plus the Cambridge Philosophy departments and climate science community.
We have also been engaging with Cambridge’s MIRIx group, EA societies (e.g. 80K Cambridge and a new Cambridge x-risk student society), and relevant student policy activities (e.g. Wilberforce Conference on AI Policy).
We have been inviting leading experts from these areas to give talks as part of our Blavatnik Public Lecture Series, as well as internal lectures. In the last year we have hosted academic visits and internal lectures/discussion groups with a range of experts including Victoria Krakovna (AI), Andy Parker (geoengineering), Eric Drexler (AI), Craig Bakker (global food security), Silja Vöneky (law, human rights, and existential risk), John Sulston (biorisk, population and resource risk), Jaime Yassif (biosecurity and pandemic preparedness), Jan Leike (AI), and others.
Outside Cambridge, we have been:
Presenting at, organising workshops at, and attending major conferences and symposia in machine learning, synthetic biology, environmental risk, and the governance of emerging technologies (including, e.g., the recent White House Future of AI series).
Establishing links with relevant departments in the UK Government
Initiating discussions with relevant departments at the United Nations (in partnership with FHI)
Collaborating with the Future of Humanity Institute and the Global Priorities Project on workshops and reports
Initiating and continuing discussions with industry partners in AI.
4. Upcoming recruitment
CSER is nearing the end of its current growth phase. We will recruit for an Academic Project Manager shortly. We may also recruit for a postdoc to work either on AI strategy and policy, or AI milestones and roadmapping, depending on whether we can identify a suitable candidate. We are open to adding other members to our team beyond this – for example, if:
Exceptional researcher candidates, or senior scholars, come to our attention and funding can be obtained
High priority new research projects emerge that demand urgent attention
Exceptional candidates apply on external funding (e.g. we are currently supporting a Marie Curie Fellowship application for a nuclear disarmament project).
We remain interested in growing our domain expertise in relevant science/technology areas, and are particularly interested in candidates with strong domain expertise in AI, geoengineering, biorisk, nuclear risk, and scientific/technological foresight, and/or deep existing expertise in x-risk/GCR. However, CSER’s current grant funding is now committed, and its main focus will be on carrying out its research projects.
Meanwhile, our sister centre, the Centre for the Future of Intelligence, will open recruitment for 5 postdocs in Cambridge (technical AI safety; AI roadmapping; AI policy and responsible innovation; philosophy) from late autumn, aiming for a start date in April. Research positions will also be available at the CFI spokes in Oxford, Imperial, and Berkeley.
5. Funding situation and priorities
Our fundraising priorities over the coming year are likely to focus on improving our operations capacity (academic project management, communications and network-building, development, admin) as we are heavily constrained in these areas compared to similar organisations working in this space; the majority of our grants don’t allow for hiring for such roles. Where we identify top mission-aligned research talent with deep scientific/technological expertise, we may fundraise/write grants to enable us to hire them.
Beyond this, we are adequately funded through to mid-2018 for the majority of our current planned activities. Unconstrained funds are always of high value, as they allow us to take advantage of high-impact targets of opportunity (organising a timely workshop, committing ‘leverage’ funds to increase the likelihood of winning a grant, securing a talented researcher, or funding a promising PhD student). The latter two in particular are in keeping with our aim of being a hub and training ground for future X-risk researchers and research leaders.
However, our major long-term goal over the coming two years will be to win grants and secure funds that can be used beyond 2018, rather than to continue expanding greatly. Our grants are time-limited, and without further developments CSER’s overall funding ends in September 2018. Due in part to our initial reputational resources, leadership, and opportunities, our first successful grants had a heavy philosophy and social science component. Over time, as we build up expertise and links in the relevant areas, I expect our focus to shift more heavily towards grants that allow hiring from the relevant sciences, technology, and technology governance (while maintaining our philosophy and social science streams).
Separately, we expect to develop a set of further high-value AI safety-related projects under the CFI umbrella that we may aim to raise additional funding for, especially within technical AI safety, AI governance, and roadmapping.
6. Current priority research areas (papers to be online shortly)
Decision theory and AI (Yang Liu, Huw Price): several papers produced, one under review at Synthese; workshop in preparation.
A global catastrophic risks framework focusing on critical systems needed for global survival (group project led by Shahar Avin): paper in preparation.
Horizon-scanning and expert elicitation and aggregation methodologies for catastrophic risk and technology (Bonnie Wintle, Bill Sutherland): 1 paper published, 1 in preparation; 2 workshops in preparation.
Biotechnology governance and biosecurity (Catherine Rhodes): several papers and workshops in preparation.
Population ethics and policy (Simon Beard, Partha Dasgupta): 2 workshops and 1 paper in preparation.
Extreme risk and the culture of science (Huw Price, Adrian Currie): newly begun project.
Disaster law and governance relating to risky technologies (Julius Weitzdorfer).
Responsible development of risky technologies (Seán Ó hÉigeartaigh, Martin Rees).
Milestones in AI (early-stage working group combining experts from machine learning, logical systems, neuroscience, and cognition): 1 paper in progress.
Climate risk indicators: workshop in preparation.
7. Additional research priorities for the coming year include
Technical research on AI safety (in collaboration with Adrian Weller, the Cambridge Machine Learning Group, and the Centre for the Future of Intelligence)
Strategy and policy in AI (in collaboration with the Strategic AI Research Centre and the Centre for the Future of Intelligence)
Governance of Lethal Autonomous Weapons (in collaboration with the Lauterpacht Centre for International Law)
Catastrophic environmental risk
Information hazards
(Possibly) risks at the intersection of AI and cybersecurity
8. Autumn activities
Public Lecture Series (monthly; see website)
Internal talks and working group (by invitation only)
Workshops in preparation (by invitation only):
September 26th/27th, Population and ethics (conference and closed workshop with policymakers), in collaboration with Cumberland Lodge.
October, Cambridge. Climate risk and data analytics: bringing machine learning, data science, and statistics to bear on developing climate risk indicators.
October 18th, Cambridge. Gene drive, responsible application, and global risk: workshop with Kevin Esvelt (MIT Media Lab), Luke Alphey, CSaP, the Centre for Law, Medicine and the Life Sciences, and the Synthetic Biology SRI.
October, Cambridge. Investment strategies, shareholder engagement and climate change workshop. 1 full-day workshop in collaboration with Positive Investment (http://positiveinvest.org/).
October, Cambridge. Risk workshop with a UK government department.
November, Cambridge. Horizon scan of advances in bioengineering and their potential impacts and risks.
December. Supporting the organisation of a workshop on AI, law, and policy at NIPS (under the leadership of Adrian Weller and others).
On a personal note, I’d like to apologise for not communicating more effectively with the global EA community over the past year, and for not being present at this year’s EA Global. This was the result of an exceptionally high-workload year as we got these various projects set up. As we settle into our work, I look forward to us being a more visible presence on the landscape. We are tremendously grateful to members of the Cambridge and UK EA community who have been valuable volunteers with CSER, and participants in our research discussions, over the last year.
Seán Ó hÉigeartaigh,
Executive Director, CSER.