I direct the AI: Futures and Responsibility Programme (https://www.ai-far.org/) at the University of Cambridge, which works on AI strategy, safety, and governance. I also work on global catastrophic risks with the Centre for the Study of Existential Risk and on AI strategy/policy with the Centre for the Future of Intelligence.
I am interested in the answer to this question. However, I would point out that Seth is listed as a major contributor to the FHI-GCF report.
Postdoctoral research positions at CSER (Cambridge, UK)
A quick note to say that CSER has benefited tremendously from the help of members of the effective altruism community over the past year, in areas that include outreach/web, lecture/seminar organisation and promotion, background research on foundations and research projects, grant preparation, feedback on specific research areas, community-building in Cambridge, not to mention philanthropic support. We have particularly benefited from local assistance/involvement. EAs I’d especially like to thank include Nick Robinson, Kristian Ronn, Ryan Carey, Will MacAskill, Amanda MacAskill, Alasdair Phillips-Robins, and Paul Crowley. There are some pretty substantial achievements that we owe to the assistance of the EA community.
At the moment I don’t think there are areas in which we can effectively make use of more volunteer assistance, for a couple of reasons. However, I have a few projects on the back burner that I think might be both interesting and suitable for volunteer involvement; I aim to write them up properly when I have some time over the summer. In the meantime, I would encourage interested EAs in the Cambridge/London area to attend our talks and discuss existential risk matters with new people they meet there. One of our aims over the next year is to continue building up the community of young academics and students in various disciplines who are interested in existential risk, particularly in Cambridge and London, and so far our local events seem to be acting as a very good catalyst for this. This complements the very satisfying progress we’ve been making in drawing more senior academics at Cambridge into our various planned research projects and concerns.
(EDIT: I bet I’ve forgotten at least one super-important person. Please don’t take offence; I’m under time pressure and a little low on sleep!)
(1) Emailing admin@cser.org would be good; I don’t have time to answer all emails at present (sorry!), but I do read everything and keep track of volunteer offers. (2) Keep an eye on the EA Facebook group and the LessWrong discussion forum, as I may from time to time post requests for help or offers of project involvement. A question (to moderators): is it OK to make such posts on this forum?
For (1), a paragraph about your background/strengths and a CV (not essential but helpful) would be very welcome. The kind of information FLI ask for on their volunteer form (https://docs.google.com/forms/d/17Hez-zEzrOq7Pk4agM8r7VDrBvCB6-hO_m0XJxAuVuI/viewform) is a good guide: background/experience, areas of expertise, skills/interests, what you’d most be interested in doing, and whether you’re local to Cambridge. I would add expected availability: knowing whether someone (a) can provide a lot of hours in the near term (for a near-term project) or (b) can offer a consistent X hrs/month over a longer period (for regular tasks) is tremendously helpful.
Thanks for this very interesting and clearly articulated post. A comment specifically on the “camps” thing.
Among the people actually working on existential risk/far future causes, my impression is that this ‘competition’ mindset doesn’t exist to nearly the same extent (I imagine the same is true in the ‘evidence’ causes, to borrow your framing). So it’s a little alarming, at least to me, to see competitive camps forming in the broader EA community, and to hear (for example) reports of people who value xrisk research ‘dismissing’ global poverty work.
Toby Ord, for example, is heavily involved in both global poverty/disease work and far future work with FHI. In my own case, I spread my bets by working on existential risk, while my donations (other than unclaimed expenses) go to AMF and SCI. This is because I have a lot of uncertainty on the matter, and frankly I think it’s unrealistic not to have a lot of uncertainty on it. I think this line (“There should definitely be people in the world who think about existential risk and there should definitely be people in the world providing evidence on the effectiveness of charitable interventions.”) more accurately sums up the views of most researchers I know working on existential risk.
I realise that this might be seen as going against the EA ‘ethos’ to a certain extent; a lot of the aim is to be able to rank things clearly and objectively, and choose the best causes. But this gets very difficult when you start to include the speculative causes. It’s the nature of existential risk research to be wrong a lot of the time: much of the work concerns high-impact, low-probability risks that may never come to pass, many of the interventions may not take effect until much further in the future, and it is hard to predict whether it is our work that makes the crucial difference. All of this makes the area difficult to measure.
I’m happy to say existential risk (and global catastrophic risk) are important areas of work. I think there are strong, evidence-based arguments that they have been under-served and underfunded globally to date, for reasons well articulated elsewhere. I think there are also strong arguments that e.g. global poverty is under-served and underfunded, for a different set of reasons. I’m happy to say I consider both of these to be great causes, with strong reasons to fund them. But reducing “donate to AMF vs donate to CSER” to e.g. lives saved in the present versus speculative lives saved in the future involves so many gross simplifications, and assumptions that could be wrong by so many orders of magnitude, that I’m not comfortable doing it. Add to this moral uncertainty over the value of present lives versus the value of speculative future lives, the value of animal lives, and so on, and it gets even more difficult.
I don’t know how to resolve this fully within the EA framing. My personal ‘dodge’ has been to prioritise raising funds for FHI and CSER from non-EA sources (>95% of funds raised if one excludes Musk; >80% if one includes him). I would be a hypocrite to recommend that someone stop funding AMF in favour of CSER, given that I’m not doing that myself. But I do appreciate that an EA still has to decide how to allocate her funds between xrisk, global poverty, animal altruism, and other causes. I think we will learn from continuing excellent work by ‘meta’ groups like GiveWell/OPP and others. But to a certain extent, I think we will have to recognise, and respect, that at some point there are moral and empirical uncertainties that are hard to reduce away.
Perhaps for now the best we can say is: “There are a number of very good causes that are globally under-served. There are significant uncertainties that make it difficult to rank them, and any ranking will partly depend on a person’s moral beliefs and appetite for ‘long shots’ versus ‘safe bets’, as well as on near-term opportunities for making a clear difference in a particular area. But we can agree that there are solid reasons to support this set of causes over others.”
New positions and recent hires at the Centre for the Study of Existential Risk
Please note: I have a heavy travel and deadline schedule over the next few weeks, so will answer questions when I can—please excuse any delays!
A quick reminder: applications close a week from tomorrow (midday UK time), so now would be a great time to apply if you were thinking of it, or to remind fellow researchers! Thanks so much, Seán.
New Leverhulme Centre for the Future of Intelligence (developed at CSER, with spokes led by Bostrom, Russell, and Shanahan)
Thanks so much for the spot, Daniel; greatly appreciated! I was working a little too quickly yesterday :)
(On a lighter note) Re-reading Nick Beckstead’s post, I spent a while thinking “dear lord, he was an impossibly careful-thinking/well-informed teenager”*. Then I realised you’d meant 2013, not 2003 ;)
(*This is not to say he wasn’t a very smart, well-informed teenager, of course. And within the EA community in particular, this would not be so unlikely: the remarkable quality and depth of analysis in posts published on the EA forum by people in their late teens and early twenties is one of the things that makes me most excited about the future!)
I just wanted to express my gratitude for putting extensive time into this. Articles of this kind are very useful for me and, I’m sure, for many others!
Thank you for posting this, very helpful.
This is incredibly impressive, Bernadette: not just the efficient use of money and level of philanthropy, but the remarkable balancing of commitments and lifestyle. I’m always particularly humbled when hearing what EAs with young families are achieving; it’s a reminder of how easy those of us without such commitments have it, so no excuses for us!
Environmental risk postdoctoral research position at CSER
Happy to stop posting about research position openings if these are not of interest, or if the EA forum is no longer an appropriate venue. Thanks!
Thanks for the feedback, Stefan! I was responding to an initial downvote and wanted to make sure this wasn’t seen as inappropriate for the EA forum.
Hi Evan,
My apologies; I didn’t mean to overlook your work or that of others. As I’m not online as much as I’d like, I wasn’t quite sure whether environmental risk was a priority cause area in EA at the moment, so I’d initially held off posting the opening on the forum. The GWWC post this week updated me towards it being of interest.
I was really excited to see the very detailed taxonomy of posts you’ve been planning to write in this area. The article you describe sounds very helpful for a variety of reasons. Our own deadline is May 11, so if it persuades people that this is an important area, they should still have 1-2 weeks to apply. Thanks so much!
Strongly agreed!
Seth is a very smart, formidably well-informed and careful thinker—I’d highly recommend jumping on the opportunity to ask him questions.
His latest piece in the Bulletin of the Atomic Scientists, on the “Stop Killer Robots” campaign, is worth a read too. He shares Stuart Russell’s (and others’) view that this is a bad road to go down, and also presents the campaign as a test case for existential risk work, namely a pre-emptive ban on a dangerous future technology:
“However, the most important aspect of the Campaign to Stop Killer Robots is the precedent it sets as a forward-looking effort to protect humanity from emerging technologies that could permanently end civilization or cause human extinction. Developments in biotechnology, geoengineering, and artificial intelligence, among other areas, could be so harmful that responding may not be an option. The campaign against fully autonomous weapons is a test-case, a warm-up. Humanity must get good at proactively protecting itself from new weapon technologies, because we react to them at our own peril.”
http://thebulletin.org/stopping-killer-robots-and-other-future-threats8012