Computer science student at UCL. Previously finance lead at CEA, Nov 2019 - Aug 2021.
Ben
Sounds interesting! I’d be interested in:
Could Richard give a summary of his conversation with Eliezer, and on what points he agrees and disagrees with him?
(Perhaps this has been covered somewhere else) Could Richard give a broad overview of different approaches to AI alignment and which ones he thinks are most promising?
Thanks!
Thanks for sharing this piece and for seeking constructive feedback. I’d agree with most of the points made by other commenters. I would also suggest:
Engage more with primary sources and more things written by people outside of effective altruism. There are thousands of climate scientists with interesting things to say, and a relatively small number of people in EA thinking about this.
General humility about this field—we don’t have great data on what the climate and society will be like in 50, 100, 200, 500+ years’ time, and it’s hard to know what the limits for habitation / adaptation will be.
How would you define existential threat? I’ve heard David Wallace-Wells say that he thinks climate change is already an existential threat because it’s already leading us to change how we live our lives. You seem to use Bostrom’s definition. Why do you think it’s better?
Neuralink. The Culture series also has voice-activated assistants that are a bit like Alexa.
Thanks! Did you think it was worth a read?
Interesting. OK, I added a link to this as an answer. Thanks for suggesting!
I put down some fiction with a bit of a longtermist bent that I enjoyed here.
I don’t think any of the protagonists / characters in these books are “an EA” (whatever that means) in the way that question seems to be looking for.
Fascinating, thanks for sharing!
Some longtermist fiction
Thanks—happy to help.
1) You’re right that pandemics and climate change are both part of the course. Taking the figures in The Precipice at face value, the biggest risks are unaligned AI and engineered pandemics. From the unit list at UCL, and the biographies of the leaders of the ‘natural and anthropogenic risks’ unit, Joanna Faure Walker (a geophysicist) and Punam Yadav (who focuses on humanitarian projects), I couldn’t see any specific content on weaponisation and conflict, which are topics I’m more interested in. That’s not to say the course isn’t valuable—and there is no one EA route—but from my own perspective there’s a lot of technical background I’d like to cover. I also couldn’t see anything on risks from AI anywhere in the UCL course, which seems like an oversight given today’s advances in autonomous weapons.
2) Yes, I’ve heard good things about the course. Having worked at CEA, I think it’s worth dispelling the myth that there are courses recommended by the whole EA community—it seems to me that EA is a lot more disparate than people think. And you might disagree with me, and decide to do the course anyway—or do many other totally different things!
3) It took me about 2 years to gradually move my interests over from international relations and conflict theory to wanting to study computer science.
Applying to master’s courses is quite costly in time and fees—each application requires a personal statement, references from your undergraduate degree, and something like a £150 application fee.
Bootcamps typically have rolling applications throughout the year, whereas if you did the master’s you’d probably start in Sep 2022, which means applying around Nov 2021 to June 2022, quite a while away.
If you’re submitting master’s applications and considering coding bootcamps at the same time, it seems worth spending a few months first thinking about what you’re interested in and what your comparative advantage might be. One way of doing this could be to try some online coding courses, and also to read some IR books (e.g. Destined for War) and maybe write up a summary and review on the forum. Each of those would probably give you more information about each route and be a cheaper test than submitting applications.
Hi there,
[Some unofficial thoughts from my own research before considering whether I should do a course like this one to be a civil servant. Other people come from different perspectives which could change the conclusions for them]
I wanted to learn more about global risks, with the aim of working on security policy. I spent several months researching courses and speaking to people at the departments. There are quite a lot that I think could be good—the courses on this list are all in London, but they seemed to be the best UK ones I could find when I was looking last year.
Risk Analysis, Disasters and Resilience MSc at King’s College London
Security and Resilience: Science and Technology at Imperial College London
For the UCL courses, I found that the UCL Institute for Risk and Disaster Reduction has a big focus on natural risks, so the degrees have comparatively little content on anthropogenic risks (those caused by humans)—see the unit guide for the course above here. In my view, in agreement with Toby Ord and much of the EA community, anthropogenic risks are a much larger risk factor, so I felt the UCL course was not well targeted. For example, there appeared to be more content on space weather (which is still important) than on nuclear security (which I think is far more important). I contacted the department at UCL to ask what they thought about their focus, given Toby Ord’s arguments, and didn’t get a response. Still, there could be something useful in that course. It seemed to me that this group at UCL was more focused on geography and physical risks than on conflict, which I saw as a weakness.
I was much more impressed by the King’s College London Department of War Studies, which offers at least a dozen different degrees and, as you might guess from the name, leans much more heavily towards anthropogenic risks, looking at both state and non-state actors. Of all their courses, I was most interested in the Science and International Security MA. I’ve heard that one of the advisors at the Open Philanthropy Project did the course, as did a friend I made at EA London, who recommended it. I found out more about the reading list, applied for the course, and was made an offer. But when I went to the open day I didn’t find the other prospective students that motivating—I’d been working for about six years, and most of the others had come straight from university, though there were exceptions.
At the time I was (and still am) interested in lots of things: risks from new technologies, including cybersecurity, encryption, autonomous weapons, and synthetic biology.
I also thought more about what I’d be learning on the course, and I had a big update when I heard this in the 80K interview with Stuart Russell, a hugely influential figure in AI safety.
Stuart Russell: But in all of these cases, the shortage is always people who understand AI. There’s no shortage of people who have degrees in international relations or degrees in political science or whatever.
...
Stuart Russell: So my view is I think you need to understand two things. You need to understand AI. I think that’s really important. Actually understand it. Actually take a course. Learn how to program.
I realised that most of these courses would probably give me good exposure to the key concepts, but, having lurked around the EA Forum, read a few books in these areas, and worked in consulting, I wasn’t likely to develop any radically new skills that would differentiate me as a graduate. I also looked up graduates from the courses on LinkedIn (a great trick, this one) and found that they often went into roles more junior than the one I was in at the time. So I decided against the program, and accepted a job offer to work at CEA instead.
A few months ago, I stumbled across the Security and Resilience: Science and Technology course at Imperial College London. This looks like a really interesting course, but I don’t have the technical knowledge to start on it.
I still want to develop technical skills, and I decided the best way to learn them was through doing some coding myself and through the UCL MSc in Computer Science, which I was very pleased to get on to, and am looking forward to starting at the end of September. I think this course could have a lot more value: it builds career capital that could be deployed across the many applications of ML, in information security, pandemic modelling, and lots of other things.
Other people will be in different positions to me, and I know several really smart people who’ve done these courses and got a lot of value out of them. Given where I was in my career, I ultimately decided they weren’t worth it for me personally, but others in similar positions might come to different conclusions.
Great, thanks for sharing!
Super useful, thanks!
[Question] Are mice or rats (as pests) a potential area of animal welfare improvement?
Thanks for posting this Ben, and great to see the discussions. Wishing you all the best!
Strongly upvoted for the link to the Castle. Btw in one podcast I’m pretty sure I heard Wiblin say “the general vibe of the thing”
[New org] Canning What We Give
Ah ok, no worries. I’m considering the course—do you know anyone who’s currently on it?
Thanks Aaron. Sure—“perhaps you’re not aware” was not intended to be condescending at all. And yes, the later sentence you wrote was the tone I was hoping for.
Eliot Higgins, investigative journalist and founder of bellingcat. He also wrote this (which I haven’t read), and he featured in (and maybe hosted) this podcast series, which I thought was interesting.