Haydn has been a Research Associate and Academic Project Manager at the University of Cambridge’s Centre for the Study of Existential Risk since Jan 2017.
HaydnBelfield
Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg
Nathan A. Sears (1987-2023)
Lord Martin Rees: an appreciation
Response to Torres’ ‘The Case Against Longtermism’
13 ideas for new Existential Risk Movies & TV Shows – what are your ideas?
See also Neel Nanda’s recent Simplify EA Pitches to “Holy Shit, X-Risk”.
I think this is a good demonstration that the existential risk argument can go through without the longtermism argument. I see it as helpfully building on Carl Shulman’s podcast.
To extend it even further—I posted the graphic below on twitter back in Nov. These three communities & sets of ideas overlap a lot and I think reinforce one another, but they are intellectually & practically separable, and there are people in each section doing great work. My personal approach is to be supportive of all 7 sections, but to recognise that just because someone is in one section doesn’t mean they have to be, or are, committed to the others.
I think this is a very cool idea!
To offer some examples of similar things that I’ve been involved in—the trigger has often been some new regulatory or legislative process.
“woah the EU is going to regulate for AI safety … we should get some people together to work out how this could be helpful/harmful, whether/how to nudge, what to say, and whether we need someone full-time on this” → here
“woah the US (NIST) is going to regulate for AI safety...” → here
“woah the UK wants to have a new Resilience Strategy...” → here
“woah the UK wants to set up a UK ARPA...” → here
“woah the UN is redoing the Sendai Framework for Disaster Risk Reduction? It would be cool to get existential risk in that” → here, from Clarissa Rios Rojas
This is the kind of reactive, cross-organisational, quick response you’re talking about. At the moment, this is done mostly through informal, trusted networks. It could be good to expand this and have a bigger set of people willing to jump in to help on various topics. The list seems most promising in that regard.
Other organisations:
CSET was in some ways a response to “woah the conversation around AI in DC is terrible and ill-informed”—a kind of emergency response.
FLI have been good at taking advantage of critical junctures through e.g. their huge Open Letters.
ALLFED has a rapid response capability, they wrote about it here. Having a plan, triaging, and bringing in volunteers seem like sensible steps.
Some of the monitoring work being done full-time (not by volunteers) in DC, London and Brussels seems especially useful for raising the alert to others.
Finally, CSER’s Lara Mani has been doing some really cool stuff around scenario exercises and rapid response—like this workshop. For example, she went to Saint Vincent to help with the evaluation of their response to the eruption of La Soufrière (linked to her work on volcanic GCR). She also co-wrote: When It Strikes, Are We Ready? Lessons Identified at the 7th Planetary Defense Conference in Preparing for a Near-Earth Object Impact Scenario. Basically, I think exercises could be really useful too.
How VCs can avoid being tricked by obvious frauds: Rohit Krishnan on Noahpinion (linkpost)
3 notes on the discussion in the comments.
1. OP is clearly talking about the last 4 or so years, not FHI in e.g. 2010 to 2014, so the quality of FHI, or of Bostrom as a manager, in that period is not especially relevant to the discussion. The skills needed to run a small, new, scrappy, blue-sky-thinking, obscure group are different from those needed to run a large, prominent, policy-influencing organisation in the media spotlight.
2. The OP is not relitigating the debate over the Apology (which I, like Miles, have discussed elsewhere) but instead is pointing out the practical difficulties of Bostrom staying. Commenters may have different views from the University, some FHI staff, FHI funders and FHI collaborators—that doesn’t mean FHI wouldn’t struggle to engage these key stakeholders.
3. In the last few weeks the heads of Open Phil and CEA have stepped aside. Before that, the leadership of CSER and 80,000 Hours changed. There are lots of other examples in EA and beyond. Leadership change is normal and good. While there aren’t a huge number of senior staff left at FHI, presumably either Ord or Sandberg could step up (and do fine, given administrative help and a willingness to delegate), or someone from outside, like Greaves, could plausibly be Director.
Just for context on event costs
Wilton Park
They do lots of workshops on international security. Their events cost around £54,000 for two nights.
(see page 14 of their Annual Report: “In 2020/21, we delivered 128 (76 in 2019/20) events at average net revenue of £13k (£54k in 2019/20). The lower average net revenue this year was due to the reduced income generated from virtual events compared to that generated by face to face events in 2019/20. Virtual events are shorter, generally lasting half a day, compared to face to face events which are generally for two nights.”)
West Court, Jesus College, Cambridge
I’ve been to several academic workshops and conferences here. Their prices are, for a 24 hour (overnight) rate:
West Court single ensuite: from £205. Let’s say 100 attendees overnight for 3 days (a weekend workshop) in the cheapest rooms, rounding down to £200 a night: £200 × 100 × 3 = £60,000.
Shakeel offers the further examples of “traditional specialist conference centres, e.g. Oberwolfach, The Rockefeller Foundation Bellagio Center or the Brocher Foundation.”
50 events like these a year (one a week) would cost £3m (£60,000 × 50 = £3,000,000). So it would break even (assuming £15m was the actual cost) in 5 years—quicker if they paid less, which seems likely.
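To make that back-of-the-envelope arithmetic explicit, here is a minimal sketch in Python (the per-night rate, event count and £15m figure are just the rough assumptions above, not actual accounts):

```python
# Rough break-even estimate using the assumed figures above (not actual accounts)
rate_per_room_night = 200           # £, cheapest West Court room, rounded down from £205
attendees = 100
nights = 3                          # a weekend workshop
events_per_year = 50                # roughly one a week
assumed_purchase_cost = 15_000_000  # £, the £15m figure assumed above

cost_per_event = rate_per_room_night * attendees * nights     # £60,000
annual_venue_cost = cost_per_event * events_per_year          # £3,000,000
break_even_years = assumed_purchase_cost / annual_venue_cost  # 5.0

print(f"Per event: £{cost_per_event:,}")
print(f"Per year: £{annual_venue_cost:,}")
print(f"Break-even: {break_even_years:.1f} years")
```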
No idea if this is a good use of money, just sharing some information for context.
Remarkable interview. One key section (people should read the whole thing!):
When you talk about your mistakes, you talk about your intent. Your mother once described you as a “take-no-prisoners utilitarian.” Shouldn’t your intent be irrelevant?
Except to the extent that it’s predictive of the future. But yeah, at the end of the day, I do think that, what happened happened, and whatever I was thinking or not thinking or trying to do or not trying to do, it happened. And that sucks. That’s really bad. A lot of people got hurt. And I think that, thinking about why it happened, there are some perspectives from which it matters, including trying to figure out what to do going forward. But that doesn’t change the fact that it happened. And as you said, I’m not expecting people to say, Oh, that’s all good then. Sam didn’t intend for me to lose money. I don’t miss that money anymore. That’s not how it works.
One of your close personal mentors, the effective altruism philosopher Will MacAskill, has disavowed you. Have you talked with him since?
I haven’t talked with him. [Five second pause.] I don’t blame him. [20-second pause and false starts.] I feel incredibly bad about the impact of this on E.A. and on him, and more generally on all of the things I had been wanting to support. At the end of the day, this isn’t any of their faults.
This fucked up a lot of their plans, and a lot of plans that people had to do a lot of good for the world. And that’s terrible. And to your point, from a consequentialist perspective, what happened was really bad. And independent of intent or of anything like that, it’s still really bad.
Have you talked with your brother Gabe, who ran your Guarding Against Pandemics group? Are you worried, frankly, that you might have ruined his career too?
It doesn’t feel good either. Like, none of these things feel good.
Have you apologized to him?
Yeah, I spent a lot of last month apologizing, but I don’t know how much the apologies mean to people at the end of the day. Because what happened happened, and it’s cold comfort in a lot of ways.
I don’t want to put words in his mouth. I feel terrible about what happened to all the things he’s trying to do. He’s family, and he’s been supportive even when he didn’t have to be. But I don’t know what’s going through his head from his perspective, and I don’t want to put words in it.
Do you think someone like you deserves to go to jail? On a moral level, doesn’t someone who has inflicted so much pain—intent be damned—deserve it? There are a lot of people incarcerated in this country for far less.
What happens happens. That’s not up to me.
I can tell you what I think personally, viscerally, and morally feels right to me. Which is that I feel like I have a duty to sort of spend the rest of my life doing what I can to try and make things right as I can.
You shocked a lot of people when you referred in a recent interview to the “dumb game that we woke Westerners play.” My understanding is that you were talking about corporate social responsibility and E.S.G., not about effective altruism, right?
That’s right.
To what extent do you feel your image and donations gave you cover? I know you say you didn’t do anything wrong intentionally. But I wonder how much you were in on the joke.
Gave me cover to do what, though? I think what I was in on, so to speak, was that a lot of the C.S.R. stuff was bullshit. Half of that was always just branding, and I think that’s true for most companies. And to some extent everyone knew, but it was a game everyone played together. And it’s a dumb game.
How Could AI Governance Go Wrong?
For other readers who might be similarly confused—there’s more in the profile on ‘indirect extinction risks’ and on other long-run effects on humanity’s potential.
Seems a bit odd to me to just post the ‘direct extinction’ bit, as essentially no serious researcher argues that there is a significant chance that climate change could ‘directly’ (and we can debate what that means) cause extinction. However, maybe this view is more widespread amongst the general public (and therefore worth responding to)?
On ‘indirect risk’, I’d be interested in hearing more on these two claims:
“it’s less important to reduce upstream issues that could be making them worse vs trying to fix them directly” (footnote 25); and
“our guess is that [climate change’s ‘indirect’] contribution to other existential risks is at most an order of magnitude higher — so something like 1 in 1,000”—which “still seems more than 10 times less likely to cause extinction than nuclear war or pandemics.”
If people are interested in reading more about climate change as a contributor to GCR, here are two CSER papers from last year (and we have a big one coming out soon).
I think I have a different view on the purpose of local group events than Larks. They’re not primarily about exploring the outer edges of knowledge, breaking new intellectual ground, discovering cause X, etc.
They’re primarily about attracting people to effective altruism: recruitment, persuasion, raising awareness and interest, starting people on the funnel, deepening engagement, and so on.
So it’s good not to have a speaker at your event who is going to repel the people you want to attract.
The Rival AI Deployment Problem: a Pre-deployment Agreement as the least-bad response
Governments are concerned about/interested in near-term AI. See EU, US, UK and Chinese regulation and investment. They’re maybe about as interested in it as in, say, clean tech and satellites, and more than in lab-grown meat.
Transformative AI is several decades away, and governments aren’t good at planning for possibilities over long time periods. If/when we get closer to transformative capabilities, governments will pay more attention. See: nuclear energy + weapons, bioweapons + biotech, cryptography, cyberweapons, etc.
Jade Leung’s thesis is useful on this. So too are Jess Whittlestone’s conceptual clarifications of the near-term/long-term distinction (with Carina Prunkl) and of transformative AI (with Ross Gruetzemacher).
New CSER Director: Prof Matthew Connelly
One of their Directors, Thomas Meier, came to our most recent Cambridge Conference on Catastrophic Risk (2022). They’ve also got some good people on their board, like Elaine Scarry.
I would note that my sense is that they’re a bit more focussed on analysing ‘apocalyptic imaginaries’ from a sociological and critical theory perspective. See for example their first journal issue, which is mostly critical analysis of narratives of apocalypse in fiction or conspiracy theories (rather than e.g. climate modelling of nuclear winter). They strike me as somewhat similar to the Centre for the Critical Study of Apocalyptic and Millenarian Movements. Maybe a crude analogous distinction would be between scientists and philosophers of science?
On the YouTube video: I wasn’t super impressed by that talk. It seemed more interested in pathologising research on global risks than engaging on the object level, similar to some of the more lurid recent work from Torres and Gebru. But I’m going to Schwarz’s talk this Friday in Cambridge, so hopefully I will be able to dig deeper.
This is a side-note, but I dislike the EA jargon terms hinge/hingey/hinginess and think we should use the terms “critical juncture” and “criticalness” instead. “Critical juncture” is the common term in political science, international relations and other social sciences. It’s better theorised and empirically backed than “hingey”, doesn’t sound silly, and is more legible to a wider community.
Critical Junctures—Oxford Handbooks Online
The Study of Critical Junctures—JSTOR
https://users.ox.ac.uk/~ssfc0073/Writings%20pdf/Critical%20Junctures%20Ox%20HB%20final.pdf
https://en.wikipedia.org/wiki/Critical_juncture_theory