Haydn has been a Research Associate and Academic Project Manager at the University of Cambridge’s Centre for the Study of Existential Risk since Jan 2017.
See also Neel Nanda’s recent Simplify EA Pitches to “Holy Shit, X-Risk”.
I think this is a good demonstration that the existential risk argument can go through without the longtermism argument. I see it as helpfully building on Carl Shulman’s podcast.
To extend it even further—I posted the graphic below on Twitter back in November. These three communities & sets of ideas overlap a lot and, I think, reinforce one another, but they are intellectually & practically separable, and there are people in each section doing great work. My personal approach is to be supportive of all 7 sections, while recognising that just because someone is in one section doesn’t mean they have to be, or are, committed to the others.
I think this is a very cool idea!
To offer some examples of similar things that I’ve been involved in—the trigger has often been some new regulatory or legislative process.
“woah the EU is going to regulate for AI safety … we should get some people together to work out how this could be helpful/harmful, whether/how to nudge, what to say, and whether we need someone full-time on this” → here
“woah the US (NIST) is going to regulate for AI safety...” → here
“woah the UK wants to have a new Resilience Strategy...” → here
“woah the UK wants to set up a UK ARPA...” → here
“woah the UN is redoing the Sendai Framework for Disaster Risk Reduction? It would be cool to get existential risk in that” → here, from Clarissa Rios Rojas
This is the kind of reactive, cross-organisational, quick response you’re talking about. At the moment, this is done mostly through informal, trusted networks. It could be good to expand this and have a bigger set of people willing to jump in to help on various topics. The list seems most promising in that regard.
Other organisations:
CSET was in some ways a response to “woah the conversation around AI in DC is terrible and ill-informed”—a kind of emergency response.
FLI have been good at taking advantage of critical junctures through e.g. their huge Open Letters.
ALLFED has a rapid response capability, they wrote about it here. Having a plan, triaging, and bringing in volunteers seem like sensible steps.
Some of the monitoring work being done full-time (not by volunteers) in DC, London and Brussels seems especially useful for raising the alert to others.
Finally, CSER’s Lara Mani has been doing some really cool stuff around scenario exercises and rapid response—like this workshop. For example, she went to Saint Vincent to help with the evaluation of their response to the eruption of La Soufrière (linked to her work on volcanic GCR). She also co-wrote: When It Strikes, Are We Ready? Lessons Identified at the 7th Planetary Defense Conference in Preparing for a Near-Earth Object Impact Scenario. Basically, I think exercises could be really useful too.
3 notes on the discussion in the comments.
1. OP is clearly talking about the last 4 or so years, not FHI in e.g. 2010–2014. So the quality of FHI, or of Bostrom as a manager, in that period is not super relevant to the discussion. The skills needed to run a small, new, scrappy, blue-sky-thinking, obscure group are different from those needed to run a large, prominent, policy-influencing organisation in the media spotlight.
2. The OP is not relitigating the debate over the Apology (which I, like Miles, have discussed elsewhere) but instead is pointing out the practical difficulties of Bostrom staying. Commenters may have different views from the University, some FHI staff, FHI funders and FHI collaborators—that doesn’t mean FHI wouldn’t struggle to engage these key stakeholders.
3. In the last few weeks the heads of Open Phil and CEA have stepped aside. Before that, the leadership of CSER and 80,000 Hours has changed. There are lots of other examples in EA and beyond. Leadership change is normal and good. While there aren’t a huge number of senior staff left at FHI, presumably either Ord or Sandberg could step up (and do fine given administrative help and willingness to delegate) - or someone from outside like Greaves plausibly could be Director.
Just for context on event costs
Wilton Park
They do lots of workshops on international security. Their events cost around £54,000 for two nights.
(see page 14 of their Annual Report: “In 2020/21, we delivered 128 (76 in 2019/20) events at average net revenue of £13k (£54k in 2019/20). The lower average net revenue this year was due to the reduced income generated from virtual events compared to that generated by face to face events in 2019/20. Virtual events are shorter, generally lasting half a day, compared to face to face events which are generally for two nights.”)
West Court, Jesus College, Cambridge
I’ve been to several academic workshops and conferences here. Their prices are, for a 24 hour (overnight) rate:
West Court single ensuite: from £205.
Let’s say 100 attendees overnight for 3 days (a weekend workshop) in the cheapest rooms: £200*100*3 = £60,000.
Shakeel offers the further examples of “traditional specialist conference centres, e.g. Oberwolfach, The Rockefeller Foundation Bellagio Center or the Brocher Foundation.”
50 events (one a week) like these a year would cost £3m (£60,000*50=£3,000,000). So break even (assuming £15m was the actual cost) in 5 years—quicker if they paid less, which seems likely.
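The back-of-the-envelope arithmetic above can be sketched as follows (all numbers are the rough, rounded assumptions from this comment, not real venue quotes):

```python
# Break-even sketch for running weekly workshops at a purchased venue.
# All figures are the comment's hypothetical round numbers.

nightly_rate = 200       # £ per attendee per night (cheapest room, rounded down from £205)
attendees = 100
nights = 3
events_per_year = 50     # roughly one event a week

cost_per_event = nightly_rate * attendees * nights   # £60,000
annual_cost = cost_per_event * events_per_year       # £3,000,000
purchase_price = 15_000_000                          # assumed actual cost
break_even_years = purchase_price / annual_cost      # 5.0

print(f"£{cost_per_event:,} per event, £{annual_cost:,} per year, "
      f"break even in {break_even_years:.0f} years")
```

If the actual purchase price was lower, the break-even point arrives proportionally sooner.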
No idea if this is a good use of money, just sharing some information for context.
Remarkable interview. One key section (people should read the whole thing!):
When you talk about your mistakes, you talk about your intent. Your mother once described you as a “take-no-prisoners utilitarian.” Shouldn’t your intent be irrelevant?
Except to the extent that it’s predictive of the future. But yeah, at the end of the day, I do think that, what happened happened, and whatever I was thinking or not thinking or trying to do or not trying to do, it happened. And that sucks. That’s really bad. A lot of people got hurt. And I think that, thinking about why it happened, there are some perspectives from which it matters, including trying to figure out what to do going forward. But that doesn’t change the fact that it happened. And as you said, I’m not expecting people to say, Oh, that’s all good then. Sam didn’t intend for me to lose money. I don’t miss that money anymore. That’s not how it works.
One of your close personal mentors, the effective altruism philosopher Will MacAskill, has disavowed you. Have you talked with him since?
I haven’t talked with him. [Five second pause.] I don’t blame him. [20-second pause and false starts.] I feel incredibly bad about the impact of this on E.A. and on him, and more generally on all of the things I had been wanting to support. At the end of the day, this isn’t any of their faults.
This fucked up a lot of their plans, and a lot of plans that people had to do a lot of good for the world. And that’s terrible. And to your point, from a consequentialist perspective, what happened was really bad. And independent of intent or of anything like that, it’s still really bad.
Have you talked with your brother Gabe, who ran your Guarding Against Pandemics group? Are you worried, frankly, that you might have ruined his career too?
It doesn’t feel good either. Like, none of these things feel good.
Have you apologized to him?
Yeah, I spent a lot of last month apologizing, but I don’t know how much the apologies mean to people at the end of the day. Because what happened happened, and it’s cold comfort in a lot of ways.
I don’t want to put words in his mouth. I feel terrible about what happened to all the things he’s trying to do. He’s family, and he’s been supportive even when he didn’t have to be. But I don’t know what’s going through his head from his perspective, and I don’t want to put words in it.
Do you think someone like you deserves to go to jail? On a moral level, doesn’t someone who has inflicted so much pain—intent be damned—deserve it? There are a lot of people incarcerated in this country for far less.
What happens happens. That’s not up to me.
I can tell you what I think personally, viscerally, and morally feels right to me. Which is that I feel like I have a duty to sort of spend the rest of my life doing what I can to try and make things right as I can.
You shocked a lot of people when you referred in a recent interview to the “dumb game that we woke Westerners play.” My understanding is that you were talking about corporate social responsibility and E.S.G., not about effective altruism, right?
That’s right.
To what extent do you feel your image and donations gave you cover? I know you say you didn’t do anything wrong intentionally. But I wonder how much you were in on the joke.
Gave me cover to do what, though? I think what I was in on, so to speak, was that a lot of the C.S.R. stuff was bullshit. Half of that was always just branding, and I think that’s true for most companies. And to some extent everyone knew, but it was a game everyone played together. And it’s a dumb game.
For other readers who might be as confused as I was—there’s more in the profile on ‘indirect extinction risks’ and on other long-run effects on humanity’s potential.
Seems a bit odd to me to just post the ‘direct extinction’ bit, as essentially no serious researcher argues that there is a significant chance that climate change could ‘directly’ (and we can debate what that means) cause extinction. However, maybe this view is more widespread amongst the general public (and therefore worth responding to)?
On ‘indirect risk’, I’d be interested in hearing more on these two claims:
“it’s less important to reduce upstream issues that could be making them worse vs trying to fix them directly” (footnote 25); and
“our guess is that [climate change’s ‘indirect’] contribution to other existential risks is at most an order of magnitude higher — so something like 1 in 1,000”—which “still seems more than 10 times less likely to cause extinction than nuclear war or pandemics.”
If people are interested in reading more about climate change as a contributor to GCR, here are two CSER papers from last year (and we have a big one coming out soon):
I think I have a different view on the purpose of local group events than Larks. They’re not primarily about exploring the outer edges of knowledge, breaking new intellectual ground, discovering Cause X, etc.
They’re primarily about attracting people to effective altruism. They’re about recruitment, persuasion, raising awareness and interest, starting people on the funnel, deepening engagement etc etc.
So it’s good not to have a speaker at your event who will repel the people you want to attract.
Governments are concerned with, and interested in, near-term AI. See EU, US, UK and Chinese regulation and investment. They’re maybe about as interested in it as in clean tech and satellites, and more interested than in lab-grown meat.
Transformative AI is several decades away, governments aren’t good at planning for possibilities over long time periods. If/when we get closer to transformative capabilities, governments will pay more attention. See: nuclear energy + weapons, bioweapons + biotech, cryptography, cyberweapons, etc etc.
Jade Leung’s thesis is useful on this. So too is Jess Whittlestone’s conceptual clarification of the near-term/long-term distinction (with Carina Prunkl) and her work on transformative AI (with Ross Gruetzemacher).
One of their Directors Thomas Meier came to our most recent Cambridge Conference on Catastrophic Risk (2022). They’ve also got some good people on their board like Elaine Scarry.
I would note that my sense is that they’re a bit more focussed on analysing ‘apocalyptic imaginaries’ from a sociological and critical theory perspective. See for example their first journal issue, which is mostly critical analysis of narratives of apocalypse in fiction or conspiracy theories (rather than e.g. climate modelling of nuclear winter). They strike me as somewhat similar to the Centre for the Critical Study of Apocalyptic and Millenarian Movements. Maybe a crude analogous distinction would be between scientists and philosophers of science?
On the YouTube video: I wasn’t super impressed by that talk. It seemed more interested in pathologising research on global risks than engaging on the object level, similar to some of the more lurid recent work from Torres and Gebru. But I’m going to Schwarz’s talk this Friday in Cambridge, so hopefully I will be able to dig deeper.
Congratulations on this growth, really exciting!
Have you thought about including randomisation to facilitate evaluation?
E.g. you could include some randomisation in who is invited to events (of those who applied), or in which universities/cities get organisers (of those on the shortlist), etc. This could also be done with 80k coaching calls; I don’t know if it has been tried.
You then track who did and didn’t get the treatment, to see what effect it had. This doesn’t have to involve denying ‘treatment’ to people/places—presumably there are more applicants than there are places—you introduce randomisation at the cutoff.
This would allow some causal inference (RCT/Randomista, does x cause y, etc.) as to what effect these treatments are having (vs the control, and the null hypothesis of no effect). This could help justify impact to the community and funders. I’m sure people at e.g. JPAL, Rethink, etc. could help with research design.
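To illustrate the design, here is a minimal sketch (entirely hypothetical data, scores and thresholds) of randomisation at the cutoff: clear admits get in as usual, and only a marginal band around the cutoff is randomised, creating comparable treatment and control groups without turning away clearly strong applicants.

```python
import random

random.seed(0)

# Hypothetical applicant pool, scored by reviewers (higher = stronger).
applicants = [{"id": i, "score": random.random()} for i in range(200)]
applicants.sort(key=lambda a: a["score"], reverse=True)

places = 80   # available places at the event
band = 40     # marginal applicants around the cutoff to randomise

clear_admits = applicants[: places - band // 2]               # admitted for sure
marginal = applicants[places - band // 2 : places + band // 2]

random.shuffle(marginal)
treatment = marginal[: band // 2]   # marginal applicants admitted at random
control = marginal[band // 2 :]     # marginal applicants not admitted

admitted = clear_admits + treatment

# Later, compare outcomes (engagement, donations, career changes, ...) between
# `treatment` and `control` to estimate the causal effect of attending,
# for applicants near the cutoff.
```

The comparison is only valid for marginal applicants, but that is often exactly the group where the admission decision is genuinely uncertain and the causal question matters most.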
I think it’s a really good point that there’s something very different between research/policy orgs and orgs that deliver products and services at scale. I basically agree, but I’d slightly tweak it to:
“It is very hard for a charity to scale to more than $100 million per year without delivering a *physical* product or service.” Because digital orgs/companies that deliver a digital service (GiveDirectly, Facebook/Google/etc.) obviously can scale to $100 million per year.
I didn’t mean to imply that you were plagiarising Neel. I more wanted to point out that many reasonable people (see also Carl Shulman’s podcast) are pointing out that the existential risk argument can go through without the longtermism argument.
I posted the graphic below on twitter back in Nov. These three communities & sets of ideas overlap a lot and I think reinforce one another, but they are intellectually & practically separable, and there are people in each section doing great work. Just because someone is in one section doesn’t mean they have to be, or are, committed to others.
Hi Larks and John, Thanks for sharing this with me ahead of posting.
Five notes for readers.
1.
First, this Bill isn’t an EA Bill. This is recognised a bit in the post, but I really want to underline it. It’s led by Lord John Bird and his office, and supported by Today for Tomorrow. It mostly builds on the Welsh Commissioner for Future Generations. None of them are ‘EA’. There are about 3-4 supporters who could plausibly be labelled EA, out of ~100 institutional supporters.
2.
Second, on the merits of the Bill—to add a little to Sam’s excellent overview. Some useful further readings:
The independent House of Lords library produced 2 useful briefings:
https://lordslibrary.parliament.uk/wellbeing-of-future-generations-bill-hl/
https://lordslibrary.parliament.uk/research-briefings/lln-2019-0076/
Great CSER overview of several forms of institutional representation: https://www.cser.ac.uk/resources/representation-future-generations/
The text of the 2015 Welsh Act: https://www.futuregenerations.wales/about-us/future-generations-act/
An assessment of how the 2015 Act has gone in Wales over the last 7 years, from one of its architects: https://www.futuregenerations.wales/resources_posts/futuregen-lessons-from-a-small-country/
A more independent assessment: https://link.springer.com/book/10.1007/978-3-030-02230-3
Overview of many longtermist institutions: https://globalprioritiesinstitute.org/wp-content/uploads/Tyler-M-John-and-William-MacAskill_Longtermist-institutional-reform.pdf
3.
Third, I think the Bill is mainly about ‘broad longtermism’: https://80000hours.org/podcast/episodes/ben-todd-on-varieties-of-longtermism/
I think the most important parts of the bill are about longtermist representation, rather than big welfare-affecting policies. For example, the parliamentary committee on future generations, an independent commissioner, the responsibility on ministers, the NAO/OBR oversight, the longer Risk Register timeframe, the “set some longterm goals”/impact assessments—everything seems designed to just nudge politicians to think more about future generations.
The idea, I presume, is that all that procedural nudging (without being specific about substance, which should be left to current elected politicians) will prompt more long-term thinking and move away from our incredible short-termism (e.g. see some of the stuff Cummings has written about how incredibly short-term our political culture is).
4.
Fourth, the authors note “Some supporters seem to think most of the chance for this or a similar bill being passed rests on a future Labour government, but this may not happen for many years”
I presume this is just based on me (as I said in comments on the draft). However, that’s not my view. I think both parties could (and should) support it—it’s being led in the Commons by a Conservative. I don’t think it should be coded as Labour, and if I tweeted things that may have given that impression, I regret that. I do think there’s a slightly higher chance of it being passed by a Labour government, but mainly because of the Welsh link—most Welsh MPs are Labour.
5.
Fifth and finally, I find it helpful to return to the Bill’s overview, as laid out by Lord John Bird in the explanatory notes. I think there’s a lot in here for our community to like. Let’s debate this large selection of options, identify the strongest options, and work together to implement them.
“1. The first part of this Bill establishes a set of national wellbeing goals, formulated by the Secretary of State and confirmed via a public consultation. It places a duty on public bodies and government departments to set objectives in line with these goals, whilst demonstrating certain ‘ways of working’; these are a consideration for the long-term, prevention, planning for risk, collaboration, integration and involvement. Decisions are to be accompanied by future generations impact assessments to ensure longer-term unintended consequences on national wellbeing are mitigated.
2. The second part of this Bill focuses on improving planning and spending within Government. The Bill establishes a futures and forecasting report which assesses the risks and trends, for at least the forthcoming 25 years, and lays out detailed plans on mitigating these risks; the Bill makes provision that when doing so, the views of various relevant groups must be accounted for, including the UK and UN Climate Change Committees and the views of 11-25 year olds on wellbeing. This is to improve the United Kingdom’s preparedness for existential risk. Currently, the Cabinet Office’s National Risk Register only accounts for two years into the future. The Bill also requires departments to categorise their spending into preventative tiers to encourage public bodies to think about investing more money in the short-term to make savings in the long-term, encouraging a pivot towards prevention rather than immediate relief.
3. To improve transparency and accountability within Government, the Bill allocates powers to the head of the National Audit Office to conduct examinations on public bodies in order to assess whether a body has acted in accordance with its wellbeing duties. The Bill extends the Office for Budget Responsibility’s responsibilities to examine the extent to which progress is being made towards the national indicators and subsequent milestones. This, combined with the futures and forecasting report, is used to produce advice to the Treasury to ensure long-term fiscal risks are mitigated. A Joint Select Committee on Future Generations is also established by the Bill to ensure any relevant incoming legislation can be reviewed and amendments suggested. The Bill makes provision for there to be a minister in each Government department in charge of safeguarding future generations’ interests. Their role is to promote the wellbeing goals when formulating policy and, through observing how the Bill is applied within departments, they can also feed back into how the national indicators should be adapted (after consulting with the Joint Committee and the Commission). A Future Generations Commission is to be established, consisting of an expert from each country of the United Kingdom and a young person from each devolved country to improve understanding of the future generations principle amongst public bodies and the public.”
Thanks for this! It’s mentioned in the post, and James and Fluttershy have made the point, but I just wanted to emphasise the benefits to others of Open Philanthropy continuing to engage in public discourse. Especially as this article seems to focus mostly on the costs/benefits to Open Philanthropy itself (rather than to others) of engaging in public discourse.
The analogy of academia was used. One of the reasons academics publish is to get feedback, improve their reputation and to clarify their thinking. But another, perhaps more important, reason academics publish academic papers and popular articles is to spread knowledge.
As an organisation/individual becomes more expert and established, I agree that the benefits to itself decrease and the costs increase. But the benefit to others of their work increases. It might be argued that when one is starting out the benefits of public discourse go mostly to oneself, and when one is established the benefits go mostly to others.
So in Open Philanthropy’s case it seems clear that the benefits to itself (feedback, reputation, clarifying ideas) have decreased and the costs (time and risk) have increased. But the benefits to others of sharing knowledge have increased, as it has become more expert and better at communicating.
For example, speaking personally, I have found Open Philanthropy’s shallow investigations on Global Catastrophic Risks a very valuable resource in getting people up to speed – posts like Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity have also been very informative and useful. I’m sure people working on global poverty would agree.
Again, just wanted to emphasise that others get a lot of benefit from Open Philanthropy continuing to engage in public discourse (in the quantity and quality at which it does so now).
How much funding is committed to effective altruism (going forward)? Around $46 billion.
For reference, the Bill & Melinda Gates Foundation is the second largest charitable foundation in the world, holding $49.8 billion in assets.
I agree that Effective Altruism and the existential risk prevention movement are not the same thing. Let me use this as an opportunity to trot out my Venn diagrams again. The point is that these communities and ideas overlap but don’t necessarily imply each other—you don’t have to agree to all of them because you agree with one of them, and there are good people doing good work in all the segments.
Interesting post! If you wanted to read into the comparative political science literature a little more, you might be interested in diving into the subfield of democratic backsliding (as opposed to emergence):
A Third Wave of Autocratization Is Here: What Is New About It? Lührmann & Lindberg, 2019
How Democracies Die. Levitsky & Ziblatt, 2018
On Democratic Backsliding. Bermeo, 2016
Two Modes of Democratic Breakdown: A Competing Risks Analysis of Democratic Durability. Maeda, 2010
Authoritarian Reversals and Democratic Consolidation (American Political Science Review). Svolik, 2008
Institutional Design and Democratic Consolidation in the Third World. Power & Gasiorowski, 1997
What Makes Democracies Endure? Przeworski, Alvarez, Cheibub & Limongi, 1996
The Breakdown of Democratic Regimes: Crisis, Breakdown, and Reequilibration. Linz, 1978
One of the common threads in this subfield is that once a democracy has ‘consolidated’, it seems to be fairly resilient to coups and perhaps incumbent takeover.
I certainly agree that how this interacts with new AI systems (automation, surveillance and targeting/profiling, and autonomous weapons systems) is absolutely fascinating. For one early stab, you might be interested in my colleagues’:
Really helpful contribution: it focuses on the key issues, is balanced and evidenced, and has concrete next steps. Thanks.
This is a side-note, but I dislike the EA jargon terms hinge/hingey/hinginess and think we should use the terms “critical juncture” and “criticalness” instead. This is the common term used in political science, international relations and other social sciences. It’s better theorised and more empirically backed than “hingey”, doesn’t sound silly, and is more legible to a wider community.
Critical Junctures—Oxford Handbooks Online
The Study of Critical Junctures—JSTOR
https://users.ox.ac.uk/~ssfc0073/Writings%20pdf/Critical%20Junctures%20Ox%20HB%20final.pdf
https://en.wikipedia.org/wiki/Critical_juncture_theory