Haydn has been a Research Associate and Academic Project Manager at the University of Cambridge’s Centre for the Study of Existential Risk since Jan 2017.
I looked at OpenPhil grants tagged as “Longtermism” in 2021 or 2022 (from here or the downloadable spreadsheet). I count 142 grants made since the beginning of July 2021. The bar isn’t being set by grants per se, but as a percentage: “% of our longtermist grantmaking over the last 18 months (by dollars)”. These 142 grants are about $235m, so half would be $117m.
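For anyone who wants to reproduce this from the downloadable spreadsheet, here is a minimal sketch of the calculation. The file name and the column names ("Portfolio", "Date", "Amount") are assumptions for illustration only; adjust them to whatever the actual export uses.

```python
import pandas as pd

# Load the Open Phil grants export (file and column names are assumptions;
# adjust to match the actual downloadable spreadsheet).
grants = pd.read_csv("openphil_grants.csv", parse_dates=["Date"])

# Longtermist grants made since the beginning of July 2021 (~18 months).
longtermist = grants[
    (grants["Portfolio"] == "Longtermism") & (grants["Date"] >= "2021-07-01")
]

total = longtermist["Amount"].sum()
print(f"{len(longtermist)} grants totalling ${total / 1e6:.0f}m")
print(f"Half of that funding (by dollars): ${total / 2 / 1e6:.0f}m")

# Share accounted for by the ten largest grants.
top10 = longtermist.nlargest(10, "Amount")["Amount"].sum()
print(f"Top 10 grants: ${top10 / 1e6:.0f}m ({top10 / total:.0%} of the total)")
```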
For context, some grants are tiny, e.g. the smallest is $2,400. The top 10 represent over half that funding ($129m).
| Grant | Organization | Focus area | Amount ($) | Date |
|---|---|---|---|---|
| Centre for Effective Altruism — Biosecurity Coworking Space | Centre for Effective Altruism | Biosecurity & Pandemic Preparedness | 5,318,000 | Aug-22 |
| Effective Altruism Funds — Re-Granting Support | Center for Effective Altruism | Effective Altruism Community Growth (Longtermism) | 7,084,000 | Feb-22 |
| Centre for Effective Altruism — Harvard Square Coworking Space | Center for Effective Altruism | Effective Altruism Community Growth (Longtermism) | 8,875,000 | Aug-22 |
| Redwood Research — General Support | Redwood Research | Potential Risks from Advanced AI | 9,420,000 | Nov-21 |
| Good Forever — Regranting for Biosecurity Projects | Good Forever Foundation | Biosecurity & Pandemic Preparedness | 10,000,000 | Feb-22 |
| Redwood Research — General Support (2022) | Redwood Research | Potential Risks from Advanced AI | 10,700,000 | Aug-22 |
| Californians Against Pandemics — California Pandemic Early Detection and Prevention Act Ballot Initiative | Californians Against Pandemics | Biosecurity & Pandemic Preparedness | 11,100,000 | Oct-21 |
| Massachusetts Institute of Technology — AI Trends and Impacts Research (2022) | Massachusetts Institute of Technology | Potential Risks from Advanced AI | 13,277,348 | Mar-22 |
| Funding for AI Alignment Projects Working With Deep Learning Systems | | Potential Risks from Advanced AI | 14,459,002 | Apr-22 |
| Center for Security and Emerging Technology — General Support (August 2021) | Center for Security and Emerging Technology | Potential Risks from Advanced AI | 38,920,000 | Aug-21 |

Edit: cut the following text thanks to Linch’s spot:
Though I note that you mentioned “we also included a number of grants now-defunct FTX-associated funders had made.”—it would be helpful for you to at least release the spreadsheet of grants you considered, even if for very understandable reasons you don’t want to publish the Tier ranking.
One reading of this is that Open Phil’s new ‘bar for funding’ is that
“we should fund everything at tier 4 and better, as well as funding tier-5 grants under various conditions”
“about 40-70% of the grantmaking we did over the last 18 months would’ve qualified for funding under the new bar. I think 55% would be a reasonable point estimate.”

So (very roughly!) a new project would need to be ranked as better than the average longtermist grant over the past 18 months in order to be funded.
Is that a wild misrepresentation?
If this one goes well, it would be good to organise one in Europe as well.
I wanted to post to say I agree, EAGxLatinAmerica was wonderful—so exciting to meet such interesting people doing great work!
Also...
“I met Rob Wiblin at lunch and I didn’t recognize him.”
Ha rekt
I appreciate this quick and clear statement from CEA.
We came up with our rankings separately, but when we compared them it turned out we agreed on the top 4 + honourable mention. We then worked on the texts together.
Results from the AI testing hackathon
I think the crucial thing is funding levels.
It was only by October 1941 (after substantial nudging from the British) that Roosevelt approved serious funding. As a reminder, I’m particularly interested in ‘sprint’ projects with substantial funding: for example those in which the peak year funding reached 0.4% of GDP (Stine, 2009, see also Grace, 2015).
So to some extent they were in a race from 1939 to 1942, but I would suggest it wasn’t particularly intense; it wasn’t a sprint race.
These tragic tradeoffs also worry me deeply. Existential wagering is to me one of the more worrying, but it is also one that is possible to avoid. However, the tradeoff between existential co-option and blackmail seems particularly hard to avoid for AI.
I think my point is more like “if anyone gets anywhere near advanced AI, governments will have something to say about it—they will be a central player in shaping its development and deployment.” It seems very unlikely to me that governments would not notice or do anything about such a potentially transformative technology. It seems very unlikely to me that a company could train and deploy an advanced AI system of the kind you’re thinking about without governments regulating and directing it. On funding specifically, I would probably be >50% on governments getting involved in meaningful private-public collaboration if we get closer to substantial leaps in capabilities (though it seems unlikely to me that AI progress will get to that point by 2030).
On your regulation question, I’d note that the EU AI Act, likely to pass next year, already proposes the following requirements applying to companies providing (eg selling, licensing or selling access to) ‘general purpose AI systems’ (eg large foundation models):
Risk Management System
Data and data governance
Technical documentation
Record-keeping
Transparency and provision of information to users
Human oversight
Accuracy, robustness and cybersecurity
So they’ll already have to do (post-training) safety testing before deployment. Regulating the training of these models is different and harder, but even that seems plausible to me at some point, if training runs become ever larger and potentially more consequential. Consider the analogy that we regulate biological experiments.
Strongly agree, upvoted.
Just a minor point on the Putin quote, as it comes up so often: he was talking to a bunch of schoolkids, encouraging them to do science and technology. He said similarly supportive things about a bunch of other technologies. I’m at >90% he wasn’t referring to AGI. He’s not even that committed to AI leadership: he’s taken few actions indicating serious interest in ‘leading in AI’. Indeed, his Ukraine invasion has cut off most of his chip supplies and led to a huge exodus of AI/CS talent. It was just an off-the-cuff rhetorical remark.
This is a really useful and interesting post that I’m glad you’ve written! I agree with a lot of it, but I’ll mention one bit I’m less sure about.
I think we can have more nuance about governments “being in the race” or their “policy having strong effects”. I agree that pre-2030, a large, centralised, government-run development programme like the Apollo Project is less likely (I assume this is the central thing you have in mind). However, there are other ways governments could be involved, including funding, regulating and ‘directing’ development and deployment.
I think cyber weapons and cyber defence is a useful comparison. Much of the development—and even deployment—is led by the private sector: defence contractors in the US, criminals in some other states. Nevertheless, much of it is funded, regulated and directed by states. People didn’t think this would happen in the late 1990s and 2000s—they thought it would be private sector led. But nevertheless with cyber, we’re now in a situation where the major states (e.g. those in the P5, with big economies, militaries and nuclear weapons) have the preponderance of cyber power—they have directed and are responsible for all the largest cyber attacks (Stuxnet, 2016 espionage, NotPetya, WannaCry etc). It’s a public-private partnership, but states are in the driving seat.
Something similar might happen with AI this side of 2030, without the situation resembling the Apollo Project.
For much more on this, Jade Leung’s thesis is great: Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies
Sure—happy to. Deleted.
I agree
Remarkable interview. One key section (people should read the whole thing!):
When you talk about your mistakes, you talk about your intent. Your mother once described you as a “take-no-prisoners utilitarian.” Shouldn’t your intent be irrelevant?
Except to the extent that it’s predictive of the future. But yeah, at the end of the day, I do think that, what happened happened, and whatever I was thinking or not thinking or trying to do or not trying to do, it happened. And that sucks. That’s really bad. A lot of people got hurt. And I think that, thinking about why it happened, there are some perspectives from which it matters, including trying to figure out what to do going forward. But that doesn’t change the fact that it happened. And as you said, I’m not expecting people to say, Oh, that’s all good then. Sam didn’t intend for me to lose money. I don’t miss that money anymore. That’s not how it works.
One of your close personal mentors, the effective altruism philosopher Will MacAskill, has disavowed you. Have you talked with him since?
I haven’t talked with him. [Five second pause.] I don’t blame him. [20-second pause and false starts.] I feel incredibly bad about the impact of this on E.A. and on him, and more generally on all of the things I had been wanting to support. At the end of the day, this isn’t any of their faults.
This fucked up a lot of their plans, and a lot of plans that people had to do a lot of good for the world. And that’s terrible. And to your point, from a consequentialist perspective, what happened was really bad. And independent of intent or of anything like that, it’s still really bad.
Have you talked with your brother Gabe, who ran your Guarding Against Pandemics group? Are you worried, frankly, that you might have ruined his career too?
It doesn’t feel good either. Like, none of these things feel good.
Have you apologized to him?
Yeah, I spent a lot of last month apologizing, but I don’t know how much the apologies mean to people at the end of the day. Because what happened happened, and it’s cold comfort in a lot of ways.
I don’t want to put words in his mouth. I feel terrible about what happened to all the things he’s trying to do. He’s family, and he’s been supportive even when he didn’t have to be. But I don’t know what’s going through his head from his perspective, and I don’t want to put words in it.
Do you think someone like you deserves to go to jail? On a moral level, doesn’t someone who has inflicted so much pain—intent be damned—deserve it? There are a lot of people incarcerated in this country for far less.
What happens happens. That’s not up to me.
I can tell you what I think personally, viscerally, and morally feels right to me. Which is that I feel like I have a duty to sort of spend the rest of my life doing what I can to try and make things right as I can.
You shocked a lot of people when you referred in a recent interview to the “dumb game that we woke Westerners play.” My understanding is that you were talking about corporate social responsibility and E.S.G., not about effective altruism, right?
That’s right.
To what extent do you feel your image and donations gave you cover? I know you say you didn’t do anything wrong intentionally. But I wonder how much you were in on the joke.
Gave me cover to do what, though? I think what I was in on, so to speak, was that a lot of the C.S.R. stuff was bullshit. Half of that was always just branding, and I think that’s true for most companies. And to some extent everyone knew, but it was a game everyone played together. And it’s a dumb game.
Yeah, the breakeven point is a super rough figure; I originally wasn’t going to include it. Paying for staff and food would push it out, more events or hiring the venue out for events would bring it closer, etc. The main thing I wanted to add to the conversation is a sense of how expensive workshops and retreats are.
Wilton Park and West Court are both historic venues, so I thought they’d be good comparisons (and ones I know).
Just for context on event costs
Wilton Park
They do lots of workshops on international security. Their events cost around £54,000 for two nights.
(see page 14 of their Annual Report: “In 2020/21, we delivered 128 (76 in 2019/20) events at average net revenue of £13k (£54k in 2019/20). The lower average net revenue this year was due to the reduced income generated from virtual events compared to that generated by face to face events in 2019/20. Virtual events are shorter, generally lasting half a day, compared to face to face events which are generally for two nights.”)
West Court, Jesus College, Cambridge
I’ve been to several academic workshops and conferences here. Their prices are, for a 24 hour (overnight) rate:
West Court single ensuite: from £205

Let’s say 100 attendees overnight for 3 days (a weekend workshop) in the cheapest rooms: £200*100*3 = £60,000.
Shakeel offers the further examples of “traditional specialist conference centres, e.g. Oberwolfach, The Rockefeller Foundation Bellagio Center or the Brocher Foundation.”
50 events (one a week) like these a year would cost £3m (£60,000*50=£3,000,000). So break even (assuming £15m was the actual cost) in 5 years—quicker if they paid less, which seems likely.
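As a rough sanity check on that break-even arithmetic, here is a minimal sketch; all the inputs are the approximate figures above, and the £15m purchase price is the assumption noted in the text rather than a confirmed number.

```python
# Rough break-even sketch using the approximate figures above
# (all inputs are rough estimates from this comment, not exact costs).
cost_per_event = 60_000        # £, weekend workshop at commercial venue rates
events_per_year = 50           # roughly one event a week
purchase_price = 15_000_000    # £, assumed actual cost; likely an overestimate

annual_saving = cost_per_event * events_per_year      # £3,000,000
breakeven_years = purchase_price / annual_saving      # 5.0

print(f"Annual saving: £{annual_saving:,}")
print(f"Break-even: {breakeven_years:.1f} years")
```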
No idea if this is a good use of money, just sharing some information for context.
I think a lot of the disagreement in the comments comes down to different conceptions of headhunting. Dan, you refer to targeted/specific/direct outreach to particular individuals, but that doesn’t seem to be the crucial difference; it’s in the intent, tone and incentives.
“Hey X person, you’re doing a great job at your current job. You might be totally happy at your current job, but I thought I’d flag this cool new opportunity that seems really impactful—happy to discuss why it might be a good fit” seems fine.
Giving a hard sell, strongly denigrating the current employer or being strongly incentivised for switches (eg paid a commission) seems way less fine.
Thanks for this speech Gideon, an important point and one that I obviously agree with a lot. I thought I’d just throw in a point about policy advocacy. One benefit of the simple/siloed/hazard-centric approach is that that really is how government departments, academic fields and NGO networks are often structured. There’s a nuclear weapons field of academics, advocates and military officials that barely interacts with even the biological weapons field.
Of course, one thing that ‘complex-model’ thinking can hopefully do is identify new approaches to reduce risk and/or new affected parties and potential coalition partners—such as bringing in DEFRA and thinking about food networks.
As a field, we need to be able to zoom in and out and focus on different levels of analysis to spot possible solutions.
Notable that Mary Robinson (Chair of The Elders and former UN High Commissioner for Human Rights) and Ban Ki-moon (Deputy Chair of The Elders and former Secretary-General of the United Nations) were both at the 2020 and 2023 Doomsday Clock announcements:
https://thebulletin.org/2023/01/press-release-doomsday-clock-set-at-90-seconds-to-midnight/
https://thebulletin.org/2020/01/press-release-it-is-now-100-seconds-to-midnight/