Excellent overview, and I completely agree that the AI Act is an important policy for AI governance.
One quibble: as far as I know, the Center for Data Innovation is just a lobbying group for Big Tech—I was a little surprised to see it listed in “public responses from various EA and EA Adjacent organisations”.
Cool post, very interesting! I'm fascinated by this topic—the PhD thesis I'm writing is on nuclear, bio and cyber weapons arms control regimes and what lessons can be drawn for AI. So obviously I'm very into this and want to see more work done in this area; it's excellent to see you exploring the parallels. A few thoughts:
Your point on 'lock-in' seems crucial. It currently seems to me that there are 'critical junctures' (Capoccia) in which regimes get set, and then it's very hard to change them. See e.g. the failure to control nukes or cyber in the early years. ABM is a complex example—very, very hard to get back on the table, but Rumsfeld and others managed it after 30 years of battling.
My impression is that the BWC (and CWC) review conferences and meetings are often seen as arms control regimes that are pretty good at keeping up with technical developments—maybe a point in favour of centralisation.
Just on the details of the BWC, it seems worth mentioning a few things. (Nitpicky: when the UK proposed a BWC, it said verification wasn't technically possible at the time.) First, the Nixon Administration thought BW were militarily useless and had already unilaterally disarmed, so verification was less of a priority. Second, one of the reasons to want a Verification Protocol in the 90s was the revelation that the Soviets had cheated over the 70s-80s, building the biggest BW program ever. Third, the Bush Administration rejected the Verification Protocol in 2001 (pre-9/11!), in its first year—at the same time as it was ripping up START III, Kyoto, and the ABM Treaty. This is all to suggest that state interest, and elites' changing conceptions of state interest, can create space for change.
Here’s my piece on this question, from February 2021 - https://forum.effectivealtruism.org/posts/ST8vFfPropD9AYqkX/alternatives-to-donor-lotteries
FYI this link is broken for me: “Audio version available at Cold Takes ”
Interesting first point, but I disagree. To me, the increased salience of climate change in recent years can be traced back to the 2018 Special Report on Global Warming of 1.5 °C (SR15), and in particular the meme '12 years to save the world', which seems to have contributed to the start of School Strike for Climate, Extinction Rebellion and the Green New Deal. Another big new scary IPCC report on catastrophic climate change would further raise the salience of this issue-area.
I was thinking that $100m would be for all four of these topics, and that we’d get cause-prioritisation VOI across all four of these areas. $100m for impact and VOI across all four seems pretty good to me (however I’m a researcher not a funder!)
On solar geo, I'm not an expert on it and am not arguing for it myself, merely reporting that it's top of the 'asks' list for orgs like Silver Lining.
I actually rather like the framing in Xu & Ram—I don't think we know enough about >5 °C scenarios, so describing them as "unknown, implying beyond catastrophic, including existential threats" seems pretty reasonable to me. In any case, I cited that more to demonstrate the lack of research that's been done on these scenarios.
I think it's a really good point that there's something very different between research/policy orgs and orgs that deliver products and services at scale. I basically agree, but I'd slightly tweak this to "It is very hard for a charity to scale to more than $100 million per year without delivering a physical product or service."
Because orgs/companies that deliver a digital service (GiveDirectly, Facebook/Google/etc.) obviously can scale to $100 million per year.
Hell yeah! Get JGL to star—https://www.eaglobal.org/speakers/joseph-gordon-levitt/
Do you mean just the fourth bullet, or do you think this about all four?
The 1980s nuclear winter and asteroid papers (I'm thinking especially Sagan et al. and Alvarez et al.) were very influential in changing political behaviour: Gorbachev and Reagan explicitly acknowledged this on nuclear, and the asteroid evidence contributed to the 90s asteroid films and the (hugely successful!) NASA effort to track all 'dino-killers'. On the margin now, I think more scary stuff would be motivating. There's also VOI in resolving how big a concern nuclear winter is (e.g. some recent papers are skeptical)—if it turned out to not be as existential as we thought, that would change cause prioritisation for GCRs.
On geoengineering (sorry ‘climate interventions’(!)), note ‘getting more climate modelling’ is a key aim for e.g. Silver Lining.
On the fourth one, on the margin, I think more research—especially if it were the basis for an IPCC special report—would be influential. There's also VOI for our cause prioritisation. It just is really remarkable how understudied it is!
https://www.pnas.org/content/114/39/10315
https://forum.effectivealtruism.org/posts/HaXxEtx4QdykBjJi7/betting-on-the-best-case-higher-end-warming-is
Megaprojects cost $1 billion or more. Ben Todd was using the (admittedly somewhat confusing) term 'EA megaproject', by which he meant a new project that could usefully spend $100m a year. So these concerns about megaprojects don't apply.
How about we use the term '$100m-scale project'? (I considered 'kiloproject' but that's really niche.)
Here's the interesting, frustrating evaluation report: https://www.macfound.org/media/article_pdfs/nuclear-challenges-synthesis-report_public-final-1.29.21.pdf.pdf
Looks to me like a classic hits-based giving bet—you mostly don't make much impact, then occasionally (Nixon arms control, H.W. Bush's START and Nunn-Lugar, maybe Obama JCPOA/New START) get a home run.
9 PACs have raised/spent more than $100m (source). So an EA PAC?
Although I guess Sam Bankman-Fried was the second-largest donor to Biden (coindesk, Vox), and Dustin Moskovitz gave $50m; and they’re both involved with Future Forward and Mind The Gap, so maybe EA is already kinda doing this.
Developing new climate models has costs in the hundreds of millions of dollars. Useful longtermist climate modelling could include:
nuclear winter modelling
volcanic/asteroid impact modelling
solar geoengineering modelling
catastrophic (>5 °C) climate change modelling (note higher-end warming is hugely underrepresented in the literature—as argued here: https://forum.effectivealtruism.org/posts/HaXxEtx4QdykBjJi7/betting-on-the-best-case-higher-end-warming-is)
Hard science funding seems able to absorb this scale of funding, though this might not count as 'EA-specific' projects:
On climate: carbon capture, new solar materials, new battery R&D, maybe even fusion as 'hits-based giving'?
On bio preparedness there's quite a lot, e.g. Cassidy Nelson recommendations, Andy Weber recommendations
Filling the $100m funding gap in nuclear, since the MacArthur Foundation is pulling out of nuclear policy.
"Since 2015 alone, MacArthur directed 231 grants totaling >$100m, in some cases providing more than half the annual funding for individual institutions or programs."
"MacArthur was providing something like 40 to 55 percent of all the funding worldwide, of the non-government funding worldwide, on nuclear policy."
https://t.co/srsq45ejc7?amp=1
On Twitter I noted that when it comes to GCRs, it's hard to spend $100m on a policy research organisation. Note CSET was $55m over 5 years: in the ~$10m/year range. OpenPhil's grants to CHS & NTI | bio are similar.
Anthropic raised $124m—so they might be the most recent EA megaproject.
How much funding is committed to effective altruism (going forward)? Around $46 billion.
For reference, the Bill & Melinda Gates Foundation is the second largest charitable foundation in the world, holding $49.8 billion in assets.
There’s been quite a bit written on the “pro” side:
Also ARCHES, Concrete Problems in AI safety, etc
But not so much on the "con" side—people have generally just thought about opportunity cost. Your point that it might speed up harmful applications (due to safety, misuse or structural risks) is a really useful and important one! It would be hard to weigh things up—this gets into tricky differential technological development territory. I would love for there to be more thinking on this topic.
On the other hand, this isn't as much of a constraint in opposition. Political advisors are like very senior parliamentary researchers—everyone's part of one (tiny!) team.
This is a great overview, thanks for writing it up—more people should work for MPs!
Some other useful resources from 80,000 Hours on this topic:
https://80000hours.org/career-reviews/party-politics-uk/
https://80000hours.org/2014/02/an-estimate-of-the-expected-influence-of-becoming-a-politician/
https://80000hours.org/2012/02/how-hard-is-it-to-become-prime-minister-of-the-united-kingdom/