Thanks for clarifying, that seems reasonable.
FWIW I share the view that sending all 4 volumes might not be optimal. I think I’d find it a nuisance to receive such a large/heavy item (~3 litres/~2kg by my estimate) unsolicited.
General comment: huge fan of the newsletter, and I think it’s awesome that you’re doing this sort of review. I should also caveat that I’m not an AIS researcher, so I’m not exactly the target audience.
My first guess is that there’s significant value in someone maintaining an open, exhaustive database of AIS research. My main uncertainty is whether you are best positioned to do this as things ramp up. It’s plausible to me that an org with a safety team (e.g. DeepMind/OpenAI) is already doing this in-house, or planning to do so; it’s less clear that they would be willing to maintain a public resource. I’d want to verify this, and make sure that you’re coordinating with them to avoid unnecessary duplication. More broadly, these labs might have good systems in place for maintaining databases of new research in areas with a much higher volume than AIS, so they could potentially share some best practices.
I’m excited to read this series!
It would take a lot of nuclear weapons to produce nuclear winter climate effects, so if we’re particularly worried about nuclear winter, we should focus on nuclear exchange scenarios that would involve large nuclear arsenals.
I don’t think this is quite right. Robock 2007 finds a severe nuclear winter effect from an exchange of just 100 × 15 kt bombs. AFAIK, the only country with an arsenal below that threshold today is North Korea, which would suggest that, on Robock’s modelling at least, any bilateral exchange between nuclear powers other than NK is large enough to pose a significant risk of nuclear winter.
there are an expected 1 million centuries to come, and the natural prior on the claim that we’re in the most influential century ever is 1 in 1 million. This would be too low in one important way, namely that the number of future people is decreasing every century, so it’s much less likely that the final century will be more influential than the first century. But even if we restricted ourselves to a uniform prior over the first 10% of civilisation’s history, the prior would still be as low as 1 in 100,000.
Half-baked thought: you might think that the very very long futures will mostly have been locked in very close to their start—i.e. that timescales for locking in the best futures are much much shorter than the maximum lifespan for civilisation. This would push you towards a prior over an even smaller chunk of the expected future.
Something like this view seems implicit in some ways of talking about the future, and feels plausible to me, though I’m not sure what the best arguments are.
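To make the arithmetic concrete, here’s a minimal sketch. The first two numbers come from the quoted passage; the lock-in window is a purely hypothetical assumption:

```python
# Under a uniform prior over N candidate centuries, the prior that any
# given century is the most influential is 1/N.

def uniform_prior(n_centuries: int) -> float:
    return 1 / n_centuries

print(uniform_prior(1_000_000))  # 1e-06: ~1 million expected centuries to come
print(uniform_prior(100_000))    # 1e-05: first 10% of civilisation's history

# The half-baked thought above: if the long futures are mostly locked in
# near their start, the relevant window shrinks further (size hypothetical):
print(uniform_prior(1_000))      # 1e-03: lock-in within ~1,000 centuries
```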
I would add Future Perfect and Policy.AI (CSET’s new AI policy newsletter).
+1 to all of this, and thanks for the other excellent comments.
There were, however, several accidents where the conventional explosives (that would trigger a nuclear detonation in intended use cases) in a nuclear weapon detonated (but where safety features prevented a nuclear detonation)
It’s probably worse than that: there is at least one incident where critical safety features failed, and only luck prevented a nuclear explosion.
From a declassified report on a 1961 incident, in which a bomber carrying two 4 MT warheads broke up over North Carolina [1]:
Weapon 1, which landed essentially intact, was in the “safe” position when it dropped, preventing detonation. The T-249 Arm/Safe switch worked exactly as it was supposed to, preventing a nuclear explosion.
...
[Weapon 2] landed in a free-fall. Without the parachute operating, the timer did not initiate the bomb’s high voltage battery (“trajectory arming”), a step in the arming sequence. While the Arm/Safe switch was in the “safe” position, it had become virtually armed because the impact of the crash had rotated the indicator drum to the “armed” position. But the shock also damaged the switch contacts, which had to be intact for the weapon to detonate. While Weapon 2 was not close to detonation, the fact that the physical impact of a crash could activate the same arming mechanism that had kept Weapon 1 safe showed the danger of such accidents.
In other words, the critical safety mechanism that prevented one bomb from detonating failed on the other bomb, and detonation of that bomb was avoided only due to contingent features of the crash.
[1] More info on the incident: https://nsarchive2.gwu.edu/nukevault/ebb475/
[I broadly agree with above comment and OP]
Something I find missing from the discussion of climate change (CC) as an indirect existential risk is what it implies for prioritisation. The indirect-risk framing is often used implicitly to support CC mitigation as a high-priority intervention. But funding for geoengineering governance/safety is probably on the order of millions of dollars (at most), making it plausibly around five orders of magnitude more neglected than CC mitigation; the same holds for targeted nuclear risk mitigation, reducing the risk of great power war, etc.
This suggests that donors who believe there is substantial indirect existential risk from CC are (all else equal) much better off funding work on the terminal risks, insofar as there are promising interventions that are substantially more underfunded.
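As a back-of-the-envelope comparison (both figures below are ballpark assumptions on my part, not sourced estimates):

```python
import math

# Ballpark assumptions, not sourced estimates:
cc_mitigation_annual = 500e9    # global CC-mitigation spending, ~$500B/yr
geoeng_governance_annual = 5e6  # geoengineering governance/safety, ~$5M/yr

ratio = cc_mitigation_annual / geoeng_governance_annual
print(f"ratio: {ratio:.0e}")                            # ratio: 1e+05
print(f"orders of magnitude: {math.log10(ratio):.0f}")  # orders of magnitude: 5
```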
This seems unlikely to be a useful tie-break in most cases, provided one can switch membership. UK party leadership elections are rarely contemporaneous [1] (unlike in the US), so at any given time the likelihood of a given party member being able to realise their leverage will generally differ across parties by more than a factor of 4.5x (a toy model after the footnote illustrates this).
[1] Conservatives: 1975, 1990, 1997, 2001, 2005, 2019
Labour: 1980, 1983, 1992, 1994, 2010, 2015, 2016, 2020
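As a toy illustration of why timing dominates, treat a member’s expected leverage as the chance their party holds a leadership election while they’re a member, divided by the membership sharing the vote. All numbers below are made up for illustration; the probabilities especially are not estimates:

```python
# Toy model: expected leverage per member =
#   P(leadership election while a member) / (members sharing the vote).
# All numbers are illustrative assumptions, not estimates.

def expected_leverage(p_election: float, membership: int) -> float:
    return p_election / membership

# Hypothetical moment where one party looks likely to hold an election soon:
party_a = expected_leverage(p_election=0.6, membership=130_000)
party_b = expected_leverage(p_election=0.1, membership=580_000)

print(party_a / party_b)  # ~26.8x: timing differences can swamp the
                          # ~4.5x gap from membership size alone
```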
The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (e.g. preemptive nuclear strikes).
I find this surprising. Can you point to examples?
Sorry, I should have disclaimed that I don’t think this is a sensible strategy, and that people should approach party membership in good faith (for roughly the reasons Greg outlines above). Thanks for prompting me to clarify this.
My comment was just to point out that timing is an important factor in leverage-per-member.
In fact, x-risks that eliminate human life, but leave animal life unaffected would generally be almost negligible in value to prevent compared to preventing x-risks to animals and improving their welfare.
Eliminating human life would lock in a very narrow set of futures for animals: something similar to the status quo (minus factory farming) until the Earth becomes uninhabitable. What reason is there to think the difference between these futures and those we could expect if humanity continues to exist would be negligible?
As far as we know, humans are the only beings capable of moral reasoning, systematically pushing the world toward more valuable states, embarking on multi-generational plans, and so on. That gives very strong reason to think the extinction of humanity would profoundly affect the value of the future for non-humans.
Thanks for writing this!
In the early stages, it will be doubling every week approximately
I’d be interested in pointers on how to interpret all the evidence on this:
until Jan 4: Li et al. find 7.4 days
Jan 16–Jan 30: Cheng & Shan find ~1.8 days in China, before quarantine measures started kicking in
Jan 20–Feb 6: Muniz-Rodriguez et al. find 2.5 days for Hubei [95% CI: 2.4–2.7], and other provinces ranging from 1.5 to 3.0 days (with much wider error bars)
Eyeballing the most recent charts:
outside China looks like ~4–5 days
South Korea and Italy look shorter (~2–3 days?)
I’ve also seen it suggested that the outside-China growth might be inflated due to ‘catch up’ from slow roll-out of testing.
Altogether, what is our best guess, and what evidence should we be looking out for?
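For eyeballing charts like these, here’s a minimal sketch of how to back out a doubling time from two cumulative case counts, assuming clean exponential growth (the case numbers below are placeholders, not real data):

```python
import math

def doubling_time(n1: float, n2: float, days_elapsed: float) -> float:
    """Doubling time implied by exponential growth from n1 to n2
    cases over days_elapsed days: Td = t * ln(2) / ln(n2 / n1)."""
    return days_elapsed * math.log(2) / math.log(n2 / n1)

# Placeholder numbers, not real data:
print(doubling_time(n1=100, n2=800, days_elapsed=9))     # 3.0 days
print(doubling_time(n1=1000, n2=4000, days_elapsed=10))  # 5.0 days
```

One caveat, per the catch-up-testing point above: if testing is ramping up, n2/n1 overstates true growth, which biases the implied doubling time downward.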
I will investigate this and get back to you!
The audiobook will not include the endnotes. We really couldn’t see any good way of doing this, unfortunately.
Toby is right that there’s a huge amount of great stuff in there, particularly for those already more familiar with existential risk, so I would highly recommend getting your hands on a physical or ebook version (IMO ebook is the best format for endnotes, since they’ll be hyperlinked).
Yes, I gave authorization!
Also note that your estimate for emissions in the AI explosion scenario exceeds the highest estimates of how much fossil fuel is left to burn. The upper bound given in IPCC AR5 (WG3, Ch. 7, p. 525) is ~13.6 PtC (or ~5×10^16 tons of CO2).
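For reference, the conversion between those two figures is just the CO2-to-carbon molar mass ratio (44/12):

```python
# Converting the IPCC upper bound from carbon mass to CO2 mass.
carbon_tonnes = 13.6e15  # 13.6 PtC, i.e. petatonnes of carbon
co2_per_c = 44 / 12      # molar mass ratio of CO2 to C

print(f"{carbon_tonnes * co2_per_c:.1e}")  # 5.0e+16 tons of CO2
```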
Awesome post!
$43/unit is still quite high—could you elaborate a bit more?