Thanks again for this post, Vasco, and for sharing it with me for discussion beforehand. I really appreciate your work on this question. It’s super valuable to have more people thinking deeply about these issues and this post is a significant contribution.
The headline of my response is that I think you’re pointing in the right direction and the estimates I gave in my original post are too high. But I think you’re overshooting, and the probabilities you give here seem too low.
I have a few points to expand on; please do feel free to respond to each in individual comments to facilitate better discussion!
To summarize, my points are:
1. I think you’re right that my earlier estimates were too high, but I think this overcorrects in the other direction.
2. There are some issues with using the historical war data.
3. I’m still a bit confused and uneasy about your choice to use proportion killed per year rather than proportion or total killed per war.
4. I think your preferred estimate is so infinitesimally small that something must be going wrong.
First, you’re very likely right that my earlier estimates were too high. Although I still put some credence in a power law model, I think I should have incorporated more model uncertainty, and noted that other models would imply (much) lower chances of extinction-level wars.
I think @Ryan Greenblatt has made good points in other comments so won’t belabour this point other than to add that I think some method of using the mean, or geometric mean, rather than median seems reasonable to me when we face this degree of model uncertainty.
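To make this concrete, here is a minimal sketch (with made-up placeholder probabilities, not anyone’s actual estimates) of how much the aggregation choice matters when models disagree by orders of magnitude:

```python
import statistics

# Hypothetical annual probabilities of an extinction-level war implied by
# different model fits (illustrative placeholders only).
model_estimates = [1e-6, 1e-8, 1e-10, 1e-12, 1e-14]

print(f"median:          {statistics.median(model_estimates):.1e}")          # 1.0e-10
print(f"geometric mean:  {statistics.geometric_mean(model_estimates):.1e}")  # 1.0e-10
print(f"arithmetic mean: {statistics.mean(model_estimates):.1e}")            # ~2.0e-07
```

The arithmetic mean is essentially set by whichever model assigns the most risk, which is why it comes apart from the median so dramatically when the estimates span many orders of magnitude.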
One other minor point here: a reason I still like the power law fit is that there’s at least some theoretical support for this distribution (as Bear wrote about in Only the Dead), whereas I haven’t seen arguments that connect other potential fits to a theory of the underlying data-generating process. This is pretty speculative and uncertain, but it’s another reason why I don’t want to throw away the power law entirely yet.
Second, I’m still skeptical that the historical war data is the “right” prior to use. It may be “a” prior but your title might be overstating things. This is related to Aaron’s point you quote in footnote 9, about assuming wars are IID over time. I think maybe we can assume they’re I (independent), but not that they’re ID (identically distributed) over time.
I think we can be pretty confident that WWII was so much larger than other wars not just randomly, but in fact because globalization[1] and new technologies like machine guns and bombs shifted the distribution of potential war outcomes. And I think similarly that distribution has shifted again since. Cf. my discussion of war-making capacity here. Obviously past war size isn’t completely irrelevant to the potential size of current wars, but I do think not adjusting for this shift at all likely biases your estimate down.
Third, I’m still uneasy about your choice to use the annual proportion of the population killed rather than the number of deaths per war. This is just very rare in the IR world. I don’t know enough about how the COW data are constructed to assess it properly. One problem is that it seems to break the IID assumption: if we’re modelling each year as a draw, then since major wars last more than a year, subsequent draws clearly depend on previous ones. If we instead model each war as a whole as a draw (either in terms of gross deaths or deaths as a proportion of world population), then we’re at least closer to an IID world. I’m not sure about this, but it feels like it also biases your estimate down.
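As a toy illustration of the dependence worry (simulated with made-up parameters, not the COW data): if wars last multiple years, deaths in consecutive years end up positively correlated, so year-by-year draws are not independent.

```python
import random

random.seed(0)

# Toy model: each year a new war may start, lasts a random number of years,
# and adds the same per-year severity to every year it is ongoing.
N_YEARS = 10_000
annual_deaths = [0.0] * N_YEARS

for start in range(N_YEARS):
    if random.random() < 0.05:                   # chance a new war starts this year
        duration = random.randint(1, 6)          # multi-year wars
        severity = random.lognormvariate(0, 2)   # heavy-ish tailed per-year toll
        for year in range(start, min(start + duration, N_YEARS)):
            annual_deaths[year] += severity

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Lag-1 correlation between this year's and next year's toll is clearly
# positive, so the annual observations are not independent draws.
print(correlation(annual_deaths[:-1], annual_deaths[1:]))
```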
Finally, I’m a bit suspicious of infinitesimal probabilities because of the strength they give the prior. They imply we’d need enormously strong evidence to update much at all, which seems unreasonable to me.
Let’s take your preferred estimate of an annual probability of “6.36*10^-14”. That’s a 1 in 15,723,270,440,252 chance each year, or, in expectation, about one extinction-level war every 15 trillion years.
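(The conversion is just the reciprocal of the annual probability:)

```python
p = 6.36e-14            # preferred annual prior from the post
print(f"{1 / p:,.0f}")  # 15,723,270,440,252 years between events, in expectation
```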
I look around at the world and I see a nuclear-armed state fighting against a NATO-backed ally in Ukraine; I see conflict once again spreading throughout the Middle East; I see the US arming and perhaps preparing to defend Taiwan against China, which is governed by a leader who claims to consider reunification both inevitable and an existential issue for his nation.
And I see nuclear arsenals that still top 12,000 warheads and growing; I see ongoing bioweapons research powered by ever-more-capable biotechnologies; and I see obvious military interest in developing AI systems and autonomous weapons.
This does not seem like a situation that only leads to total existential destruction once every 15 trillion years.
I know you’re only talking about the prior, but your preferred estimate implies we’d need a galactically-enormous update to get to a posterior probability of war x-risk that seems reasonable. So I think something might be going wrong. Cf. some of Joe’s discussion of settling on infinitesimal priors here.
All that said, let me reiterate that I really appreciate this work!
[1] What I mean here is that we should adjust somewhat for the fact that world wars are even possible nowadays. WWII was fought across three or four continents; that just couldn’t have happened before the 1900s. But about 1/3 of the COW dataset is for pre-1900 wars.
“I inferred for Stephen’s results, the probability of a war causing human extinction conditional on it causing an annual population loss of at least 10 % has to be at least 14.8 %.”
This is interesting! I hadn’t thought about it that way and find this framing intuitively compelling.
That does seem high to me, though perhaps not ludicrously high. Past events have probably killed at least 10% of the global population, WWII was within an order of magnitude of that, and we’ve increased our war-making capacity since then. So I think it would be reasonable to put the annual chance of a war killing at least 10% of the global population at 1% or higher.
That could give some insight into the extinction tail, perhaps implying that my estimate was about 10x too high. That would still make it importantly wrong, but less egregiously wrong than the many orders of magnitude you estimate in the main post?
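Spelling out the back-of-the-envelope arithmetic I have in mind here (the 1% is my rough guess from the previous paragraph and the 14.8% is the conditional probability you inferred, so treat both as illustrative):

```python
# Annual chance of a war killing >=10% of the global population (rough guess above).
p_war_10pct = 0.01
# Conditional probability of extinction given such a war, as inferred by Vasco
# from my original estimate.
p_ext_given_10pct = 0.148

implied = p_war_10pct * p_ext_given_10pct
print(f"implied annual extinction risk:  {implied:.1e}")   # ~1.5e-03

# If the conditional probability should really be roughly 10x lower, the
# headline estimate scales down by the same factor.
adjusted = p_war_10pct * (p_ext_given_10pct / 10)
print(f"adjusted annual extinction risk: {adjusted:.1e}")  # ~1.5e-04
```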
Hm, yeah, I think you’re right. I remember seeing some curve where the value of saving a life initially rises as a person ages, then falls, but it must be determined by the other factors mentioned by others rather than the mortality thing.
This is very sad to think about, but in some contexts it may also not be the case that “saving the baby leads to greater total lifespan”. In places with high childhood mortality, for example, the expected number of life-years gained from saving a relatively young adult might be higher than from saving a baby. This is because some proportion of babies will die from various diseases early in life, whereas young adults who have “made it through” are more likely to die in old age.
I’m not sure how high infant mortality rates would have to be to make a difference though. I believe the major considerations are the philosophical ones Richard mentions.
There is also a great deal of thoughtful analysis on this question on GiveWell’s site. For example, a short intro here and a long post about AMF and population ethics here.
Super interesting list! I hadn’t heard of most of these and have ordered a few of them to read. Thank you!
Hey Ben, thanks for this great post. Really interesting to read about your experience and decision not to continue in this space.
I’m wondering if you have any sense of how quickly returns to new projects in this space might diminish? Founding an AI policy research and advocacy org seems like a slam dunk, but I’m wondering how many more ideas nearly that promising are out there.
(I edited an earlier comment to include this, but it’s a bit buried now, so I wanted to make a new comment.)
I’ve read most of the post and appendix (still not everything). To be a bit more constructive, I want to expand on how I think you could have responded better (and more quickly):
1. We were sad to hear that two of our former employees had such negative experiences working with us. We were aware of some of their complaints, but others took us by surprise.
2. We have a different perspective on many of the issues they raise. In particular, we dispute some of the most serious allegations. We’re attaching some evidence here to show that the employees were well-compensated, provided vegan food, and were absolutely not asked to transport illegal substances.
3. We are also aware that one of the ex-employees has a concerning history of behaviour which we think affects how she perceives her time working with us.
4. However, we also recognize that we made mistakes. In particular, we put ourselves and others in a risky situation by travelling and living in foreign countries with people who we both didn’t know very well and were employing. We also chose to eschew some standard practices around employment and compensation.
5. We’ve been reflecting on this, and are committing to make some fundamental changes to how we work. These include: [Insert meaningful changes here, including perhaps consulting with an outside management consultant to get more perspective and help]
I think (4) and (5) are largely missing, though I do recognize you’re making some good changes and note those about halfway through the appendix document.
Thanks for this update, your leadership, and your hard work over the last year, Zach.
It’s great to hear that Mintz’s investigation has wrapped (and to hear they found no evidence of knowledge of fraud, though of course I’m not surprised by that). I’m wondering if it would be possible for them to issue an independent statement or comment confirming your summary?
If I’m understanding this right, you assume that if someone upvoted the post, it’s because they changed their mind?
FWIW I reached out to someone involved in this at a high level a few months ago to see if there was a potential project here. They said the problem was “persuading WHO to accelerate a fairly logistically complex process”. It didn’t seem like there were many opportunities to turn money or time into impact so I didn’t pursue anything further.
I can see where Ollie’s coming from, frankly. You keep referring to these hundreds of pages of evidence, but it seems very likely you would have been better off just posting a few screenshots of the text messages that contradict some of the most egregious claims months ago. The hypothesising about “what went wrong”, the photos, the retaliation section, the guilt-tripping about focusing on this, etc. - these all undermine the discussion about the actual facts by (1) diluting the relevant evidence and (2) making this entire post bizarre and unsettling.
Hi Vasco, thank you for this! I agree with you that just extrapolating the power law likely overestimates the chance of an enormous or extinction-level war by quite a bit. I’d mentioned this in my 80,000 Hours article but just as an intuition, so it’s useful to have a mathematical argument, too. I’d be very interested to see you run the numbers, especially to see how they compare to the estimates from other strands of evidence I talk about in the 80K article.
Interesting, thanks for checking that!
What I had in mind were the data from this Pritchett paper. He sets out a range of estimates depending on what exactly you measure. For example, he shows that the US wage for construction work is 10x the median of the poorest 30 countries (p. 5). The income gains for a low-skill worker moving to the US vary depending on where they’re coming from, but range from 2.4x (Thailand) to 16x (Nigeria) (p. 4).
That’s pretty different from the paper you cite. I’m not sure what accounts for the difference right now. Hopefully we’ll see more work in this area!
As an example of how powerful these demographic shifts will be, this recent paper claims that ~all of Japan’s poor economic performance relative to other developed nations since the ’90s can be explained by its demographic shift (specifically the decline in the population share of working age adults). Think about how much consternation there has been about Japan’s slow growth. We’re all headed that way.
Interestingly, AFAIK Japan has not drastically liberalized its immigration policy in response to its slow growth. The proportion of foreign-born residents has grown a bit, but not much. Maybe this is changing, but we’ll see if anything actually happens; Japan has been struggling to grow for decades.
- I think both of these trends can occur simultaneously.
- I’m not sure it’s very helpful to think of this as “jobs moving from one country to another”. It makes it seem zero-sum, whereas it is actually a positive-sum efficiency gain.
- Migrants to higher-income countries benefit from public goods like better services and public safety in addition to higher incomes.
As Lant has pointed out, the income gain from moving from a low- to a high-income country is enormous. IIRC it can be something like a 10x increase in consumption even for someone working the same job. So even if we imagine some fixed pool of jobs, I’m not sure it is “far better” for jobs to move from high- to low-income countries. Given the choice of working the same job in a high-income or a low-income country, I think many people would choose to move to the high-income country.
Great post, Tom, thanks for writing!
One thought is that a GCR framing isn’t the only alternative to longtermism. We could also talk about caring for future generations.
This has fewer of the problems you point out (e.g. it differentiates between recoverable global catastrophes and existential catastrophes). To me, it has warm, positive associations. And it’s pluralistic, connected to indigenous worldviews and environmentalist rhetoric.
I loved Chris Miller’s Chip War.
If you’re looking for something less directly related to things like AI, I like Siddhartha Mukherjee’s books (The Emperor of All Maladies, The Gene), Charles C. Mann’s The Wizard and the Prophet, and Andrew Roberts’ Napoleon the Great.
What a sweet post 💙
Thank you also to you for setting up Vida Plena! By putting so much work into setting up a new organization you’ve helped a lot of people.
Ranil Dissanayake actually just published an article in Asterisk about the history of the poverty line concept. The dollar-a-day (now $1.25 a day or something) line was kind of arbitrary and kind of not:
“rather than make their own judgment on what constituted sufficient living, they could instead rely on the judgment of poor countries themselves. They would simply take an average of the poorest countries in the world and declare this to be the global minimum of human sufficiency”
noting further in a footnote that
“Of course, things are never quite so pure: The bank was closely involved in the development of national poverty lines around the world, so there was some element of circularity to the development of the global line.”
The whole article is very interesting, worth a read for people in this space.
Thanks Vasco! I’ll come back to this to respond in a bit more depth next week (this is a busy week).
In the meantime, I’m curious what you make of my point that a prior giving only a 1 in 15 trillion chance of an extinction-level war in any given year seems wrong.