X-risks to all life v. to humans

TL;DR: It doesn’t only matter whether an event wipes out humanity; its effect on other life matters too, as this affects the probability of intelligent life re-evolving. This could change how we prioritise and allocate resources between different x-risk areas.

Acknowledgements: This post was inspired by a conversation I had with Toby Ord when he visited EA Warwick. I’m very grateful for his ideas and for suggesting I write this. I’m grateful also to Aimee Watts, Simon Marshall, Charlotte Seigmann and others from EA Warwick for feedback and useful discussion.

I have not come across the argument below elsewhere, but I am not certain it has not been made before.

Introduction

One of the main reasons we care about preventing x-risks is that they could deny the possibility of a grand future in which galactic expansion allows for huge amounts of welfare. Such a future could be attained by humans, but also by other intelligent life that might originate on Earth.

I believe we are guilty of a substitution fallacy when considering extinction events. We are substituting:

1) “What is the probability of an event denying a grand future?” with

2) “What is the probability of the event killing all humans / killing humans to the extent that human civilisation never achieves a grand future?”

Why are these two questions different? Because humanity may not recover, but a new species may emerge on Earth and attain this grand future.

An Example

Why is this consideration important? Because the answers to these two questions may differ, and crucially the extent to which they differ may not be the same for each x-risk. Consider the following two scenarios:

  1. Scenario A: A genetically engineered pathogen is released and kills 99% of all humans. It does not cross into other species.

  2. Scenario B: An asteroid with a diameter of 15 km (larger than the Chicxulub asteroid thought to have killed the dinosaurs) collides with the Earth, killing 99% of all humans. No land vertebrate weighing more than 25 kg survives.

In each scenario, humanity almost certainly goes extinct. However, the chance of human-level intelligence re-evolving intuitively seems very different between the two. In scenario A other species are unaffected, so somewhat intelligent species are more likely to eventually evolve into intelligent life that could achieve a grand future, compared with a world in which all medium-to-large land vertebrates have been killed.

Even if this intuition is wrong, it seems very likely that the probability of intelligent life re-evolving would differ between the two scenarios.

Suppose that in the next century there is a 1% chance of scenario A and a 0.01% chance of scenario B. If we only consider the substituted question 2, about the extinction of humans, then we care roughly 100 times more about scenario A. But suppose that in scenario A there is a 70% chance of intelligent life not re-evolving, and in scenario B there is a 95% chance of intelligent life not re-evolving.

Then for question 1, scenario A gives a 0.7% chance of a true existential event and scenario B gives a 0.0095% chance. So we still care more about A, but now only by a factor of roughly 74 rather than 100.
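Spelled out, and assuming for simplicity that the chance of the event and the chance that intelligent life then fails to re-evolve can simply be multiplied together:

$$P(\text{true existential event}) = P(\text{extinction event}) \times P(\text{no re-evolution} \mid \text{extinction event})$$

$$\text{A: } 0.01 \times 0.70 = 0.007 = 0.7\% \qquad \text{B: } 0.0001 \times 0.95 = 0.000095 = 0.0095\%$$

The ratio is then $0.007 / 0.000095 \approx 74$.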

This probabilistic reasoning is still incomplete, as any newly evolved human-level intelligent life may itself succumb to these x-risks.

Another crucial consideration may be the timeline of intelligent life re-evolving. In scenario B intelligent life may re-evolve, but it may take 100 million years, as opposed to 1 million years in scenario A. How to weigh this in the probabilities is not immediately clear, but it gives us further reason to narrow the gap in how much we care about the two scenarios.

Terminology

A useful distinction may be between a “human existential threat”, which prevents humanity from attaining a grand future, and a “total existential threat”, which prevents Earth-originating intelligent life from attaining a grand future.

Even this may not be the right definition of a total existential threat, since intelligent life may also originate on other planets. The term “Earth existential threat” may then be more appropriate for an event which prevents Earth-originating intelligent life from attaining a grand future. The distinction matters because an unaligned AI may threaten life on other planets too, whereas climate change, for example, only threatens life on Earth.

Numerical Values

Putting actual values on the chances of life re-evolving, and on the timelines involved, is an incredibly difficult task for which I do not have suitable experience. However, I offer some numbers here purely speculatively, to illustrate the kind of comparisons we would hope to make.

For probabilities of human extinction from each event I use the probabilities given by Toby Ord in The Precipice.



*If an unaligned AI causes the extinction of humans, it seems it would also cause the extinction of any other equally intelligent beings that naturally evolved.

This calculation may not change which risks we are most worried about, but it could shift the probabilities enough to affect resource allocation once considerations of tractability and neglectedness are factored in. For example, we may invest more heavily in AI research at the expense of biorisk.
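To make the intended comparison concrete, here is a minimal sketch in Python. The human-extinction probabilities below are approximate century-level figures from The Precipice (Ord gives, for example, roughly 1 in 10 for unaligned AI and 1 in 1,000,000 for an asteroid impact); the re-evolution probabilities are placeholders chosen purely for illustration, not the speculative values above.

```python
# Illustrative only: extinction probabilities are approximate figures from
# The Precipice; the re-evolution probabilities are made-up placeholders.
P_EXTINCTION = {
    "unaligned AI":        1 / 10,
    "engineered pandemic": 1 / 30,
    "nuclear war":         1 / 1_000,
    "asteroid impact":     1 / 1_000_000,
}

# P(intelligent life never re-evolves | the event occurs) - placeholder guesses
P_NO_REEVOLUTION = {
    "unaligned AI":        1.0,   # per the footnote above: no successor species either
    "engineered pandemic": 0.7,
    "nuclear war":         0.8,
    "asteroid impact":     0.95,
}

for risk in sorted(P_EXTINCTION, key=lambda r: -P_EXTINCTION[r] * P_NO_REEVOLUTION[r]):
    p_true = P_EXTINCTION[risk] * P_NO_REEVOLUTION[risk]  # P(true existential event)
    print(f"{risk:20s}  human extinction {P_EXTINCTION[risk]:.1e}  ->  true x-risk {p_true:.1e}")
```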

How to incorporate the timeline values is less clear, and would require considering a more complex model.

A possible model

The Earth seems likely to remain habitable for roughly another 600 million years, so there is a time limit on intelligent life re-evolving. We could model some measure N of how long until current life on Earth reaches existential security, and let T be the amount of time the Earth will remain habitable for. In each century, various existential events with different probabilities (some independent of N, some dependent on N) could occur, each either increasing N by some amount or triggering a complete fail state. N otherwise decreases by 1 each century as progress is made, and T decreases by 1 every century regardless. The question is whether N reaches zero before T does, or before a fail state occurs.

Such a model becomes complicated quite quickly, with different x-risks having different probability distributions and impacts on N. Even then it is a simplification, since the amount of time needed for intelligent life to redevelop after an existential event is itself a random variable.
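Even so, a minimal Monte Carlo sketch in Python may help make the structure concrete. Every number below, including the starting value of N, the per-century hazard rates, and the setback sizes, is a placeholder assumption chosen only for illustration:

```python
import random

# All times are in centuries. Every number below is an illustrative placeholder.
T_HABITABLE = 6_000_000   # ~600 million years of remaining habitability
N_START = 50              # centuries until existential security if nothing goes wrong

# Per-century hazards. "setback" is how many centuries an event adds to N
# (the time for intelligence to re-evolve); "fail" marks a total existential
# event. The probabilities could also be made functions of N, as suggested above.
HAZARDS = [
    {"name": "pandemic-like (humans only)",  "p": 0.01,   "setback": 10_000,    "fail": False},
    {"name": "asteroid-like (most animals)", "p": 0.0001, "setback": 1_000_000, "fail": False},
    {"name": "unaligned AI",                 "p": 0.001,  "setback": 0,         "fail": True},
]

def run_once() -> bool:
    """Simulate one history; return True if existential security is reached in time."""
    n, t = N_START, T_HABITABLE
    while t > 0:
        if n <= 0:
            return True                    # grand future secured
        for hazard in HAZARDS:
            if random.random() < hazard["p"]:
                if hazard["fail"]:
                    return False           # complete fail state
                n += hazard["setback"]     # intelligence has to re-evolve
        n -= 1                             # a century of progress towards security
        t -= 1                             # the habitability window shrinks regardless
    return False                           # the Earth became uninhabitable first

if __name__ == "__main__":
    trials = 1_000
    wins = sum(run_once() for _ in range(trials))
    print(f"Grand future reached in {wins}/{trials} simulated histories")
```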

What conclusions can we draw?

It seems that such arguments could lead us to weight more heavily those x-risks that threaten more of life on Earth than just humans. This could increase how much we care about risks such as global warming and nuclear war compared to biorisk.

We could also conclude that x-risks and global catastrophic risks (GCRs) are not fundamentally different: each sets back the time until existential security is reached, just by radically different amounts.

How could this be taken further?

- Further reading and research specific to individual x-risks, to compare the chances of intelligent life re-evolving.
- Further development of a mathematical model, to understand how important the timelines for re-evolution are.

Note: This is my first post on the EA forum. I am very grateful for any feedback or comments.

Edit: 0.095% changed to 0.0095% for risk of true existential event from a meteor impact in “An Example”