The Stern report uses a fixed 0.1% annual extinction risk as a discount rate. That closes the model and yields finite values (rather than infinite benefits) after the authors exclude pure time-preference discounting on ethical grounds. Unfortunately, assuming complete confidence in a fixed extinction rate gives very different (lower) expected values than a distribution that allows for extinction risk eventually becoming stably low for long periods: the Stern version assigns less than a 1-in-20,000 probability to civilization surviving another 10,000 years, even though agriculture is already 10,000 years old.
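A back-of-the-envelope sketch of the gap, assuming the Stern-style constant 0.1%/yr hazard; the mixture scenario's numbers (50% credence in the hazard dropping after 100 years, a 10^-6 residual hazard) are purely illustrative and not from the report:

```python
# Constant-hazard survival probability over 10,000 years (Stern-style)
fixed_hazard = 0.001  # 0.1% annual extinction risk
years = 10_000

p_fixed = (1 - fixed_hazard) ** years
print(f"Fixed 0.1%/yr: P(survive {years} yr) = {p_fixed:.2e}")
# ~4.5e-05, i.e. less than 1 in 20,000

# Illustrative mixture (hypothetical numbers): with 50% credence the
# hazard drops to a stably low level after 100 years
p_drop = 0.5        # credence that risk becomes stably low
low_hazard = 1e-6   # residual annual risk in the low-risk regime
p_mixture = (1 - p_drop) * p_fixed + p_drop * (
    (1 - fixed_hazard) ** 100 * (1 - low_hazard) ** (years - 100)
)
print(f"Mixture:       P(survive {years} yr) = {p_mixture:.2e}")
# ~4.5e-01 -- roughly four orders of magnitude higher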
Don’t forget the doomsday argument.
The AI expert survey at https://arxiv.org/abs/1705.08807 asks respondents for the probability that the outcome of AI will be “extremely bad.”
Where in the Stern report are you looking?