What does “acceptable” mean? Obviously losing 0.1% of the future’s value is very bad, and should be avoided if possible! But I’d be fine with saying that this isn’t quite an existential risk, by Bostrom’s original phrasing.
So I reskimmed the paper, and FWIW, Bostrom’s original phrasing doesn’t seem precise enough to be sensitive to 2 orders of magnitude, by my reading. “Drastically curtail” reads more like poetic language than a clear boundary.
He does have some lower bounds:
> However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
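If I’m reconstructing the arithmetic behind that last sentence correctly (Bostrom doesn’t show it in this passage), the 10-million-year figure falls out of treating the future as lasting on the order of a billion years (“the lifespan of galaxies”) with value accruing roughly linearly in time, so a 1 percentage point risk reduction is worth a delay of about 1% of that lifespan:

$$10^{-2} \times 10^{9}\ \text{years} = 10^{7}\ \text{years} = 10\ \text{million years}$$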
Taking “decades” conservatively to mean “at most ten decades”, this would suggest that something equivalent to a delay of ten decades (100 years) probably does not count as an existential catastrophe. However, this gives a lower bound of 100 / 10 million × 1%, or 10^-7, far smaller than the 10^-3 I mentioned upthread.
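Spelled out:

$$\frac{100\ \text{years}}{10^{7}\ \text{years}} \times 10^{-2} = 10^{-5} \times 10^{-2} = 10^{-7}$$

That is, the value lost to a 100-year delay is about 10^-7 of the future, four orders of magnitude below the 10^-3 threshold.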
(I agree that “acceptable” is sloppy language on my end, and losing 0.1% of the future’s value is very bad.)