> I can imagine some longtermists thinking that getting 90% of the possible value is basically an existential win
What’s the definition of an “existential win”? I agree that this would be a win, and would involve us beating some existential risks that currently loom large. But I also think this would be an existential catastrophe. So if “win” means “zero x-catastrophes”, I wouldn’t call this a win.
Bostrom’s original definition of existential risk talked about things that “drastically curtail [the] potential” of “Earth-originating intelligent life”. Under that phrasing, I think losing 10% of our total potential qualifies.
> I think you’re implicitly agreeing with my comment that losing 0.1% of the future is acceptable, but I’m unsure if this is endorsed.
?!? What does “acceptable” mean? Obviously losing 0.1% of the future’s value is very bad, and should be avoided if possible!!! But I’d be fine with saying that this isn’t quite an existential risk, by Bostrom’s original phrasing.
> If you were to redo the survey for people like me, I’d have preferred a phrasing that says something more like “a drastic reduction (>X%) of the future’s value.”
Agreed, I’d probably have gone with a phrasing like that.
> ?!? What does “acceptable” mean? Obviously losing 0.1% of the future’s value is very bad, and should be avoided if possible!!! But I’d be fine with saying that this isn’t quite an existential risk, by Bostrom’s original phrasing.
So I reskimmed the paper, and FWIW, by my reading Bostrom’s original phrasing isn’t obviously sensitive to a two-order-of-magnitude difference (10% vs. 0.1%): “drastically curtail” feels more like poetic language than a clear boundary.
He does have some lower bounds:
> However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
Taking “decades” conservatively to mean “at most ten decades”, this would suggest that something equivalent to a delay of ten decades (100 years) probably does not count as an existential catastrophe. However, that corresponds to a lower bound of (100 / 10 million) × 1%, or 10^-7 of the future’s value, far smaller than the 10^-3 I mentioned upthread.
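To spell out that arithmetic (a rough sketch, taking Bostrom’s “over 10 million years” figure at face value and treating the fraction of value lost as proportional to the delay):

$$\frac{100 \text{ years}}{10{,}000{,}000 \text{ years}} \times 1\% = 10^{-5} \times 10^{-2} = 10^{-7}$$

That is, a 100-year delay is equivalent to losing roughly 10^-7 of the future’s expected value, four orders of magnitude below the 10^-3 (0.1%) figure discussed upthread.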
(I agree that “acceptable” is sloppy language on my end, and losing 0.1% of the future’s value is very bad.)