> My point was that we know humanity is capable of lasting 200,000 years, because it already did that. So on priors, we should expect humanity to last about another 200,000 years. We might update this prior downward based on facts like “we have nukes now” or “we might develop unfriendly AI soon”. But if we assume a 0.2% annual probability of extinction, that gives a 1 in 10^174 chance of surviving 200,000 years, which requires an absurdly strong update away from the prior.

> Can’t we just say it is unlikely—it logically must involve extremely low probabilities

> I find it really implausible that 10^-174 is the true probability that humanity survives 200,000 years. I don’t think we are 10^-174 confident about anything ever.

> Personally, I see it as something like “There’s a 5-90% chance that people like Toby Ord are basically right, and thus that 2 is true. I’m not very confident about that, and 1 is also very plausible. But this is enough to make the expected value of existential risk reduction very high (as long as there are tractable reduction strategies which wouldn’t be adopted ‘by default’).”

> The conclusion does not follow, for two reasons. The value of reducing x-risk might actually be lower if x-risk is higher. For an explanation, see the appendix of this paper: https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12318 (I think you need an account to download, but you can also get the paper on sci-hub.) But there are good arguments that decreasing the discount rate is more important than increasing consumption, which is also discussed in that paper.

> “Long-run” means “discount rate that applies after the short-run”.
(Possibly somewhat rambly, sorry)

2. I think I now have a better sense of what you mean.
2a. It sounds like, when you wrote:
> The current relatively high probability of extinction will maintain indefinitely.
...you’d include “The high probability maintains for a while, and then we do go extinct” as a case where the high probability maintains indefinitely?
This seems an odd way of phrasing things to me, given that, if we go extinct, the probability that we go extinct at any time after that is 0, and the probability that we are extinct at any time after that is 1. So whatever the current probability is, it would change after that point. (Though I guess we could talk about the probability that we will be extinct by the end of a time period, which would be high, namely 1, post-extinction, so if that probability is currently high it could then stay high indefinitely, even if the actual probability of going extinct changes.)
I thought you were instead talking about a case where the probability stays relatively high for a very long time, without us going extinct. (That seemed to me like the most intuitive interpretation of the current probability maintaining indefinitely.) That’s why I was saying that that’s just unlikely “by definition”, basically.
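To make the bookkeeping in that parenthetical concrete, here’s a minimal sketch (a toy model of my own, assuming a constant annual hazard, which nobody here is literally proposing) of the difference between the probability of going extinct in a given year and the probability of being extinct by a given year:

```python
# Toy model: constant annual hazard of going extinct, with extinction as an
# absorbing state. The per-year hazard stays flat for as long as we survive,
# while the probability of *being* extinct by year t only ever rises (and is
# exactly 1 from the moment of extinction onward).

hazard = 0.002  # the 0.2% annual figure discussed elsewhere in this thread

for t in [100, 500, 1000, 5000]:
    p_extinct_by_t = 1 - (1 - hazard) ** t
    print(f"year {t:>4}: P(go extinct next year | survived so far) = {hazard}, "
          f"P(extinct by year {t}) = {p_extinct_by_t:.3f}")
```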
Relatedly, when you wrote:
> Currently, we have a relatively high probability of extinction, but if we survive through the current crucial period, then this probability will dramatically decrease.
Would that hypothesis include cases where we don’t survive through the current period?
My view would basically be that the probability might be low now or might be relatively high. And if it is relatively high, then it must be either that it’ll go down before a long time passes or that we’ll become extinct. I’m not currently sure whether that means I split my credence over the 1st and 2nd views you outline only, or over all 3.
2b. It also sounds like you were actually focusing on an argument to the effect that the “natural” extinction rate must be low, given how long humanity has survived thus far. This would be similar to an argument Ord gives in The Precipice, and one that’s also given in this paper I haven’t actually read, which says in the abstract:
> Using only the information that Homo sapiens has existed at least 200,000 years, we conclude that the probability that humanity goes extinct from natural causes in any given year is almost guaranteed to be less than one in 14,000, and likely to be less than one in 87,000.
That’s an argument I agree with. I also see it as a reason to believe that, if we handle all the anthropogenic extinction risks, the extinction risk level from then on would be much lower than it might now be.
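The shape of that argument is easy to sanity-check with a deliberately simple model (this is just my reconstruction, assuming a constant annual rate that’s independent across years; the paper’s actual method may be more sophisticated):

```python
# Likelihood of Homo sapiens surviving 200,000 years under a constant annual
# extinction rate mu. Rates much above 1 in 14,000 would make our observed
# survival astronomically unlikely; 1 in 87,000 still leaves only about a 10%
# chance of it.

T = 200_000  # approximate years Homo sapiens has existed

for mu in [1 / 14_000, 1 / 87_000]:
    likelihood = (1 - mu) ** T
    print(f"annual rate 1 in {round(1 / mu):,}: "
          f"P(surviving {T:,} years) = {likelihood:.2e}")
```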
Though I’m not sure I’d draw from it the implication you draw: it seems totally plausible we could enter a state with a new, higher “background” extinction rate, which is also driven by our activities. And it seems to me that the only obvious reasons to believe this state wouldn’t last a long time are (a) the idea that humanity will likely strive to get out of this state, and (b) the simple fact that, if the rate is high enough and lasts for long enough, extinction happening at some point becomes very likely. (One can also argue against believing that we’d enter such a state in the first place, or that we’ve done so thus far—I’m just talking about why we might not believe the state would last a long time, if we did enter it.)
So when you say:
> if we assume a 0.2% annual probability of extinction, that gives a 1 in 10^174 chance of surviving 200,000 years, which requires an absurdly strong update away from the prior.
Wouldn’t it make more sense to instead say something like: “The non-anthropogenic annual human extinction rate seems likely to be less than 1 in 87,000. To say the current total annual human extinction rate is 1 in 500 (0.2%) requires updating away from priors by a factor of 174 (87,000/500).” (Perhaps this should instead be phrased as “...requires thinking that humans have caused the total rate to increase by a factor of 174.”)
Updating by a factor of 174 seems far more reasonable than the sort of update you referred to.
And then lasting 200,000 years at such an annual rate is indeed extremely implausible, but I don’t think anyone’s really arguing against that idea. The implication of a 0.2% annual rate, which isn’t reduced, would just be that extinction becomes very likely in much less than 200,000 years.
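For what it’s worth, the specific figures in this exchange all check out (a quick arithmetic check, nothing more):

```python
import math

rate = 0.002  # the 0.2% annual extinction probability discussed above
T = 200_000   # the span of survival at issue

# The 1-in-10^174 figure: log10 of the probability of surviving T years.
print(T * math.log10(1 - rate))           # ~ -173.9, i.e. ~1 in 10^174

# The factor-of-174 update: the ratio of the two annual rates.
print(87_000 / 500)                       # 174.0

# How quickly extinction becomes likely at that rate: the median survival
# time, ln(2) / -ln(1 - rate), is only a few centuries.
print(math.log(2) / -math.log(1 - rate))  # ~346 years
```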
3.
> The conclusion does not follow, for two reasons. The value of reducing x-risk might actually be lower if x-risk is higher.
I haven’t read that paper, but Ord makes what I think is a similar point in The Precipice. But, if I recall correctly, that was in a simple model, and he thought that in a more realistic model it does seem important how high the risk is now.
Essentially, I think x-risk work may be most valuable if the “background” x-risk level is quite low, but currently the risk levels are unusually high, such that (a) the work is urgent (we can’t just punt to the future, or there’d be a decent chance that future wouldn’t materialise), and (b) if we do succeed in that work, humanity is likely to last for a long time.
If instead the risk is high now but this is because there are new and large risks that emerge in each period, and what we do to fix them doesn’t help with the later risks, then that indeed doesn’t necessarily suggest x-risk work is worth prioritising.
And if instead the risk is pretty low across all time, that can still suggest x-risk work is worth prioritising, because we have a lower chance of succumbing to a risk in any given period but would lose more in expectation if we do. (And that’s definitely an interesting and counterintuitive implication of that argument that Ord mentions.) But I think being in that situation would push somewhat more in favour of things like investing, movement-building, etc., rather than working on x-risks “directly” “right now”.
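To illustrate how differently those three scenarios value the same intervention, here’s a toy expected-value model (entirely made-up numbers of my own, a sketch rather than anyone’s considered estimates). It compares how many expected future years of survival we’d gain by halving the next century’s extinction hazard, reusing the 0.2% and 1-in-87,000 annual rates from above:

```python
def expected_years(h_now, h_later, perils=100):
    """Expected future years of survival with annual hazard h_now during a
    `perils`-year period, then h_later forever (constant-hazard tail ~ 1/h)."""
    e, survive = 0.0, 1.0
    for _ in range(perils):
        survive *= 1 - h_now
        e += survive
    return e + survive / h_later

HIGH, LOW = 0.002, 1 / 87_000

scenarios = {
    "time of perils (high now, low later)": (HIGH, LOW),
    "constant high risk":                   (HIGH, HIGH),
    "constant low risk":                    (LOW, LOW),
}

for name, (h_now, h_later) in scenarios.items():
    gain = expected_years(h_now / 2, h_later) - expected_years(h_now, h_later)
    print(f"{name}: halving near-term hazard gains ~{gain:,.0f} expected years")
```

On these made-up numbers, the gain is around 7,500 expected years in the time-of-perils case versus roughly 50 in either constant-risk case, which matches the pattern above: near-term x-risk work looks most valuable when risk is high now but would be low later, while the constant-low case still does about as well as the constant-high one (the counterintuitive implication just mentioned).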
So if we’re talking about the view that “Currently, we have a relatively high probability of extinction, but if we survive through the current crucial period, then this probability will dramatically decrease”, I think more belief in that view does push more in favour of work on x-risks now.
(I could be wrong about that, though.)
4. Thanks for the clarification!