In an 80,000 Hours interview, Tyler Cowen states:

I don’t think we’ll ever leave the galaxy or maybe not even the solar system.
. . .
[44:27]
I see the recurrence of war in human history so frequently, and I’m not completely convinced by Steven Pinker [author of the book Better Angels of Our Nature, which argues that human violence is declining]. I agree with Steven Pinker, that the chance of a very violent war indeed has gone down and is going down, maybe every year, but the tail risk is still there. And if you let the clock tick out for a long enough period of time, at some point it will happen.
Powerful abilities to manipulate energy also mean powerful weapons, eventually powerful weapons in decentralized hands. I don’t think we know how stable that process is, but again, let the clock tick out, and you should be very worried.
How likely do you think it is that humans (or post-humans) will get to a point where existential risk becomes extremely low? Have you looked into the question of whether interstellar colonization will be possible in the future, and if so, do you broadly agree with Nick Beckstead’s conclusion in this piece? Do you think Cowen’s argument should push EAs towards forms of existential risk reduction (referenced by you in your recent 80,000 Hours interview) that are “not just dealing with today’s threats, [but] actually fundamentally enhancing our ability to understand and manage this risk”? Does positively shaping the development of artificial intelligence fall into that category?
Edit (likely after Toby recorded his answer): This comment from Pablo Stafforini also mentions the idea of “reduc[ing] the risk of extinction for all future generations.”
This math problem is relevant, although maybe the assumptions aren’t realistic. Basically, under certain assumptions, either our population has to increase without bound, or we go extinct.
EDIT: The main assumption is effectively that extinction risk is bounded below by a constant that depends only on the current population size, and not the time (when the generation happens). But you could imagine that even for a stable population size, this risk could be decreased asymptotically to 0 over time. I think that’s basically the only other way out.
So, either:
1. We go extinct,
2. Our population increases without bound, or
3. We decrease extinction risk towards 0 in the long-run.
Of course, extinction could still take a long time, and a lot of (dis)value could happen before then. This result isn’t so interesting if we think extinction is almost guaranteed anyway, due to heat death, etc.
Source for the screenshot: Samuel Karlin & Howard E. Taylor, A First Course in Stochastic Processes, 2nd ed., New York: Academic Press, 1975.
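Not part of the original comment, but here is a toy Monte Carlo sketch of the trichotomy above, under stated assumptions rather than the exact setup in the Karlin & Taylor problem. It treats each generation as a Bernoulli trial against a per-generation extinction probability: a constant floor (the bounded-population case behind options 1 and 2) versus a schedule that decays fast enough to be summable (option 3). The function name `survival_fraction`, the constant 0.001, and the $1/g^2$ decay are all illustrative choices of mine.

```python
import random

def survival_fraction(extinction_prob, generations=10_000, runs=1_000, seed=0):
    """Fraction of simulated histories that never go extinct within `generations`.

    extinction_prob(g) gives the per-generation extinction probability at
    generation g (a stand-in for the risk floor in the argument above).
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(runs):
        alive = True
        for g in range(1, generations + 1):
            if rng.random() < extinction_prob(g):
                alive = False
                break
        if alive:
            survived += 1
    return survived / runs

# Options 1/2: risk bounded below by a constant (e.g. because population
# stays bounded). Survival over T generations is (1 - p)^T, which tends to 0.
print(survival_fraction(lambda g: 0.001))        # close to 0.0 at T = 10,000

# Option 3: risk pushed toward 0 fast enough that the series of p_g converges
# (here p_g = 0.001 / g**2), so a positive fraction of runs survives
# no matter how long the horizon.
print(survival_fraction(lambda g: 0.001 / g**2)) # roughly 0.998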
re: 3 — to be more precise, one can show that $\prod_i (1 - p_i) > 0$ iff $\sum_i p_i < \infty$, where $p_i \in [0, 1)$ is the probability of extinction in year $i$.
Should that be $\sum_i \log(1 - p_i) > -\infty$? Just taking logarithms.
This is a valid convergence test. But I think it’s easier to reason about $\sum_i p_i < \infty$. See math.SE for a proof.
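For completeness, here is a short derivation (my own gloss, not from either commenter) of why the product criterion, the log-sum form, and the plain sum are all equivalent for $p_i \in [0, 1)$:

```latex
% Claim:  \prod_i (1 - p_i) > 0  \iff  \sum_i \log(1 - p_i) > -\infty  \iff  \sum_i p_i < \infty.
%
% The first equivalence is just taking logarithms of the partial products.
% For the second, use the elementary bounds for 0 \le p < 1:
%     -\frac{p}{1 - p} \;\le\; \log(1 - p) \;\le\; -p.
% If \sum_i p_i = \infty, the upper bound gives \sum_i \log(1 - p_i) = -\infty.
% If \sum_i p_i < \infty, then p_i \to 0, so eventually p_i \le 1/2 and
% \log(1 - p_i) \ge -2 p_i, hence the log-sum is finite.
\[
  \prod_{i=1}^{\infty} (1 - p_i) > 0
  \;\Longleftrightarrow\;
  \sum_{i=1}^{\infty} \log(1 - p_i) > -\infty
  \;\Longleftrightarrow\;
  \sum_{i=1}^{\infty} p_i < \infty .
\]
```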
I’ve seen and liked that book. But I don’t think there really is enough information about risks (e.g. Earth being hit by a comet or meteor that kills everything) to really say much. Maybe if cosmology or other fields make major advances one could say something, but that might take centuries.