List of things I’ve written or may write that are relevant to The Precipice
Things I’ve written
Some thoughts on Toby Ord’s existential risk estimates
Database of existential risk estimates
Clarifying existential risks and existential catastrophes
Existential risks are not just about humanity
Failures in technology forecasting? A reply to Ord and Yudkowsky
What is existential security?
Why I’m less optimistic than Toby Ord about New Zealand in nuclear winter, and maybe about collapse more generally
Thoughts on Toby Ord’s policy & research recommendations
“Toby Ord seems to imply that economic stagnation is clearly an existential risk factor. But I think we should actually be more uncertain about that”
Why I think The Precipice might understate the significance of population ethics
My Google Play review
My review of Tom Chivers’ review of Toby Ord’s The Precipice
If a typical mammalian species survives for ~1 million years, should a 200,000 year old species expect another 800,000 years, or another million years?
Upcoming posts
What would it mean for humanity to protect its potential, but use it poorly?
Arguments for and against Toby Ord’s “grand strategy for humanity”
Does protecting humanity’s potential guarantee its fulfilment?
A typology of strategies for influencing the future
Working titles of things I plan/vaguely hope to write
Note: If you might be interested in writing about similar ideas, please feel free to reach out to me. It’s very unlikely I’ll be able to write all of these posts myself, so we could potentially collaborate, or I could simply share my thoughts and notes with you and let you take it from there.
Update: It’s now very unlikely that I’ll get around to writing any of these things.
The Terrible Funnel: Estimating odds of each step on the x-risk causal path (working title)
The idea here would be to adapt “Great Filter” or “Drake equation” style reasoning to estimate the probability of existential catastrophe, using how humanity has fared in prior events that passed (or could’ve passed) certain “steps” on certain causal chains to catastrophe.
E.g., even though we’ve never faced a pandemic involving a bioengineered pathogen, perhaps our experience with how many natural pathogens have moved from each “step” to the next can inform what would likely happen if we did face a bioengineered pathogen, or if one did reach the pandemic level.
This idea seems somewhat implicit in The Precipice, but isn’t really spelled out there. Also, as is probably obvious, I still need to do more to organise my own thoughts on it.
This may include discussion of how Ord distinguishes natural and anthropogenic risks, and why the standard arguments for an upper bound for natural extinction risks don’t apply to natural pandemics. Or that might be a separate post.
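To make the “funnel” idea concrete, here is a minimal sketch of the kind of calculation I have in mind: the end-to-end probability of catastrophe is estimated as the product of conditional per-step transition frequencies along a causal chain, in the spirit of the Drake equation. All step names and counts below are purely hypothetical placeholders, not estimates from any real dataset.

```python
# Illustrative "Terrible Funnel" sketch. Each tuple records how often a
# hypothetical step on a causal chain was reached, and how often it
# progressed to the next step. All numbers are made up for illustration.
steps = [
    # (step name, times reached, times progressed to next step)
    ("outbreak occurs",           250, 40),
    ("becomes an epidemic",        40, 12),
    ("becomes a pandemic",         12,  3),
    ("causes societal collapse",    3,  0),
]

def funnel_probability(steps):
    """Multiply per-step transition rates to get an end-to-end estimate.

    Uses Laplace smoothing, (progressed + 1) / (reached + 2), so that a
    step never observed to progress still contributes a small non-zero
    rate rather than forcing the whole product to zero.
    """
    p = 1.0
    for _name, reached, progressed in steps:
        p *= (progressed + 1) / (reached + 2)
    return p

print(f"Estimated end-to-end probability: {funnel_probability(steps):.2e}")
```

The Laplace smoothing choice is one of many ways to handle never-observed transitions (which is exactly the situation for the final, catastrophic steps); the right treatment of those steps is a large part of what this post would need to work out.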
Developing—but not deploying—drastic backup plans (see my comment here)
“Macrostrategy”: Attempted definitions and related concepts
This would relate in part to Ord’s concept of “grand strategy for humanity”
Collection of notes
A post summarising the ideas of existential risk factors and existential security factors?
I suspect I won’t end up writing this, but I think someone should. For one thing, it’d be good to have something people can reference/link to that explains those ideas (sort of like the role EA Concepts serves).
Some selected Precipice-related works by others
80,000 Hours’ interview with Toby Ord
Slate Star Codex’s review of the book
FLI Podcast interview with Toby Ord