List of things I’ve written or may write that are relevant to The Precipice
Things I’ve written
Some thoughts on Toby Ord’s existential risk estimates
Database of existential risk estimates
Clarifying existential risks and existential catastrophes
Existential risks are not just about humanity
Failures in technology forecasting? A reply to Ord and Yudkowsky
What is existential security?
Why I’m less optimistic than Toby Ord about New Zealand in nuclear winter, and maybe about collapse more generally
Thoughts on Toby Ord’s policy & research recommendations
“Toby Ord seems to imply that economic stagnation is clearly an existential risk factor. But I think that we should actually be more uncertain about that”
Why I think The Precipice might understate the significance of population ethics
My Google Play review
My review of Tom Chivers’ review of Toby Ord’s The Precipice
If a typical mammalian species survives for ~1 million years, should a 200,000 year old species expect another 800,000 years, or another million years?
Upcoming posts
What would it mean for humanity to protect its potential, but use it poorly?
Arguments for and against Toby Ord’s “grand strategy for humanity”
Does protecting humanity’s potential guarantee its fulfilment?
A typology of strategies for influencing the future
Working titles of things I plan/vaguely hope to write
Note: If you might be interested in writing about similar ideas, feel very free to reach out to me. It’s very unlikely I’ll be able to write all of these posts by myself, so potentially we could collaborate, or I could just share my thoughts and notes with you and let you take it from there.
Update: It’s now very unlikely that I’ll get around to writing any of these things.
The Terrible Funnel: Estimating odds of each step on the x-risk causal path (working title)
The idea here would be to adapt something like the “Great Filter” or “Drake Equation” reasoning to estimating the probability of existential catastrophe, using how humanity has fared in prior events that passed or could’ve passed certain “steps” on certain causal chains to catastrophe.
E.g., even though we’ve never faced a pandemic involving a bioengineered pathogen, perhaps our experience with how many natural pathogens have moved from each “step” to the next one can inform what would likely happen if we did face a bioengineered pathogen, or if it did get to a pandemic level. (A rough, purely illustrative sketch of this sort of step-by-step decomposition follows these notes.)
This idea seems sort of implicit in The Precipice, but isn’t really spelled out there. Also, as is probably obvious, I need to do more to organise my thoughts on it myself.
This may include discussion of how Ord distinguishes natural and anthropogenic risks, and why the standard arguments for an upper bound for natural extinction risks don’t apply to natural pandemics. Or that might be a separate post.
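To make the kind of decomposition I have in mind a bit more concrete, here’s a minimal sketch in Python. All of the steps and numbers are hypothetical placeholders I’ve made up purely for illustration, not estimates; the point is just that the overall probability of catastrophe via a given causal chain factors into conditional “step” probabilities, each of which might be partially informed by how analogous past events (e.g., natural pathogens) moved from one step to the next.

```python
# Purely illustrative "Terrible Funnel" sketch: the probability of an existential
# catastrophe via one causal chain, decomposed Drake-Equation-style into the
# conditional probability of each step given that the previous step occurred.
# All step names and numbers below are made-up placeholders, not estimates.

steps = {
    "pathogen is engineered and escapes containment": 0.10,
    "outbreak becomes a pandemic": 0.30,
    "pandemic overwhelms global response": 0.20,
    "civilisational collapse or extinction follows": 0.05,
}

p_chain = 1.0
for step, p_conditional in steps.items():
    p_chain *= p_conditional  # multiply the conditional probabilities along the chain
    print(f"P(chain reaches step: {step}) = {p_chain:.4%}")

print(f"Overall P(catastrophe via this chain) = {p_chain:.4%}")
```

One possible upshot of factoring things this way is that data on any single transition (e.g., how often natural outbreaks have become pandemics) could update the overall estimate, even though the full chain has never actually occurred.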
Developing – but not deploying – drastic backup plans (see my comment here)
“Macrostrategy”: Attempted definitions and related concepts
This would relate in part to Ord’s concept of “grand strategy for humanity”
Collection of notes
A post summarising the ideas of existential risk factors and existential security factors?
I suspect I won’t end up writing this, but I think someone should. For one thing, it’d be good to have something people can reference/link to that explains that idea (sort of like the role EA Concepts serves).
Some selected Precipice-related works by others
80,000 Hours’ interview with Toby Ord
Slate Star Codex’s review of the book
FLI Podcast interview with Toby Ord