I agree with all the points you make in the “Implications” section. I also provide some relevant ideas in *Some thoughts on Toby Ord’s existential risk estimates*, in the section on how Ord’s estimates (especially the relatively high estimates for “other” and “unforeseen” risks) should update our career and donation decisions.
> To deal with currently-possible unknown risks, we could spend more effort thinking about possible sources of risk, but this strategy probably wouldn’t help us predict x-risks that depend on future technology.
I was confused by this claim. Wouldn’t you put risks from AI, risks from nanotechnology, and much of biorisk in the category of “x-risks that depend on future technology”, rather than the category of currently-possible but not-yet-recognised risks? And wouldn’t you say that effort spent thinking about possible sources of risk has helped us identify and mitigate (or strategise about how to mitigate) those three risks? If so, I’d guess that similar efforts could help us with similar types of risks in future as well.
> If we expect that deliberate efforts to reduce x-risk will likely come into play before new and currently-unknown x-risks emerge, that doesn’t mean we should deprioritize unknown risks. Our actions today to prioritize unknown risks could be the very reason that such risks will not seriously threaten future civilization.
I agree. I’d also add that a useful way to think about this is in terms of endogenous vs exogenous increases in deliberate efforts to reduce x-risk. The former essentially means “caused by people like us, because they thought about arguments like this”, and the latter essentially means “caused by other people or other events” (e.g., mainstream governments and academics coming to prioritise x-risk more for reasons other than the influence of EA). The more we expect exogenous increases in deliberate efforts to reduce x-risk, the less we should prioritise unknown risks. The same is not true with regard to endogenous increases, because deprioritising this area would make those endogenous increases unlikely.
(This distinction came to my mind due to SjirH making a similar distinction in relation to learning about giving opportunities.)
This is similar to discussions about how “sane” society is, and how much we should expect problems to be solved “by default”.
> We developed nuclear weapons in 1945, but it was not until almost 40 years later that we realized their use could lead to a nuclear winter.
You seem to be using this data point to support the argument “There are many risks we’ve discovered only recently, and we should be worried about these and about risks we have yet to discover.” That seems fair.
But if you’re using that data point in that way, I think it’s probably worth also noting the following more explicitly: people were worried about existential catastrophe from nuclear war via mechanisms that we now know don’t really warrant concern (as alluded to in the Ord quote you also provide). So there seems to be a companion data point supporting the argument “In the past, many recently discovered risks later turned out not to be big deals, so we should be less worried about recently discovered or yet-to-be-discovered risks than we might otherwise think.”
(And I’d say that both data points provide relatively weak evidence for their respective arguments.)