I agree with all the points you make in the "Implications" section. I also provide some relevant ideas in Some thoughts on Toby Ord's existential risk estimates, in the section on how Ord's estimates (especially the relatively high estimates for "other" and "unforeseen" risks) should update our career and donation decisions.
To deal with currently-possible unknown risks, we could spend more effort thinking about possible sources of risk, but this strategy probably wouldn't help us predict x-risks that depend on future technology.
I was confused by this claim. Wouldn't you put risks from AI, risks from nanotechnology, and much of biorisk in the category of "x-risks that depend on future technology", rather than currently possible yet unrecognised risks? And wouldn't you say that effort spent thinking about possible sources of risk has helped us identify and mitigate (or strategise about how to mitigate) those three risks? If so, I'd guess that similar efforts could help us with similar types of risks in the future as well.
If we expect that deliberate efforts to reduce x-risk will likely come into play before new and currently-unknown x-risks emerge, that doesn't mean we should deprioritize unknown risks. Our actions today to prioritize unknown risks could be the very reason that such risks will not seriously threaten future civilization.
I agree. I'd also add that I think a useful way to frame this is in terms of endogenous vs. exogenous increases in deliberate efforts to reduce x-risk, where the former essentially means "caused by people like us, because they thought about arguments like this", and the latter essentially means "caused by other people or other events" (e.g., mainstream governments and academics coming to prioritise x-risk more for reasons other than the influence of EA). The more we expect exogenous increases in deliberate efforts to reduce x-risk, the less we should prioritise unknown risks. The same is not true with regard to endogenous increases, because deprioritising this area would make those endogenous increases unlikely.
(This distinction came to mind because SjirH made a similar distinction in relation to learning about giving opportunities.)
This is similar to discussions about how "sane" society is, and how much we should expect problems to be solved "by default".
We developed nuclear weapons in 1945, but it was not until almost 40 years later that we realized their use could lead to a nuclear winter.
You seem to be using this data point to support the argument "There are many risks we've discovered only recently, and we should be worried about these and about risks we have yet to discover." That seems fair.
But I think that, if you're using that data point in that way, it's probably also worth noting the following more explicitly: people were worried about existential catastrophe from nuclear war via mechanisms that we now know don't really warrant concern (as alluded to in the Ord quote you also provide). So there seems to be a companion data point which supports the argument "In the past, many risks that were recently discovered later turned out not to be big deals, so we should be less worried about recently discovered or to-be-discovered risks than we might otherwise think."
(And I'd say that both data points provide relatively weak evidence for their respective arguments.)