For a while I’ve been thinking about an idea I call Artificial Environment risk, but I haven’t had the time to develop the concept in detail (or come up with a better name). The idea is roughly that the natural environment is relatively robust (since it’s been around for a long time, and average species extinction rates and causes are somewhat predictable), but as a higher proportion of humanity’s environment is artificially created, we voyage into an unknown without a track record of safety or stability. So the risk of dangerous phenomena increases dramatically—as we move from a paradigm where we depend on an environment that has been roughly stable for millions of years to one for which the evidence of stability is measured maybe in decades (or less!). Global warming and ozone degradation are obvious examples of this. But I also think that AI risk, biosecurity, nuclear war, etc. fall under this (perhaps overly large) umbrella—as humans gain more and more ability to manipulate our environment, we accumulate more and more opportunities for systemic destruction.
Some risks, like global warming, are fairly observable and relatively straightforward epistemically. But things like the attention environment and the influence of social media are more complicated—as tools for attracting attention become more and more powerful, they create unforeseen (and hard-to-specify or verify) effects (e.g. perhaps education polarization, weakening of elite mediation of information, harm to public discourse) that may be quite influential (and may interact with or compound other effects) but will be hard to plan for, understand, or mitigate.
One reason I find the idea worth trying to sketch out is that, assuming technological development continues to progress, risk will generally continue to rise as humanity’s control over our environment increases, and fewer technologies will realistically be considered riskless. (So we will face more tradeoffs about things like whether to develop technologies that can end infectious disease but also enable better weapons.)
The idea of differential technological acceleration is aimed at this problem, but I am not sure how predictable offense/defense will be or how to effectively make political decisions about which fields and industries to nurture or cull. Part of the implication I draw from categorizing this broad set of risks together is that the space for new scientific and technological development will become more crowded—with fewer value-neutral or obviously positive opportunities for growth over time.
I think this may also tend to manifest in a clearer division between what you might call left-EAs (progress-focused) and right-EAs (security-focused) (in some sense this corresponds to global-health-focused EAs vs. existential-risk-focused EAs currently, but that division is less clear). But that also goes to a separate view I have that EAs will have to accept more internal ideological diversity over time and recognize that the goals of different effective altruists will conflict with one another (e.g. extending human lifespan may be bad for animal welfare; synthetic biology advances that cure infectious disease may increase biohazard risks, etc.).
It’s very possible these ideas aren’t original—as I said, they’re very thinly sketched at the moment, but I’ve been thinking about them for a while so figured I should write them out.
This paper starts with a simple model that formalizes a tradeoff between technological progress increasing growth/wellbeing/consumption, and having a small chance of a massive disaster that kills off a lot of people. When do we choose to keep growing? The intuitive idea is that we should keep growing if the growth rate is higher than (the odds ratio of death) times (the dollar value of a statistical life). If the value of life in dollar terms is low—because everyone is desperately poor, so dollars are worth a lot—then growth is worth it. But under very mild assumptions about preferences, where the value of life relative to money grows with income, we will eventually choose to stop growing.
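To make that threshold rule concrete, here is a minimal numerical sketch in Python. This is my paraphrase of the tradeoff, not the paper’s actual model or calibration, and all the numbers are invented: keep growing only while the growth rate exceeds the added chance of catastrophe times the value of a statistical life measured in consumption terms.

```python
# Minimal sketch of the growth-vs-catastrophe threshold described above.
# This is a paraphrase of the tradeoff, not the paper's model; all numbers
# are invented for illustration.

def keep_growing(growth_rate, disaster_prob, value_of_life_in_consumption):
    """True if growth beats the catastrophe risk under this rough rule:
    growth rate > chance of disaster * value of life (in consumption units)."""
    return growth_rate > disaster_prob * value_of_life_in_consumption

# When everyone is poor, life is "cheap" in consumption units, so growth wins.
print(keep_growing(0.02, 0.001, 5))   # True: 0.02 > 0.005

# As incomes rise, the relative value of life rises, and the same risk
# eventually makes further growth look like a bad deal.
print(keep_growing(0.02, 0.001, 50))  # False: 0.02 < 0.05
```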
However, the question becomes different if the value of new technologies is saving lives rather than increasing prosperity. Money has diminishing marginal utility; life does not. So a technology that saves lives with certainty, but destroys a lot of lives with some probability, is just a gamble over lives. We decide to keep progressing if it saves more lives in expectation than stopping, but unfortunately that’s not a very helpful answer.
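A similarly rough sketch of the lives-for-lives case, again with invented numbers: once lives trade against lives with no diminishing returns, the decision collapses to a bare expected-value comparison, which is why the answer feels unhelpful.

```python
# Rough sketch of the lives-for-lives version: no diminishing returns on
# lives, so it reduces to a bare expected-value comparison. Numbers invented.

def net_expected_lives(lives_saved_for_sure, disaster_prob, lives_lost_in_disaster):
    """Expected net lives saved by continuing to develop the technology."""
    return lives_saved_for_sure - disaster_prob * lives_lost_in_disaster

# "Keep progressing if it saves more lives in expectation than stopping"
# just means checking whether this quantity is positive.
print(net_expected_lives(1_000_000, 0.001, 100_000_000) > 0)  # True: 1,000,000 > 100,000
print(net_expected_lives(1_000_000, 0.02, 100_000_000) > 0)   # False: 1,000,000 < 2,000,000
```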
The hygiene hypothesis (especially the autoimmune disease variant, brief 2-paragraph summary here if you Ctrl+F “Before we go”) could be another example.
On a somewhat related note, Section V of this SlateStarCodex post goes through some similar examples where humans departing from long-lived tradition has negative effects that don’t become visible for a long time.