In Week 2 we start to investigate specific existential risks.
Why? Doesn’t a focus on particular risks assume that we will be able to mitigate not only particular risks, but all such existential risks? What difference does it make if we solve one problem if we are then destroyed by another?
Is it credible to propose that we can succeed in mitigating each and every existential threat emerging from a knowledge explosion that generates ever more, ever larger powers at an ever-accelerating rate?
Wouldn’t such an assumption be a failure of holistic thinking, one that does not take into account that the maturity of the human beings who are to manage such an assembly line of risks will not advance at the same pace as the challenge being presented?
Thanks for the comment! I first want to register strong agreement with many of your points, e.g. that the root of the problem isn’t necessarily technology inherently, but rather our inability to do things like coordinate well and think in a long-term way. I also think that focusing too much on individual risks while neglecting the larger picture is a failure mode that some in the community fall into, and Ord’s book might have done well to spend some time on this perspective. (He does talk about risk factors, which goes part of the way towards a more systemic perspective, but he doesn’t really address the fundamental drivers of many of these risks, which I agree seems like a missed opportunity.)
That being said, I think I have a few main disagreements here:
Lack of good opportunities for more general longtermist interventions. I think if there were really promising avenues for advancing along the frontiers you suggest (e.g. trying to encourage cultural philosophical perspective shifts, if I’m understanding your point here correctly) then I’d probably change my mind here. But it still seems imo like these kinds of interventions aren’t as promising as direct work on individual risks, which is still super neglected in cases like bio/AI.
Work on individual risks does (at least partially) generalise. For instance, in the case of work on specific future risks (e.g. bio and AI), it doesn’t seem like we can draw useful lessons about what kinds of strategies work (e.g. regulation/slowing research, better public materials and education about the risks, integrating more with the academic community) unless we actually try out these strategies.
Addressing some risks might directly reduce others. For instance, getting AI alignment right would probably be a massive boon for our ability to handle other natural risks. This is pretty speculative though, because we don’t really know what a future where we get AI right looks like.
Hi Callum, thanks for the response, much appreciated.
the root of the problem isn’t necessarily technology inherently, but rather our inability to do things like coordinate well and think in a long-term way.
I would describe it as an unwillingness or inability to think holistically, to consider human limitations as one of the factors that must be taken into account when designing our technological future. The “more is better” relationship with knowledge that science culture is built upon seems to largely ignore this. “More is better” without limit is not going to work when one of the components of the “machine” is limited.
(e.g. trying to encourage cultural philosophical perspective shifts, if I’m understanding your point here correctly)
Yes, that’s a good summary. I’m not against working on particular risks; I just see that as a failed strategy if we don’t also bring focus to the knowledge explosion which is generating all the risks. More here: https://forum.effectivealtruism.org/posts/F76dnd5xvQHdqBPd8/what-is-the-most-effective-way-to-look-at-existential-risk
For instance, in the case of work on specific future risks (e.g. bio and AI), it doesn’t seem like we can draw useful lessons about what kinds of strategies work (e.g. regulation/slowing research, better public materials and education about the risks, integrating more with the academic community) unless we actually try out these strategies.
Fair point. No complaints. I don’t object to such work. I’m objecting to what I perceive to be an almost exclusive focus on particular risks, at the cost of largely ignoring the source of the risks. Example: After seventy years we still have basically no idea how to escape the nuclear weapons threat, and yet we keep piling on more and more risks, faster and faster.
So long as the risk pipeline is generating new risks faster than we can respond to them, working on particular risks will ultimately fail. When we’re talking about existential risks, we have to win every time and can’t afford to lose even once. It won’t matter if we solve AI if some other existential risk destroys the system anyway.
For instance, getting AI alignment right would probably be a massive boon for our ability to handle other natural risks
Meaning no disrespect, I see the notion that intellectual elites in prestigious universities can solve the AI problem as a self-serving mythology. How do such well-meaning elites intend to manage the Russians, the Chinese, the North Koreans, drug gangs, terror groups, hacker boys on Reddit, etc.?
On the genetic engineering front, leaders like Jennifer Doudna respond to concerns by discussing governing mechanisms. Industry leaders would like us to believe that they will be able to control the spread and development of genetic engineering, but that’s just silly. It’s almost insulting that they keep expecting us to believe that.
To me, the central question is: how do we take control of the knowledge explosion so that it proceeds at a rate which human beings can successfully manage?
The challenge here is that the “more is better” relationship with knowledge has been with us since the beginning and has delivered many miracles, so people have a very hard time wrapping their minds around the fact that the success of the knowledge explosion has created a very different new environment, one we are required to adapt to, like it or not.
This problem is amplified by the fact that, generally speaking, the science community seems not to grasp this, and they are the ones with most of the cultural authority on such issues. To a significant degree we are being led by people who are living in the past.
Ok, that’s enough words for now, too many really. Thanks again for engaging, and for the work you are doing. I look forward to more exchanges as your time and interest permit.