If I could give more than a Strong Upvote for your bringing up the dual-use issue as a crucial consideration for working on asteroid deflection capabilities, I would. I was considering doing a write-up on this as well. It is a wonderful example of second-order considerations making the effort to reduce risk actually increase it.
I think this is strong enough as a factor that I now update to the position that derisking our exposure to natural extinction risks via increasing the sophistication of our knowledge and capability to control those risks is actually bad and we should not do it. Maybe this generalizes to working on all existential risks...
Thank you for the kind words!

I think this is strong enough as a factor that I now update to the position that derisking our exposure to natural extinction risks via increasing the sophistication of our knowledge and capability to control those risks is actually bad and we should not do it.
I would feel a bit wary about making a sweeping statement like this. I agree that there might be a more general dynamic where (i) natural risks are typically small per century, and (ii) the technologies capable of controlling those risks might often be powerful enough to pose a non-negligible risk of their own, such that (iii) carelessly developing those technologies could sometimes increase risk on net, and (iv) we might want to delay building those capabilities while other competences catch up, such as our understanding of their effects and some measure of international trust that we’ll use them responsibly. Very ambitious geoengineering comes to mind as something close to an example.
Maybe this generalizes to working on all existential risks...
Perhaps I’m misunderstanding you, but I’m very hopeful that it doesn’t. One reason is that (it seems to me) very little existential risk work is best described as “let’s build dual-use capabilities whose primary aim is to reduce some risk, and hope they don’t get misused”; instead, a lot of existential risk work can be described as either (i) “some people are building dual-use technologies ostensibly to reduce some risk or produce some benefits, but we think that could be really bad, let’s do something about that” or (ii) “this technology already looks set to become radically more powerful, let’s see if we can help shape its development so it doesn’t end up doing catastrophic harm”.
I think the meme of x-risk and related ideas will spread and degrade as it moves beyond careful thinkers such as readers of this forum, and one likely subset of responses to a perception of impending doom is to take drastic action to gain perceived control, exacerbating risk. The concept of x-risk is itself dual-use.