Recently, I’ve been part of a small team that is working on the risks posed by technologies that allow humans to steer asteroids (opening the possibility of deliberately striking the Earth). We presented some of these results in a poster at EA Global SF 2019.
At the moment, we’re expanding this work into a paper. My current position is that this is an interesting and noteworthy technological risk that is (probably) strictly less dangerous/powerful than AI, but that working on it can still be useful. My reasons include: mitigating a risk that is largely orthogonal to AI is still valuable; succeeding at preemptive regulation of a technological risk would improve our ability to do the same for more difficult cases (e.g., AI); and this risk offers a more concrete manifestation of the X-risk concept than the abstract risks from technologies like AI/biotech, which makes it useful for popularizing the concept (most people understand the prevailing theory of the extinction of the dinosaurs and can fairly easily imagine such a disaster in the future).
That’s a very interesting topic that I hadn’t considered before, and your argument for why it’s worth having at least some people thinking about and working on it seems sound to me.
But I also wondered when reading your comment whether publicly discussing such an idea is net negative due to posing information hazards. (That would probably just mean research on the idea should only be discussed individually with people who’ve been at least briefly vetted for sensibleness, not that research shouldn’t be conducted at all.) I had never heard of this potential issue, and don’t think I ever would’ve thought of it by myself, and my knee-jerk guess would be that the same would be true of most policymakers, members of the public, scientists, etc.
Have you thought about the possible harms of publicising this idea, and run the idea of publicising it by sensible people to check there’s no unilateralist’s curse occurring?
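(For readers unfamiliar with the unilateralist’s curse, here is a minimal, purely illustrative Python sketch of the dynamic; the numbers and the function name are made up for illustration and aren’t drawn from the poster or the project. The idea is that when several actors independently judge whether releasing some information is worthwhile, release happens if any one of them is optimistic enough, so the chance of release grows with the number of actors even when releasing is genuinely net negative.)

```python
import random

def probability_someone_publishes(n_actors, true_value=-1.0, noise_sd=2.0, trials=50_000):
    """Estimate the chance that at least one of n_actors, each acting on an
    independent noisy estimate of the (actually negative) value of publishing,
    decides to publish unilaterally."""
    count = 0
    for _ in range(trials):
        # Each actor draws a noisy estimate of the true value and publishes
        # if their own estimate comes out positive.
        if any(random.gauss(true_value, noise_sd) > 0 for _ in range(n_actors)):
            count += 1
    return count / trials

if __name__ == "__main__":
    for n in (1, 3, 10):
        print(f"{n} independent actors -> P(information released) ~ "
              f"{probability_someone_publishes(n):.2f}")
```

With these illustrative parameters, the probability of release climbs from roughly 0.3 with a single actor to above 0.9 with ten, even though publishing is net negative by assumption, which is why checking one’s judgment with others before acting unilaterally matters.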
(Edit: Some parts of your poster have updated me towards thinking it’s more likely than I previously thought that relevant decision-makers are or will become aware of this idea anyway. But I still think it may be worth at least considering potential information hazards here—which you may already have done.
A related point is that I recall someone—I think they were from FHI, but I can’t easily find the source—arguing that publicly emphasising the possibility of an AI arms race could make matters worse by making arms race dynamics more likely.)
Thanks for taking a look at the arguments and taking the time to post a reply here! Since this topic is still pretty new, it benefits a lot from each new person taking a look at the arguments and data.
I agree completely regarding information hazards. We’ve been thinking about these extensively over the last several months (and consulting with various people who are able to hold us to task about our position on them). In short, we chose every point on that poster with care. In some cases we’re talking about things that have been explored extensively by major public figures or sources, such as Carl Sagan or the RAND Corporation. In other cases, we’re in new territory. We’ve definitely considered keeping our silence on both counts (also see https://forum.effectivealtruism.org/posts/CoXauRRzWxtsjhsj6/terrorism-tylenol-and-dangerous-information if you haven’t seen it yet). As it stands, we believe that the arguments in the poster (and the information undergirding those points) are of pretty high value to the world today and would actually be more dangerous if publicized at a later date (e.g., when space technologies are already much more mature and there are many status quo space forces and space industries who will fight regulation of their capabilities).
If you’re interested in the project itself, or in further discussions of these hazards/opportunities, let me know!
Regarding the “arms race” terminology concern, you may be referring to https://www.researchgate.net/publication/330280774_An_AI_Race_for_Strategic_Advantage_Rhetoric_and_Risks, which I think is a worthy set of arguments to consider when weighing whether and how to speak on key subjects. I do think that a systematic case needs to be made in favor of particular kinds of speech, particularly around 1) constructively framing a challenge that humanity faces and 2) fostering the political will needed to show strategic restraint in the development and deployment of transformative technologies (e.g., through institutionalization in a global project). I think information hazards are an absolutely crucial part of this story, but they aren’t the entire story. With luck, I hope to contribute more thoughts along these lines in the coming months.
It sounds like you’ve given the possibility of information hazards careful attention, recognised the value of consulting others, and made reasonable decisions. (I expected you probably would’ve done so—just thought it’d be worth asking.)
I also definitely agree that the possibility of information hazards shouldn’t serve as a blanket, argument-ending reason against any fairly public discussion of potentially dangerous technologies, and that it always has to be weighed against the potential benefits of such discussion.