Thanks for taking a look at the arguments and taking the time to post a reply here! Since this topic is still pretty new, it benefits a lot from each new person engaging with the arguments and data.
I agree completely regarding information hazards. We’ve been thinking about these extensively over the last several months (and consulting with various people who are able to hold us to task about our position on them). In short, we chose every point on that poster with care. In some cases we’re talking about things that have been explored extensively by major public figures or sources, such as Carl Sagan or the RAND Corporation. In other cases, we’re in new territory. We’ve definitely considered keeping our silence on both counts (also see https://forum.effectivealtruism.org/posts/CoXauRRzWxtsjhsj6/terrorism-tylenol-and-dangerous-information if you haven’t seen it yet). As it stands, we believe that the arguments in the poster (and the information undergirding those points) are of pretty high value to the world today and would actually be more dangerous if publicized at a later date (e.g., when space technologies are already much more mature and there are many status quo space forces and space industries that would fight regulation of their capabilities).
If you’re interested in the project itself, or in further discussions of these hazards/opportunities, let me know!
Regarding the “arms race” terminology concern, you may be referring to https://www.researchgate.net/publication/330280774_An_AI_Race_for_Strategic_Advantage_Rhetoric_and_Risks which I think is a worthy set of arguments to consider when weighing whether and how to speak on key subjects. I do think that a systematic case needs to be made in favor of particular kinds of speech, particularly around 1) constructively framing a challenge that humanity faces and 2) fostering the political will needed to show strategic restraint in the development and deployment of transformative technologies (e.g., through institutionalization in a global project). I think information hazards are an absolutely crucial part of this story, but they aren’t the entire story. With luck, I hope to contribute more thoughts along these lines in the coming months.
It sounds like you’ve given the possibility of information hazards careful attention, recognised the value of consulting others, and made reasonable decisions. (I expected you probably would’ve done so—just thought it’d be worth asking.)
I also definitely agree that the possibility of information hazards shouldn’t serve as a blanket, argument-ending reason to avoid any fairly public discussion of potentially dangerous technologies, and that it always has to be weighed against the potential benefits of such discussion.