That's a very interesting topic that I hadn't considered before, and your argument for why it's worth having at least some people thinking about and working on it seems sound to me.
But I also wondered when reading your comment whether publicly discussing such an idea is net negative due to posing information hazards. (That would probably just mean research on the idea should only be discussed individually with people who've been at least briefly vetted for sensibleness, not that research shouldn't be conducted at all.) I had never heard of this potential issue, and don't think I ever would've thought of it by myself, and my knee-jerk guess would be that the same would be true of most policymakers, members of the public, scientists, etc.
Have you thought about the possible harms of publicising this idea, and run the idea of publicising it by sensible people to check there's no unilateralist's curse occurring?
(Edit: Some parts of your poster have updated me towards thinking it's more likely than I previously thought that relevant decision-makers are or will become aware of this idea anyway. But I still think it may be worth at least considering potential information hazards here, which you may already have done.
A related point is that I recall someone (I think they were from FHI, but I can't easily find the source) arguing that publicly emphasising the possibility of an AI arms race could make matters worse by making arms race dynamics more likely.)
Thanks for taking a look at the arguments and taking the time to post a reply here! Since this topic is still pretty new, it benefits a lot from each new person engaging with the arguments and data.
I agree completely regarding information hazards. We've been thinking about these extensively over the last several months (and consulting with various people who are able to hold us to task about our position on them). In short, we chose every point on that poster with care. In some cases we're talking about things that have been explored extensively by major public figures or sources, such as Carl Sagan or the RAND Corporation. In other cases, we're in new territory. We've definitely considered keeping our silence on both counts (also see https://forum.effectivealtruism.org/posts/CoXauRRzWxtsjhsj6/terrorism-tylenol-and-dangerous-information if you haven't seen it yet). As it stands, we believe that the arguments in the poster (and the information undergirding those points) are of pretty high value to the world today and would actually be more dangerous if publicized at a later date (e.g., when space technologies are already much more mature and there are many status quo space forces and space industries that will fight regulation of their capabilities).
If you're interested in the project itself, or in further discussions of these hazards/opportunities, let me know!
Regarding the "arms race" terminology concern, you may be referring to https://www.researchgate.net/publication/330280774_An_AI_Race_for_Strategic_Advantage_Rhetoric_and_Risks which I think is a worthy set of arguments to consider when weighing whether and how to speak on key subjects. I do think that a systematic case needs to be made in favor of particular kinds of speech, particularly around 1) constructively framing a challenge that humanity faces and 2) fostering the political will needed to show strategic restraint in the development and deployment of transformative technologies (e.g., through institutionalization in a global project). I think information hazards are an absolutely crucial part of this story, but they aren't the entire story. With luck, I hope to contribute more thoughts along these lines in the coming months.
It sounds like you've given the possibility of information hazards careful attention, recognised the value of consulting others, and made reasonable decisions. (I expected you probably would've done so; I just thought it'd be worth asking.)
I also definitely agree that the possibility of information hazards shouldn't just serve as a blanket, argument-ending reason against any fairly public discussion of potentially dangerous technologies, and that it always has to be weighed against the potential benefits of such discussion.