That's a very interesting topic that I hadn't considered before, and your argument for why it's worth having at least some people thinking about and working on it seems sound to me.
But I also wondered when reading your comment whether publicly discussing such an idea is net negative due to posing information hazards. (That would probably just mean research on the idea should only be discussed individually with people who've been at least briefly vetted for sensibleness, not that research shouldn't be conducted at all.) I had never heard of this potential issue, and don't think I ever would've thought of it by myself, and my knee-jerk guess would be that the same would be true of most policymakers, members of the public, scientists, etc.
Have you thought about the possible harms of publicising this idea, and have you run the idea of publicising it by sensible people to check there's no unilateralist's curse occurring?
(Edit: Some parts of your poster have updated me towards thinking it's more likely than I previously thought that relevant decision-makers are or will become aware of this idea anyway. But I still think it may be worth at least considering potential information hazards here, which you may already have done.
A related point is that I recall someone (I think they were from FHI, but I can't easily find the source) arguing that publicly emphasising the possibility of an AI arms race could make matters worse by making arms race dynamics more likely.)
Thanks for taking a look at the arguments and taking the time to post a reply here! Since this topic is still pretty new, it benefits a lot from each new person engaging with the arguments and data.
I agree completely regarding information hazards. We've been thinking about these extensively over the last several months (and consulting with various people who are able to hold us to task about our position on them). In short, we chose every point on that poster with care. In some cases we're talking about things that have been explored extensively by major public figures or sources, such as Carl Sagan or the RAND Corporation. In other cases, we're in new territory. We've definitely considered keeping our silence on both counts (also see https://forum.effectivealtruism.org/posts/CoXauRRzWxtsjhsj6/terrorism-tylenol-and-dangerous-information if you haven't seen it yet). As it stands, we believe that the arguments in the poster (and the information undergirding those points) are of pretty high value to the world today and would actually be more dangerous if publicized at a later date (e.g., when space technologies are already much more mature and there are many status quo space forces and space industries that will fight regulation of their capabilities).
If you're interested in the project itself, or in further discussions of these hazards/opportunities, let me know!
Regarding the "arms race" terminology concern, you may be referring to https://www.researchgate.net/publication/330280774_An_AI_Race_for_Strategic_Advantage_Rhetoric_and_Risks which I think is a worthy set of arguments to consider when weighing whether and how to speak on key subjects. I do think that a systematic case needs to be made in favor of particular kinds of speech, particularly around 1) constructively framing a challenge that humanity faces and 2) fostering the political will needed to show strategic restraint in the development and deployment of transformative technologies (e.g., through institutionalization in a global project). I think information hazards are an absolutely crucial part of this story, but they aren't the entire story. With luck, I hope to contribute more thoughts along these lines in the coming months.
It sounds like you've given the possibility of information hazards careful attention, recognised the value of consulting others, and made reasonable decisions. (I expected you probably would've done so; I just thought it'd be worth asking.)
I also definitely agree that the possibility of information hazards shouldn't just serve as a blanket, argument-ending reason against any fairly public discussion of potentially dangerous technologies, and that it always has to be weighed against the potential benefits of such discussion.