Interesting idea. Wanted to throw in a few reflections from working at the Centre for the Study of Existential Risk for four years.
Just want to give a big plus one to the infohazards section. Several states and terrorist groups have been inspired by bioweapons information in the public domain; it's a real problem. At CSER we've occasionally thought up what might be a new contributor to existential risk, and have decided not to publish on it. I'm sure Anders Sandberg has come up with tonnes too (thankfully he's on the good side!) and has also published good work on them. Very important bit.
I imagine you'd get lots of kooks writing in (e.g. we get lots of Biblical prediction books in the post), so you'd need some way to sift through that. You'd also need some way to handle disagreement (e.g. I think climate change is a major contributor to existential risk; some other researchers in the field do not). Also worth thinking about incentives: in a way, this is a prize for people to come up with new dangerous ideas.
As they say, “I didn’t have time to write you a short comment, so I wrote you a long one”:
From your reply to HaydnBelfield: “It would be scary if we happen to live in a world where simply bringing x-risks as a general topic into public consciousness significantly increases odds of a bad actor finding a new x-risk. I also wonder how you get govts or the public to focus on solving x-risks if you don’t actually want the public to spend time thinking about x-risks.”
I myself was recently working on a draft of a post I was going to call "Big List of Dubious X-risks": the idea was to collect even silly-seeming X-risks (like civilization getting wiped out by first contact with aliens) into a list, where they could maybe be studied for commonalities and the like.
Initially, my draft had a section devoted to biological risks, but it soon became clear to me that people working in biosecurity are extremely concerned about infohazards, to the point where experts in the field (I ended up talking to some of them for unrelated reasons) think that even publishing non-technical concepts (“maybe you could make a virus that worked like this...”) is almost certainly net-negative for civilization.
Furthermore, they almost implied that popularizing and "raising awareness" of global catastrophic biological risks (GCBRs) at all was a bad idea in most situations.
And, most surprising of all, they also seemed skeptical of the general idea of a “Big List of Dubious X-risks” post, even with the biological-risks section removed and replaced by an infohazard disclaimer.
This is obviously a strange and paradoxical situation: how can you learn about and fix something if you can't talk about it publicly? Their concerns about technical GCBR details were clearly justified, but at first I thought maybe the GCBR experts (due to the nature of their field) were being far too paranoid about milder infohazards. It seemed crazy that we should discourage any kind of brainstorming about novel X-risks!
But I have mostly come around to their point of view as I’ve thought about it more.
Sometimes, thinking up new X-risk details is not actually very helpful for guarding against that X-risk. Precisely because there are so many different ways to create a killer pandemic (including probably some that even the experts won't manage to think up), our best defense is working on broad-spectrum technologies that are useful across lots of different scenarios. Similarly, if you were worried about something like the USA collapsing into a tyrannical dictatorship, it might be helpful to plot out a detailed coup plan in order to see what other people might try, and then attempt to patch those particular security holes. But since there are probably many different paths to tyranny, at some point you'd want to stop patching individual holes and instead work on general countermeasures that would help stop most tyranny scenarios.
This position of secrecy seemed bizarre and incongruous with the rest of my experience of EA, but as an internet commenter my only experience of EA was the open internet discussions (and not the internal conversations at EA orgs, etc.). Open internet discussions are on the extreme end of being totally public and accessible. Almost every other form of human social organization has more room for privacy, secrecy, and compartmentalization: in-person conversations, businesses both large and small, political campaigns and movements, church groups, professional relationships like those with therapists, doctors, and lawyers, etc. Government espionage and military operations are obviously at the far opposite end of this spectrum. So, seen in context, the worry about infohazards is not so unusual; rather, it seems like a sensible reaction to the unusual situation of a movement trying to discuss serious topics responsibly through open internet discussions.
Anyways, at present I agree that encouraging people to speculate publicly about anthropogenic X-risks is a delicate and often net-negative endeavor, and thus the project of the EA Forum is more fraught than it would appear. Your idea wasn't to encourage people to speculate publicly but rather to submit private ideas to a bounty program, which works better, but is still dangerous insofar as, like Haydn says, this is "a prize for people to come up with new dangerous ideas", which entrants might later publish elsewhere.
Personally, I think the EA Forum needs to up its game in terms of how it handles infohazards and the guidance it gives users on thinking in this area. Rather than a bounty program, I think a good idea would be to create a kind of "infohazard hotline", where people with potentially dangerous but also potentially helpful new ideas could be encouraged to share their info privately: ideas about nuclear risk could be directed to a trusted nuclear expert, AI ideas to an AI expert, and so on. This would help resolve the paradox of "how can we make progress if we can't discuss things openly?" by providing an obvious and accessible way to share ideas with experts without making them maximally public (more legible and accessible to newcomers than, e.g., knowing whom to private-message about something).
"I think the EA Forum needs to up its game in terms of how it handles infohazards and the guidance it gives users on thinking in this area."
+1 to this