2: I don’t think there’s a neat distinction between ‘technical dangerous information’ and ‘broader ideas about possible risks’, with the latter being generally safe to publicise and discuss.
4.1: In addition to the considerations around the unilateralist’s curse offered by Brian Wang (I have written a bit about this in the context of biotechnology here), there is also an asymmetry: it is much easier to disclose previously secret information than to make previously disclosed information secret. The irreversibility of disclosure warrants further caution in cases of uncertainty like this.
Hi Gregory,
A couple of musings generated by your comment.
I have this idea of independent infrastructure: trying to build infrastructure (electricity/water/food/computing) that operates on a smaller scale than current infrastructure. This is for a number of reasons, one of which is mitigating risks. How should I build broad-scale support for my ideas without talking about the risks I am mitigating?
In some scenarios, though, non-disclosure is effectively irreversible as well, because conditions change. Consider if, early on, someone had had the idea of hacking a computer and had managed to convince the designers of C to build in safer, bounds-checked array indexing, and also to convince everyone not to use other insecure languages. We would not now be fighting the network effect of all the bad C code when trying to get people to write secure software (a rough sketch of the indexing difference I mean is at the end of this comment).
This irreversibility of non-disclosure seems to occur only if something is not a huge threat right now but may become more so as the technology develops, gets more widely used, and gets locked in. That is not really relevant to the biotech arena, at least in any case I can think of immediately, but it is an interesting scenario nonetheless.
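To make the C example concrete, here is a minimal sketch of the kind of indexing difference I have in mind: plain C array indexing performs no bounds checking, whereas a bounds-checked helper of the sort that could in principle have been standard from the start refuses out-of-range accesses. The checked_write helper and its names are purely illustrative, not anything C actually provides.

```c
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Ordinary C indexing: no bounds check, so an out-of-range index
   silently corrupts memory (undefined behaviour). */
void unchecked_write(int *buf, size_t len, size_t i, int value) {
    (void)len;          /* the length is known here but never consulted */
    buf[i] = value;
}

/* A hypothetical bounds-checked accessor: rejects out-of-range
   indices instead of corrupting memory. */
bool checked_write(int *buf, size_t len, size_t i, int value) {
    if (i >= len) {
        return false;
    }
    buf[i] = value;
    return true;
}

int main(void) {
    int buf[4] = {0};

    /* unchecked_write(buf, 4, 10, 42);  // would write past the end of buf */

    if (!checked_write(buf, 4, 10, 42)) {
        printf("out-of-bounds write rejected\n");
    }
    return 0;
}
```

The point is only that once a large installed base of code depends on the unchecked form, retrofitting the safer discipline means fighting that network effect rather than having it on your side.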