You could avoid such infohazards by drawing up the scenarios in a private message or private doc that’s only shared with select people.
I think that if you take these infohazards seriously enough, you probably shouldn't even do that. Because if each person has a 95% likelihood of keeping it secret, then with 10 people in the know the chance it stays secret drops to about 60%.
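To spell out the arithmetic (assuming each of the 10 people keeps the secret independently with probability 0.95):

$$P(\text{stays secret}) = 0.95^{10} \approx 0.60$$

So even with quite trustworthy people, the leak probability compounds to roughly 40%.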
I see what you mean, but if you take cause prioritization seriously, it is really stifling to have literally no place to discuss x-risks in detail. Carefully managed private spaces are the best compromise I've seen so far; if there's something better, I'd be really glad to learn about it.
I think I'd be glad to stay in the domain of aggregate probabilities and proxies for real scenarios as long as we can, particularly for biorisks. Mostly because most people can't do much about infohazardous things, so the first-order effect of sharing details is just net negative.