Many project ideas are close to infohazards, and we'd advise against doing such work without (private) discussions about the risks and precautions, to avoid the unilateralist's curse.
Review of past (i.e., definitely no longer hazardous) examples of info hazards and how they've played out (e.g., here are 10 examples of the Streisand effect in history)
In case any readers are unfamiliar with these terms, here are the brief EA Concepts pages on them:
https://concepts.effectivealtruism.org/concepts/information-hazards/
https://concepts.effectivealtruism.org/concepts/unilateralists-curse/
And here are a few collections of posts/sources on those topics.
If I'm interpreting this question correctly, the following paper addresses a somewhat similar question in the context of AI, and so might be helpful to people considering working on this: The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?
(There’s also some commentary on the paper here.)
A useful starting point on this might be Exploring the Streisand Effect.
Thanks for this; I've added the EA Concepts links to the post and linked to this comment for more information.