more better—thanks for the clarification. I have no idea how to handle number 3 (reducing search engine/LLM awareness of infohazards).
For number 2 (being cautious about raising awareness of infohazards on public forums), one strategy would be to ask very vague questions at first to test the waters, and see if anybody replies with a caution that you might be edging into infohazard territory. Then, if nobody with more expertise raises an alarm, gradually escalate the specificity of one's questions, narrowing the focus one step at a time, until you either get a satisfactory answer or credible experts call for caution about raising the topic.
Really, EA and related communities need some specific, consensual ‘safeword’ that cautions other people that they’re edging into infohazard territory. I’m open to any suggestions about that.
Trouble is, a lot of topics are treated as toxic infohazards that really aren’t (e.g. behavior genetics, intelligence research, evolutionary psychology, sex research, etc). Most of these take the form of ‘here’s a behavioral sciences theory or finding that is probably true, but that the general public shouldn’t learn about, because they don’t have the political or emotional maturity to handle it’.
So we’d need a couple of different safewords—one that refers to specific technical knowledge that could actually increase true existential risks (e.g. software for autonomous assassination drones, for genetically engineering more lethal pandemics, for enriching uranium, etc), versus one that refers to more general knowledge that (allegedly) could lead people to updating their social/political views in directions that some might consider unacceptable.
Thanks, I appreciate these insights and these are good ideas.