It’s generally easy for an IT organization to fix a bug once it’s disclosed. It’s not so easy to close the physical-security vulnerabilities that might be discovered.
Closing everything revealed at https://www.schneier.com/blog/archives/2006/06/movieplot_threa_1.html would cost far more than the work it took to come up with the ideas in the first place.
If it were feasible (and I’m a little skeptical), a ‘social safety bugs’ program rewarding people for sharing destructive ideas could still be useful even if the ‘bugs’ were hard to fix: by identifying them before they’re exploited, by raising awareness of the problem of dangerous information, and perhaps even by using the frequency with which an idea is independently submitted as a proxy for how widespread it already is among the population.

Couldn’t it misfire, though? Do dangerous people realize they could be more effective if they spent a little more time researching new ways to do harm? Would they start crowdsourcing it, if they knew? If they don’t, then the existence of this problem of dangerous information is itself dangerous information, and we should be careful about raising awareness of it, too.
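For what it’s worth, the frequency-as-proxy part is the one piece that’s easy to mechanize. Here’s a minimal Python sketch of the idea; the function name, the naive lowercasing as “deduplication,” and the sample reports are all my own illustration, not anything such a program actually does:

```python
from collections import Counter

def idea_prevalence(submissions: list[str]) -> list[tuple[str, int]]:
    """Rank submitted 'social safety bug' reports by how many people
    described the same idea. The count is a crude proxy for how
    widespread an idea already is: something many people stumble on
    independently is probably not a well-kept secret.
    """
    # Naive normalization; a real system would need semantic
    # deduplication, since the same idea rarely arrives verbatim.
    normalized = [s.strip().lower() for s in submissions]
    return Counter(normalized).most_common()

if __name__ == "__main__":
    reports = [
        "Tamper with X",
        "tamper with x",
        "Exploit gap in Y",
    ]
    for idea, count in idea_prevalence(reports):
        print(f"{count:>3}  {idea}")
```

The counting is trivial; the hard part would be deciding that two differently worded submissions are really the same idea, which is exactly where such a program could leak the dangerous information it’s trying to contain.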