If it were feasible (and I’m a little skeptical), a ‘social safety bugs’ program rewarding people for reporting destructive ideas could be useful even if the ‘bugs’ were hard to fix: by identifying them beforehand, by raising awareness of the problem of dangerous information, and perhaps even by using the frequency with which an idea is reported as a proxy for how widespread it is among the population.
Couldn’t it misfire, though? I mean, do dangerous people already know they could be more effective if they researched new ways to do harm a little more? Wouldn’t they start crowdsourcing it, or something like that, if they knew? If they don’t, then the problem of dangerous information is itself dangerous information, and we should be careful about raising awareness of it, too.