It would be great to have some kind of committee for info-hazard assessment: a group of trusted people who would a) take responsibility for deciding whether an idea should be published, b) read all incoming suggestions in a timely manner, and c) have publicly known contact information (though perhaps not all of their identities).
I believe this is something worth exploring. My model is that while most people actively thinking about x-risks have some sort of social network they can consult, there may be a long tail of people thinking in isolation who may at some point simply post something dangerous on LessWrong.
(There is also a problem of incentives, which often strongly favor publishing. You don't get much credit for not publishing dangerous ideas if you are not already part of an established group.)