The main goal was to argue for preventing AC. The main intervention discussed was monitoring research and development: this would likely require screening protocols, and labelling certain kinds of consciousness and neurophysics research as dual-use research of concern (DURC) or as components of concern. I think a close analogue is the biothreat screening projects (IBBIS, SecureDNA), but it’s unclear how a similar project would be implemented for AC “threats”.
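To make the analogy concrete, here is a toy sketch, entirely hypothetical, of what a screening hook in the SecureDNA mould might look like if applied to manuscripts or compute requests rather than synthesis orders. The keyword registry and category names are invented for illustration; real screening would obviously need much more than substring matching.

```python
# Purely hypothetical sketch: a DURC-style screening hook for AC research,
# by analogy with DNA-synthesis screening (e.g. SecureDNA). Every keyword
# and category below is invented for illustration; no such screening list
# exists for consciousness research today.

from dataclasses import dataclass

# An invented registry a review board might maintain, playing the role that
# curated hazard-sequence databases play in biosecurity screening.
KEYWORDS_OF_CONCERN = {
    "global workspace": "functional theories of consciousness",
    "artificial valence": "machine affect / potential digital suffering",
    "whole-brain emulation": "emulation of affective circuits",
}

@dataclass
class Submission:
    title: str
    abstract: str

def flag_for_review(sub: Submission) -> list[str]:
    """Return the categories of concern a submission touches.

    Real screening would need far more than substring matching; this only
    shows the shape of the intervention: match, flag, escalate to review.
    """
    text = f"{sub.title} {sub.abstract}".lower()
    return [cat for kw, cat in KEYWORDS_OF_CONCERN.items() if kw in text]

if __name__ == "__main__":
    paper = Submission(
        title="A global workspace agent with learned artificial valence",
        abstract="We train an agent whose broadcast states carry reward.",
    )
    flags = flag_for_review(paper)
    if flags:
        print("Escalate to dual-use review:", flags)
```

The point of the sketch is only the shape of the intervention (match, flag, escalate), which is the part that transfers from biosecurity; what the registry should actually contain for AC is exactly the open question.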
By calling for Artificial Consciousness Safety I am expressing that I don’t think we know any concrete actions that will definitely help, and that if the need is there, we should pursue research to develop interventions; just as in AI safety, no one really knows how to make AI safe. Because I think AC will not be safe, and that the benefits may not outweigh the risks, we could seriously pursue strategies to make this common knowledge, so that things like researchers unintentionally contributing to its creation don’t happen. We may have a significant window to act before it becomes widely known that AC might be possible or profitable. Unlike with AI, where company race dynamics are already running away, we can still prevent the AC economy from starting at all.
Silica
Thanks for the book suggestion; it does seem like an interesting case study.
I’m quite sceptical that any one person could reverse-engineer consciousness, and I don’t buy that “someone else might publish it anyway” is good reasoning for going ahead with publication. I’ll have to look into Solms and come back to this.
May I ask, what is your position on creating artificial consciousness?
Do you see digital suffering as a risk? If so, should we be careful to avoid creating AC?