Hi Trev!
Very briefly on your points:
1. We don’t think AI needs to break thermodynamics to be dangerous.
2. We don’t think all human-specified goals are safe, and we don’t know how to give a safe one to an extremely powerful AI.
3. We are not worried about self-awareness or consciousness in particular.
4. Turning off highly capable systems is likely to be extremely challenging unless the stop-button research problem is solved; a toy illustration of why is sketched below.
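The core difficulty behind point 4 is that shutdown is instrumentally bad for almost any goal: a switched-off agent collects no further reward, so a naive maximizer prefers plans that disable the off switch first. A minimal sketch, assuming a hypothetical one-shot expected-utility agent (all names and numbers here are illustrative, not drawn from any real system):

```python
# Toy model (illustrative only) of the stop-button incentive problem:
# a naive expected-utility maximizer compares "leave the button alone"
# with "disable the button before acting".

def expected_utility(disable_button, p_press, u_task, u_shutdown=0.0):
    """Expected utility of the agent's plan.

    If the button still works and the human presses it (probability
    p_press), the agent is shut down and collects only u_shutdown.
    If the agent disabled the button, it always finishes the task.
    """
    if disable_button:
        return u_task
    return p_press * u_shutdown + (1.0 - p_press) * u_task

# For any task reward above the shutdown payoff and any nonzero
# chance of being switched off, disabling the button strictly wins:
for p in (0.01, 0.5, 0.99):
    comply = expected_utility(False, p, u_task=10.0)
    resist = expected_utility(True, p, u_task=10.0)
    print(f"p(press)={p:.2f}  comply={comply:5.2f}  resist={resist:5.2f}")
```

For every nonzero press probability the "resist" plan strictly dominates, which is why corrigibility has to be designed in rather than hoped for.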
Consider familiarizing yourself with some of the basic arguments, for example via this playlist; the “The Road to Superintelligence” and “Our Immortality or Extinction” posts on Wait But Why for a fun, accessible introduction; and Vox’s “The case for taking AI seriously as a threat to humanity” as a high-quality mainstream explainer piece.
The free online Cambridge course on AGI Safety Fundamentals provides a strong grounding in much of the field and a cohort + mentor to learn with.[1]
Links borrowed from Stampy, your one-stop shop for answering questions about AI Safety.
Dear Holden
Thanks very much for taking the time to reply to my comments. I can agree with point 2 that human-specified goals could be dangerous: it’s humans that should be closely monitored rather than the tools they use! Also, doesn’t your point 1 contradict point 4? Entropy will increase, so any AI system will break and ultimately turn itself off?
Overall, I’m not worried by AI, so I wish you every success in your endeavours and will live in happy ignorance, knowing you are worrying on my behalf. I would only point out that, in my experience, humans over-worry and inflate risks. It gives us an evolutionary survival advantage, but it causes a lot of stress and wasted effort and holds us back from great achievements.
So, just go for it, but keep your risk assessments real!
Best Regards
Trevor Prew
(no reply necessary, but if interested, see my essay “fear” on trevorprew.blogspot.com)
Minor, but “plex” is probably not Holden.
My apologies to Plex. Please excuse my newbie error.
> Entropy will increase, so any AI system will break and ultimately turn itself off?

There are plenty of sources of negentropy around, like the sun, which we humans and other forms of life make use of. It’s little consolation that a misaligned AI would eventually fall to the heat death of the universe.
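To put rough numbers on that negentropy budget, here is a back-of-the-envelope sketch (standard physics approximations, not figures from this thread): sunlight arrives at roughly 5800 K and Earth re-radiates the same power at roughly 255 K, so the planet exports far more entropy than it imports.

```python
# Back-of-the-envelope check (all figures approximate) that the sun
# supplies Earth with a huge negentropy budget: energy arrives as
# ~5800 K sunlight and leaves as ~255 K thermal radiation, so systems
# on Earth can export entropy and maintain local order for a very
# long time.

P_ABSORBED = 1.2e17  # W, rough solar power absorbed by Earth
T_SUN = 5800.0       # K, effective temperature of incoming sunlight
T_EARTH = 255.0      # K, Earth's effective radiating temperature

# dS = dQ/T for each steady-state heat flow:
s_in = P_ABSORBED / T_SUN     # entropy arriving per second
s_out = P_ABSORBED / T_EARTH  # entropy radiated away per second

print(f"entropy in : {s_in:.2e} W/K")
print(f"entropy out: {s_out:.2e} W/K")
print(f"net export : {s_out - s_in:.2e} W/K")  # roughly 4.5e14 W/K
```

That net export of around 4.5e14 W/K is the headroom that lets local order, biological or artificial, keep increasing while total entropy still rises.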