Hello EA community
I’m new to EA and I’m surprised by the fear expressed about AI. Have you heard of the Luddites? Well, I suppose fear of scientific progress is only natural, considering the popularity of Mary Shelley’s Frankenstein ever since it was written in the early 19th century.
Firstly, will AI systems defy the second law of thermodynamics, under which entropy (disorder) increases with time? Will AI systems be perpetual calculating machines that never break? Surely not; they will be like every other computer that fails from time to time, especially when a cosmic ray smashes into a critical electron at a critical time.
Secondly, programmes may be set up to evolve for improved performance (a process that will not be directly controlled by humans), but the performance will be judged against a human-defined goal, so humans are always in control.
Thirdly, if you are worried that AI computers will become self-aware, I would recommend the book The Evolution of the Sensitive Soul by Simona Ginsburg and Eva Jablonka for their views on consciousness. Will AI systems be able to perform unlimited associative learning, and if they do, will it be harmful or dangerous? Animals only attack for food or if threatened; anything else is a waste of energy. Why would AI systems be any different?
Fourthly, AI systems will need a power source, so humans can always pull the plug.
So my personal view, which will no doubt upset people, is that so-called human “intelligence” (or lack of it) is a much bigger problem than AI, and it would be more effective altruism to focus on solutions to the current human-caused problems, rather than imagined ones based on an emotional, irrational fear of future machines.
I look forward to your comments.
Yours faithfully
Trevor Prew
Sheffield UK
Hi Trev!
Very briefly on your points:
We don’t think AI needs to break thermodynamics to be dangerous.
We don’t think all human-specified goals are safe, and we don’t know how to give a safe one to an extremely powerful AI.
We are not worried about self-awareness or consciousness in particular.
Turning off highly capable systems is likely to be extremely challenging unless the stop-button research problem is solved.
Consider familiarizing yourself with some of the basic arguments, for example using this playlist, the “The Road to Superintelligence” and “Our Immortality or Extinction” posts on Wait But Why for a fun, accessible introduction, and Vox’s “The case for taking AI seriously as a threat to humanity” as a high-quality mainstream explainer piece.
The free online Cambridge course on AGI Safety Fundamentals provides a strong grounding in much of the field and a cohort + mentor to learn with.[1]
Links borrowed from Stampy, your one-stop-shop for answering questions about AI Safety.
Dear Holden
Thanks very much for taking the time to reply to my comments. I can agree with point 2 that human-specified goals could be dangerous; it’s humans that should be closely monitored rather than the tools they use! Also, doesn’t your point one contradict point four? Entropy will increase, so any AI system will break and ultimately turn itself off?
Overall, I’m not worried by AI, so I wish you all success in your endeavours and will live in happy ignorance knowing you are worrying on my behalf. I would only point out that, in my experience, humans over-worry and inflate risks. It gives us an evolutionary survival advantage, but it causes a lot of stress and wasted effort and holds us back from great achievements.
So, just go for it, but keep your risk assessments real!
Best Regards
Trevor Prew
(no reply necessary but if interested, see my essay “fear” on trevorprew.blogspot.com)
Minor, but “plex” is probably not Holden.
My apologies to Plex. Please excuse my newbie error.
There are plenty of sources of negentropy around, like the sun, as we humans and other forms of life make use of. It’s little consolation that a misaligned AI would eventually fall to the heat death of the universe.