> Entropy will increase so any AI system will break and ultimately turn itself off?
There are plenty of sources of negentropy around, like the sun, which we humans and other forms of life make use of. It's little consolation that a misaligned AI would eventually succumb to the heat death of the universe.
Dear Holden
Thanks very much for taking the time to reply to my comments. I can agree with point 2 that human-specified goals could be dangerous: it's humans that should be closely monitored rather than the tools they use! Also, doesn't your point 1 contradict point 4? Entropy will increase, so any AI system will break and ultimately turn itself off?
Overall, I'm not worried by AI, so I wish you all the success in your endeavours and will live in happy ignorance knowing you are worrying on my behalf. I would only point out that, in my experience, humans over-worry and inflate risks. That gives us an evolutionary survival advantage, but it causes a lot of stress and wasted effort and holds us back from great achievements.
So, just go for it, but keep your risk assessments real!
Best Regards
Trevor Prew
(no reply necessary but if interested, see my essay “fear” on trevorprew.blogspot.com)
Minor, but “plex” is probably not Holden.
My apologies to Plex. Please excuse my newbie error.