Hey! This is another update from the distillers at the AI Safety Info website (and its more playful clone, Stampy).
Here are some of the answers we wrote over the last month. As always, let us know if there are any questions you would like to see answered.
Each item in the list below links to an individual answer, while the collective URL above renders all of the answers on a single page.
How could a superintelligent AI use the internet to take over the physical world?
What can we expect the motivations of a superintelligent machine to be?
Wouldn’t a superintelligence be slowed down by the need to do experiments in the physical world?
What are some AI governance exercises and projects I can try?
What is “metaphilosophy” and how does it relate to AI safety?
Are there any detailed example stories of what unaligned AGI would look like?
Crossposted from LessWrong: https://www.lesswrong.com/posts/EELddDmBknLyjwgbu/stampy-s-ai-safety-info-new-distillations-2