I am currently trying to form my own views on AI risk and, having skimmed this post, think I will find it very useful, thank you very much.
Particular aspects of this post that helped:
- Bullet-point lists.
- Clear, fairly precise summaries of important ideas (e.g. FOOM), with links to learn more.
- Outlining alternatives for the core ideas (e.g. hard vs. soft takeoff).
- Expressing an (implicit) opinion on the most important parts of each issue. Overviews like this often end up listing every reasonable post on a subject (e.g. in the concrete AI stories section), which isn't helpful for a newcomer, who needs guidance on where to focus their reading.