If you haven’t read this piece by Ajeya Cotra, Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover, I would highly recommend it. Some of the posts on AI alignment here (aimed at a general audience) might also be helpful.
Thanks, I’ll check out the Cotra post. I’ve skimmed some of the Cold Takes posts but haven’t found where he addresses the specific confusions I have above.