I summarized this in AN #118, along with a summary of this related podcast and some of my own thoughts about how this compares to more classical intent alignment risks.