Thanks for the update! At least from what’s described in the post, the team’s research seems to have a higher impact potential than most of the AI safety field.
In particular I’m excited about:
Research directions being adjusted as you learn more
Attention given to governments and to China
Reasonable feedback loops
Almost all of the ideas seeming important, neglected, and at least somewhat tractable