That would be good too! And it would fill a different niche. This list is mostly meant for AI strategy researchers rather than busy laymen, and it’s certainly not meant to be read cover to cover.
(Note also that this list isn’t really about AI risk and certainly isn’t about AI alignment.)
(Note also that I’m not trying to make people “more likely” to read it—it’s optimal for some people to engage with it and not optimal for others.)