Finally, have you talked much to people outside the alignment/effective altruism communities about your report? How have reactions varied by background? Are you reluctant to publish work like this broadly? If so, why? Do you see a risk that increasing awareness of these issues could accelerate unsafe capabilities work?
I haven’t engaged much with people outside the EA and AI alignment communities, and I’d guess that very few people outside these communities have heard of the report. I’m not personally sold that the risks of publishing this type of analysis more broadly (in terms of potentially accelerating capabilities work) outweigh the benefits of helping people better understand what to expect with AI and giving us a better chance of discovering whether our views are wrong. However, some other people in the AI risk reduction community whom we consulted (to be clear, not my manager or Open Phil as an institution) were more concerned about this, and I respect their judgment. So I chose to publish the draft report on LessWrong and to avoid doing things that could result in it being shared much more widely, especially in a “low-bandwidth” way (e.g., just the “headline graph” circulating on social media).
To clarify, we are planning to seek more feedback on our views about TAI timelines from people outside the EA community, but we’re treating that as a separate project from this report (and may gather that feedback without necessarily publicizing the report more widely).