I think this is an important tension that’s been felt for a while; I believe there has been discussion of it going back at least 10 years. For a while, few people were “allowed”[1] to publicly promote AI safety issues, because it was so easy to mess things up.
I’d flag that there isn’t much work actively marketing the case for short timelines. There is research here, but EAs generally aren’t eager to market it broadly. There’s a tricky line between “doing useful research in ways that are transparent” and “not raising alarm in ways that could be damaging.”
That said, there is some marketing of focused AI safety discussion; see, for example, Robert Miles or Rational Animations.
[1] As in, someone who wanted to host a big event on AI safety and wasn’t close to (and respected by) the MIRI cluster was often discouraged from doing so.