The question of whether we (i.e. anyone) should be doing mass outreach on the topic of AI Safety is over. It is happening.
This feels like a very hostile statement. It’s not at all obvious that this question is over.
I personally feel a lot more cautious about doing mass outreach. I think there’s a decent chance people could accidentally do significant harm to future efforts. Policy, politics, and advocacy are complicated, regardless of the area you’re working in.
For what it’s worth, I’ve spoken to Nik and I think some of the work he’s doing is great. I’m especially excited about narrative testing.
Whilst I didn’t write that, I do basically feel the same way. Sorry if it comes across as hostile, but we’re in a pretty desperate situation. Analysis paralysis here could actually be lethal. What timelines are you envisaging re “future efforts”? I feel like we have a few months to get a Pause in place if we actually want a high (90%) chance of survival. The H100 “summoning portals” are already being built.