Not to be rude but this seems like a lot of worrying about nothing. “AI is powerful and uncontrollable and could kill all of humanity, like seriously” is not a complicated message.
To first order, the problem isn’t that the message is complicated. “Bioterrorism might kill you, here are the specific viruses that could be used, we should stop that” is also not a complicated message, but it would still be a bad idea to spread it indiscriminately.
This is a really strong assumption, and an untested one at that.
Well there was DeepMind, and then OpenAI, and then Anthropic.
I stated ways that different actors taking the problem more seriously would lead to progress; I’m not sure that a delay is actually the main impact. On this last point, note that (as I expected when it was first released) the main effect of the FLI letter is that a lot more people have heard of AI Safety, and that people who have heard of it are taking it more seriously (the latter based largely on Twitter observations), not that a delay is actually being considered.
I don’t view this as a crux. I weakly think additional attention is a cost, not a benefit.
I don’t actually know where you’re getting “these issues in communication...historically have led to a lot of x-risk” from.
I meant in AI. Also, I feel like this might be the crux here. I currently think that past communications (like early Yudkowsky and Superintelligence) have done a lot of harm (though there may have been nontrivial upsides as well). If you don’t believe this, you should be more optimistic about indiscriminate AI safety comms than I am, though maybe not to quite the same extent as the OP.
Tbh, in contrast with the three target groups you mentioned, I feel generally more optimistic about the “public’s” involvement. I can definitely see worlds where mass outreach is net positive, though of course this would be a sharp departure from past attempts (and failures) at communication.
Ahh, I didn’t read it as you talking about the effects of Eliezer’s past outreach. I strongly buy “this time is different”, and not just because of the salience of AI in tech. The type of media coverage we’re getting is very different: the former CEO of Google warning about AI risk, and a journalist asking about AI risk at a White House press briefing, are nothing like anything we’ve seen before. We’re reaching different audiences here. The AI landscape is also very different; AI risk arguments are a lot more convincing when we have a very capable AI to point to (GPT-4) and when we have facts like “a majority of AI researchers think p(AI killing humanity) > 10%”.
But even if you believe this time won’t be different, I think we need to think critically about which world we would rather live in:
1. The current one, where AI capabilities research keeps humming along with what seems to be inadequate AI Safety research, and nobody outside of EA is really paying attention to AI Safety. All we can do is hope that AI risk isn’t as plausible as Eliezer thinks and that Sam Altman is really careful.
2. One where there is another SOTA AI capabilities lab, maybe owned by the government, but AI is treated as a dangerous and scary technology that must be handled with care. We have more alignment research, the government keeps tabs on AI labs to make sure they’re not doing anything stupid (and maybe adds red tape that slows them down), and AI capabilities researchers everywhere don’t do obviously stupid things.
Let’s even think about the history here. Early Eliezer advocating for AGI to prevent nanotech from killing all of humanity was probably bad. But I am unconvinced that Eliezer’s advocacy from then up until 2015 or so was net-negative. My understanding is that though his work led to the development of AI capabilities labs, there was nobody at the time working on alignment anyway. The reflex of “AI capabilities research is bad” only holds if there is sufficient progress on ensuring AI safety in the meantime.
One last note, on “power”. Assuming Eliezer isn’t horribly wrong about things, the worlds in which we survive AI are those where AI is widely acknowledged as extremely powerful. We’re just not going to make it if policy-makers and/or tech people don’t understand what they’re dealing with here. Maybe there are reasons to delay this understanding by a few years (I personally strongly oppose doing so), but let’s be clear that that is the choice being made.