Ahh, I didn’t read it as you talking about the effects of Eliezer’s past outreach. I strongly buy “this time is different”, and not just because of the salience of AI in tech. The type of media coverage we’re getting is very different: the former CEO of Google warning about AI risk and a journalist asking about AI risk at the White House press briefing are unlike anything we’ve seen before. We’re reaching different audiences here. The AI landscape is also very different; AI risk arguments are a lot more convincing when we have a very capable AI to point to (GPT-4) and when we have facts like “a majority of AI researchers think p(AI killing humanity) > 10%”.
But even if you believe this time won’t be different, I think we need to think critically about which world we would rather live in:
1. The current one, where AI capabilities research keeps humming along with what seems to be inadequate AI safety research, and nobody outside of EA is really paying attention to AI safety. All we can do is hope that AI risk isn’t as plausible as Eliezer thinks and that Sam Altman is really careful.
2. One where there is another SOTA AI capabilities lab, maybe owned by the government, but AI is treated as a dangerous and scary technology that must be handled with care. We have more alignment research, the government keeps tabs on AI labs to make sure they’re not doing anything stupid and maybe adds red tape that slows them down, and AI capabilities researchers everywhere don’t do obviously stupid things.
Let’s even think about the history here. Early Eliezer advocating for AGI to prevent nanotech from killing all of humanity was probably bad. But I am unconvinced that Eliezer’s advocacy from then up until 2015 or so was net-negative. My understanding is that though his work led to the development of AI capabilities labs, there was nobody at the time working on alignment anyway. The reflex of “AI capabilities research bad” only holds if AI safety research would otherwise be making sufficient progress in the meantime.
One last note, on “power”. Assuming Eliezer isn’t horribly wrong about things, the worlds in which we survive AI are those where AI is widely acknowledged as extremely powerful. We’re just not going to make it if policy-makers and/or tech people don’t understand what they’re dealing with here. Maybe there are reasons to delay that understanding by a few years (I personally strongly oppose this), but let’s be clear that that is what we would be doing.