Some supporters of AI Safety may overestimate the imminence of AGI. It's not clear to me how much of a problem that is.
It seems plausible that there could be significant adverse effects on AI Safety itself. There has been growing awareness of the importance of policy solutions, whose theory of impact depends on support from outside the AI Safety community. I think there's a risk that AI Safety is becoming linked, in the minds of third parties, with a belief in AGI imminence in a way that would seriously, if not irrevocably, damage its credibility in the event of a bubble or crash.
One might think that publicly embracing imminence is worth the risk, of course. For example, policymakers are less likely to endorse strong action on anything whose consequences are expected to arrive many decades in the future. But being perceived as having cried wolf, if a bubble pops, is likely to carry consequences of its own.