Personally, I still think there is a lot of uncertainty around how governments will act. There are at least some promising signs (e.g., UK AI Safety Summit) that governments could intervene to end or substantially limit the race toward AGI. Relatedly, I think there’s a lot to be done in terms of communicating AI risks to the public & policymakers, drafting concrete policy proposals, and forming coalitions to get meaningful regulation through.
Some folks also have hope that internal governance (lab governance) could still be useful. I am not as optimistic here, but I don’t want to rule it out entirely.
There’s also some chance that we end up getting more concrete demonstrations of risks. I do not think we should wait for these, and I think there’s a sizable chance we do not get them in time, but I think “have good plans ready to go in case we get a sudden uptick in political will & global understanding of AI risks” is still important.
Trying to get safe, concrete demonstrations of risk through research seems well worth pursuing (I don’t think you were saying it’s not).