Is there still anything the EA community can do regarding AGI safety if a full-scale arms race toward AGI is coming soon, with OpenAI almost surely being absorbed by Microsoft?
Personally, I still think there is a lot of uncertainty around how governments will act. There are at least some promising signs (e.g., UK AI Safety Summit) that governments could intervene to end or substantially limit the race toward AGI. Relatedly, I think there’s a lot to be done in terms of communicating AI risks to the public & policymakers, drafting concrete policy proposals, and forming coalitions to get meaningful regulation through.
Some folks also have hope that internal governance (lab governance) could still be useful. I am not as optimistic here, but I don’t want to rule it out entirely.
There’s also some chance that we end up getting more concrete demonstrations of risks. I do not think we should wait for these, and I think there’s a sizable chance we do not get them in time, but I think “have good plans ready to go in case we get a sudden uptick in political will & global understanding of AI risks” is still important.
I think that trying to get safe concrete demonstrations of risk by doing research seems well worth pursuing (I don’t think you were saying it’s not).
Too soon to tell, I think. Probably better to wait for the dust to settle.
(Not that it matters much, but my own guess is that many of us who are x-risk focused should a) be as cooperative and honest with the public about our concerns with superhuman AI systems as possible, and hope that there’s enough time left for the balance of reason to win out, and b) focus on technical projects that don’t involve much internal politics.
Working on cause areas that are less fraught than x-risk also seems like a comparatively good idea, now.
Organizational politics is both corrupting and not really our (or at least my) strong suit, so it's best to leave it to others.)