My guess is that this site focuses on the prosaic, mainstream sense of AI harms, e.g. automation, privacy, and competition (what Acemoglu means here).
By the way, of the content on the webpage, “advancing trustworthy AI” seems like it could be the most relevant to AGI/ASI risk. But the link is broken, which is really on the nose!
The link for the trustworthy AI page wasn’t broken for me? https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/#Use-of-AI-by-the-Federal-Government
But unsurprisingly, it mostly seems like they are talking about bigoted algorithms and not the singularity.
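For anyone who wants to check the link’s status themselves, here is a quick sketch using Python’s requests library (nothing here is specific to ai.gov; a 200 status means the page resolves):

```python
import requests

# The "advancing trustworthy AI" page whose status was in question above.
URL = "https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/#Use-of-AI-by-the-Federal-Government"

# HEAD keeps the request lightweight; some servers reject HEAD (405),
# so fall back to a full GET in that case.
resp = requests.head(URL, allow_redirects=True, timeout=10)
if resp.status_code == 405:
    resp = requests.get(URL, timeout=10)

print(resp.status_code)  # 200 = page resolves; 404 = broken link
```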
However, it did link to this:
https://www.nscai.gov/
Find their abridged 2021 report here:
https://reports.nscai.gov/final-report/table-of-contents/
https://reports.nscai.gov/final-report/chapter-7/
Personally, this looked more promising than anything else I had seen. There was a section titled “adversarial AI”, which I thought might be about AGI, but upon further reading, it wasn’t. So this also appears to be in the vein of what Ozzie is saying. However, it seems they hold events semi-frequently; I think someone from EA should really try to go if they are allowed. The second link above is the closest chapter in the report to AGI stuff, if anyone wants to take a look; again, though, it’s not that impressive.
I also found this: https://www.dod-coe4ai-ml.org/leadership-members
But I can’t really tell if this is the DoD’s org or Howard University’s; it seems like they only hire Howard professors and students, so probably the latter.
The closest paper I could find from them to anything AGI-related: https://www.techrxiv.org/articles/preprint/Recent_Advances_in_Trustworthy_Explainable_Artificial_Intelligence_Status_Challenges_and_Perspectives/17054396/1
Yep; the US government is definitely taking some actions to advance AI development in general.
Its work to promote AI safety, and in particular to regulate AGI, or at least discuss what to do about it, seems much more lacking.
Indeed, ai.gov doesn’t have even a single mention of the term “AGI”.
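If anyone wants to spot-check that claim, here is a rough sketch in Python (the page list is purely illustrative; a thorough check would crawl the site’s sitemap):

```python
import re
import requests

# Illustrative spot check, not a full crawl: the page list below is an
# assumption; swap in URLs from the site's sitemap for a real survey.
PAGES = [
    "https://www.ai.gov/",
    "https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/",
]

for url in PAGES:
    html = requests.get(url, timeout=10).text
    # \b word boundaries avoid matching "AGI" inside words like "MAGIC";
    # the search is case-sensitive by default, which is what we want here.
    hits = re.findall(r"\bAGI\b", html)
    print(f"{url}: {len(hits)} mention(s) of 'AGI'")
```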