You might want to look into “AI for Epistemics”, which I think overlaps substantially with (or possibly complements) your concerns and approach. Some resources:
https://forum.effectivealtruism.org/posts/jPKoNFRowKJwGgGyy/what-s-important-in-ai-for-epistemics
https://80000hours.org/2024/05/project-idea-ai-for-epistemics/
https://www.lesswrong.com/posts/Gi8NP9CMwJMMSCWvc/ai-for-epistemics-hackathon (completed)
https://www.flf.org/fellowship (closed)
I think your arguments are directionally correct, but without more detail it’s hard to say whether I support specific conclusions or interventions. Also, unfortunately, I don’t think the buck stops at within-country governance; there is urgent work needed on international AI governance as well.
As a journalist, look into the Tarbell Fellowship, or consider whether being an independent writer/thinker/commentator (e.g. Shakeel’s Transformer, Nathan’s Cognitive Revolution podcast) is a path you’d be excited to pursue; there is so much room for, and demand for, high-quality AI-risk-aware content, and so few players.
There are all kinds of other paths to pursue that can help reduce AI risks (e.g. in think tanks, the civil service, politics), should you want to explore them. Consider applying for 80k advising!