I’m a journalist researching and writing about democratic reforms that would counterbalance or constrain populism. The more I delve into this, the more it seems like an urgent priority. I think the incentive structure created by today’s media-political environment encourages politicians like Trump to become more populist, and that this leads to lower-quality governance, with visibly less well-informed people in charge.
But governance increasingly seems like an AI problem to me. If superintelligence is coming quickly, it will radically increase the power of some governments (e.g. the US) and could cause existing power structures to ossify. If the politicians in charge have bad priorities or incentives, they could use AI to make the world much worse instead of much better. We may be running out of time to fix these incentives.
Reform, in my mind, means far greater citizen participation, using a mix of online platforms, “alignment assemblies” (online citizens’ assemblies plus deliberation, but much faster and cheaper), sortition to select citizen observers of government at all scales, and other ways to depolarise debate, build trust, and find consensus. In short, an ambitious digital democracy in the vein of Audrey Tang’s work in Taiwan.
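For concreteness, here is a minimal sketch of the selection step behind sortition: stratified random sampling, so that a small panel roughly mirrors the make-up of the wider population. This is purely illustrative Python; the function name, data shapes, and region-based strata are assumptions of mine, not a description of any real selection system.

```python
import random
from collections import defaultdict

def sortition_sample(pool, k, stratum_of, seed=None):
    """Draw a k-person panel from pool, keeping each stratum's share of
    the panel roughly proportional to its share of the population."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for person in pool:
        by_stratum[stratum_of(person)].append(person)

    panel = []
    for members in by_stratum.values():
        quota = len(members) * k // len(pool)  # proportional seats, rounded down
        panel.extend(rng.sample(members, quota))

    # Rounding down leaves a few seats empty; fill them uniformly at random.
    chosen = {id(p) for p in panel}
    remaining = [p for p in pool if id(p) not in chosen]
    panel.extend(rng.sample(remaining, k - len(panel)))
    return panel

# Illustrative use: a 50-person citizen observer panel stratified by region.
pool = [{"name": f"citizen-{i}", "region": region}
        for i, region in enumerate(random.choices("NSEW", k=10_000))]
panel = sortition_sample(pool, k=50, stratum_of=lambda p: p["region"], seed=0)
```

Real-world sortition (e.g. for citizens’ assemblies) typically stratifies on several attributes at once and handles unequal response rates, but the core idea is the same: random selection under demographic quotas.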
But this is a surprisingly neglected problem, with tiny resources devoted to it compared with the huge heft behind the algorithms, such as those driving social media feeds, that distort political debate.
Does this very rough sketch of my argument seem reasonable? Is this the sort of project EA orgs might fund some research on? If so, how could I contribute to that effort?
You might want to look into “AI for Epistemics”, which I think overlaps substantially with (or possibly complements) your concerns and approach. Some resources:
https://forum.effectivealtruism.org/posts/jPKoNFRowKJwGgGyy/what-s-important-in-ai-for-epistemics
https://80000hours.org/2024/05/project-idea-ai-for-epistemics/
https://www.lesswrong.com/posts/Gi8NP9CMwJMMSCWvc/ai-for-epistemics-hackathon (completed)
https://www.flf.org/fellowship (closed)
I think your arguments are directionally correct, but without more detail it’s hard to say whether I support specific conclusions or interventions. Also, unfortunately, I don’t think the buck stops at within-country governance; there is urgent work required on international AI governance as well.
As a journalist, look into the Tarbell Fellowship, or consider whether being an independent writer/thinker/commentator (e.g. Shakeel’s Transformer or Nathan’s Cognitive Revolution podcast) is a path you’d be excited to take; there is so much room and demand for high-quality AI-risk-aware content, and so few players.
There are all kinds of other paths, e.g. in think tanks, the civil service, or politics, that can help reduce AI risks, should you want to explore them. Consider applying for 80k advising!