I suspect it’s mostly the way you’ve written it. As a rule of thumb, always aim for high reasoning transparency. I asked ChatGPT o3 to rewrite it in a style that’s more likely to appeal to EAs and that frames it in terms of reducing the risk of stable totalitarianism. I’ve pasted its output below.
“Claim. A second Trump presidency would raise the probability that the United States drifts toward a technologically entrenched autocracy, thereby increasing the global risk of stable totalitarianism — a scenario in which an oppressive regime locks in power for centuries or more.¹
Why that matters. Even a <1 % chance of permanent totalitarian lock-in constitutes an existential risk: it would foreclose almost all future value while inflicting vast suffering.² Emerging tech — especially frontier AI, ubiquitous surveillance, and autonomous weapons — could remove the usual checks (elite defection, popular uprising, leadership succession) that historically topple dictatorships.³
Mechanisms by which Trump plausibly raises the risk:
• Erosion of democratic guard-rails. Intent to purge the civil service and use federal agencies for partisan aims weakens the institutions that normally resist autocratic consolidation.
• Politicised AI and surveillance. Allies have floated centralising control of federal datasets and AI models; misused, these tools could neutralise opposition and entrench rule.
• Eroded norms of power transfer. Open refusal to accept electoral defeat in 2020 signals a willingness to test the limits of constitutional constraint.
Scale & neglectedness. The U.S. accounts for ~25 % of world GDP and a decisive share of AI R&D; trajectory changes here propagate globally. Yet only ~$70 m/yr flows to non-partisan democracy-protection charities, versus >$10 bn in partisan spend.
Tractability. Cost-effective levers include:
State-level democracy infrastructure (voter-registration, local media fact-checks) — historical cost ≈ $300–400 per net vote.
Legal defence funds for civil-service whistle-blowers.
AI-governance policy work that limits executive control over surveillance and autonomous-weapon deployment.
Next steps for funders / organisers: commission a quick Rethink Priorities dive to refine the risk delta; pilot $1–5 m to the most effective democracy-protection orgs; reassess post-election.
¹ See the 80,000 Hours problem profile on risks of stable totalitarianism.
² 80k’s BOTEC puts the century-level risk at ~0.3 %, with other experts’ estimates up to 5 %.
³ Advanced AI could give a ruler decisive military, surveillance and succession advantages, removing historic failure modes for dictatorships.”
That is awesome feedback, James. Thank you!
You’re welcome :)