The “AI 2027” scenario is pretty aggressive on timelines, but it also features a lot of detailed reasoning about potential power struggles over control of transformative AI, which feels relevant to thinking about coup scenarios. (Or classic AI takeover scenarios, for that matter. Or broader, coup-adjacent / non-coup authoritarianism scenarios of the sort Thiel seems to be worried about, where instead of being taken over unexpectedly by China, Trump, etc., today’s dominant western liberal institutions themselves slowly become more rigid and controlling.)
For some of the shenanigans that real-world AI companies are pulling today, see the 80,000 Hours podcast on OpenAI’s clever ploys to do away with its non-profit structure, or Zvi Mowshowitz on xAI’s embarrassingly blunt, totally not-thought-through attempts to manipulate Grok’s behavior on various political issues (or a similar, earlier incident at Google).
I’m relieved to see someone bring up the coup in all of this—I think a lot of the focus in this post is on what Thiel believes or is “thinking” (which makes sense for a community founded on philosophy) versus what Thiel is “doing” (which is more the entrepreneurship / Silicon Valley approach). We can dig into what led him down this path later, imo, but the more important point is that he’s rich, powerful, and making moves. Stopping or slowing those moves is the first step at this point… I definitely think the 2027 hype is not about reaching AGI but about groups vying for control, and OpenAI has been making not-so-subtle moves toward that positioning…
I’m curious about the link that goes to AI-enabled coups and it isn’t working, could you perhaps relink it?
Sorry about that! I think I just intended to link to the same place I did for my earlier use of the phrase “AI-enabled coups”, namely this Forethought report by Tom Davidson and pals, subtitled “How a Small Group Could Use AI to Seize Power”: https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power
But also relevant to the subject is this Astral Codex Ten post about who should control an LLM’s “spec”: https://www.astralcodexten.com/p/deliberative-alignment-and-the-spec
[redacted]