Nice post! I am a pretty close follower of the Thiel Cinematic Universe (ie his various interviews, essays, etc), so here are a ton of sprawling, rambly thoughts. I tried to put my best material first, so feel free to stop reading whenever!
There is a pretty good Girard documentary (free to watch on youtube, likely funded in part by Thiel and friends) that came out recently.
Unrelated to Thiel or Girard, but if you enjoy that documentary, and you crave more content in the niche genre of “christian theology that is also potentially-groundbreaking sociological theory explaining political & cultural dynamics”, then I highly recommend this Richard Ngo blog post, about preference falsification, decision theory, and Kierkegaard’s concept of a “leap of faith” from his book Fear and Trembling.
I think Peter Thiel’s beef with EA is broader and deeper than just the AI-specific issue of “EA wants to regulate AI, and regulating AI is the antichrist, therefore EA is the antichrist”. Consider this bit of an interview from three years ago where he’s getting really spooked about Bostrom’s “Vulnerable World Hypothesis” paper (wherein Bostrom indeed states that an extremely pervasive, hitherto-unseen form of technologically-enabled totalitarianism might be necessary if humanity is to survive the invention of some hypothetical, extremely-dangerous technologies).
Thiel definitely thinks that EA embodies a general tendency in society (a tendency which has been dominant since the 1970s, as seen in the environmentalist and anti-nuclear movements) to shut down new technologies out of fear.
It’s unclear if he thinks EA is cynically executing a fear-of-technology-themed strategy to influence governments, gain power, and do antichrist things itself… Or if he thinks EA is merely a useful-idiot, sincerely motivated by its fear of technology (but in a way that unwittingly makes society worse and plays into the hands of would-be antichrists who co-opt EA ideas / efforts / etc to gain power).
I think Thiel is also personally quite motivated (understandably) by wanting to avoid death. This obviously relates to a kind of accelerationist take on AI that sets him against EA, but again, there’s a deeper philosophical difference here. Classic Yudkowsky essays (and a memorable Bostrom short story, video adaptation here) share this strident anti-death, pro-medical-progress attitude (cryonics, etc), as do some philanthropists like Vitalik Buterin. But these days, you don’t hear so much about “FDA delenda est” or anti-aging research from effective altruism. There may be valid reasons for this (low tractability, perhaps). But some of the arguments given by EAs against aging’s importance are a little weak, IMO (more on this later) -- in Thiel’s view, maybe suspiciously weak. This is a weird thing to say, but I think to Thiel, EA looks like a fundamentally statist / fascist ideology, insofar as it seeks to place the state in a position of central importance, with human individuality / agency / consciousness pushed aside.
Somebody like Thiel might say that the whole concept of “longtermism” is about suppressing the individual (and their desires for immortality / freedom / whatever), instead controlling society and optimizing (slowing) the path of technological development for the sake of overall future civilization (aka, the state). One might cite books like Ernest Becker’s The Denial of Death (which claims, per that wikipedia page, that “human civilization is a defense mechanism against the knowledge of our mortality” and that people manage their “death anxiety” by pouring their efforts into an “immortality project”—which “enables the individual to imagine at least some vestige of meaning continuing beyond their own lifespan”). In this modern age, when heroic cultural narratives and religious delusions no longer do the job, and when building LITERAL giant pyramids in the desert for the glorification of the state is out of style, what better project than “longtermism” with which to harness individuals’ energy while keeping them under control by providing comfortable relief from their death-anxiety?
Consider the standard EA version of total hedonic utilitarianism (not always mentioned directly, but often present in EA thinking/analysis as a convenient background assumption), wherein there is no difference between individuals (10 people living 40 years is the same number of QALYs as 5 people living 80 years), no inherent notion of fundamental human rights or freedoms (perhaps instead you should content yourself with a kind of standard UBI of positively-valenced qualia), a kind of Rawlsian tendency towards communistic redistribution rather than traditional property-ownership and inequality, no accounting for Nietzschean-style aesthetics of virtue and excellence, et cetera. Utilitarianism as it is usually talked about has a bit of a “live in the pod, eat the bugs” vibe.
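To make the aggregation point concrete, here is a minimal sketch of the QALY arithmetic in that parenthetical (Python; the simplifying assumption, mine rather than anything EA officially endorses, is that value is just people × years × quality weight):

```python
# Toy illustration of total-hedonic-utilitarian accounting: only the
# aggregate number of quality-adjusted life years matters, not how
# those years are distributed across individuals.

def total_qalys(people: int, years_each: float, quality: float = 1.0) -> float:
    """Total QALYs = number of people * years lived * quality weight."""
    return people * years_each * quality

scenario_a = total_qalys(people=10, years_each=40)  # 400 QALYs
scenario_b = total_qalys(people=5, years_each=80)   # 400 QALYs

# Under pure aggregation the two worlds are morally indistinguishable:
assert scenario_a == scenario_b
```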
For the secular version of Thiel’s argument more directly, see Peter Thiel’s speech on “Anti-Anti-Anti-Anti Classical Liberalism”, in which Thiel ascends what Nick Bostrom would call a “deliberation ladder of crucial considerations” for and against classical liberalism (really more like “universities”), which (if I recall correctly—and note I’m describing, not necessarily agreeing) goes something like this:
Classical liberalism (and in particular, universities / academia / other institutions driving scientific progress) is good for all the usual reasons.
Anti: But look at all this crazy wokeness and postmodernism and other forms of absurd sophistry, the universities are so corrupt with these dumb ideologies, look at all this waste and all this leftist madness. If classical liberalism inexorably led to this mess, then classical liberalism has got to go.
Anti-anti: Okay, but actually all that woke madness and sophistry is mostly confined to the humanities; things are not so bad in the sciences. Harvard et al might emit some crazy noises about BLM or Gaza, but there are lots of quiet science/engineering/etc departments slowly pushing forward cures for diseases, progress towards fusion power, etc. (And note that the sciences have been growing dramatically as a percentage of all college graduates! Humanities are basically withering away due to their own irrelevance.) Zooming out from the universities, maybe you could make a similar point about “our politics is full of insane woke / MAGA madness, but beneath all that shouting you find that the stock market is up, capitalism is humming along better than ever, etc”. So, classical liberalism is good.
Anti-anti-anti: But actually, all that scientific progress is ultimately bad, because although it’s improving our standard of living here and now, ultimately it’s leading us into terrible existential risks (as we already experience with nuclear weapons, and perhaps soon with pandemics, AI, etc).
Anti-anti-anti-anti: Okay, but you’re forgetting some things on your list of risks to worry about. Consider that 1. totalitarian one-world government is about as likely as any of those existential risks, and classical liberalism / technological progress is a good defense against that. And 2. zero technological progress isn’t a safe state, but would be a horrible zero-growth regime that would cause people to turn against each other, start wars, etc. So, the necessity of technological progress for avoiding stable totalitarianism means that classical liberalism / universities / etc are ultimately good.
I think part of the reason for Thiel talking about the antichrist (beyond his presumably sincere belief in this stuff, on whatever level of metaphoricalness vs literalness he believes Christianity) is that he probably wants to culturally normalize the use of the term “antichrist” to refer metaphorically to stable totalitarianism, in the same sense that lots of people talk about “armageddon” in a totally secular context to refer to existential risks like nuclear war. In Thiel’s view, the very fact that “armageddon” is totally normal, serious-person vocabulary, but “antichrist” connotes a ranting conspiracy theorist, is yet more evidence that society frets over the Scylla of extinction risk while ignoring the Charybdis of stable totalitarianism.
As for my personal take on Thiel’s views—I’m often disappointed at the sloppiness (bluntness? or low-decoupling-ness?) of his criticisms, which attack EA for having a problematic “vibe” and political alignment, but without digging into any specific technical points of disagreement. But I do think some of his higher-level, vibe-based critiques have a point.
Stable totalitarianism is pretty obviously a big deal, yet it goes essentially ignored by mainstream EA. (80K gives it just a 0.3% chance of happening over the next century? I feel like AI-enabled coups alone are surely above 0.3%, and that’s just one path of several!) Much of the stable-totalitarian-related discussion I see around here is left-coded things like “fighting misinformation” (presumably via a mix of censorship and targeted “education” on certain topics) and “protecting democracy” (often explicitly motivated by the desire to protect people from electing right-wing populists like Trump).
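(For intuition on why the 0.3% figure looks low: if there are several roughly independent routes to stable totalitarianism, the chance that at least one occurs is one minus the product of each route failing to occur. A toy sketch, with path probabilities invented purely for illustration, not real forecasts:)

```python
# Toy combination of several (assumed roughly independent) routes to
# stable totalitarianism. These probabilities are invented for
# illustration, not actual estimates.
paths = {
    "AI-enabled coup": 0.01,
    "slow slide into a rigid global consensus": 0.005,
    "traditional dictatorship going global": 0.003,
}

p_none = 1.0
for p in paths.values():
    p_none *= 1.0 - p

p_any = 1.0 - p_none
print(f"P(at least one path) = {p_any:.3f}")  # ~0.018, ~6x the 0.3% figure
```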
Where is the emphasis on empowering the human individual, growing human freedom, and trying to make current human freedoms more resilient and robust? I can sort of imagine a more liberty-focused EA that puts more emphasis on things like abundance-agenda deregulatory reforms, charter cities / network states, lobbying for US fiscal/monetary policy to optimize for long-run economic growth, boosting privacy-enhancing technologies (encryption of all sorts, including Vitalik-style cryptocurrency stuff, etc), delenda-ing the FDA, full steam ahead on technology for superbabies and BCIs / IQ enhancement, pushing for very liberal rules on high-skill immigration, et cetera. And indeed, a lot of this stuff is sorta present in EA to some degree. But, with the recent exception of an Ezra-Klein-endorsed abundance agenda, it kinda lives around the periphery; it isn’t the dominant vibe. Most of this stuff is probably just way lower importance / neglectedness / tractability than the existing cause areas, of course—not all cause areas can be the most important cause area! But I do think there is a bit of a blind spot here.
The one thing that I think should clearly be a much bigger deal within EA is object-level attempts to minimize stable totalitarianism—it seems to me this should perhaps be on a par with EA’s focus on biosecurity (or at the very least, nuclear war), but IRL it gets much less attention. Consider the huge emphasis devoted to mapping out the possible long-term future of AI—people are even doing wacky stuff like figuring out what kind of space-governance laws we should pass to assign ownership of distant galaxies, on the off chance that our superintelligences end up with lawful-neutral alignment and decide to respect UN treaties. Where is the similar attention on mapping out all the laws we should be passing and precedents we should be setting that will help prevent stable totalitarianism in the future?
Like maybe passing laws mandating that brain-computer-interface data be encrypted by default?
Or a law clarifying that emulated human minds have the same rights as biological humans?
Or a law attempting to ban the use of LLMs for NSA-style mass surveillance / censorship purposes, despite the fact that LLMs are obviously extremely well-suited for these tasks?
Maybe somebody should hire Rethink / Forethought / etc to map out various paths that might lead to a stable-totalitarian world government and rank them by plausibility—AI-enabled coup? Or a more traditional slow slide into socialism like Thiel et al are always on about? Or the most traditional path of all, via some charismatic right-wing dictator blitzkrieging everyone? Does it start in one nation and overrun other nations’ opposition, or emerge (as Thiel seems to imply) via a kind of loose global consensus akin to how lots of different nations had weirdly similar policy responses to Covid-19 (and to nuclear power)? Does it route through the development of certain new technologies like extremely good AI-powered lie-detection, or AI superpersuasion, or autonomous weapons, or etc?
As far as I can tell, this isn’t really a cause area within EA (aside from a very nascent and still very small amount of attention placed on AI-enabled coups specifically).
It does feel like there are a lot of potential cause areas—spicy stuff like superbabies, climate geoengineering, perhaps some longevity or BCI-related ideas, but also just “any slightly right-coded policy work” that EA is forced to avoid for essentially PR reasons, because they don’t fit the international liberal zeitgeist. To be clear, I think it’s extremely understandable that the literal organizations Good Ventures and Open Philanthropy are constrained in this way, and I think they are probably making absolutely the right decision to avoid funding this stuff. But I think it’s a shame that the wider movement / idea of “effective altruism” is so easily tugged around by the PR constraints that OP/GV have to operate under. I think it’s a shame that EA hasn’t been able to spin up some “EA-adjacent” orgs (besides, idk, ACX grants) that specialize in some of this more-controversial stuff. (Although maybe this is already happening on a larger scale than I suspect—naturally, controversial projects would try to keep a low profile.)
I do think that EA is perhaps underrating longevity and other human-enhancement tech as a cause area. Although unlike with stable totalitarianism, I don’t think that it’s underrating the cause area SO MUCH that longevity actually deserves to be a top cause area.
But if we ever feel like it’s suddenly a top priority to try and appease Thiel and the accelerationists, and putting more money into mere democrat-approved-abundance-agenda stuff doesn’t seem to be doing the trick, it might nevertheless be worthwhile from a cynical PR perspective to put some token effort into this transhumanist stuff (and some of the “human-liberty-promoting” ideas from earlier), to convince them that we aren’t actually the antichrist.
Thanks! Do you know if there is anywhere he has engaged more seriously with the possibility that AI could actually be transformative? His “maybe heterodox thinking matters” statement I quoted above feels like relatively superficial engagement with the topic.
He certainly seems very familiar with the arguments involved, the idea of superintelligence, etc, even if he disagrees in some ways (hard to tell exactly which ways). He seems really averse to talking about AI in the familiar rationalist style (scaling laws, AI timelines, p-dooms, etc), and kinda thinks about everything in his characteristic style: vague, vibes- and political-alignment-based, lots of jumping around and creative metaphors, not interested in detailed chains of technical arguments.
Here is a Wired article tracing Peter Thiel’s early funding of the Singularity Institute, way back in 2005. And here’s a talk from two years ago where he is talking about his early involvement with the Singularity Institute, then mocking the bay-area rationalist community for devolving from a proper transhumanist movement into a “burning man, hippie luddite” movement (not accurate IMO!), culminating in the hyper-pessimism of Yudkowsky’s “Death with Dignity” essay.
When he is bashing EA’s focus on existential risk (like in that “anti-anti-anti-anti classical liberalism” presentation), he doesn’t do what most normal people do and say that existential risk is a big fat nothingburger. Instead, he acknowledges that existential risk is at least somewhat real (even if people have exaggerated fears about it—eg, he relates somewhere that people should have been “afraid of the blast” from nuclear weapons, but instead became “afraid of the radiation”, which leads them to ban nuclear power), but that the real existential risk is counterbalanced by the urgent need to avoid stagnation and one-world-government (and presumably, albeit usually unstated, the need to race ahead to achieve transhumanist benefits like immortality).
His whole recent schtick about “Why can we talk about the existential-risk / AI apocalypse, but not the stable-totalitarian / stagnation Antichrist?”, which of course places him squarely in the “techno-optimist” / accelerationist part of the tech right, is actually quite the pivot from a few years ago, when one of his most common catchphrases went along the lines of “If technologies can have political alignments, since everyone admits that cryptocurrency is libertarian, then why isn’t it okay to say that AI is communist?” (Here is one example.) Back then he seemed mainly focused on an (understandable) worry about the potential for AI to be a hugely power-centralizing technology, performing censorship and tracking individuals’ behavior and so forth (for example, how China uses facial and gait recognition against Hong Kong protestors, Xinjiang residents, etc).
(Thiel’s positions on AI, on government spying, on libertarianism, etc, coexist in a complex and uneasy way with the fact that of course he is a co-founder of Palantir, the premier AI-enabled-government-spying corporation, which he claims to have founded in order to “reduce terrorism while preserving civil liberties”.)
Thiel describing a 2024 conversation with Elon Musk and Demis Hassabis, where Elon is saying “I’m working on going to Mars, it’s the most important project in the world” and Demis argues “actually my project is the most important in the world; my superintelligence will change everything, and it will follow you to Mars”. (This is in the context of Thiel’s long pivot from libertarianism to a darker strain of conservatism / neoreaction, having realized that “there’s nowhere else to go” to escape mainstream culture/civilization, that you can’t escape to outer space, cyberspace, or the oceans as he once hoped, but can only stay and fight to seize control of the one future; hence all these musings about Carl Schmitt etc. that make me feel wary he is going to be egging on J.D. Vance to try and auto-coup the government.)
Followed by (correctly IMO) mocking Elon for being worried about the budget deficit, which doesn’t make any sense if you really are fully confident that superintelligent AI is right around the corner as Elon claims.
A couple more quotes on the subject of superintelligence from the recent Ross Douthat conversation (transcript, video):
Thiel claims to be one of those people who (very wrongly IMO) thinks that AI might indeed achieve 3000 IQ, but that it’ll turn out being 3000 IQ doesn’t actually help you do amazing things like design nanotech or take over the world:
PETER THIEL: It’s probably a Silicon Valley ideology and maybe, maybe in a weird way it’s more liberal than a conservative thing, but people are really fixated on IQ in Silicon Valley and that it’s all about smart people. And if you have more smart people, they’ll do great things. And then the economics anti IQ argument is that people actually do worse. The smarter they are, the worse they do. And they, you know, it’s just, they don’t know how to apply it, or our society doesn’t know what to do with them and they don’t fit in. And so that suggests that the gating factor isn’t IQ, but something, you know, that’s deeply wrong with our society.
ROSS DOUTHAT: So is that a limit on intelligence or a problem of the sort of personality types human superintelligence creates? I mean, I’m very sympathetic to the idea and I made this case when I did an episode of this, of this podcast with a sort of AI accelerationist that just throwing, that certain problems can just be solved if you ramp up intelligence. It’s like, we ramp up intelligence and boom, Alzheimer’s is solved. We ramp up intelligence and the AI can, you know, figure out the automation process that builds you a billion robots overnight. I, I’m an intelligent skeptic in the sense I don’t think, yeah, I think you probably have limits.
PETER THIEL: It’s, it’s, it’s hard to prove one way or it’s always hard to prove these things.
Thiel talks about transhumanism for a bit (though he devolves into making fun of transgender people for being insufficiently ambitious) -- see here for the Dank EA Meme version of this exchange:
ROSS DOUTHAT: But the world of AI is clearly filled with people who at the very least seem to have a more utopian, transformative, whatever word you want to call it, view of the technology than you’re expressing here, and you were mentioned earlier the idea that the modern world used to promise radical life extension and doesn’t anymore. It seems very clear to me that a number of people deeply involved in artificial intelligence see it as a kind of mechanism for transhumanism, for transcendence of our mortal flesh and either some kind of creation of a successor species, or some kind of merger of mind and machine. Do you think that’s just all kind of irrelevant fantasy? Or do you think it’s just hype? Do you think people are trying to raise money by pretending that we’re going to build a machine god? Is it delusion? Is it something you worry about? I think you, you would prefer the human race to endure, right? You’re hesitating.
PETER THIEL: I don’t know. I, I would… I would...
ROSS DOUTHAT: This is a long hesitation.
PETER THIEL: There’s so many questions and pushes.
ROSS DOUTHAT: Should the human race survive?
PETER THIEL: Yes.
ROSS DOUTHAT: Okay.
PETER THIEL: But, but I, I also would. I, I also would like us to, to radically solve these problems. Transhumanism is this, you know, the ideal was this radical transformation where your human natural body gets transformed into an immortal body. And there’s a critique of, let’s say, the trans people in a sexual context or, I don’t know, transvestite is someone who changes their clothes and cross dresses, and a transsexual is someone where you change your, I don’t know, penis into a vagina. And we can then debate how well those surgeries work, but we want more transformation than that. The critique is not that it’s weird and unnatural. It’s man, it’s so pathetically little. And okay, we want more than cross dressing or changing your sex organs. We want you to be able to change your heart and change your mind and change your whole body.
Making fun of Elon for simultaneously obsessing over budget deficits while also claiming to be confident that a superintelligence-powered industrial explosion is right around the corner:
PETER THIEL: A conversation I had with Elon a few weeks ago about this was, he said, “We’re going to have a billion humanoid robots in the US in 10 years.” And I said, “Well, if that’s true, you don’t need to worry about the budget deficits because we’re going to have so much growth. The growth will take care of this.” And then, well, he’s still worried about the budget deficits. And then this doesn’t prove that he doesn’t believe in the billion robots, but it suggests that maybe he hasn’t thought it through or that he doesn’t think it’s going to be as transformative economically, or that there are big error bars around it.
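(Thiel’s retort here is just standard debt dynamics: if growth vastly outpaces the deficit, the debt-to-GDP ratio shrinks on its own. A toy projection, with every number invented for illustration:)

```python
# Toy debt-to-GDP trajectory under an assumed AI-driven growth explosion.
# All numbers are invented for illustration, not forecasts.
debt, gdp = 35.0, 29.0   # $ trillions, rough US ballpark
deficit_share = 0.06     # keep running deficits of 6% of GDP every year
growth = 0.30            # assume robots/AI deliver 30%/yr growth

print(f"debt/GDP today: {debt / gdp:.2f}")           # ~1.21
for year in range(10):
    debt += deficit_share * gdp  # each year's deficit piles onto the debt...
    gdp *= 1 + growth            # ...but the economy grows much faster
print(f"debt/GDP after 10 years: {debt / gdp:.2f}")  # ~0.27
```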
This was really insightful, but I’m curious if you assign much or any probability to the idea that he doesn’t actually have any strong ethical view(s). Seems plausible that he likes talking and sounding smart, has some weakly held views, but when the chips hit the table will mainly just optimize for money and power?
I think Thiel really does have a variety of strongly held views. Whether these are “ethical” views, ie views that are ultimately motivated by moral considerations… idk, kinda depends on what you are willing to certify as “ethical”.
I think you could build a decent simplified model of Thiel’s motivations (although this would be crediting him with WAY more coherence and single-mindedness than he or anyone else really has IMO) by imagining he is totally selfishly focused on obtaining transhumanist benefits (immortality, etc) for himself, but realizes that even if he becomes one of the richest people on the planet, you obviously can’t just go out and buy immortality, or even pay for a successful immortality research program—it’s too expensive, there are too many regulatory roadblocks to progress, etc. You need to create a whole society that is pro-freedom and pro-property-rights (so it’s a pleasant, secure place for you to live) and radically pro-progress. Realistically it’s not possible to just create an offshoot society, like a charter city in the ocean or a new country on Mars (the other countries will mess with you and shut you down). So this means that just to get a personal benefit to yourself, you actually have to influence the entire trajectory of civilization, avoiding various apocalyptic outcomes along the way (nuclear war, stable totalitarianism), etc. Is this an “ethical” view?
Obviously, creating a utopian society and defeating death would create huge positive externalities for all of humanity, not just Mr Thiel.
(Although longtermists would object that this course of action is net-negative from an impartial utilitarian perspective—he’s short-changing unborn future generations of humanity, running a higher level of extinction risk in order to sprint to grab the transhumanist benefits within his own lifetime.)
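(A back-of-the-envelope version of that longtermist objection, with entirely made-up numbers:)

```python
# Toy expected-value comparison of "sprint for immortality now" vs
# "go slow", in arbitrary value units. All numbers are invented to
# illustrate the longtermist objection, not actual estimates.
V_future = 1_000_000   # value of all future generations, if we survive
V_sprint = 100         # extra value current people capture by sprinting
p_doom_slow = 0.05     # assumed extinction risk if we go slow
p_doom_fast = 0.10     # assumed extinction risk if we sprint

ev_slow = (1 - p_doom_slow) * V_future
ev_fast = (1 - p_doom_fast) * V_future + V_sprint

print(ev_slow, ev_fast)  # 950000.0 vs 900100.0: sprinting loses
```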
But if the positive externalities are just a side-benefit, and the main motivation is the personal benefit, then it is a selfish rather than altruistic view. (Can a selfish desire for personal improvement and transcendence still be “ethical”, if you’re not making other people worse off?)
Would Thiel press a button to destroy the whole world if it meant he personally got to live forever? I would guess he wouldn’t, which would go to show that this simplified monomaniacal model of his motivations is wrong, and that there’s at least a substantial amount of altruistic motivation in there.
I also think that lots of big, world-spanning goals (including altruistic things like “minimize existential risk to civilization”, or “minimize animal suffering”, or “make humanity an interplanetary species”) often problematically route through the convergent instrumental goal of “optimize for money and power”, while also being sincerely-held views. And none more so than a personal quest for immortality! But he doesn’t strike me as optimizing for power-over-others as a sadistic goal for its own sake (as it may have been for, say, Stalin) -- he seems to have such a strong belief in the importance of individual human freedom and agency that it would be surprising if he’s secretly dreaming of enslaving everyone and making them do his bidding. (Rather, he consistently sees himself as trying to help the world throw off the shackles of a stultifying, controlling, anti-progress regime.)
But getting away from this big-picture philosophy, Thiel also seems to have lots of views which, although they technically fit nicely into the overall “perfect rational selfishness” model above, seem to at least in part be fueled by an ethical sense of anger at the injustice of the world. For example, sometime in the past few years Thiel started becoming a huge Georgist. (Disclaimer: I myself am a huge Georgist, and I think it always reflects well on people, both morally and in terms of the quality of their world-models / ability to discern truth.)
Here is a video lecture where Thiel spends half an hour at the National Conservatism Conference, desperately begging Republicans to stop just being obsessed with culture-war chum and instead learn a little bit about WHY California is so messed up (ie, the housing market), and therefore REALIZE that they need to pass a ton of “Yimby” laws right away in all the red states, or else red-state housing markets will soon become just as dysfunctional as California’s, and hurt middle-class and poor people there just like they do in California. There is some mean-spiritedness and a lot of Republican in-group signaling throughout the video (like when he is mocking the 2020 dem presidential primary candidates), but fundamentally, giving a speech trying to save the American middle class by Yimby-pilling the Republicans seems like a very good thing, potentially motivated by a sincere moral belief that ordinary people shouldn’t be squeezed by artificial scarcity creating insane rents.
Here’s a short, two-minute video where Thiel is basically just spreading the Good News about Henry George, wherein he says that housing markets in anglosphere countries are a NIMBY catastrophe which has been “a massive hit to the lower-middle class and to young people”.
Thiel’s georgism ties into some broader ideas about a broken “inter-generational compact”, whereby the boomer generation has unjustly stolen from younger generations: via housing scarcity pushing up rents, via ever-growing Medicare / Social Security spending and growing government debt, via shutting down technological progress in favor of safetyism, via a “corrupt” higher-education system that charges ever-higher tuition without providing good enough value for money, and via various other means.
The cynical interpretation is that this is just a piece of his overall project to “make the world safe for capitalism”, which in turn is part of his overall selfish motivation: He realizes that young people are turning socialist because the capitalist system seems broken to them. It seems broken to them, not because ALL of capitalism is actually corrupt, but specifically because they are getting unjustly scammed by NIMBYism. So he figures that to save capitalism from being overthrown by angry millennials voting for Bernie, we need to make America YIMBY so that the system finally works for young people and they have a stake in it. (This is broadly correct analysis, IMO.) Somewhere I remember Thiel explicitly explaining this (ie, saying “we need to repair the intergenerational compact so all these young people stop turning socialist”), but unfortunately I don’t remember where he said this so I don’t have a link.
So you could say, “Aha! It’s really just selfishness all the way down, the guy is basically voldemort.” But, idk… altruistically trying to save young people from the scourge of high housing prices seems like going pretty far out of your way if your motivations are entirely selfish. It seems much more straightforwardly motivated by caring about justice and about individual freedom, and wanting to create a utopian world of maximally meritocratic, dynamic capitalism rather than a world of stagnant rent-seeking that crushes individual human agency.
The “AI 2027” scenario is pretty aggressive on timelines, but also features a lot of detailed reasoning about potential power-struggles over control of transformative AI which feels relevant to thinking about coup scenarios. (Or classic AI takeover scenarios, for that matter. Or broader, coup-adjacent / non-coup authoritarianism scenarios of the sort Thiel seems to be worried about, where instead of getting taken over unexpectedly by China, Trump, or etc, today’s dominant western liberal institutions themselves slowly become more rigid and controlling.)
For some of the shenanigans that real-world AI companies are pulling today, see the 80,000 Hours podcast on OpenAI’s clever ploys to do away with its non-profit structure, or Zvi Mowshowitz on xAI’s embarrassingly blunt, totally not-thought-through attempts to manipulate Grok’s behavior on various political issues (or a similar, earlier incident at Google).
I’m relieved to see someone bring up the coup in all of this—I think there is a lot of focus in this post on what Thiel believes or is “thinking” (which makes sense for a community founded on philosophy) versus what Thiel is “doing” (which is more the entrepreneurship / Silicon Valley approach). We can dig into the ‘what led him down this path’ later, IMO, but the more important point is that he’s rich, powerful, and making moves. Stopping or slowing those moves is the first step at this point… I definitely think the 2027 hype is not about reaching AGI but about groups vying for control, and OpenAI has been making not-so-subtle moves toward that positioning…
FTR: while Thiel has claimed this version before, the more common version (e.g. here, here, here from Hassabis’ mouth, and more obliquely here in his lawsuit against Altman) is that Hassabis was warning Musk about existential risk from unaligned AGI, not threatening him with his own personally aligned AGI. However, this interpretation is interestingly resonant with Elon Musk’s creation of OpenAI being motivated by fear of Hassabis becoming an AGI dictator (a fear his co-founders apparently shared). It is certainly an interesting hypothesis that Thiel and Musk together engineered, over a decade, both the AGI race and global democratic backsliding, wholly motivated by the same single one-sentence possible slight by Hassabis in 2012.
Thanks a lot for all your comments on this post; I found them very informative. (And the top-level post from Ben as well.)
I think this picture of EA ignoring stable totalitarianism is missing the longtime focus on China.
Also, see this thread on Open Phil’s ability to support right-of-center policy work.
This was really insightful, but I’m curious: do you assign much (or any) probability to the idea that he doesn’t actually have any strong ethical views? It seems plausible that he likes talking and sounding smart and has some weakly held views, but that when the chips are down he’ll mainly just optimize for money and power.
I think Thiel really does have a variety of strongly held views. Whether these are “ethical” views, ie views that are ultimately motivated by moral considerations… idk, kinda depends on what you are willing to certify as “ethical”.
I think you could build a decent simplified model of Thiel’s motivations (although this would be crediting him with WAY more coherence and single-mindedness than he or anyone else really has, IMO) by imagining he is totally selfishly focused on obtaining transhumanist benefits (immortality, etc) for himself, but realizes that even if he becomes one of the richest people on the planet, he obviously can’t just go out and buy immortality, or even pay for a successful immortality research program—it’s too expensive, there are too many regulatory roadblocks to progress, etc. He needs to create a whole society that is pro-freedom and pro-property-rights (so it’s a pleasant, secure place for him to live) and radically pro-progress. Realistically it’s not possible to just create an offshoot society, like a charter city in the ocean or a new country on Mars (the other countries will mess with you and shut you down). So this means that just to get a personal benefit for himself, he actually has to influence the entire trajectory of civilization, avoiding various apocalyptic outcomes along the way (nuclear war, stable totalitarianism, etc.). Is this an “ethical” view?
Obviously, creating a utopian society and defeating death would create huge positive externalities for all of humanity, not just Mr Thiel.
(Although longtermists would object that this course of action is net-negative from an impartial utilitarian perspective—he’s short-changing unborn future generations of humanity, running a higher level of extinction risk in order to sprint to grab the transhumanist benefits within his own lifetime.)
But if the positive externalities are just a side-benefit, and the main motivation is the personal benefit, then it is a selfish rather than altruistic view. (Can a selfish desire for personal improvement and transcendence still be “ethical”, if you’re not making other people worse off?)
Would Thiel press a button to destroy the whole world if it meant he personally got to live forever? I would guess he wouldn’t, which would go to show that this simplified monomaniacal model of his motivations is wrong, and that there’s at least a substantial amount of altruistic motivation in there.
I also think that lots of big, world-spanning goals (including altruistic things like “minimize existential risk to civilization”, or “minimize animal suffering”, or “make humanity an interplanetary species”) often problematically route through the convergent instrumental goal of “optimize for money and power”, while also being sincerely-held views. And none more so than a personal quest for immortality! But he doesn’t strike me as optimizing for power-over-others as a sadistic goal for its own sake (as it may have been for, say, Stalin) -- he seems to have such a strong belief in the importance of individual human freedom and agency that it would be surprising if he’s secretly dreaming of enslaving everyone and making them do his bidding. (Rather, he consistently sees himself as trying to help the world throw off the shackles of a stultifying, controlling, anti-progress regime.)
But getting away from this big-picture philosophy, Thiel also seems to have lots of views which, although they technically fit nicely into the overall “perfect rational selfishness” model above, seem at least in part to be fueled by an ethical sense of anger at the injustice of the world. For example, sometime in the past few years, Thiel became a huge Georgist. (Disclaimer: I myself am a huge Georgist, and I think it always reflects well on people, both morally and in terms of the quality of their world-models / ability to discern truth.)
Here is a video lecture where Thiel spends half an hour at the National Conservatism Conference, desperately begging Republicans to stop just being obsessed with culture-war chum and instead learn a little bit about WHY California is so messed up (ie, the housing market), and therefore REALIZE that they need to pass a ton of “Yimby” laws right away in all the red states, or else red-state housing markets will soon become just as dysfunctional as California’s, hurting middle-class and poor people there just like they do in California. There is some mean-spiritedness and a lot of Republican in-group signaling throughout the video (like when he is mocking the 2020 dem presidential primary candidates), but fundamentally, giving a speech trying to save the American middle class by Yimby-pilling the Republicans seems like a very good thing, potentially motivated by a sincere moral belief that ordinary people shouldn’t be squeezed by artificial scarcity creating insane rents.
Here’s a short, two-minute video where Thiel is basically just spreading the Good News about Henry George, wherein he says that housing markets in Anglosphere countries are a NIMBY catastrophe which has been “a massive hit to the lower-middle class and to young people”.
Thiel’s Georgism ties into some broader ideas about a broken “inter-generational compact”, whereby the boomer generation has unjustly stolen from younger generations: via housing scarcity pushing up rents, via ever-growing Medicare / Social Security spending and growing government debt, via shutting down technological progress in favor of safetyism, via a “corrupt” higher-education system that charges ever-higher tuition without providing good enough value for money, and via various other means.
The cynical interpretation is that this is just one piece of his overall project to “make the world safe for capitalism”, which in turn is part of his overall selfish motivation: he realizes that young people are turning socialist because the capitalist system seems broken to them. And it seems broken to them not because ALL of capitalism is actually corrupt, but specifically because they are getting unjustly scammed by NIMBYism. So he figures that to save capitalism from being overthrown by angry millennials voting for Bernie, we need to make America YIMBY, so that the system finally works for young people and they have a stake in it. (This is a broadly correct analysis, IMO.) Somewhere I remember Thiel explicitly spelling this out (ie, saying “we need to repair the intergenerational compact so all these young people stop turning socialist”), but unfortunately I don’t remember where he said this, so I don’t have a link.
So you could say, “Aha! It’s really just selfishness all the way down; the guy is basically Voldemort.” But, idk… altruistically trying to save young people from the scourge of high housing prices seems like going pretty far out of your way if your motivations are entirely selfish. It seems much more straightforwardly motivated by caring about justice and individual freedom, and by wanting to create a utopian world of maximally meritocratic, dynamic capitalism rather than a world of stagnant rent-seeking that crushes individual human agency.
https://www.techemails.com/p/mark-zuckerberg-peter-thiel-millennials
I’m curious about the link that goes to AI-enabled coups, but it isn’t working. Could you perhaps relink it?
Sorry about that! I think I just intended to link to the same place I did for my earlier use of the phrase “AI-enabled coups”, namely this Forethought report by Tom Davidson and pals, subtitled “How a Small Group Could Use AI to Seize Power”: https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power
But also relevant to the subject is this Astral Codex Ten post about who should control an LLM’s “spec”: https://www.astralcodexten.com/p/deliberative-alignment-and-the-spec
The “AI 2027” scenario is pretty aggressive on timelines, but it also features a lot of detailed reasoning about potential power-struggles over control of transformative AI, which feels relevant to thinking about coup scenarios. (Or classic AI takeover scenarios, for that matter. Or broader, coup-adjacent / non-coup authoritarianism scenarios of the sort Thiel seems to be worried about, where instead of getting taken over unexpectedly by China, Trump, etc., today’s dominant western liberal institutions themselves slowly become more rigid and controlling.)
For some of the shenanigans that real-world AI companies are pulling today, see the 80,000 Hours podcast on OpenAI’s clever ploys to do away with its non-profit structure, or Zvi Mowshowitz on xAI’s embarrassingly blunt, totally not-thought-through attempts to manipulate Grok’s behavior on various political issues (or a similar, earlier incident at Google).
I’m relieved to see someone bring up the coup in all of this—I think there is a lot of focus in this post on what Thiel believes or is “thinking” (which makes sense for a community founded on philosophy) versus what Thiel is “doing” (which is more the entrepreneurship / Silicon Valley approach). We can dig into ‘what led him down this path’ later, imo, but the more important point is that he’s rich, powerful, and making moves. Stopping or slowing those moves is the first step at this point… I definitely think the 2027 hype is not about reaching AGI but about groups vying for control, and OpenAI has been making not-so-subtle moves toward that positioning…
[redacted]