Scriptwriter for RationalAnimations! Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc. Also a big fan of EA / rationalist fiction!
Jackson Wagner
it could be the case that he is either lying or cognitively biased to believe in the ideas he also thinks are good investments
Yeah. Thiel is often, like, so many layers deep into metaphor and irony in his analysis, that it’s hard to believe he keeps everything straight inside his head. Some of his investments have a pretty plausible story about how they’re value-aligned, but notably his most famous and most lucrative investment (he was the first outside investor in Facebook, and credits Girardian ideas for helping him see the potential value) seems ethically disastrous! And not just from the commonly-held liberal-ish perspective that social media is bad for people’s mental health and/or seems partly responsible for today’s unruly populist politics. From a Girardian perspective it seems even worse!! Facebook/instagram/twitter/etc are literally the embodiment of mimetic desire, hugely accelerating the pace and intensity of the scapegoat process (cancel culture, wokeness, etc—the very things Thiel despises!) and hastening a catastrophic Girardian war of all against all as people become too similar in their desires and patterns of thinking (the kind of groupthink that is such anathema to him!).
Palantir also seems like a dicey, high-stakes situation where its ultimate impact could be strongly positive or strongly negative, very hard to figure out which.
If you take seriously either of these donations, they directly contradict your claim that he is worried about stable totalitarianism and certainly personal liberty
I would say it seems like there are three potential benefits that Thiel might see in his support for Vance / Masters:
Grim neoreactionary visions of steering the future of the country by doing unlawful, potentially coup-like stuff at some point in the future. (I think this is a terrible idea.)
A kind of vague, vibes-based sense that we need to support conservatives in order to shake up the stagnant liberal establishment and “change the conversation” and shift the culture. (I think this is a dumb idea that has backfired so far.)
The normal concept of trying to support people who agree with you on various policies, in the hopes they pass those policies—maybe now, or maybe only after 2028 on the off chance that Vance becomes president later. (I don’t know much about the details here, but at least this plan isn’t totally insane?)
Neoreaction: In this comment I try to map out the convoluted logic by which Thiel might be reconciling his libertarian beliefs like “I am worried about totalitarianism” with neoreactionary ideas like “maybe I should help overthrow the American government”. (Spoilers: I really don’t think his logic adds up; any kind of attempt at a neoreactionary power-grab strikes me as extremely bad in expectation.) I truly do think this is at least some part of Thiel’s motivation here. But I don’t think that his support for Vance (or Blake Masters) was entirely or mostly motivated by neoreaction. There are obviously a lot of reasons to try and get one of your buddies to become a senator! If EA had any shot at getting one of “our guys” to be the next Dem vice president, I’m sure we’d be trying hard to do that!
“Shifting the conversation”: In general, I think Thiel’s support for Trump in 2016 was a dumb idea that backfired and made the world worse (and not just by Dem lights—Thiel himself now seems to regret his involvement). He sometimes seems so angry at the stagnation created by the dominant liberal international order, that he assumes if we just shake things up enough, people will wake up and the national conversation will suddenly shift away from culture-war distractions to more important issues. But IMO this hasn’t happened at all. (Sure, Dems are maybe pivoting to “abundance” away from wokeness, which is awesome. But meanwhile, the entire Republican party has forgotten about “fiscal responsibility”, etc, and fallen into a protectionist / culture-war vortex. And most of all, the way Trump’s antics constantly saturate the news media seems like the exact opposite of a healthy national pivot towards sanity.) Nevertheless, maybe Thiel hasn’t learned his lesson here, so a misguided desire to generally oppose Dems even at the cost of supporting Trump probably forms some continuing part of his motivation.
Just trying to actually get desired policies (potentially after 2028): I’d be able to say more about this if I knew more about Vance and Masters’ politics. But I’m not actually an obsessive follower of JD Vance Thought (in part because he just seems to lie all the time) like I am with Thiel. But, idk, some thoughts on this, which seems like it probably makes up the bulk of the motivation:
Vance does seem to just lie all the time, misdirecting people and distracting from one issue by bringing up another in a totally scope-insensitive way. (Albeit this lying takes a kind of highbrow, intellectual, right-wing-substacker form, rather than Trump’s stream-of-consciousness narcissistic confabulation style.) He’ll say stuff like “nothing in this budget matters at all, don’t worry about the deficit or the benefit cuts or etc—everything will be swamped by the importance of [some tiny amount of increased border enforcement funding]”.
The guy literally wrote a whole book about all the ways Trump is dumb and bad, and now has to constantly live a lie to flatter Trump’s whims, and is apparently pulling that trick off successfully! This makes me feel like “hmm, this guy is the sort of smart Machiavellian type dude who might have totally different actual politics than what he externally espouses”. So, who knows, maybe he is secretly 100% on board with all of Thiel’s transhumanist libertarian stuff, in which case Thiel’s support would be easily explained!
Sometimes (like deficit vs border funding, or his anti-Trump book vs his current stance) it’s obvious that he’s knowingly lying. But other times he seems genuinely confused and scope-insensitive. Like, maybe one week he’s going on about how falling fertility rates are a huge crisis and the #1 priority. Then another week he’s crashing the Paris AI summit and explaining how America is ditching safetyism and going full-steam ahead since AI is the #1 priority. (Oh yeah, but also he claims to have read AI 2027 and to be worried about many of the risks...) Then it’s back to cheerleading for deportations and border control, since somehow stopping immigrants is the #1 priority. (He at least knows it’s Trump’s #1 best polling issue...) Sometimes all this jumping-around seems to happen within a single interview conversation, in a way that makes me think “okay, maybe this guy is not so coherent”.
All the lying makes it hard to tell where Vance really stands on various issues. He seems like he was pushing to be less involved in fighting against the Houthis and Iran? (Although he lost those internal debates.) Does he actually care about immigration, or is that fake? What does he really think about tariffs and various budget battles?
Potential Thiel-flavored wins coming out of the White House:
Zvi says that “America’s AI Action Plan is Pretty Good”; whose doing is that? Not Trump. Probably not Elon. If this was in part due to Vance, then this is probably the biggest Vance-related payoff Thiel has gotten so far.
The long-threatened semiconductor tariff might be much weaker than expected; probably this was the work of Nvidia lobbyists or something, but again, maybe Vance had a finger on the scale here?
Congress has also gotten really pro-nuclear-power really quickly, although again this is probably at the behest of AI-industry lobbyists, not Vance.
But it might especially help to have a cheerleader in the executive branch when you are trying to overhaul the government with AI technology, eg via big new Palantir contracts or providing ChatGPT to federal workers.
Thiel seems to be a fan of cryptocurrency; the Republicans have done a lot of pro-crypto stuff, although maybe they would have done all this anyways without Vance.
Hard to tell where Thiel stands on geopolitical issues, but I would guess he’s in the camp of people who are like “ditch Russia/Ukraine and ignore Iran/Israel, but be aggressive on containing China”. Vance seems to be a dove on Iran and the Houthis, and his perennial Europe-bashing is presumably seen as helpful as regards Russia, trying to convince Europe that they can’t always rely on the USA to back them up, and therefore need to handle Russia themselves.
Tragically, RFK is in charge of all the health agencies and is doing a bunch of terrible, stupid stuff. But Marty Makary at the FDA and Jim O’Neill at the HHS are Thiel allies and have been scurrying around amidst the RFK wreckage, doing all kinds of cool stuff—trying to expedite pharma manufacturing build-outs, building AI tools to accelerate FDA approval processes, launching a big new ARPA-H research program for developing neural interfaces, et cetera. This doesn’t have anything to do with Vance, but definitely represents return-on-investment for Thiel’s broader influence strategy. (One of the few arguable bright spots for the tech right, alongside AI policy, since Elon’s DOGE effort has been such a disaster, NASA lost an actually-very-promising Elon-aligned administrator, Trump generally has been a mess, etc.)
Bracketing the ill effects of generally continuing to support Trump (which are maybe kind of a sunk cost for Thiel at this point), the above wins seem easily worth the $30m or so spent on Vance and Masters’ various campaigns.
And then of course there’s always the chance he becomes president in 2028, or otherwise influences the future of a hopefully-post-Trump republican party, and therefore gets a freer hand to implement whatever his actual politics are.
I’m not sure how the current wins (some of them, like crypto deregulation or abandoning Ukraine or crashing the Paris AI summit, are only wins from Thiel’s perspective, not mine) weigh up against bad things Vance has done (in the sense of bad-above-replacement relative to the other vice-presidential contenders like Marco Rubio) -- compared to more normal Republicans, Vance seems potentially more willing to flatter Trump’s idiocy on stuff like tariffs, or trying to annex Greenland, or riling people up with populist anti-immigrant rhetoric.
I am a biased center left dem though
I am a centrist dem too, if you can believe it! I’m a big fan of Slow Boring, and in recent months I have also really enjoyed watching Richard Hanania slowly convert from a zealous alt-right anti-woke crusader into a zealous neoliberal anti-Trump dem and shrimp-welfare-enjoyer. But I like to hear a lot of very different perspectives about life (I think it’s very unclear what’s going on in the world, and getting lots of different perspectives helps for piecing together the big picture and properly understanding / prioritizing things), which causes me to be really interested in a handful of “thoughtful conservatives”. There are only a few of them, especially when they keep eventually converting to neoliberalism / georgism / EA / etc, so each one gets lots of attention...
I think Thiel really does have a variety of strongly held views. Whether these are “ethical” views, ie views that are ultimately motivated by moral considerations… idk, kinda depends on what you are willing to certify as “ethical”.
I think you could build a decent simplified model of Thiel’s motivations (although this would be crediting him with WAY more coherence and single-mindedness than he or anyone else really has IMO) by imagining he is totally selfishly focused on obtaining transhumanist benefits (immortality, etc) for himself, but realizes that even if he becomes one of the richest people on the planet, you obviously can’t just go out and buy immortality, or even pay for a successful immortality research program—it’s too expensive, there are too many regulatory roadblocks to progress, etc. You need to create a whole society that is pro-freedom and pro-property-rights (so it’s a pleasant, secure place for you to live) and radically pro-progress. Realistically it’s not possible to just create an offshoot society, like a charter city in the ocean or a new country on Mars (the other countries will mess with you and shut you down). So this means that just to get a personal benefit to yourself, you actually have to influence the entire trajectory of civilization, avoiding various apocalyptic outcomes along the way (nuclear war, stable totalitarianism), etc. Is this an “ethical” view?
Obviously, creating a utopian society and defeating death would create huge positive externalities for all of humanity, not just Mr Thiel.
(Although longtermists would object that this course of action is net-negative from an impartial utilitarian perspective—he’s short-changing unborn future generations of humanity, running a higher level of extinction risk in order to sprint to grab the transhumanist benefits within his own lifetime.)
But if the positive externalities are just a side-benefit, and the main motivation is the personal benefit, then it is a selfish rather than altruistic view. (Can a selfish desire for personal improvement and transcendence still be “ethical”, if you’re not making other people worse off?)
Would Thiel press a button to destroy the whole world if it meant he personally got to live forever? I would guess he wouldn’t, which would go to show that this simplified monomaniacal model of his motivations is wrong, and that there’s at least a substantial amount of altruistic motivation in there.
I also think that lots of big, world-spanning goals (including altruistic things like “minimize existential risk to civilization”, or “minimize animal suffering”, or “make humanity an interplanetary species”) often problematically route through the convergent instrumental goal of “optimize for money and power”, while also being sincerely-held views. And none more so than a personal quest for immortality! But he doesn’t strike me as optimizing for power-over-others as a sadistic goal for its own sake (as it may have been for, say, Stalin) -- he seems to have such a strong belief in the importance of individual human freedom and agency that it would be surprising if he’s secretly dreaming of enslaving everyone and making them do his bidding. (Rather, he consistently sees himself as trying to help the world throw off the shackles of a stultifying, controlling, anti-progress regime.)
But getting away from this big-picture philosophy, Thiel also seems to have lots of views which, although they technically fit nicely into the overall “perfect rational selfishness” model above, seem to at least in part be fueled by an ethical sense of anger at the injustice of the world. For example, sometime in the past few years Thiel started becoming a huge Georgist. (Disclaimer: I myself am a huge Georgist, and I think it always reflects well on people, both morally and in terms of the quality of their world-models / ability to discern truth.)
Here is a video lecture where Thiel spends half an hour at the National Conservatism Conference, desperately begging Republicans to stop just being obsessed with culture-war chum and instead learn a little bit about WHY California is so messed up (ie, the housing market), and therefore REALIZE that they need to pass a ton of “YIMBY” laws right away in all the red states, or else red-state housing markets will soon become just as dysfunctional as California’s, and hurt middle class and poor people there just like they do in California. There is some mean-spiritedness and a lot of Republican in-group signalling throughout the video (like when he is mocking the 2020 dem presidential primary candidates), but fundamentally, giving a speech trying to save the American middle class by YIMBY-pilling the Republicans seems like a very good thing, potentially motivated by sincere moral belief that ordinary people shouldn’t be squeezed by artificial scarcity creating insane rents.
Here’s a short, two-minute video where Thiel is basically just spreading the Good News about Henry George, wherein he says that housing markets in anglosphere countries are a NIMBY catastrophe which has been “a massive hit to the lower-middle class and to young people”.
Thiel’s Georgism ties into some broader ideas about a broken “inter-generational compact”, whereby the boomer generation has unjustly stolen from younger generations via housing scarcity pushing up rents, via ever-growing Medicare / Social Security spending and growing government debt, via shutting down technological progress in favor of safetyism, via a “corrupt” higher-education system that charges ever-higher tuition without providing good enough value for money, and various other means.
The cynical interpretation of this is that this is just a piece of his overall project to “make the world safe for capitalism”, which in turn is part of his overall selfish motivation: He realizes that young people are turning socialist because the capitalist system seems broken to them. It seems broken to them, not because ALL of capitalism is actually corrupt, but specifically because they are getting unjustly scammed by NIMBYism. So he figures that to save capitalism from being overthrown by angry millennials voting for Bernie, we need to make America YIMBY so that the system finally works for young people and they have a stake in the system. (This is broadly correct analysis IMO.) Somewhere I remember Thiel explicitly explaining this (ie, saying “we need to repair the intergenerational compact so all these young people stop turning socialist”), but unfortunately I don’t remember where he said this so I don’t have a link.
So you could say, “Aha! It’s really just selfishness all the way down, the guy is basically voldemort.” But, idk… altruistically trying to save young people from the scourge of high housing prices seems like going pretty far out of your way if your motivations are entirely selfish. It seems much more straightforwardly motivated by caring about justice and about individual freedom, and wanting to create a utopian world of maximally meritocratic, dynamic capitalism rather than a world of stagnant rent-seeking that crushes individual human agency.
Thiel seems to believe that the status-quo “international community” of liberal western nations (as embodied by the likes of Obama, Angela Merkel, etc) is currently doomed to slowly slide into some kind of stagnant, inescapable, communistic, one-world-government dystopia.
Personally, I very strongly disagree with Thiel that this is inevitable or even likely (although I see where he’s coming from insofar as IMO this is at least a possibility worth worrying about). Consequently, I think the implied neoreactionary strategy (not sure if this is really Thiel’s strategy since obviously he wouldn’t just admit it) -- something like “have somebody like JD Vance or Elon Musk coup the government, then roll the dice and hope that you end up getting a semi-benevolent libertarian dictatorship that eventually matures into a competent normal government, like Singapore or Chile, instead of ending up getting a catastrophic outcome like Nazi Germany or North Korea or a devastating civil war”—is an incredibly stupid strategy that is likely to go extremely wrong.
I also agree with you that Christianity is obviously false and thus reflects poorly on people who sincerely believe it. (Although I think Ben’s post exaggerates the degree to which Thiel is taking Christian ideas literally, since he certainly doesn’t seem to follow official doctrine on lots of stuff.) Thiel’s weird reasoning style that he brings not just to Christianity but to everything (very nonlinear, heavy on metaphors and analogies, not interested in technical details) is certainly not an exemplar of rationalist virtue. (I think it’s more like… heavily optimized for trying to come up with a different perspective than everyone else, which MIGHT be right, or might at least have something to it. Especially on the very biggest questions where, he presumably believes, bias is the strongest and cutting through groupthink is the most difficult. Versus normal rationalist-style thinking is optimized for just, you know, being actually fully correct the highest % of the time, which involves much more careful technical reasoning, lots of hive-mind-style “deferring” to the analysis of other smart people, etc)
Agreed that it is weird that a guy who seems to care so much about influencing world events (politics, technology, etc) has given away such a small percentage of his fortune as philanthropic + political donations.
But I would note that since Thiel’s interests are less altruistic and more tech-focused, a bigger part of his influencing-the-world portfolio can happen via investing in the kinds of companies and technologies he wants to create, or simply paying them for services. Some prominent examples of this strategy are founding Paypal (which was originally going to try and be a kind of libertarian proto-crypto alternate currency, before they realized that wasn’t possible), founding Palantir (allegedly to help defend western values against both terrorism and civil-rights infringement) and funding Anduril (presumably to help defend western values against a rising China). A funnier example is his misadventures trying to consume the blood of the youth in a dark gamble for escape from death, via blood transfusions from a company called Ambrosia. Thiel probably never needed to “donate” to any of these companies.
(But even then, yeah, it does seem a little too miserly...)
He certainly seems very familiar with the arguments involved, the idea of superintelligence, etc, even if he disagrees in some ways (hard to tell exactly which ways), and seems really averse to talking about AI in the familiar rationalist style (scaling laws, AI timelines, p-dooms, etc), and kinda thinks about everything in his characteristic style: vague, vibes- and political-alignment-based, lots of jumping around and creative metaphors, not interested in detailed chains of technical arguments.
Here is a Wired article tracing Peter Thiel’s early funding of the Singularity Institute, way back in 2005. And here’s a talk from two years ago where he is talking about his early involvement with the Singularity Institute, then mocking the bay-area rationalist community for devolving from a proper transhumanist movement into a “burning man, hippie luddite” movement (not accurate IMO!), culminating in the hyper-pessimism of Yudkowsky’s “Death with Dignity” essay.
When he is bashing EA’s focus on existential risk (like in that “anti-anti-anti-anti classical liberalism” presentation), he doesn’t do what most normal people do and say that existential risk is a big fat nothingburger. Instead, he acknowledges that existential risk is at least somewhat real (even if people have exaggerated fears about it—eg, he relates somewhere that people should have been “afraid of the blast” from nuclear weapons, but instead became “afraid of the radiation”, which leads them to ban nuclear power), but that the real existential risk is counterbalanced by the urgent need to avoid stagnation and one-world-government (and presumably, albeit usually unstated, the need to race ahead to achieve transhumanist benefits like immortality).
His whole recent schtick about “Why can we talk about the existential-risk / AI apocalypse, but not the stable-totalitarian / stagnation Antichrist?”, which of course places him squarely in the “techno-optimist” / accelerationist part of the tech right, is actually quite the pivot from a few years ago, when one of his most common catchphrases went along the lines of “If technologies can have political alignments, since everyone admits that cryptocurrency is libertarian, then why isn’t it okay to say that AI is communist?” (Here is one example.) Back then he seemed mainly focused on an (understandable) worry about the potential for AI to be a hugely power-centralizing technology, performing censorship and tracking individuals’ behavior and so forth (for example, how China uses facial and gait recognition against Hong Kong protestors, Xinjiang residents, etc).
(Thiel’s positions on AI, on government spying, on libertarianism, etc, coexist in a complex and uneasy way with the fact that of course he is a co-founder of Palantir, the premier AI-enabled-government-spying corporation, which he claims to have founded in order to “reduce terrorism while preserving civil liberties”.)
Thiel describing a 2024 conversation with Elon Musk and Demis Hassabis, where Elon is saying “I’m working on going to Mars, it’s the most important project in the world” and Demis argues “actually my project is the most important in the world; my superintelligence will change everything, and it will follow you to Mars”. (This is in the context of Thiel’s long pivot from libertarianism to a darker strain of conservatism / neoreaction, having realized that “there’s nowhere else to go” to escape mainstream culture/civilization, that you can’t escape to outer space, cyberspace, or the oceans as he once hoped, but can only stay and fight to seize control of the one future; hence all these musings about Carl Schmitt etc. that make me feel wary he is going to be egging on JD Vance to try and auto-coup the government.)
Followed by (correctly IMO) mocking Elon for being worried about the budget deficit, which doesn’t make any sense if you really are fully confident that superintelligent AI is right around the corner as Elon claims.
A couple more quotes on the subject of superintelligence from the recent Ross Douthat conversation (transcript, video):
Thiel claims to be one of those people who (very wrongly IMO) thinks that AI might indeed achieve 3000 IQ, but that it’ll turn out being 3000 IQ doesn’t actually help you do amazing things like design nanotech or take over the world:
PETER THIEL: It’s probably a Silicon Valley ideology and maybe, maybe in a weird way it’s more liberal than a conservative thing, but people are really fixated on IQ in Silicon Valley and that it’s all about smart people. And if you have more smart people, they’ll do great things. And then the economics anti IQ argument is that people actually do worse. The smarter they are, the worse they do. And they, you know, it’s just, they don’t know how to apply it, or our society doesn’t know what to do with them and they don’t fit in. And so that suggests that the gating factor isn’t IQ, but something, you know, that’s deeply wrong with our society.
ROSS DOUTHAT: So is that a limit on intelligence or a problem of the sort of personality types human superintelligence creates? I mean, I’m very sympathetic to the idea and I made this case when I did an episode of this, of this podcast with a sort of AI accelerationist that just throwing, that certain problems can just be solved if you ramp up intelligence. It’s like, we ramp up intelligence and boom, Alzheimer’s is solved. We ramp up intelligence and the AI can, you know, figure out the automation process that builds you a billion robots overnight. I, I’m an intelligent skeptic in the sense I don’t think, yeah, I think you probably have limits.
PETER THIEL: It’s, it’s, it’s hard to prove one way or it’s always hard to prove these things.
Thiel talks about transhumanism for a bit (albeit devolving into making fun of transgender people for being insufficiently ambitious) -- see here for the Dank EA Meme version of this exchange:
ROSS DOUTHAT: But the world of AI is clearly filled with people who at the very least seem to have a more utopian, transformative, whatever word you want to call it, view of the technology than you’re expressing here, and you were mentioned earlier the idea that the modern world used to promise radical life extension and doesn’t anymore. It seems very clear to me that a number of people deeply involved in artificial intelligence see it as a kind of mechanism for transhumanism, for transcendence of our mortal flesh and either some kind of creation of a successor species, or some kind of merger of mind and machine. Do you think that’s just all kind of irrelevant fantasy? Or do you think it’s just hype? Do you think people are trying to raise money by pretending that we’re going to build a machine god? Is it delusion? Is it something you worry about? I think you, you would prefer the human race to endure, right? You’re hesitating.
PETER THIEL: I don’t know. I, I would… I would...
ROSS DOUTHAT: This is a long hesitation.
PETER THIEL: There’s so many questions and pushes.
ROSS DOUTHAT: Should the human race survive?
PETER THIEL: Yes.
ROSS DOUTHAT: Okay.
PETER THIEL: But, but I, I also would. I, I also would like us to, to radically solve these problems. Transhumanism is this, you know, the ideal was this radical transformation where your human natural body gets transformed into an immortal body. And there’s a critique of, let’s say, the trans people in a sexual context or, I don’t know, transvestite is someone who changes their clothes and cross dresses, and a transsexual is someone where you change your, I don’t know, penis into a vagina. And we can then debate how well those surgeries work, but we want more transformation than that. The critique is not that it’s weird and unnatural. It’s man, it’s so pathetically little. And okay, we want more than cross dressing or changing your sex organs. We want you to be able to change your heart and change your mind and change your whole body.
Making fun of Elon for simultaneously obsessing over budget deficits while also claiming to be confident that a superintelligence-powered industrial explosion is right around the corner:
PETER THIEL: A conversation I had with Elon a few weeks ago about this was, he said, “We’re going to have a billion humanoid robots in the US in 10 years.” And I said, “Well, if that’s true, you don’t need to worry about the budget deficits because we’re going to have so much growth. The growth will take care of this.” And then, well, he’s still worried about the budget deficits. And then this doesn’t prove that he doesn’t believe in the billion robots, but it suggests that maybe he hasn’t thought it through or that he doesn’t think it’s going to be as transformative economically, or that there are big error bars around it.
From a podcast conversation with Ross Douthat, trying to explain why his interest in transhumanism and immortality is not heresy:
ROSS DOUTHAT: I generally agree with what I think is your belief that religion should be a friend to science and ideas of scientific progress. I think any idea of divine providence has to encompass the fact that we have progressed and achieved and done things that would have been unimaginable to our ancestors. But it still also seems like, yeah, the promise of Christianity in the end is you get the perfected body and the perfected soul through God’s grace. And the person who tries to do it on their own with a bunch of machines is likely to end up as a dystopian character.
PETER THIEL: Well, it’s. Let’s, let’s articulate this and you can.
ROSS DOUTHAT: Have a heretical form of Christianity. Right. That says something else.
PETER THIEL: I don’t know. I think the word nature does not occur once in The Old Testament. And so if you, and there is a word in which, a sense in which the way I understand the Judeo Christian inspiration is it is about transcending nature. It is about overcoming things.
And the closest thing you can say to nature is that people are fallen. And that that’s the natural thing in a Christian sense is that you’re messed up. And that’s true. But, you know, there’s some ways that, you know, with God’s help, you are supposed to transcend that and overcome that.
Thiel is definitely not following “standard theology” on some of the stuff you mention!
“Jesus will win for certain.” “If chaos is inevitable… why [bother trying to accelerate economic growth]?” Peter Thiel is constantly railing against this kind of sentiment. He literally will not shut up about the importance of individual human agency, so much so that he has essentially been Pascal’s mugged by the idea of the centrality of human freedom and the necessity of believing in the indeterminacy of the future. Some quotes of his:
“At the extreme, optimism and pessimism are the same thing. If you’re extremely pessimistic, there’s nothing you can do. If you’re extremely optimistic, there’s nothing you need to do. Both extreme optimism and extreme pessimism converge on laziness.”
“I went to the World Economic Forum in Davos the last time in 2013… And people are there only in their capacity as representatives of corporations or of governments or of NGOs. And it really hit me: There are simply no individuals. There are no individuals in the room. There’s nobody there who’s representing themselves. And it’s this notion of the future I reject. A picture of the future where the future will be a world where there are no individuals. There are no people with ideas of their own.”
“The future of technology is not pre-determined, and we must resist the temptation of technological utopianism — the notion that technology has a momentum or will of its own, that it will guarantee a more free future, and therefore that we can ignore the terrible arc of the political in our world. A better metaphor is that we are in a deadly race between politics and technology. The future will be much better or much worse, but the question of the future remains very open indeed. We do not know exactly how close this race is, but I suspect that it may be very close, even down to the wire. Unlike the world of politics, in the world of technology the choices of individuals may still be paramount. The fate of our world may depend on the effort of a single person who builds or propagates the machinery of freedom that makes the world safe for capitalism.”
“COWEN: What number should I keep my eye on? Let’s say you’re going to take a long nap and I need someone to tell me, “Tyler, we’re out of the great stagnation now.” What’s the impersonal indicator that I should look at?
THIEL: I disagree with the premise of that question. I don’t think the future is this fixed thing that just exists. I don’t think there’s something automatic about the great stagnation ending or not ending. I think — I always believe in human agency and so I think it matters a great deal whether people end it or not. There was this sort of hyperoptimistic book by Kurzweil, The Singularity Is Near; we had all these sort of accelerating charts. I also disagree with that, not just because I’m more pessimistic, but I disagree with the vision of the future where all you have to do is sit back, eat popcorn, and watch the movie of the future unfold. I think the future is open to us to decide what to do. If you take a nap, if you encourage everybody else to take a nap, then the great stagnation is never going to end.”
He is constantly on about this, mentioning the point about optimism/pessimism both leading to inaction in almost every interview. In some of his Christian stuff he also talks about the importance of how God gave us free will, etc. Not sure exactly how all the theology adds up in his head, since as you point out, it seems very hard to square this with taking Christian ideas about the end times 100% literally.
Similar situation regarding longevity and human flourishing versus a literalist take of tallying up “number of souls saved”—he definitely doesn’t seem to be tallying souls in the usual way where it’s just about telling people the Good News, rather seems to think of the kingdom of heaven as something more material that humanity will potentially help bring about (perhaps something like, eg, a future transhumanist utopia of immortal uploaded super-minds living in a Dyson swarm, although he doesn’t come out and say this). When Christian interviewers ask him about his interest in life extension, he talks about how Christianity is very pro-life, it says that life is good and more life is better, that Christianity says death is bad and importantly that it is something to be overcome, not something to be accepted. (The Christian interviewers usually don’t seem to buy it, lol...)
“Isn’t that goal quite similar to more standard goals of keeping societies open, innovative and prosperous?”
I think Thiel might fairly argue that his quest to conquer death, achieve transcendence, and build a utopian society has a pretty strong intrinsic spiritual connotation even when pursued by modern bay-area secular-rationalist programmer types who say they are nonreligious.
He might also note that (sadly) these transhumanist goals (or even the milder goals of keeping society “innovative and prosperous”, if you interpret that as “very pro-tech and capitalistic”) are very far from universal or “standard” goals held by most people or governments. (FDA won’t even CONSIDER any proposed treatments for aging because they say aging isn’t a disease! If you even try, journalists will write attack articles calling you a eugenicist. (Heck, just look at what happened to poor Dustin Moskovitz… guy is doing totally unobjectionable stuff, just trying to save thousands of lives and minimize existential risk entirely out of the goodness of his own heart, and some unhinged psycho starts smearing him as the antichrist!) A man can’t even build a simple nuclear-battery-powered flying car without the FAA, NRC, and NHTSA all getting upset and making absurdly safetyist tradeoffs that destroy immense amounts of economic value. And if you want to fix any of that, good luck getting any nation to give you even the tiniest speck of land on which to experiment with your new constitution outlining an AI-prediction-market-based form of government… you’d have better odds trying to build a city at the bottom of the ocean!)
Historicity of the claims is extremely low IMO insofar as he’s positing some incredibly specific mechanisms that he expects to find in societies all over the world… The idea that scapegoating often focuses on a *single person* rather than a group seems very dubious. Ditto the idea that lots of pagan religions involve sacrificial gods, or that victims are often elevated as gods after death. And Girard’s theory would seem to imply that we could easily map out repeating cycles of the scapegoat process in the history of almost every human society, which as far as I know nobody has ever even claimed to have done.
I think a salvageable position here would be to say that the specific sacrificial-god scapegoat process and all the stuff about mimetic desire, is really just an exaggerated dramatization of a more abstract process akin to “Turchin Cycles” whereby societal trust/cohesion rises and falls in a cyclical way due to game-theory-style dynamics about when it’s valuable to cooperate vs defect.
Girard is also committed to thinking that Christianity is incredibly unique (because he thinks it is literally the true religion, etc), whereas IMO Christianity has neither a monopoly on sophisticated moral reasoning about collective violence (the Bhagavad Gita and various Chinese schools of thought like Confucianism and Mohism and Mahayana Buddhism come to mind), nor a particularly spotless record for avoiding scapegoating / persecution (Protestant-vs-Catholic wars, antisemitism, literal witch hunts, etc). So Christian societies don’t seem radically different to me than non-Christian ones.
I think maybe a salvageable position here for a Girard apologist would be to say something like “Okay, Christian *societies* are barely detectably different from non-Christian ones because they failed to live up to their values. But (as this SSC article speculates: https://slatestarcodex.com/2018/01/30/the-invention-of-moral-narrative/ ) maybe the reason Christianity got so popular and successful was because it helped illuminate the important moral truth (and/or compellingly viral outrage-bait narrative) that scapegoating sometimes happens, victims are sometimes innocent, etc.”
IIRC Girard posits kind of a confusing multi-step process that involves something like:
People become more and more similar due to mimetic desire, competition, imitation, etc. Ironically, as people become more similar, they become more divided and start more fights (since they increasingly want the same things, I guess). So, tension increases and the situation threatens to break out into some kind of violent anarchy.
In order to forestall a messy civil war, people instead fixate on a scapegoat (per Ben’s quote above). Everyone exaggerates the different-ness of the scapegoat and gangs up against them, which helps the community feel nice and unified again.
So the scapegoat is indeed different in some way (different religion, ethnicity, political faction, whatever). And if you ask anybody at the time, it’ll be the massive #1 culture-war issue that the scapegoated group are all heathens who butter their bread with the butter side down, while we righteous upstanding citizens butter our bread with the butter side up. But objectively, the actual difference between the two groups is very small, and indeed the scapegoat process is perhaps more effective the smaller the actual objective difference is. (One is reminded of some of Stalin’s purges, where true believers in the cause of communism were sent to the gulag for what strike us today as minor doctrinal differences. Or the long history of bitter religious schisms over nigh-incomprehensible theological disputes.)
Nice post! I am a pretty close follower of the Thiel Cinematic Universe (ie his various interviews, essays, etc), so here are a ton of sprawling, rambly thoughts. I tried to put my best material first, so feel free to stop reading whenever!
There is a pretty good Girard documentary (free to watch on youtube, likely funded in part by Thiel and friends) that came out recently.
Unrelated to Thiel or Girard, but if you enjoy that documentary, and you crave more content in the niche genre of “christian theology that is also potentially-groundbreaking sociological theory explaining political & cultural dynamics”, then I highly recommend this Richard Ngo blog post, about preference falsification, decision theory, and Kierkegaard’s concept of a “leap of faith” from his book Fear and Trembling.
I think Peter Thiel’s beef with EA is broader and deeper than just the AI-specific issue of “EA wants to regulate AI, and regulating AI is the antichrist, therefore EA is the antichrist”. Consider this bit of an interview from three years ago where he’s getting really spooked about Bostrom’s “Vulnerable World Hypothesis” paper (wherein Bostrom indeed states that an extremely pervasive, hitherto-unseen form of technologically-enabled totalitarianism might be necessary if humanity is to survive the invention of some hypothetical, extremely-dangerous technologies).
Thiel definitely thinks that EA embodies a general tendency in society (a tendency which has been dominant since the 1970s, ie the environmentalist and anti-nuclear movements) to shut down new technologies out of fear.
It’s unclear if he thinks EA is cynically executing a fear-of-technology-themed strategy to influence governments, gain power, and do antichrist things itself… Or if he thinks EA is merely a useful-idiot, sincerely motivated by its fear of technology (but in a way that unwittingly makes society worse and plays into the hands of would-be antichrists who co-opt EA ideas / efforts / etc to gain power).
I think Thiel is also personally quite motivated (understandably) by wanting to avoid death. This obviously relates to a kind of accelerationist take on AI that sets him against EA, but again, there’s a deeper philosophical difference here. Classic Yudkowsky essays (and a memorable Bostrom short story, video adaptation here) share this strident anti-death, pro-medical-progress attitude (cryonics, etc), as do some philanthropists like Vitalik Buterin. But these days, you don’t hear so much about “FDA delenda est” or anti-aging research from effective altruism. Perhaps there are valid reasons for this (low tractability, perhaps). But some of the arguments given by EAs against aging’s importance are a little weak, IMO (more on this later) -- in Thiel’s view, maybe suspiciously weak. This is a weird thing to say, but I think to Thiel, EA looks like a fundamentally statist / fascist ideology, insofar as it is seeking to place the state in a position of central importance, with human individuality / agency / consciousness pushed aside.
Somebody like Thiel might say that the whole concept of “longtermism” is about suppressing the individual (and their desires for immortality / freedom / whatever), instead controlling society and optimizing (slowing) the path of technological development for the sake of overall future civilization (aka, the state). One might cite books like Ernest Becker’s The Denial of Death (which claims, per that Wikipedia page, that “human civilization is a defense mechanism against the knowledge of our mortality” and that people manage their “death anxiety” by pouring their efforts into an “immortal project”—which “enables the individual to imagine at least some vestige of meaning continuing beyond their own lifespan”). In this modern age, when heroic cultural narratives and religious delusions no longer do the job, and when building LITERAL giant pyramids in the desert for the glorification of the state is out of style, what better project than “longtermism” with which to harness individuals’ energy while keeping them under control by providing comfortable relief from their death-anxiety?
Consider the standard EA version of total hedonic utilitarianism (not always mentioned directly, but often present in EA thinking/analysis as a convenient background assumption), wherein there is no difference between individuals (10 people living 40 years is the same number of QALYs as 5 people living 80 years), no inherent notion of fundamental human rights or freedoms (perhaps instead you should content yourself with a kind of standard UBI of positively-valenced qualia), a kind of Rawlsian tendency towards communistic redistribution rather than traditional property-ownership and inequality, no accounting for Nietzschean-style aesthetics of virtue and excellence, et cetera. Utilitarianism as it is usually talked about has a bit of a “live in the pod, eat the bugs” vibe.
For the secular version of Thiel’s argument more directly, see Peter Thiel’s speech on “Anti-Anti-Anti-Anti Classical Liberalism”, in which Thiel ascends what Nick Bostrom would call a “deliberation ladder of crucial considerations” for and against classical liberalism (really more like “universities”), which (if I recall correctly—and note I’m describing, not necessarily agreeing) goes something like this:
Classical liberalism (and in particular, universities / academia / other institutions driving scientific progress) is good for all the usual reasons
Anti: But look at all this crazy wokeness and postmodernism and other forms of absurd sophistry, the universities are so corrupt with these dumb ideologies, look at all this waste and all this leftist madness. If classical liberalism inexorably led to this mess, then classical liberalism has got to go.
Anti-anti: Okay, but actually all that woke madness and sophistry is mostly confined to the humanities; things are not so bad in the sciences. Harvard et al might emit some crazy noises about BLM or Gaza, but there are lots of quiet science/engineering/etc departments slowly pushing forward cures for diseases, progress towards fusion power, etc. (And note that the sciences have been growing dramatically as a percentage of all college graduates! Humanities are basically withering away due to their own irrelevance.) Zooming out from the universities, maybe you could make a similar point about “our politics is full of insane woke / MAGA madness, but beneath all that shouting you find that the stock market is up, capitalism is humming along better than ever, etc”. So, classical liberalism is good.
Anti-anti-anti: But actually, all that scientific progress is ultimately bad, because although it’s improving our standard of living here and now, ultimately it’s leading us into terrible existential risks (as we already experience with nuclear weapons, and perhaps soon with pandemics, AI, etc).
Anti-anti-anti-anti: Okay, but you’re forgetting some things on your list of risks to worry about. Consider that 1. totalitarian one-world government is about as likely as any of those existential risks, and classical liberalism / technological progress is a good defense against that. And 2. zero technological progress isn’t a safe state, but would be a horrible zero-growth regime that would cause people to turn against each other, start wars, etc. So, the necessity of technological progress for avoiding stable totalitarianism means that classical liberalism / universities / etc are ultimately good.
I think part of the reason for Thiel talking about the antichrist (beyond his presumably sincere belief in this stuff, on whatever level of metaphoricalness vs literalness he believes Christianity) is that he probably wants to culturally normalize the use of the term “antichrist” to refer metaphorically to stable totalitarianism, in the same sense that lots of people talk about “armageddon” in a totally secular context to refer to existential risks like nuclear war. In Thiel’s view, the very fact that “armageddon” is totally normal, serious-person vocabulary, but “antichrist” connotes a ranting conspiracy theorist, is yet more evidence of society’s unhealthy tilt between the Scylla of extinction risk and the Charybdis of stable totalitarianism.
As for my personal take on Thiel’s views—I’m often disappointed at the sloppiness (bluntness? or low-decoupling-ness?) of his criticisms, which attack EA for having a problematic “vibe” and political alignment, but without digging into any specific technical points of disagreement. But I do think some of his higher-level, vibe-based critiques have a point.
Stable totalitarianism is pretty obviously a big deal, yet it goes essentially ignored by mainstream EA. (80K gives it just a 0.3% chance of happening over the next century? I feel like AI-enabled coups alone are surely above 0.3%, and that’s just one path of several!) Much of the stable-totalitarian-related discussion I see around here is left-coded things like “fighting misinformation” (presumably via a mix of censorship and targeted “education” on certain topics), or “protecting democracy” (often explicitly motivated by the desire to protect people from electing right-wing populists like Trump).
Where is the emphasis on empowering the human individual, growing human freedom, and trying to make current human freedoms more resilient and robust? I can sort of imagine a more liberty-focused EA that puts more emphasis on things like abundance-agenda deregulatory reforms, charter cities / network states, lobbying for US fiscal/monetary policy to optimize for long-run economic growth, boosting privacy-enhancing technologies (encryption of all sorts, including Vitalik-style cryptocurrency stuff, etc), delenda-ing the FDA, full steam ahead on technology for superbabies and BCIs / IQ enhancement, pushing for very liberal rules on high-skill immigration, et cetera. And indeed, a lot of this stuff is sorta present in EA to some degree. But, with the recent exception of an Ezra-Klein-endorsed abundance agenda, it kinda lives around the periphery; it isn’t the dominant vibe. Most of this stuff is probably just way lower importance / neglectedness / tractability than the existing cause areas, of course—not all cause areas can be the most important cause area! But I do think there is a bit of a blind spot here.
The one thing that I think should clearly be a much bigger deal within EA is object-level attempts to minimize stable totalitarianism—it seems to me this should perhaps be on a par with EA’s focus on biosecurity (or at the very least, nuclear war), but IRL it gets much less attention. Consider the huge emphasis devoted to mapping out the possible long-term future of AI—people are even doing wacky stuff like figuring out what kind of space-governance laws we should pass to assign ownership of distant galaxies, on the off chance that our superintelligences end up with lawful-neutral alignment and decide to respect UN treaties. Where is the similar attention on mapping out all the laws we should be passing and precedents we should be setting that will help prevent stable totalitarianism in the future?
Like maybe passing laws mandating that brain-computer-interface data be encrypted by default?
Or a law clarifying that emulated human minds have the same rights as biological humans?
Or a law attempting to ban the use of LLMs for NSA-style mass surveillance / censorship purposes, despite the fact that LLMs are obviously extremely well-suited for these tasks?
Maybe somebody should hire Rethink / Forethought / etc to map out various paths that might lead to a stable-totalitarian world government and rank them by plausibility—AI-enabled coup? Or a more traditional slow slide into socialism like Thiel et al are always on about? Or the most traditional path of all, via some charismatic right-wing dictator blitzkrieging everyone? Does it start in one nation and overrun other nations’ opposition, or emerge (as Thiel seems to imply) via a kind of loose global consensus, akin to how lots of different nations had weirdly similar policy responses to Covid-19 (and to nuclear power)? Does it route through the development of certain new technologies like extremely good AI-powered lie-detection, or AI superpersuasion, or autonomous weapons, or etc?
As far as I can tell, this isn’t really a cause area within EA (aside from a very nascent and still very small amount of attention placed on AI-enabled coups specifically).
It does feel like there are a lot of potential cause areas—spicy stuff like superbabies, climate geoengineering, perhaps some longevity or BCI-related ideas, but also just “any slightly right-coded policy work” that EA is forced to avoid for essentially PR reasons, because they don’t fit the international liberal zeitgeist. To be clear, I think it’s extremely understandable that the literal organizations Good Ventures and Open Philanthropy are constrained in this way, and I think they are probably making absolutely the right decision to avoid funding this stuff. But I think it’s a shame that the wider movement / idea of “effective altruism” is so easily tugged around by the PR constraints that OP/GV have to operate under. I think it’s a shame that EA hasn’t been able to spin up some “EA-adjacent” orgs (besides, idk, ACX grants) that specialize in some of this more-controversial stuff. (Although maybe this is already happening on a larger scale than I suspect—naturally, controversial projects would try to keep a low profile.)
I do think that EA is perhaps underrating longevity and other human-enhancement tech as a cause area. Although unlike with stable totalitarianism, I don’t think that it’s underrating the cause area SO MUCH that longevity actually deserves to be a top cause area.
But if we ever feel like it’s suddenly a top priority to try and appease Thiel and the accelerationists, and putting more money into mere democrat-approved-abundance-agenda stuff doesn’t seem to be doing the trick, it might nevertheless be worthwhile from a cynical PR perspective to put some token effort into this transhumanist stuff (and some of the “human-liberty-promoting” ideas from earlier), to convince them that we aren’t actually the antichrist.
(Not really an argument, although I do disagree with stuff like RP’s moral weights. Just kind of an impression / thought, that I am addressing to Vasco but also to invertebrate-suffering folks more broadly.)
Reading through this interesting and provocative (though also IMO incorrect) post and some of your helpfully linked resources & further analysis, it’s hard to wrap my mind around the worldview that must follow, once you believe that each random 1m^2 patch of boreal taiga, temperate grassland, and other assorted forest biomes (as you tabulate here; screenshot below), despite appearing to be an inert patch of dirt topped by a few shrubs or a tree, actually contains the moral equivalent of DOZENS of suffering humans (like 20-40 humans suffering 24/7 per cube of dirt)??
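To make the implied arithmetic explicit, here is a back-of-envelope reconstruction. The density and welfare-weight figures below are illustrative round numbers of my own, not the post’s actual tabulated inputs; they are chosen only so the product lands in the same “dozens of human-equivalents” range:

```latex
% Illustrative reconstruction only; both factors are my own assumed round numbers, not figures from the post.
\[
  \underbrace{10^{7}}_{\text{assumed nematodes per m}^2}
  \times
  \underbrace{3\times 10^{-6}}_{\text{assumed human-relative welfare weight}}
  \;\approx\; 30\ \text{suffering-human-equivalents per m}^2
\]
```

Note that the conclusion is driven almost entirely by the welfare-weight factor, which is exactly the RP-moral-weights-style step I flagged disagreement with above.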
In this Brian-Tomasik style world, humans (and indeed, essentially every visible thing) are just a tiny, thin crust of intelligence and complexity existing atop a vast hellish ocean of immense (albeit simple/repetitive) suffering. (Or, if the people complaining that nematode lives might be net-positive are correct but all the other views on the importance of invertebrates are kept the same, then everything we see is the same irrelevant crust but now sitting atop a vast incomprehensible bulk of primordial pleasure.)
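To make the arithmetic behind the “dozens of humans per square meter” picture concrete, here is a minimal back-of-envelope sketch in Python. The density, welfare weight, and suffering fraction below are purely illustrative assumptions of mine—not figures from the original post or from RP’s moral weights work—just to show the shape of the multiplication:

```python
# Back-of-envelope: "suffering-human-equivalents" per square meter of soil.
# All inputs are illustrative assumptions, NOT the original post's figures.

nematodes_per_m2 = 5e6          # assumed soil nematode density (order-of-magnitude guess)
welfare_weight_vs_human = 1e-5  # assumed moral weight of one nematode relative to one human
suffering_fraction = 0.5        # assumed fraction of nematode experience that is net-negative

human_equivalents = nematodes_per_m2 * welfare_weight_vs_human * suffering_fraction
print(f"~{human_equivalents:.0f} suffering-human-equivalents per m^2")
# With these made-up inputs: ~25 per m^2, i.e. the "dozens per cube of dirt" picture.
```

The point is just that once you grant a tiny-but-nonzero per-nematode weight, the sheer density of nematodes does all the work.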
What is the best way to imagine this? I am guessing that insect-welfare advocates would object to my image of each cube of dirt containing dozens of suffering humans, saying stuff like:
“you can’t actually use RP-style moral weights to compare things in that way” (but they seem to make exactly these comparisons all the time?)
“it’s an equivalent amount of suffering, yes, but it’s such a different TYPE of suffering that you shouldn’t picture suffering humans, instead it would be more accurate to picture X” (what should X be? maybe something simpler than an adult human but still relatable, like crying newborns or a writhing, injured insect?)
“negative QALYs aren’t actually very bad; it’s more like having a stubbed toe 24/7 than being tortured 24/7” (I am very confused about the idea of negative QALYs, neutral points, etc, and it seems everyone else is too)
Here is a picture of some square meters of boreal tundra that I googled, if it helps:
I’d also be very curious to know what people make of the fact that at least the most famous nematode species has only 302 neurons that are always wired up in the exact same way. Philosophically, I tend to be of the opinion that if you made a computer simulation of a human brain experiencing torture, it would be very bad to run that simulation. But if you then ran the EXACT same simulation again, this would not be 2x as bad—it might not be even any worse at all than running it once. (Ditto for running 2 copies of the simulation on 2 identical computers sitting next to each other. Or running the simulation on a single computer with double-width wires.) How many of those 302 neurons can possibly be involved in nematode suffering? Maybe, idk, 10 of them? How many states can those ten neurons have? How many of those states are negative vs positive? You see what I’m getting at—how long before adding more nematodes doesn’t carry any additional moral weight (under the view I outlined above), because it starts just being “literally the exact same nematode experience” simply duplicated many times?
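To put the “duplicate experiences” point in numbers, here’s a toy sketch (the ten-binary-neuron simplification is my own assumption for illustration, not a claim about C. elegans neuroscience; the global nematode count is a rough order-of-magnitude figure):

```python
# Toy illustration: if only ~10 on/off neurons could encode a nematode's negative
# states, the number of distinct possible "suffering experiences" is tiny, so almost
# all nematodes would be exact experiential duplicates of one another.
# Both inputs are rough assumptions for illustration.

neurons_involved = 10                    # assumed neurons available to encode negative states
distinct_states = 2 ** neurons_involved  # 1024 possible on/off configurations

global_nematodes = 4e20                  # rough order-of-magnitude estimate of soil nematodes worldwide
duplicates_per_state = global_nematodes / distinct_states
print(f"{distinct_states} distinct states; ~{duplicates_per_state:.1e} nematodes sharing each one")
```

On the “exact duplicates don’t add extra moral weight” view sketched above, almost all of those ~4e20 nematode-experiences would just be reruns of one of roughly a thousand templates.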
Anyways, perhaps this perspective—wherein human civilization is essentially irrelevant except insofar as we can take action that affects the infinite ocean of primitive-but-vast nematode experience—would seem more normal to me if I came from a more buddhist / hindu / jain culture instead of a mostly christian/western one—mahayana buddhism is always on about innumerable worlds filled with countless beings, things persisting for endless repetitions of lifetimes, and so forth. In contrast to christianity which places a lot of emphasis on individual human agency and the drama of historical events (like the roman empire, etc). Or one could view it as a kind of moral equivalent of the copernican / broader scientific revolution, when people were shocked to realize that the earth is actually a tiny part of an incomprehensibly vast galaxy. The galaxy is physically large, but it is mostly just rocks and gas, so (we console ourselves) it is not “morally large”; we are still at the center of the “moral universe”. But for many strong believers in animal welfare as a cause area, and doubly or triply so for believers in insect welfare, this is not the case.
Agreed with Marcus Abramovitch that (if nematode lives are indeed net-negative, and if one agrees with RP-style weights on the importance of very simple animals) this WOULD strongly suggest (both emotionally and logically) pursuing “charities that just start wildfires” (which IMO would be cost-effective—seems pretty cheap to set stuff on fire...), or charities that promote various kinds of existential risk. Vasco comments that nuclear war or bioweapons would likely result in even more insect suffering by diminishing the scope of human civilization, which makes a lot of sense to me. But there are other existential risks where this defense wouldn’t work. Deliberately hastening global warming (perhaps by building a CFC-emissions factory on the sly) might shift biomes in a favorable way for the nematodes. Steering an asteroid into the earth, or hastening the arrival of a catastrophically misaligned AI superintelligence, might effectively sterilize the planet where nukes can’t. And so on. All the standard longtermist arguments would then apply—even raising the chance of sterilizing the earth by a little bit would be worth a lot. From my perspective (as someone who disagrees with the premises of this insect-welfare stuff), these implications do seem socially dangerous.
(Pictured: how I imagine it must feel to be an insect-welfare advocate who believes that every couple meters of boreal taiga contains lifetimes of suffering??)
I actually wrote the above comment in response to a very similar “Chinese AI vs US AI” post that’s currently being discussed on LessWrong. There, commenter Michael Porter had a very helpful reply to my comment. He references a May 2024 report from Concordia AI on “The State of AI Safety in China”, whose executive summary states:
The relevance and quality of Chinese technical research for frontier AI safety has increased substantially, with growing work on frontier issues such as LLM unlearning, misuse risks of AI in biology and chemistry, and evaluating “power-seeking” and “self-awareness” risks of LLMs.
There have been nearly 15 Chinese technical papers on frontier AI safety per month on average over the past 6 months. The report identifies 11 key research groups who have written a substantial portion of these papers.
China’s decision to sign the Bletchley Declaration, issue a joint statement on AI governance with France, and pursue an intergovernmental AI dialogue with the US indicates a growing convergence of views on AI safety among major powers compared to early 2023.
Since 2022, 8 Track 1.5 or 2 dialogues focused on AI have taken place between China and Western countries, with 2 focused on frontier AI safety and governance.
Chinese national policy and leadership show growing interest in developing large models while balancing risk prevention.
Unofficial expert drafts of China’s forthcoming national AI law contain provisions on AI safety, such as specialized oversight for foundation models and stipulating value alignment of AGI.
Local governments in China’s 3 biggest AI hubs have issued policies on AGI or large models, primarily aimed at accelerating development while also including provisions on topics such as international cooperation, ethics, and testing and evaluation.
Several influential industry associations established projects or committees to research AI safety and security problems, but their focus is primarily on content and data security rather than frontier AI safety.
In recent months, Chinese experts have discussed several focused AI safety topics, including “red lines” that AI must not cross to avoid “existential risks,” minimum funding levels for AI safety research, and AI’s impact on biosecurity.
Michael then says, “So clearly there is a discourse about AI safety there, that does sometimes extend even as far as the risk of extinction. It’s nowhere near as prominent or dramatic as it has been in the USA, but it’s there.”
I agree that it’s not like everyone in China is 100% asleep at the wheel—China is a big place with lots of smart people, they can read the news and discuss ideas just like we can, and so naturally there are some folks there who share EA-style concerns about AI alignment. But it does seem like the small amount of activity happening there is mostly following / echoing / agreeing with western ideas about AI safety, and seems more concentrated among academics, local governments, etc, rather than also coming from the leaders of top labs like in the USA.
As for trying to promote more AI safety thinking in China, I think it’s very tricky—if somebody like OpenPhil just naively started sending millions of dollars to fund Chinese AI safety university groups and create Chinese AI safety think tanks / evals organizations / etc, I think this would be (correctly?) perceived by China’s government as a massive foreign influence operation designed to subvert their national goals in a critical high-priority area. Which might cause them to massively crack down on the whole concept of western-style “AI safety”, making the situation infinitely worse than before. So it’s very important that AI safety ideas in China arise authentically / independently—but of course, we paradoxically want to “help them” independently come up with the ideas! Some approaches that seem less likely to backfire here might be:
The mentioned “track 2 diplomacy”, where mid-level government officials, scientists, and industry researchers host informal / unofficial discussions about the future of AI with their counterparts in China.
Since China already somewhat follows Western thinking about AI, we should try to use that influence for good, rather than accidentally egging them into an even more desperate arms race. Eg, if the USA announces a giant “manhattan project for AI” with great fanfare, talks all about how this massive national investment is a top priority for outracing China on military capabilities, etc, that would probably just goad China’s national leaders into thinking about AI in the exact same way. So, trying to influence US discourse and policy has a knock-on effect in China.
Even just in a US context, I think it would be extremely valuable to have more objective demonstrations of dangers like alignment faking, instrumental convergence, AI ability to provide advice to would-be bioterrorists, etc. But especially if you are trying to convince Chinese labs and national leaders in addition to western ones, then you are going to be trying to reach across a much bigger gap in terms of cultural context / political mistrust / etc. For crossing that bigger gap, objective demonstrations of misalignment (and other dangers like gradual disempowerment, etc) become relatively even more valuable compared to mere discourse like translating LessWrong articles into Chinese.
@ScienceMon🔸 There is vastly less of an “AI safety community” in China—probably much less AI safety research in general, and much less of it, in percentage terms, is aimed at thinking ahead about superintelligent AI. (ie, more of China’s “AI safety research” is probably focused on things like reducing LLM hallucinations, making sure it doesn’t make politically incorrect statements, etc.)
Where are the Chinese equivalents of the American and British AISI government departments? Of organizations like METR, Epoch, Forethought, MIRI, et cetera?
Who are some notable Chinese intellectuals / academics / scientists (along the lines of Yoshua Bengio or Geoffrey Hinton) who have made any public statements about the danger of potential AI x-risks?
Have any Chinese labs published “responsible scaling plans” or tiers of “AI Safety Levels” as detailed as those from OpenAI, DeepMind, or Anthropic? Or discussed how they’re planning to approach the challenge of aligning superintelligence?
Have workers at any Chinese AI lab resigned in protest of poor AI safety policies (like the various people who’ve left OpenAI over the years), or resisted the militarization of AI technology (like Googlers protesting Project Maven, or Microsoft employees protesting the IVAS HMD program)?
When people ask this question about the relative value of “US” vs “Chinese” AI, they often go straight for big-picture political questions about whether the leadership of China or the US is more morally righteous, less likely to abuse human rights, et cetera. Personally, in these debates, I do tend to favor the USA, although certainly both the US and China have many deep and extremely troubling flaws—both seem very far from the kind of responsible, competent, benevolent entity to whom I would like to entrust humanity’s future.
But before we even get to that question of “What would national leaders do with an aligned superintelligence, if they had one,” we must answer the question “Do this nation’s AI labs seem likely to produce an aligned superintelligence?” Again, the USA leaves a lot to be desired here. But oftentimes China seems to not even be thinking about the problem. This is a huge issue from both a technical perspective (if you don’t have any kind of plan for how you’re going to align superintelligence, perhaps you are less likely to align superintelligence), AND from a governance perspective (if policymakers just think of AI as a tool for boosting economic / military progress and haven’t thought about the many unique implications of superintelligence, then they will probably make worse decisions during an extremely important period in history).
Now, indeed—has Trump thought about superintelligence? Obviously not—just trying to understand intelligent humans must be difficult for him. But the USA in general seems much more full of people who “take AI seriously” in one way or another—Silicon Valley CEOs, Pentagon advisers, billionaire philanthropists, et cetera. Even in today’s embarrassing administration, there are very high-ranking people (like Elon Musk and J. D. Vance) who seem at least aware of the transformative potential of AI. China’s government is more opaque, so maybe they’re thinking about this stuff too. But all public evidence suggests to me that they’re kinda just blindly racing forward, trying to match and surpass the West on capabilities, without giving much thought as to where this technology might ultimately go.
New Cause Area: Low-Hanging Fruit
Pretty much all company owners (or the respective investors) believe that they are most knowledgeable about what’s the best way to reinvest income.
Unfortunately, mostly they overestimate their own knowledge in this regard.

The idea that random customers would be better at corporate budgeting than the people who work in those companies and think about corporate strategy every day is a really strong claim, and you should try to offer evidence for this claim if you want people to take your fintech idea seriously.
Suppose I buy a new car from Toyota, and now I get to decide how Toyota invests the $10K of profit they made by selling me the car. There are immediately so many problems:

How on earth am I supposed to make this decision?? Should they spend the money on ramping up production of this exact car model? Or should they spend the money on R&D to make better car engines in the future? Or should they save up money to buy an electric-vehicle battery manufacturing startup? Maybe they should just spend more on advertising? I don’t know anything about running a car company. I don’t even know what their current budget is—maybe advertising was the best use of new funds last year, but this year they’re already spending a ton on advertising, and it would be better to simply return additional profits to shareholders rather than over-expand?
Would it be Toyota’s job to give me tons of material that I could read, to become informed and make the decision properly? But then wouldn’t Toyota just end up making all the decisions anyway, in the form of “recommendations”, that customers would usually agree with?
Wouldn’t a lot of this information be secret / internal data, such that giving it away would unduly help rival companies?
Maybe an idea is popular and sounds good, but is actually a terrible idea for some subtle reason. For example, “Toyota should pivot to making self-driving cars powered by AI” sounds like a good idea to me, but I’m guessing that the reason Toyota isn’t doing it is that it would be pretty difficult for them to become a leader in self-driving technology. If ill-informed customers were making decisions, wouldn’t we expect follies like this to happen all the time?
How is everyone supposed to find the time to be constantly researching different corporations? Last month I bought a car and had to become a Toyota expert, this month I bought a new TV from Samsung, next month I’ll upgrade my Apple iPhone, or maybe buy a Nintendo Switch. And let’s not forget all the grocery shopping I do, restaurant meals, and innumerable other small purchases.
What happens to all the votes of the people who never bother to engage with this system? What’s the incentive for customers to spend time making corporate decisions?
It seems like you’d need some kind of liquid-democracy-style delegation system for this to work properly, and not take up everyone’s time. Like, maybe you’d delegate most corporate decision-making power to a single expert who we think knows the most about the company (we could call this person a “CEO”), and then have a wider circle of people that oversee the CEO’s behavior and fire them if necessary (this could be a “board of directors”), and then a wider circle of people who are generally interested in that company (these might be called “shareholders”) could determine who’s on the board of directors...
Thanks for this detailed overview; I’ve been interested to learn about AI for materials science (after hearing about stuff like Alphafold in biology), and this is the most detailed exploration I’ve yet seen.
Hello!
I’m glad you found my comment useful! I’m sorry if it came across as scolding; I interpreted Tristan’s original post to be aimed at advising giant mega-donors like Open Philanthropy, moreso than individual donors. In my book, anybody donating to effective global health charities is doing a very admirable thing—especially in these dark days when the US government seems to be trying to dismantle much of its foreign aid infrastructure.
As for my own two cents on how to navigate this situation (especially now that artificial intelligence feels much more real and pressing to me than it did a few years ago), here are a bunch of scattered thoughts (FYI these bullets have kind of a vibe of “sorry, I didn’t have enough time to write you a short letter, so I wrote you a long one”):
My scold-y comment on Tristan’s post might suggest a pretty sharp dichotomy, where your choice is to either donate to proven global health interventions, or else to fully convert to longtermism and donate everything to some weird AI safety org doing hard-to-evaluate-from-the-outside technical work.
That’s a frustrating choice for a lot of reasons—it implies totally pivoting your giving to a new field, where it might no longer feel like you have a special advantage in picking the best opportunities within the space. It also means going all-in on a very specific and uncertain theory of impact (cue the whole neartermist-vs-longtermist debate about the importance of RCTs, feedback loops, and tangible impact, versus ideas like “moral uncertainty” that pull in the other direction).
You could try to split your giving 50/50, which seems a little better (in a kind of hedging-your-bets way), but still pretty frustrating for various reasons...
I might rather seek to construct a kind of “spectrum” of giving opportunities, where Givewell-style global health interventions and longtermist AI existential-risk mitigation define the two ends of the spectrum. This might be a dumb idea—what kinds of things could possibly be in the middle of such a bizarre spectrum? And even if we did find some things to put in the middle, what are the chances that any of them would pass muster as a highly-effective, EA-style opportunity? But I think possibly there could actually be some worthwhile ideas here. I will come back to this thought in a moment.
Meanwhile, I agree with Tristan’s comment here that it seems like eventually money will probably cease to be useful—maybe we go extinct, maybe we build some kind of coherent-extrapolated-volition utopia, maybe some other similarly-weird scenario happens.
(In a big-picture philosophical sense, this seems true even without AGI? Since humanity would likely eventually get around to building a utopia and/or going extinct via other means. But AGI means that the transition might happen within our own lifetimes.)
However, unless we very soon get a nightmare-scenario “fast takeoff” where AI recursively self-improves and seizes control of the future over the course of hours-to-weeks, it seems like there will probably be a transition period, where approximately human-level AI is rapidly transforming the economy and society, but where ordinary people like us can still substantially influence the future. There are a couple ways we could hope to influence the long-term future:
We could simply try to avoid going extinct at the hands of misaligned ASI (most technical AI safety work is focused on this)
If you are a MIRI-style doomer who believes that there is a 99%+ chance that AI development leads to egregious misalignment and therefore human extinction, then indeed it kinda seems like your charitable options are “donate to technical alignment research”, “donate to attempts to implement a global moratorium on AI development”, and “accept death and donate to near-term global welfare charities (which now look pretty good, since the purported benefits of longtermism are an illusion if there is effectively a 100% chance that civilization ends in just a few years/decades)”. But if you are more optimistic than MIRI, then IMO there are some other promising cause areas that open up...
There are other AI catastrophic risks aside from misalignment—gradual disempowerment is a good example, as are various categories of “misuse” (including things like “countries get into a nuclear war as they fight over who gets to deploy ASI”)
Interventions focused on minimizing the risk of these kinds of catastrophes will look different—finding ways to ease international tensions and cooperate around AI to avoid war? Advocating for georgism and UBI and designing new democratic mechanisms to avoid gradual disempowerment? Some of these things might also have tangible present-day benefits even aside from AI (like reducing the risks of ordinary wars, or reducing inequality, or making democracy work better), which might help them exist midway on the spectrum I mentioned earlier, from tangible givewell-style interventions to speculative and hard-to-evaluate direct AI safety work.
Even among scenarios that don’t involve catastrophes or human extinction, I feel like there is a HUGE variance between the best possible worlds and the median outcome. So there is still tons of value in pushing for a marginally better future—CalebMaresca’s answer mentions the idea that it’s not clear whether animals would be invited along for the ride in any future utopia. This indeed seems like an important thing to fight for. I think there are lots of things like this—there are just so many different possible futures.
(For example, if we get aligned ASI, this doesn’t answer the question of whether ordinary people will have any kind of say in crafting the future direction of civilization; maybe people like Sam Altman would ideally like to have all the power for themselves, benevolently orchestrating a nice transhumanist future wherein ordinary people get to enjoy plenty of technological advancements, but have no real influence over the direction of which kind of utopia we create. This seems worse to me than having a wider process of debate & deliberation about what kind of far future we want.)
CalebMaresca’s answer seems to imply that we should be saving all our money now, to spend during a post-AGI era that they assume will look kind of neo-feudal. This strikes me as unwise, since a neo-feudal AGI semi-utopia is a pretty specific and maybe not especially likely vision of the future! Per Tristan’s comment that money will eventually cease to be useful, it seems like it probably makes the most sense to deploy cash earlier, when the future is still very malleable:
In the post-ASI far future, we might be dead and/or money might no longer have much meaning and/or the future might already be effectively locked in / out of our control.
In the AGI transition period, the future will still be very malleable, we will probably have more money than we do now (although so will everyone else), and it’ll be clearer what the most important / neglected / tractable things are to focus on. The downside is that by this point, everyone else will have realized that AGI is a big deal, lots of crazy stuff will be happening, and it might be harder to have an impact because things are less neglected.
Today, lots of AI-related stuff is neglected, but it’s also harder to tell what’s important / tractable.
For a couple of examples of interventions that could exist midway along a spectrum from givewell-style interventions to AI safety research, which are also focused on influencing the transitional period of AGI, consider Dario Amodei’s vision of what an aspirational AGI transition period might look like, and what it would take to bring it about:
Dario talks about how AI-enhanced biological research could lead to amazing medical breakthroughs. To allow this to happen more quickly, it might make sense to lobby to reform the FDA or the clinical trial system. It also seems like a good idea to lobby for the most impactful breakthroughs to be quickly rolled out, even to people in poor countries who might not be able to afford them on their own. Getting AI-driven medical advances to more people, more quickly, would of course benefit the people for whom the treatments arrive just in time. But it might also have important path-dependent effects on the long-run future, by setting precedents, influencing culture, and so on.
In the section on “neuroscience and mind”, Dario talks about the potential for an “AI coach who always helps you to be the best version of yourself, who studies your interactions and helps you learn to be more effective”. Maybe there is some way to support / accelerate the development of such tools?
Dario is thinking of psychology and mental health here. (Imagine a kind of supercharged, AI-powered version of Happier-Lives-Institute-style wellbeing interventions like StrongMinds?) But there could be similarly wide potential for disseminating AI technology for promoting economic growth in the third world (even today’s LLMs can probably offer useful medical advice, engineering skills, entrepreneurial business tips, agricultural productivity best practices, etc).
Maybe there’s no angle for philanthropy in promoting the adoption of “AI coach” tools, since people are properly incentivized to use such tools and the market will presumably race to provide them (just as charitable initiatives like OneLaptopPerChild ended up much less impactful than ordinary capitalism manufacturing bajillions of incredibly cheap smartphones). But who knows; maybe there’s a clever angle somewhere.
He mentions a similar idea that “AI finance ministers and central bankers” could offer good economic advice, helping entire countries develop more quickly. It’s not exactly clear to me why he expects nations to listen to AI finance ministers more than ordinary finance ministers. (Maybe the AIs will be more credibly neutral, or eventually have a better track record of success?) But the general theme of trying to find ways to improve policy and thereby boost economic growth in LMIC (as described by OpenPhil here) is obviously an important goal for both the tangible benefits, and potentially for its path-dependent effects on the long-run future. So, trying to find some way of making poor countries more open to taking pro-growth economic advice, or encouraging governments to adopt efficiency-boosting AI tools, or convincing them to be more willing to roll out new AI advancements, seem like they could be promising directions.
Finally he talks about the importance of maintaining some form of egalitarian / democratic control over humanity’s future, and the idea of potentially figuring out ways to improve democracy and make it work better than it does today. I mentioned these things earlier; both seem like promising cause areas.
“However, the likely mass extinction of K-strategists and the concomitant increase in r-selection might last for millions of years.”
I like learning about ecology and evolution, so personally I enjoy these kinds of thought experiments. But in the real world, isn’t it pretty unlikely that natural ecosystems will just keep humming along for another million years? I would guess that within just the next few hundred years, human civilization will have grown in power to the point where it can do what it likes with natural ecosystems:
perhaps we bulldoze the earth’s surface in order to cover it with solar panels, fusion power plants, and computronium?
perhaps we rip apart the entire earth for raw material to be used for the construction of a Dyson swarm?
more prosaically, maybe human civilization doesn’t expand to the stars, but still expands enough (and in a chaotic, unsustainable way) such that most natural habitats are destroyed
perhaps there will have been a nuclear war (or some other similarly devastating event, like the creation of mirror life that devastates the biosphere)
perhaps we create unaligned superintelligent AI which turns the universe into paperclips
perhaps humanity grows in power but also becomes more responsible and sustainable, and we reverse global warming using abundant clean energy powering technologies like carbon air capture, assorted geoengineering techniques, etc
perhaps humanity attains a semi-utopian civilization, and we decide to extensively intervene in the natural world for the benefit of nonhuman animals
etc
Some of those scenarios might be dismissable as the kind of “silly sci-fi speculation” mentioned by the longtermist-style meme below. But others seem pretty mundane, indeed “to be expected” even by the most conservative visions of the future. To me, the million-year impact of things like climate change only seems relevant in scenarios where human civilization collapses pretty soon, but in a way that leaves Earth’s biosphere largely intact (maybe if humans all died to a pandemic?).
Infohazards are indeed a pretty big worry of lots of the EAs working on biosecurity: https://forum.effectivealtruism.org/posts/PTtZWBAKgrrnZj73n/biosecurity-culture-computer-security-culture
Sorry about that! I think I just intended to link to the same place I did for my earlier use of the phrase “AI-enabled coups”, namely this Forethought report by Tom Davidson and pals, subtitled “How a Small Group Could Use AI to Seize Power”: https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power
But also relevant to the subject is this Astral Codex Ten post about who should control an LLM’s “spec”: https://www.astralcodexten.com/p/deliberative-alignment-and-the-spec
The “AI 2027” scenario is pretty aggressive on timelines, but also features a lot of detailed reasoning about potential power-struggles over control of transformative AI which feels relevant to thinking about coup scenarios. (Or classic AI takeover scenarios, for that matter. Or broader, coup-adjacent / non-coup-authoritarianism scenarios of the sort Thiel seems to be worried about, where instead of getting taken over unexpectedly by China, Trump, or etc, today’s dominant western liberal institutions themselves slowly become more rigid and controlling.)
For some of the shenanigans that real-world AI companies are pulling today, see the 80,000 Hours podcast on OpenAI’s clever ploys to do away with its non-profit structure, or Zvi Mowshowitz on xAI’s embarrassingly blunt, totally not-thought-through attempts to manipulate Grok’s behavior on various political issues (or a similar, earlier incident at Google).