He certainly seems very familiar with the arguments involved, the idea of superintelligence, etc., even if he disagrees in some ways (hard to tell exactly which), and he seems really averse to talking about AI in the familiar rationalist style (scaling laws, AI timelines, p-dooms, etc.). Instead he thinks about everything in his characteristic style: vague, vibes- and political-alignment-based, lots of jumping around and creative metaphors, not interested in detailed chains of technical arguments.
Here is a Wired article tracing Peter Thiel’s early funding of the Singularity Institute, way back in 2005. And here’s a talk from two years ago where he discusses his early involvement with the Singularity Institute, then mocks the bay-area rationalist community for devolving from a proper transhumanist movement into a “burning man, hippie luddite” movement (not accurate IMO!), culminating in the hyper-pessimism of Yudkowsky’s “Death with Dignity” essay.
When he is bashing EA’s focus on existential risk (like in that “anti-anti-anti-anti classical liberalism” presentation), he doesn’t do what most normal people do and say that existential risk is a big fat nothingburger. Instead, he acknowledges that existential risk is at least somewhat real (even if people have exaggerated fears about it; e.g., he relates somewhere that people should have been “afraid of the blast” from nuclear weapons, but instead became “afraid of the radiation”, which led them to ban nuclear power), but holds that the real existential risk is counterbalanced by the urgent need to avoid stagnation and one-world-government (and presumably, albeit usually unstated, the need to race ahead to achieve transhumanist benefits like immortality).
His whole recent schtick about “Why can we talk about the existential-risk / AI apocalypse, but not the stable-totalitarian / stagnation Antichrist?”, which of course places him squarely in the “techno-optimist” / accelerationist part of the tech right, is actually quite the pivot from a few years ago, when one of his most common catchphrases went along the lines of “If technologies can have political alignments, since everyone admits that cryptocurrency is libertarian, then why isn’t it okay to say that AI is communist?” (Here is one example.) Back then he seemed mainly focused on an (understandable) worry about the potential for AI to be a hugely power-centralizing technology, performing censorship and tracking individuals’ behavior and so forth (for example, how China uses facial and gait recognition against Hong Kong protestors, Xinjiang residents, etc.).
(Thiel’s positions on AI, on government spying, on libertarianism, etc, coexist in a complex and uneasy way with the fact that of course he is a co-founder of Palantir, the premier AI-enabled-government-spying corporation, which he claims to have founded in order to “reduce terrorism while preserving civil liberties”.)
Thiel describing a 2024 conversation with Elon Musk and Demis Hassabis, where Elon is saying “I’m working on going to mars, it’s the most important project in the world” and Demis argues “actually my project is the most important in the world; my superintelligence will change everything, and it will follow you to mars”. (This is in the context of Thiel’s long pivot from libertarianism to a darker strain of conservatism / neoreaction, having realized that “there’s nowhere else to go” to escape mainstream culture/civilization, that you can’t escape to outer space, cyberspace, or the oceans as he once hoped, but can only stay and fight to seize control of the one future (hence all these musings about Carl Schmitt, etc., that make me wary he is going to be egging on J. D. Vance to try and auto-coup the government).)
Followed by (correctly IMO) mocking Elon for being worried about the budget deficit, which doesn’t make any sense if you really are fully confident that superintelligent AI is right around the corner as Elon claims.
A couple more quotes on the subject of superintelligence from the recent Ross Douthat conversation (transcript, video):
Thiel claims to be one of those people who (very wrongly IMO) think that AI might indeed achieve 3000 IQ, but that it’ll turn out that being 3000 IQ doesn’t actually help you do amazing things like design nanotech or take over the world:
PETER THIEL: It’s probably a Silicon Valley ideology and maybe, maybe in a weird way it’s more liberal than a conservative thing, but people are really fixated on IQ in Silicon Valley and that it’s all about smart people. And if you have more smart people, they’ll do great things. And then the economics anti IQ argument is that people actually do worse. The smarter they are, the worse they do. And they, you know, it’s just, they don’t know how to apply it, or our society doesn’t know what to do with them and they don’t fit in. And so that suggests that the gating factor isn’t IQ, but something, you know, that’s deeply wrong with our society.
ROSS DOUTHAT: So is that a limit on intelligence or a problem of the sort of personality types human superintelligence creates? I mean, I’m very sympathetic to the idea and I made this case when I did an episode of this, of this podcast with a sort of AI accelerationist that just throwing, that certain problems can just be solved if you ramp up intelligence. It’s like, we ramp up intelligence and boom, Alzheimer’s is solved. We ramp up intelligence and the AI can, you know, figure out the automation process that builds you a billion robots overnight. I, I’m an intelligent skeptic in the sense I don’t think, yeah, I think you probably have limits.
PETER THIEL: It’s, it’s, it’s hard to prove one way or it’s always hard to prove these things.
Thiel talks about transhumanism for a bit (though he devolves into making fun of transgender people for being insufficiently ambitious) -- see here for the Dank EA Meme version of this exchange:
ROSS DOUTHAT: But the world of AI is clearly filled with people who at the very least seem to have a more utopian, transformative, whatever word you want to call it, view of the technology than you’re expressing here, and you were mentioned earlier the idea that the modern world used to promise radical life extension and doesn’t anymore. It seems very clear to me that a number of people deeply involved in artificial intelligence see it as a kind of mechanism for transhumanism, for transcendence of our mortal flesh and either some kind of creation of a successor species, or some kind of merger of mind and machine. Do you think that’s just all kind of irrelevant fantasy? Or do you think it’s just hype? Do you think people are trying to raise money by pretending that we’re going to build a machine god? Is it delusion? Is it something you worry about? I think you, you would prefer the human race to endure, right? You’re hesitating.
PETER THIEL: I don’t know. I, I would… I would...
ROSS DOUTHAT: This is a long hesitation.
PETER THIEL: There’s so many questions and pushes.
ROSS DOUTHAT: Should the human race survive?
PETER THIEL: Yes.
ROSS DOUTHAT: Okay.
PETER THIEL: But, but I, I also would. I, I also would like us to, to radically solve these problems. Transhumanism is this, you know, the ideal was this radical transformation where your human natural body gets transformed into an immortal body. And there’s a critique of, let’s say, the trans people in a sexual context or, I don’t know, transvestite is someone who changes their clothes and cross dresses, and a transsexual is someone where you change your, I don’t know, penis into a vagina. And we can then debate how well those surgeries work, but we want more transformation than that. The critique is not that it’s weird and unnatural. It’s man, it’s so pathetically little. And okay, we want more than cross dressing or changing your sex organs. We want you to be able to change your heart and change your mind and change your whole body.
Making fun of Elon for simultaneously obsessing over budget deficits while also claiming to be confident that a superintelligence-powered industrial explosion is right around the corner:
PETER THIEL: A conversation I had with Elon a few weeks ago about this was, he said, “We’re going to have a billion humanoid robots in the US in 10 years.” And I said, “Well, if that’s true, you don’t need to worry about the budget deficits because we’re going to have so much growth. The growth will take care of this.” And then, well, he’s still worried about the budget deficits. And then this doesn’t prove that he doesn’t believe in the billion robots, but it suggests that maybe he hasn’t thought it through or that he doesn’t think it’s going to be as transformative economically, or that there are big error bars around it.
FTR: while Thiel has told this version of the story before, the more common version (e.g. here, here, here from Hassabis’ mouth, and more obliquely here in Musk’s lawsuit against Altman) is that Hassabis was warning Musk about existential risk from unaligned AGI, not threatening him with his own personally aligned AGI. Still, Thiel’s interpretation is interestingly resonant with Elon Musk’s creation of OpenAI being motivated by fear of Hassabis becoming an AGI dictator (a fear his co-founders apparently shared). It is certainly an interesting hypothesis that Thiel and Musk spent a decade engineering both the AGI race and global democratic backsliding, all motivated by a single possible one-sentence slight from Hassabis in 2012.