Thanks! Do you know if there is anywhere he has engaged more seriously with the possibility that AI could actually be transformative? His "maybe heterodox thinking matters" statement I quoted above feels like relatively superficial engagement with the topic.
He certainly seems very familiar with the arguments involved, the idea of superintelligence, etc, even if he disagrees in some ways (hard to tell exactly which ways). He seems really averse to talking about AI in the familiar rationalist style (scaling laws, AI timelines, p-dooms, etc), and instead thinks about everything in his characteristic style: vague, vibes- and political-alignment-based, lots of jumping around and creative metaphors, not interested in detailed chains of technical arguments.
Here is a Wired article tracing Peter Thiel's early funding of the Singularity Institute, way back in 2005. And here's a talk from two years ago where he is talking about his early involvement with the Singularity Institute, then mocking the bay-area rationalist community for devolving from a proper transhumanist movement into a "burning man, hippie luddite" movement (not accurate IMO!), culminating in the hyper-pessimism of Yudkowsky's "Death with Dignity" essay.
When he is bashing EA's focus on existential risk (like in that "anti-anti-anti-anti classical liberalism" presentation), he doesn't do what most normal people do and say that existential risk is a big fat nothingburger. Instead, he acknowledges that existential risk is at least somewhat real (even if people have exaggerated fears about it -- eg, he relates somewhere that people should have been "afraid of the blast" from nuclear weapons, but instead became "afraid of the radiation", which leads them to ban nuclear power), but that the real existential risk is counterbalanced by the urgent need to avoid stagnation and one-world-government (and presumably, albeit usually unstated, the need to race ahead to achieve transhumanist benefits like immortality).
His whole recent schtick about "Why can we talk about the existential-risk / AI apocalypse, but not the stable-totalitarian / stagnation Antichrist?", which of course places him squarely in the "techno-optimist" / accelerationist part of the tech right, is actually quite the pivot from a few years ago, when one of his most common catchphrases went along the lines of "If technologies can have political alignments, since everyone admits that cryptocurrency is libertarian, then why isn't it okay to say that AI is communist?" (Here is one example.) Back then he seemed mainly focused on an (understandable) worry about the potential for AI to be a hugely power-centralizing technology, performing censorship and tracking individuals' behavior and so forth (for example, how China uses facial and gait recognition against Hong Kong protestors, Xinjiang residents, etc).
(Thiel's positions on AI, on government spying, on libertarianism, etc, coexist in a complex and uneasy way with the fact that of course he is a co-founder of Palantir, the premier AI-enabled-government-spying corporation, which he claims to have founded in order to "reduce terrorism while preserving civil liberties".)
Thiel describing a 2024 conversation with Elon Musk and Demis Hassabis, where Elon is saying "I'm working on going to mars, it's the most important project in the world" and Demis argues "actually my project is the most important in the world; my superintelligence will change everything, and it will follow you to mars". (This is in the context of Thiel's long pivot from libertarianism to a darker strain of conservatism / neoreaction, having realized that "there's nowhere else to go" to escape mainstream culture/civilization, that you can't escape to outer space, cyberspace, or the oceans as he once hoped, but can only stay and fight to seize control of the one future. Hence all these musings about Carl Schmitt etc that make me feel wary he is going to be egging on J.D. Vance to try and auto-coup the government.)
Followed by (correctly IMO) mocking Elon for being worried about the budget deficit, which doesn't make any sense if you really are fully confident that superintelligent AI is right around the corner as Elon claims.
A couple more quotes on the subject of superintelligence from the recent Ross Douthat conversation (transcript, video):
Thiel claims to be one of those people who (very wrongly IMO) thinks that AI might indeed achieve 3000 IQ, but that it'll turn out being 3000 IQ doesn't actually help you do amazing things like design nanotech or take over the world:
PETER THIEL: It's probably a Silicon Valley ideology and maybe, maybe in a weird way it's more liberal than a conservative thing, but people are really fixated on IQ in Silicon Valley and that it's all about smart people. And if you have more smart people, they'll do great things. And then the economics anti-IQ argument is that people actually do worse. The smarter they are, the worse they do. And they, you know, it's just, they don't know how to apply it, or our society doesn't know what to do with them and they don't fit in. And so that suggests that the gating factor isn't IQ, but something, you know, that's deeply wrong with our society.
ROSS DOUTHAT: So is that a limit on intelligence or a problem of the sort of personality types human superintelligence creates? I mean, I'm very sympathetic to the idea and I made this case when I did an episode of this, of this podcast with a sort of AI accelerationist that just throwing, that certain problems can just be solved if you ramp up intelligence. It's like, we ramp up intelligence and boom, Alzheimer's is solved. We ramp up intelligence and the AI can, you know, figure out the automation process that builds you a billion robots overnight. I, I'm an intelligence skeptic in the sense I don't think, yeah, I think you probably have limits.
PETER THIEL: It's, it's, it's hard to prove one way or it's always hard to prove these things.
Thiel talks about transhumanism for a bit (albeit devolving into making fun of transgender people for being insufficiently ambitious) -- see here for the Dank EA Meme version of this exchange:
ROSS DOUTHAT: But the world of AI is clearly filled with people who at the very least seem to have a more utopian, transformative, whatever word you want to call it, view of the technology than you're expressing here, and you mentioned earlier the idea that the modern world used to promise radical life extension and doesn't anymore. It seems very clear to me that a number of people deeply involved in artificial intelligence see it as a kind of mechanism for transhumanism, for transcendence of our mortal flesh and either some kind of creation of a successor species, or some kind of merger of mind and machine. Do you think that's just all kind of irrelevant fantasy? Or do you think it's just hype? Do you think people are trying to raise money by pretending that we're going to build a machine god? Is it delusion? Is it something you worry about? I think you, you would prefer the human race to endure, right? You're hesitating.
PETER THIEL: I don't know. I, I would... I would...
ROSS DOUTHAT: This is a long hesitation.
PETER THIEL: There's so many questions and pushes.
ROSS DOUTHAT: Should the human race survive?
PETER THIEL: Yes.
ROSS DOUTHAT: Okay.
PETER THIEL: But, but I, I also would. I, I also would like us to, to radically solve these problems. Transhumanism is this, you know, the ideal was this radical transformation where your human natural body gets transformed into an immortal body. And there's a critique of, let's say, the trans people in a sexual context or, I don't know, transvestite is someone who changes their clothes and cross dresses, and a transsexual is someone where you change your, I don't know, penis into a vagina. And we can then debate how well those surgeries work, but we want more transformation than that. The critique is not that it's weird and unnatural. It's man, it's so pathetically little. And okay, we want more than cross dressing or changing your sex organs. We want you to be able to change your heart and change your mind and change your whole body.
Making fun of Elon for simultaneously obsessing over budget deficits while also claiming to be confident that a superintelligence-powered industrial explosion is right around the corner:
PETER THIEL: A conversation I had with Elon a few weeks ago about this was, he said, "We're going to have a billion humanoid robots in the US in 10 years." And I said, "Well, if that's true, you don't need to worry about the budget deficits because we're going to have so much growth. The growth will take care of this." And then, well, he's still worried about the budget deficits. And then this doesn't prove that he doesn't believe in the billion robots, but it suggests that maybe he hasn't thought it through or that he doesn't think it's going to be as transformative economically, or that there are big error bars around it.
FTR: while Thiel has already claimed this version before, the more common version (e.g. here, here, here from Hassabis's mouth, and more obliquely here in his lawsuit against Altman) is that Hassabis was warning Musk about existential risk from unaligned AGI, not threatening him with his own personally aligned AGI. However, this interpretation is interestingly resonant with Elon Musk's creation of OpenAI being motivated by fear of Hassabis becoming an AGI dictator (a fear his co-founders apparently shared). It is certainly an interesting hypothesis that Thiel and Musk spent a decade together engineering both the AGI race and global democratic backsliding, wholly motivated by the same single one-sentence possible slight from Hassabis in 2012.