Metaculus (being significantly more bullish than actual AI/ML experts and populated with rationalists/EAs) puts a <25% chance on transformative AI happening by the end of the decade and a <8% chance of this leading to the traditional AI-go-foom scenario, so a <2% p(doom) by the end of the decade. I can’t find a Metaculus poll on this, but I would halve that to <1% to account for whether such transformative AI would be reached by simply scaling LLMs.
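To spell out the arithmetic behind these figures (a rough sketch; treating the two Metaculus forecasts as roughly independent is my own simplification):

$$P(\text{doom by 2030}) \approx P(\text{TAI by 2030}) \times P(\text{foom} \mid \text{TAI}) < 0.25 \times 0.08 = 0.02$$

Halving again for the further condition that such TAI is reached by simply scaling LLMs gives the <1% figure.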
To be clear, my point is that 1/ even inside the environmental movement, calling for an immediate pause on all industry on the basis of the argument you’re using is extremely fringe, and 2/ the reputation costs in 99% of worlds will themselves increase existential risk in the (far more likely) case that AGI happens when (or after) most experts think it will happen.
Industry regulations tend to be based on statistical averages (i.e., from a global perspective, on certainties), not on multiplications of subjective-Bayesian guesses. I don’t think the general public accepting any industry regulations commits them to Pascal-mugging-adjacent views. After all, 1% existential risk (or at least global catastrophic risk) due to climate change, biodiversity collapse, or zoonotic pandemics seems plausible too. If you have any realistic amount of risk aversion, whether the remaining 99% of futures (even from a strictly strong-longtermist perspective) are improved upon by pausing (worse, by flippant militant advocacy for pausing on alarmist slogans that will carry extreme reputation costs in the 99% of worlds where no x-risk from LLMs happens) is important!
Crucially, p(doom)=1% isn’t the claim PauseAI protesters are making. Discussed outcomes should be fairly distributed over probable futures, if only to make sure your preferred policy is an improvement on most or all of those (this is where I would weakly agree with @Matthew_Barnett’s comment).
Most surveys of AI/ML researchers (with significant selection effects and very high variance) indicate p(doom)s of ~10% (spread among a variety of different kinds of global risks beyond the traditional AI-go-foom), and (like Ajeya Cotra’s report on AI timelines) a predicted AGI date around mid-century by one definition and in the next century by another.
Pausing scaling LLMs above a given magnitude will do ~nothing for non-x-risk AI worries. Pausing any subcategory below that (e.g. AI art generators, open-source AI) will do ~nothing (and indeed probably be a net negative) for x-risk AI worries.
Those are meta-level epistemological/methodological critiques for the most part, but meta-level epistemological/methodological critiques can still be substantive critiques and not reducible to mere psychologization of adversaries.
Here are Thorstad’s earlier posts on the institutional critique:
https://reflectivealtruism.com/2023/07/15/the-good-it-promises-part-5-de-coriolis-et-al/
https://reflectivealtruism.com/2023/09/08/the-good-it-promises-part-7-crary-continued/
In addition to what @gw said on the public being in favor of slowing down AI, I’m mostly basing this on reactions to news about PauseAI protests on generic social media websites. The idea that scaling LLMs without further technological breakthroughs will for sure lead to superintelligence in the coming decade is controversial by EA standards, fringe by general AI community standards, and roundly mocked by the general public.
If other stakeholders agree with the existential risk perspective, then that is of course great and should be encouraged. To develop further on what I meant (though see also the linked post), I am extremely skeptical that allying with copyright lobbyists is good by any EA/longtermist metric, when ~nobody thinks art generators pose any existential risk and big AI companies are already negotiating deals with copyright giants (or the latter are even creating their own AI divisions, as with Adobe Firefly or Disney’s new AI division), while independent EA-aligned research groups like EleutherAI are heavily dependent on the existence of open-source datasets.
https://nickbostrom.com/papers/astronomical-waste/
In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development.
However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
Therefore, if our actions have even the slightest effect on the probability of eventual colonization, this will outweigh their effect on when colonization takes place. For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.
This argument is highly dependent on your population ethics. From a longtermist, total positive utilitarian perspective, existential risk is many, many orders of magnitude worse than delaying progress, as it affects many, many orders of magnitude more (potential) people.
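To make the quoted comparison concrete (a sketch using only the figures in the passage above): if the accessible resources last on the order of $L \sim 10^9$ years or more and value accrues roughly uniformly over that span, then a delay of $T$ years costs about $T/L$ of the total value, while extinction forfeits all of it. A reduction $\Delta p$ in existential risk is therefore worth a delay of up to $\Delta p \cdot L \gtrsim 0.01 \times 10^9 = 10^7$ years, which is where Bostrom’s “over 10 million years” for a single percentage point comes from.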
@EliezerYudkowsky famously called this position requiredism. A common retort is that most self-identified compatibilist philosophers are in fact requiredists, making the word “compatibilism” indeed a bit of a misnomer.
PauseAI largely seeks to emulate existing social movements (like the climate justice movement) but essentially has a cargo-cult approach to how social movements work. For a start, there is currently no scientific consensus around AI safety the way there is around climate change, so all actions trying to imitate the climate justice movement are extremely premature. Blockading an AI company’s office while talking about existential risk from artificial general intelligence won’t convince any bystander; it will just make you look like a doomsayer caricature. It would be comparable to staging an Extinction Rebellion protest in the mid-19th century.
Due to this, many in PauseAI are trying to do coalition politics, bringing together all opponents of work on AI (neo-Luddites, SJ-oriented AI ethicists, environmentalists, intellectual property lobbyists). But the space of possible AI policies is highly multidimensional, so any such coalition, formed with little understanding of political strategy, will risk focusing on policies and AI systems that have little to do with existential risk (such as image generators), or that might even prove entirely counter-productive (by entrenching further centralization in the hands of the Big Four¹ and discouraging independent research by EA-aligned groups like EleutherAI).
¹: Microsoft/OpenAI, Amazon/Anthropic, Google/DeepMind, Facebook/Meta
It seems plausible that, much like Environmental Political Orthodoxy (reverence for simple rural living as expressed through localism, anti-nuclear sentiment, etc.) ultimately led the environmental movement to be harmful to its own professed goals, EA Political Orthodoxy (technocratic liberalism, “mistake theory”, general disdain for social science) could (and maybe already has, with the creation of OpenAI) ultimately lead EA efforts on AI to be a net negative by its own standards.
I identify with your asterisk quite a bit. I used to be much more strongly involved in rationalist circles in 2018-2020, including the infamous Culture War Thread. I distanced myself from it around ~2020, at the time of the NYT controversy, mostly just remaining on Rationalist Tumblr. (I kinda got out at the right time because after I left everyone moved to Substack, which positioned itself against the NYT by personally inviting Scott, and was seemingly designed to encourage every reactionary tendency of the community.)
To me, one of the most salient memories of the alt-right infestation in the SSC fandom was this comment by a regular SSC commenter with an overtly antisemitic username, bluntly stating the alt-right strategy for recruiting ~rationalists:
[IQ arguments] are entry points to non-universalist thought.
Intelligence and violence are important, but not foundational; Few people disown their kin because they’re not smart. The purpose of white advocacy is not mere IQ-maximization to make the world safe for liberal-egalitarianism; Ultimately, we value white identity in large part because of the specific, subjective, unquantifiable comfort and purpose provided by unique white aesthetics and personalities as distinct from non-whites and fully realized in a white supermajority civilization.
However, one cannot launch into such advocacy straight away, because it is not compatible with the language of universalism that defines contemporary politics among white elites. That shared language, on both left and right, is one of humanist utilitarianism, and fulfillment of universalist morals with no particular tribal affinity. Telling the uninitiated Redditor that he would experience greater spiritual fulfillment in a white country is a non-starter, not on the facts, but because this statement is orthogonal to his modes of thinking.
Most people come into the alt-right from a previous, universalist political ideology, such as libertarianism. At some point, either because they were redpilled externally or they had to learn redpill arguments to defend their ideology from charges of racism/sexism/etc, they come to accept the reality of group differences. Traits like IQ and criminality are the typical entry point here because they are A) among the most obvious and easily learned differences, and B) are still applicable to universalist thinking; that is, one can become a base-model hereditarian who believes in race differences on intelligence without having to forfeit the mental comfort of viewing humans as morally fungible units governed by the same rules.
This minimal hereditarianism represents an ideological Lagrange point between liberal-egalitarian and inegalitarian-reactionary thought; The redpilled libertarian or liberal still imagines themselves as supporting a universal moral system, just one with racial disparate impacts. Some stay there and never leave. Others, having been unmoored from descriptive human equality, cannot help but fall into the gravity well of particularism and “innate politics” of the tribe and race. This progression is made all but inevitable once one accepts the possibility of group differences in the mind, not just on mere gross dimensions of goodness like intelligence, but differences-by-default for every facet of human cognition.
The scope of human inequality being fully internalized, the constructed ideology of a shared human future cedes to the reality of competing evolutionary strategies and shared identities within them, fighting to secure their existence in the world.
There isn’t really much more to say; he essentially spilled the beans – but in front of an audience that prides itself so much on “high-decoupling” that they can’t wrap their minds around the idea that overt neo-Nazis might in fact be bad people who abuse social norms of discussion to their advantage – even when said neo-Nazis are openly bragging about it to their face.
If one is a rationalist who seeks to raise the sanity waterline and widely spread the tools of sound epistemology, and even more so if one is an effective altruist who seeks to expand the moral circle of humanity, then there is zero benefit to encouraging discussion of the currently unknowable etiology of a correlation between two scientifically dubious categories, when the overwhelming majority of people writing about it don’t actually care about it and only seek to use it as a gateway to rehabilitating a pseudoscientific concept universally rejected by biologists and geneticists, on explicitly epistemologically subjectivist and irrationalist grounds, to advance a discriminatory-to-genocidal political project.
I don’t think that’s true at all. The effective accelerationists and the (to coin a term) AI hawks are major factions in the conflict over AI. I think you could argue they aren’t bullish enough about the full extent of the capabilities of AGI (and, except for the minority of extinctionist Landians, this is partly true) – in which case the Trumps aren’t bullish enough either. As @Garrison noted here, prominent Republicans like Ted Cruz and JD Vance himself are already explicitly hostile to AI safety.
I think it, like much of Scott’s work, is written with a “micro-humorous” tone but reflects his genuine views to a significant extent – in the case you quoted, I see no reason to think it’s not his genuine view that building Trump’s wall would be a meaningless symbol that would change nothing, with all that implies of scorn toward both #BuildTheWall Republicans and #Resistance Democrats.
As another example, consider these policy proposals:
- Tell Russia that if they can defeat ISIS, they can have as much of Syria as they want, and if they can do it while getting rid of Assad we’ll let them have Alaska back too.
- Agree with Russia and Ukraine to partition Ukraine into Pro-Russia Ukraine and Pro-West Ukraine. This would also work with Moldova.
[...]
- Tell Saudi Arabia that we’re sorry for sending mixed messages by allying with them, and actually they are total scum and we hate their guts. Ally with Iran, who are actually really great aside from the whole Islamic theocracy thing. Get Iran to grudgingly tolerate Israel the same way we got Egypt, Saudi Arabia, Jordan, etc to grudgingly tolerate Israel, which I assume involves massive amounts of bribery. Form coalition for progress and moderation vs. extremist Sunni Islam throughout Middle East. Nothing can possibly go wrong.
Months later he replied this to an anonymous ask on the subject:
So that was *kind of* joking, and I don’t know anything about foreign policy, and this is probably the worst idea ever, but here goes:
Iran is a (partial) democracy with much more liberal values than Saudi Arabia, which is a horrifying authoritarian hellhole. Iran has some level of women’s rights, some level of free speech, and a real free-ish economy that produces things other than oil. If they weren’t a theocracy, it would be hard to tell them apart from an average European state.
In the whole religious war thing, the Iranians are allied with the Shia and the Saudis with the Sunni. Most of our enemies in the Middle East are Sunni. Saddam was Sunni. Al Qaeda is Sunni. ISIS is Sunni. Our Iraqi puppet government is Shia, which is awkward because even though they’re supposed to be our puppet government they like Iran more than us. Bashar al-Assad is Shia, which is awkward because as horrible as he is he kept the country at peace, plus whenever we give people weapons to overthrow him they turn out to have been Al Qaeda in disguise.
Telling the Saudis to fuck off and allying with Iran would end this awkward problem where our friends are allies with our enemies but hate our other friends. I think it would go something like this:
- We, Russia, and Iran all cooperate to end the Syrian civil war quickly in favor of Assad, then tell Assad to be less of a jerk (which he’ll listen to, since being a jerk got him into this mess)
- Iraq’s puppet government doesn’t have to keep vacillating between being a puppet of us and being a puppet of Iran. They can just be a full-time puppet of the US-Iranian alliance. Us, Iran, Iraq, and Syria all ally to take out ISIS.
- We give Iran something they want (like maybe not propping up Saudi Arabia) in exchange for them promising to harass Israel through legal means rather than violence. Iran either feels less need to develop nuclear weapons, or else maybe they have nuclear weapons but they’re on our side now so it’s okay.
- The Saudi king was visibly shaken and dropped his copy of Kitab al-Tawhid. The Arabs applauded and accepted Zoroaster as their lord and savior. A simurgh named “Neo-Achaemenid Empire” flew into the room and perched atop the Iranian flag. The Behistun Inscription was read several times, and Saoshyant himself showed up and enacted the Charter of Cyrus across the region. The al-Saud family lost their crown and were exiled the next day. They were taken out by Mossad and tossed into the pit of Angra Mainyu for all eternity.
PS: Marg bar shaytân-e bozorg [“Death to the Great Satan”]
Does Scott actually believe the Achaemenid Empire should be restored with Zoroastrianism as the state religion? No, “that was *kind of* joking, and [he doesn’t] know anything about foreign policy, and this is probably the worst idea ever”. Does this still reflect a coherent set of (politically controversial) beliefs about foreign policy which he clearly actually holds (e.g. that “Bashar al-Assad [...] kept the country at peace” and Syrian oppositionists were all “Al Qaeda in disguise”), and which is also consistent with him picking Tulsi Gabbard as Secretary of State in his “absurdist humor”? Yeah, it kinda does. Same applies, I think, to the remainder of his post.
Fortunately, the existential risks posed by AI are recognized by many close to President-elect Donald Trump. His daughter Ivanka seems to see the urgency of the problem. Elon Musk, a critical Trump backer, has been outspoken about the civilizational risks for many years, and recently supported California’s legislative push to safety-test AI. Even the right-wing Tucker Carlson provided common-sense commentary when he said: “So I don’t know why we’re sitting back and allowing this to happen, if we really believe it will extinguish the human race or enslave the human race. Like, how can that be good?” For his part, Trump has expressed concern about the risks posed by AI, too.
This is a strange contrast with the rest of the article, considering that both Donald and Ivanka Trump’s positions are largely informed by the “situational awareness” view that the US should develop AGI before China to ensure US victory over China – which is explicitly the position Tegmark and Leahy argue against (and consider existentially harmful) when they call to stop work on AGI and instead work on international co-operation to restrict it and develop tool AI.
I still see this kind of confusion between the two positions a fair bit, and it is extremely strange. It’s as if, back in the original Cold War, people couldn’t tell the difference between anti-communist hawks and the Bulletin of the Atomic Scientists (let alone anti-war hippies) because technically both considered the nuclear arms race to be very important for the future of humanity.
I would advise using normal capitalization for your titles. It’s not that big of a deal if you just read the article, but the table of contents on the left side of the site makes it look like you’re SCREAMING.
Note: Diana Fleischman is the wife of Geoffrey Miller, another far-right effective altruist with a (much more prolific) account on this forum.
If there are no humans left after AGI, then that’s also true for “weak general AI”. Transformative AI is also a far better target for what we’re talking about than “weak general AI”.
The “AI Dystopia” scenario is significantly different from what PauseAI rhetoric is centered on.
The PauseAI rhetoric is also very much centered on just scaling LLMs, not acknowledging other ingredients of AGI.