Evolutionary psychology professor, author of ‘The Mating Mind’, ‘Spent’, ‘Mate’, & ‘Virtue Signaling’. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X-risk, longevity, pronatalism, population ethics, AGI, China, crypto.
Geoffrey Miller
Matt—thanks for an insightful post. Mostly agree.
However, on your point 2 about ‘technological determinism’: I worry that way too many EAs have adopted this view that building ASI is ‘inevitable’, and that the only leverage we have over the future of AI X-risk is to join AI companies explicitly trying to build ASI, and try to steer them in benign directions that increase control and alignment.
That seems to be the strategy that 80k Hours has actively pushed for years. It certainly helps EAs find lucrative, high-prestige jobs in the Bay Area, and gives them the illusion that they’re doing good. But to outsiders, it looks like little more than a self-serving jobs program for utilitarians who want a slice of that sweet, sweet equity in AI companies—without any of the guilt of actually working on ASI capabilities development.
And the weird thing is, this strategy only makes sense if we believe two key things: (1) ASI development is ‘inevitable’—even if virtually all of humanity agrees that it would be suicidal, and (2) ASI alignment is solvable—such that we can keep control of ASIs, and force them to work for humans, generation after generation, forever.
Both of these seem equally and wildly implausible. And the sooner we recognize their implausibility, the faster we can move beyond this rather cynical/self-serving 80k Hours strategy of encouraging young EAs to join the safety-washing, PR-window-dressing ‘technical AI alignment’ groups at frontier AI companies like OpenAI, Anthropic, DeepMind, etc.
Thanks for this analysis. I think your post deserves more attention, so I upvoted it.
We need more game-theory analyses like this, of geopolitical arms race scenarios.
Way too often, people just assume that the US-China rivalry can be modelled simply as a one-shot Prisoner’s Dilemma, in which the only equilibrium is mutual defection (from humanity’s general interests) through both sides trying to build ASI as soon as possible.
As your post indicates, the relevant game theory must include incomplete and asymmetric information, possible mixed-strategy equilibria, iterated play that depends strongly on what the other player has been doing, etc.
I would also encourage more development of game theory scenarios that explicitly model the creation of ASI as the introduction of a new player with its own rules, strategies, and payoffs.
Building an ASI isn’t just giving existing players a new tool for ‘winning the game’. It’s introducing a new player with its own interests (unless the ASI is 100% reliably controlled by, and aligned with, one existing player—which is probably impossible).
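Here’s a minimal sketch of what I mean (in Python, with strategy labels and payoff numbers that are entirely invented, purely to illustrate the structure): treat the ASI as a third player with its own payoff column, and check by brute force which strategy profiles are stable.

```python
from itertools import product

# Toy three-player game: the US, China, and the ASI itself as a player.
# All strategy labels and payoff numbers are invented, purely to illustrate
# the structure; nothing here is calibrated to any real estimate.
US_STRATS = ["race", "restrain"]
CHINA_STRATS = ["race", "restrain"]
ASI_STRATS = ["cooperate", "pursue_own_goals"]  # only matters once an ASI exists

def payoffs(us, china, asi):
    """Return (us, china, asi) payoffs for one strategy profile."""
    asi_built = (us == "race" or china == "race")
    if not asi_built:
        return (2, 2, 0)  # status quo; the ASI's strategy is off-path here
    if asi == "cooperate":
        # Whoever raced gets a modest edge; the ASI gets little autonomy.
        return (3 if us == "race" else 1, 3 if china == "race" else 1, 1)
    # The ASI pursues its own goals: both states lose badly, the ASI gains.
    return (-10, -10, 5)

def is_nash(profile):
    """True if no single player can gain by unilaterally deviating."""
    strat_sets = (US_STRATS, CHINA_STRATS, ASI_STRATS)
    base = payoffs(*profile)
    for i, strats in enumerate(strat_sets):
        for alt in strats:
            deviation = list(profile)
            deviation[i] = alt
            if payoffs(*deviation)[i] > base[i]:
                return False
    return True

for profile in product(US_STRATS, CHINA_STRATS, ASI_STRATS):
    if is_nash(profile):
        print(profile, payoffs(*profile))
```

With these toy numbers, the check finds two pure-strategy equilibria: mutual racing that ends with the ASI pursuing its own goals, and mutual restraint sustained by the off-path expectation that any ASI would do exactly that. The numbers themselves mean nothing; the point is that once the ASI has its own payoffs, the stable outcomes are no longer the ones you’d predict from a two-player Prisoner’s Dilemma.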
New movie ‘A House of Dynamite’: Required viewing about nuclear X-risk
Tobias—I take your point. Sort of.
Just as they say ‘There are no atheists in foxholes’ [when facing risk of imminent death during combat], I feel that it’s OK to pray (literally and/or figuratively) when facing AI extinction risk—even if one’s an atheist or agnostic. (I’d currently identify as an ‘agnostic’, insofar as the Simulation Hypothesis might be true).
My X handle ‘primalpoly’ is polysemic, and refers partly to polyamory, but partly to polygenic traits (which I’ve studied extensively), and partly to some of the hundreds of other words that start with ‘poly’.
I think that given most of my posts on X over the last several years, and the people who follow me, I’m credibly an insider to the conservative right.
My new interview (48 mins) on AI risks for Bannon’s War Room: https://rumble.com/v6z707g-full-battleground-91925.html
This was my attempt to try out a few new arguments, metaphors, and talking points to raise awareness about AI risks among MAGA conservatives. I’d appreciate any feedback, especially from EAs who lean to the Right politically, about which points were most or least compelling.
PS the full video of my 15-minute talk was just posted today on the NatCon YouTube channel; here’s the link
David—I considered myself an atheist for several decades (partly in alignment with my work in evolutionary psychology), and would identify now as an agnostic (insofar as the Simulation Hypothesis has some slight chance of being true, and insofar as ‘Simulation-Coders’ aren’t functionally any different from ‘Gods’, from our point of view).
And I’m not opposed to various kinds of reproductive tech, regenerative medicine research, polygenic screening, etc.
However, IMHO, too many atheists in the EA/Rationalist/AI Safety subculture have been too hostile or dismissive of religion to be effective in sharing the AI risk message with religious people (as I alluded to in this post).
And, I think way too much overlap has developed between transhumanism and the e/acc cult that dismisses AI risk entirely, and/or that embraces human extinction and replacement by machine intelligences. Insofar as ‘transhumanism’ has morphed into contempt for humanity-as-it-is, and into a yearning for hypothetical-posthumanity-as-it-could-be, I think it’s very dangerous.
Modest, gradual, genetic selection or modification of humans to make them a little healthier or smarter, generation by generation? That’s fine with me.
Radical replacement of humanity by ASIs in order to colonize the galaxy and the lightcone faster? Not fine with me.
Arepo—thanks for your comment.
To be strictly accurate, perhaps I should have said ‘the more you know about AI risks and AI safety, the higher your p(doom)’. I do think that’s an empirically defensible claim. Especially insofar as most of the billions of people who know nothing about AI risks have a p(doom) of zero.
And I might have added that thousands of AI devs employed by AI companies to build AGI/ASI have very strong incentives not to learn too much about the AI risks and AI safety issues that EAs have talked about for years, because such knowledge would cause massive cognitive dissonance, ethical self-doubt, and regret (as in the case of Geoff Hinton), and/or would handicap their careers and threaten their salaries and equity stakes.
Remmelt—thanks for posting this.
Senator Josh Hawley is a big deal, with a lot of influence. I think building alliances with people like him could help slow down reckless AGI development. He may not be as tuned into AI X-risk as your typical EA is, but he is, at least, resisting the power of the pro-AI lobbyists.
My talk on AI risks at the National Conservatism conference last week
Thanks for sharing this.
IMHO, if EAs really want effective AI regulation & treaties, and a reduction in ASI extinction risk, we need to engage more with conservatives, including those currently in power in Washington. And we need to do so using the language and values that appeal to conservatives.
Joel—have you actually read the Bruce Gilley book?
If you haven’t, maybe give it a try before dismissing it as something that’s ‘extremely useful to avoid associating ourselves with’.
To me, EA involves a moral obligation to seek the truth about contentious political topics, especially those that concern the origins and functioning of successful institutions—which is what the whole colonialism debate is centrally about. And not ignoring these topics just to stay inside the Overton window.
I think EA should be careful not to take ‘colonialism studies’ too seriously—i.e. the view that colonialism was almost entirely bad, and that decolonialism was almost entirely good—especially in sub-Saharan Africa. That’s the view that seems to be spilling over here into the assumption that ‘colonialism was bad, neocolonialism is bad; so if EA is neocolonialist, then EA is bad’.
For a counter-argument against this ‘colonialism studies’ dogma, see the recent book ‘The Case for Colonialism’ (2023) by Bruce Gilley. IMHO, he makes a pretty compelling case that, in most instances, colonialism was one of the best things that ever happened to indigenous cultures (e.g. in spreading the rule of law, developing infrastructure, improving education, promoting economic development, decreasing tribal warfare and rape, promoting women’s rights, etc), and decolonialism was one of the worst things (e.g. in backsliding into counter-productive Marxist revolutionary zeal and/or corrupt kleptocracies).
If Gilley’s general point is correct, then EAs should not feel ashamed if some of our global health and poverty-reduction projects sound a bit ‘neocolonialist’.
Jason—your reply cuts to the heart of the matter.
Is it ethical to try to do good by taking a job within an evil and reckless industry? To ‘steer it’ in a better direction? To nudge it towards minimally-bad outcomes? To soften the extinction risk?
I think not. I think the AI industry is evil and reckless, and EAs would do best to denounce it clearly, and to warn talented young people not to work inside it.
JackM—these alleged ‘tremendous’ benefits are all hypothetical and speculative.
Whereas the likely X-risks from ASI have been examined in detail by thousands of serious people, and polls show that most people, both inside and outside the AI industry, are deeply concerned about them.
This is why I think it’s deeply unethical for 80k Hours to post jobs to work on ASI within AI companies.
Conor—yes, I understand that you’re making judgment calls about what’s likely to be net harmful versus helpful.
But your judgment calls seem to assume—implicitly or explicitly—that ASI alignment and control are possible, eventually, at least in principle.
Why do you assume that it’s possible, at all, to achieve reliable long-term alignment of ASI agents? I see no serious reason to think that it is possible. And I’ve never seen a single serious thinker make a principled argument that long-term ASI alignment with human values is, in fact, possible.
And if ASI alignment isn’t possible, then all AI ‘safety research’ at AI companies aiming to build ASI is, in fact, just safety-washing. And it all increases X risk by giving a false sense of security, and encouraging capabilities development.
So, IMHO, 80k Hours should re-assess what it’s doing by posting these ads for jobs inside AI companies—which are arguably the most dangerous organizations in human history.
This is a good video; thanks for sharing.
But I have to ask: why is 80k Hours still including job listings for AGI development companies that are imposing extinction risks on humanity?
I see dozens of jobs on the 80k Hours job board for positions at OpenAI, Anthropic, xAI, etc—and not just in AI safety roles, but in capabilities development, lobbying, propaganda, etc. And even the ‘AI safety jobs’ seem to be there for safety-washing/PR purposes, with no real influence on slowing down AI capabilities development.
If 80k Hours wants to take a principled stand against reckless AGI development, then it shouldn’t advertise jobs where EAs are enticed by $300,000+ salaries to push AGI development.
Good post. Thank you.
But, I fear that you’re overlooking a couple of crucial issues:
First, ageism. Lots of young people are simply biased against older people—assuming that we’re closed-minded, incapable of learning, ornery, hard to collaborate with, etc. I’ve encountered this often in EA.
Second, political bias. In my experience, ‘signaling value-alignment’ in EA organizations and AI safety groups isn’t just a matter of showing familiarity with EA and AI concepts, people, strategies, etc. It’s also a matter of signaling left-leaning political values, atheism, globalism, etc—values which have no intrinsic or logical connection to EA or AI safety, but which are simply the water in which younger Millennials and Gen Z swim.
I trust my kids and grandkids to solve their own problems in the future.
I don’t trust our generation to make sure our kids and grandkids survive.
Avoiding extinction is the urgent priority; all else can wait. (And, life is already getting better at a rapid rate for the vast majority of the world’s people. We don’t face any urgent or likely extinction risks other than technologies of our own making.)
Matt—thanks for the quick and helpful reply.
I think the main benefit of explicitly modeling ASI as being a ‘new player’ in the geopolitical game is that it highlights precisely the idea that the ASI will NOT just automatically be a tool used by China or the US—but rather that it will have its own distinctive payoffs, interests, strategies, and agendas. That’s the key issue that many current political leaders (e.g. AI Czar David Sacks) do not seem to understand—if America builds an ASI, it won’t be ‘America’s ASI’, it will be the ASI’s ASI, so to speak.
ASI being unaligned doesn’t necessarily mean that it will kill all humans quickly—there are many, many possible outcomes other than immediate extinction that might be in the ASI’s interests.
The more seriously we model the possible divergences of ASI interests from the interests of current nation-states, the more persuasively we can make the argument that any nation building an ASI is not just flipping a coin between ‘geopolitical dominance forever’ and ‘human extinction forever’—rather, it’s introducing a whole new set of ASI interests that need to be taken into account.
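To see the ‘not just flipping a coin’ point in miniature, here’s a tiny expected-value sketch (all payoffs and probabilities invented, purely illustrative): once the unaligned branch includes outcomes other than immediate extinction, a state’s expected value of racing depends on parameters describing the ASI’s own interests, parameters that neither state gets to choose.

```python
# Toy expected-value comparison for one state deciding whether to race to ASI.
# All payoffs and probabilities are invented for illustration only.

PAYOFF_DOMINANCE = 10      # "our ASI wins the game for us"
PAYOFF_EXTINCTION = -100   # ASI-caused extinction
PAYOFF_DIVERGENT = -20     # ASI pursues its own agenda; we survive but lose control
PAYOFF_RESTRAIN = 2        # status quo if we don't build it

def ev_race_coinflip(p_aligned):
    """Naive framing: either we control it (dominance) or it kills us."""
    return p_aligned * PAYOFF_DOMINANCE + (1 - p_aligned) * PAYOFF_EXTINCTION

def ev_race_with_asi_interests(p_aligned, p_extinction_if_unaligned):
    """Richer framing: an unaligned ASI may act on its own interests
    without causing immediate extinction; still a bad outcome for the state."""
    p_unaligned = 1 - p_aligned
    return (p_aligned * PAYOFF_DOMINANCE
            + p_unaligned * p_extinction_if_unaligned * PAYOFF_EXTINCTION
            + p_unaligned * (1 - p_extinction_if_unaligned) * PAYOFF_DIVERGENT)

for p_aligned in (0.9, 0.5, 0.1):
    print(f"p(aligned)={p_aligned:.1f}  "
          f"coin-flip EV={ev_race_coinflip(p_aligned):6.1f}  "
          f"with ASI interests EV={ev_race_with_asi_interests(p_aligned, 0.3):6.1f}  "
          f"restrain EV={PAYOFF_RESTRAIN}")
```

Again, the specific numbers are meaningless; what matters is that the ‘with ASI interests’ column depends entirely on assumptions about what an unaligned ASI would actually want and do, which is precisely the part of the model that many current leaders seem to be skipping.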