Evolutionary psychology professor, author of ‘The Mating Mind’, ‘Spent’, ‘Mate’, & ‘Virtue Signaling’. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.
Geoffrey Miller
I think there’s a huge difference in potential reach between a major TV series and a LessWrong post.
According to this summary from the Financial Times, as of March 27, ‘3 Body Problem’ had received about 82 million view-hours, equivalent to about 10 million people worldwide watching the whole 8-part series. It was a top 10 Netflix series in over 90 countries.
Whereas a good LessWrong post might get 100 likes.
We should be more scope-sensitive about public impact!
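To make the scope-sensitivity point concrete, here is the back-of-envelope arithmetic behind the ~10 million figure above (a minimal sketch; the ~8-hour total runtime is an assumption, since Netflix reports view-hours rather than viewers):

```python
# Back-of-envelope check of the Netflix reach figure cited above.
# Assumption: the 8-episode series totals roughly 8 hours of runtime.
view_hours = 82_000_000       # reported view-hours as of March 27
series_runtime_hours = 8      # assumed total runtime

equivalent_full_viewers = view_hours / series_runtime_hours
print(f"~{equivalent_full_viewers:,.0f} equivalent complete viewings")
# ~10,250,000, versus roughly 100 likes on a well-received LessWrong post
```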
PS: Fun fact: after my coauthor Peter Todd (Indiana U.) and I read the ‘3 Body Problem’ novel in 2015, we were invited to a conference on ‘Active Messaging to Extraterrestrial Intelligence’ (‘active METI’) at the Arecibo radio telescope in Puerto Rico. Inspired by Liu Cixin’s book, we gave a talk about the extreme risks of active METI, which we then wrote up as this journal paper, published in 2017:
PDF here
Journal link here
Title: The Evolutionary Psychology of Extraterrestrial Intelligence: Are There Universal Adaptations in Search, Aversion, and Signaling?
Abstract
To understand the possible forms of extraterrestrial intelligence (ETI), we need not only astrobiology theories about how life evolves given habitable planets, but also evolutionary psychology theories about how intelligence emerges given life. Wherever intelligent organisms evolve, they are likely to face similar behavioral challenges in their physical and social worlds. The cognitive mechanisms that arise to meet these challenges may then be copied, repurposed, and shaped by further evolutionary selection to deal with more abstract, higher-level cognitive tasks such as conceptual reasoning, symbolic communication, and technological innovation, while retaining traces of the earlier adaptations for solving physical and social problems. These traces of evolutionary pathways may be leveraged to gain insight into the likely cognitive processes of ETIs. We demonstrate such analysis in the domain of search strategies and show its application in the domains of emotional aversions and social/sexual signaling. Knowing the likely evolutionary pathways to intelligence will help us to better search for and process any alien signals from the search for ETIs (SETI) and to assess the likely benefits, costs, and risks of humans actively messaging ETIs (METI).
‘3 Body Problem’ is a new 8-episode Netflix TV series that’s extremely popular, highly rated (7.8/10 on IMDB), and based on the bestselling 2008 science fiction book by Chinese author Liu Cixin.
It raises a lot of EA themes, e.g. extinction risk (for both humans & the San-Ti aliens), longtermism (planning 400 years ahead against alien invasion), utilitarianism (e.g. sacrificing a few innocents to save many), cross-species empathy (e.g. between humans & aliens), global governance to coordinate against threats (e.g. Thomas Wade, the UN, the Wallfacers), etc.
Curious what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students?
Geoffrey Miller’s Quick takes
Well, Leif Wenar seems to have written a hatchet job that’s deliberately misleading about EA values, priorities, and culture.
The usual anti-EA ideologues are celebrating about Wired magazine taking such a negative view of EA.
For example, the leader of the ‘effective accelerationist’ movement, ‘Beff Jezos’ (aka Guillaume Verdon), wrote this post on X, linking to the Wenar piece and saying simply ‘It’s over. We won’. Which is presumably a reference to EA people working on AI safety being a bunch of Luddite ‘decels’ who want to stop the glorious progress towards ASI replacing all of humanity, and to this Wenar piece permanently discrediting all attempts to slow AI or advocate for AI safety.
So, apart from nitpicking everything that Wenar gets wrong, we should pay attention to the broader cultural context, in which he’s seen as a pro-AI e/acc hero for dissing all attempts at promoting AI safety and responsible longtermism.
David—this is a helpful and reasonable comment.
I suspect that many EAs tactically and temporarily suppressed their use of EA language after the FTX debacle, when they knew that EA had suffered a (hopefully transient) setback.
This may actually be quite analogous to the cyclical patterns of outreach and enthusiasm that we see in crypto investing itself. The post-FTX 2022-2023 bear market in crypto was reflected in a lot of ‘crypto influencers’ simply not talking much about crypto for a year or two, when investor sentiment was very low. Then price action picked up from late 2023 onward, optimism returned, the Bitcoin ETFs were approved by the SEC, and people started talking about crypto again. So it has gone with every four-year cycle in crypto.
The thing to note here is that in the dark depths of the ‘crypto winter’ (esp. early 2023), it seemed like confidence and optimism might never return. (Which is, of course, why token prices were so low). But, things did improve, as the short-term sting of the FTX scandal faded.
So, hopefully, it may go with EA itself, as we emerge from this low point in our collective sentiment.
Nicholas—thanks for posting this helpful summary of these empirical studies.
I do find it somewhat sad and alarming that so many EAs seem to be delaying or avoiding having kids, out of fear that this will ‘impair productivity’.
Productivity-maxxing can be a false god—and this is something that’s hard to understand until one becomes a parent.
Just as money sent to charities can vary 100x in terms of actual effectiveness, ‘productivity’ can vary hugely in terms of actual impact in the world.
Lots of academic parents I know (including me) realized, after having kids, that they had been spending huge amounts of time doing stuff that seemed ‘productive’ or ‘fun’ at the time, but that wasn’t actually aligned with their genuine long-term goals and values. Some of this time was spent on self-indulgent status-seeking, credentialism, careerism, workaholism, networking, etc. Some of it was spent on habit-forming but unfulfilling forms of leisure (TV, video games, light reading). Much of it was mating effort to find and retain sexual partners. And some of it was spent feeling depressed, anxious, etc., wondering about the meaning of life—concerns that tend to evaporate when you start spending more time enjoying the company of your kids, and the ‘meaning of life’ becomes bittersweetly apparent.
Jason—fair point.
Except that all psychological traits are heritable, so offspring of smart, conscientious, virtuous EAs are likely to be somewhat smarter, more conscientious, and more virtuous than average offspring.
I think it’s important for EA to avoid partisan political fights like this—they’re not neglected cause areas, and they’re often not tractable.
It’s easy for the Left to portray the ‘far right’ as a ‘threat to democracy’, in the form of ‘fascist authoritarians’.
It’s also easy for the Right to portray the ‘far left’ as a ‘threat to democracy’ in the form of ‘socialist authoritarians’.
The issue of immigration (e.g. as raised by the AfD) is especially tricky and controversial, in terms of whether increased immigration into Western democracies by people with anti-democratic values (e.g. fundamentalist religious values) would be a good or a bad thing.
So many political groups are already fighting over these issues. It would dilute EA’s focus, and undermine our non-partisan credibility, to get involved in these things.
Kyle—I just completed the survey yesterday. I did find it very long and grueling. I worry that you might get lower-quality data in the last half of the survey, due to participant fatigue and frustration.
My suggestion—speaking as a psych professor who’s run many surveys over the last three decades—is to develop a shorter survey (no more than 25 minutes) that focuses on your key empirical questions, and try to get a good large sample for that.
I just reposted your X/Twitter recruitment message, FWIW:
https://twitter.com/law_fiore/status/1706806416931987758
Good luck! I might suggest doing a shorter follow-up survey in due course; 90 minutes is a big time commitment for a $15 payment!
Johanna - thanks very much for sharing this fascinating, important, and useful research! Hope lots of EAs pay attention to it.
[Question] Suggested readings & videos for a new college course on ‘Psychology and AI’?
Hayven—there’s a huge, huge middle ground between reckless e/acc ASI accelerationism on the one hand, and stagnation on the other hand.
I can imagine a moratorium on further AGI research that still allows awesome progress on all kinds of wonderful technologies such as longevity, (local) space colonization, geoengineering, etc—none of which require AGI.
Isaac—good, persuasive post.
I agree that p(doom) is rhetorically ineffective—to normal people, it just looks weird, off-putting, pretentious, and depressing. Most folks out there have never taken a probability and statistics course, and don’t know what p(X) means in general, much less p(doom).
I also agree that p(doom) is way too ambiguous, in all the ways you mentioned, plus another crucial way: it isn’t conditioned on anything we actually do about AI risk. Our p(doom) given an effective global AI regulation regime might be a lot lower than p(doom) if we do nothing. And the fact that p(doom) isn’t conditioned on our response to p(doom) creates a sense of fatalistic futility, as if p(doom) is a quantitative fact of nature, like the Planck constant or the Coulomb constant, rather than a variable that reflects our collective response to AI risks, and that could go up or down quite dramatically given human behavior.
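As a rough illustration of that last point, here is a minimal sketch (every number is hypothetical, chosen only to show the structure): via the law of total probability, an unconditional p(doom) is just a mixture over the responses we might make, so the headline figure quietly depends on how likely we are to act.

```python
# Illustrative only: every number below is hypothetical, chosen to show
# why an unconditional p(doom) hides the variable we can actually influence.
p_doom_given_regulation = 0.05   # hypothetical: effective global AI regulation
p_doom_given_no_action = 0.40    # hypothetical: business as usual
p_regulation = 0.50              # hypothetical: chance we achieve that regulation

# Law of total probability: the headline p(doom) mixes over our possible responses.
p_doom = (p_doom_given_regulation * p_regulation
          + p_doom_given_no_action * (1 - p_regulation))
print(p_doom)  # 0.225, a number that looks like a constant but moves with p_regulation
```

Shift the hypothetical p_regulation up or down and the headline number moves substantially, which is exactly the policy-relevant information that a bare p(doom) throws away.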
Caleb—thanks for this helpful introduction to Zach’s talents, qualifications, and background—very useful for those of us who don’t know him!
I agree that EA organizations should try very hard to avoid entanglements with AI companies such as Anthropic—however well-intentioned they seem. We need to be able to raise genuine concerns about AI risks without feeling beholden to AI corporate interests.
Malo—bravo on this pivot in MIRI’s strategy and priorities. Honestly it’s what I’ve hoped MIRI would do for a while. It seems rational, timely, humble, and very useful! I’m excited about this.
I agree that we’re very unlikely to solve ‘technical alignment’ challenges fast enough to keep AI safe, given the breakneck rate of progress in AI capabilities. If we can’t speed up alignment work, we have to slow down capabilities work.
I guess the big organizational challenge for MIRI will be whether its current staff, who may have been recruited largely for their technical AI knowledge, general rationality, and optimism about solving alignment, can pivot towards this more policy-focused and outreach-focused agenda—which may require quite different skill sets.
Let me know if there’s anything I can do to help, and best of luck with this new strategy!
Will—we seem to be many decades away from being able to do ‘mind uploading’ or serious levels of cognitive enhancement, but we’re probably only a few years away from extremely dangerous AI.
I don’t think that betting on mind uploading or cognitive enhancement is a winning strategy, compared to pausing, heavily regulating, and morally stigmatizing AI development.
(Yes, given a few generations of iterated embryo selection for cognitive ability, we could probably breed much smarter people within a century or two. But they’d still run a million times slower than machine intelligences. As for mind uploading, we have nowhere near the brain-imaging abilities required to do whole-brain emulations of the sort envisioned by Robin Hanson.)
Remmelt—I agree. I think EA funders have been way too naive in thinking that, if they just support the right sort of AI development, with due concern for ‘alignment’ issues, they could steer the AI industry away from catastrophe.
In hindsight, this seems to have been a huge strategic blunder—and the big mistake was under-estimating the corporate incentives and individual hubris that drives unsafe AI development despite any good intentions of funders and founders.
A brief meta-comment on critics of EAs, and how to react to them:
We’re so used to interacting with each other in good faith, rationally and empirically, constructively and sympathetically, according to high ethical and epistemic standards, that we EAs have real trouble remembering some crucial facts of life:
Some people, including many prominent academics, are bad actors, vicious ideologues, and/or Machiavellian activists who do not share our world-view, and never will.
Many people engaged in the public sphere are playing games of persuasion, influence, and manipulation, rather than trying to understand or improve the world.
EA is emotionally and ideologically threatening to many people and institutions, because insofar as they understand our logic of focusing on tractable, neglected, big-scope problems, they realize that they’ve wasted large chunks of their lives on intractable, overly popular, smaller-scope problems; and this makes them sad and embarrassed, which they resent.
Most critics of EA will never be persuaded that EA is good and righteous. When we argue with such critics, we must remember that we are trying to attract and influence onlookers, not trying to change the critics’ minds (which are typically unchangeable).