Evolutionary psychology professor, author of ‘The Mating Mind’, ‘Spent’, ‘Mate’, & ‘Virtue Signaling’. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.
Geoffrey Miller
Helpful suggestions, thank you! Will check them out.
Thanks! Appreciate the suggestion.
Abby—good suggestions, thank you. I think I will assign some Robert Miles videos! And I’ll think about the human value datasets.
Ulrik—I understand your point, sort of, but feel free to reverse any of these human-human alignment examples in whatever ways seem more politically palatable.
Personally, I’m fairly worried about agentic, open-source AGIs being used by Jihadist terrorists. But very few of the e/accs and AI devs advocating open-source AGI seem worried by such things.
‘AI alignment’ isn’t about whether a narrow, reactive, non-agentic AI system (such as a current LLM) seems ‘helpful’.
It’s about whether an agentic AI that can make its own decisions and take its own autonomous actions will make decisions that are aligned with general human values and goals.
Scott—thanks for the thoughtful reply; much appreciated.
I think a key strategic difference here is that I’m willing to morally stigmatize the entire AI industry in order to reduce extinction risk, along the lines of this essay I published on EA Forum a year ago.
Moral stigmatization is a powerful but blunt instrument. It doesn’t do nuance well. It isn’t ‘epistemically responsible’ in the way that Rationalists and EAs prefer to act. It does require dividing the world into Bad Actors and Non-Bad Actors. It requires, well, stigmatization. And most people aren’t comfortable stigmatizing people who ‘seem like us’—e.g. AI devs who share with most EAs traits such as high intelligence, high openness, technophilia, liberal values, and ‘good intentions’, broadly construed.
But, I don’t see any practical way of slowing AI capabilities development without increasing the moral stigmatization of the AI industry. And Sam Altman has rendered himself highly, highly stigmatizable. So, IMHO, we might as well capitalize on that, to help save humanity from his hubris, and the hubris of other AI leaders.
(And, as you point out, formal regulation and gov’t policy also come with their own weaknesses, vested interests, and bad actors. So, although EAs tend to act as if formal gov’t regulation is somehow morally superior to the stigmatization strategy, it’s not at all clear to me that it really is.)
Benjamin—thanks for a thoughtful and original post. Much of your reasoning makes sense from a strictly financial, ROI-maximizing perspective.
But I don’t follow your logic in terms of public sentiment regarding AI safety.
You wrote: ‘Second, an AI crash could cause a shift in public sentiment. People who’ve been loudly sounding caution about AI systems could get branded as alarmists, or people who fell for another “bubble”, and look pretty dumb for a while.’
I don’t see why an AI crash would turn people against AI safety concerns.
Indeed, a logical implication of our ‘Pause AI’ movement, and the public protests against AI companies, is that (1) we actually want AI companies to fail, because they’re pursuing AGI recklessly, (2) we are doing our best to help them to fail, to protect humanity, (3) we are stigmatizing people who invest in AI companies as unethical, and (4) we hope that the value of AI companies, and the Big Tech companies associated with them, plummets like a rock.
I don’t think EAs can have it both ways—profiting from investments in reckless AI companies, while also warning the public about the recklessness of those companies. There might be a certain type of narrow, short-sighted utilitarian reasoning in which such moral hypocrisy makes sense. But to most people, who are intuitive virtue ethicists and/or deontologists, investing in companies that impose extinction risk on our species, just in hopes that we can make enough money to help mitigate those extinction risks, will sound bizarre, contradictory, and delusional.
If we really want to make money, just invest like normal people in crypto when prices are low, and sell when prices are high. There’s no need to put our money into AI companies that we actually want to fail, for the sake of human survival.
A tale of two Sams
My sense is that public opinion has already been swinging against the AI industry (not just OpenAI), and that this is a good and righteous way to slow down reckless AGI ‘progress’ (i.e. the hubris of the AI industry driving humanity off a cliff).
My take is this:
Whenever Sam Altman behaves like an unprincipled sociopath, yet again, we should update, yet again, in the direction of believing that Sam Altman might be an unprincipled sociopath, who should not be permitted to develop the world’s most dangerous technology (AGI).
[Question] Seeking suggested readings & videos for a new course on ‘AI and Psychology’
adekcz—thanks for writing this. I’m also horrified by OpenAI turning from well-intentioned to apparently reckless and sociopathic, in pushing forward towards AGI capabilities without any serious commitment to AI safety.
The question is whether withholding a bit of money from OpenAI will really change their behavior, or whether a ‘ChatGPT boycott’ based on safety concerns could be more effective if our money-withholding is accompanied by some noisier public signaling of our moral outrage. I’m not sure what this would look like, exactly, but I imagine it could include some coordination with the ‘Pause AI’ movement (active on X/Twitter). I think a public commitment to boycotting OpenAI (e.g. through social media posts) would be helpful—especially through channels that don’t normally attract much AI safety discussion (e.g. Facebook, TikTok, Instagram—in addition to X).
tldr: boycotts work best when we make a lot of noise about them, not when they’re just a private withdrawal of funding.
Good question. My hunch is that EA as a culture tends to prioritize epistemic and ethical sophistication and rigor, over direct ‘political’ action. And has traditionally avoided getting involved in issues that seem ‘intractable’ by virtue of being highly controversial and potentially partisan.
Against that background of EA’s rather ‘ivory tower’ ethos, direct protests may be seen as rather simplistic, strident, and undignified—even for issues such as animal agriculture where there’s pretty strong EA consensus that factory farming is unethical.
But I think it’s time for EAs to climb down from our AI safety debates, recognize that the leading AI companies are not actually prioritizing safety, and start getting more involved in social media activism and in-person protests.
As a tangent, I think EAs should avoid using partisan political examples as intuition pumps for situations like this.
Liberals might think that ‘engagement with criticism by Trump’ would be worthless. But conservative crypto investors might think ‘engagement with criticism by Elizabeth Warren’ would be equally worthless.
Let’s try to set aside the reflexive Trump-bashing.
This argument seems extremely naive.
Imitation learning could easily become an extinction risk if the individuals or groups being imitated actively desire human extinction, or even just death to a high proportion of humans. Many do.
Radical eco-activists (e.g. Earth First) have often called for voluntary human extinction, or at least massive population reduction.
Religious extremists (e.g. Jihadist terrorists) have often called for death to all non-believers (e.g. the 6 billion people who aren’t Muslim).
Antinatalists and negative utilitarians are usually careful not to call for extinction or genocide as a solution to ‘suffering’, but calls for human extinction seem like a logical outgrowth of their world-view.
Many kinds of racists actively want the elimination, or at least reduction, of other races.
I fear that any approach to AI safety that assumes the whole world shares the same values as Bay Area liberals will utterly fail when advanced AI systems become available to a much wider range of people with much more misanthropic agendas.
Yarrow—I’m curious which bits of what I wrote you found ‘psychologically implausible’?
Beautiful and inspiring. Thanks for sharing this.
I hope more EAs think about turning abstract longtermist ideas into more emotionally compelling media!
mikbp: good question.
Finding meaningful roles for ordinary folks (‘mediocrities’) is a big challenge for almost every human organization, movement, and subculture. It’s not unique to EA—although EA does tend to be quite elitist (which is reasonable, given that many of its core insights and values require a very high level of intelligence and openness to understand).
The usual strategy for finding roles for ordinary people in organizations is to create hierarchical structures in which the ordinary people are bossed around/influenced/deployed by more capable leaders. This requires a willingness to accept hierarchies as ethically and pragmatically legitimate—which tends to be more of a politically conservative thing, and might conflict with EA’s tendency to attract anti-hierarchical liberals.
Of course, such hierarchies don’t need to involve full-time paid employment. Every social club, parent-teacher association, neighborhood association, amateur sports team, activist group, etc involves hierarchies of part-time volunteers. They don’t expect full-time commitments. So they’re often pretty good at including people who are average both in terms of their traits and abilities, and in terms of the time they have available for doing stuff, beyond their paid jobs, child care, and other duties.
Counterpoints:
Humans are about as good and virtuous as we could reasonably expect from a social primate that has evolved through natural selection, sexual selection, and social selection (I’ve written extensively on this in my 5 books).
Human life has been getting better, consistently, for hundreds of years. See, e.g. Steven Pinker (2018) ‘Enlightenment Now’.
Factory farming would be ludicrously inefficient for the first several decades, at least, of any Moon or Mars colonies, so would simply not happen.
My more general worry is that this kind of narrative—‘humans are horrible, we mustn’t colonize space and spread our horribleness elsewhere’—feeds the ‘effective accelerationist’ (e/acc) cult that thinks we’d be better off replaced by AIs.
Manuel—thanks for your thoughts on this. It is important to be politically and socially savvy about this issue.
But, sometimes, a full-on war mode is appropriate, and trying to play nice with an industry just won’t buy us anything. Trying to convince OpenAI to pause AGI development until they solve AGI alignment, and sort out other key safety issues, seems about as likely to work as nicely asking Cargill Meat Solutions (which produces 22% of chicken meat in the US) to slow down their chicken production, until they find more humane ways to raise and slaughter chickens.
I don’t really care much if the AI industry severs ties with EAs and Rationalists. Instead, I care whether we can raise awareness of the AI safety issues with the general public, and politicians, quickly and effectively enough to morally stigmatize the AI industry.
Sometimes, when it comes to moral issues, the battle lines have already been drawn, and we have to choose sides. So far, I think EAs have been far too gullible and naive about AI safety and the AI industry, and have chosen too often to take the side of the AI industry, rather than the side of humanity.