Evolutionary psychology professor, author of ‘The Mating Mind’, ‘Spent’, ‘Mate’, & ‘Virtue Signaling’. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in long termism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.
Geoffrey Miller
Tom—you raise some fascinating issues, and your Venn diagrams, however impressionistic they might be, are useful visualizations.
I do hope that AI safety remains an important part of EA—not least because I think there is some important, under-explored overlap between AI safety and the other key cause areas: global health & development, and animal welfare.
For example, I’m working on an essay about the animal welfare implications of AGI. Ideally, advanced AI wouldn’t just be ‘aligned’ with human interests, but with the interests of the other 70,000 species of sentient vertebrates (and the sentient invertebrates). But very little has been written about this so far. So, AI safety has a serious anthropocentric bias that needs challenging. The EAs who have worked on animal welfare could have a lot to say about AI safety issues in relation to other species.
Likewise, the ‘e/acc’ cult (which dismisses AI safety concerns and advocates AGI development ASAP) often argues that there’s a moral imperative to develop AGI, in order to promote global health and development (e.g. ‘solving longevity’ and ‘promoting economic growth’). EA people who have worked on global health and development could contribute a lot to the debate over whether AGI is strictly necessary to promote longevity and prosperity.
So, the Venn diagrams need to overlap even more!
This is a great idea, and I look forward to reading the diverse views on the wisdom of an AI pause.
I do hope that the authors contributing to this discussion take seriously the idea that an ‘AI pause’ doesn’t need to be fully formalizable at a political, legal, or regulatory level. Rather, its main power can come from promoting an informal social consensus about the serious risks of AGI development, among the general public, journalists, politicians, and the more responsible people in the AI industry.
In other words, the ‘Pause AI’ campaign might get most of its actual power and influence from helping to morally stigmatize reckless AI development, as I argued here.
Thus, the people who argue that pausing AI isn’t feasible, or realistic, or legal, or practical, may be missing the point. ‘Pause AI’ can function as a Schelling point, or focal point, or coordination mechanism, or whatever you want to call it, with respect to public discourse about the ethics of AI development.
There are universal human psychological adaptations associated with moral disgust, so it’s not that hard for ‘moral disgust’ to explain broad moral consensus across very different cultures. For example, murder and rape within societies are almost always considered morally disgusting, across cultures, according to the anthropological research.
It’s not that big a stretch to imagine that a global consensus could be developed that leverages these moral disgust instincts to stigmatize reckless AI development, as I argued here.
Nathan—thanks for your support here.
I’ve noticed that every time I post something critical of coddling culture or runaway safetyism or the way that woke politics is undermining EA culture, I get seriously downvoted. I don’t get downvoted like that when I post about AI safety, parenting as an EA, teaching EA classes, or any other topic.
So I suspect there are a lot of political biases at work here.
What exactly is ‘dangerous’ about expressing concern that Rockwell’s post comes across as sex-negative, drug-negative, and cohousing-negative?
Is open discussion of a community’s social norms ‘dangerous’?
Rockwell—these norms might sound fair, reasonable, and helpful, at first glance.
But they show, IMHO, a strong latent sex-negativity, drug-negativity, and cohousing-negativity that is the diametrical opposite of EA’s traditional subculture—at least until recent years, when ‘safetyism’ seems to have become prioritized over fun, collegiality, and alternative lifestyles.
Take the issue of ‘power differentials’, for example. Some people are really attracted to people who are more powerful, higher status, higher prestige, more influential, more famous, wealthier, and/or older. (There is a LOT of psychological research on these kinds of status-seeking mate preferences, which are very common across cultures.) Such people might prefer to ‘date coworkers’, especially when there is a power differential. This is especially true for the significant proportion of people involved in ‘power exchange’ relationships (e.g. the BDSM subculture, including Dom/sub relationships). (This is salient to me because I teach Human Sexuality courses at college, and I do research on anti-BDSM and anti-polyamory stigma and prejudice).
So, prohibiting ‘power-differential dating’ sounds extra ‘safe’ at first glance. But it would marginalize and stigmatize everybody who’s already in a ‘power-differential’ relationship, or who wants to be—especially among people who take their EA identity seriously, and who would prefer to date other EAs. (Also, of course, stigmatizing ‘power-differential dating’ often boils down to ageism, and the stigmatization of relationships that involve ‘age gaps’.)
Likewise with the notion that EAs should never promote drug use among coworkers, including legal drugs and alcohol. Let’s be honest here. The expansion of many of our ‘moral circles’ involved psychedelic experiences that allowed us to think about animal sentience, AI, long-termism, and future people in new ways. For those of us with Asperger’s (like me) or other autism-spectrum traits, psychoactive drugs such as MDMA helped us develop empathy, compassion, and capacities for social perspective-taking and connection. For those of us with social awkwardness and introversion, light recreational drugs such as alcohol, cannabis, or modafinil can be crucial in loosening up enough to make friends and network at parties and social events. If EA strongly discourages substance use in all EA-adjacent social events, cohousing communities, and friendship circles, then we may be socially handicapping everyone who isn’t a neurotypical extrovert… and we may be keeping our moral circles from growing.
So, in short, I think for any proposed changes to EA subculture norms, we should think very carefully about how these new norms might affect the full range and diversity of people involved in EA, given their actual preferences, experiences, and relationships.
And we should think about whether the new norms are contrary to the traditions of the EA subculture as it’s developed over the last dozen years. In my view, EA is a wonderful subculture, full of fascinating and principled people, with unique perspectives and priorities, and I think this has been due, in no small measure, to the relatively sex-positive, drug-positive, and cohousing-positive features of ‘Trad EA’ culture.
Spencer—good reply.
The crux here is about ‘how bad it is to make public, false, potentially damaging claims about people, and the standard of care/evidence required before making those claims’.
I suspect there are two kinds of people most passionately involved in this dialogue here on EA Forum:
(1) those who have personally experienced being harmed by false, damaging claims (e.g. libel, slander) in the past (which includes me, for example) -- who tend to focus on the brutal downsides of reckless accusations that aren’t properly researched, and
(2) those who have been harmed by people who should have been called out earlier, but where nobody had the guts to be a whistle-blower before—who tend to focus on the downsides of failing to report bad behavior in a quick and effective and public way.
I think if everybody does a little soul-searching about which camp they fall into, and is a little more upfront about their possible personal biases around these issues, the quality of discourse might be higher.
Time and effort invested in writing a post have little bearing on the objectivity of the post, when it comes to adjudicating what’s really true in ‘he said/she said’ (or ‘she said/she said’) cases.
If people have an agenda, they might invest large amounts of time and energy into writing something. But if they’re not consciously following principles of objective reporting (e.g. as crystallized in the highest ideals of investigative journalism), what they write might be very unbalanced.
We are all familiar with many, many cases of this in partisan news media from the Left and the Right. Writers with an agenda routinely invest hundreds of hours into writing pieces that end up being very biased.
It reveals a lot that you ‘suspect Nonlinear would have come out looking very bad regardless’. That suggests that Ben’s initial framing of this narrative will, in fact, tend to overwhelm any counter-evidence that Nonlinear can offer—and maybe he should have waited longer, and tried harder, to incorporate their counter-evidence before publishing this.
Note that I am NOT saying that Ben definitely had a hidden agenda, or definitely was biased, or was acting in bad faith. I’m simply saying that we, as outsiders, do not know the facts of the matter yet, and we should not confuse the amount of time invested in writing something with the objectivity of the result.
So, you don’t think amateur investigative journalism should even try to adhere to the standards of professional investigative journalism? (That’s the crux of my argument—I’m obviously not saying that everybody needs to be a trained investigative journalist to publish these kinds of pieces on EA Forum.)
Did he interview them about the specific claims he was making, and give them the opportunity to present counter-evidence? That’s the issue.
A generic interview, without the Nonlinear people knowing the details of his allegations, isn’t relevant and doesn’t count as ‘fact-checking’ (if that’s what he did).
IMHO, the burden of proof was on Ben Pace to fact-check these kinds of claims before publishing them in a public forum like EA Forum—by interviewing all the relevant people, rather than just reporting the claims of his two main anonymous informants.
PS for those folks who disagree-voted with my post:
My key takeaway was ‘if we publish amateur investigative journalism in EA Forum, especially when there are very high stakes for the reputations of individuals and organizations, we should try to adhere, as closely as possible, to the standards of professional investigative journalism.’
Do you disagree with that conclusion?
Or with some other specific aspect of what I wrote?
Genuinely curious.
A note on EA posts as (amateur) investigative journalism:
When passions are running high, it can be helpful to take a step back and assess what’s going on here a little more objectively.
There are many different kinds of EA Forum posts, which we evaluate using different criteria. Some posts announce new funding opportunities; we evaluate these in terms of brevity, clarity, relevance, and useful links for applicants. Some posts introduce a new potential EA cause area; we evaluate them in terms of whether they make a good empirical case for the cause area being large-scope, neglected, and tractable. Some posts raise theoretical issues in moral philosophy; we evaluate those in terms of technical philosophical criteria such as logical coherence.
This post by Ben Pace is very unusual, in that it’s basically investigative journalism, reporting the alleged problems with one particular organization and two of its leaders. The author doesn’t explicitly frame it this way, but in his discussion of how many people he talked to, how much time he spent working on it, and how important he believes the alleged problems are, it’s clearly a sort of investigative journalism.
So, let’s assess the post by the usual standards of investigative journalism. I don’t offer any answers to the questions below, but I’d like to raise some issues that might help us evaluate how good the post is, if taken seriously as a work of investigative journalism.
Does the author have any training, experience, or accountability as an investigative journalist, so they can avoid the most common pitfalls, in terms of journalistic ethics, due diligence, appropriate degrees of skepticism about what sources say, etc.?
Did the author have any appropriate oversight, in terms of an editor ensuring that they were fair and balanced, or a fact-checking team that reached out independently to verify empirical claims, quotes, and background context? Did they ‘run it by legal’, in terms of checking for potential libel issues?
Does the author have any personal relationship to any of their key sources? Any personal or professional conflicts of interest? Any personal agenda? Was their payment of money to anonymous sources appropriate and ethical?
Were the anonymous sources credible? Did they have any personal or professional incentives to make false allegations? Are they mentally healthy, stable, and responsible? Does the author have significant experience judging the relative merits of contradictory claims by different sources with different degrees of credibility and conflicts of interest?
Did the author give the key targets of their negative coverage sufficient time and opportunity to respond to their allegations, and were their responses fully incorporated into the resulting piece, such that the overall content and tone of the coverage was fair and balanced?
Does the piece offer a coherent narrative that’s clearly organized according to a timeline of events, interactions, claims, counter-claims, and outcomes? Does the piece show ‘scope-sensitivity’ in accurately judging the relative badness of different actions by different people and organizations, in terms of which things are actually trivial, which may have been unethical but not illegal, and which would be prosecutable in a court of law?
Does the piece conform to accepted journalistic standards in terms of truth, balance, open-mindedness, context-sensitivity, newsworthiness, credibility of sources, and avoidance of libel? (Or is it a biased article that presupposes its negative conclusions, aka a ‘hit piece’, ‘takedown’, or ‘hatchet job’?)
Would this post meet the standards of investigative journalism that’s typically published in mainstream news outlets such as the New York Times, the Washington Post, or the Economist?
I don’t know the answers to some of these, although I have personal hunches about others. But that’s not what’s important here.
What’s important is that if we publish amateur investigative journalism in EA Forum, especially when there are very high stakes for the reputations of individuals and organizations, we should try to adhere, as closely as possible, to the standards of professional investigative journalism. Why? Because professional journalists have learned, from centuries of copious, bitter, hard-won experience, that it’s very hard to maintain good epistemic standards when writing these kinds of pieces, it’s very tempting to buy into the narratives of certain sources and informants, it’s very hard to course-correct when contradictory information comes to light, and it’s very important to be professionally accountable for truth and balance.
A brief note on defamation law:
The whole point of having laws against defamation, whether libel (written defamation) or slander (spoken defamation), is to hold people to higher epistemic standards when they communicate very negative things about people or organizations—especially negative things that would stick in readers’ or listeners’ minds in ways that would be very hard for subsequent corrections or clarifications to counteract.
Without making any comment about the accuracy or inaccuracy of this post, I would just point out that nobody in EA should be shocked that an organization (e.g. Nonlinear) that is being libeled (in its view) would threaten a libel suit to deter the false accusations (as they see them), and to nudge the author (e.g. Ben Pace) towards making sure that their negative claims are factually correct and contextually fair.
That is the whole point and function of defamation law: to promote especially high standards of research, accuracy, and care when making severe negative comments. This helps promote better epistemics, when reputations are on the line. If we never use defamation law for its intended purpose, we’re being very naive about the profound costs of libel and slander to those who might be falsely accused.
EA Forum is a very active public forum, where accusations can have very high stakes for those who have devoted their lives to EA. We should not expect that EA Forum should be completely insulated from defamation law, or that posts here should be immune to libel suits. Again, the whole point of libel suits is to encourage very high epistemic standards when people are making career-ruining and organization-ruining claims.
ASB—thanks for sparking a fascinating discussion, and to the many comment-writers who contributed.
I’m left with mixed feelings about the pros and cons of developing narrow, ‘boring’, expertise in specific policy topics, versus more typical EA-style big-picture thinking.
The thing is, there are important and valuable roles for people who specialize in connecting these two approaches, in serving as ‘middle men’ between the specialist policy wonks and the EA strategists. This requires some proactive networking, some social skills, a capacity for getting up to speed rapidly in new areas, a respect for subject matter experts, and an ability to understand what can help policy experts do their jobs, and advance their careers, more effectively. This intermediary role could probably benefit from a few years of immersion in the gov’t/policy/think tank world—but not such deep immersion that one soaks up all of the conventional wisdom and groupthink and unexamined assumptions that tend to characterize many policy subcultures. So, the best intermediaries may still keep one foot in the EA subculture and one foot in a very specialized policy subculture.
(I say this as someone who’s spent most of his academic career trying to connect specialist knowledge in evolutionary and genetic theory to bigger-picture issues in human psychology.)
Max—thanks very much for sharing this comprehensive, candid, and transparent overview of the process. It all sounds very reasonable, reflective, and effective.
Just wanted to express my appreciation to you and the other members of the search team.
(PS I wish the search for university administrators was this open and honest!)
Fascinating question.
A follow-up question: Does EA bring out the best in me… compared to what?
I’m active (probably way too active!) on Twitter (aka ‘X’). Which brings out the better me—Twitter or EA Forum? Almost certainly EA Forum does.
Twitter tends to make me disagreeable, reactive, ornery, partisan, angry, outraged, attention-seeking. EA Forum tends to make me calmer, smarter, more open-minded, more respectful, more intellectually serious, more likely to steelman opposing views, etc.
So, as a subculture (or at least as a social media platform), EA (and EA Forum) seem way better than Twitter/X or most other social media platforms or subcultures.
I think analogous arguments could be made for EA Forum bringing out better versions of us than Reddit, TikTok, Instagram, YouTube, Facebook, etc.
Is EA perfect as a subculture? Nope. But it’s one of the best subcultures I’ve ever been involved with, and it brings out more of my best than most others do.
Thanks for sharing this article. I’m not very familiar with the moral philosophy debates about antinatalism, but I find the general thrust of the article quite confusing and uncompelling.
The article admits that ‘the arguments from quality of life, risk, asymmetry, and consent do not seem to produce a reliable tool for the antinatalist activist’s kit’, and then the authors go on to try to develop an antinatalist argument based on ‘postnatal imposition’ (basically, that parents are ‘imposing’ ongoing suffering on their kids by having brought them into existence).
I have two basic problems with this at sort of a meta-level.
First, the whole article reads as if the authors are determined to create some compelling antinatalist arguments, even if the previous arguments failed. They basically assume antinatalism is the correct moral-philosophical position, and antinatalist activism is righteous, and then they cast about for arguments that might be strategically and tactically useful to antinatalist activists. I come away with the impression that there are no rational or empirical arguments that could switch them from antinatalism to pronatalism. The conclusion is predetermined.
Second, from my perspective as an evolutionary psychologist, all such antinatalist arguments seem futile at the evolutionary time-scale. Insofar as there are any heritable cognitive or personality traits that incline people towards antinatalism, and insofar as antinatalists actually have fewer kids than pronatalists, antinatalist tendencies will quickly be selected out of the population. To a large degree, of course, this has already happened—which is why antinatalism is deeply unpopular, counter-intuitive, and apparently ridiculous to most people. Antinatalism as a philosophy would only ‘win’ (i.e. result in total human extinction) if very persuasive arguments were developed and spread so quickly that every human lineage self-terminated at roughly the same time, within a few generations. If even a few lineages avoid the antinatalist ‘mind virus’ (as I see it), then those lineages will become the ancestors of all future humans (and post-humans), and those future people will have even stronger cognitive, ethical, and emotional defenses against antinatalism than we do now.
To a psychologist like me, most of the antinatalist philosophy I’ve read so far just comes across as people universalizing their higher-than-average levels of depression, ingratitude, and pessimism as if it’s shared by all other sentient beings. But, empirically, it isn’t. The happiness research shows that most people are pretty happy most of the time. Especially in the modern world (as opposed to the medieval world, for example). One can say they’re deluded about that. But that’s a dangerously patronizing attitude to take—one that’s totally opposed to modern notions of autonomy, freedom, and democracy.
Vasco—thanks for a fascinating, illuminating, and skeptical review of the nuclear winter literature.
It seems like about 1000x as much effort has gone into modeling global warming as into modeling nuclear winter. Yet this kind of nuclear winter modeling seems very important—it’s large-scope, neglected, and tractable—perfect for EA funding.
Given the extremely high stakes for understanding whether nuclear war is likely to be a ‘moderately bad global catastrophe’ (e.g. ‘only’ a few hundred million dead, civilization set back by decades or centuries, but eventual recovery possible), or an extinction-level event for humanity (i.e. everybody dies forever), clarifying the likely effects of ‘nuclear winter’ seems like a really good use of EA talent and money.
OK, that sounds somewhat plausible, in the abstract.
But what would be your proposal to slow down and reduce extinction risk from AI development? Or do you think that risk is so low that it’s not worth trying to manage it?