Evolutionary psychology professor, author of ‘The Mating Mind’, ‘Spent’, ‘Mate’, & ‘Virtue Signaling’. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.
Geoffrey Miller
I agree that growing EA in China will be important, given China’s increasing wealth, clout, confidence, and global influence. If EA fails to reach a critical mass in China, its global impact will be handicapped in 2 to 4 decades. But, as Austen Forrester mentioned in another comment, the charity sector may not be the best beachhead for a Chinese EA movement.
Some other options: First, I imagine China’s government would be motivated to think hard about X-risks, particularly in AI and bioweapons—and they’d have the decisiveness, centralized control, and resources to really make a difference. If they can build 20,000 miles of high-speed rail in just one decade, they could probably make substantial progress on any challenge that catches the Politburo’s attention. Also, they tend to take a much longer-term perspective than Western ‘democracies’, planning fairly far into the mid to late 21st century. And of course, if they don’t take AI X-risk seriously, all other AI safety work elsewhere may prove futile.
Second, China is very concerned about ‘soft power’—global influence through its perceived magnanimity. This is likely to happen through government do-gooding rather than through private charitable donations. But gov’t do-gooding could be nudged in more utilitarian directions with some influence from EA insights—e.g. China eliminating tropical diseases in areas of Africa where it’s already a neocolonialist resource-extraction power, or reducing global poverty or improving governance in countries that could become thriving markets for its exports.
Third, lab meat & animal welfare: China’s government knows that a big source of subjective well-being for people, and a contributor to ‘social stability’, is meat consumption. They consume more than half of all pork globally, and have a ‘strategic pork reserve’: https://www.cnbc.com/id/100795405. But they plan to reduce meat consumption by 50% for climate-change reasons: https://www.theguardian.com/world/2016/jun/20/chinas-meat-consumption-climate-change. This probably creates a dilemma for the gov’t: people love their pork, but if they’re told to simply stop eating it in the service of reducing global warming, they will be unhappy. The solution could be lab-grown meat. If China invested heavily in that technology, they could have all the climate-change benefits of reduced livestock farming, but people wouldn’t be resentful and unhappy about having to eat less meat. So that seems like a no-brainer for getting the Chinese gov’t interested in lab meat.
Fourth, with rising affluence, young Chinese middle-class people are likely to have the kind of moral/existential/meaning-of-life crises that hit the US baby boomers in the 1960s. They may be looking for something genuinely meaningful to do with their lives beyond workaholism & consumerism. I think 80k hours could prove very effective in filling this gap, if it developed materials suited to the Chinese cultural, economic, and educational context.
Excellent post; as a psych professor I agree that psych and cognitive science are relevant to AI safety, and it’s surprising that our insights from studying animal and human minds for the last 150 years haven’t been integrated into mainstream AI safety work.
The key problem, I think, is that AI safety seems to assume that there will be some super-powerful deep learning system attached to some general-purpose utility function connected to a general-purpose reward system, and we have to get the utility/reward system exactly aligned with our moral interests.
That’s not the way any animal mind has ever emerged in evolutionary history. Instead, minds emerge as large numbers of domain-specific mental adaptations to solve certain problems, and they’re coordinated by superordinate ‘modes of operation’ called emotions and motivations. These can be described as implementing utility functions, but that’s not their function—promoting reproductive success is. Some animals also evolve some ‘moral machinery’ for nepotism, reciprocity, in-group cohesion, norm-policing, and virtue-signaling, but those mechanisms are also distinct and often at odds.
Maybe we’ll be able to design AGIs that deviate markedly from this standard ‘massively modular’ animal-brain architecture, but we have no proof-of-concept for thinking that will work. Until then, it seems useful to consider what psychology has learned about preferences, motivations, emotions, moral intuitions, and domain-specific forms of reinforcement learning.
I would love to see some ’40,000 hours’ materials for mid-career people pivoting into EA work.
Our skills, needs, constraints, and opportunities are quite different from those of 20-somethings. For example, if one has financial commitments (child support, mortgage, debts, alimony), it’s not realistic to go back to grad school or take an unpaid internship to re-train. We also have geographical constraints—partners, kids in school, dependent parents, established friendships, community commitments. And in mid-life, our ‘crystallized intelligence’ (stock of knowledge) is much higher than a 20-something’s, but our ‘fluid intelligence’ (ability to solve abstract new problems quickly) is somewhat lower—so it’s easier to learn things that relate to our existing expertise, but harder to learn coding, data science, or finance from scratch.
On the upside, a ’40k project’ would allow EA to bring in a huge amount of talent—people with credentials, domain knowledge, social experience, leadership skills, professional networks, prestige, and name recognition. Plus, incomes that would allow substantially larger donations than 20-somethings can manage.
Psychedelics could bring many benefits, but the EA community needs to be careful not to become associated with flaky New Age beliefs. I think we can do this best by being very specific about how psychedelics could help with certain kinds of ‘intention setting’, e.g.:
1) expanding the moral circle: promoting empathy, and turning abstract recognition of other beings’ sentience into a more gut-level connection to their suffering;
2) career re-sets: helping people step back from their daily routines and aspirations to consider alternative careers, lifestyles, and communities (e.g. around an 80k hours application);
3) far-future goal-setting: getting more motivated to reduce X-risk by envisioning far-future possibilities more vividly, as in Bostrom’s ‘Letter from Utopia’;
4) recalibrating utility ceilings: becoming more familiar with states of extreme elation and contentment can remind EAs that we’re fighting for trillions of future beings to be able to experience those states whenever they want.
In academic research, government and foundation grants are often awarded using criteria similar to ITN, except:
1) ‘importance’ is usually taken as short-term importance to the research field, and/or to one country’s current human inhabitants (especially registered voters),
2) ‘tractability’ is interpreted as potential to yield several journal publications, rather than potential to solve real-world problems,
3) ‘neglectedness’ is interpreted as addressing a problem that’s already been considered in only 5-20 previous journal papers, rather than one that’s totally off the radar.
I would love to see academia in general adopt a more EA perspective on how to allocate scarce resources—not just when addressing problems of human & animal welfare and X-risk, but in addressing any problem.
Fascinating post. I agree that we shouldn’t compare LAWs to (a) hypothetical, perfectly consequentialist, ethically coherent, well-trained philosopher-soldiers, but rather to (b) soldiers as the order-following, rules-of-engagement-implementing, semi-roboticized agents they’re actually trained to become.
A key issue is the legitimacy of the LAWs’ chain of command, and how that legitimacy is secured.
Mencius Moldbug had some interesting suggestions in Patchwork about how a ‘cryptographic chain of command’ over LAWs could actually increase the legitimacy and flexibility of governance over lethal force. https://www.amazon.com/dp/B06XG2WNF1
Suppose a state has an armada/horde/flock of formidable LAWs that can potentially destroy or pacify the civilian populace—an ‘invincible robot army’. Who is permitted to issue orders? If the current political leader is voted out of office, but they don’t want to leave, and they still have the LAWs ‘launch codes’, what keeps them from using LAWs to subvert democracy? In the standard human-soldier/secret-service-agent scenario, the soldiers and agents have been socialized to respect the outcomes of democratic elections, and would balk at defending the would-be dictator; they would literally escort him/her out of the White House. In the LAWs scenario, the soldiers and agents would be helpless against LAWs still under the head of state’s control; the robot army would escort the secret service agents out of the White House instead, until everyone accepted the new dictator.
In other words, I’m not as worried about interstate war or intrastate protests; I’m worried about LAWs radically changing the incentives and opportunities for outright dictatorship. Under the Second Amendment, the standard countervailing force against dictatorship is supposed to be civilian ownership of near-equivalent tech that poses a credible threat against dictatorial imposition of force. But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov’t LAWs).
I guess this is just another example of an alignment problem—in this case between the LAWs and the citizens, with the citizens somehow able to collectively overrule a dictator’s ‘launch codes’. Maybe every citizen has their own crypto key, and they use some kind of blockchain voting system to decide what the LAWs do and whom they obey. This then opens the way to majoritarian mob rule, with LAWs forcibly displacing or genociding targeted minorities—unless the LAWs embody some ‘human/constitutional rights interrupts’ that prevent such bullying.
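Just to make the shape of that idea concrete, here’s a toy sketch in plain Python. It’s purely illustrative: the two-thirds threshold, the hard-coded rights check, and all the names are my own assumptions, and it ignores the real cryptographic and game-theoretic problems entirely.

```python
# Toy illustration only -- not real cryptography or a real voting protocol.
# Assumptions (all invented for this sketch): a 2/3 supermajority threshold,
# and a hard-coded 'rights interrupt' that no vote can override.
from dataclasses import dataclass

SUPERMAJORITY = 2 / 3  # assumed threshold for citizen key-holders


@dataclass
class Order:
    issuer: str
    action: str
    violates_basic_rights: bool  # stand-in for a 'human/constitutional rights interrupt'


def order_is_authorized(order, approvals, electorate_size):
    """An order executes only if it passes the rights check AND wins a citizen supermajority."""
    if order.violates_basic_rights:  # rights interrupt: not subject to any majority vote
        return False
    return approvals / electorate_size >= SUPERMAJORITY


# Example: an incumbent who lost the election issues an order, but only 40% of
# citizen key-holders co-sign it, so the LAWs refuse to act on it.
order = Order(issuer="incumbent", action="disperse protesters", violates_basic_rights=False)
print(order_is_authorized(order, approvals=40, electorate_size=100))  # False
```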
Any suggestions on how to solve this ‘chain of command’ problem?
Cool idea. Although I think domain-specific board games might be more intuitive and vivid for most people—e.g. a set on X-risks (one on CRISPR-engineered pandemics, one on an AGI arms race), one on deworming, one on charity evaluation with strategic conflict between evaluators, charities, and donors, a modified ‘Game of Life’ based on 80k hours principles, etc.
Heterodox Academy also has this new online training for reducing polarization and increasing mutual understanding across the political spectrum: https://heterodoxacademy.org/resources/viewpoint-diversity-experience/
Thank you! I’ll check it out.
Yes, I always put too much text on slides the first few times I present on a new topic, and then gradually strip it away as I remember better what my points are. Thanks!
Thanks to everybody for your helpful links! I’ve shared your suggestions with the journalist, who is grateful. :)
Good idea. Will check.
This post raises good points. I think crypto is a very neglected cause area with enormous upside potential, especially for developing countries. There’s much, much more to the crypto industry than just Bitcoin as a ‘store of value’, or crypto trading as a way to make money.
There are tens of thousands of smart people working on blockchain technologies and protocols that could offer a huge range of EA-adjacent use cases, such as:
- much faster, cheaper remittances
- protection of savings against hyperinflation by irresponsible central banks
- secure economic identity that allows poor people to get loans, buy insurance, receive gov’t vouchers, prove educational credentials & work histories, etc.
- voting systems that are more secure, inclusive, hard-to-hack, and easy to validate
- secure property rights & land records in areas where governments are often overthrown, and lands are confiscated
- access to reliable, validated, uncensorable data through oracle apps—e.g. weather data that can support crop insurance for poor farmers; inflation statistics that can’t be biased by government economists
- social networks that can build in consensus mechanisms for quality control, without centralized censorship
- smart contracts for royalty payments that allow creators to receive a share of any increase in value of their unique art-works (see the toy sketch below)
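To make that last item concrete, here’s a minimal sketch of the payout logic in plain Python. It is not actual on-chain code, and the 10% rate and all names are illustrative assumptions rather than any real protocol.

```python
# Toy sketch of the royalty logic -- not a real smart contract.
# The 10% rate and all names are made up for illustration.
ROYALTY_RATE = 0.10  # assumed creator share of any price appreciation


def settle_resale(last_price, sale_price):
    """Return (creator_royalty, seller_proceeds) for a resale of a unique art-work."""
    appreciation = max(sale_price - last_price, 0.0)  # creator only shares in gains
    royalty = appreciation * ROYALTY_RATE
    return royalty, sale_price - royalty


# Example: a work first sold for 100 is resold for 500; the creator automatically
# receives 10% of the 400 gain, which a smart contract could enforce at transfer time.
print(settle_resale(100.0, 500.0))  # (40.0, 460.0)
```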
Projects that could support these use cases include Ethereum, Cardano, Chainlink, Algorand, Polkadot, and many others. Many use Proof of Stake consensus protocols (low energy consumption) rather than Proof of Work (like Bitcoin, which requires higher energy consumption).
Also, there’s a lot of overlap between EA and crypto in terms of culture, personalities, and values. Apart from the ‘toxic bitcoin maximalists’, most people in the crypto industry pride themselves on their rationality, openness to evidence, long-termism, global outlook, optimism, and skepticism about virtue signaling.
I agree with Scott Alexander that when talking with most non-EA people, an X risk framework is more attention-grabbing, emotionally vivid, and urgency-inducing, partly due to negativity bias, and partly due to the familiarity of major anthropogenic X risks as portrayed in popular science fiction movies & TV series.
However, for people who already understand the huge importance of minimizing X risk, there’s a risk of burnout, pessimism, fatalism, and paralysis, which can be alleviated by longtermism and more positive visions of desirable futures. This is especially important when current events seem all doom’n’gloom, when we might ask ourselves ‘what about humanity is really worth saving?’ or ‘why should we really care about the long-term future, if it’ll just be a bunch of self-replicating galaxy-colonizing AI drones that are no more similar to us than we are to late Permian proto-mammal cynodonts?’
In other words, we in EA need long-termism to stay cheerful, hopeful, and inspired about why we’re so keen to minimize X risks and global catastrophic risks.
But we also need longtermism to broaden our appeal to the full range of personality types, political views, and religious views out there in the public. My hunch as a psych professor is that there are lots of people who might respond better to longtermist positive visions than to X risk alarmism. It’s an empirical question how common that is, but I think it’s worth investigating.
Also, a significant % of humanity is already tacitly longtermist in the sense of believing in an infinite religious afterlife, and trying to act accordingly. Every Christian who takes their theology seriously & literally (i.e. believes in heaven and hell), and who prioritizes Christian righteousness over the ‘temptations of this transient life’, is doing longtermist thinking about the fate of their soul, and the souls of their loved ones. They take Pascal’s wager seriously; they live it every day. To such people, X risks aren’t necessarily that frightening personally, because they already believe that 99.9999+% of sentient experience will come in the afterlife. Reaching the afterlife sooner rather than later might not matter much, given their way of thinking.
However, even the most fundamentalist Christians might be responsive to arguments that the total number of people we could create in the future—who would all have save-able souls—could vastly exceed the current number of Christians. So, more souls for heaven; the more the merrier. Anybody who takes a longtermist view of their individual soul might find it easier to take a longtermist view of the collective human future.
I understand that most EAs are atheists or agnostics, and will find such arguments bizarre. But if we don’t take the views of religious people seriously, as part of the cultural landscape we’re living in, we’re not going to succeed in our public outreach, and we’re going to alienate a lot of potential donors, politicians, and media influencers.
There’s a particular danger that overemphasizing the more exotic transhumanist visions of the future will alienate religious and political traditionalists. For many Christians, Muslims, and conservatives, a post-human, post-singularity, AI-dominated future would not sound worth saving. Without any humane connection to their human social world as it is, they might prefer a swift nuclear Armageddon followed by heavenly bliss to a godless, soulless machine world stretching ahead for billions of years.
EAs tend to score very highly on Openness to Experience. We love science fiction. We like to think about post-human futures being potentially much better than human futures. But if that becomes our dominant narrative, we will alienate the vast majority of currently living humans, who score much lower on Openness.
If we push the longtermist narrative to the general public, we’d better make the long-term future sound familiar enough to be worth fighting for.
This was a very clear and helpful thread. I’d suggest something that a much higher amount of funding could allow EA to pursue:
Recruit more mid-career and late-career researchers into EA, particularly established academics who are already working on EA-related issues, but who might not be inside the EA community yet, and who never viewed EA as a potential funding source. Often these researchers have established track records of publishing and consulting, run large lab groups full of grad students and post-docs, and have high impact and visibility within their fields.
But they’re often spending huge amounts of time applying for government research grants that have very low funding rates (below 10%), to keep their labs going and to supplement their salaries (e.g. for teaching buy-outs & summer salaries). This is extremely frustrating for most of them. And they have to wrap their real research interests up in some kind of package that sounds appealing given the current NIH, NSF, EU, or UK Research Council funding priorities, which have heavy political biases & very narrow Overton windows.
If these researchers were more aware that EA can offer grants just as large as they could get from other funders, but where the funding rate was significantly higher (e.g. above 25%), they might shift their research focus into greater alignment with the EA ethos and EA cause areas. If they run high quality labs with high quality grad students, this could also be a great way to recruit more young talent into EA.
For example, I’ve given many talks about EA and X risk at various behavioral sciences conferences. Often, researchers will come up afterwards and ask how they can get involved. I can point them to the standard EA resources (e.g. EA Forum, 80k Hours, Open Phil, etc), but those resources seem designed mostly for students or early-career researchers. And the EA selection processes for grants often seem to weigh in-groupish EA credentials (EA social connections, buzzwords, familiarity with the cool cause areas) over established academic credentials, research capacity, and proof of research impact. This can be off-putting to anyone with an h-index above 30.
In other words, with increased funding comes the possibility of an expanded strategy for EA community building and recruitment. Instead of just trying to spot young talents and pay them modest salaries to do entry-level research, we can potentially recruit already established academic researchers, and support their labs to work on EA cause areas rather than cause areas considered important by government funding agencies.
There is an ocean of proven academic talent out there, dying to find a better way to support their lab, and to do more interesting research that’s truly high impact. They just need better on-ramps to figure out how to pivot into EA—and they need to feel genuinely welcome even if they don’t yet speak the EA dialect, if they don’t fully understand the EA ethos and cause areas, and even if they’re over the age of 40. (EA’s pervasive & obnoxious ageism is a topic for another time....)
Thomas—excellent reply, and good points. I’ve written a bit about virtue signaling, and agree that there are good forms (reliable, predictive) and bad forms (cheap talk, deceptive, misguided) of virtue signaling.
I also agree that EA could be more creative and broad-minded about what kinds of virtue signaling are likely to be helpful in predicting future integrity, dedication, and constructiveness in EA. Historically, a lot of EA signaling has involved living frugally, being vegan, being a good house-mate in an EA shared house, collaborating well on EA projects, getting lots of upvotes on EA Forum, etc. Assessing those signals accurately requires a lot of first-hand or second-hand knowledge, which can be hard to do at scale as the EA movement grows.
As EA grows in scale and becomes more diverse in terms of background (e.g. recruits more established professionals from other fields, not just recent college grads), we may need to get savvier about domain-specific virtue signals, e.g. how do medical researchers vs geopolitical security experts vs defense attorneys vs bioethicists vs blockchain developers show their true colors?
The very tricky trade-off, IMHO, is that the most reliable virtue signals in terms of predicting personality traits (honesty, humility, conscientiousness, kindness) are often the least efficient in terms of actually accomplishing real-world good. For example, defense attorneys who do a lot of pro bono work on appeals for death row inmates might be showing genuine dedication and altruism—but this might be among the least effective uses of their time for achieving criminal justice reform. So, do we want the super-trustworthy but scope-insensitive lawyers involved in EA, or the slightly less virtue-signaling but more rational and scope-sensitive lawyers?
That seems like a real dilemma. Traditionally, EA has solved it mostly by expecting a fair amount of private personality-signaling (e.g. being a conscientious vegan house-mate) plus a lot of public, hyper-rational, scope-sensitive analysis and discussion.
Good post with a fairly comprehensive list of the conscious, semi-conscious, covert, or adaptively self-deceived reasons why we may be attracted to EA.
I think these apply to any kind of virtue signaling, do-gooding, or public concern over moral, political, or religious issues, so they’re not unique to EA. (Although the ‘intellectual puzzle’ piece may be somewhat distinctive with EA).
We shouldn’t beat ourselves up about these motivations, IMHO. There’s no shame in them. We’re hyper-social primates, evolved to gain social, sexual, reproductive, and tribal success through all kinds of moralistic beliefs, values, signals, and behaviors. If we can harness those instincts a little more effectively in the direction of helping other current and future sentient beings, that’s a huge win.
We don’t need pristine motivations. Don’t buy into the Kantian nonsense that only disinterested or purely ‘altruistic’ reasons for altruism are legitimate. There is no naturally evolved species that would be capable of pure Kantian altruism. It’s not an evolutionarily stable strategy, in game theory terms.
We just have to do the best we can with the motivations that evolution gave us. I think Effective Altruism is doing the best we can.
The only trouble comes if we try to pretend that none of these motivations should have any legitimacy in EA. If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, or if we strip away the payoffs for those incentives through some misguided puritanism about what motives we can expect EAs to have, we will undermine EA.
It’s a good thread, and worth a look!
The Michael Nielsen critique seems thoughtful, constructive, and well-balanced on a first read, but I have some serious reservations about the underlying ethos and its implications.
Look, any compelling new world-view that sits outside the mainstream culture’s Overton window can be pathologized as an information hazard that makes its believers feel unhappy, inadequate, and even mentally ill by mainstream standards. Nielsen seems to view ‘strong EA’ as that kind of information hazard, and critiques it as such.
Trouble is, if you understand that most normies are delusional about some important issue, and you develop some genuinely deeper insights into that issue, the psychologically predictable result is some degree of alienation and frustration. This is true for everyone who has a religious conversion experience. It’s true for everyone who really takes on board the implications of any intellectually compelling science—whether cosmology, evolutionary biology, neuroscience, signaling theory, game theory, behavior genetics, etc. It’s true for everyone who learns about any branch of moral philosophy and takes it seriously as a guide to action.
I’ve seen this over, and over, and over in my own field of evolutionary psychology. The usual ‘character arc’ of ev psych insight is that (1) you read Dawkins or Pinker or Buss, you get filled with curiosity about the origins of human nature, (2) you learn some more and you feel overwhelming intellectual awe and excitement about the grandeur of evolutionary theory, (3) you gradually come to understand that every human perception, preference, value, desire, emotion, and motivation has deep evolutionary roots beyond your control, and you start to feel uneasy, (4) you ruminate about how you’re nothing but an evolved robot chasing reproductive success through adaptively self-deceived channels, and you feel some personal despair, (5) you look around at a society full of other self-deceived humans unaware of their biological programming, and you feel black-pilled civilizational despair, (6) you live with the Darwinian nihilism for a few years, adapt to the new normal, and gradually find some way to live with the new insights, climbing your way back into some semblance of normie-adjacent happiness. I’ve seen these six phases many times in my own colleagues, grad students, and collaborators.
And that’s just with a new descriptive world-view about how the human world works. EA’s challenge can be even more profound, because it’s not just descriptive, but normative, or at least prescriptive. So there’s a painful gap between what we could be doing, and what we are doing. And so there should be, if you take the world in a morally serious way.
I think the deeper problem is that given 20th century history, there’s a general dubiousness about any group of people who do take the world in a morally serious way that deviates from the usual forms of mild political virtue signaling encouraged in our current system of credentialism, careerism, and consumerism.
From my perspective as an evolutionary psychologist, I wouldn’t expect us to have reliable or coherent intuitions about utility aggregation for any groups larger than about 150 people, for any time-spans beyond two generations, or for any non-human sentient beings.
This is why consequentialist thought experiments like this so often strike me as demanding the impossible of human moral intuitions—like expecting us to be able to reconcile our ‘intuitive physics’ concept of ‘impetus’ with current models of quantum gravity.
Whenever we take our moral intuitions beyond their ‘environment of evolutionary adaptedness’ (EEA), there’s no reason to expect they can be reconciled with serious consequentialist analysis. And even within the EEA, there’s no reason to expect our moral intuitions will be utilitarian rather than selfish + nepotistic + in-groupish + a bit of virtue-signaling.