Evolutionary psychology professor, author of ‘The Mating Mind’, ‘Spent’, ‘Mate’, & ‘Virtue Signaling’. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.
Geoffrey Miller
Well, the main asymmetry here is that the Left-leaning ‘mainstream’ press doesn’t understand or report the Right’s concerns about Leftist authoritarianism, but it generates and amplifies the Left’s concerns about ‘far Right authoritarianism’.
So, any EAs who follow ‘mainstream’ journalism (e.g. CNN, MSNBC, NY Times, WaPo) will tend to repeat their talking points, their analyses, and their biases.
Most reasonable observers, IMHO, understand that the US ‘mainstream’ press has become very left-leaning and highly biased over the last few decades, especially since 2015, and that it now functions largely as a propaganda wing of the Democratic Party. (Consider, for example, the ‘mainstream’ media’s systematic denial of Biden’s dementia over the last several years, until the symptoms became too painfully obvious for anyone to ignore. Such journalists would never have run cover for Trump if he’d been developing dementia; they would have been demanding his resignation years ago.)
In any case, the partisan polarization on such issues is, perhaps, precisely why EAs should be very careful not to wade into these debates unless they have a very good reason for doing so, a lot of political knowledge and wisdom, an ability to understand both sides, and a recognition that these political differences are probably neither neglected nor tractable.
If we really want to make a difference in politics, I think we should be nudging the relevant decision-makers, policy wonks, staffers, and pundits into developing a better understanding of the global catastrophic risks that we face from nuclear war, bioweapons, and AI.
Yelnats—thanks for this long, well-researched, and thoughtful piece.
I agree that political polarization, destabilization, and potential civil war in the US (and elsewhere) are worthy of more serious consideration within EA, since they amplify many potential catastrophic risks and extinction risks.
However, I would urge you to try much harder to develop a less partisan analysis of these issues. This essay comes across (to me, as a libertarian centrist with some traditionalist tendencies) as a very elaborate rationalization for ‘Stop Trump at all costs!’, based on the commonly repeated claim that ‘Trump is an existential threat to democracy’. A lot of the rhetoric and examples simply repeat highly partisan Democratic Party talking points, which have been promoted ad nauseam by CNN, MSNBC, the Washington Post, the NY Times, etc., and many of which have been debunked upon further investigation.
EAs tend to lean Left. We know this from EA surveys. Rich EAs (such as SBF) have donated very large sums of money to Democratic candidates. That makes it very important for us to become more aware of our own political biases, when we address issues such as polarization.
In my opinion, both current US political parties are showing some highly authoritarian tendencies. You mentioned some authoritarian tendencies from the Republican side. But you seem to have overlooked many authoritarian trends on the Democratic/Leftist side, which have included:
outsourcing government censorship to social media and Big Tech, especially from 2015 through today
demanding lockdowns, school closures, mandatory vaccinations, and public masking during Covid
using organized ‘lawfare’ against political opponents, including Trump
promoting pro-Jihadist, pro-Hamas, anti-semitic protests on college campuses
promoting racially and sexually divisive identity politics in public K-12 schools and universities
threatening the independence and integrity of the Supreme Court (e.g. by planning court-packing, AOC threatening to impeach SCOTUS justices, and Biden ignoring SCOTUS decisions, such as the ruling prohibiting student-loan ‘forgiveness’)
promoting infringements on Second Amendment rights through unconstitutional ‘gun control’ legislation and executive orders
promoting ‘regulation by enforcement’, e.g. Gary Gensler & Elizabeth Warren weaponizing the SEC to harass the crypto industry and infringe on American rights to hold digital assets
undermining states’ autonomy to pass their own laws regarding controversial social and sexual issues, such as abortion
promoting a Central Bank Digital Currency, which would create a ‘financial panopticon’ in which the federal government has information about every economic transaction that every citizen makes
trying to protect the permanent, unelected, unaccountable federal bureaucracy (aka the ‘deep state’) from legislative oversight (e.g. objecting to SCOTUS recently overturning ‘Chevron deference’)
turning federal law enforcement (e.g. the FBI) into a partisan weapon for demonizing political opponents (e.g. claiming that ‘white nationalism’ is the ‘biggest terrorist threat to America’)
treating the outcome of the 2016 presidential election as illegitimate for years afterwards, e.g. blaming it on ‘Russian interference’
defending a manifestly senile president (Biden) whose executive branch seems to be run by an unelected, unaccountable, shadowy set of advisors who remain unknown to most citizens, and who have concentrated a huge amount of authoritarian power behind an aging, incompetent figurehead.
Many on the Left think of ‘authoritarianism’ as a purely Right-wing phenomenon, following Frankfurt School Leftists such as Adorno et al., who published ‘The Authoritarian Personality’ (1950). However, more recent work in political psychology shows that there are plenty of Leftist authoritarians. History also offers plenty of examples of authoritarian socialists, such as Lenin, Stalin, Mao, Pol Pot, and Castro, who were responsible for tens of millions of deaths.
Moreover, the standard 2-D graph of political orientation, which includes a Left-vs-Right dimension, but also an Authoritarian-vs-Libertarian dimension, reminds us that the Right does not have a monopoly on authoritarianism.
So, I would urge you to continue this work, but to re-examine your own political biases, and perhaps to collaborate with researchers who hold more diverse political views, such as Centrists, Libertarians, Conservatives, Neo-Reactionaries, Nationalists, Populists, etc.
I expect this comment to be downvoted into oblivion by EAs who reflexively think ‘Trump bad, Progressives good’.
But I beseech you all, consider the possibility that the Democrats are just as much of a threat to American democracy and liberty as the Republicans have ever been.
Peter—This is a valuable comment; thanks for adding a lot more detail about this lab.
Vasco—understood. The estimate still seems much lower than most other credible estimates I’ve seen. And much lower than it felt when we were living through the 70s and 80s, and the Cold War was still very much a thing.
This is indeed somewhat puzzling. I don’t know Neil Thompson’s work, but his Google Scholar profile isn’t that impressive (fewer than 2,000 citations, h-index 15), and his work on the impacts of AI and computing (1) doesn’t seem all that relevant to AI safety or X risk, and (2) doesn’t seem to require $17 million in funding, insofar as the research seems to be mostly literature reviews, conceptual writing, and journal paper publishing.
If I were just writing thought pieces on the future of compute, $17 million would fund me for at least the next 70 years....
Raemon—I strongly agree, and I don’t think EAs should be overthinking this as much as we seem to be in the comments here. Some ethical issues are, actually, fairly simple.
OpenAI, DeepMind, Meta, and even Anthropic are pushing recklessly ahead with AGI capabilities development. We all understand the extinction risks and global catastrophic risks that this imposes on humanity. These companies are not aligned with EA values of preserving human life, civilization, and sentient well-being.
Therefore, instead of 80k Hours advertising jobs at such companies, which does give them our EA seal of moral approval, we should be morally stigmatizing them, denouncing them, and discouraging people from working with them.
If we adopt a ‘sophisticated’, ‘balanced’, mealy-mouthed approach where we kinda sorta approve of them recruiting EAs, but only in particular kinds of safety roles, in hope of influencing their management from the inside, we are likely to (1) fail to influence management, and (2) undermine our ability to use a moral stigmatization strategy to slow or pause AGI development.
In my opinion, if EAs banded together to advocate an immediate pause on any further AGI development, and adopted a public-relations strategy of morally stigmatizing any work in the AI industry, we would be much more likely to reduce AI extinction risk than if we spend our time trying to play 4-D chess in figuring out how to influence AI companies from the inside.
Some industries are simply evil and reckless, and it’s good for us to say so.
Let’s be honest with ourselves. The strategy we’ve followed for a decade, of trying to influence AI companies from the inside, to slow capabilities development and to promote AI alignment work, has failed. The strategy of trying to promote government regulation to slow reckless AI development is showing some signs of success, but is probably too slow to actually inhibit AI capabilities development. This leaves the informal public-relations strategy of stigmatizing the industry, to dry up its funding, reduce its access to talent, and make it morally embarrassing rather than cool to work in AI.
But EAs can only pursue the moral stigmatization strategy to slow AGI development if we are crystal clear that working on AGI development is a moral evil that we cannot endorse.
Michael—I agree with your assessment here, both that the CEARCH report is very helpful and informative, and that their estimated likelihood of nuclear war (only 10% per century) seems much lower than is reasonable, and much lower than other expert estimates that I’ve seen.
Just as a lot can happen in a century of AI development, a lot can happen over the next century that could increase the likelihood of nuclear war.
sammyboiz—I strongly agree. Thanks for writing this.
There seems to be no realistic prospect of solving AGI alignment or superalignment before the AI companies develop AGI or ASI. And they don’t care. There are no realistic circumstances under which OpenAI, or DeepMind, or Meta, would say ‘Oh no, capabilities research is far outpacing alignment; we need to hire 10x more alignment researchers, put all the capabilities researchers on paid leave, and pause AGI research until we fix this’. It will not happen.
Alternative strategies include formal governance work. But they also include grassroots activism, and informal moral stigmatization of AI research. I think of PauseAI as doing more of the last two, rather than just focusing on ‘governance’ per se.
As I’ve often argued, if EAs seriously think that AGI is an extinction risk, and that the AI companies seeking AGI cannot be trusted to slow down or pause until they solve the alignment and control problems, then our only realistic option is to use social, cultural, moral, financial, and government pressure to stop them. Now.
Will—could you expand a bit more on what you’re looking for? I found this question a little too abstract to answer, and others might share this confusion.
Yep. 100% agree!
Rob—excellent post. Wholeheartedly agree.
This is the time for EAs to radically rethink our whole AI safety strategy. Working on ‘technical AI alignment’ is not going to work in the time that we probably have, given the speed of AI capabilities development.
Richard—this is an important point, nicely articulated.
My impression is that a lot of anti-EA critics actually see scope-sensitivity as actively evil, rather than as a neutral corollary of impartial beneficence or goal-directed altruism. One could psychoanalyze why they think this—I suspect it’s usually more of an emotional defense than a thoughtful application of deontology. But I think EAs need to contend with the fact that, to many non-EAs, scope-sensitive reasoning about moral issues comes across as somewhat sociopathic. Which is bizarre, and tragic, but often seems to be true.
I think, at this point, EAs (including 80k Hours) publicly boycotting OpenAI, and refusing to work there, and explaining why, clearly and forcefully, would do a lot more good than trying to work there and nudge them from the inside towards not imposing X risks on humanity.
Linch—I agree with your first and last paragraphs.
I have my own doubts about our political institutions, political leaders, and regulators. They have many and obvious flaws. But they’re one of the few tools we have to hold corporate power accountable to the general public. We might as well use them, as best we can.
Neel—am I incorrect that Anthropic and DeepMind are still pursuing AGI, despite AI safety and alignment research still lagging far behind AI capabilities research? If they are still pursuing AGI, rather than pausing AGI research, they are no more ethical than OpenAI, in my opinion.
The OpenAI debacles and scandals help illuminate some of the commercial incentives, personal egos, and systemic hubris that sacrifices safety for speed in the AI industry. But there’s no reason to think those issues are unique to OpenAI.
If Anthropic came out tomorrow and said, ‘OK, everyone, this AGI stuff is way too dangerous to pursue at the moment; we’re shutting down capabilities research for a decade until AI safety can start to catch up’, then they would have my respect.
Manuel—thanks for your thoughts on this. It is important to be politically and socially savvy about this issue.
But, sometimes, a full-on war mode is appropriate, and trying to play nice with an industry just won’t buy us anything. Trying to convince OpenAI to pause AGI development until they solve AGI alignment, and sort out other key safety issues, seems about as likely to work as nicely asking Cargill Meat Solutions (which produces 22% of chicken meat in the US) to slow down their chicken production, until they find more humane ways to raise and slaughter chickens.
I don’t really care much if the AI industry severs ties with EAs and Rationalists. Instead, I care whether we can raise awareness of the AI safety issues with the general public, and politicians, quickly and effectively enough to morally stigmatize the AI industry.
Sometimes, when it comes to moral issues, the battle lines have already been drawn, and we have to choose sides. So far, I think EAs have been far too gullible and naive about AI safety and the AI industry, and have chosen too often to take the side of the AI industry, rather than the side of humanity.
Helpful suggestions, thank you! Will check them out.
Thanks! Appreciate the suggestion.
Abby—good suggestions, thank you. I think I will assign some Robert Miles videos! And I’ll think about the human value datasets.
Alex—thanks for the helpful summary of this exciting new book.
It looks like a useful required textbook for my ‘Psychology of Effective Altruism’ course (syllabus here), next time I teach it!