[Notes] Steven Pinker and Yuval Noah Harari in conversation

Word count: ~1900

Reading time: ~9 mins

Keywords: Human progress, existential risk, geopolitics, climate change, nuclear war, artificial intelligence, cognition, democracy, surveillance, fake news, technological disruption, politicisation of academia.

Summary

Harari and Pinker are well-known authors of macro-history, and I think their discussion has interesting implications for how we think about the long-run future. I found this conversation interesting and wanted to share the key points in an accessible format, quicker to absorb than a 43-minute video.

I wanted to produce this article in as short a time as possible. I ordered a transcription from Rev.com’s algorithmic speech recognition (which ended up being free), and spent 5 hours writing this summary and formatting the post.

Context

Steven Pinker is a cognitive psychologist, linguist, and popular science author. He is best known for The Better Angels of Our Nature, which argues that violence in the world has declined and suggests explanations for why this has occurred. The book has been a bestseller, with endorsements from people including Bill Gates and Mark Zuckerberg.

Yuval Noah Harari is a lecturer at the Department of History at the Hebrew University of Jerusalem. His books Sapiens, Homo Deus, and 21 Lessons for the 21st Century have sold over 23 million copies worldwide. His writings examine global history, technology, free will, consciousness, suffering, intelligence, and happiness.

Both Pinker and Harari have written macro-histories with significant thematic breadth that have influenced popular culture. Better Angels is Nick Beckstead’s top recommended audiobook. Sapiens is in Robert Wiblin’s Top 9 books.

Key points

Both writers share concerns about climate change, nuclear war, and technological disruption. Pinker tends to take optimistic stances, arguing that past improvements suggest humanity could continue to make progress in the future. He is sceptical of the potential speed of technological development, and sees human society as robust and progressive.

Harari raises long-term questions, and frets that we are approaching potential tipping points of technological disruption. He voices concerns about the loss of individual autonomy and the potential rise of digital dictatorships.

Pinker and Harari find agreement on several topics, chiefly in their shared uncertainty about the long-run future. Harari’s website recommends Enlightenment Now as part of his list, A Haphazard Guided Tour of Humanity on the Brink:

“Pinker extols the amazing achievements of modernity, and demonstrates that humankind has never been so peaceful, healthy and prosperous. There is of course much to argue about, but that’s what makes this book so interesting”.

Part of this tension is resolved by Harari allowing for flexibility over timescales, arguing that 50-100 year timescales are short relative to human history. My view is that what distinguishes them is tone and emphasis. Should we be optimists, pessimists, or realists?

Potential implications for the effective altruism community

If we see the future of humanity as positive, then Nick Bostrom suggests that we should act to reduce existential risk. Nick Beckstead’s PhD thesis (p. 85) makes a similar claim:

‘The key claims are that humanity could survive for a very long time, with an expected duration on the order of billions of years or more; that the future is overwhelmingly important if my normative assumptions are true; that we could potentially shape the future for the better by speeding up progress, reducing existential risk, or producing other positive trajectory changes; and that what matters most for shaping the far future is creating positive trajectory changes. The best ways of shaping the far future could be very broad or very targeted, and knowing which would be very valuable.’

If we assume that a Pinker trajectory continues, and stuff gets better, then reducing big, sexy risks like AI, nuclear, and biosecurity seems important. But if we take the Harari view that ‘things might get much, much worse’, then perhaps some EAs might also prioritise shaping the trajectories of topics like democracy and surveillance, while others focus on AI, bio, and nuclear.

See my further reading list below!

Selected quotes

Optimism vs pessimism

Pinker

“We have the ability to think up solutions to problems [and] to share them via language”
“Our lifespans have more than doubled… rates of death in war have come down, [and declining] rates of homicide, violence against women, [and] disease [all point] out that we have made progress in the past.”
“Whether [there’s] cause for optimism in the future is impossible to say. No one is a prophet [who can say] that we’re doomed… Maybe things will get worse, but [they] won’t necessarily get worse, given that we know that we’ve solved problems in the past.”

Harari

“[I would] summarize the current human condition in three brief sentences: things for humans are better than ever; things are still quite bad; and things can get much, much worse.”

Outlook on the future

Pinker

“Climate change is the most obvious [threat to humanity]. We’re not on track to solving it, and there’s every reason to believe that the consequences could be terrible.”
“And the threat of nuclear war… it’s not negligibly unlikely. It’s a high enough probability that we should worry about it. As with climate change, the direction that we moved in [over] the last five years has not been positive.”

Harari

“The risk of disruptive technologies, especially artificial intelligence, which of course [holds] also enormous promises to humankind, but also some very serious threats, whether it’s a complete social upheaval as a result of changing the job market very, very quickly, [or] the rise of new digital dictatorships and totalitarian regimes worse than anything we’ve seen before in history.”
“And maybe the biggest problem with all that is that for all three threats, whether we talk about nuclear war or climate change or the rise of disruptive technologies, to do something effective against the threat you need global cooperation”
“And I sometimes have a suspicion that we are, like, running on the last gas in [our] philosophical gas tank… climate change and nuclear war in a way are kind of easy problems because we know [what] we want to do about it. We need to prevent them. It’s very easy. Maybe not everybody agrees that it’s a real threat. Maybe not everybody agrees how to stop it. But in principle nobody says, ‘Hey, climate change, that’s great, let’s have more of that. Nuclear war? Yes, I’m in favour.’ Nobody says that. But with technological disruption, what to do [about] AI and bio-engineering, there is absolutely no agreed goal.”

Surveillance states, fake news

Pinker

“I’m a bit more skeptical of how rapidly there’ll be advances in artificial intelligence, genetic engineering of humans, and psychological manipulation”
“Humans have a lot of squeamishness and taboos that often will retard technological progress”
“The issue is, are the ordinary expectations of people… who are not subject to occupation, who are living in a democracy, going to be robust enough… to rise to the occasion of resisting that kind of constant surveillance?”
“Even the simple [AI] problems turn out to be harder than we think. When it comes to hacking human behaviour, it’s all the more complex”
“The studies of the effects of fake news on social media [show] that the effects are very small and probably did not influence the election. Most of the fake news went to people who [were] already highly partisan and whose minds weren’t going to change. It’s not as easy to manipulate human behaviour as we might fear in our dystopian nightmares.”

Algorithmic discrimination

Pinker

“Clinical decision making… five predictors [can] make a decision much better than a typical human judge, or diagnosing disease… we’ve known this for almost 70 years… subjective impressions are subject to bias and error, including racist bias… But we don’t hand it over to algorithms.”

Harari

“I do think that there is a chance we’ll see some version of digital dictatorships in totalitarian regimes based on this massive surveillance and analysis of humans”
“You just have machines going over all the data. And again, this is not science fiction. This is happening in various parts of the world. It’s happening now in China. It’s happening now in my home country, in Israel… you just have these very sophisticated algorithms going over enormous amounts of data over millions of people. And that’s a complete game changer.”
“But what will happen if and when efficiency and ethics go in different directions, [so] that totalitarianism becomes very effective, but it’s still extremely unethical? Would our ethical kind of constraints and ideas hold in that situation?”
“So I’m not thinking about this science fiction scenario of an AI that micromanages every movement of your day. It starts with far simpler things, of just shifting more and more authority to the AI to decide who to accept [to] the university, who to hire for the job, and whom to date.”

Politicisation of academia

Pinker

“There certainly is cause for concern about intellectual openness in the institutions that are supposed to promote it, namely universities. There has been an ideological narrowing; that is, universities are becoming more mono-cultures of left-wing thought.”
“On the other hand, there’s some optimism in those of us who are worried about authoritarian populism in that it is kind of an old person’s ideology and the support for populism falls off with generational cohort”

Harari

“We shouldn’t generalize from the culture wars in the US to the world as a whole… far worse things are happening in places like Hungary, like Russia, where the suppression is definitely the other way around, [and] entire departments are being closed”
“[Take] the suppression of several departments at present in Hungary or Russia. Gender studies is being blamed for being not science [but] politics, ideology. But this will happen to more and more departments. We shouldn’t abandon the gender studies department in its fight, because it will come to more and more departments. Now climate science is also politicized, and soon computer science will be politicized.”

Responsibility of scientists

Harari

“First of all, scientists need to educate… you do need a better understanding of what’s happening and what’s coming, because it’s very relevant to political decisions.”
“Secondly, scientists have to take greater responsibility for what they are doing. For example, if you’re an engineer and you’re developing some new tool in any field, I would say: take a few minutes or a few hours, think about the politician you most fear in the world, and now think, what will he or she do with my invention? The general tendency of engineers and entrepreneurs is to think about the best-case scenarios.”

Further reading

Of the recommendations above, these seem most relevant:

I recently enjoyed this, and it picks up many similar themes:

A talk on EAs and surveillance by Ben Garfinkel

Risk typology and cognitive biases