Why you should consider going to EA Global

My main motivation behind writing this is to help you consider whether going to an Effective Altruism Global (EAG) conference this year is worth it. After having doubted the value, I was convinced otherwise at EAGx Oxford 2016. Therefore, I’m sharing my personal highlights from that weekend to attempt to demonstrate that these conferences are among the most valuable events *anyone* can attend because they have great content, a unique framework and exceptional attendees.

Let’s start with a quick overview of the topics from the official sessions I attended:

  • High Impact Career Planning

  • Probability and Statistics

  • Applied Rationality

  • Research Heuristics

  • Logical Fallacies

  • International Development

  • Founding Organisations

  • Big Data

  • Catastrophic and Existential Risks

  • Utilitarians

  • On What Matters (III)

Most of the time, two other sessions ran simultaneously, and a lot of high-profile speakers didn’t make choosing any easier (click here for the detailed schedule). So within our Genevan team, we made sure we covered all relevant sessions and exchanged notes afterwards. I personally missed the Artificial Intelligence-related sessions, so none of them will be part of this post, but if that’s what you’re interested in, take a look at this post that I randomly found while absolutely not procrastinating.

Highlights

Humans

Online, the EA community can sometimes seem less heartwarming than it really is (like me). However, once you make it to an in-person event, it is hard to miss that the movement consists of many lovely humans trying their best to make sure we figure things out in time. Humans are the top highlight of any EA event. Everyone is warm (±37°C, ideally), open-minded, reasonable and curious. Conversations range from casual chatting to serious truth-seeking, and everyone is super knowledgeable about the most diverse subjects. Even better, you can ask anyone anything and they’ll be happy to help you out.

I used my free time to reconnect with friends, meet new people and process all the input. The general vibe is super easygoing, almost like you’re at a music festival in Portugal, but you’re not. You’re in one of the world’s academic capitals in chilly, rainy England. The speakers could often be spotted at other sessions, too, blending in with the crowd. Since they acted like mere muggles, you could even ask them mundane questions. Thus, if you want to see what this movement is all about, be inspired and gain motivation to do something: go meet its human subsets at an EAG conference and you will have a hard time not liking it (for those of you who will still have a hard time because you care more about humanity than about its individual subsets, I wrote the next section; I can understand you, sometimes).

Top three sessions

Workshop appetisers from the Center for Applied Rationality (CFAR)

If you’re serious about ensuring our best possible future, CFAR is dedicated to turning you into the best goal-achiever ever. At the conference, they served three appetisers from their immersive 4-day curriculum, compressed into short, one-hour sessions. They assumed that the crowd at the conference was advanced enough to handle the implementation independently; hence, they mainly explained each technique and the reasoning behind it.

I had heard about CFAR and their work, but I wasn’t aware of just how useful it would be. I can honestly say that I now think more deliberately about how we go about doing things, from discussing to making plans to implementing new habits. On top of that, Duncan, the coach, had a lot of great analogies and remarks that made the sessions very enjoyable. I hope to attend their full course soon because there’s still far too much self-improvement to be done.

“That’s what you get if you’re running computers made of meat that wrote themselves.”

- Duncan, CFAR coach

To give you a more concrete idea, here are the three techniques we learned, with explanatory links:

  • Building Blocks of Behaviour Change: ‘Trigger-Action Plans’
    TAPs create an incremental transition by iterating a basically zero-effort, three-step process that is designed to “summon your sapience” and let you rewire your brain.

  • Navigating Intellectual Disagreement: ‘Double crux’
    A technique to turn disagreements into a collaborative search for truth, or, at the very least, a way for you to learn as much as possible from other worldviews and gather more data.

  • Overcoming Planning Biases: ‘Murphy-Jitsu’
    Murphy-Jitsu is designed to make us think about things we actually can anticipate but usually don’t when making future plans. The obvious often is non-obvious to us.

“The universe is a dark maze and at some point, all of us run into a wall.

Face first.

Because we had a belief of where to go.”
- Duncan

Presentation: What do People think about Utilitarians?

This talk by Molly Crockett, on her research and the conclusions she had come to, was quite interesting because a large part of the EA community identifies as some kind of utilitarian. However, the word alone seems to divide crowds. Thus, I was eagerly hoping for a few insights on how one can avoid coming across as cold and heartless when presenting trade-offs and calculations that, even when based on global empathy, come off as inhuman(e).

Crockett’s lab found that utilitarians are generally seen as (i) less trustworthy; (ii) less empathic; and (iii) less likely to cooperate. She even claims that humans have developed a default morality: deontology. That might be to signal our value as a cooperator on the partnership market, rooted in the value of the implicit social contracts that most of our societal fabric relies on. And utilitarian logic poses a direct threat to this fabric. Therefore, people who claim that it’s obvious or easy to sacrifice something, even for the greater good, quickly alienate themselves from society because the rest of the group fears being used.

It follows that, if we really want to appeal to evidence and reason in our decision-making, we ought to appeal simultaneously to our ‘why’: our values, our altruism. Without that understanding, it is understandably off-putting to listen to statistics and cost-benefit analyses. For EA, that means we need to emphasise the ‘A’ part of the movement more proactively, especially when talking about the ‘E’. Additionally, the movement could make more of an effort to support individual autonomy and diversity, building an unshakeable basis of trust so that it can thrive and isn’t misunderstood.

Presentation: Heavy Tails & Power Laws

“Normal is not normal!” proclaimed the Future of Humanity Institute (FHI)’s Anders Sandberg. He started his fun talk by explaining why the ‘normal distribution’, or ‘bell curve’, should really only be called the ‘Gaussian distribution’: except for well-known things like the intelligence of humans and rolling dice, the Gaussian distribution and the Central Limit Theorem can be very misleading. That is because we live in ‘Extremistan’, and figuring out things we don’t already know requires a different mindset here.

In Extremistan, freak events (or, more beautifully, ‘Black Swans’) occur. And when such events occur, they are far more intense than usual events, often triggering further extreme events. This is due to the complex (inter-)dependencies and correlations in our world. Therefore, a real-world distribution might match a bell curve at its centre, but the tails are nowhere close. There are a lot of cascade effects in Extremistan, with its fractal-geometrical nature, so expecting most values to lie within two or three standard deviations of the mean is a dangerous assumption we tend to make intuitively.
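To make that concrete, here is a minimal sketch of my own (not from Anders’ talk, and the distribution choice and parameters are purely illustrative assumptions): it compares how likely a value more than k standard deviations above the mean is under a Gaussian versus a heavy-tailed Pareto distribution with the same mean and standard deviation.

```python
from scipy import stats

# Heavy-tailed Pareto distribution; shape alpha = 3 so mean and variance exist.
alpha, xm = 3.0, 1.0
pareto = stats.pareto(b=alpha, scale=xm)
mean, sd = pareto.mean(), pareto.std()

# Gaussian with the *same* mean and standard deviation, for comparison.
gaussian = stats.norm(loc=mean, scale=sd)

for k in (2, 3, 4, 6):
    threshold = mean + k * sd
    p_gauss = gaussian.sf(threshold)   # P(X > mean + k*sd) under the Gaussian
    p_pareto = pareto.sf(threshold)    # ... and under the heavy-tailed Pareto
    print(f"beyond {k} sd: Gaussian {p_gauss:.1e}  Pareto {p_pareto:.1e}  "
          f"ratio {p_pareto / p_gauss:,.0f}x")
```

Even at three standard deviations the heavy-tailed distribution is already about an order of magnitude more likely to produce an ‘extreme’ value, and the gap explodes into the millions further out in the tail.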

Anders’ talk illustrated how dangerous oversimplifications are, and how unaware we are of so-called ‘Dragon Kings’. Or, to say it less beautifully: how unaware we are of maxima caused by non-linear dynamics in complex systems, creating statistical outliers that tower above anything we’ve seen before. Yet, there is hope: studying these dynamics in detail might allow us to see more Black Swans as Dragon Kings, events that we could predict with more complex models. Instead of saying “oh, that was unlikely”, we ought to say “oh, the model was wrong” an awful lot more often.

This is why the EA movement is trying to figure out how to get into those heavy tails: “thinking meta matters!” Figuring out where tails actually cut off and finding the Dragon Kings could help us prepare for extreme events. No matter how probable such events are, with Dragon Kings you’re better safe than sorry. Anders is trying to do exactly that at the FHI: drawing nature’s lottery tickets and stacking the deck wherever possible. In addition to that, he’s a terrific speaker. Here are the slides to this talk. He also gave a presentation on Human Enhancement that I regret not having attended, because these slides alone are already incredibly interesting.

General takeaways

Many different people were talking about policy work as a potential top priority, which convinced me that the movement keeps updating in the right direction. The same goes for designing and giving presentations: at previous events, I was always a little baffled at how bad the slides were and how unprepared some speakers seemed, but this time I saw only one presentation I could say that about. Beyond the more professional presentations, the general organisation and management were handled extremely well, and even the vegan food choices were quite nice.

Other than that, I was astonished at how much more low-hanging fruit there seems to be in fighting extreme poverty. Two talks on international development outlined how much better we could use (big) data if only it were all publicly available. How much would that alone contribute to ending poverty? $3 trillion per year in value, claims Alena Stern from AidData, who also emphasised that development aid wasn’t scientific at all before the nineties. Additionally, if programs weren’t divided along country borders but focused only on the poorest regions, we could do a lot more for those who are worst off. Further, more cheaply than running randomised controlled trials, we could analyse geospatial data and satellite imagery to set up quasi-experiments reaching all the way back to the eighties. However, even when such data is available, data illiteracy and a lack of trust and education are still significant hindrances in the relevant areas. It seems we’re still at the beginning of the data revolution, after all.

The next paragraph is a combination of different talks with implications for the movement’s general strategy. Taken from and inspired by:

(i) Amanda Askell’s “Look, Leap or Retreat”

(ii) Owen Cotton-Barratt’s “Prospecting For Gold”

(iii) Stefan Schubert’s “The Younger Sibling Fallacy”

(i) Often, looking and doing research is worth a lot more than we’d intuitively expect, especially when the expected value is unclear, because additional data points then allow us to be significantly more precise about the possible positive outcomes of an action. (ii) If we want to motivate others to join in the (re-)search, we should plan strategically at the group level to make the most of each person’s comparative advantage, and (ii) we need to avoid shortening our message, as not being understood means risking costly deviations. (iii) As we have the tendency to see others as less proactive than ourselves, we often dismiss their capacities and overlook the (potential for) cascade effects our actions have, (ii) which means that we should always try to move furthest into the very end of the heavy tails.

One last thing that was emphasised multiple times was the value of asking. If we don’t understand something, if we don’t know where to start, if we want people to support us, there’s one simple trick: just ask. We tend to feel like there is some social cost to asking, but simply asking can provide extremely valuable support and only has downsides if we do it too often. So far, most of us don’t do it enough.

Conclusion

Before going, I didn’t expect much from the conference beyond socialising and a lot of fuzzies caused by the humans present. Most of the sessions didn’t sound like they were going to provide much beyond what I had been reading about every day for the past two years. But then the pre-conference workshops alone blew my low expectations out of the water before anything had officially started. Only then came the lovely humans and more mind-broadening sessions. And more lovely humans (whom you can ask anything without hesitation). At the very worst, some talks were a little too superficial, but nothing was bad. My sole suggestion for improvement would be to introduce different “difficulty” tracks, allowing newbies and savants to enjoy different sessions, instead of dividing tracks along topics. That seems complicated to implement, though.


So, seriously, the ‘mind-broadenings per minute’ and the ‘density of lovely people per m²’ seem to reach the global optimum at EAG. Go, sign up, be one of these lovely people this year. There’s also a lot of financial support available if that’s what’s keeping you from it.
___

Originally posted on the EA Geneva blog.