EA Hotel Fundraiser 2: Current guests and their projects

This is the second of a series of posts that accompany the EA Hotel fundraiser. The plan was to post a proper EV analysis of the hotel, but that has taken longer than expected. We will post an appetiser in the meantime, with apologies from the kitchen.

Readers may be curious as to the kinds of guests the EA Hotel has so far hosted[1]. Below we have compiled some information on current guests and their projects.

Statistics

We have gathered data from 19 of the 20 residents (as of Jan 22nd).

Prior EA engagement

  • 7 are employed by an EA organisation, paying for/contributing toward their stay

  • 3 have been employed by, or have received grants from, an EA organisation before

  • 5 have applied for grants or jobs at EA organisations, but haven’t been selected (yet)

  • 4 have never applied for EA grants or jobs

The residents have attended EA Global twice on average, ranging from 0 to 6 times. 15 of them have attended at least once. They have attended 1.8 retreats on average.

Education

The residents have had an average of 4.6 nominal years of university-level education. They majored in the following subjects (numbers in brackets are aggregate years of education):

  • Philosophy (12)

  • Psychology (7)

  • Physics (6)

  • AI (5.5)

  • Genetics & animal science (5)

  • CS (5)

  • Politics, philosophy & economics (4)

  • Physics & philosophy (4)

  • Maths & philosophy (4)

  • Maths & CS (4)

  • Chemical engineering (4)

  • Film (4)

  • Maths (4)

  • Earth systems science (3.5)

  • Data science for human behaviour (3.5)

  • Politics & philosophy (3)

  • Philosophy & psychology (3)

  • Public health (1)

  • Health economics (1)

  • Biostatistics (1)

Work experience

Residents have an average of 4.8 years of work experience, normalised to full-time equivalent and only counting work that yields career capital and/or impact. Some of the job titles held include (numbers in brackets are the number of people who held the job):

  • Research (10)

  • Entrepreneurship and/or management (8)

  • Software development (5)

  • Teaching (4)

  • Writing & editing (2)

  • Coaching (2)

  • Engineering (2)

Cause areas

Residents report that their cause areas of interest include:

  • AI Safety/far future (13)

  • EA operations/management/ETG (7)

  • Animal welfare (4)

  • Development/poverty relief (3)

  • Mental health (2)

  • Cause prioritisation (2)

  • Policy (1)

Note that the hotel has so far been cause-neutral in its admissions.

Counterfactuals

Out of 19 residents, 15 would be doing the same work counterfactually, but the hotel allows them to do, on average, 2.2 times more EA work than they otherwise would (as opposed to working a part-time job to self-fund, or burning more runway).
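To make that multiplier concrete, here is a back-of-the-envelope reading of the figure (an illustrative sketch of ours, assuming "2.2 times more" means each of those 15 residents would otherwise sustain only 1/2.2 of their current EA workload):

\[
\text{extra output per resident} = 1 - \frac{1}{2.2} \approx 0.55~\text{FTE}, \qquad 15 \times 0.55 \approx 8.2~\text{FTE}
\]

On that reading, the hotel buys roughly eight extra full-time-equivalent years of EA work per year from this group alone.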

Of those 15, 2 are studying, 7 are doing independent research, 5 are doing charity entrepreneurship, and 1 is doing operations for an EA organisation.

Of the 4 others, 2 residents would work regular jobs, jointly donating $6000 per year in line with the Giving What We Can pledge, 1 would be pursuing a career in the civil service, and 1 would be studying AI Safety.

Qualitative descriptions

Providing a description of one’s work is mandatory for those staying long-term. You can find these on the website. Below is a snapshot covering everyone currently staying, plus (all but two of) the past guests who have stayed more than a month. We have also had on the order of 20 people stay short-term (less than a month) in order to collaborate with other guests or do short work sprints.

Greg Colbourn has been into EA for years but has moved into working on related things (including founding this project) full time relatively recently. Previously, he studied Astrophysics (undergrad) and Earth System Modelling (PhD), and worked on 3D printing/open source hardware (a business with a view to EA). He has dabbled in investing (mainly crypto) and in studying subjects related to AI Safety, both of which he hopes to do more of.

Denisa Pop: “I’m a former counselling psychologist specialised in cognitive-behavioural therapy, and I also have a research background in human-animal interaction (PhD). As a hobby, I enjoy bringing people together (e.g. through organising conferences such as EAGx and TEDx), because I find this a great way for people to inspire and be inspired, as well as to strengthen the bonds within the community. So at the hotel, besides writing a scientific article and offering mental health sessions, I’m organising events together with EA Netherlands.”

Justin Shovelain is the founder of the quantitative long-term strategy organisation Convergence. Over the last seven years he has worked with MIRI, CFAR, EA Global, Founders Fund, and Leverage, and done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. He has an MS degree in computer science and BS degrees in computer science, mathematics, and physics.

David Kristoffersson: “Software engineer, thinker, and organiser. I have a background as an R&D Project Manager and Software Engineer at Ericsson. I’ve worked with FHI. I co-organised the first AI Safety Camp. I’m currently doing AI and existential risk strategy with Convergence, and this is what I’ll be working on at the hotel when I return in September. I enjoy figuring out the most fundamental questions of how reality and humanity work.”

Toon Alfrink is the founder of RAISE, which aims to upgrade the pipeline for junior AI Safety researchers, primarily by creating an online course. He co-founded LessWrong Netherlands in 2016. He has given talks about EA and AI Safety, addressing crowds at various venues including festivals and fraternities. He is also working part time on managing the hotel, using his experience of living in a Buddhist temple as a reference for creating the best possible living and working environment.

Chris Leong is currently focusing his research on infinite ethics, but his side-interests include decision theory, anthropics and paradoxes. He helped found the EA society at the University of Sydney and managed to set up an unfortunately short-lived group at the University of Maastricht whilst on exchange. He represented Australia at the International Olympiad in Informatics and won a Gold in the Asian Pacific Maths Olympiad. He’s studied philosophy and psychology and occasionally enjoys dancing salsa.

Hoagy Cunningham graduated from Oxford in 2017 with a degree in Politics, Philosophy and Economics, and is now teaching himself all the maths, neuroscience and computer science he can get his hands on that might point the way towards a future of safe AI. He currently works for RAISE, porting Paul Christiano’s IDA sequence to their teaching platform and adding exercises.

Davide Zagami completed a bachelor’s degree in Computer Engineering and decided, as an autodidact, to work towards contributing to technical AI safety and alignment research. He strives to learn as much as possible and is hungry for evidence about how he can personally mitigate existential risks. He leads content development at RAISE, a non-profit organisation which is creating an online course on AI safety.

Derek Foster has a background in philosophy, education, public health and health economics. While living at the EA Hotel, he co-authored a chapter of the 2019 Global Happiness Policy Report (to be published on 10 February), which focused on ways of incorporating subjective wellbeing into healthcare prioritisation. He now works on animal welfare, mental health and grantmaking for Rethink Priorities.

Roshawn Terell is an AI researcher, information theorist and cognitive scientist who works to build bridges between distant fields of knowledge. He is mostly self-taught, having worked on multiple research projects, with various published papers and lectures at Oxford and other institutions. He is presently engaged in applying his cognitive science theories towards developing more sophisticated artificial intelligence.

Edward Wise became interested in Effective Altruism at Oxford University, and aims to research the interaction between the ethics of effective altruism and left-wing political philosophy.

Fredi Backtoldt: “I’m studying philosophy at Goethe Universität Frankfurt, currently writing my master’s thesis on the Demandingness Objection to ethical theories. On the side, I started to volunteer for Animal Ethics, where I now also do an internship. The hotel, with its great atmosphere, helps me to put my values into action, and that’s what I’m trying to do here!”

Saulius Šimčikas is a Research Analyst at Rethink Priorities, mostly working on topics related to animal welfare. Previously, he was a research intern at Animal Charity Evaluators, organised Effective Altruism events in the UK and Lithuania, and earned to give as a programmer. Living in the hotel helps him focus on work.

Rhys Southan is a writer and philosopher with a focus on animal ethics and population ethics. Last year he completed a master’s degree in philosophy at the University of Oxford. He has been published in the New York Times, Aeon Magazine and Modern Farmer. While at the EA Hotel, Rhys is working on a novel related to AI alignment, as well as researching and writing on animal ethics. He is also interested in autism and how it affects romantic relationships and mental health.

Matt Goldenberg is a community builder and entrepreneur. His current research is on the systematisation of creating impactful organisations.

Max Carpendale studied philosophy at university. He has been researching and writing on the subject of invertebrate sentience from an EA angle, and has worked with Rethink Priorities on the subject. Max has been interested and involved in EA since 2011, and was interested in many related ideas before then.

Rafe Kennedy works on macrostrategy & AI strategy and studies maths and statistics, with the goal of contributing towards AI Safety. Previous work at the hotel has included game-theoretic modelling of AI development and visualisations of statistical concepts. He holds a master’s in Physics & Philosophy from Oxford and has previously worked as a software engineer at a venture-backed data science startup.

Arron Branton moved from London to Blackpool, quitting his job to focus on learning programming full time. He is currently creating a video game for the Google Play store and Apple’s App Store, planned for release later in 2019. The money raised will go towards helping save human lives in the poorest countries around the world. What kind of game is he working on, I hear you ask? You’ll have to wait and see!

Lee Wright: “I’m currently undergoing a course of self-directed study to prepare myself for an EA-aligned career. Despite my main interest being in global governance and international policy, while at the hotel I’ve focused on developing a general skill set that I think will be useful for any analytics or operations job. Since I’m not working directly on an EA project, and the opportunity cost for me is comparatively lower than for some other guests, I help out with the back-end operations of the hotel where I can.”

Linda Linsefors is an independent AI Safety student and researcher. She has previously completed a PhD in quantum cosmology, organised an AI Safety Camp and interned at MIRI. Linda is currently learning more ML and RL, and also thinking about wireheading and the relation between learning and theory, among other things.

Markus Salmela studies human health, philosophy and social sciences. He has worked on research projects relating to existential risks and long-term forecasting, and has also organised EA events. He is currently writing about longevity research from an existential risk perspective.

Evan Sandhoefner graduated from Harvard in 2017 with a degree in economics and computer science. He worked as a program manager at Microsoft for a short time before leaving to pursue EA work directly. For now, he’s independently studying a wide range of EA-relevant topics, with a particular interest in consciousness.

Conclusion

The question we’re trying to answer is the following: is the average project carried out by EA Hotel residents worth the investment? This overview should give a good first impression.

To further help you make an assessment, we aim to publish (at least) two more posts:

  • What about the risks? How will the hotel filter out projects that are strongly net-negative? How will the hotel protect its residents from bad actors, and prevent incidents that cause unacceptable personal harm or tarnish the reputation of the community? What systems are generally in place to keep both residents and management in check? Is the management competent enough to identify risks and deal with them in a timely manner? Our next post will tackle these concerns and ask readers to come up with plausible scenarios that break our solutions.

  • While it has proven difficult to make a satisfactory EV calculation that accounts for all cases, what we can do is make some assumptions that simplify the calculation, and calculate EV conditional on those assumptions being true. We will attempt to give a lower bound for EV: how likely is it that the residents of the hotel are at least as effective as funding a marginal EA hire? (A sketch of what such a bound might look like follows after this list.) Another observation: much of the expected value of the hotel is in the potential discovery that solutions like the hotel are viable. This would effectively create a new tool for drastically reducing the costs of a major share of EA work. If this hotel doesn’t get funded, it could take several years before a similar project gets started again. How much potential value could be lost during those years? Our fourth post will attempt to calculate EV from these perspectives.
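As a minimal sketch of that lower bound (the symbols here are placeholders of ours for illustration, not figures from the forthcoming post): let p be the probability that a resident’s output is at least as valuable as that of a marginal EA hire, V the annual value of such a hire, N the number of residents supported, and C the hotel’s annual cost. Then:

\[
\mathrm{EV}_{\text{lower}} \;\geq\; p \cdot V \cdot N - C
\]

On this sketch, the hotel clears the bar whenever p > C / (V N); since hosting a resident is intended to cost far less than a typical salary, even a modest p could make the investment worthwhile.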

The ask

Do you like this initiative, and want to see it continue for longer, on a more stable footing, and at a bigger scale? Do you want to cheaply buy time spent working full time on work relating to EA, whilst simultaneously facilitating a thriving EA community hub? Then we would like to ask for your support.

For further instructions, please see our GoFundMe.

If you’d like to give regular support, we also have a Patreon.

To learn more about the hotel, and to book a stay, please see our website.

Thanks to Toon Alfrink for conducting the survey and drafting the post, and to Sasha Cooper and Florent Berthet for comments.


Footnote

[1] Some people have voiced concerns over the kind of people that the EA Hotel would attract. For example, one top comment on an SSC article about the hotel reads:

“This is going to attract people who are into EA and unusually bad at making a living. Is that bad? Not sure, but I expect the most competent EAs have no problem making ends meet in [High Cost Of Living] areas (despite the insane inefficiency of that phenomenon).”

This is a valid concern, but does it reflect reality? If anything, most of our guests have good earning potential, but choose to do other things instead, i.e. direct EA work. We have a general impression of competence, but realise that this is very hard to formally specify. In this post we will instead give you some of the data that has led us to that impression.