EA Hotel Fundraiser 2: Current guests and their projects
This is the second of a series of posts that accompany the EA Hotel fundraiser. The plan was to post a proper EV analysis of the hotel, but that has taken longer than expected. We will post an appetiser in the meantime, with apologies from the kitchen.
Readers may be curious as to the kinds of guests the EA Hotel has so far hosted[1]. Below we have compiled some information on current guests and their projects.
Statistics
We have gathered data from 19/20 residents (as of Jan 22nd).
Prior EA engagement
7 are employed by an EA organisation, paying for/contributing toward their stay
3 have been employed by, or have received grants from, an EA organisation before
5 have applied for grants or jobs at EA organisations, but haven’t been selected (yet)
4 have never applied for EA grants or jobs
The residents have attended EA Global twice on average, ranging from 0 to 6 times. 15 of them have attended at least once. They have attended 1.8 retreats on average.
Education
The residents have an average of 4.6 nominal years of university-level education each. They majored in the following subjects (numbers in brackets are aggregate years of education):
Philosophy (12)
Psychology (7)
Physics (6)
AI (5.5)
Genetics & animal science (5)
CS (5)
Politics, philosophy & economics (4)
Physics & philosophy (4)
Maths & philosophy (4)
Maths & CS (4)
Chemical engineering (4)
Film (4)
Maths (4)
Earth systems science (3.5)
Data science for human behaviour (3.5)
Politics & philosophy (3)
Philosophy & psychology (3)
Public health (1)
Health economics (1)
Biostatistics (1)
Work experience
Residents have an average of 4.8 years of work experience, normalised to full-time equivalent, counting only work that yields career capital and/or impact. Roles held include (numbers in brackets are the number of people who held such a role):
Research (10)
Entrepreneurship and/or management (8)
Software development (5)
Teaching (4)
Writing & editing (2)
Coaching (2)
Engineering (2)
Cause areas
Residents report that their cause areas of interest include:
AI Safety/far future (13)
EA operations/management/ETG (7)
Animal welfare (4)
Development/poverty relief (3)
Mental health (2)
Cause prioritisation (2)
Policy (1)
Note that the hotel has so far been cause-neutral in its admissions.
Counterfactuals
Out of 19 residents, 15 would be doing the same work counterfactually, but the hotel allows them to do, on average, 2.2 times more EA work—as opposed to working a part time job to self-fund, or burning more runway.
Of those 15, 2 are studying, 7 are doing independent research, 5 are doing charity entrepreneurship, and 1 is doing operations for an EA organisation.
Of the 4 others, 2 residents would work regular jobs, jointly donating $6000 per year in line with the Giving What We Can pledge, 1 would be pursuing a career in the civil service, and 1 would be studying AI Safety.
Qualitative descriptions
Providing a description of one's work is mandatory for those staying long-term. You can find these on the website. Below is a snapshot of all current guests, plus (all but two of) past guests who have stayed more than a month. We have also had on the order of 20 people stay short-term (less than a month) in order to collaborate with other guests or do short work sprints.
Greg Colbourn has been into EA for years but has moved into working on related things (including founding this project) full time relatively recently. Previously, he studied Astrophysics (undergrad) and Earth System Modelling (PhD), and worked on 3D printing/open source hardware (a business with a view to EA). He has dabbled in investing (mainly crypto) and in studying subjects related to AI Safety, which he hopes to do more of.
Denisa Pop—“I’m a former counselling psychologist specialised in cognitive-behavioural therapy and I also have a research background in human-animal interaction (PhD). As a hobby, I enjoy bringing people together (e.g. through organising conferences such as EAGx, TEDx), because I find this to be a great way for people to inspire and to get inspired, as well as to strengthen the bonds within the community. So at the hotel, besides writing a scientific article and offering mental health sessions, I’m organising events together with EA Netherlands.”
Justin Shovelain is the founder of the quantitative long term strategy organisation Convergence. Over the last seven years he has worked with MIRI, CFAR, EA Global, Founders Fund, and Leverage, and done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. He has an MS degree in computer science and BS degrees in computer science, mathematics, and physics.
David Kristoffersson: “Software engineer, thinker, and organiser. I have a background as R&D Project Manager and Software Engineer at Ericsson. I’ve worked with FHI. I co-organised the first AI Safety Camp. I’m currently doing AI and existential risk strategy with Convergence, and this is what I’ll be working on at the hotel when I return in September. I enjoy figuring out the most fundamental questions of how reality and humanity work.”
Toon Alfrink is the founder of RAISE, which aims to upgrade the pipeline for junior AI Safety researchers, primarily by creating an online course. He co-founded LessWrong Netherlands in 2016. He has given talks about EA and AI Safety, addressing crowds at various venues including festivals and fraternities. He is also working part time on managing the hotel, using his experience of living in a Buddhist temple as a reference for creating the best possible living and working environment.
Chris Leong is currently focusing his research on infinite ethics, but his side-interests include decision theory, anthropics and paradoxes. He helped found the EA society at the University of Sydney and managed to set up an unfortunately short-lived group at the University of Maastricht whilst on exchange. He represented Australia at the International Olympiad in Informatics and won a Gold in the Asian Pacific Maths Olympiad. He’s studied philosophy and psychology and occasionally enjoys dancing Salsa.
Hoagy Cunningham graduated from Oxford in 2017 with a degree in Politics, Philosophy and Economics, and is now teaching himself all the Maths, Neuroscience and Computer Science he can get his hands on that might point the way towards a future of safe AI. He currently works for RAISE, porting Paul Christiano’s IDA sequence to their teaching platform, and adding exercises.
Davide Zagami completed a bachelor’s degree in Computer Engineering and decided to head as an autodidact towards contributing to AI safety and AI alignment technical research. He strives to learn as much as possible and is hungry for evidence about how he can personally mitigate existential risks. He leads the content development of RAISE, a non-profit organisation which is creating an online course for AI safety.
Derek Foster has a background in philosophy, education, public health and health economics. While living at the EA Hotel, he co-authored a chapter of the 2019 Global Happiness Policy Report (to be published on 10 February), which focused on ways of incorporating subjective wellbeing into healthcare prioritisation. He now works on animal welfare, mental health and grantmaking for Rethink Priorities.
Roshawn Terell is an AI researcher, information theorist, and cognitive scientist who works to build bridges between distant fields of knowledge. He is mostly self-taught, having worked on multiple research projects, with various published papers and lectures at Oxford and other institutions. He is presently engaged in applying his cognitive science theories towards developing more sophisticated artificial intelligence.
Edward Wise became interested in Effective Altruism at Oxford University, and aims to research the interaction between the ethics of effective altruism and left-wing political philosophy.
Fredi Backtoldt—“I’m studying philosophy at Goethe Universität Frankfurt, currently writing my master’s thesis on the Demandingness Objection to ethical theories. On the side, I started volunteering for Animal Ethics, where I now also do an internship. The hotel, with its great atmosphere, helps me to put my values into action, and that’s what I’m trying to do here!”
Saulius Šimčikas is a Research Analyst at Rethink Priorities, mostly working on topics related to animal welfare. Previously, he was a research intern at Animal Charity Evaluators, organised Effective Altruism events in the UK and Lithuania, and earned to give as a programmer. Living in the hotel helps him focus on work.
Rhys Southan is a writer and philosopher with a focus on animal ethics and population ethics. Last year he completed a master’s degree in philosophy at the University of Oxford. He has been published in the New York Times, Aeon Magazine and Modern Farmer. While at the EA Hotel, Rhys is working on a novel related to AI alignment, as well as researching and writing on animal ethics. He is also interested in autism and how it affects romantic relationships and mental health.
Matt Goldenberg is a community builder and entrepreneur. His current research is on the systematisation of creating impactful organisations.
Max Carpendale studied philosophy at university. Max has been doing research and writing on the subject of invertebrate sentience from an EA angle. He has worked with Rethink Priorities on the subject. Max’s been interested and involved with EA since 2011 and has been interested in many related ideas before then.
Rafe Kennedy works on macrostrategy & AI strategy and studies maths and statistics, with the goal of contributing towards AI Safety. Previous work at the hotel has included game-theoretic modelling of AI development and visualisations of statistical concepts. He holds a master’s in Physics & Philosophy from Oxford and has previously worked as a software engineer at a venture-backed data science startup.
Arron Branton moved from London to Blackpool, quitting his job to focus on learning programming full time. He is currently creating a video game for the Google Play store and Apple’s App Store, planned for release later in 2019. The money raised will go towards helping save human lives in the poorest countries around the world. ‘What kind of game is he working on?’ (I hear you ask). You’ll have to wait and see!
Lee Wright—“I’m currently undergoing a course of self-directed study to prepare myself for an EA-aligned career. Despite my main interest in global governance and international policy, while at the hotel I’ve focused on developing a general skill set that I think will be useful for any analytics or operations job. Since I’m not working directly on an EA project, and the opportunity cost for me is comparatively lower than some other guests, I help out with the back-end operations of the hotel where I can.”
Linda Linsefors is an independent AI Safety student and researcher. She has previously completed a PhD in quantum cosmology, organised an AI Safety Camp and interned at MIRI. Linda is currently learning more ML and RL, and also thinking about wireheading and the relation between learning and theory, among other things.
Markus Salmela—Markus studies human health, philosophy and social sciences. He has worked on research projects relating to existential risks and long term forecasting, and also organised EA-events. He is currently writing about longevity research from an existential risk perspective.
Evan Sandhoefner graduated from Harvard in 2017 with a degree in economics and computer science. He worked as a program manager at Microsoft for a short time before leaving to pursue EA work directly. For now, he’s independently studying a wide range of EA-relevant topics, with a particular interest in consciousness.
Conclusion
The question we’re trying to answer is the following: is the average project carried out by EA Hotel residents worth its investment? This overview should give a good first impression.
To further help you make an assessment, we aim to publish (at least) 2 more posts:
What about the risks? How will the hotel filter out projects that are strongly net-negative? How will the hotel protect its residents from bad actors, and prevent incidents that cause unacceptable personal harm and tarnish the reputation of the community? What systems are generally in place to keep both residents and management in check? Is the management competent enough to identify risks and deal with them in a timely manner? Our next post will tackle these concerns and ask readers to come up with plausible scenarios that break our solutions.
While it has proven difficult to make a satisfactory EV calculation that accounts for all cases, what we can do is make some assumptions that make the calculation easier, and calculate EV conditional on those assumptions being true. We will attempt to give a lower bound for EV: how likely is it that the residents of the hotel are equally or more effective than funding a marginal EA hire? Another observation: much of the expected value of the hotel is in the potential discovery that solutions like the hotel are viable. This would effectively create a new tool for drastically reducing the costs of a major share of EA work. If this hotel doesn’t get funded, it could take several years before a similar project gets started again. How much potential value could be lost during those years? Our fourth post will attempt to calculate EV from these perspectives.
The ask
Do you like this initiative, and want to see it continue for longer, on a more stable footing, and at a bigger scale? Do you want to cheaply buy time spent working full time on work relating to EA, whilst simultaneously facilitating a thriving EA community hub? Then we would like to ask for your support.
For further instructions please see our GoFundMe.
If you’d like to give regular support, we also have a Patreon.
To learn more about the hotel, and to book a stay, please see our website.
Thanks to Toon Alfrink for conducting the survey and drafting the post, and Sasha Cooper and Florent Berthet for comments.
Footnote
[1] Some people have voiced concerns over the kind of people that the EA Hotel would attract. For example, citing one top comment on a SSC article about the hotel:
“This is going to attract people who are into EA and unusually bad at making a living. Is that bad? Not sure, but I expect the most competent EAs have no problem making ends meet in [High Cost Of Living] areas (despite the insane inefficiency of that phenomenon).”
This is a valid concern, but does it reflect reality? We think it closer to the truth that most of our guests have good earning potential, but choose to do other things instead—i.e. direct EA work. We have a general impression of competence, but realise that this is very hard to formally specify. In this post we have instead given you some of the data that has led us to that impression.
Some other considerations:
Living in the hotel might make people work more, because less effort needs to be expended on taking care of food, having a social life, etc.
For some having a place to work around other like-minded people is important for productivity
People sometimes learn relevant EA things from each other during daily conversations
Living in the hotel might prevent value drift and increase the engagement with EA (this is probably the most important one)
We hope to quantify as much of this as possible in our 4th post
Note that with 3 months’ runway remaining, we are at a stage where a single medium-size funder can have an outsized impact. Our costs are ~£8k/month, so even buying a month or two of runway would make a big difference in terms of giving us some breathing space to work on getting more money coming in. Funding is also approximately continuously divisible, in that every ~£265 will keep us going for another day.
As things stand, we are getting close to the point where we will have to radically alter the nature of the project (i.e. start charging people/kicking them out if we can’t otherwise support them and their work).
Does the ~£265 figure take into account rent from those who are paying it?
No; factoring in rent paid, costs were ~25% lower for January and February so far (although it’s hard to give a precise figure going forward, with changes in occupancy etc.).
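For readers who want to check the runway arithmetic in the comments above, here is a quick sketch (the ~30-day month length is implied by the post’s figures, not stated; the 25% rent adjustment is applied naively for illustration):

```python
# Sanity-check the fundraiser arithmetic from the post.
monthly_cost_gbp = 8000   # stated running costs: ~£8k/month
daily_cost_gbp = 265      # stated: ~£265 keeps the hotel going another day

# Implied month length: 8000 / 265 ≈ 30.2 days, consistent with a calendar month
days_per_month = monthly_cost_gbp / daily_cost_gbp
print(f"Implied month length: {days_per_month:.1f} days")

# Cost of buying extra runway at the gross rate
for months in (1, 2, 3):
    print(f"{months} month(s) of runway: £{monthly_cost_gbp * months:,}")

# Net of rent paid by employed guests (~25% lower, per the comment above)
net_daily_cost_gbp = daily_cost_gbp * 0.75
print(f"Approx. net cost per day: £{net_daily_cost_gbp:.0f}")
```

This is only a back-of-the-envelope consistency check, not part of the official EV analysis promised in the follow-up posts.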
Also see:
EA Hotel Fundraiser 1: the story
The initial forum post that explains the EA Hotel in detail
Slate Star Codex post about the hotel
Hotel’s website
Great to see so many folks working on cool stuff at the EA Hotel!
Thank you for taking the time to write this up, and for everything else you’ve done to make this happen.
As a heads up—we intend to hire a new manager to start mid-June (as Toon will be leaving at the end of June). We will hold off on officially advertising the role until we have a better idea of our funding situation, but are open to any expressions of interest.