EA Hotel Fundraiser 4: Concrete outputs after 10 months

Due to popular demand, we’re publishing a list of concrete outputs that hotel residents have produced during their stays.

Note that interpreting this list comes with some caveats:

  • While the output per dollar is very high, this might not be the best metric. As an intuition pump, consider that sending an EA to prison has a higher output/$ ratio than the EA Hotel, because prison is free. The ideal metric would be the hotel’s marginal output divided by the marginal output of a counterfactual donation, but this is hard to estimate. As a proxy, we suggest looking at output per person per day. Does this seem high compared to that of an average EA?

  • This list doesn’t cover everything of value the hotel has supported. Some residents have spent months working on things they haven’t yet published. Others have spent most of their time on self-therapy, reading books, having research conversations, or developing presentations and workshops. The hotel is in a unique position to support this kind of hard-to-verify work.

Most of the data is in. We will keep an up-to-date version of this post live at eahotel.org/outputs.

Total expenses as of March 2019

Money: So far ~£66,500* has been spent on hosting our residents, of which ~£7,600 was contributed by residents.

Time: ~4,000 person-days spent at the hotel.
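
(A rough back-of-the-envelope on the figures above: ~£66,500 / ~4,000 person-days ≈ £17 per person-day, or ≈ £15 per person-day net of the ~£7,600 contributed by residents.)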

Outputs as of March 2019

Summary

  • 3 scalable EA organisations incubated

  • 1 online course produced

  • 29 posts on LessWrong and the EA Forum (with a total of ~1,000 karma)

  • 4 EA retreats hosted, 2 organised

  • 12 online courses followed

  • 2 internships and 1 job earned at EA organisations

Anonymous 1:
One 3-month work trial earned at a prominent X-risk organisation

RAISE:
(context)
Nearly the entirety of this online course was created at the hotel

Linda Linsefors:
Posts on the Alignment Forum:
Optimization Regularization through Time Penalty (12)
The Game Theory of Blackmail (24)

Chris Leong:
“I’ve still got a few more posts on infinity to write up, but here are the posts I’ve made on LessWrong since arriving [with estimates of how likely they were to be written had I not been at the hotel]:
Summary: Surreal Decisions [50%] (27)
An Extensive Categorisation of Infinite Paradoxes [80%] (-4)
On Disingenuity [50%] (34)
On Abstract Systems [50%] (14)
Deconfusing Logical Counterfactuals [75%] (18)
Debate AI and the Decision to Release an AI [90%] (8)”

John Maxwell:
Courses taken:
Improving Your Statistical Inferences
MITx Probability
Statistical Learning
Formal Software Verification
ARIMA Modeling with R
Introduction to Recommender Systems
Text Mining and Analytics
Introduction to Time Series Analysis
Regression Models

Anonymous 2:
Courses:
Probabilistic Graphical Models
Model Thinking
MITx Probability
LessWrong posts:
Annihilating aliens & Rare Earth suggest early filter (8)
Believing others’ priors (9)
AI development incentive gradients are not uniformly terrible (23)
EA Forum post:
Should donor lottery winners write reports? (29)

Retreats hosted:

  • EA London Retreats:
    Life Review Weekend (Aug. 24th – 27th)
    Careers Week (Aug. 27th – 31st)
    Holiday/EA Unconference (Aug. 31st – Sept. 3rd)

  • EA Glasgow (March 2019)

Denisa Pop:
Helped organise the EA Values-to-Actions Retreat
Helped organise the EA Community Health Unconference

Toon Alfrink:
EA Forum posts:
EA is vetting-constrained (96)
The Home Base of EA (12)
Task Y: representing EA in your field (11)
LessWrong posts:
We can all be high status (61)
The housekeeper (26)
What makes a good culture? (30)

Matt Goldenberg:
The entirety of Project Metis
Posts on LessWrong:
The 3 Books Technique for Learning a New Skill (125)
A Framework for Internal Debugging (20)
S-Curves for Trend Forecasting (87)
What Vibing Feels Like (9)
How to Understand and Mitigate Risk (47)

Derek Foster:
Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness and Well-Being Policy Report, published by the Global Happiness Council.
Hired as a research analyst for Rethink Priorities.

Max Carpendale:
Posts on the EA Forum:
The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question (28)
Sharks probably do feel pain: a reply to Michael Tye and others (19)
Why I’m focusing on invertebrate sentience (48)

Frederik Bechtold:
Received an (unpaid) internship at Animal Ethics.

Saulius Šimčikas:
Posts on the EA Forum:
Rodents farmed for pet snake food (64)
Will companies meet their animal welfare commitments? (109; winner of 3rd place EA Forum Prize for Feb 2019)

Magnus Vinding:
Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique (Probability it would have been written otherwise: 99 percent).
Revising journal paper for Between the Species. (Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel.)
Got the idea for the book I’m currently writing, “Suffering-Focused Ethics” (probability I would have had it otherwise: 50 percent).


Our Ask

Do you like this initiative, and want to see it continue for longer, on a more stable footing? Do you want to cheaply buy full-time work on EA-related projects, whilst simultaneously facilitating a thriving EA community hub? Do you want to see more work in the same vein as the above? Then we would like to ask for your support.

We are very low on runway. Our current shortfall is ~£4k/month from July onward.

To donate, please see our GoFundMe or PayPal Money Pool, or get in touch if you’d like to make a direct bank transfer (which will save ~3% on fees, and up to 3% on currency conversion if using a service like Revolut or TransferWise).
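
(For a sense of scale: on a £1,000 donation, those savings could mean up to ~£60 more reaching the hotel.)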

If you’d like to give regular support, we also have a Patreon.




*This is the total cost of the project to date (30 March 2019), not including the purchase of the building (£132,276.95, including building survey and conveyancing).