$100 Prize to Best Argument Against Donating to the EA Hotel

[Full disclosure: I’ve previously booked a stay in the EA Hotel later in the year and had this post reviewed by the organizers before posting, but otherwise I’m not affiliated with it, do not speak for the Hotel or its residents, and seem to have somewhat different priors on some counts. Throughout this post, I’ll be referring to the case they’ve made for themselves so far here: 1, 2, 3. I take their data for granted, but not necessarily their conclusions.]

Summary: Comment below with reasons not to donate to the EA Hotel, and whichever gets the most upvotes earns $100 from me.


The Meta Level

As regular Forum readers know, the EA Hotel was first established and posted about almost a year ago to substantial (mostly positive) reception. Now, it seems to be fully functioning, with its rooms fully booked with ~20 residents working on EA projects. The only issue is, it’s running out of funding, according to the organizers (emphasis theirs):

We are getting critically low on runway. Our current shortfall is ~£5k/​month from May onward. We will have to start giving current guests notice in 1 month.

I am personally surprised that the Hotel’s funding stream has been so dry, given the substantial enthusiasm it has received, both on this Forum and on EA social media. Evidently, I’m not the only one who’s confused and curious about this. When I try to model why this could be, one central observation sticks out:

Most of those excited about the Hotel are likely prospective residents. Conditional on someone being excited to work on their own (EA-related) thing for a while without having to worry about rent, chances are they don’t have much runway. This implies they are unlikely to have enough money to be major donors.

Under that assumption, the class of “people excited about the EA Hotel” may be something of a filter bubble. Except also an actual bubble, since the border is hard to see from certain angles.

With that framing, I can think of these plausible reasons for the discrepancy between the Hotel’s funding situation and the level of armchair enthusiasm:

A) There are good reasons to think the Hotel is low expected value (EV), and these reasons are generally understood by those who aren’t starry-eyed about free rent.

B) Outside the bubble, opinions of the Hotel are generally lukewarm. Unlike in (A), there aren’t compelling reasons against it, just not enough compelling reasons for it to warrant funding. Presumably, this also implies some active skepticism about the case the Hotel’s been making for itself (1, 2, 3).

C) The evidence indicates the Hotel is high EV for more or less the reasons that have been laid out by its organizers, but most major donors have not engaged with that very much.

Or, as always, it could be some combination of (A-C). But also, my basic framing could be wrong, and maybe there’s some other reason I’m not thinking of. Either way, I am curious about this, and feel like I would have a better model of how EA funding works in general if I understood this puzzle.

With that in mind, I would like to solicit the best argument(s) against donating to the EA Hotel, so I hereby offer $100 from my pocket to whoever in the comments gives the best such argument.

This will be judged simply by the number of upvotes on any comments posted here within exactly one week of the timestamp on this post. Feel free to use the comments section for other stuff, but only comments that contain an explicit argument against donating to the EA Hotel will be considered for the prize. To verify I’m a real person who will in fact award $100, find me on FB here.

Also, feel free to leave comments from an anonymous account. If you win, you will have to message me from that account to confirm who you are. It might also be necessary to message a trusted 3rd party to verify the transaction went through, but hopefully this still does enough to reduce the social disincentives against voicing criticism. For instance, I give my general impression of the current residents below. Opining that they’re worse than that is socially costly, so I want to allow space to air those opinions explicitly if they exist. That said, I think most of the useful criticism I can imagine is not socially costly, so I don’t want to encourage everyone to post anonymously.

The Object Level

Here I’d like to review the skeptical takes on the Hotel that I have seen so far, and explain why I don’t find them completely satisfactory. I only intend this as inspiration for more refined critiques, and I absolutely welcome comments that take a different line of argument than those below.

In large part, there have been general worries about who the Hotel is likely to attract. As one of the top comments on the original Hotel post last year put it:

the hotel could become a hub for everyone who doesn’t study at a university or work on a project that EA donors find worth funding, i.e. the hotel would mainly support work that the EA community as a whole would view as lower-quality. I’m not saying I’m confident this will happen, but I think the chance is non-trivial without the leadership and presence of highly experienced EAs (who work there as e.g. hotel managers /​ trustees).
Furthermore, people have repeatedly brought up the argument that the first “bad” EA project in each area can do more harm than an additional “good” EA project, especially if you consider tail risks, and I think this is more likely to be true than not. E.g. the first political protest for AI regulation might in expectation do more harm than a thoughtful AI policy project could prevent. This provides a reason for EAs to be risk-averse.

Now, I certainly take the risk of net-negative projects seriously, but I don’t see much reason to think the Hotel will lead to these. Reading over the most comprehensive article the community has on the subject (to my knowledge), most of these risks tend to arise from at least one of a) unilateralism/lack of feedback, b) unfamiliarity with EA and its norms, c) unfamiliarity with the specific field of research, and d) what I will bluntly call general incompetence/stupidity.

Under the counterfactual of the Hotel’s nonexistence, I’d guess most of the residents would work on their projects alone, part-time or not at all. Compared to that, the Hotel seems pretty much neutral on (c) but, I would speculate, actually helps with (a) and (b), since it functions much like an EA org in that residents can easily get feedback from one another on the potential risks of their projects. Obviously, the concern here is with (d), because the residents can be expected to be somewhat less smart/competent than those who’ve cleared the bar at EA orgs. Still, my impression from the profiles of the residents is that they’re competent enough that the gains on (a) more than counteract (d). Allow me to make these intuitions more explicit.

Suppose that, on some level of general competence, Alice is 95th percentile among EAs on the Forum and is working on her own EA project independently, while Bob is of 30th-percentile competence and is working on his project while socially immersed in his many in-person EA contacts. I am significantly more worried about downside risk from Alice’s project than from Bob’s. The reason is that, in a given field, many of these downside risks are very hard or near-impossible to envision ahead of time, even if you’re really smart and cautious. However, once these domain-specific pitfalls are pointed out to you, it’s not that cognitively taxing to grok them and adjust your thinking/actions accordingly. My guess is that 30th-percentile competence is enough to do this without major issue, while 95th-percentile is only enough for some of the envisioning (this certainly varies wildly by field). In my estimation, the former is about my lower bound for the general competence levels of the current residents (most seem to be at least 50th). Hence I see relatively little to worry about regarding downside risks vis-à-vis the Hotel.

However, I look forward to seeing my reasoning here questioned, and updating my model of downside risks.


But the general concern was not downside risks specifically; it was that the average competence of the residents may make it unlikely that much successful work gets done. Currently, the most well-thought-out Hotel critique I know of is this comment from a couple of months ago, responding to the observation that relatively little successful work has (apparently) come out of the Hotel so far:

I don’t take this (apparent) absence of evidence to be a surprising or adverse signal. Among many reasons: the hotel has only been around for 8 months or so, and many projects wouldn’t be expected to be producing promising early results in this time; there are natural incentives that push against offering rough or unpolished work for public scrutiny (e.g. few PhD students—myself included—would be keen on presenting ‘what they’ve done so far’ at the 6m mark for public scrutiny); many ex ante worthwhile projects (e.g. skill building career development) may only have generally noisy and long delayed ex post confirmation.
Yet this also means there isn’t much to shift one’s priors. My own (which I think are often shared, particularly by those in EA in a position to make larger donations) are fairly autumnal: that a lot of ‘EA ideas’ are very hard to accomplish (and for some delicate areas have tricky pitfalls to navigate) even for highly motivated people, and so I’m more excited about signals of exceptional ability than exceptional commitment (cf. selectiveness, talent constraint, etc. etc.)
I understand the thinking behind the hotel takes a different view: that there is a lot of potential energy among committed EAs to make important contributions but cannot afford to devote themselves to it (perhaps due to mistakes among funders like insufficient risk-appetite, too ingroupy, exclusive in ways orthogonal to expected value, or whatever else). Thus a cheap ‘launch pad’ for these people can bring a lot of value.
If this is right, and I am wrong, I’d like to know sooner rather than later. Yet until I am corrected, the hotel doesn’t look really promising in first order terms, and the collective ‘value of information’ budget may not extend into the six figures.

Before commenting further, let me just say this is very well-put.

But still, after the wave of posts/​discussions on this forum triggered by:

After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation

I sense there have been some general updates around the topic of “selectiveness”, such that while the priors mentioned in that comment may be as true as ever, I feel they now have to be more explicitly argued for.

At least, I think it’s fair to say that while nearly everyone who meets the hiring standards of EA orgs is quite competent, there is a very high false negative rate. So what happens to the relatively large number of committed, highly competent EAs who can’t get EA jobs? I certainly hope most either earn to give or pursue PhDs, but what about those who are best suited to direct work/research yet, for whatever reason, aren’t suited for (or wouldn’t benefit much from) a PhD?

Let D be this demographic: committed EAs who can’t get an EA job, are best fit for direct work/​research, but not a good fit for academia (at least right now). Quite frankly, D certainly contains many EAs who likely aren’t “good enough” to be very impactful. But let E be the subset of D that is quite competent. My intuitions say that E is still a substantial demographic, because of the aforementioned false negative rate (and the fact that PhDs aren’t for everyone, even in research).

But even if that’s true, that doesn’t mean we should necessarily go out of our way to let the members of E work on their projects. By definition, this set is hard to filter for, and so there probably isn’t a way to reach them without also reaching the much larger number of less competent look-alikes in D. And if the inevitable costs associated with this are too high, then we as a community should be able to openly say “No, this isn’t worth it in EV.”

With that said, my intuitions still say the EV for the Hotel seems worth it. Very roughly speaking, the question seems to be whether $1 of research purchased from the Hotel is worth as much as $1 of research purchased from an EA org.

This isn’t actually right: for nuances, see both the Addendum below and the Hotel’s own EV calculation. Worse, I will fabricate a number for the sake of discussion (but please let me know a good estimate for its actual value): the average salary at an EA org.

It costs about £6,000 ($7,900) per year to fund a resident at the Hotel, so let’s round and suppose it costs £60,000 ($79,000) to hire someone at a random EA org (the Hotel’s residents seem to mostly do research, and research positions get paid more, so hopefully that number isn’t too nutty).

Then the question is (roughly) whether, given £60,000, it makes more sense to fund 1 researcher who’s cleared the EA hiring bar, or 10 who haven’t (and are in D).

(Note: We shouldn’t quite expect residents of the Hotel to just be random members of D. For instance, there’s an extra filter: anyone willing to relocate to Blackpool either has no major responsibilities where they currently live or is committed enough to drop them. I think this implicit filter is a modest plus for the Hotel, while the other differences with D don’t add up to much, but there’s certainly room to argue otherwise.)

It’s well-known here that top performers do orders of magnitude more to advance their field than the median, and I will almost always take 1 superb researcher over 10 mediocre ones. But the point here is the EV of 10 random members of D: if you think a random EA there has a probability p > 10% of being as competent as an employed EA researcher, and you believe my arguments above that the other 9 are unlikely to be net-negative, then the expected number of researcher-equivalents among the 10 exceeds 1, and the EV works out in the Hotel’s favor. But if your subjective value of p is much less than 10%, then the other 9 probably won’t add all that much.
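To make that breakeven condition concrete, here is a minimal back-of-the-envelope sketch. It uses the fabricated £60,000 salary from above, and (as a deliberately conservative assumption of mine, not a claim made by the Hotel) treats residents who fall short of researcher-level competence as contributing nothing:

```python
# Crude EV comparison: 1 EA-org hire vs. 10 Hotel residents, per £60,000.
# All numbers except the ~£6,000/year resident cost are illustrative assumptions.
cost_per_hire = 60_000      # fabricated annual salary at an EA org (GBP)
cost_per_resident = 6_000   # approximate annual cost per Hotel resident (GBP)
n_residents = cost_per_hire // cost_per_resident  # -> 10 residents per hire

p = 0.10                    # chance a random member of D matches an employed EA researcher
output_if_not_matched = 0.0 # conservative: the other residents contribute nothing

# Expected output, measured in "employed-EA-researcher equivalents" per £60,000.
ev_org_hire = 1.0
ev_hotel = n_residents * (p * 1.0 + (1 - p) * output_if_not_matched)

print(ev_org_hire, ev_hotel)  # 1.0 vs 1.0 at p = 10%: the breakeven point
```

On this crude model, the Hotel and the org hire break even at exactly p = 10%; any credit you give the other nine residents (or any downside risk you assign them) shifts the comparison from there.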

So what’s your p? I feel like this may be an important crux, or maybe I’m modeling this the wrong way. Either way I’d like to know. Also, I emphasize again the above paragraphs are embarrassingly oversimplified, but again that is just intended as a jumping-off point. For a more detailed/​rigorous analysis, see the Hotel’s own.

Addendum: What precisely counts as an argument against donating?

When I first wanted to specify this, it seemed natural to say it’s any argument against the proposition:

$1 to the EA Hotel has at least as much EV as $1 to any of the usual EA organizations (e.g. FHI, MIRI, ACE, etc.)

And if you’re less of a pedant than me, read no further.

But this doesn’t quite work. For one, $1 might not be a good number, since economies of scale may be involved. The Hotel is asking for £130,000 (~$172,000) to get 18 months of runway, and presumably it would be better to have that up-front than on a week-to-week basis, for the residents’ financial security, etc. But I don’t know how much this matters.

The other problem is, this allows an argument of the form “organization X is really effective because of the work on topic Y they are doing”. Since the EA Hotel has a decently well-rounded portfolio of EA projects (albeit with some skew toward AI safety), the more relevant comparison would be more like $1 spread across multiple orgs, or better yet across the major cause-neutral meta-orgs.

But I’m not even sure it’s right to compare with major orgs (even though the Hotel organizers do in their own EV analysis). This is because the mantra “EA isn’t funding constrained” is true in the sense that all the major orgs seem to have little problem reaching their funding targets these days (correct me if this is too sweeping a generalization). But it’s false in the sense that there are plenty of smaller orgs/​projects that struggle to get funding, even though some of them seem to be worth it. Since the role of an EA donor is to find and vet these projects, the relevant comparison for the Hotel would seem to be the collection of other small (but credible) projects that OpenPhil skipped over. For this purpose, good reference classes seem to be:

1) The list of grantees for EA Meta Funds, listed at the bottom of this page.

2) The list of grantees for the first round of EA grants, listed here.

With that in mind, I believe the specific proposition I would like to see critiqued is:

$172,000 to the EA Hotel has at least as much EV as $172,000 distributed randomly to grantees from (1) or (2)