$100 Prize to Best Argument Against Donating to the EA Hotel

[Full disclosure: I’ve previously booked a stay in the EA Hotel later in the year and had this post reviewed by the organizers before posting, but otherwise I’m not affiliated, do not speak for the Hotel or its residents, and seem to have somewhat different priors on some counts. Throughout this post, I’ll be referring to the case they’ve made for themselves so far here: 1, 2, 3. I take their data for granted, but not necessarily their conclusions.]

Summary: Comment below with reasons not to donate to the EA Hotel, and whichever gets the most upvotes earns $100 from me.


The Meta Level

As regular Forum readers know, the EA Hotel was first established and posted about almost a year ago to substantial (mostly positive) reception. Now, it seems to be fully functioning, with its rooms fully booked with ~20 residents working on EA projects. The only issue is, it’s running out of funding, according to the organizers (emphasis theirs):

We are getting critically low on runway. Our current shortfall is ~£5k/month from May onward. We will have to start giving current guests notice in 1 month.

I am personally surprised that the Hotel’s funding stream has been so dry, given the substantial enthusiasm it has received, both on this Forum and on EA social media. Evidently, I’m not the only one who’s confused and curious about this. When I try to model why this could be, one central observation sticks out:

Most of those excited about the Hotel are likely prospective residents. Conditional on someone being excited to work on their own (EA-related) thing for a while without having to worry about rent, chances are they don’t have much runway. This implies they are unlikely to have enough money to be major donors.

Under that assumption, the class of “people excited about the EA Hotel” may be something of a filter bubble. Except also an actual bubble, since the border is hard to see from certain angles.

With that framing, I can think of these plausible reasons for the discrepancy between the Hotel’s funding situation and the level of armchair enthusiasm:

A) There are good reasons to think the Hotel is low expected value (EV), and these reasons are generally understood by those who aren’t starry-eyed about free rent.

B) Outside the bubble, opinions of the Hotel are generally lukewarm. Unlike in (A), there aren’t compelling reasons against it, just not enough compelling reasons for it to warrant funding. Presumably, this also implies some active skepticism about the case the Hotel’s been making for itself (1, 2, 3).

C) The evidence indicates the Hotel is high EV for more or less the reasons that have been laid out by its organizers, but most major donors have not engaged with that very much.

Or, as always, it could be some combination of (A-C). But also, my basic framing could be wrong, and maybe there’s some other reason I’m not thinking of. Either way, I am curious about this, and feel like I would have a better model of how EA funding works in general if I understood this puzzle.

With that in mind, I would like to solicit the best argument(s) against donating to the EA Hotel, so I hereby offer $100 from my pocket to whoever in the comments gives the best such argument.

This will be judged simply by the number of upvotes on any comments posted here within exactly one week of the timestamp on this post. Feel free to use the comments section for other stuff, but only comments that contain an explicit argument against donating to the EA Hotel will be considered for the prize. To verify I’m a real person who will in fact award $100, find me on FB here.

Also, feel free to leave comments from an anonymous account. If you win, you will have to message me from that account to confirm who you are. It might also be necessary to message a trusted third party to verify the transaction went through, but hopefully this will still do fine at reducing the disincentives to negativity. For instance, I give my general impression of the current residents below. Opining that they’re worse than that is socially costly, so I want to allow space to air such opinions explicitly if they exist. That said, I think most of the useful criticism I can imagine is not socially costly, so I don’t want to encourage everyone to post anonymously.

The Object Level

Here I’d like to review the criticisms of the Hotel that I have seen so far, and why I don’t find them completely satisfactory. I only intend this as inspiration for more refined critiques, and I absolutely welcome comments that take a different line of argument than those below.

In large part, there have been general worries about who the Hotel is likely to attract. As one of the top comments on the original Hotel post last year put it:

the hotel could become a hub for everyone who doesn’t study at a university or work on a project that EA donors find worth funding, i.e. the hotel would mainly support work that the EA community as a whole would view as lower-quality. I’m not saying I’m confident this will happen, but I think the chance is non-trivial without the leadership and presence of highly experienced EAs (who work there as e.g. hotel managers / trustees).
Furthermore, people have repeatedly brought up the argument that the first “bad” EA project in each area can do more harm than an additional “good” EA project, especially if you consider tail risks, and I think this is more likely to be true than not. E.g. the first political protest for AI regulation might in expectation do more harm than a thoughtful AI policy project could prevent. This provides a reason for EAs to be risk-averse.

Now, I certainly take the risk of net-negative projects seriously, but I don’t see much reason to think the Hotel will lead to these. Reading over the most comprehensive article the community has on the subject (to my knowledge), most of these risks tend to arise from at least one of a) unilateralism/lack of feedback, b) unfamiliarity with EA and its norms, c) unfamiliarity with the specific field of research, and d) what I will bluntly call general incompetence/stupidity.

Under the counterfactual of the Hotel’s nonexistence, I’d guess most of the residents would only work on their projects by themselves part-time, or not at all. Compared to that, the Hotel seems pretty much neutral on (c), but I would speculate it actually helps with (a) and (b), since it acts like an EA org in the way members can get easy feedback from other residents on the potential risks of their project. Obviously, the concern here is with (d), because the residents can be expected to be somewhat less smart/competent than those who’ve cleared the bar at EA orgs. Still, my impression from the profiles of the residents is that they’re competent enough that (a) more than counteracts (d). Allow me to make these intuitions more explicit.

Suppose that, on some level of general competence, Alice is 95th percentile among EAs on the Forum and is working on her own EA project independently, while Bob is of 30th percentile competence and is working on his project while socially immersed in his many in-person EA contacts. I am significantly more worried about downside risk from Alice’s project than Bob’s. The reason is that, in a given field, many of these downside risks are very hard or near-impossible to envision ahead of time, even if you’re really smart and cautious. However, once these domain-specific pitfalls are pointed out to you, it’s not that cognitively taxing to grok them and adjust your thinking/actions accordingly. My guess is that 30th percentile competence is enough to do this without major issue, while 95th percentile is only enough for some of the envisioning (this certainly varies wildly by field). In my estimation, the former is about my lower bound for the general competence levels of the current residents (most seem to be at least 50th percentile). Hence I see relatively little to worry about in terms of downside risks vis-à-vis the Hotel.

However, I look forward to seeing my reasoning here questioned, and updating my model of downside risks.


But the general concern was not downside risks specifically; it was that the average competence of the residents may make it unlikely that much successful work gets done. Currently, the most well-thought-out Hotel critique I know of is this comment from a couple of months ago, which, noting that relatively little successful work has (apparently) come out of the Hotel so far, argues:

I don’t take this (apparent) absence of evidence to be a surprising or adverse signal. Among many reasons: the hotel has only been around for 8 months or so, and many projects wouldn’t be expected to be producing promising early results in this time; there are natural incentives that push against offering rough or unpolished work for public scrutiny (e.g. few PhD students—myself included—would be keen on presenting ‘what they’ve done so far’ at the 6m mark for public scrutiny); many ex ante worthwhile projects (e.g. skill building career development) may only have generally noisy and long delayed ex post confirmation.
Yet this also means there isn’t much to shift one’s priors. My own (which I think are often shared, particularly by those in EA in a position to make larger donations) are fairly autumnal: that a lot of ‘EA ideas’ are very hard to accomplish (and for some delicate areas have tricky pitfalls to navigate) even for highly motivated people, and so I’m more excited about signals of exceptional ability than exceptional commitment (cf. selectiveness, talent constraint, etc. etc.)
I understand the thinking behind the hotel takes a different view: that there is a lot of potential energy among committed EAs to make important contributions but cannot afford to devote themselves to it (perhaps due to mistakes among funders like insufficient risk-appetite, too ingroupy, exclusive in ways orthogonal to expected value, or whatever else). Thus a cheap ‘launch pad’ for these people can bring a lot of value.
If this is right, and I am wrong, I’d like to know sooner rather than later. Yet until I am corrected, the hotel doesn’t look really promising in first order terms, and the collective ‘value of information’ budget may not extend into the six figures.

Before commenting further, let me just say this is very well-put.

But still, after the wave of posts/discussions on this forum triggered by:

After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation

I sense there have been some general updates around the topic of “selectiveness”, such that while the priors mentioned in that comment may be as true as ever, I feel they now have to be more explicitly argued for.

At least, I think it’s fair to say that while almost everyone who meets the hiring standards of EA orgs is quite competent, there is a very high false negative rate. So what happens to the relatively large number of committed, highly competent EAs who can’t get EA jobs? I certainly hope most either earn to give or pursue PhDs, but for those who are best suited to direct work/research, yet for whatever reason aren’t suited for (or wouldn’t benefit much from) a PhD, then what?

Let D be this demographic: committed EAs who can’t get an EA job, are best fit for direct work/research, but are not a good fit for academia (at least right now). Quite frankly, D certainly contains many EAs who likely aren’t “good enough” to be very impactful. But let E be the subset of D that is quite competent. My intuitions say that E is still a substantial demographic, because of the aforementioned false negative rate (and the fact that PhDs aren’t for everyone, even in research).

But even if that’s true, that doesn’t mean we should necessarily go out of our way to let the members of E work on their projects. By definition, this set is hard to filter for, so there probably isn’t a way to reach them without also reaching the much larger number of less competent look-alikes in D. And if the inevitable costs associated with this are too high, then we as a community should be able to openly say, “No, this isn’t worth it in EV.”

With that said, my intuitions still say the EV of the Hotel seems worth it. Very roughly speaking, the question seems to be whether $1 of research purchased from the Hotel is worth as much as $1 of research purchased from an EA org.

This isn’t actually right: for nuances, see both the Addendum below and the Hotel’s own EV calculation. Worse, I will fabricate a number for the sake of discussion (but please let me know a good estimate for its actual value): the average salary at an EA org.

It costs about £6,000 ($7,900) to fund a resident at the Hotel, so let’s round and suppose it costs £60,000 ($79,000) to hire someone at a random EA org (the Hotel’s residents seem to mostly do research, and research positions get paid more, so hopefully that number isn’t too nutty).

Then the question is (roughly) whether, given £60,000, it makes more sense to fund 1 researcher who’s cleared the EA hiring bar, or 10 who haven’t (and are in D).

(Note: We shouldn’t quite expect residents of the Hotel to be random members of D. For instance, there’s an extra filter for someone willing to relocate to Blackpool: either they have no major responsibilities where they live, or they are committed enough to drop them. I think this implicit filter is a modest plus for the Hotel, while the other differences with D don’t add up to much, but there’s certainly room to argue otherwise.)

It’s well known here that top performers do orders of magnitude more to advance their field than the median performer, and I will almost always take 1 superb researcher over 10 mediocre ones. But the point here is the EV of 10 random members of D: if you think a random EA there has a probability p > 10% of being as competent as an employed EA researcher, and you believe my arguments above that the other 9 are unlikely to be net-negative, then the EV works out in the Hotel’s favor. But if your subjective value of p is much less than 10%, then the other 9 probably won’t add all that much.
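For concreteness, the break-even arithmetic above can be put into a toy model. The cost figures come from this post; the value units and the `residual_value` parameter are my own illustrative assumptions, not estimates anyone has endorsed:

```python
# Toy break-even model for the £60,000 comparison: fund 1 researcher
# who has cleared the EA hiring bar, or 10 Hotel residents.
# Value units: "output of one employed EA researcher" = 1.0.

COST_PER_HIRE = 60_000      # £, the (fabricated) average EA org salary
COST_PER_RESIDENT = 6_000   # £, stated cost to fund one Hotel resident

def hotel_ev(p, residual_value=0.0):
    """EV of spending £60,000 on Hotel residents, where each resident
    independently has probability p of matching an employed researcher
    (value 1.0) and otherwise contributes residual_value (assumed ~0,
    per the argument that the rest are unlikely to be net-negative)."""
    n = COST_PER_HIRE // COST_PER_RESIDENT   # 10 residents per hire
    return n * (p * 1.0 + (1 - p) * residual_value)

# With residual_value = 0, the Hotel exactly matches the single hire
# (EV = 1.0) when p = 1/10, which is the crux stated in the post.
below, crux, above = hotel_ev(0.05), hotel_ev(0.10), hotel_ev(0.20)
```

On this sketch, p is the single parameter doing all the work: any nonzero residual value from the "other 9" lowers the break-even p below 10%, which is one way to see why the downside-risk argument earlier in the post matters for the funding question.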

So what’s your p? I feel like this may be an important crux, or maybe I’m modeling this the wrong way. Either way, I’d like to know. Also, I emphasize again that the above paragraphs are embarrassingly oversimplified; they are just intended as a jumping-off point. For a more detailed/rigorous analysis, see the Hotel’s own.

Addendum: What precisely counts as an argument against donating?

When I first wanted to specify this, it seemed natural to say it’s any argument against the proposition:

$1 to the EA Hotel has at least as much EV as $1 to any of the usual EA organizations (e.g. FHI, MIRI, ACE, etc.)

And if you’re less of a pedant than me, read no further.

But this doesn’t quite work. For one, $1 might not be a good unit, since economies of scale may be involved. The Hotel is asking for £130,000 (~$172,000) for 18 months of runway, and presumably it would be better to have that up front than on a week-to-week basis, due to the financial security of the residents etc. But I don’t know how much this matters.

The other problem is that this allows an argument of the form “organization X is really effective because of the work on topic Y they are doing”. Since the EA Hotel has a decently well-rounded portfolio of EA projects (albeit with some skew toward AI safety), the more relevant comparison would be more like $1 spread across multiple orgs, or better yet across the major cause-neutral meta-orgs.

But I’m not even sure it’s right to compare with major orgs (even though the Hotel organizers do in their own EV analysis). This is because the mantra “EA isn’t funding constrained” is true in the sense that all the major orgs seem to have little problem reaching their funding targets these days (correct me if this is too sweeping a generalization). But it’s false in the sense that there are plenty of smaller orgs/projects that struggle to get funding, even though some of them seem to be worth it. Since the role of an EA donor is to find and vet these projects, the relevant comparison for the Hotel would seem to be the collection of other small (but credible) projects that OpenPhil skipped over. For this purpose, good reference classes seem to be:

1) The list of grantees for the EA Meta Funds, listed at the bottom of this page.

2) The list of grantees for the first round of EA Grants, listed here.

With that in mind, I believe the specific proposition I would like to see critiqued is:

$172,000 to the EA Hotel has at least as much EV as $172,000 distributed randomly to grantees from (1) or (2)