First, big kudos for your strong commitment to put your personal funding into this, and for the guts and drive to actually make it happen!
That said, my overall feelings about the project are mixed, mainly for the following reasons (which you also partly discuss in your post):
It seems plausible that most EAs who do valuable work won’t be able to benefit from this. If they’re students, they’ll most likely be studying at a university outside Blackpool and might not be able to do so remotely. If they’re launching a new great project, they’ll very likely be able to get funding from an EA donor, and there will be major benefits from being in a big city or existing hub such as Oxford, London, or the Bay (so donors should be enthusiastic about covering the living costs of these places). While it’s really impressive how low the rent at the hotel will be, rent cost is rarely a major reason for a project’s funding constraints (at least outside the SF Bay Area).
Instead, the hotel could become a hub for everyone who doesn’t study at a university or work on a project that EA donors find worth funding, i.e. the hotel would mainly support work that the EA community as a whole would view as lower-quality. I’m not saying I’m confident this will happen, but I think the chance is non-trivial without the leadership and presence of highly experienced EAs (who work there as e.g. hotel managers / trustees).
Furthermore, people have repeatedly brought up the argument that the first “bad” EA project in each area can do more harm than an additional “good” EA project, especially if you consider tail risks, and I think this is more likely to be true than not. E.g. the first political protest for AI regulation might in expectation do more harm than a thoughtful AI policy project could prevent. This provides a reason for EAs to be risk-averse. (Specifically, I tentatively disagree with your claims that “we’re probably at the point where there are more false negatives than false positives, so more chances can be taken on people at the low end”, and that we should invest “a small amount”.) Related: Spencer Greenberg’s idea that plenty of startups cause harm.
The fact that this post got way more upvotes than other projects that are similarly exciting in my view (such as Charity Entrepreneurship) also makes me think that the enthusiasm for this project may be partly driven by social reasons (it feels great to have a community hotel hub with likeminded people) as opposed to people’s impact assessments. But maybe there’s something I’m overlooking, e.g. maybe this post was just shared much more on social media.
What happens if you concentrate a group of EAs who wouldn’t get much funding from the broader community in one place and help them work together? I don’t know. It could be very positive or very negative. Or it might just not lead to much at all. Overall, I think it may not be worth the downside risks.
First of all, Greg_Colbourn, very impressed with all the thought that’s gone into this. I was already super impressed that you were doing the project in the first place, but this was a good read. Criticism tends to hit people harder than praise, so good on you for continuing to engage with that too.
vollmer, thanks for the time you’re putting into the discussion here. I think a lot of your worries have less force if you think of the Hotel as a stepping stone / gap-filler / early incubator / refuge / safety net. If you don’t have a lot of savings, you can’t dedicate a focused chunk of time to:
earning the trust of other EAs who can then fund or at least vouch for you (this seems to be an important part of funding decisions, especially for new projects/orgs)
making a career move (the number of people I’ve known stuck in jobs they don’t want to be in because they don’t have the time to research, try out and apply for what they want to do next, and can’t afford to quit first and then figure it out...)
general reading/thinking/discussion around EA/rationality/self-improvement, without knowing where it might take you (I’d love to see EA Grants accept a funding proposal for “Sitting on a beach and reading whatever seems useful for a few months...maybe I’ll start with the sequences, maybe I won’t, who knows lol”)
coming up with project proposals to take to the more official EA funding streams
recovering from burn-out or another knock (I know Greg_Colbourn hasn’t mentioned this here and it’s quite different from the others, so maybe I’m really straying into general “EA Hotels and Low-cost Living” territory now)
Put another way—if we want to invoke the Argument From 80k Authority again—Greg_Colbourn has just provided a free personal runway community insurance scheme.

There’s a two-year limit. It’s not meant to be a long-term lifestyle for an EA. I think it’s meant to give less financially comfortable EAs a bit of freedom and breathing space between projects/careers.
I think this gap exists. The funder who says, “I know you don’t have the time or energy to make a grant-proposal-standard case for why I should support you yet. And that’s why I’m going to.” Maybe I’m wrong. But hey, let’s build it and see if they come :-)
I also think that often the way in which funding is offered is important. In my case, if I started to struggle financially, I’d be excited and proud to move into the EA Hotel. On the other hand, I’d feel somewhat embarrassed, and like a burden, moving back in with my parents; I’ve turned down financial support from three close EA friends because of what it might do to our relationship (and accepted very gratefully from two other less close EA friends <3); and I hate the thought of people questioning my motives as a community organiser if I accept a higher salary at EA London. I can also imagine people treating the EA Hotel as a last resort because e.g. future employers might think it looks like a cult. My point is more that it’s a different kind of option, and people have all sorts of reasons for not wanting to accept funding that they need. (Note that the personal, somewhat emotional relevance probably makes me slightly biased in favour of this project overall.)
Then again, maybe I’m being too blasé. I have no idea how many EA project ideas aren’t getting funded due to lack of coordination / management time / funds in a particular area, and how many aren’t getting funded because funders think they shouldn’t be funded. The term “crucial considerations” makes me immediately think, “Ah. Yes. Yes, it’s very easy to do well-intentioned harm.” And actually I think I place an unusually high value on private research relative to action (publicising research can be pretty action-y), which might warrant a small group of community leaders withholding a lot of funding without much explanation until they’ve done a lot more research... Unfortunately, I also think it makes sense to be pretty suspicious of that approach.
No conclusions. Just some considerations for y’all.
(Incidentally, rejection is usually at least a bit hurtful. I wouldn’t be surprised if all the job/funding rejections in our community—especially alongside all the headhunting, careers advice, talent gap talk, “EA’s not funding-constrained” talk, pressure to reach relentlessly higher and reprioritise every 5 minutes, etc.—were driving a lot of the enthusiasm for funders with more relaxed standards, including the upvotes on this article relative to the Charity Entrepreneurship one.)
EA Grants rejected 95% of the applications they got.

Sure, but an EA hotel seems like a weird way to address this inefficiency: only a few people with worthwhile projects can move to Blackpool to benefit from it, the funding is not flexible, it’s hard to target this well, the project has some time lag, etc. The most reasonable approach to fixing this is simply to give more money to some of the projects that didn’t get funded.
Maybe CEA will accept 20-30% of EA Grants applications in the next round, or other donors will jump in to fill the gaps. (I’d expect that a lot of the grants applications (maybe half) might have been submitted by people not really familiar with EA, and some of the others weren’t worth funding.)
[Disclosure: I’m planning to move to Blackpool before the end of this month.]
only a few people with worthwhile projects can move to Blackpool to benefit from it, the funding is not flexible
If you’re working on a project full-time, there’s a good chance you’re not location-constrained.
the project has some time lag
I’m not sure what you’re referring to.
Over 3 months passed between the EA grants announcement and disbursement. Does that count as “time lag”?
The disadvantages you cite don’t seem compelling to me alongside the advantages cited in this post: dramatically lower costs, supportive EA community, etc. Yes, it’s not a great fit for every project—but if you’re offered a bargain on supporting one project, it seems silly to avoid taking it just because you weren’t offered a bargain on supporting some other project.
I think maybe our core disagreement is that you believe the bottom 95% of EA projects are risky and we should discourage people from funding them. Does that sound like an accurate summary of your beliefs?
I’ve written some about why I want to see more discussion of downside risk (and I’m compiling notes for several longer posts—maybe if I was living in Blackpool I’d have enough time to write them). However, the position that we should discourage funding the bottom 95% of projects seems really extreme to me, and it also seems like a really crude way to address downside risk.
Even if there is some downside risk from any given project, the expected value is probably positive, solely based on the fact that some EA thinks it’s a good idea. Value of information is another consideration in favor of taking action, especially doing research (I’m guessing most people who move to Blackpool will want to do research of some sort).
As for a good way to decrease downside risk, I would like to see a lot more people do what Joey does in this post and ask people to provide the best arguments they can against their project (or maybe ask a question on Metaculus).
The issue with downside risk is not that the world is currently on a wonderful trajectory that must not be disturbed. Rather, any given disturbance is liable to have effects that are hard to predict—but if we spent more time thinking and did more research, maybe we could get a bit better at predicting these effects. Loss aversion will cause us to overweight the possibility of a downside, but if we’re trying to maximize expected value then we should weight losses and gains of the same size equally. I’m willing to believe highly experienced EAs are better at thinking about downside risk, but I don’t think the advantage is overwhelming, and I suspect being a highly experienced EA can create its own set of blind spots. I am definitely skeptical that CEA can reliably pick the 33 highest-impact projects from a list of 722. Even experienced investors miss winners, and CEA is an inexperienced charitable “investor”.
Your example “the first political protest for AI regulation might in expectation do more harm than a thoughtful AI policy project could prevent” is one I agree with, and maybe it illustrates a more general principle that public relations efforts should be done in coordination with major EA organizations. Perhaps it makes sense to think about downside risk differently depending on the kind of work someone is interested in doing.
BTW, if the opinion of experienced EAs is considered reliable, maybe it’s worth noting that 80k advocated something like this a few years ago, and ideas like it have been floating around EA for a long time.
I suspect Greg/the manager would not be able to filter projects particularly well based on personal interviews; since the point of the hotel is basically ‘hits-based giving’, I think a blanket ban on irreversible projects is more useful (and would satisfy most of the concerns in the fb comment vollmer linked)
Just to play devil’s advocate for a moment, aren’t personal interviews and hits-based giving essentially the process used by other EA funders? I believe it was OpenPhil who coined the term hits-based giving. It sounds like maybe your issue is with the way funding works in EA broadly speaking, not this project in particular.
The same seems to apply to vollmer’s point about adverse selection effects. Over time, the project pool will increasingly be made up of projects everyone has rejected. So this could almost be considered a fully general counterargument against funding any project. (Note that this thinking directly opposes replaceability: replaceability encourages you to fund projects no other funder is willing to fund; this line of reasoning says just the opposite.) Anyway, I think the EA Hotel could easily be less vulnerable to adverse selection effects, if it appeals to a different crowd. I’m the first long-term resident of the hotel, and I’ve never applied for funding from any other source. (I’m self-studying machine learning at the hotel, which I don’t think I would ever get a grant for.)
Sounds like you really want a broader rule like “no irreversible projects without community consensus” or something. In general, mitigating downside risk seems like an issue that’s fairly orthogonal to establishing low cost of living EA hubs.
Even if there is some downside risk from any given project, the expected value is probably positive, solely based on the fact that some EA thinks it’s a good idea.
I think the Unilateralist’s Curse is relevant here.

The Unilateralist’s Curse refers to situations where many people oppose a project and only one person supports it. This is an important case to consider, but I think a randomly chosen EA project is unlikely to fall into this category.
Hmm. I think the Unilateralist’s Curse rests on the assumption that individuals underestimate potential downsides relative to their estimations concerning potential upsides, at least when it comes to the consequences for other people. (Anecdotally this assumption seems likely but I basically have no idea if it’s true.) Centralised coordination/control is a way to counteract that. But the situation is even worse in this case because not only are we potentially facilitating projects with no centralised filtering, we’re actually selecting for projects that are more likely to have been rejected by a central filter.
Thank you for prompting me to clarify my thinking here. I expect it’s wrong (I haven’t read Bostrom’s paper on it), but that’s where I’m up to.
(Also it might be worth me saying that I currently still think the EA Hotel has positive expected value—I don’t think it’s giving individuals enough power for the Unilateralist’s Curse to really apply. But it’s worth continuing to think about the UC and it doesn’t seem clear to me that “the fact that some EA thinks it’s a good idea” is sufficient grounds to attribute positive expected value to a project, given no other information, which seemed to be what you were saying.)
Centralised coordination/control is a way to counteract that.
OpenPhil funding OpenAI might be a case of a “central” organization taking unilateral action that’s harmful. vollmer also mentions that he thinks some of EAF’s subprojects were probably negative impact elsewhere in this thread—presumably the EAF is relatively “central”.
If we think that “individuals underestimate potential downsides relative to their estimations concerning potential upsides”, why do we expect funders to be immune to this problem? There seems to be an assumption that if you have a lot of money, you are unusually good at forecasting potential downsides. I’m not sure. People like Joey and Paul Christiano have offered prizes for the best arguments against their beliefs. I don’t believe OpenPhil has ever done this, despite having a lot more money.
In general, funding doesn’t do much to address the unilateralist’s curse because any single funder can act unilaterally to fund a project that all the other funders think is a bad idea. I once proposed an EA donors’ league to address this problem, but people weren’t too keen on it for some reason.
it doesn’t seem clear to me that “the fact that some EA thinks it’s a good idea” is sufficient grounds to attribute positive expected value to a project, given no other information
Here’s a thought experiment that might be helpful as a baseline scenario. Imagine you are explaining effective altruism to a stranger in a loud bar. After hearing your explanation, the stranger responds “That’s interesting. Funny thing, I gave no thought to EA considerations when choosing my current project. I just picked it because I thought it was cool.” Then they explain their project to you, but unfortunately, the bar is too loud for you to hear what they say, so you end up just nodding along pretending to understand. Now assume you have two options: you can tell the stranger to ditch their project, or you can stay silent. For the sake of argument, let’s assume that if you tell the stranger to ditch their project, they will ditch it, but they will also get soured on EA and be unreceptive to EA messages in the future. If you stay silent, the stranger will continue their project and remain receptive to EA messages. Which option do you choose?
My answer is, having no information about the stranger’s project, I have no particular reason to believe it will be either good or bad for the world. So I model the stranger’s project as a small random perturbation on humanity’s trajectory, of the sort that happen thousands of times per day. I see the impact of such perturbations as basically neutral on expectation. In the same way the stranger’s project could have an unexpected downside, it could also have an unexpected upside. And in the same way that the stranger’s actions could have some nasty unforeseen consequence, my action of discouraging the stranger could also have some nasty unforeseen consequence! (Nasty unforeseen consequences of my discouragement action probably won’t be as readily observable, but that doesn’t mean they won’t exist.) So I stay silent, because I gain nothing on expectation by objecting to the project, and I don’t want to pay the cost of souring the stranger on EA.
Suppose you agree with my argument above. If so, do you think that we should default to discouraging EAs from doing projects in the absence of further information? Why? It seems a bit counterintuitive/implausible that being part of the EA community would increase the odds that someone’s project creates a downside. If anything, it seems like being plugged into the community should increase a person’s awareness of how their project might pose a risk. (Consider the EA hotel in comparison to an alternative of having people live cheaply as individuals. Being part of a community of EAs = more peer eyeballs on your project = more external perspectives to spot unexpected downsides.) And in the same way giving strangers default discouragement will sour them on EA, giving EAs default discouragement on doing any kind of project seems like the kind of thing that will suck the life out of the movement.
I don’t want to be misinterpreted, so to clarify:
I am in favor of people discouraging projects if, after looking at the project, they actually think the project will be harmful.
I am in favor of bringing up considerations that suggest a project might be harmful with the people engaged in it, even if you aren’t sure about the project’s overall impact.
I’m in favor of people trying to make the possibility of downsides mentally available, so folks will remember to check for them.
I’m in favor of more people doing what Joey does and offering prizes for arguments that their projects are harmful.
I’m in favor of people publicly making themselves available to shoot holes in the project ideas of others.
I’m in favor of people in the EA community trying to coordinate more effectively, engage in moral trade, cooperate in epistemic prisoner’s dilemmas, etc.
In general, I think brainstorming potential downsides has high value of information and people should do it more. But a gamble can still be positive expected value without having purchased any information! (Also, in order to avoid bias, maybe you should try to spend an equal amount of time brainstorming unexpected upsides.)
I think it may be reasonable to focus on projects which have passed the acid test of trying to think of plausible downsides (since those projects are likely to be higher expected value).
But I don’t really see what purpose blanket discouragement serves.
The OpenPhil/OpenAI article was a good read, thanks, although I haven’t read the comments on either post or Ben’s latest thoughts, and I don’t really have an opinion either way on the value/harm of OpenPhil funding OpenAI if they did so “to buy a seat on OpenAI’s board for Open Philanthropy Project executive director Holden Karnofsky”. But of course, I wasn’t suggesting that centralised action is never harmful; I was suggesting that it’s better on average [edit: in UC-type scenarios, which I’m not sure your two examples were...man this stuff is confusing!]. It’s also ironic that part of the reason funding OpenAI might have been a bad idea seems to be that it creates more of a Unilateralist’s Curse scenario (although I did notice that the first comment claims this is not their current strategy): “OpenAI’s primary strategy is to hire top AI researchers to do cutting-edge AI capacity research and publish the results, in order to ensure widespread access.”
If we think that “individuals underestimate potential downsides relative to their estimations concerning potential upsides”, why do we expect funders to be immune to this problem?
Excellent question. No strong opinion as I’m still in anecdote territory here, but I reckon emotional attachment to one’s own grand ideas is what’s driving the underestimation of risk, and you’d expect funders to be able to assess ideas more dispassionately.
I’m not sure that EA is all that relevant to the answer I’d give in your thought experiment. If they didn’t have much power then I’d say go for it. If their project would have large consequences before anyone else could step in I’d say stop. As I said before, “I currently still think the EA Hotel has positive expected value—I don’t think it’s giving individuals enough power for the Unilateralist’s Curse to really apply.” I genuinely do expect the typical idea someone has for improving the status quo to be harmful, whether they’re an EA or a stranger in a bar. Most of the time it’s good to encourage innovation anyway, because there are feedback mechanisms/power structures in place to stop things getting out of hand if they start to really not look like good ideas. But in UC-type scenarios i.e. where those checks are not in place, we have a problem.
We might be talking past each other. Perhaps we agree that: In your typical real-life scenario i.e. where an individual does not have unilateral power, we should encourage them to pursue their altruistic ideas. Perhaps this was even what you were saying originally, and I just misinterpreted it.
[Edit: I’m pretty sure we’re talking past each other to at least some extent. I don’t think there should be “blanket discouragement”. I think the typical project that someone/an EA thinks is a good idea is in fact a bad idea, but that they should test it anyway. I do think there should be blanket discouragement of actions with large consequences that can be taken by a small minority without the endorsement of others (e.g. relating to reputational risk or information hazards).]
Hi Vollmer, appreciate your criticism. Upvoted for that.
While it’s really impressive how low the rent at the hotel will be, rent cost is rarely a major reason for a project’s funding constraints
Do you realise that the figure cited (3-4k a year) isn’t rent cost? It’s total living cost. At least in my case that’s a quarter of what I’m currently running on, and I’m pretty cheap. For others the difference might be much larger.
For example, a project might have a genuinely high-impact idea that doesn’t depend on location. Instead of receiving $150k from CEA to run for half a year in the Bay with 3 people, they could receive $50k and run for 3 years in Blackpool with 6 people. CEA could then fund 3 times as many projects, and its impact would effectively stretch 6 × 2 × 3 = 36 times further (see the quick sketch below).
Coming from that perspective, paying to stay in the world’s most expensive cities is hard to justify, at least for projects (coding, research, etc.) that wouldn’t get an even stronger multiplier from being on location.
And this isn’t just projection. I know at least one project that is most likely moving their team to the EA hotel.
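To make the arithmetic above explicit, here is a minimal back-of-envelope sketch. It uses only the hypothetical figures from this comment ($150k, 3 people, half a year in the Bay vs. $50k, 6 people, 3 years in Blackpool); none of these are actual CEA or EA Hotel budget numbers.

```python
# Hypothetical figures from the comment above (illustrative only, not real budgets).
bay_cost, bay_people, bay_years = 150_000, 3, 0.5
hotel_cost, hotel_people, hotel_years = 50_000, 6, 3.0

# A donor with a fixed budget could fund this many times more hotel-based projects.
projects_multiplier = bay_cost / hotel_cost  # 3.0

# Each hotel-based project also delivers more person-years of work.
person_years_multiplier = (hotel_people * hotel_years) / (bay_people * bay_years)  # 18 / 1.5 = 12.0

total_multiplier = projects_multiplier * person_years_multiplier
print(total_multiplier)  # 36.0, i.e. the 6 x 2 x 3 = 36 figure above
```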
Instead, the hotel could become a hub for everyone who doesn’t study at a university or work on a project that EA donors find worth funding, i.e. the hotel would mainly support work that the EA community as a whole would view as lower-quality.
I’m pretty sure EA donors find many projects net-positive even if they don’t find them worth funding. For the same reason that I’d buy a car if I could afford one. Does that mean I find cars lower-quality than my bicycle? Nope.
Imo it’s a very simple equation. EAs need money to live, so they trade (waste) a major slice of their resources on ineffective endeavors in exchange for money. We can take away those needs for <10% of the cost, effectively moving a large number of people from part-time to full-time EA work. Assuming the distribution of EA effectiveness isn’t too steeply unequal (i.e. there are still effective EAs out there), this intervention is the most effective I’ve seen thus far.
Do you realise that the figure cited (3-4k a year) isn’t rent cost? It’s total living cost. At least in my case that’s a quarter of what I’m currently running on, and I’m pretty cheap. For others the difference might be much larger.
Yes, I do. But at a time when talent is a bigger constraint than funding, I’d rather create $100k worth of impact at a financial cost of $25k than $50k of impact at a cost of $4k. Often, interacting in person with specific people in specific places (often in major hubs) will enable you to increase your impact substantially. This isn’t true for everyone, and not always, but it will often be the case, even for coding/research projects. E.g. it’s commonly accepted wisdom that for-profit (coding) startups can increase their value substantially by moving to the Bay, and individual programmers can increase their salaries by more than the higher living cost by moving there. Similar things might apply to EA projects in Oxford / London / Berkeley / San Francisco.
So the potential benefits of the EA hotel might be somewhat limited, and there might also be some costs / harms (as I mentioned in the other comments).
maybe this post was just shared much more on social media.
I see Facebook and Twitter share buttons at the bottom of the post (but only when I load the page on my phone). They currently have the numbers 174 and 18 next to them. Seems like an excessive number of Facebook shares!? Surely that can’t be right? (I’ve only seen—and been tagged on—one, in any case. Clicking on the numbers provides no info. as to where the shares went, if indeed they are shares. Ok, actually, clicking on them brings up a share window, but also ups the counter! So maybe that explains a lot as to why the numbers are so high (i.e. people wanting to see where all these shares are going, but only adding to the false counter)).
If they’re students, they’ll most likely be studying at a university outside Blackpool and might not be able to do so remotely.
Regarding studying, it would mainly be suitable for those doing so independently online (it’s possible to take many world class courses on EdX and Coursera for free). But could also be of use to university students outside of term time (say to do extra classes online, or an independent research project, over the summer).
they’ll very likely be able to get funding from an EA donor
As John Maxwell says, I don’t think we are there yet with current seed funding options.
the hotel would mainly support work that the EA community as a whole would view as lower-quality
This might indeed be so, but given the much lower costs it’s possible that the quality-adjusted-work-per-£-spent rate could still be equal to—or higher than—the community average.
.. without the leadership and presence of highly experienced EAs (who work there as e.g. hotel managers / trustees).
I think it’s important to have experienced EAs in these positions for this reason.
Regarding “bad” EA projects, only one comes to mind, and it doesn’t seem to have caused much lasting damage. In the OP, I say that the “dynamics of status and prestige in the non-profit world seem to be geared toward being averse to risk-of-failure to a much greater extent than in the for-profit world (see e.g. the high rate of failure for VC funded start-ups). Perhaps we need to close this gap, considering that the bottom line results of EA activity are often considered in terms of expected utility.” Are PR concerns a solid justification for this discrepancy between EA and VC? Or do Spencer Greenberg’s concerns about start-ups mean that EA is right in this regard and it’s VC that is wrong (even in terms of their approach to maximising monetary value)?
the enthusiasm for this project may be partly driven by social reasons
There’s nothing wrong with this, as long as people participating at the hotel for largely social reasons pay their own way (and don’t disrupt others’ work).
Regarding “bad” EA projects, only one comes to mind, and it doesn’t seem to have caused much lasting damage. In the OP, I say that the “dynamics of status and prestige in the non-profit world seem to be geared toward being averse to risk-of-failure to a much greater extent than in the for-profit world (see e.g. the high rate of failure for VC funded start-ups). Perhaps we need to close this gap, considering that the bottom line results of EA activity are often considered in terms of expected utility.” Are PR concerns a solid justification for this discrepancy between EA and VC? Or do Spencer Greenberg’s concerns about start-ups mean that EA is right in this regard and it’s VC that is wrong (even in terms of their approach to maximising monetary value)?
Just wanted to flag that I disagree with this for a number of reasons. E.g. I think some of EAF’s sub-projects probably had negative impact, and I’m skeptical that these plus InIn were the only ones. I might write an EA forum post about how EA projects can have negative impacts at some point but it’s not my current priority. See also this facebook comment for some of the ideas.
Regarding your last point, VCs are maximizing their own profit, but Spencer talks about externalities.
Following on vollmer’s point, it might be reasonable to have a blanket rule against policy/PR/political/etc work—anything that is irreversible and difficult to evaluate. “Not being able to get funding from other sources” is definitely a negative signal, so it seems worthwhile to restrict guests to projects whose worst possible outcome is unproductively diverting resources.
On the other hand, I really can’t imagine what harm research projects could do; I guess the worst case scenario is someone so persuasive they can convince lots of EAs of their ideas but so bad at research their ideas are all wrong, which doesn’t seem very likely. (Why not “malicious & persuasive people”? The community can probably identify those more easily by the subjects they write about.)
Furthermore, guests’ ability to engage in negative-EV projects will be constrained by the low stipend and terrible location (if I wanted to engage in Irish republican activism, living at the EA hotel wouldn’t help very much). I think the largest danger to be alert for is reputation risk, especially from bad popularizations of EA, since this is easier to do remotely (one example is Intentional Insights, the only negative-EV EA project I know of)
This basically applies to everything as a matter of degree, so it looks impossible to put in a blanket rule. Suppose I raise £10 and send it to AMF. That’s irreversible. Is it difficult to evaluate? Depends what you mean by ‘difficult’ and what the comparison class is.
Regarding studying, it would mainly be suitable for those doing so independently online (it’s possible to take many world class courses on EdX and Coursera for free). But could also be of use to university students outside of term time (say to do extra classes online, or an independent research project, over the summer).
Fully-funded living expenses could also open up the option of The Open University for some people.
the enthusiasm for this project may be partly driven by social reasons
There’s nothing wrong with this, as long as people participating at the hotel for largely social reasons pay their own way (and don’t disrupt others’ work).
I think vollmer just meant to caution against readers taking upvotes as a proxy for the value of a project.
Furthermore, people have repeatedly brought up the argument that the first “bad” EA project in each area can do more harm than an additional “good” EA project, especially if you consider tail risks, and I think this is more likely to be true than not. E.g. the first political protest for AI regulation might in expectation do more harm than a thoughtful AI policy project could prevent. This provides a reason for EAs to be risk-averse. (Specifically, I tentatively disagree with your claims that “we’re probably at the point where there are more false negatives than false positives, so more chances can be taken on people at the low end”, and that we should invest “a small amount”.) Related: Spencer Greenberg’s idea that plenty of startups cause harm.
I thought this was pretty vague and abstract. You should say why you expect this particular project to suck!
It seems plausible that most EAs who do valuable work won’t be able to benefit from this. If they’re students, they’ll most likely be studying at a university outside Blackpool and might not be able to do so remotely
I also wonder what the target market is. EAs doing remote work? EAs who need really cheap accommodation for a certain period?
First, big kudos for your strong commitment to put your personal funding into this, and for the guts and drive to actually make it happen!
That said, my overall feelings about the project are mixed, mainly for the following reasons (which you also partly discuss in your post):
It seems plausible that most EAs who do valuable work won’t be able to benefit from this. If they’re students, they’ll most likely be studying at a university outside Blackpool and might not be able to do so remotely. If they’re launching a new great project, they’ll very likely be able to get funding from an EA donor, and there will be major benefits from being in a big city or existing hub such as Oxford, London, or the Bay (so donors should be enthusiastic about covering the living costs of these places). While it’s really impressive how low the rent at the hotel will be, rent cost is rarely a major reason for a project’s funding constraints (at least outside the SF Bay Area).
Instead, the hotel could become a hub for everyone who doesn’t study at a university or work on a project that EA donors find worth funding, i.e. the hotel would mainly support work that the EA community as a whole would view as lower-quality. I’m not saying I’m confident this will happen, but I think the chance is non-trivial without the leadership and presence of highly experienced EAs (who work there as e.g. hotel managers / trustees).
Furthermore, people have repeatedly brought up the argument that the first “bad” EA project in each area can do more harm than an additional “good” EA project, especially if you consider tail risks, and I think this is more likely to be true than not. E.g. the first political protest for AI regulation might in expectation do more harm than a thoughtful AI policy project could prevent. This provides a reason for EAs to be risk-averse. (Specifically, I tentatively disagree with your claims that “we’re probably at the point where there are more false negatives than false positives, so more chances can be taken on people at the low end”, and that we should invest “a small amount”.) Related: Spencer Greenberg’s idea that plenty of startups cause harm.
The fact that this post got way more upvotes than other projects that are similarly exciting in my view (such as Charity Entrepreneurship) also makes me think that the enthusiasm for this project may be partly driven by social reasons (it feels great to have a community hotel hub with likeminded people) as opposed to people’s impact assessments. But maybe there’s something I’m overlooking, e.g. maybe this post was just shared much more on social media.
What happens if you concentrate a group of EAs who wouldn’t get much funding from the broader community in one place and help them work together? I don’t know. It could be very positive or very negative. Or it just couldn’t lead to much at all. Overall, I think it may not be worth the downside risks.
First of all, Greg_Colbourn, very impressed with all the thought that’s gone into this. I was already super impressed that you were doing the project in the first place, but this was a good read. Criticism tends to hit people harder than praise, so good on you for continuing to engage with that too.
vollmer, thanks for the time you’re putting into the discussion here. I think a lot of your worries have less force if you think of the Hotel as a stepping stone / gap-filler / early incubator / refuge / safety net. If you don’t have a lot of savings, you can’t dedicate a focused chunk of time to:
earning the trust of other EAs who can then fund or at least vouch for you (this seems to be an important part of funding decisions, especially for new projects/orgs)
making a career move (the number of people I’ve known stuck in jobs they don’t want to be in because they don’t have the time to research, try out and apply for what they want to do next, and can’t afford to to quit first and then figure it out...)
general reading/thinking/discussion around EA/rationality/self-improvement, without knowing where it might take you (I’d love to see EA Grants accept a funding proposal for “Sitting on a beach and reading whatever seems useful for a few months...maybe I’ll start with the sequences, maybe I won’t, who knows lol”)
coming up with project proposals to take to the more official EA funding streams
recovering from burn-out or another knock (I know Greg_Colbourn hasn’t mentioned this here and it’s quite different from the others, so maybe I’m really straying into general “EA Hotels and Low-cost Living” territory now)
Put another way—if we want to invoke the Argument From 80k Authority again—Greg_Colbourn has just provided a free personal runway community insurance scheme.
There’s a two-year limit. It’s not meant to be a long-term lifestyle for an EA. I think it’s meant to give less financially comfortable EAs a bit of freedom and breathing space between projects/careers.
I think this gap exists. The funder who says, “I know you don’t have the time or energy to make a grant-proposal-standard case for why I should support you yet. And that’s why I’m going to.” Maybe I’m wrong. But hey, let’s build it and see if they come :-)
I also think that often the way in which funding is offered is important. In my case, if I started to struggle financially, I’d be excited and proud to move into the EA Hotel. On the other hand, I’d feel somewhat embarrassed and burdensome to move back in with my parents, I’ve turned down financial support from three close EA friends because of what it might do to our relationship (and accepted very gratefully from two other less close EA friends <3), and I hate the thought of people questioning my motives as a community organiser if I accept a higher salary at EA London. I can also imagine people treating the EA Hotel as a last resort because e.g. future employers might think it looks like a cult. My point is more that it’s a different kind of option, and people have all sorts of reasons for not wanting to accept funding that they need. (Note that the personal, somewhat emotional relevance probably makes me slightly biased in favour of this project overall.)
Then again, maybe I’m being too blase. I have no idea how many EA project ideas aren’t getting funded due to lack of coordination / management time / funds in a particular area, and how many aren’t getting funded because funders think they shouldn’t be funded. The term “crucial considerations” makes me immediately think, “Ah. Yes. Yes, it’s very easy to do well-intentioned harm.” And actually I think I place an unusually high value on private research relative to action (publicising research can be pretty action-y), which might warrant a small group of community leaders withholding a lot of funding without much explanation until they’ve done a lot more research...Unfortunately, I also think it makes sense to be pretty suspicious of that approach.
No conclusions. Just some considerations for y’all.
(Incidentally, rejection is usually at least a bit hurtful. I wouldn’t be surprised if all the job/funding rejections in our community—especially alongside all the headhunting, careers advice, talent gap talk, “EA’s not funding-constrained” talk, pressure to reach relentlessly higher and reprioritise every 5 minutes etc. - was driving a lot of the enthusiasm for funders with more relaxed standards, including the upvotes on this article relative to the Charity Entrepreneurship one.)
EA Grants rejected 95% of the applications they got.
Sure, but an EA hotel seems like a weird way to address this inefficiency: only few people with worthwhile projects can move to Blackpool to benefit from it, the funding is not flexible, it’s hard to target this well, the project has some time lag, etc. The most reasonable approach to fixing this is simply to give more money to some of the projects that didn’t get funded.
Maybe CEA will accept 20-30% of EA Grants applications in the next round, or other donors will jump in to fill the gaps. (I’d expect that a lot of the grants applications (maybe half) might have been submitted by people not really familiar with EA, and some of the others weren’t worth funding.)
[Disclosure: I’m planning to move to Blackpool before the end of this month.]
If you’re working on a project full-time, there’s a good chance you’re not location-constrained.
I’m not sure what you’re referring to.
Over 3 months passed between the EA grants announcement and disbursement. Does that count as “time lag”?
The disadvantages you cite don’t seem compelling to me alongside the advantages cited in this post: dramatically lower costs, supportive EA community, etc. Yes, it’s not a great fit for every project—but if you’re offered a bargain on supporting one project, it seems silly to avoid taking it just because you weren’t offered a bargain on supporting some other project.
I think maybe our core disagreement is that you believe the bottom 95% of EA projects are risky and we should discourage people from funding them. Does that sound like an accurate summary of your beliefs?
I’ve written some about why I want to see more discussion of downside risk (and I’m compiling notes for several longer posts—maybe if I was living in Blackpool I’d have enough time to write them). However, the position that we should discourage funding the bottom 95% of projects seems really extreme to me, and it also seems like a really crude way to address downside risk.
Even if there is some downside risk from any given project, the expected value is probably positive, solely based on the fact that some EA thinks it’s a good idea. Value of information is another consideration in favor of taking action, especially doing research (I’m guessing most people who move to Blackpool will want to do research of some sort).
As for a good way to decrease downside risk, I would like to see a lot more people do what Joey does in this post and ask people to provide the best arguments they can against their project (or maybe ask a question on Metaculus).
The issue with downside risk is not that the world is currently on a wonderful trajectory that must not be disturbed. Rather, any given disturbance is liable to have effects that are hard to predict—but if we spent more time thinking and did more research, maybe we could get a bit better at predicting these effects. Loss aversion will cause us to overweight the possibility of a downside, but if we’re trying to maximize expected value then we should weight losses and gains of the same size equally. I’m willing to believe highly experienced EAs are better at thinking about downside risk, but I don’t think the advantage is overwhelming, and I suspect being a highly experienced EA can create its own set of blind spots. I am definitely skeptical that CEA can reliably pick the 33 highest-impact projects from a list of 722. Even experienced investors miss winners, and CEA is an inexperienced charitable “investor”.
Your example “the first political protest for AI regulation might in expectation do more harm than a thoughtful AI policy project could prevent” is one I agree with, and maybe it illustrates a more general principle that public relations efforts should be done in coordination with major EA organizations. Perhaps it makes sense to think about downside risk differently depending on the kind of work someone is interested in doing.
BTW, if the opinion of experienced EAs is considered reliable, maybe it’s worth noting that 80k advocated something like this a few years ago, and ideas like it have been floating around EA for a long time.
I suspect Greg/the manager would not be able to filter projects particularly well based on personal interviews; since the point of the hotel is basically ‘hits-based giving’, I think a blanket ban on irreversible projects is more useful (and would satisfy most of the concerns in the fb comment vollmer linked)
Just to play devil’s advocate for a moment, aren’t personal interviews and hits-based giving essentially the process used by other EA funders? I believe it was OpenPhil who coined the term hits-based giving. It sounds like maybe your issue is with the way funding works in EA broadly speaking, not this project in particular.
The same seems to apply to vollmer’s point about adverse selection effects. Over time, the project pool will increasingly be made up of projects everyone has rejected. So this could almost be considered a fully general counterargument against funding any project. (Note that this thinking directly opposes replaceability: replaceability encourages you to fund projects no other funder is willing to fund; this line of reasoning says just the opposite.) Anyway, I think the EA Hotel could easily be less vulnerable to adverse selection effects, if it appeals to a different crowd. I’m the first long-term resident of the hotel, and I’ve never applied for funding from any other source. (I’m self-studying machine learning at the hotel, which I don’t think I would ever get a grant for.)
Sounds like you really want a broader rule like “no irreversible projects without community consensus” or something. In general, mitigating downside risk seems like an issue that’s fairly orthogonal to establishing low cost of living EA hubs.
I think the Unilateralist’s Curse is relevant here.
The Unilateralist’s Curse refers to situations where many people oppose a project and only one person supports it. This is an important case to consider, but I think a randomly chosen EA project is unlikely to fall into this category.
Hmm. I think the Unilateralist’s Curse rests on the assumption that individuals underestimate potential downsides relative to their estimations concerning potential upsides, at least when it comes to the consequences for other people. (Anecdotally this assumption seems likely but I basically have no idea if it’s true.) Centralised coordination/control is a way to counteract that. But the situation is even worse in this case because not only are we potentially facilitating projects with no centralised filtering, we’re actually selecting for projects that are more likely to have been rejected by a central filter.
Thank you for prompting me to clarify my thinking here. I expect it’s wrong, I haven’t read Bostrom’s paper on it, but that’s where I’m up to.
(Also it might be worth me saying that I currently still think the EA Hotel has positive expected value—I don’t think it’s giving individuals enough power for the Unilateralist’s Curse to really apply. But it’s worth continuing to think about the UC and it doesn’t seem clear to me that “the fact that some EA thinks it’s a good idea” is sufficient grounds to attribute positive expected value to a project, given no other information, which seemed to be what you were saying.)
OpenPhil funding OpenAI might be a case of a “central” organization taking unilateral action that’s harmful. vollmer also mentions that he thinks some of EAF’s subprojects were probably negative impact elsewhere in this thread—presumably the EAF is relatively “central”.
If we think that “individuals underestimate potential downsides relative to their estimations concerning potential upsides”, why do we expect funders to be immune to this problem? There seems to be an assumption that if you have a lot of money, you are unusually good at forecasting potential downsides. I’m not sure. People like Joey and Paul Christiano have offered prizes for the best arguments against their beliefs. I don’t believe OpenPhil has ever done this, despite having a lot more money.
In general, funding doesn’t do much to address the unilateralist’s curse because any single funder can act unilaterally to fund a project that all the other funders think is a bad idea. I once proposed an EA donor’s league to address this problem, but people weren’t too keen on it for some reason.
Here’s a thought experiment that might be helpful as a baseline scenario. Imagine you are explaining effective altruism to a stranger in a loud bar. After hearing your explanation, the stranger responds “That’s interesting. Funny thing, I gave no thought to EA considerations when choosing my current project. I just picked it because I thought it was cool.” Then they explain their project to you, but unfortunately, the bar is too loud for you to hear what they say, so you end up just nodding along pretending to understand. Now assume you have two options: you can tell the stranger to ditch their project, or you can stay silent. For the sake of argument, let’s assume that if you tell the stranger to ditch their project, they will ditch it, but they will also get soured on EA and be unreceptive to EA messages in the future. If you stay silent, the stranger will continue their project and remain receptive to EA messages. Which option do you choose?
My answer is, having no information about the stranger’s project, I have no particular reason to believe it will be either good or bad for the world. So I model the stranger’s project as a small random perturbation on humanity’s trajectory, of the sort that happen thousands of times per day. I see the impact of such perturbations as basically neutral on expectation. In the same way the stranger’s project could have an unexpected downside, it could also have an unexpected upside. And in the same way that the stranger’s actions could have some nasty unforeseen consequence, my action of discouraging the stranger could also have some nasty unforeseen consequence! (Nasty unforeseen consequences of my discouragement action probably won’t be as readily observable, but that doesn’t mean they won’t exist.) So I stay silent, because I gain nothing on expectation by objecting to the project, and I don’t want to pay the cost of souring the stranger on EA.
Suppose you agree with my argument above. If so, do you think that we should default to discouraging EAs from doing projects in the absence of further information? Why? It seems a bit counterintuitive/implausible that being part of the EA community would increase the odds that someone’s project creates a downside. If anything, it seems like being plugged into the community should increase a person’s awareness of how their project might pose a risk. (Consider the EA hotel in comparison to an alternative of having people live cheaply as individuals. Being part of a community of EAs = more peer eyeballs on your project = more external perspectives to spot unexpected downsides.) And in the same way giving strangers default discouragement will sour them on EA, giving EAs default discouragement on doing any kind of project seems like the kind of thing that will suck the life out of the movement.
I don’t want to be misinterpreted, so to clarify:
I am in favor of people discouraging projects if, after looking at the project, they actually think the project will be harmful.
I am in favor of bringing up considerations that suggest a project might be harmful with the people engaged in it, even if you aren’t sure about the project’s overall impact.
I’m in favor of people trying to make the possibility of downsides mentally available, so folks will remember to check for them.
I’m in favor of more people doing what Joey does and offering prizes for arguments that their projects are harmful.
I’m in favor of people publicly making themselves available to shoot holes in the project ideas of others.
I’m in favor of people in the EA community trying to coordinate more effectively, engage in moral trade, cooperate in epistemic prisoner’s dilemmas, etc.
In general, I think brainstorming potential downsides has high value of information and people should do it more. But a gamble can still be positive expected value without having purchased any information! (Also, in order to avoid bias, maybe you should try to spend an equal amount of time brainstorming unexpected upsides.)
I think it may be reasonable to focus on projects which have passed the acid test of trying to think of plausible downsides (since those projects are likely to be higher expected value).
But I don’t really see what purpose blanket discouragement serves.
The OpenPhil/OpenAI article was a good read, thanks, although I haven’t read the comments on either post or Ben’s latest thoughts, and I don’t really have an opinion either way on the value/harm of OpenPhil funding OpenAI if they did so “to buy a seat on OpenAI’s board for Open Philanthropy Project executive director Holden Karnofsky”. But of course, I wasn’t suggesting that centralised action is never harmful; I was suggesting that it’s better on average [edit: in UC-type scenarios, which I’m not sure your two examples were...man this stuff is confusing!]. It’s also ironic that part of the reason funding OpenAI might have been a bad idea seems to be that it creates more of a Unilateralist’s Curse scenario (although I did notice that the first comment claims this is not their current strategy): “OpenAI’s primary strategy is to hire top AI researchers to do cutting-edge AI capacity research and publish the results, in order to ensure widespread access.”
Excellent question. No strong opinion as I’m still in anecdote territory here, but I reckon emotional attachment to one’s own grand ideas is what’s driving the underestimation of risk, and you’d expect funders to be able to assess ideas more dispassionately.
I’m not sure that EA is all that relevant to the answer I’d give in your thought experiment. If they didn’t have much power then I’d say go for it. If their project would have large consequences before anyone else could step in I’d say stop. As I said before, “I currently still think the EA Hotel has positive expected value—I don’t think it’s giving individuals enough power for the Unilateralist’s Curse to really apply.” I genuinely do expect the typical idea someone has for improving the status quo to be harmful, whether they’re an EA or a stranger in a bar. Most of the time it’s good to encourage innovation anyway, because there are feedback mechanisms/power structures in place to stop things getting out of hand if they start to really not look like good ideas. But in UC-type scenarios i.e. where those checks are not in place, we have a problem.
We might be talking past each other. Perhaps we agree that: In your typical real-life scenario i.e. where an individual does not have unilateral power, we should encourage them to pursue their altruistic ideas. Perhaps this was even what you were saying originally, and I just misinterpreted it.
[Edit: I’m pretty sure we’re talking past each other to at least some extent. I don’t think there should be “blanket discouragement”. I think the typical project that someone/an EA thinks is a good idea is in fact a bad idea, but that they should test it anyway. I do think there should be blanket discouragement of actions with large consequences that can be taken by a small minority without the endorsement of others (eg. relating to reputational risk or information hazards).]
Hi Vollmer, appreciate your criticism. Upvoted for that.
Do you realise that the figure cited (£3-4k a year) isn’t rent cost? It’s total living cost. At least in my case that’s about a quarter of what I’m currently running on, and I’m pretty cheap. For others the difference might be much larger.
For example, a project might have a genuinely high-impact idea that doesn’t depend on location. Instead of receiving $150k from CEA to run for half a year in the Bay with 3 people, they could receive $50k and run for 3 years in Blackpool with 6 people. CEA could then fund 3 times as many projects, and its impact would effectively stretch 6 × 2 × 3 = 36 times further. Coming from that perspective, staying in the world’s most expensive cities looks hard to justify, at least for projects (coding, research, etc.) that wouldn’t gain an even stronger multiplier from being on location. And this isn’t just hypothetical: I know of at least one project that is most likely moving its team to the EA hotel.
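To spell out where that 36× figure comes from, here is a rough breakdown, assuming the illustrative numbers above (half a year with 3 people for $150k in the Bay, versus 3 years with 6 people for $50k in Blackpool):

```latex
% Rough multiplier, using the illustrative figures above
\underbrace{\frac{3\ \text{years}}{0.5\ \text{years}}}_{6\times\text{ the runtime}}
\times
\underbrace{\frac{6\ \text{people}}{3\ \text{people}}}_{2\times\text{ the team size}}
\times
\underbrace{\frac{\$150\text{k}}{\$50\text{k}}}_{3\times\text{ as many projects funded}}
= 6 \times 2 \times 3 = 36
```

Equivalently: the Bay option buys 1.5 person-years for $150k, while the Blackpool option buys 18 person-years for $50k, i.e. roughly 36 times as many person-years per dollar.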
I’m pretty sure EA funders consider many projects net-positive even if they don’t consider them worth funding, for the same reason that I’d buy a car if I could afford one. Does that mean I think cars are lower-quality than my bicycle? Nope.
Imo it’s a very simple equation. EAs need money to live, so they trade (waste) a major slice of their resources on ineffective endeavors in exchange for money. We can take away those needs for <10% of the cost, effectively letting a large number of people go from part-time to full-time EA work. Assuming the distribution of EA effectiveness isn’t too steeply unequal (i.e. there are still effective EAs out there), this intervention is the most effective I’ve seen thus far.
Yes, I do. But at a time when talent is a bigger constraint than funding, I’d rather create $100k worth of impact at a financial cost of $25k than $50k worth at a cost of $4k. Often, interacting in person with specific people in specific places (often in major hubs) will enable you to increase your impact substantially. This isn’t true for everyone, and not always, but it will often be the case, even for coding/research projects. E.g. it’s commonly accepted wisdom that for-profit (coding) startups can increase their value substantially by moving to the Bay, and individual programmers can increase their salaries by more than the higher living cost by moving there. Similar things might apply to EA projects in Oxford / London / Berkeley / San Francisco.
So the potential benefits of the EA hotel might be somewhat limited, and there might also be some costs / harms (as I mentioned in the other comments).
I see Facebook and Twitter share buttons at the bottom of the post (but only when I load the page on my phone). They currently show 174 and 18 next to them. That seems like an excessive number of Facebook shares; surely it can’t be right? I’ve only seen, and been tagged in, one. Clicking on the numbers gives no information about where the shares went; it just brings up a share window and increments the counter. So that may explain why the numbers are so high (i.e. people clicking to see where all these shares are going, but only adding to a false count).
Regarding studying, it would mainly be suitable for those doing so independently online (it’s possible to take many world-class courses on edX and Coursera for free). But it could also be of use to university students outside of term time (say, to do extra classes online, or an independent research project, over the summer).
As John Maxwell says, I don’t think we are there yet with current seed funding options.
This might indeed be so, but given the much lower costs it’s possible that the quality-adjusted-work-per-£-spent rate could still be equal to—or higher than—the community average.
I think it’s important to have experienced EAs in these positions for this reason.
Regarding “bad” EA projects, only one comes to mind, and it doesn’t seem to have caused much lasting damage. In the OP, I say that the “dynamics of status and prestige in the non-profit world seem to be geared toward being averse to risk-of-failure to a much greater extent than in the for-profit world (see e.g. the high rate of failure for VC-funded start-ups). Perhaps we need to close this gap, considering that the bottom line results of EA activity are often considered in terms of expected utility.” Are PR concerns a solid justification for this discrepancy between EA and VC? Or do Spencer Greenberg’s concerns about start-ups mean that EA is right in this regard and it’s VC that is wrong (even in terms of their approach to maximising monetary value)?
There’s nothing wrong with this, as long as people participating at the hotel for largely social reasons pay their own way (and don’t disrupt others’ work).
Just wanted to flag that I disagree with this for a number of reasons. E.g. I think some of EAF’s sub-projects probably had negative impact, and I’m skeptical that these plus InIn were the only ones. I might write an EA forum post about how EA projects can have negative impacts at some point but it’s not my current priority. See also this facebook comment for some of the ideas.
Regarding your last point, VCs are maximizing their own profit, but Spencer talks about externalities.
Following on vollmer’s point, it might be reasonable to have a blanket rule against policy/PR/political/etc work—anything that is irreversible and difficult to evaluate. “Not being able to get funding from other sources” is definitely a negative signal, so it seems worthwhile to restrict guests to projects whose worst possible outcome is unproductively diverting resources.
On the other hand, I really can’t imagine what harm research projects could do; I guess the worst-case scenario is someone so persuasive they can convince lots of EAs of their ideas, but so bad at research that their ideas are all wrong, which doesn’t seem very likely. (Why not worry about malicious and persuasive people instead? Because the community can probably identify those more easily from the subjects they write about.)
Furthermore, guests’ ability to engage in negative-EV projects will be constrained by the low stipend and terrible location (if I wanted to engage in Irish republican activism, living at the EA hotel wouldn’t help very much). I think the largest danger to be alert for is reputational risk, especially from bad popularizations of EA, since this is easier to do remotely. (One example is Intentional Insights, the only negative-EV EA project I know of.)
This basically applies to everything as a matter of degree, so it looks impossible to put in a blanket rule. Suppose I raise £10 and send it to AMF. That’s irreversible. Is it difficult to evaluate? Depends what you mean by ‘difficult’ and what the comparison class is.
I agree research projects are more robustly positive. Information hazards are one main way in which they could do a significant amount of harm.
Fully-funded living expenses could also open up the option of The Open University for some people.
I think vollmer just meant to caution against readers taking upvotes as a proxy for the value of a project.
I thought this was pretty vague and abstract. You should say why you expect this particular project to suck!
I also wonder what the target market is. EAs doing remote work? EAs who need really cheap accommodation for a certain period of time?
I wasn’t making a point about this particular project, but about all the projects this particular project would help.