I'm certainly open to considering this business model for the hotel.
The hotel did apply.
The marginal per-EA cost of supplying runway is probably lower with shared overhead and low COL like that.
It’s about $7,500 per person per year.
As a potential grant recipient (not in this round) I might be biased, but I feel like there is a clear answer to this. No one is able to level up without criticism, and the quality of your decisions will often be bottlenecked by the amount of feedback you receive.
Negative feedback isn’t inherently painful; it only hurts if there is an alief that failure is unacceptable. In truth, failure is necessary for progress, and if you fully understand this, negative feedback feels good. Even if it’s in bad faith.
Given that grantmakers are essentially at the steering wheel of EA, we can’t afford for them not to internalize this. They need to hear all the criticism to make good decisions; they should cherish it.
Of course, we can help them reach this state of mind by celebrating their willingness to open themselves up to scrutiny, along with the scrutiny itself.
For this specific post, I probably won’t add a summary because my guess is that in this specific case the size of the beneficial effect doesn’t justify the cost.
I still think you should write it. This looks like an important bit of information that isn’t worth the full read for most people, and I estimate a summary would increase the number of readers fivefold.
I wrote that intense model, and I agree that it’s not a good post. My apologies.
I imagine EAs getting into all sorts of fields and industries while staying in the community, and this seems so valuable that it makes me second-guess the hotel.
People don’t stay in the community because, if you’re not involved professionally, there’s not much left to gain. We should change that.
I’ve proposed a solution to this problem here and here
I think part of why Y Combinator is so successful is because funding so many startups has allowed them to build a big dataset for what factors do & don’t predict success. Maybe this could become part of the EA Hotel’s mission as well.
Good idea. It will be somewhat tricky since we don’t have the luxury of measuring success in monetary terms, but we should certainly brainstorm about this at some point.
With the hotel, I see a bunch of little hints that it’s not worth my time to attempt an in-depth evaluation of the hotel’s leaders. E.g. the focus on low rent, which seems like a popular meme among average and below average EAs in the bay area, yet the EAs whose judgment I most respect act as if rent is a relatively small issue.
Your post suggests that there is some class of EAs that is a lot more competent than everyone else, which means that what everyone else is doing doesn’t matter all that much. While I haven’t met (or recognized) many people who impress me this much, I still give this idea a lot of credence. I’d like to verify it for myself, to get on the same page with you (and perhaps even change my plans). Could you name some examples, besides Drexler and Bostrom, of EAs who are at this level of competence?
I’m not looking for credentials, I’m looking for resources that demonstrate how these people are thinking, or stories about impressive feats, so I can convince my S1 to sit down and be humble (and model their minds so I can copy the good bits).
I have burned out slightly, but this has happened every 6 months or so for the past 5 years, so it’s probably not caused by the hotel.
At the very least, I agree that one coherent thread is more healthy and something to strive for, but in choosing a thread you might want to be aware of the various stakeholders and their incentives.
I find that counting myself and my needs into my moral framework makes my moral framework more robust.
I’d argue that humans would actually be better understood as an aggregate of agents, each with their own utility function. In your case, these agents might cooperate so well that your internal experience is that you’re just one agent, but that’s certainly not a human universal.
I would rather not. This would pressure people into goodharting their projects for legibility, which is one of the things our setup is supposed to prevent.
(tldr: an agent is legible if a principal can easily monitor them, but it limits their options to what is easy for the principal to measure, which might reduce performance)
Quite a few of our guests are not even on this list, but this doesn’t mean they’re sitting around doing nothing all day. They’re doing illegible work that is hard or even impossible to evaluate at a distance. I put a few examples in the second caveat of the post.
(I realise this is at odds with the EA maxim of measuring outcomes. That’s why we published this post: so the hotel could at least be evaluated in aggregate. I think it’s neat that people with illegible work can hide behind legible ones)
I realise that I’ve been implicitly assuming this is true, which made me resist optimizing for impressions; if I did that, I could no longer convince myself that I was acting altruistically. The awful, hard-to-accept reality is that you sometimes do have to convince people in order for your work to be supported.
1. Does RAISE/the Hotel have a standardized way to measure the progress of people self-studying AI? If so, especially if it’s been vetted by AI risk organizations, it seems like that would go a long way towards resolving this issue.
Not yet, but it’s certainly a project that is on our radar. We also want to find ways to measure innate talent, so that people can tell earlier whether AIS research would be a good fit for them.
I do think it affects their behavior, I just refuse to let it affect mine more than is strictly necessary, because I think it’s a negative sum game.
Strong upvoted, and thank you, because finally someone is honest about their doubts. You’re as critical in your speech as you are in your thoughts. This should be standard, but it’s rare.
projects that seem pretty tragic, like “writing a novel on AI alignment” and “writing a mobile game”—it’s a difficult balance here: unoccupied rooms are doing nothing for the hotel, but equally I doubt indulging these sorts of things is valuable
This is what I understand to be hits-based giving. If you have 20 rooms, you can make these kinds of weird gambles, and someone should be doing that.
Poor presentation: I found the post on expected value essentially incoherent as a pitch, but in all of the posts so far little thought seems to have been put into the elevator pitch of why to fund this or what the best aspects of the project are. Funders want a one-paragraph or one-sentence summary of why they should fund it, which seems absent here
I take full responsibility for that. Perhaps I should have studied how other meta organisations estimate their value; it was hubristic of me to assume I could do it from scratch.
People don’t want to be associated with something low status
I’d rather assume EAs to be above status concerns when the stakes are this high.
not even close to every current/previous resident had made even a nominal donation of £5 to the campaign
I don’t see why they should. At that point you’re just manipulating impressions. I want to present an honest picture, and I don’t want to engage in a signalling race to the bottom.
Perhaps that’s naive.
I’m not at all convinced that the counterfactual would be working on their problems in solitude.
I wouldn’t be convinced either, but we interviewed our guests, and 15 out of 20 were already doing the same work before taking up residence at the hotel. They were either working part-time or burning through runway.
That’s a good point. You made me aware of a certain population of potential hotel residents who would be better off building career capital elsewhere. But I think “almost every case” is an overstatement. Here are some idealized examples, for the sake of argument:
The person with the high-profile career who decides to do independent research instead of taking a job at a multinational NGO that would eventually have led them to a lot more influence
The EA-adjacent software developer who would have drifted out of the community, if not for a place at the EA Hotel where they’re doing useful knowledge work
The entrepreneurial person who starts an EA organisation at the hotel, instead of doing a second-rate Master’s degree in relative obscurity because they were never good at caring about grades
Would you agree that the first would be a net loss, while the second and the third would be a net gain? I’m curious what you think our pool of residents is like, and how this influences your opinion.
Meta: I’m concerned about the number of downvotes I see that aren’t accompanied by any justification. Consider that there is a lot of information value in a negative judgment. I imagine the author would be very happy to hear about it, and more generally, I imagine that EA as a whole would skill up a *lot* faster if downvotes came with reasons.