Shutting Down the Lightcone Offices

Lightcone recently decided to close down a big project we’d been running for the last 1.5 years: an office space in Berkeley for people working on x-risk/EA/rationalist things, which we opened in August 2021.

We haven’t written much about why, but Ben and I wrote some messages on the internal office Slack to explain some of our reasoning, which we’ve copy-pasted below (they are from Jan 26th). I might write a longer retrospective sometime, but these messages seemed easy to share, and it seemed good to have something I can more easily refer to publicly.

Background data

Below is a graph of weekly unique keycard-visitors to the office in 2022.

The x-axis is each week (skipping the first 3), and the y-axis is the number of unique visitors-with-keycards.

[Graph: Weekly unique visitors with keycards in 2022. There was a lot of seasonality to the office.]

The distribution of people by how many days they came (in 2022) looks like this:

[Graph not reproduced here.]

Members could bring in guests, which happened quite a bit and isn’t measured in the keycard data above, so I think the total number of people who came by the offices is 30-50% higher.

The offices opened in August 2021. Including guests, parties, and all the time not shown in the graphs, I’d estimate around 200-300 more people visited, for a total of around 500-600 people who used the offices.

The offices cost $70k/month in rent [1], around $35k/month on food and drink, and ~$5k/month on contractor time for the office. They also cost core Lightcone staff time, which I’d estimate at around $75k/year.
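Adding these up (a rough back-of-the-envelope total of my own, taking the figures above at face value and treating the staff-time estimate as an annual cost):

$(\$70\text{k} + \$35\text{k} + \$5\text{k}) \times 12 + \$75\text{k} = \$1{,}395\text{k} \approx \$1.4\text{M per year}$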

Ben’s Announcement

Closing the Lightcone Offices @channel

Hello there everyone,

Sadly, I’m here to write that we’ve decided to close down the Lightcone Offices by the end of March. While we initially intended to transplant the office to the Rose Garden Inn, Oliver has decided (and I am on the same page about this decision) to make a clean break going forward to allow us to step back and renegotiate our relationship to the entire EA/​longtermist ecosystem, as well as change what products and services we build.

Below I’ll give context on the decision and other details, but the main practical information is that the office will no longer be open after Friday March 24th. (There will be a goodbye party on that day.)

I asked Oli to briefly state his reasoning for this decision, here’s what he says:

An explicit part of my impact model for the Lightcone Offices has been that its value was substantially dependent on the existing EA/​AI Alignment/​Rationality ecosystem being roughly on track to solve the world’s most important problems, and that while there are issues, pouring gas into this existing engine, and ironing out its bugs and problems, is one of the most valuable things to do in the world.

I had been doubting this assumption of our strategy for a while, even before FTX. Over the past year (with a substantial boost from the FTX collapse) my actual trust in this ecosystem and interest in pouring gas into this existing engine has greatly declined, and I now stand before what I have helped build with great doubts about whether it all will be or has been good for the world.

I respect many of the people working here, and I am glad about the overall effect of Lightcone on this ecosystem we have built, and am excited about many of the individuals in the space, and probably in many, maybe even most, future worlds I will come back with new conviction to invest and build out this community that I have been building infrastructure for for almost a full decade. But right now, I think both me and the rest of Lightcone need some space to reconsider our relationship to this whole ecosystem, and I currently assign enough probability that building things in the space is harmful for the world that I can’t really justify the level of effort and energy and money that Lightcone has been investing into doing things that pretty indiscriminately grow and accelerate the things around us.

(To Oli’s points I’ll add that the office is also an ongoing cost in terms of time, effort, and stress, and in terms of a lack of organizational focus on the other ideas and projects we’d like to pursue.)

Oli, myself, and the rest of the Lightcone team will be available to discuss this further in the channel #closing-office-reasoning, where I invite any and all of you who wish to discuss this with me, the rest of the Lightcone team, and each other.

In the last few weeks I sat down and interviewed the people leading the 3 orgs whose primary office is here (FAR, AI Impacts, and Encultured) and 13 other individual contributors. I asked how this would affect them, how we could ease the change, and more generally about their feelings on how the ecosystem is working out.

These conversations lasted on average 45 minutes each, and it was very interesting to hear people’s thoughts about this, as well as their suggestions for other things Lightcone could work on. These conversations also left me feeling more hopeful about building related community infrastructure in the future, as I learned of a number of positive effects that I wasn’t aware of. These conversations all felt pretty real, I respect all the people involved more, and I hope to talk to many more of you at length before we close.

From the check-ins I’ve done with people, this seems to me to be enough time to not disrupt any SERI MATS mentorships, and to give the orgs here a comfortable enough amount of time to make new plans, but if this does put you in a tight spot, please talk to us and we’ll see how we can help.

The campus team (me, Oli, Jacob, Rafe) will be in the office for lunch tomorrow (Friday at 1pm) to discuss any and all of this with you. We’d like to know how this is affecting you, and I’d really like to know about costs this has for you that I’m not aware of. Please feel free (and encouraged) to just chat with us in your Lightcone channels (or in any of the public office channels too).

Otherwise, a few notes:

  • The Lighthouse system is going away when the leases end. Lighthouse 1 has closed, and Lighthouse 2 will continue to be open for a few more months.

  • If you would like to start renting your room yourself from WeWork, I can introduce you to our point of contact, who I think would be glad to continue to rent the offices. Offices cost between $1k and $6k a month depending on how many desks are in them.

  • Here’s a form to give the Lightcone team anonymous feedback about this decision (or anything). [Link removed from LW post.]

  • To talk with people about future plans starting now and after the offices close, whether to propose plans or just to let others know what you’ll be doing, I’ve made the #future-plans channel and added you all to it.

It’s been a thrilling experience to work alongside and get to know so many people dedicated to preventing an existential catastrophe, and I’ve made many new friends working here; thank you. But I think the Lightcone team and I need space to reflect and to build something better if Earth is going to have a shot at aligning the AGIs we build.

Oliver’s 1st message in #Closing-Office-Reasoning

(In response to a question on the Slack saying “I was hoping you could elaborate more on the idea that building the space may be net harmful.”)

I think FTX is the obvious way in which current community-building can be bad, though in my model of the world FTX, while somewhat of an outlier in scope, doesn’t feel like a particularly huge outlier in terms of the underlying generators. Indeed it feels not that far from par for the course for the broader ecosystem’s relationship to honesty, its aggressive pursuit of plans justified by naive consequentialism, and, more broadly, its somewhat deceptive relationship to the world.

Though again, I really don’t feel confident about the details here and am doing a bunch of broad orienting.

I’ve also written some EA Forum and LessWrong comments that point to more specific things that I am worried will have or have had a negative effect on the world:

My guess is RLHF research has been pushing on a commercialization bottleneck and had a pretty large counterfactual effect on AI investment, causing a huge uptick in investment into AI and potentially an arms race between Microsoft and Google towards AGI: https://www.lesswrong.com/posts/vwu4kegAEZTBtpT6p/thoughts-on-the-impact-of-rlhf-research?commentId=HHBFYow2gCB3qjk2i

Thoughts on how responsible EA was for the FTX fraud: https://forum.effectivealtruism.org/posts/Koe2HwCQtq9ZBPwAS/quadratic-reciprocity-s-shortform?commentId=9c3srk6vkQuLHRkc6

Tendencies towards pretty mindkilly PR-stuff in the EA community: https://forum.effectivealtruism.org/posts/ALzE9JixLLEexTKSq/cea-statement-on-nick-bostrom-s-email?commentId=vYbburTEchHZv7mn4

I feel quite worried that the alignment plan of Anthropic currently basically boils down to “we are the good guys, and by doing a lot of capabilities research we will have a seat at the table when AI gets really dangerous, and then we will just be better/​more-careful/​more-reasonable than the existing people, and that will somehow make the difference between AI going well and going badly”. That plan isn’t inherently doomed, but man does it rely on trusting Anthropic’s leadership, and I genuinely only have marginally better ability to distinguish the moral character of Anthropic’s leadership from the moral character of FTX’s leadership, and in the absence of that trust the only thing we are doing with Anthropic is adding another player to an AI arms race.

More broadly, I think AI Alignment ideas/​the EA community/​the rationality community played a pretty substantial role in the founding of the three leading AGI labs (Deepmind, OpenAI, Anthropic), and man, I sure would feel better about a world where none of these would exist, though I also feel quite uncertain here. But it does sure feel like we had a quite large counterfactual effect on AI timelines.

Before the whole FTX collapse, I also wrote this long list of reasons for why I feel quite doomy about stuff (posted in replies, to not spam everything).

Oliver’s 2nd message

(Originally written October 2022) I’ve recently been feeling a bunch of doom around a bunch of different things, and an associated lack of direction for both myself and Lightcone.

Here is a list of things that I currently believe that try to somehow elicit my current feelings about the world and the AI Alignment community.

  1. In most worlds RLHF, especially if widely distributed and used, seems to make the world a bunch worse from a safety perspective (by making unaligned systems appear aligned at lower capabilities levels, meaning people are less likely to take alignment problems seriously, and by leading to new products that will cause lots of money to go into AI research, as well as giving a strong incentive towards deception at higher capability levels)

  2. It’s a bad idea to train models directly on the internet, since the internet as an environment makes supervision much harder, strongly encourages agency, has strong convergent goals around deception, and also gives rise to a bunch of economic applications that will cause more money to go into AI

  3. The EA and AI Alignment community should probably try to delay AI development somehow, and this will likely include getting into conflict with a bunch of AI capabilities organizations, but it’s worth the cost

  4. I don’t currently see a way to make AIs very useful for doing additional AI Alignment research, and don’t expect any of the current approaches for that to work (like ELK, or trying to imitate humans by doing more predictive modeling of human behavior and then hoping they turn out to be useful), but it sure would be great if we found a way to do this (but like, I don’t think we currently know how to do this)

  5. I am quite worried that it’s going to be very easy to fool large groups of humans, and that AI is quite close to seeming very aligned and sympathetic to executives at AI companies, as well as to many AI alignment researchers (and definitely to large parts of the public). I don’t think this will be the result of human modeling, but just the result of pushing the AI into patterns of speech/behavior that we associate with being less threatening and more trustworthy. In some sense this isn’t a catastrophic risk, because this kind of deception doesn’t cause the AI to disempower the humans, but I do expect it to make actually getting the research to stop, or getting lots of resources spent on alignment, a lot harder later on.

  6. I do sure feel like a lot of AI alignment research is very suspiciously indistinguishable from capabilities research, and I think this is probably for the obvious bad reasons instead of this being an inherent property of these domains (the obvious bad reason being that it’s politically advantageous to brand your research as AI Alignment research and capabilities research simultaneously, since that gives you more social credibility, especially from the EA crowd which has a surprisingly strong talent pool and is also just socially close to a lot of top AI capabilities people)

  7. I think a really substantial fraction of people who are doing “AI Alignment research” are instead acting with the primary aim of “make AI Alignment seem legit”. These are not the same goal, a lot of good people can tell and this makes them feel kind of deceived, and also this creates very messy dynamics within the field where people have strong opinions about what the secondary effects of research are, because that’s the primary thing they are interested in, instead of asking whether the research points towards useful true things for actually aligning the AI.

  8. More broadly, I think one of the primary effects of talking about AI Alignment has been to make more people get really hyped about AGI, and be interested in racing towards AGI. Generally knowing about AGI-Risk does not seem to have made people more hesitant towards racing and slow down, but instead caused them to accelerate progress towards AGI, which seems bad on the margin since I think humanity’s chances of survival do go up a good amount with more time.

  9. It also appears that people who are concerned about AGI risk have been responsible for a very substantial fraction of progress towards AGI, suggesting that there is a substantial counterfactual impact here, and that people who think about AGI all day are substantially better at making progress towards AGI than the average AI researcher (though this could also be explained by other attributes like general intelligence or openness to weird ideas that EA and AI Alignment selects for, though I think that’s somewhat less likely)

  10. A lot of people in AI Alignment I’ve talked to have found it pretty hard to have clear thoughts in the current social environment, and many of them have reported that getting out of Berkeley, or getting social distance from the core of the community, has made them produce better thoughts. I don’t really know whether the increased productivity here is borne out by evidence, but really a lot of people that I considered promising contributors a few years ago are now experiencing a pretty active urge to stay away from the current social milieu.

  11. I think all of these considerations in aggregate make me worried that a lot of current work in AI Alignment field-building and EA community-building is net-negative for the world, and that a lot of my work over the past few years has been bad for the world (most prominently transforming LessWrong into something that looks a lot more respectable, in a way that I am worried might have shrunk the Overton window of what can be discussed there by a lot, and having generally contributed to a bunch of these dynamics).

  12. Exercising some genre-savviness, I also think a bunch of this is driven by just a more generic “I feel alienated by my social environment changing and becoming more professionalized and this is robbing it of a lot of the things I liked about it”. I feel like when people feel this feeling they often are holding on to some antiquated way of being that really isn’t well-adapted to their current environment, and they often come up with fancy rationalizations for why they like the way things used to be.

  13. I also feel confused about how to relate to the stronger conflation of ML skills with AI Alignment skills. I don’t personally have much of a problem with learning a bunch of ML, and generally engage a good amount with the ML literature (not enough to be an active ML researcher, but enough to follow along almost any conversation between researchers), but I do also feel a bit of a sense of being personally threatened, and of other people I like and respect being threatened, by this shift towards requiring advanced cutting-edge ML knowledge in order to feel like you are allowed to contribute to the field. I do feel a bit like my social environment is being subsumed by and is adopting the status hierarchy of the ML community in a way that does not make me trust what is going on (I don’t particularly like the status hierarchy and incentive landscape of the ML community, which seems quite well-optimized to cause human extinction).

  14. I also feel like the EA community is being very aggressive about recruitment in a way that locally in the Bay Area has displaced a lot of the rationality community, and I think this is broadly bad, both for me personally and also because I just think the rationality community had more of the right components to think sanely about AI Alignment, many of which I feel like are getting lost

  15. I also feel like with Lightcone and Constellation coming into existence, and there being a lot more money and status around, the inner-circle dynamics around EA and longtermism and the Bay Area community have gotten a lot worse, and despite being a person who I think is generally pretty in the loop with stuff, I have found myself being worried and stressed about being excluded from some important community function, or some important inner circle. I am quite worried that me founding the Lightcone Offices was quite bad in this respect, by overall enshrining some kind of social hierarchy that wasn’t very grounded in things I actually care about (I also personally felt a very strong social pressure to exclude interesting but socially slightly awkward people from being in Lightcone that I ended up giving in to, and I think this was probably a terrible mistake and really exacerbated the dynamics here).

  16. I think some of the best shots we have for actually making humanity not go extinct (slowing down AI progress, pivotal acts, intelligence enhancement, etc.) feel like they have a really hard time being considered in the current Overton window of the EA and AI Alignment community, and I feel like people being unable to consider plans in these spaces both makes them broadly less sane and also just prevents work from happening in these areas.

  17. I get a lot of messages these days about people wanting me to moderate or censor various forms of discussion on LessWrong that I think seem pretty innocuous to me, and the generators of this usually seem to be reputation related. E.g. recently I’ve had multiple pretty influential people ping me to delete or threaten moderation action against the authors of posts and comments talking about: How OpenAI doesn’t seem to take AI Alignment very seriously, why gene drives against Malaria seem like a good idea, why working on intelligence enhancement is a good idea. In all of these cases the person asking me to moderate did not leave any comment of their own trying to argue for their position, before asking me to censor the content. I find this pretty stressful, and also like, most of the relevant ideas feel like stuff that people would have just felt comfortable discussing openly on LW 7 years ago or so (not like, everyone, but there wouldn’t have been so much of a chilling effect so that nobody brings up these topics).

Ben’s 1st message in #Closing-Office-Reasoning

Note from Ben: I have lightly edited this because I wrote it very quickly at the time

(I drafted this earlier today and didn’t give it much of a second pass, forgive me if it’s imprecise or poorly written.)

Here are some of the reasons I’d like to move away from providing offices as we have done so far.

  • Having two locations comes with a large cost. To track how a space is functioning, what problems people are running into, how the culture changes, and what improvements could be made, I think I need to be there at least 20% of my time each week (and ideally ~50%), and that’s a big travel cost to the focus of the Lightcone team.

  • Offices are a high-commitment abstraction that is hard to iterate on. In trying to improve a culture, I might try to help people start more new projects, or gain additional concepts that help them understand the world, or improve the standards arguments are held to, or something else. But there’s relatively little space for a lot of experimentation and negotiation in an office space — you’ve mostly made a commitment to offer a basic resource and then to get out of people’s way.

  • The “enculturation to investment” ratio was very lopsided. For example, with SERI MATS, many people came for 2.5 months, for whom I think a better selection mechanism would have been something shaped like a 4-day AIRCS-style workshop to better get to know them and think with them, and then picking a smaller number of the best people from that to invest in further. If I came up with an idea right now for what abstraction I’d prefer, it’d be something like an ongoing festival with lots of events and workshops and retreats for different audiences and different sorts of goals, with perhaps a small office for independent alignment researchers, rather than an office space that has a medium-size set of people you’re committed to supporting long-term.

  • People did not do much to invest in each other in the office. I think this is in part because the office does not capture other parts of people’s lives (e.g. socializing), but also I think most people just didn’t bring their whole spirit to this in some ways, and I’m not really sure why. I think people did not have great aspirations for themselves or each other. I did not feel that folks here had a strong common spirit — a sense that each other could grow to be world-class people who changed the course of history — and they did not wish to invest in each other in that way. (There were some exceptions to note, such as Alex Mennen’s Math Talks, John Wentworth’s Framing Practica, and some of the ways that people on the Shard Theory teams worked together with the hope of doing something incredible, all of which felt like people really investing in communal resources and in each other.) I think a common way to know whether people are bringing their spirit to something is whether they create art about it — songs, in-jokes, stories, etc. Soon after the start I felt nobody was going to really bring themselves so fully to the space, even though we hoped that people would. I think there were few new projects from collaborations in the space, other than between people who already had a long history.

And regarding the broader ecosystem:

  • Some of the primary projects getting resources from this ecosystem do not seem built using the principles and values (e.g. integrity, truth-seeking, x-risk reduction) that I care about — such as FTX, OpenAI, Anthropic, CEA, and Will MacAskill’s career as a public intellectual — and those that do seem to have closed down or been unsupported (such as FHI, MIRI, CFAR). Insofar as these are the primary projects that will reap the benefits of the resources that Lightcone invests into this ecosystem, I would like to change course.

  • The moral maze nature of the EA/longtermist ecosystem has increased substantially over the last two years, and the simulacra level of its discourse has notably risen too. There are many more careerist EAs working here and at events; it’s more professionalized and about networking. Many new EAs are here not because they have a deep-seated passion for doing what’s right and using math to get the answers, but because they’re looking for an interesting, well-paying job in a place with nice nerds, or because they’ve noticed that there are a lot of resources being handed out in a very high-trust way. One of the people I interviewed at the office said they often could not tell whether a newcomer was expressing genuine interest in some research, or was trying to figure out “how the system of reward” worked so they could play it better, because the types of questions in both cases seemed so similar. [Added to LW post: I also remember someone joining the offices to collaborate on a project, who explained that in their work they were looking for “The next Eliezer Yudkowsky or Paul Christiano”. When I asked what aspects of Eliezer they wanted to replicate, they said they didn’t really know much about Eliezer but that it was something a colleague of theirs said a lot.] It also seems to me that the simulacra level of writing on the EA Forum is increasing, whereby language is increasingly used primarily to signal affiliation and policy preferences rather than to explain how reality works. I am here in substantial part because of people (like Eliezer Yudkowsky and Scott Alexander) honestly trying to explain how the world works in their online writing and doing a damn good job of it, and I feel like there is much less of that today in the EA/longtermist ecosystem. This makes the ecosystem much harder to direct and to orient within, and makes it much harder to trust that resources intended for a given purpose will not be redirected by the various internal forces that grow against the intentions of the system.

  • The alignment field that we’re supporting seems to me to have pretty little innovation and pretty bad politics. I am irritated by the extent to which discussion is commonly framed around a Paul/Eliezer dichotomy, even while the primary person taking in orders of magnitude more funding and staff talent (Dario Amodei) has barely explicated his views on the topic and appears (from a distance) to have disastrously optimistic views about how easy alignment will be and how important it is to stay competitive with state-of-the-art models. [Added to LW post: I also generally dislike the dynamics of fake expertise and fake knowledge I sometimes see around the EA/x-risk/alignment places.

    • I recall at EAG in Oxford a year or two ago, people were encouraged to “list their areas of expertise” on their profile, and one person who works in this ecosystem listed (amongst many things) “Biorisk” even though I knew the person had only been part of this ecosystem for <1 year and their background was in a different field.

    • It also seems to me like people who show any intelligent thought or get any respect in the alignment field quickly get elevated to “great researchers that new people should learn from”, even though I think there are fewer than a dozen people who’ve produced really great work, and mostly people should think pretty independently about this stuff.

    • I similarly feel pretty worried by how (quite earnest) EAs describe people or projects as “high impact” when I’m pretty sure that if they reflected on their beliefs, they honestly wouldn’t know the sign of the person or project they were talking about, or estimate it as close-to-zero.]

How does this relate to the office?

A lot of the boundary around who is invited to the offices has been determined by:

  1. People whose x-risk reduction work the Lightcone team respects or is actively excited about

  2. People and organizations in good standing in the EA/longtermist ecosystem (e.g. whose research is widely read, who have major funding from OpenPhil/FTX, who run organizations that have caused a lot to happen, etc.), and the people working with and affiliated with them

  3. Not being people who we think would (sadly) make the space very repellent for many others to work in (e.g. people lacking basic social skills, or whom many people find scary for some reason), or who we think have violated important norms (e.g. lying, sexual assault, etc.).

The 2nd element has really dominated a lot of my choices here in the last 12 months, and (as I wrote above) this is a boundary that is increasingly filled with people who I don’t believe are here because they care about ethics, who I am not aware of having done any great work, and who I am not aware of having strong or reflective epistemologies. Even while massive amounts of resources are being poured into the EA/longtermist ecosystem, I’d like to have a far more discerning boundary around the resources I create.

  1. ^

    The office rent was about 1.5x what it needed to be. We started in a WeWork because we were prototyping whether people even wanted an office and wanted to get started quickly (the office was up and running in 3 weeks, instead of going through the slower process of signing a 12-24 month lease). Then we spent about a year figuring out where to move long-term, often wanting to preserve the flexibility of being able to move out within 2 months.