Lightcone recently decided to close down a big project we’d been running for the last 1.5 years: an office space in Berkeley for people working on x-risk/EA/rationalist things, which we opened in August 2021.
We haven’t written much about why, but Ben and I wrote some messages on the internal office Slack explaining some of our reasoning, which we’ve copy-pasted below (they are from January 26th). I might write a longer retrospective sometime, but these messages seemed easy to share, and it seemed good to have something I can more easily refer to publicly.
Background data
Below is a graph of weekly unique keycard-visitors to the office in 2022.
The x-axis is each week (skipping the first 3), and the y-axis is the number of unique visitors-with-keycards.
Members could bring in guests, which happened quite a bit and isn’t measured in the keycard data below, so I think the total number of people who came by the offices is 30-50% higher.
The offices opened in August 2021. Including guests, parties, and all the time not shown in the graphs, I’d estimate around 200-300 more people visited, for a total of around 500-600 people who used the offices.
The offices cost $70k/month in rent [1], around $35k/month on food and drink, and ~$5k/month on contractor time for the office. They also cost core Lightcone staff time, which I’d estimate at around $75k/year.
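Annualizing those figures (roughly, and assuming the monthly costs held steady over the year):

$$(\$70\text{k} + \$35\text{k} + \$5\text{k}) \times 12 + \$75\text{k} \approx \$1.4\text{M per year}$$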
Ben’s Announcement
Closing the Lightcone Offices @channel
Hello there everyone,
Sadly, I’m here to write that we’ve decided to close down the Lightcone Offices by the end of March. While we initially intended to transplant the office to the Rose Garden Inn, Oliver has decided (and I am on the same page about this decision) to make a clean break going forward to allow us to step back and renegotiate our relationship to the entire EA/longtermist ecosystem, as well as change what products and services we build.
Below I’ll give context on the decision and other details, but the main practical information is that the office will no longer be open after Friday March 24th. (There will be a goodbye party on that day.)
I asked Oli to briefly state his reasoning for this decision; here’s what he says:
An explicit part of my impact model for the Lightcone Offices has been that its value was substantially dependent on the existing EA/AI Alignment/Rationality ecosystem being roughly on track to solve the world’s most important problems, and that while there are issues, pouring gas into this existing engine, and ironing out its bugs and problems, is one of the most valuable things to do in the world.
I had been doubting this assumption of our strategy for a while, even before FTX. Over the past year (with a substantial boost from the FTX collapse) my actual trust in this ecosystem and interest in pouring gas into this existing engine has greatly declined, and I now stand before what I have helped build with great doubts about whether it all will be or has been good for the world.
I respect many of the people working here, I am glad about the overall effect of Lightcone on this ecosystem we have built, and I am excited about many of the individuals in the space. In many, maybe even most, future worlds I will probably come back with new conviction to invest in and build out this community that I have been building infrastructure for for almost a full decade. But right now, I think both I and the rest of Lightcone need some space to reconsider our relationship to this whole ecosystem, and I currently assign enough probability to building things in this space being harmful for the world that I can’t really justify the level of effort, energy, and money that Lightcone has been investing into doing things that pretty indiscriminately grow and accelerate the things around us.
(To Oli’s points I’ll add that it’s also an ongoing cost in terms of time, effort, and stress, and in terms of a lack of organizational focus on the other ideas and projects we’d like to pursue.)
Oli, myself, and the rest of the Lightcone team will be available to discuss more about this in the channel #closing-office-reasoning, where I invite any and all of you who wish to discuss this with me, the rest of the Lightcone team, and each other.
In the last few weeks I sat down and interviewed the people leading the 3 orgs whose primary office is here (FAR, AI Impacts, and Encultured) and 13 other individual contributors. I asked about how this would affect them and how we could ease the change, and generally got their feelings about how the ecosystem is working out.
These conversations lasted on average 45 mins each, and it was very interesting to hear people’s thoughts about this, and also their suggestions about other things Lightcone could work on. These conversations also left me feeling more hopeful about building related community infrastructure in the future, as I learned of a number of positive effects that I wasn’t aware of. These conversations all felt pretty real, I respect all the people involved more, and I hope to talk to many more of you at length before we close.
From the check-ins I’ve done with people, this seems to me to be enough time to not disrupt any SERI MATS mentorships, and to give the orgs here a comfortable enough amount of time to make new plans, but if this does put you in a tight spot, please talk to us and we’ll see how we can help.
The campus team (me, Oli, Jacob, Rafe) will be in the office for lunch tomorrow (Friday at 1pm) to discuss any and all of this with you. We’d like to know how this is affecting you, and I’d really like to know about costs this has for you that I’m not aware of. Please feel free (and encouraged) to just chat with us in your Lightcone channels (or in any of the public office channels too).
Otherwise, a few notes:
The Lighthouse system is going away when the leases end. Lighthouse 1 has closed, and Lighthouse 2 will continue to be open for a few more months.
If you would like to start renting your room yourself from WeWork, I can introduce you to our point of contact, who I think would be glad to continue to rent the offices. Offices cost between $1k and $6k a month depending on how many desks are in them.
Here’s a form to give the Lightcone team anonymous feedback about this decision (or anything). [Link removed from LW post.]
To talk with people about future plans starting now and after the offices close, whether to propose plans or just to let others know what you’ll be doing, I’ve made the #future-plans channel and added you all to it.
It’s been a thrilling experience to work alongside and get to know so many people dedicated to preventing an existential catastrophe, and I’ve made many new friends working here. Thank you. But I think the Lightcone team and I need space to reflect and to build something better if Earth is going to have a shot at aligning the AGIs we build.
Oliver’s 1st message in #Closing-Office-Reasoning
(In response to a question on the Slack saying “I was hoping you could elaborate more on the idea that building the space may be net harmful.”)
I think FTX is the obvious way in which current community-building can be bad, though in my model of the world FTX, while somewhat of an outlier in scope, doesn’t feel like a particularly huge outlier in terms of the underlying generators. Indeed it feels not that far from par for the course of the broader ecosystem’s relationship to honesty, aggressively pursuing plans justified by naive consequentialism, and more broadly having a somewhat deceptive relationship to the world.
Though again, I really don’t feel confident about the details here and am doing a bunch of broad orienting.
I’ve also written some EA Forum and LessWrong comments that point to more specific things that I am worried have had, or will have, a negative effect on the world:
My guess is RLHF research has been pushing on a commercialization bottleneck and has had a pretty large counterfactual effect, causing a huge uptick in investment into AI and potentially an arms race between Microsoft and Google towards AGI: https://www.lesswrong.com/posts/vwu4kegAEZTBtpT6p/thoughts-on-the-impact-of-rlhf-research?commentId=HHBFYow2gCB3qjk2i
Thoughts on how responsible EA was for the FTX fraud: https://forum.effectivealtruism.org/posts/Koe2HwCQtq9ZBPwAS/quadratic-reciprocity-s-shortform?commentId=9c3srk6vkQuLHRkc6
Tendencies towards pretty mindkilly PR-stuff in the EA community: https://forum.effectivealtruism.org/posts/ALzE9JixLLEexTKSq/cea-statement-on-nick-bostrom-s-email?commentId=vYbburTEchHZv7mn4
I feel quite worried that the alignment plan of Anthropic currently basically boils down to “we are the good guys, and by doing a lot of capabilities research we will have a seat at the table when AI gets really dangerous, and then we will just be better/more-careful/more-reasonable than the existing people, and that will somehow make the difference between AI going well and going badly”. That plan isn’t inherently doomed, but man does it rely on trusting Anthropic’s leadership, and I genuinely only have marginally better ability to distinguish the moral character of Anthropic’s leadership from the moral character of FTX’s leadership, and in the absence of that trust the only thing we are doing with Anthropic is adding another player to an AI arms race.
More broadly, I think AI Alignment ideas/the EA community/the rationality community played a pretty substantial role in the founding of the three leading AGI labs (Deepmind, OpenAI, Anthropic), and man, I sure would feel better about a world where none of these would exist, though I also feel quite uncertain here. But it does sure feel like we had a quite large counterfactual effect on AI timelines.
Before the whole FTX collapse, I also wrote this long list of reasons for why I feel quite doomy about stuff (posted in replies, to not spam everything).
Oliver’s 2nd message
(Originally written October 2022) I’ve recently been feeling a bunch of doom around a bunch of different things, and an associated lack of direction for both myself and Lightcone.
Here is a list of things that I currently believe that try to somehow elicit my current feelings about the world and the AI Alignment community.
In most worlds RLHF, especially if widely distributed and used, seems to make the world a bunch worse from a safety perspective (by making unaligned systems appear aligned at lower capabilities levels, meaning people are less likely to take alignment problems seriously, and by leading to new products that will cause lots of money to go into AI research, as well as giving a strong incentive towards deception at higher capability levels)
It’s a bad idea to train models directly on the internet, since the internet as an environment makes supervision much harder, strongly encourages agency, has strong convergent goals around deception, and also gives rise to a bunch of economic applications that will cause more money to go into AI
The EA and AI Alignment community should probably try to delay AI development somehow, and this will likely include getting into conflict with a bunch of AI capabilities organizations, but it’s worth the cost
I don’t currently see a way to make AIs very useful for doing additional AI Alignment research, and don’t expect any of the current approaches for that to work (like ELK, or trying to imitate humans by doing more predictive modeling of human behavior and then hoping they turn out to be useful), but it sure would be great if we found a way to do this (but like, I don’t think we currently know how to do this)
I am quite worried that it’s going to be very easy to fool large groups of humans, and that AI is quite close to seeming very aligned and sympathetic to executives at AI companies, as well as many AI alignment researchers (and definitely large parts of the public). I don’t think this will be the result of human modeling, but just the result of pushing the AI into patterns of speech/behavior that we associate with being less threatening and being more trustworthy. In some sense this isn’t a catastrophic risk because this kind of deception doesn’t cause the AI to disempower the humans, but I do expect it to make actually getting the research to stop, or getting lots of resources spent on alignment, a lot harder later on.
I do sure feel like a lot of AI alignment research is very suspiciously indistinguishable from capabilities research, and I think this is probably for the obvious bad reasons instead of this being an inherent property of these domains (the obvious bad reason being that it’s politically advantageous to brand your research as AI Alignment research and capabilities research simultaneously, since that gives you more social credibility, especially from the EA crowd which has a surprisingly strong talent pool and is also just socially close to a lot of top AI capabilities people)
I think a really substantial fraction of people who are doing “AI Alignment research” are instead acting with the primary aim of “make AI Alignment seem legit”. These are not the same goal; a lot of good people can tell, and this makes them feel kind of deceived. It also creates very messy dynamics within the field where people have strong opinions about what the secondary effects of research are, because that’s the primary thing they are interested in, instead of asking whether the research points towards useful true things for actually aligning the AI.
More broadly, I think one of the primary effects of talking about AI Alignment has been to make more people get really hyped about AGI, and be interested in racing towards AGI. Generally knowing about AGI-Risk does not seem to have made people more hesitant towards racing and slow down, but instead caused them to accelerate progress towards AGI, which seems bad on the margin since I think humanity’s chances of survival do go up a good amount with more time.
It also appears that people who are concerned about AGI risk have been responsible for a very substantial fraction of progress towards AGI, suggesting that there is a substantial counterfactual impact here, and that people who think about AGI all day are substantially better at making progress towards AGI than the average AI researcher (though this could also be explained by other attributes like general intelligence or openness to weird ideas that EA and AI Alignment selects for, though I think that’s somewhat less likely)
A lot of people in AI Alignment I’ve talked to have found it pretty hard to have clear thoughts in the current social environment, and many of them have reported that getting out of Berkeley, or getting social distance from the core of the community, has made them produce better thoughts. I don’t really know whether the increased productivity here is borne out by evidence, but really a lot of people that I considered promising contributors a few years ago are now experiencing a pretty active urge to stay away from the current social milieu.
I think all of these considerations in aggregate make me worried that a lot of current work in AI Alignment field-building and EA community building is net-negative for the world, and that a lot of my work over the past few years has been bad for the world (most prominently transforming LessWrong into something that looks a lot more respectable in a way that I am worried might have shrunk the Overton window of what can be discussed there by a lot, and having generally contributed to a bunch of these dynamics).
Exercising some genre-savviness, I also think a bunch of this is driven by just a more generic “I feel alienated by my social environment changing and becoming more professionalized and this is robbing it of a lot of the things I liked about it”. I feel like when people feel this feeling they often are holding on to some antiquated way of being that really isn’t well-adapted to their current environment, and they often come up with fancy rationalizations for why they like the way things used to be.
I also feel confused about how to relate to the stronger equivocation of ML skills with AI Alignment skills. I don’t personally have much of a problem with learning a bunch of ML, and generally engage a good amount with the ML literature (not enough to be an active ML researcher, but enough to follow along with almost any conversation between researchers), but I do also feel a bit of a sense of being personally threatened, and other people I like and respect being threatened, in this shift towards requiring advanced cutting-edge ML knowledge in order to feel like you are allowed to contribute to the field. I do feel a bit like my social environment is being subsumed by and is adopting the status hierarchy of the ML community in a way that does not make me trust what is going on (I don’t particularly like the status hierarchy and incentive landscape of the ML community, which seems quite well-optimized to cause human extinction)
I also feel like the EA community is being very aggressive about recruitment in a way that locally in the Bay Area has displaced a lot of the rationality community, and I think this is broadly bad, both for me personally and also because I just think the rationality community had more of the right components to think sanely about AI Alignment, many of which I feel like are getting lost
I also feel like with Lightcone and Constellation coming into existence, and there being a lot more money and status around, the inner circle dynamics around EA and longtermism and the Bay Area community have gotten a lot worse, and despite being a person who I think is generally pretty in the loop with stuff, I have found myself worried and stressed about being excluded from some important community function, or some important inner circle. I am quite worried that my founding the Lightcone Offices was quite bad in this respect, by overall enshrining some kind of social hierarchy that wasn’t very grounded in things I actually care about (I also personally felt a very strong social pressure to exclude interesting but socially slightly awkward people from being in Lightcone that I ended up giving in to, and I think this was probably a terrible mistake and really exacerbated the dynamics here)
I think some of the best shots we have for actually making humanity not go extinct (slowing down AI progress, pivotal acts, intelligence enhancement, etc.) feel like they have a really hard time being considered in the current Overton window of the EA and AI Alignment community, and I feel like people being unable to consider plans in these spaces both makes them broadly less sane and also just prevents work from happening in these areas.
I get a lot of messages these days about people wanting me to moderate or censor various forms of discussion on LessWrong that I think seem pretty innocuous to me, and the generators of this usually seem to be reputation related. E.g. recently I’ve had multiple pretty influential people ping me to delete or threaten moderation action against the authors of posts and comments talking about: How OpenAI doesn’t seem to take AI Alignment very seriously, why gene drives against Malaria seem like a good idea, why working on intelligence enhancement is a good idea. In all of these cases the person asking me to moderate did not leave any comment of their own trying to argue for their position, before asking me to censor the content. I find this pretty stressful, and also like, most of the relevant ideas feel like stuff that people would have just felt comfortable discussing openly on LW 7 years ago or so (not like, everyone, but there wouldn’t have been so much of a chilling effect so that nobody brings up these topics).
Ben’s 1st message in #Closing-Office-Reasoning
Note from Ben: I have lightly edited this because I wrote it very quickly at the time
(I drafted this earlier today and didn’t give it much of a second pass, forgive me if it’s imprecise or poorly written.)
Here are some of the reasons I’d like to move away from providing offices as we have done so far.
Having two locations comes with a large cost. To track how a space is functioning, what problems people are running into, how the culture changes, and what improvements could be made, I think I need to be there at least 20% of my time each week (and ideally ~50%), and that’s a big travel cost to the focus of the Lightcone team.
Offices are a high-commitment abstraction that is hard to iterate on. In trying to improve a culture, I might try to help people start more new projects, or gain additional concepts that help them understand the world, or improve the standards arguments are held to, or something else. But there’s relatively little space for a lot of experimentation and negotiation in an office space — you’ve mostly made a commitment to offer a basic resource and then to get out of people’s way.
The “enculturation to investment” ratio was very lopsided. For example, with SERI MATS, many people came for 2.5 months, for whom I think a better selection mechanism would have been something shaped like a 4-day AIRCS-style workshop to better get to know them and think with them, and then pick a smaller number of the best people from that to invest further into. If I came up with an idea right now for what abstraction I’d prefer, it’d be something like an ongoing festival with lots of events and workshops and retreats for different audiences and different sorts of goals, with perhaps a small office for independent alignment researchers, rather than an office space that has a medium-size set of people you’re committed to supporting long-term.
People did not do much to invest in each other in the office. I think this is in part because the office does not capture other parts of people’s lives (e.g. socializing), but also I think most people just didn’t bring their whole spirit to this in some ways, and I’m not really sure why. I think people did not have great aspirations for themselves or each other. I did not feel here that folks had a strong common spirit — that they thought each other could grow to be world-class people who changed the course of history, and I did not feel that they wished to invest in each other in that way. (There were some exceptions to note, such as Alex Mennen’s Math Talks, John Wentworth’s Framing Practica, and some of the ways that people in the Shard Theory teams worked together with the hope of doing something incredible, all of which felt like people really investing into communal resources and other people.) I think a common way to know whether people are bringing their spirit to something is whether they create art about it — songs, in-jokes, stories, etc. Soon after the start I felt nobody was going to really bring themselves so fully to the space, even though we hoped that people would. I think there were few new projects from collaborations in the space, other than between people who already had a long history.
And regarding the broader ecosystem:
Some of the primary projects getting resources from this ecosystem do not seem built using the principles and values (e.g. integrity, truth-seeking, x-risk reduction) that I care about — such as FTX, OpenAI, Anthropic, CEA, Will MacAskill’s career as a public intellectual — and those that do seem to have closed down or been unsupported (such as FHI, MIRI, CFAR). Insofar as these are the primary projects that will reap the benefits of the resources that Lightcone invests into this ecosystem, I would like to change course.
The moral maze nature of the EA/longtermist ecosystem has increased substantially over the last two years, and the simulacra level of its discourse has notably risen too. There are many more careerist EAs working here and at events; it’s more professionalized and about networking. Many new EAs are here not because they have a deep-seated passion for doing what’s right and using math to get the answers, but because they’re looking for an interesting, well-paying job in a place with nice nerds, or are just noticing that there’s a lot of resources being handed out in a very high-trust way. One of the people I interviewed at the office said they often could not tell whether a newcomer was expressing genuine interest in some research, or was trying to figure out “how the system of reward” worked so they could play it better, because the types of questions in both cases seemed so similar. [Added to LW post: I also remember someone joining the offices to collaborate on a project, who explained that in their work they were looking for “The next Eliezer Yudkowsky or Paul Christiano”. When I asked what aspects of Eliezer they wanted to replicate, they said they didn’t really know much about Eliezer but it was something that a colleague of theirs said a lot.] It also seems to me that the simulacra level of writing on the EA Forum is increasing, whereby language is increasingly used primarily to signal affiliation and policy-preferences rather than to explain how reality works. I am here in substantial part because of people (like Eliezer Yudkowsky and Scott Alexander) honestly trying to explain how the world works in their online writing and doing a damn good job of it, and I feel like there is much less of that today in the EA/longtermist ecosystem. This makes the ecosystem much harder to direct and to orient within, and makes it much harder to trust that resources intended for a given purpose will not be redirected by the various internal forces that grow against the intentions of the system.
The alignment field that we’re supporting seems to me to have pretty little innovation and pretty bad politics. I am irritated by the extent to which discussion is commonly framed around a Paul/Eliezer dichotomy, even while the primary person taking orders of magnitude more funding and staff talent (Dario Amodei) has barely explicated his views on the topic and appears (from a distance) to have disastrously optimistic views about how easy alignment will be and how important it is to stay competitive with state-of-the-art models. [Added to LW post: I also generally dislike the dynamics of fake-expertise and fake-knowledge I sometimes see around the EA/x-risk/alignment places.
I recall at EAG in Oxford a year or two ago, people were encouraged to “list their areas of expertise” on their profile, and one person who works in this ecosystem listed (amongst many things) “Biorisk” even though I knew the person had only been part of this ecosystem for <1 year and their background was in a different field.
It also seems to me like people who show any intelligent thought or get any respect in the alignment field quickly get elevated to “great researchers that new people should learn from”, even though I think there are fewer than a dozen people who’ve produced really great work, and mostly people should think pretty independently about this stuff.
I similarly feel pretty worried by how (quite earnest) EAs describe people or projects as “high impact” when I’m pretty sure that if they reflected on their beliefs, they honestly wouldn’t know the sign of the person or project they were talking about, or estimate it as close-to-zero.]
How does this relate to the office?
A lot of the boundary around who is invited to the offices has been determined by:
People whose x-risk reduction work the Lightcone team respects or is actively excited about
People and organizations in good standing in the EA/longtermist ecosystem (e.g. whose research is widely read, who has major funding from OpenPhil/FTX, who have organizations that have caused a lot to happen, etc) and the people working and affiliated with them
Not people who we think many others would (sadly) find very off-putting to work alongside in the space (e.g. lacking basic social skills, or who many people find scary for some reason), or who we think have violated important norms (e.g. lying, sexual assault, etc.).
The 2nd element has really dominated a lot of my choices here in the last 12 months, and (as I wrote above) this is a boundary that is increasingly filled with people who I don’t believe are here because they care about ethics, who I am not aware have done any great work, who I am not aware of having strong or reflective epistemologies. Even while massive amounts of resources are being poured into the EA/longtermist ecosystem, I’d like to have a far more discerning boundary around the resources I create.
[1] The office rent cost about 1.5x what it needed to be. We started in a WeWork because we were prototyping whether people even wanted an office, and wanted to get started quickly (the office was up and running in 3 weeks instead of going through the slower process of signing a 12-24 month lease). Then we were in a state for about a year of figuring out where to move to long-term, often wanting to preserve the flexibility of being able to move out within 2 months.
I respect you a lot for doing something costly and weird because you thought it was the right thing to do, especially since it cuts against the longtermist norms of “more”.
No idea if you are right though.
Where did/does Lightcone get the money to run?
Our income in 2022 was:
FTX Future Fund February: $2,000,000
FTX Future Fund August: $500,000
Open Philanthropy August: $4,500,000
FTX Future Fund November: $1,500,000
SFF November: $1,000,000
Construction loan (from Jaan Tallinn): $3,300,000
This was a lot more spending than we had in previous years. In general we’ve mostly been funded by Open Phil, SFF, and in 2022 a bunch by the FTX Future Fund :(
The Rose Garden Inn was also purchased on a loan from Jaan Tallinn (I’m not quite sure how the finances worked but we weren’t given the funds directly), and so we owe him ~$16MM-or-a-hotel at some point in the future.
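For reference, the grant income listed above sums to:

$$\$2.0\text{M} + \$0.5\text{M} + \$4.5\text{M} + \$1.5\text{M} + \$1.0\text{M} = \$9.5\text{M}$$

or roughly $12.8M including the $3.3M construction loan.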
Out of curiosity, what else does Lightcone spend $ on?
After excluding the $3MM for operating costs of the Lightcone Offices, there’s still $6.5MM, not including the $3.3MM construction loan. Was the $16MM Rose Garden Inn a separate loan from the above?
The $16.5MM Rose Garden loan was in the same transaction, it just all went into the property so felt weird to list as income.
We’ve spent more than just the loan amount on renovation, so that’s where a lot of money is going (I have deep deep hatred for mold and water damage problems, which are making things more expensive).
Besides that there are core staff salaries as well as events like Icecone, EAG after parties, LW server costs and software subscriptions, prizes for the LW review, printing books and a lot of other misc things.
Lightcone spent more than $3.3MM on renovation in 2022? How many square feet of renovations is this for? How much does Lightcone usually spend a year on renovation?
That leaves ~3MM, which I assume is mainly going towards salaries (how many staff are being paid for from this 3MM outside of the 7 listed on the website?), as the LW review prizes appear to be in the hundreds per prize, and less than 10k overall. There was only 1 EA Global event in 2022 in the Bay (did Lightcone fund after parties at other EAGs?), and presumably events are going to be in the ~10s of thousands range at most. I don’t know how much LW server costs, but I’d be surprised if it was more than 10s of thousands/year.
Instead of me speculating, are you happy to share any more detailed breakdowns on the above? Are any of Lightcone’s financials publicly available?
Hmm, this comment indicates some misunderstanding. We “usually” spend $0 on renovations, because we’ve never owned any real estate before. We have purchased an extremely run-down hotel that needs renovations before it can be made operational as a retreat venue, office and community space.
The total interior square footage of the hotel is around 20,000 sq. ft., with some additional 10,000 sq. ft. of outdoor space. I expect we will have some upkeep cost in the future, but probably something around the $500k/yr range.
Especially in the context of renovations we work with a lot of contractors, but we also pay salaries to more people than the 7 staff listed on the website. Some examples:
We pay Jim Babcock as a contractor to help develop LessWrong and the EA Forum
We pay Aaron Silverbook to help us with operations on a surge basis
We pay for a full time design consultant and renovation manager
We pay Justis to provide editing services for LessWrong authors
We paid Sam Kennedy to trial as someone owning the printing of the LessWrong books and other books we want to print
We pay Vlad as a coding tutor for LessWrong developers to improve their coding skills for a few hours a week on-average
We pay CFAR for providing fiscal sponsorship and accounting services/help to us
We paid for many ops people for the Icecone retreat
We covered half of the cost of the first MLAB that we collaborated in with Redwood
We pay rent for the Lighthouse system and pay for cleaners and repairs to the rental properties
And probably a bunch more I am forgetting. We’ve done a lot of things in 2022.
We don’t yet have any public accounting for 2022, and previous years aren’t that informative for modeling our spending since we did very different things previously, but you will eventually be able to find our public finances as part of the CFAR non-profit finances documentation.
I am currently also in the middle of preparing our accounting for the tax deadline and might be able to share a more detailed breakdown of spending when I am done with that.
Thanks for this comment, appreciate the clarity!
Minor note re the CFAR documentation: I’m probably looking in the wrong place, but I can’t find anything after 2018 in the official records.
Oops, looks like nobody has updated the website in a while. Here you can find stuff until 2020: https://projects.propublica.org/nonprofits/organizations/453100226
San Fran really needs the Sabs Solution to stop people doing this kind of thing ever again, it’s just such a terrible Schelling point for the tech industry. So much value is being destroyed by having so many productive people and firms clustered in this nightmarish and incredibly expensive hellhole. One medium-sized nuclear device. That’s all it takes.
Thanks so much for bringing this degree of honesty, openness and detail to a decision this big. As someone not deeply embroiled in the longtermist/rationalist world, I find your uncertainty about whether you and others are doing net harm vs good on the AI alignment front pretty chilling. I’m looking forward to responses, hoping the picture is not quite as bleak as you paint!
One question on something I do know a little about (which could be answered in a couple of sentences or even perhaps a link). What’s your issue with Will MacAskill as a public intellectual? I’ve watched TED talks, heard him do interviews etc., and he seemed on shallow thought to be a good advocate for EA stuff in general.
Over the course of working in EA for the last 8 years, I feel like I’ve seen about a dozen instances where Will made quite substantial tradeoffs, trading away both the health of the EA community and something like epistemic integrity in favor of being more popular and getting more prestige.
Some examples here include:
When he was CEO while I was at CEA, he basically didn’t really do his job at CEA but handed off the job to Tara (who was a terrible choice for many reasons, one of which is that she then co-founded Alameda and after that went on to start another fraudulent-seeming crypto trading firm as far as I can tell). He then spent like half a year technically being CEO but spending all of his time being on book tours and talking to lots of high net-worth and high-status people.
I think Doing Good Better was already substantially misleading about the methodology that the EA community has actually historically used to find top interventions. Indeed it was very “randomista” flavored in a way that I think really set up a lot of people to be disappointed when they encountered EA and then realized that actual cause prioritization is nowhere close to this formalized and clear-cut.
I think WWOTF is not a very good book because it really fails to understand AI risk and also describes some methodology of longtermism that again feels like something someone wrote to sound compelling, but just totally doesn’t reflect how any of the longtermist-oriented EAs think about cause-prioritization. This is in contrast to, for example, The Precipice, which seems like a much better book to me (though still flawed) and actually represents a sane way to think about the future.
The only time when Will was really part of a team at CEA was during the time when CEA went through Y-Combinator, which I think was kind of messed up (like, he didn’t build the team or the organization or really any of the products up to that point). As part of that, he (and some of the rest of the leadership) decided to refocus all of their efforts on building EA funds, despite the organization just having gone through a major restructuring to focus on talent instead of money, since with Open Phil there was already a lot of money around. This was explicitly not because it would be the most impactful thing to do, but because focusing on something clear and understandable like money would maximize the chances of CEA getting into Y-Combinator. I left the organization when this decision was made.
In general, CEA was a massive shitshow for a very long period of time while Will was a board member (and CEO). He didn’t do anything about it, and often exacerbated the problems, and I think this had really bad consequences for the EA community, as I’ve written about in other comments. Instead he focused on promoting EA as well as his own brand.
Despite Will branding himself as a leader of the EA community, as far as I can tell he is actually just not very respected among almost any of the other intellectual leaders of the community, at least here in the Bay. He also doesn’t participate in any discourse with really anyone else in the community. He never comments on the EA Forum, he doesn’t do panel discussions with other people, and he doesn’t really steer the actions of any EA organizations, while of course curating an image of himself as the clear leader of the community. This feels to me very much like trying to get the benefits of being a leader without actually doing the job of leadership.
Will displayed extremely bad judgement in his engagement with Sam Bankman-Fried and FTX. He was the person most responsible for entangling EA with FTX by publicly endorsing SBF multiple times, despite many warnings he received from many people in the community. The portrayal in this article here seems roughly accurate to me. I think this alone should justify basically expelling him as a leader in the EA community, since FTX was really catastrophically bad and he played a major role in it (especially in its effects on the EA community).
(Edit: See this comment I made with some minor retractions on the above. I do want to note that in as much as I did get things wrong, both me and Will agreed that it was likely because people hired and supervised by Will directly lied to both me and him, which I think is in substantial part Will’s fault, and as things go among the more forgivable reasons for getting things wrong. I also think most of the retractions don’t bear that much on my overall assessment, though I did make some minor updates on the mess at CEA being more “Will being taken advantage of” rather than “Will playing an active role in the advantage-taking”)
Fwiw I have little private information but think that:
I sense this misses some huge successes in EA getting where it is. It seems we’ve done pretty well, all things considered. Wasn’t Will part of that?
Will is a superlative networker
He is a very good public intellectual. Perhaps Ord could be if his books were backed to that extent. Perhaps Will could be better if he wrote different books. But he seems really good at it. I would guess that on that public intellectual side he’s a benefit not a cost
If I’d had the ability to direct billions in philanthropy I probably would have, even with nagging doubts.
It seems he’s maybe less good at representing the community or managing orgs. I don’t know if that’s the case, but I can believe it.
If so, it seems possible there is a role as a public intellectual associated with EA but who isn’t the only one
I feel bad when writing criticism because personally I hope he’s well and I’m very grateful to him.
Also thanks Habryka for writing this. I think surfacing info like this is really valuable and I guess it has personal costs to you.
I agree Will’s made a bunch of mistakes (like yes CEA was messed up), but I find it hard to sign up to a narrative where status seeking is the key reason.
My impression is that Will often finds it stressful and unpleasant to do community leadership stuff, media, talk to VIPs etc. He often seems to do it out of a sense of duty (i.e. belief that it’s the most impactful thing). His ideal lifestyle would be more like being an academic.
Maybe there’s some kind of internal conflict going on, but it seems more complicated than this makes out.
My hot take is that a bunch of the disagreement is about how much to prioritise something like the instrumental values of conventional status / broader appeal vs. proactively saying what you think even if it looks bad / being a highly able niche community.
My impression is that you’re relatively extreme in how much you rate the latter, so it makes sense to me you’d disagree with a bunch of Will’s decisions based on that.
My guess is you know Will better, so I would trust your judgement here a decent amount, though I have talked to other people who have worked with Will a decent amount who thought that status-seeking was pretty core to what was going on (for the sake of EA, of course, though it’s hard to disentangle these kinds of things).
I think this is a common misunderstanding in things that I am trying to communicate. I think people can optimize for status and prestige for many different reasons, and indeed I think “personal enjoyment of those things” is a decent fraction of the motivations for people who behave that way, but at least from my experiences and the books I’ve tried to read on adjacent topics, substantially less than the majority.
“This seems instrumentally useful” is I think the most common reason why people pursue prestige-optimizing strategies (and then having some kind of decision theory or theory of ethics that doesn’t substantially push back against somewhat deceptive/adversarial/zero-sum things like prestige-optimization).
People do things for instrumental reasons. Someone doesn’t need to enjoy doing bad things in order for them to do bad things. I don’t know why Will is pursuing the strategies I see him pursue, I mostly just see the consequences, which seem pretty bad to me.
Thank you for clarifying. I do really appreciate this and I’m sure others do too.
But as it sounds like this isn’t the first time this has been miscommunicated, one idea going forward might be to ask someone else to check your writing for tone before posting.
For example if you’d asked me, I would have told you that your comment reads to me like “Will is so selfish” rather than “Will and I have major disagreements on the strategies he should pursue but I believe he’s well-intentioned” because of things like:
The large majority of the time when people say that someone harmed others for the sake of their own popularity, they’re accusing them of being selfish (so you should probably clarify if that’s not what you mean).
You chose status-related words (with the negative connotations I just mentioned) when you could have used others, e.g. “being on book tours and talking to lots of high net-worth and high-status people” rather than “promoting EA books and fundraising” (for orgs like yours incidentally, although of course that ended badly).
It’s a long comment entirely composed of negative comments about Will—you’d forgive a reader for thinking that you don’t think there’s anything good about him. (I don’t think the context of being asked “What’s your issue with Will MacAskill as a public intellectual?” would make readers think “Oh, I guess that’s the reason Habryka is only mentioning negative things.” This is not how professionals tend to talk about each other—especially in public—unless they really don’t think there’s anything positive about someone.)
Similarly, certain word choices and the absence of steel-manning give the impression that you don’t think Will has any decent reasons in favour of making the decisions he does (e.g. calling Doing Good Better “misleading” rather than “simplified” or talking about its emphasis on certain things or what have you, saying “He never comments on the EA Forum” even though that seems to be generally considered a good thing and of course he does a decent amount in any case, and in fact even now saying “I don’t know why Will is pursuing the strategies I see him pursue” rather than “I can see that he might think...”).
Similarly, you claim that he “didn’t do anything about” CEA’s problems for the “very long period of time” he was there (nothing? really?).
The use of accusatory language like “This feels to me very much like trying to get the benefits of being a leader without actually doing the job of leadership”—it’s hard to read this as anything other than an accusation of selfishness.
Describing things in an insulting way (contrasting WWOTF with a “sane way to think about the future”, calling CEA a “massive shitshow”, “expelling him as a leader” etc.).
Not specifying that you mean “intellectual respect” when you say “as far as I can tell he is actually just not very respected among almost any of the other intellectual leaders of the community, at least here in the Bay” (with at least one person responding with what seemed like a very broad interpretation of your comments).
I know a lot of people are hurting right now and I know that EA and especially rationalist culture is unusually public and brutal when it comes to feedback. But my sense is that the kinds of things I’ve mentioned above resulted in a comment that came across as shockingly unprofessional and unconstructive to many people (popular, clearly, but I don’t think people’s upvotes/likes correlate particularly well with what they deem constructive) - especially given the context of one EA leader publicly kicking another while they’re down—and I’d like to see us do better.
[Edit: There are also many things I disagree with in your comment. My lack of disagreement should not be taken as an endorsement of the concrete claims, I just thought it’d be better to focus this comment on the kinds of framings that may be regularly leading to miscommunication (although I’m not sure if I’ll ever get round to addressing the disagreements).]
Personally I have found that getting too attached to the supposed goodness of my intentions as a guide to my moral character has been a distraction, in times when my behavior has not actually been that good.
I’ve not looked into it in great detail, but I think of it as a classically Christian idea to try to evaluate if someone is a good or a bad person internally, and give reward/punishment based on that. In contrast, I believe it’s mostly better to punish people based on their behavior, often regardless of whether you judge them to internally be ‘selfish’ or ‘altruistic’. If MacAskill has repeatedly executed a lot of damaging prestige-seeking strategies and behaved in selfish ways, I think it’s worthwhile to punish the behavior. And in that case I think it’s worthwhile to punish the behavior regardless of whether he is open to change, regardless of whether the behavior is due to fundamental personality traits, and regardless of whether he reflectively endorses the decisions.
Ubuntu writes that they read Habryka as saying “Will is so selfish” rather than “Will and I have major disagreements on the strategies he should pursue but I believe he’s well-intentioned”. But I don’t read Habryka’s comment to be saying either of these. I read the comment to simply be saying “Will has repeatedly behaved in ways that trade off integrity for popularity and prestige”. This is also my read of multiple of Will’s behaviors, and it has cost him a great deal of respect from me, both for his personal integrity and as a leader, and this is true regardless of the intentions.
I am actively trying to avoid relying on concepts like “well-intentioned”, and I don’t know whether he is well-intentioned, and as such saying “but I believe he’s well-intentioned” would be inaccurate (and also actively distract from my central point).
Like, I think it’s quite plausible Sam Bankman-Fried was also well-intentioned. I do honestly feel confused enough about how people treat “well-intentionedness” that I don’t really know how to communicate around this topic.
I don’t think whether SBF was well-intentioned changes how the community should relate to him that much (though it is of course a cognitively relevant fact about him that might help you predict the details of a bunch of his behavior, but I don’t think that should be super relevant given what a more outside-view perspective says about the benefits of engaging with him).
The best resource I know on this is Nate’s most recent post: “Enemies vs. Malefactors”:
I personally have found that focusing the conversation on whether someone was “well-intentioned” is usually pretty counterproductive. Almost no one is fully ill-intentioned towards other people. People have a story in their head for why what they are doing is good and fair. It’s not like it never happens, but I have never encountered a case within the EA or Rationality community, of someone who has caused harm and also didn’t have a compelling inner-narrative for why they were actually well-intentioned.
I don’t know what is going on inside of Will. I think he has many good qualities. He seems pretty smart, he is a good conversationalist and he has done many things that I do think are good for the world. I also think he isn’t a good central figurehead for the EA community and think a bunch of his actions in-relation to the EA community have been pretty bad for the world.
I don’t think you are the arbiter of what “professionals” do. I am a “professional”, as far as I can tell, and I talk this way. Many professionals I work with daily also communicate more like this. My guess is you are overgeneralizing from a specific culture you are familiar with, and I feel like your comment is trying to create some kind of implicit social consensus against my communication norms by invoking some greater “professionalism” authority, which doesn’t seem great to me.
I am happy to argue the benefits of being careful about communicating negative takes, and the benefits of carefully worded and non-adversarial language, but I am not particularly interested in doing so from a starting-point of you trying to invoke some set of vaguely-defined “professionalism” norms that I didn’t opt-into.
The incentives against saying things like this are already pretty strong (indeed, I am far from the only person having roughly this set of opinions, though I do appear to be the only person who has communicated them at all to the broader EA community, despite this seeming of really quite high relevance to a lot of the community that has less access to the details of what is happening in EA than the leadership).
I do think there are bad incentives in this vicinity which result in everyone shit-talking each other all the time as well, but I think on the margin we could really use more people voicing the criticism they have of others, especially ones that are indeed not their hot-takes but are opinions that they have extensively discussed and shared with others already, and seem to have not encountered any obvious and direct refutations, as is the case with my takes above.
Edit: So this has got a very negative reaction, including (I think) multiple strong disagreevotes. I notice I’m a bit confused why; I don’t recognise anything in the post that is beyond the pale. Maybe people think I’m piling on or trying to persuade rather than inform, though I may well have got the balance wrong. Minds are changed through discussion, disagreement, and debate—so I’d like to encourage the downvoters to reply (or DM me privately, if you prefer), as I’m not sure why people disagree, it’s not clear where I made a mistake (if any), and I don’t know how much I ought to update my beliefs.
This makes a lot of sense to me intuitively, and I’d be pretty confident that Will would probably be most effective while being happy, unstressed, and doing what he likes and is good at—academic philosophy! It seems very reminiscent to me of stories of rank-and-file EAs who end up doing things that they aren’t especially motivated by, or especially exceptional at, because of a sense of duty that seems counterproductive.
I guess the update I think ought to happen is that Will trading off academic work to do community building / organisational leadership may not have been correct? Of course, hindsight is 20-20 and all that. But it seems plausible, and I’d be interested to hear the community’s opinion.
In any case, it seems that a good next step would be to find people in the community who are good at running organisations and willing to do the community-leadership/public facing stuff, so we can remove the stress from Will and let him contribute in the academic sphere? The EA Good Governance Project seems like a promising thing to track in this area.
I didn’t vote either way on your comment, but I take the disagreement to be people thinking (a) Will’s community building work was the right choice given what he and others knew then and/or (b) finding people “who are good at running organisations and willing to do the community-leadership/public facing stuff” is really hard.
Leaving a comment here for posterity. I just recently had a conversation with Will where we shared some of our experiences working at CEA at the time. I stand by most of my comments here, but want to clear up a few things that I do think I have changed my mind on, after Will gave me more information on what actually happened:
After Will gave me more context on the overall organizational decision-making, and the context of the CEA and GWWC merger, I now don’t think it’s accurate to characterize Will as absent from his job as CEO. Indeed, many things I thought were driven by Tara and Kerry were actually driven by Will instead. More concretely, during the time when I felt like he was quite absent, he was working on the GWWC merger, a lot of staff reorganization, fundraising, getting CEA into YC, and various outreach work as a result of the Doing Good Better launch.
At least Will is pretty confident that the CEA/GWWC merger was not announced at a tactically opportune time, since he scheduled it. It’s plausible that either Kerry or Tara suggested that date, and it is indeed the case that my subteam was almost fully blindsided by the merger happening, because we had Kerry and Tara screen a ton of information from us, but this was more likely an accident or at least something Will wasn’t aware of.
CEA did not apply to YC with EA Funds; CEA applied with general community building, and decided on EA Funds as the main project afterwards. This is important because my impression was that we pivoted towards funds in order to gain the prestige of being in YC, but that seems to have happened later (this doesn’t really change that I think this decision was still pretty bad, but I do think it’s less concerning for other reasons)
It was Nick, without much support from Open Phil, who ended up ramping up his trustee involvement a lot more and then eventually fired a bunch of people from CEA. Open Phil later on then got more involved during the search for the new CEO, but the original firing was mostly Nick independently (though of course he likely talked through decisions with some people at Open Phil, but it still seems important to not characterize what happened as “Open Phil stepped in to fire people”, given my current understanding, though this is still pretty fuzzy)
Very uncertain here, but I’m concerned by a dynamic where it’s simply too cheap and easy to comment on how others spend their time, or what projects they prioritise, or how they write books—without trying to empathise or steelman their perspective.
I agree with this in general, though I still think sharing this kind of information can be quite valuable, as long as people appropriately discount it.
During my time at CEA, he was my boss. I agree with you that stuff like this can be pretty annoying coming from random outsiders, but I think if someone worked under someone (though, to be clear, with a layer of management between) this gives them enough context to at least say informative things about how that person spends their time.
I also think disgruntled ex-employees are not super uncommon, and I think it makes sense to adjust for that.
For the discourse part I do feel differently. I don’t care that much about how Will spends his time in detail, but de facto I think he doesn’t really engage in debates or discourse with almost anyone else in EA, and I do think there are just straightforwardly bad consequences as a result. I feel more confident in judging those negative consequences than in judging whether the details of his time allocation are off.
I just want to say I really appreciated you providing this first-hand experience and discussing how others in the EA community, from what you have witnessed in the Bay Area, feel about Will’s leadership. I was just talking to someone about this the other day, and I was really unsure about how people in EA actually felt about Will, since, as you said, he rarely comments on the forum and doesn’t seem very engaged with people in the community from what I can see.
I realised while reading your comment that I didn’t actually know what Habryka meant by “not very respected”—he adds color here.
I feel like I joined EA for this “randomista” flavored version of the movement. I don’t really feel like the version of EA I thought I was joining exists even though, as you describe here, it gets a lot of lip service (because it’s uncontroversially good and inspiring!!!!). I found it validating for you to point this out.
If it does exist, it hasn’t recruited me despite my pretty concentrated efforts over several years. And I’m not sure why it wouldn’t.
I don’t have a problem with longtermist principles. As far as I’m concerned, maybe the best way to promote longterm good really is to take huge risks at the expense of community health / downside risks / integrity, a la SBF (among others). But I don’t want to spend my life participating in some scheme to ruthlessly attain power and convert it into good, and I sure as hell don’t want to spend my life participating in that as a pawn. I liked the randomista + earn-to-give version of the movement because I could just do things that were definitely good to do, in the company of others doing the same. I feel like that movement has been starved out by this other thing wearing it as a mask.
Just curious—do you not feel like GiveWell, Happier Lives Institute, and some of Founders Pledge’s work, for example, count as randomista-flavoured EA?
Just chiming in here as HLI was mentioned (although this definitely isn’t the most important part of the post). I certainly see us as randomista-inspired (or should that be ‘randomista-adjacent’?), but I would say that what we do feels very different from what other EAs, notably longtermists, do. Also, we came into existence about 5 years after Doing Good Better was published.
I also share Habryka’s doubts about how EA’s original top interventions were chosen. The whole “scale, neglectedness, tractability” framework strikes me as a confusing, indeterminate methodology that was developed post hoc to justify the earlier choices. I moaned about the SNT framework at length in chapter 5 (p. 171) of my PhD thesis.
I agree with you about SNT/ITN. I like that chapter of your thesis a lot, and also find John’s post here convincing.
It does seem to me that randomista EA is alive and largely well—GW is still growing, global health still gets the most funding (I think), many of Charity Entrepreneurship’s new charities are randomista-influenced, etc.
There’s a lot of things going on under the “EA” umbrella. HLI’s work feels very different from what other EAs do, but equally a typical animal welfare org’s work will feel very different, and a typical longtermist org’s work will feel very different, because other EAs do a lot of different things now.
“It doesn’t exist” is too strong, for sure. I consider GiveWell central to the randomista part, and it was my entry point into EA at large. Founders Pledge was also pretty randomista back when I was applying for a job there in college. I don’t know anything about HLI.
There may be a thriving community around GiveWell etc. that I am simply ignorant of. Or maybe if I tried to filter out non-randomista stuff from my mind, I would naturally focus more on randomista stuff when engaging with EA feeds.
The reality is that I find stuff like “people just doing AI capabilities work and calling themselves EA” quite emotionally triggering, and when I’m exposed to it that’s where my attention goes (if I’m not, as is more often the case, avoiding the situation entirely). Naturally this probably makes me pretty blind to other stuff going on in EA channels. There are pretty strong selection effects on my attention here.
All of that said, I do think that community building in EA looks completely different than how it would look if it were the GiveWell movement.
I can certainly empathize with the longtermist EA community being hard to ignore. It’s much flashier and more controversial.
For what it’s worth I think it would be possible and totally reasonable for you to filter out longtermist (and animal welfare, and community-building, etc.) EA content and just focus on the randomista stuff you find interesting and inspiring. You could continue following GiveWell, Founders Pledge’s global health and development work, and HLI. Plus, many of Charity Entrepreneurship’s charities are randomista-influenced.
For example, I make heavy use of the unsubscribe feature on the Forum to try and keep my attention focused on the issues I care about rather than what’s most popular (ironically I’m unsubscribed and supposed to be ignoring the ‘Community’ feed lol).
Yeah. (as a note I am also a fan of the animal welfare stuff).
This is a good suggestion.
I think most of this stuff is too dry to hold my attention by itself. I would like a social environment that was engaging yet systematically directed my attention more often to things I care about. This happens naturally if I am around people who are interesting/fun but also highly engaged and motivated about a topic. As such I have focused on community and community spaces more than, for example, finding a good randomista newsletter or extracting randomista posts from the forums.
Another reason to focus on community interaction is that it is both much more fun and much more useful for creative problem solving. Forum posts tend to report the results of problem solving, or report news; I would rather be engaging with people before that step, but I don’t know of a place one could go to participate in that aside from employment. In contrast, I do have a sense of where one could go to participate in this kind of group or community re: AI safety.
Do you mean not very intellectually respected (partly because he rarely participates in discourse with other EAs) or not very respected in general?
And do you mean that they don’t think he’s a big deal like some other EAs seem to or that they have less respect for him than they have for a random stranger?
I do feel like these are quite close in our community. I think people respect him as a relatively competent speaker and figurehead, though also have a bunch of hesitations that would naturally come with that role. I also think he is probably more just straightforwardly respected in the UK, since he had more of a role in things over there.
And people would definitely trust him more than like a random stranger.
Wow, thanks so much for the reply; I didn’t expect that much detail and appreciate it. Thought leaders curating their own fame and sacrificing things (including other people) for it is to be expected to some degree, but some of this is more extreme than I would expect from the average famous person.
I’ll see if anyone perhaps closer to Will will rebut this at all.
Thanks again.
Just to say, since I’ve been critical elsewhere: I think this comment is good and helpful, and I agree with at least the last bullet point; I can’t really speak to most of the others.
I personally like Will’s writing and I think he’s a good speaker. But I do find it weird that millions were reportedly spent on promoting WWOTF.[1] I find that weird on its own (how can you be so confident it’s impactful?), but even more so when comparing WWOTF to The Precipice, which in my opinion (and, from my impression, many others’ opinion as well) is a much better and more impactful book. I don’t know if Ben shares these thoughts or if he has any others.
Edit to add: I vaguely remember seeing a source other than Torres, but as long as I can’t find it you can disregard this comment. I do think promoting the book was/is a lot more likely to be net positive than net negative; I’m still promoting the book myself. It’s just the amount of money I’m concerned about, compared to other causes. But as long as I don’t have a figure, I can’t comment.
Can’t find the source for this, so correct me if I’m wrong!
Just to be clear, I think marketing spending for a book is pretty reasonable. I don’t think WWOTF was a very good book: it was really quite confused about AI risk, and it described a methodology that I think basically no one adheres to, and as such gave a lot of people a mistaken impression of how the longtermist part of the EA community actually thinks. But if I were in Will’s shoes and thought it was a really important book and contribution, spending a substantial amount of money on marketing would seem pretty reasonable to me.
The only source for this claim I’ve ever found was Emile P. Torres’s article What “longtermism” gets wrong about climate change.
It’s not clear where they got the information about an “enormous promotional budget of roughly $10 million” from. I’m not saying it is untrue, but it’s also unclear why Torres would have this information.
The implication is also that the promotional spending came out of EA pockets, but part of it might have been promotional spending by the book’s publisher.
ETA: I found another article by Torres that discusses the claim in a bit more detail.
That “floated” is so weaselly!
I don’t believe the $10m claim. Indeed, I don’t even see how it would be possible to spend that much without buying a Super Bowl ad. At $12k a month, you would have to hire nearly 140 PR firms for 6 months to add up to $10m. Perhaps someone added an extra zero or two...
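As a quick sanity check on that figure (using the $12k/month rate and 6-month engagement length the commenter quotes, which are rough assumptions rather than known facts):

```python
# How many 6-month PR-firm engagements at $12k/month would it take to reach $10m?
budget = 10_000_000
per_engagement = 12_000 * 6       # one firm, six months
print(budget / per_engagement)    # ~138.9, i.e. nearly 140 firms
```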
Thanks Jeroen, that’s a fair point; I thought it was weird too.
Even if the wrong book was plugged, though, it doesn’t feel like a net-harm activity, and it surely doesn’t negate his good writing and speaking? I’m sure we’ll hear more!
“…e.g. integrity, truth-seeking, x-risk reduction) that I care about — such as FTX, OpenAI, Anthropic, CEA, Will MacAskill’s career as a public intellectual — and those that do seem to have closed down or been unsupported (such as FHI, MIRI, CFAR…”
Warning: this is coming from quite a tribal place, since I was an Oxford philosopher back when GWWC was first getting started, so consider me biased. But:
Obviously FTX was very bad, and the only provably very harmful thing the community has done so far, but I still want to push back here. CEA and Will have been heavily involved with the bits of EA that seem to me to have obviously worked fairly well: global development work and farm animal welfare campaigning. Many lives have been saved by donations to AMF. Meanwhile, by your own lights, you think it is more likely than not that the most important effect of the Bay Area rationalist cluster and FHI has been to speed up AI capabilities research that you yourselves think of as a near-term extinction risk. It seems like, by your own lights, Will’s career as a public intellectual (as opposed to his and CEA’s involvement in setting up Alameda) has been harmful to the exact extent that it has promoted ideas about working on AI risk that he got from FHI/MIRI/CFAR people, and good otherwise (i.e. when he has been promoting ideas that are closer to the very beginnings of Oxford/GiveWell EA, at least if you agree that global development/animal welfare EA are good in themselves).
I think the way you quoted it is a bit misleading? I think what’s actually being said is that Will MacAskill’s career as a public intellectual is among the projects “getting resources” from Lightcone’s work and the Lightcone Offices. I think this is linked in with a lot of the harms of rationalists being displaced by EAs. I think the knock on MacAskill is not one of active harm, but that his work reaps benefits which do not align with the OP’s values. I also do not think Will’s AI risk models look like FHI/MIRI/CFAR people’s, given how low his p(doom) in WWOTF is.
‘I think this is linked in with a lot of the harms of rationalists being displaced by EAs.’
Yeah, this is probably some sort of a crux. Forget Will as an individual for a second, my own impression of things is that:
A) EAs as a group have achieved some pretty impressive things, and I expect them/us to continue doing so, for example, on biorisk (whether or not the EA brand survives the current reputational crisis).
B) The rationalists actually have very little in the way of legible achievements as a group, insofar as they are distinct from EAs. (I should note that I have been very intellectually impressed by the individual rationalists I have interacted with professionally; I’m sure many individual rationalists are smarter and more capable than me!) The main exception is that some very technically impressive people in current AI research were partly inspired by Yudkowsky to get into AI, which this post itself thinks is probably extremely net bad.
So firstly, I am personally not very keen on the idea that MIRI or CFAR are big contributors to anything good, since I haven’t seen evidence to persuade me otherwise. And secondly, it’s not clear to me that, by their own lights, the authors should see MIRI or CFAR as major contributors to anything good, since they effectively think those orgs have been bad for AI X-risk. (They might not quite put it like that, but insofar as you think people being worried about AI X-risk has just sped up progress, it’s hard not to see MIRI/CFAR/the Bay rationalist scene as a whole as having a large share of the responsibility.) Given the combination of those two things, I am not very happy with the authors portraying rationalism as the ‘good’ thing threatened by bad EA, insofar as that’s a fair reading. (Though I think ‘at least we didn’t do FTX’ is a fair response.)
I’d also say that opinions vary on how actually “epistemically healthy” CFAR is: https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe/comment/dLyEcki7dBdxFkvJd
I don’t know who is right here, but having (apparently) ex-employees say this kind of stuff is not a good sign, community epistemics-wise. Nor is being a community in which people regularly either form cults or are wrongly accused of forming cults, as seems to have happened at least 3 times: https://www.lesswrong.com/posts/ygAJyoBK7MhEvnwBc/some-thoughts-on-the-cults-lw-had
https://www.lesswrong.com/posts/XPwEptSSFRCnfHqFk/zoe-curzi-s-experience-with-leverage-research
Note: I am absolutely not accusing Ben and Oliver of personally having “bad epistemics”.
LessWrong has had a few cults emerge from its ecosystem, but at least some of the hate for e.g. Leverage is basically just because Leverage holds up a mirror to mainstream EA/rationalism, and mainstream EA just really hates the reflection. “Yes, we are a cult, and what do you think you guys are?”
(incidentally, no one ever talks about the companies/institutions that came out of Leverage, but surely this should be factored into our calculations when we think about the costs & benefits!)
I don’t think that’s really inconsistent with anything I said. And I think that I am arguing here in relative favor of the less cult-y bits of EA. I’ve also never heard anything like the Leverage testimony about any non-rationalist EA org, though obviously that’s not proof it isn’t happening.
I mean, what about Alameda and FTX? Also early CEA. And what about Nonlinear? Of course none of these are exactly the same as Leverage, but then neither is any other rationalist-adjacent org.
FTX and Alameda sound extremely bad to me (obviously worse in effect than Leverage!) in a way that is not particularly “cult”, although I get that’s a bit vague (the stories of SBF threatening people are closer to that, as opposed to the massive fraud). As for the other stuff, I haven’t heard the relevant stories, but you may be right; I am not particularly clued into this, and it’s possible it’s just a coincidence that I have heard about crazy founder worship, sleep deprivation, and vague stuff about cleansing yourself of “metaphorically” demonic forces at Leverage but not at those other places. I recall bullying accusations against someone high up at Nonlinear, but not the details. Probably I shouldn’t have made the relative comparison between rationalists and non-rationalists, because I haven’t really been following who all the orgs are and what they’ve been doing. Though on the other hand, I feel like the rationalists have hit a high enough level of cult-y incidents that the default is probably that other orgs are less like that. But maybe I should have just stuck to ‘there are conflicting reports on whether epistemics are actually all that good in the Bay scene, and some reasonable evidence against.’
Hi David,
This excludes the impact on animals (which I think might be the major driver in the near term), and also longterm impacts. I used to consider the overall impact of GiveWell’s top charities robustly positive, but no longer do. I agree that, mathematically, E(“overall effect”) > 0 if:
- “Overall effect” = “nearterm effect on humans” + “nearterm effect on animals” + “longterm effect”;
- E(“nearterm effect on humans”) > 0;
- E(“nearterm effect on animals” + “longterm effect”) = k E(“nearterm effect on humans”); and
- k = 0.
However, setting k to 0 seems pretty arbitrary. One could just as well set it to −1, in which case E(“overall effect”) = 0. Since I am not confident |k| << 1, I am not confident about the sign of E(“overall effect”) either.
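To make the role of k explicit, here is a minimal restatement of the arithmetic in the comment above (no new quantities; it just substitutes the definition of k into the decomposition already given):

```latex
% Substituting E[animals] + E[longterm] = k * E[humans] into
% E[overall] = E[humans] + E[animals] + E[longterm] gives:
\[
  \mathbb{E}[\text{overall}] \;=\; (1 + k)\,\mathbb{E}[\text{nearterm humans}].
\]
% Since E[nearterm humans] > 0 by assumption, the sign of the overall effect
% is the sign of (1 + k): positive for k > -1, zero at k = -1, negative below.
\[
  \operatorname{sign}\bigl(\mathbb{E}[\text{overall}]\bigr) \;=\; \operatorname{sign}(1 + k).
\]
```

So the disagreement comes down to whether one is confident that k > −1 (or that |k| is small), which is the point the comment is making.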
Just to say, having been critical elsewhere, I do think it’s unusually impressive that you’re prepared to shut down a big project you’ve been involved with, simply because you’re not confident it’s a good use of money.
Some loose thoughts:
- If there is a problem with EA efforts/funding re: alignment, this kind of discussion seems very important. Hopefully we can flag and resolve it.
- I’m always a little concerned about people identifying a problem and catastrophising. I’m often concerned this happens when discussing management or skill gaps related to bottlenecks in EA. I am not sure it is relevant here, but maybe.
- I suspect, similar to the global health space, most interventions (e.g. setting up a co-working office) will be net neutral. This probably provides a good argument for taking theories of change and impact evaluation more seriously.
- I disagree with the harsh comments about WWOTF. I think promoting the book was a great bet, just high variance. I agree The Precipice was a better book, and it would have been great if it had been promoted to the same extent, but that’s not action-relevant since you can’t release a book twice. I think CEA and Will were trying to be ambitious and “shooting their shot”. It’s sad that we discourage this. [Note: I do think from a community health/fairness perspective, things were weird. But this can be fixed over time by promoting other philosophers more, and Will less, as already looks to be happening (Will didn’t speak at EAG Bay).]
Am I right that in this year and a half, you spent ~$2 million (£1.73m)? Seems reasonable not to continue this if you don’t think it’s impactful.
A bit more than that!
We had two floors for around 5-6 months, so the rent was closer to $140k per month for that period. Food and drink costs were also higher then, so my guess is the total is closer to $3MM.
I also did not account for all the furniture costs in that section (however, I suspect ~50% of the furniture will get used in the future, either by our team’s projects or by other projects we like, so it’s not all sunk cost).
A quick Fermi estimate for how much furniture we bought: something like 40 standing desks (~$600 each) + 40 office chairs (~$600 each) + 20 couches (~$1,000 each) is more than half of it; then apply a factor of 2x for everything else (rugs, end-tables, lights, etc.), which comes out to about $136,000.
As I say, ~50% will get kept and used for other things, so it’s only about $80k of further sunk cost.
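For anyone who wants to re-run the furniture Fermi estimate above, here is a minimal sketch of the same arithmetic (the item counts and unit prices are the rough figures from the comment, not exact records):

```python
# Back-of-the-envelope furniture estimate, reproducing the figures quoted above.
standing_desks = 40 * 600     # ~40 desks at ~$600 each
office_chairs  = 40 * 600     # ~40 chairs at ~$600 each
couches        = 20 * 1_000   # ~20 couches at ~$1,000 each

big_items = standing_desks + office_chairs + couches   # $68,000

# These big items are taken to be more than half of total furniture spending,
# so a factor of ~2x covers everything else (rugs, end-tables, lights, etc.).
total_furniture = big_items * 2                        # $136,000

print(f"Big items:       ${big_items:,}")
print(f"Estimated total: ${total_furniture:,}")
```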
Thank you so much for voicing these concerns. I share them too and they need to be said more loudly. I’m extremely worried the EA/LessWrong community has had a net negative impact on the world simply because of the increased AI risk.[1] I haven’t heard any good arguments against this.
If we exclude AI-related work, I do think EA has been net positive.
This admirably honest statement deserves more emphasis. As we know from medicine and international development and anywhere that does RCTs, it is really, really hard—even when the results of your actions are right in front of you—to know whether you have helped someone or harmed them. There are just too many confounding factors, selection bias, etc.
The long-termist AGI stuff has always struck me as even worse off in this respect. How is anyone supposed to know that the actions they take today will have a beneficial impact on the world decades from now, rather than making things worse? And given the premises of AGI alignment, making things worse would be utterly catastrophic for humanity.
First of all, yikes.
Second of all, I think I could always sense that things were like this (broadly speaking), but simultaneously worried I was just paranoid and deranged. I think that this dynamic has been quite bad for my mental health.
This is surprising to me. Can you provide a link to the relevant post/comment?
Would you mind sharing a bit more of what you mean here?
I’m not sure I understand how an increase in respectability in LessWrong equates to a shrinking overton window. I would have guessed the opposite—an increase in respectability would have shifted or expanded the overton window in ways that are more epistemically desirable. But I feel like I’m missing something here.
Also, I feel appreciative that you’ve shared a bunch of concerns and learnings with us.
I think the idea is that when you (even unintentionally) signal to others (and to yourself) that you are, or want to be, a more mainstream, respectable institution (e.g. via having a modern, respectable-looking website), this causes people inside and outside the institution to expect it to behave like one. That includes the application of more mainstream Overton windows, which e.g. leads to people complaining about discussions on LW that touch on mainstream-taboo topics, even if those discussions seem fine under the previous LW norms.
Got it, this was helpful. Thanks!
I am confused by this. Is it a quote, if so, what’s your comment on that quote?
Oh, sorry, I didn’t notice I’d posted that comment as a reply to you. Oops. Have reposted downthread.