Project lead of LessWrong 2.0, often helping the EA Forum with various issues. If something is broken on the site, there's a good chance it's my fault (sorry!).
Habryka
(Copying over the same response I posted over on LW)
I don’t have all the context of Ben’s investigation here, but as someone who has done investigations like this in the past, here are some thoughts on why I don’t feel super sympathetic to requests to delay publication:
In this case, it seems to me that there is a substantial threat of retaliation. My guess is Ben's sources were worried about Emerson hiring stalkers, calling their families, trying to get them fired from their jobs, or threatening legal action. Having things out in the public provides a defense, because it is much easier to ask for help when the conflict happens in the open.
As a concrete example, Emerson has just sent me an email saying:
Given the irreversible damage that would occur by publishing, it simply is inexcusable to not give us a bit of time to correct the libelous falsehoods in this document, and if published as is we intend to pursue legal action for libel against Ben Pace personally and Lightcone for the maximum damages permitted by law. The legal case is unambiguous and publishing it now would both be unethical and gross negligence, causing irreversible damage.
For the record, the threat of libel suit and use of statements like “maximum damages permitted by law” seem to me to be attempts at intimidation. Also, as someone who has looked quite a lot into libel law (having been threatened with libel suits many times over the years), describing the legal case as “unambiguous” seems inaccurate and a further attempt at intimidation.
My guess is Ben's sources have also received dozens of calls (as have I in the last few hours), and I wouldn't be surprised to hear that Emerson called up my board, or would otherwise try to find some other piece of leverage against Lightcone, Ben, or Ben's sources if he had more time.
While I am not that worried about Emerson, I think many other people are in a much more vulnerable position and I can really resonate with not wanting to give someone an opportunity to gather their forces (and in that case I think it’s reasonable to force the conflict out in the open, which is far from an ideal arena, but does provide protection against many types of threats and adversarial action).
Separately, the time investment for things like this is really quite enormous and I have found it extremely hard to do work of this type in parallel to other kinds of work, especially towards the end of a project like this, when the information is ready for sharing, and lots of people have strong opinions and try to pressure you in various ways. Delaying by “just a week” probably translates into roughly 40 hours of productive time lost, even if there isn’t much to do, because it’s so hard to focus on other things. That’s just a lot of additional time, and so it’s not actually a very cheap ask.
Lastly, I have also found that the standard way that abuse in the extended EA community has successfully been prevented from being discovered is by forcing everyone who wants to publicize or share any information about it to jump through a large number of hoops. Calls to "just wait a week" and "just run your posts by the party you are criticizing" might sound reasonable in isolation, but they very quickly multiply the cost of any information sharing, and have huge chilling effects that prevent the publishing of most information and accusations. Demanding ever more due diligence from the other party is easy, and it works: it keeps most people away from doing investigations like this.
As I have written about before, I myself ended up being intimidated by this for the case of FTX and chose not to share my concerns about FTX more widely, which I continue to consider one of the worst mistakes of my career.
My current guess is that if it is indeed the case that Emerson and Kat have clear proof that a lot of the information in this post is false, then they should share that information publicly, maybe on their own blog, or maybe here on LessWrong or on the EA Forum. Rumors about people having had very bad experiences working with Nonlinear are already circulating around the community and already having a large effect on Nonlinear, and as such, having clear accusations out in the open to respond to should help them clear their name, if the accusations are indeed false.
I agree that this kind of post can be costly, and I don't want to ignore the potential costs of false accusations. But at least for me, I want an equilibrium with substantially more information sharing, with more trust in people's ability to update their models of what is going on, and with less of the paternalistic "people are incapable of updating even if we present proof that the accusations are false", especially given what happened with FTX and the costs we have observed from failing to share observations like this.
A final point that feels a bit harder to communicate is that in my experience, some people are just really good at manipulation, throwing you off-balance, and distorting your view of reality, and this is a strong reason not to commit to running everything by the people you are sharing information on. A common theme I remember hearing from people who had concerns about SBF is that they intended to warn other people, or share information, but then they talked to SBF, and somehow during that conversation he disarmed them, without really responding to the essence of their concerns. This can take the form of threats and intimidation, or of just being really charismatic and making you forget what your concerns were, or of more deeply ripping away your grounding and making you think that your concerns aren't real, that actually everyone is doing the thing that seems wrong to you, and that you are going to out yourself as naive and gullible by sharing your perspective.
[Edit: The closest post we have to setting norms on when to share information with orgs you are criticizing is Jeff Kaufman's post on the matter. While I don't fully agree with its reasoning, in it he says:
Sometimes orgs will respond with requests for changes, or try to engage you in private back-and-forth. While you're welcome to make edits in response to what you learn from them, you don't have an obligation to: it's fine to just say "I'm planning to publish this as-is, and I'd be happy to discuss your concerns publicly in the comments."
[EDIT: I'm not advocating this for cases where you're worried that the org will retaliate or otherwise behave badly if you give them advance warning, or for cases where you've had a bad experience with an org and don't want any further interaction. For example, I expect Curzi didn't give Leverage an opportunity to prepare a response to My Experience with Leverage Research, and that's fine.]

This case seems to me to be fairly clearly covered by the second paragraph. Also, Nonlinear's response to "I am happy to discuss your concerns publicly in the comments" was "I will sue you if you publish these concerns", to which IMO the reasonable response is to just go ahead and publish before things escalate further. Separately, my sense is that Ben's sources really didn't want any further interaction and really preferred having this over with, which I resonate with, and which is also explicitly covered by Jeff's post.
So in as much as you are trying to enforce some kind of existing norm that demands running posts like this by the org, I don’t think that norm currently has widespread buy-in, as the most popular and widely-quoted post on the topic does not demand that standard (I separately think the post is still slightly too much in favor of running posts by the organizations they are criticizing, but that’s for a different debate).]
Shutting Down the Lightcone Offices
Epistemic status: Probably speaking too strongly in various ways, and probably not with enough empathy, but also feeling kind of lonely and with enough pent-up frustration about how things have been operating that I want to spend some social capital on this, and want to give a bit of a “this is my last stand” vibe.
It’s been a few more days, and I do want to express frustration with the risk-aversion and guardedness I have experienced from CEA and other EA organizations in this time. I think this is a crucial time to be open, and to stop playing dumb PR games that are, in my current tentative assessment of the situation, one of the primary reasons why we got into this mess in the first place.
I understand there is some legal risk, and I am trying to track it myself quite closely. I am also worried that you are trying to run a strategy of "try to figure out everything internally and tell nice narratives about where we are all at afterwards", and I think the mess that strategy has already gotten us into is so great that I don't think now is the time to double down on it.
Please, people at CEA and other EA organizations, come and talk to the community. Explore with us what wrong things happened. Figure out how we should change and what lessons we should learn. We will not figure out a new direction for EA behind closed doors. I am afraid some of you will try to develop some kind of consolidated narrative about what happened and where we are at, and try to present it as fact, when I think the reality of the situation is confusing and messy. I don’t want to cooperate with a lot of the PR-focused storytelling that I think EA has had far too much of in the last few years, and especially in this whole FTX situation, and I both want to be clear that I will push back on more of those kinds of narratives, and want to maybe make right now the time where we stop that kind of stuff.
It isn't my job to answer a lot of the straightforward questions that people have on Twitter and on the EA Forum; other EA organizational leaders are much better placed to give context and answer them. When you talk about legal risk, I think you are only in a limited way talking about legal risk to the movement; you are primarily talking about legal risk to you personally and to your organizations, and your organizations are not the movement. There is some equivocation here, suggesting that you are acting in the best interest of the movement by being hesitant around legal risk, when I think in this case that hesitation is at odds with what is actually good for the world. I have a feeling that one thing we are missing now, and have missed before, is the courage to speak true things even when it is difficult, and I wish we had more of that right now.
Yes, you might get dragged into some terrible legal proceedings, and possibly even be fined some amount of money as you get caught in the legal crossfire. I expect given my comments on the forum I will probably also get dragged into those terrible legal proceedings. But if indeed we played a part in one of the biggest frauds of the 21st century, then I think we maybe should own up to spending a few hundred hours in legal proceedings, and take on some probability of having some adverse legal consequences. I think it’s worth it for actually having us learn from this whole mess.
[Edit: Edited a bunch of stuff to have a somewhat more nuanced sentiment, also added an epistemic status]
Over the course of my working in EA for the last 8 years, I feel like I've seen about a dozen instances where Will made quite substantial tradeoffs, trading off both the health of the EA community and something like epistemic integrity in favor of being more popular and getting more prestige.
Some examples here include:
When he was CEO while I was at CEA, he basically didn't really do his job but handed it off to Tara (who was a terrible choice for many reasons, one of which is that she then co-founded Alameda, and after that went on to start another fraudulent-seeming crypto trading firm, as far as I can tell). He then spent like half a year technically being CEO while spending all of his time on book tours and talking to lots of high-net-worth and high-status people.
I think Doing Good Better was already substantially misleading about the methodology that the EA community has actually historically used to find top interventions. Indeed it was very “randomista” flavored in a way that I think really set up a lot of people to be disappointed when they encountered EA and then realized that actual cause prioritization is nowhere close to this formalized and clear-cut.
I think WWOTF is not a very good book, because it really fails to understand AI risk and describes a methodology of longtermism that again feels like something someone wrote to sound compelling but that just totally doesn't reflect how any of the longtermist-oriented EAs think about cause prioritization. This is in contrast to, for example, The Precipice, which seems like a much better book to me (though still flawed) and actually represents a sane way to think about the future.
The only time Will was really part of a team at CEA was when CEA went through Y Combinator, which I think was kind of messed up (like, he didn't build the team or the organization or really any of the products up to that point). As part of that, he (and some of the rest of the leadership) decided to refocus all of their efforts on building EA Funds, despite the organization having just gone through a major restructuring to focus on talent instead of money, since with Open Phil there was already a lot of money around. This was explicitly not because it would be the most impactful thing to do, but because focusing on something clear and understandable like money would maximize the chances of CEA getting into Y Combinator. I left the organization when this decision was made.
In general, CEA was a massive shitshow for a very long period of time while Will was a board member (and CEO). He didn't do anything about it, and often exacerbated the problems, and I think this had really bad consequences for the EA community, as I've written about in other comments. Instead he focused on promoting EA as well as his own brand.
Despite Will branding himself as a leader of the EA community, as far as I can tell he is actually just not very respected among almost any of the other intellectual leaders of the community, at least here in the Bay. He also doesn’t participate in any discourse with really anyone else in the community. He never comments on the EA Forum, he doesn’t do panel discussions with other people, and he doesn’t really steer the actions of any EA organizations, while of course curating an image of himself as the clear leader of the community. This feels to me very much like trying to get the benefits of being a leader without actually doing the job of leadership.
Will displayed extremely bad judgement in his engagement with Sam Bankman-Fried and FTX. He was the person most responsible for entangling EA with FTX by publicly endorsing SBF multiple times, despite many warnings he received from many people in the community. The portrayal in this article here seems roughly accurate to me. I think this alone should justify basically expelling him as a leader in the EA community, since FTX was really catastrophically bad and he played a major role in it (especially in its effects on the EA community).
I feel like this post mostly doesn't talk about what feels to me like the most substantial downside of trying to scale up spending in EA and of the increased availability of funding.
I think the biggest risk of the increased availability of funding, and of the general increase in scale, is that it will create a culture where people are incentivized to act more deceptively towards others, and that it will attract many people who are much more open to deceptive action in order to capture the resources we currently have.
Here are some paragraphs from an internal memo I wrote a while ago that tried to capture this:
========
I think it was Marc Andreessen who first hypothesized that startups usually go through two very different phases:
Pre Product-market fit: At this stage, you have some inkling of an idea, or some broad domain that seems promising, but you don’t yet really have anything that solves a really crucial problem. This period is characterized by small teams working on their inside-view, and a shared, tentative, malleable vision that is often hard to explain to outsiders.
Post Product-market fit: At some point you find a product that works for people. The transition here can take a while, but by the end of it, you have customers and users banging on your door relentlessly to get more of what you have. This is the time of scaling. You don’t need to hold a tentative vision anymore, and your value proposition is clear to both you and your customers. Now is the time to hire people and scale up and make sure that you don’t let the product-market fit you’ve discovered go to waste.
I think it was Paul Graham or someone else close to YC (or maybe Ray Dalio) who said something like the following (NOT A QUOTE, since I currently can’t find the direct source):
> The early stages of an organization are characterized by building trust. If your company is successful, and reaches product-market fit, these early founders and employees usually go on to lead whole departments. Use these early years to build trust and stay in sync, because when you are a thousand-person company, you won’t have the time for long 10-hour conversations when you hang out in the evening.
> As you scale, you spend down that trust that you built in the early days. As you succeed, it's hard to know who is here because they really believe in your vision, and who just wants to make sure they get a big enough cut of the pie. That early trust is what keeps you agile and capable, and frequently as we see founders leave an organization, and with that those crucial trust relationships, we see the organization ossify, internal tensions increase, and the ability to effectively respond to crises and changing environments get worse.
It’s hard to say how well this model actually applies to startups or young organizations (it matches some of my observations, though definitely far from perfectly), and even more dubious how well it applies to systems like our community, but my current model is that it captures something pretty important.
Whether we want it or not, I think we are now likely in the post-product-market-fit part of the lifecycle of our community, at least when it comes to building trust relationships and onboarding new people. I think we have become high-profile enough, have enough visible resources (especially with FTX's latest funding announcements), and have gotten involved in enough high-stakes politics, that if someone shows up next year at EA Global, you can no longer confidently know whether they are there because they have a deeply shared vision of the future with you, or because they want to get a big share of the pie that seems to be up for the taking around here.
I think in some sense that is good. When I see all the talk about megaprojects and increasing people’s salaries and government interventions, I feel excited and hopeful that maybe if we play our cards right, we could actually bring any measurable fraction of humanity’s ingenuity and energy to bear on preventing humanity’s extinction and steering us towards a flourishing future, and most of those people of course will be more motivated by their own self-interest than their altruistic motivation.
But I am also afraid that with all of these resources around, we are transforming our ecosystem into a market for lemons. That we will see a rush of ever greater numbers of people into our community, far beyond our ability to culturally onboard them, and that nuance and complexity will have to get left at the wayside in order to successfully maintain any sense of order and coherence.
I think it is not implausible that for a substantial fraction of the leadership of EA, within 5 years, there will be someone in the world whose full-time job and top priority it is to figure out how to write a proposal, or give you a pitch at a party, or write a blogpost, or strike up a conversation, that will cause you to give them money, or power, or status. For many months, they will sit down many days a week and ask themselves the question "how can I write this grant proposal in a way that person X will approve of" or "how can I impress these people at organization Y so that I can get a job there?", and they will write long Google Docs to their colleagues about their models and theories of you, and spend dozens of hours thinking specifically about how to get you to do what they want, while drawing up flowcharts that will include your name, your preferences, and your interests.
I think almost every publicly visible billionaire has whole ecosystems spring up around them that try to do this. I know some of the details here for Peter Thiel, and the “Thielosphere”, which seems to have a lot of these dynamics. Almost any academic at a big lab will openly tell you that among the most crucial pieces of knowledge that any new student learns when they join, is how to write grant proposals that actually get accepted. When I ask academics in competitive fields about the content of their lunch conversations in their labs, the fraction of their cognition and conversations that goes specifically to “how do I impress tenure review committees and grant committees” and “how do I network myself into an academic position that allows me to do what I want” ranges from 25% to 75% (with the median around 50%).
I think there will still be real opportunities to build new and flourishing trust relationships, and I don't think it will be impossible to really come to trust someone who joins our efforts after we have become 'cool,' but I do think it will be harder. I also think we should cherish and value the trust relationships we do have between the people who got involved earlier, because that lack of doubt about why someone is here is a really valuable resource, and one that I expect is more and more likely to be a bottleneck in the coming years.
I feel really quite bad about this post. Despite being only a single paragraph, it succeeds at confidently making a wrong claim, pretending to speak on behalf of both an organization and a community that it does not accurately represent, communicating ambiguously (probably intentionally, in order to avoid being pinned to any specific position), and for some reason omitting crucial context.
Contrary to the OP, it is easy to come up with examples where, within the Effective Altruism framework, two people do not count equally. Indeed, most QALY frameworks value young people more than older people; many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering; and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus.
Saying "all people count equally" is not a core belief of EA, and indeed I do not remember hearing it seriously argued for a single time in my almost 10 years in this community (which is not surprising, since it doesn't really hold any water after even just a tiny bit of poking, and your only link for this assertion is a random article written by CEA that doesn't argue for the claim at all and just blindly asserts it). It is still the case that most EAs believe that the variance in the importance of different people's experience is relatively small, that this variance almost certainly does not align with historical conceptions of racism, and that there are at least some decent game-theoretic arguments for ignoring a good chunk of this variance. But that does not make "all people count equally" a "core belief", a label that should clearly be reserved for an extremely small number of values and claims. It might be a good enough approximation in almost all practical situations, but it is really not a deep philosophical assumption of any of the things I am working on, and I am confident that if I were to bring it up at an EA meetup, someone would quite convincingly argue against it.
This might seem like a technicality, but in this context the statement is specifically made to claim that EA has a deep philosophical commitment to valuing all people equally, independently of the details of how their minds work (whether because of genetics, or developmental environment, or education). This reassurance does not work. I (and my guess is also almost all extrapolations of the EA philosophy) value people approximately equally in impact estimates because the relative moral patienthood of different people, and the basic cognitive makeup of people, does not seem to differ much between different populations, not because I have a foundational philosophical commitment to impartiality. If different human populations did differ a lot on the relevant dimensions, this would spell a real moral dilemma for the EA community, with no deep philosophical commitments to guard us from coming to uncomfortable conclusions (luckily, as far as I can tell, almost all analyses from an EA perspective lead to the conclusion that it's probably reasonable to weigh people equally in impact estimates, which doesn't conflict with society's taboos, so this is not de facto a problem).
Moving on, I do not believe that this statement speaks for the employees of CEA, many of whom I am confident also feel quite badly represented by it, nor does it speak for Effective Altruism. I don't know what process produced it, but I don't think it speaks for me or almost anyone else I know within the EA community. Organizations themselves don't have beliefs, and EA has generally successfully avoided descending into meaningless marketing and PR speech in which organizations take positions that nobody at those organizations actually believes. If you want to make a statement on this matter, speak as an individual. Individuals can meaningfully have beliefs. Organizations pretending to have beliefs is usually primarily a tactic to avoid taking responsibility by creating a diffuse target.
Additionally, it is completely unclear from your statement whether you are referring to Bostrom's original email or to Bostrom's apology. I don't know why you are being ambiguous, but it seems quite plausible that you are doing so in order to avoid being pinned on either repudiating the statements in Bostrom's apology, which seem quite reasonable to me and many other EAs (and would therefore attract ire from the community), or failing to repudiate those same statements, which are attracting a lot of ire publicly for not being explicitly anti-racist enough. If this is indeed what you are doing, then please stop; this ambiguity is toxic to clear communication. If this is not what you are doing, then please clarify, and also please get better at writing; it seems really extremely obvious that this was going to be a problem with this statement.
Lastly, you are also not linking to either Bostrom’s original statement, or his apology. I don’t know why. It would both clear up the ambiguity discussed above, and it would provide crucial context for anyone trying to understand what is going on, and who might have not seen Bostrom’s apology. My guess is you are doing this with some other PR reason in mind. Maybe so that when people Google this topic later on this doesn’t show up in search? Maybe so that the lack of context makes it less likely that other people outside of the community will understand what this statement is about? In any case, either please get better at communicating, or stop the weird PR games that you are seemingly trying to play here.
Overall, despite this being only a single paragraph, I think little that CEA has produced has made me feel as badly represented, and as alienated from the EA community, as this statement. Please stop whatever course you are setting out on where this is how you communicate with both the public and the community.
I agree with some of the points of this post, but I do think there is a dynamic here that is missing, that I think is genuinely important.
Many people in EA have pursued resource-sharing strategies where they pick up some piece of the problems they want to solve, and trust the rest of the community to handle the other parts of the problem. One very common division of labor here is:

> I will go and do object-level work on our core priorities, and you will go and make money, or fundraise from other people, in order to fund that work.
I think a lot of this type of trade has happened historically in EA. I have definitely forsaken a career with much greater earning potential than I have right now in order to contribute to EA infrastructure and to work on object-level problems.
I think it is quite important to recognize that insofar as a trade like this has happened, it gives the people who have done object-level work a substantial amount of ownership over the funds that other people have earned or fundraised (I think this also applies to Open Phil, though the case there is a bunch messier and I won't go into my models of the game theory in detail). The person who owns the funds can defect on the arrangement at basically any time and just do direct work themselves, while the person who has been doing the object-level work has no ability to defect in the same way. So this trade relies on the person doing object-level work trusting the person who made the money to keep their promise to act in both parties' best interest.
My current guess is that the majority of EA's impact is downstream of trades like this, so taking this into account is a pretty huge deal in my books. For example, being able to specialize in building infrastructure for the community, while trusting that I would maintain some ability to direct the EA portfolio, was I think a huge multiplier on my impact in the world.
That means that I do think a lot of the funds that have been raised within EA, though definitely not all of them, are meaningfully owned by the people who have forsaken direct control over that money in order to pursue our object-level priorities, and not by the people in whose bank accounts the money is technically located.
To make the case clearer: I think there are many people who have forsaken a path in industry where they could have been quite successful entrepreneurs making many millions of dollars, and who do not currently have direct control over millions of dollars.
Overall I think it is a mistake that the current balance of funds does not reflect this. In a world where this trade is working well, the people who we think are responsible for the biggest positive impact would have been given hundreds of millions of dollars in exchange for that impact, and the ownership of the funds would be clearer. I am somewhat optimistic that more things like this will happen in the future as things like impact certificates take off, which try to make this whole situation less fuzzy and more concrete.
Historically, EA had a culture in which the people running successful EA organizations were not really allowed to get rich from running them (partially for valid signaling- and grifter-related reasons), but this does mean that the current balance of funds does not reflect the fair and tacitly agreed-on allocation of funds, and this is a pretty precarious situation.
However, this does not mean that I think the money should be straightforwardly democratically allocated. I think the balance of funds and other sources of power in EA should roughly represent the balance of past positive impact that people have achieved (which includes the positive impact from making and fundraising money). Given the heavy-tailedness of impact, this still represents a very non-democratic allocation of funds, but it does meaningfully differ from putting ownership clearly in the hands of the donors.
Of course, there are many donors who feel like they have not participated in any trade like this, though in most of those situations I think it's the right call to charge a substantial surcharge on the literal cost of labor of someone doing object-level work, so that over time the price of buying altruistic impact reflects not just the marginal cost of labor, but also (at least) the counterfactual cost to the people doing object-level work of having abstained from a financially lucrative career.
As a concrete example of where this line of reasoning leads, you can look at the Lightcone Infrastructure salary policy: "we will pay you whatever we think you could have made in industry, minus 30%". This compensation structure means we will pay many people salaries quite substantially above what they need to live. The 30% number is trying to find a roughly fair split of sacrifice between the donor and the worker: the donor pays 70% of the worker's market rate, and the worker gives up 30% of their salary, together making progress towards their shared goal. This number is skewed towards the donor because salary doesn't really capture most of the variance in income, since income is heavy-tailed and "salary" is anchored on the median outcome, and also because donors are selected from the pool of people who got lucky in the entrepreneurial lottery, and the balance they pay needs to also account for all the people who tried to make money for EA and failed.
My guess is that the 70/30 split here is fairer, though still skewed quite a bit against the workers, but it's at least an attempt to make the actual allocation of funds better reflect the fair allocation.
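For concreteness, here is a minimal sketch of that compensation rule in Python. The $200k industry estimate is purely illustrative, not an actual Lightcone figure:

```python
# Minimal sketch of the "industry estimate minus 30%" policy described above.
# The salary estimate passed in below is a made-up illustration, not real data.

WORKER_SACRIFICE = 0.30  # the worker forgoes 30% of their estimated market rate

def lightcone_offer(estimated_industry_salary: float) -> float:
    """Pay what we think the person could have made in industry, minus 30%."""
    return estimated_industry_salary * (1 - WORKER_SACRIFICE)

# A hypothetical worker with an estimated $200k market rate is offered $140k:
# the donor funds 70% of the market rate, and the worker donates the other 30%.
print(lightcone_offer(200_000))  # -> 140000.0
```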
I think this means that saying "the EA community does not own its donors' money" is overall pretty inaccurate, though it is of course still tracking something important. I do indeed think that many people who have done highly impactful work in the EA community have a quite strong and direct claim of ownership over the funds that have been made by various entrepreneurs who did earning-to-give, as well as by various megadonors and pots of funds like Open Philanthropy's endowment. In an ideal world this balance of funds and power would be made more explicit by something like impact-certificate markets with evaluations from current donors, but we are pretty far from that, and in the meantime I really care about not wrongly enshrining a meme that ignores all the past trades of division-of-labor that have happened (and are happening on a daily basis).
I think EA is currently much more likely to fail to achieve most of its goals by ending up with a culture that is ill-suited to its aims, by being unable to change direction when new information comes in, and by generally succumbing to the problems of large communities and other forms of organization (like, as you mentioned, the community behind NeurIPS, which is currently on track to be an unstoppable behemoth racing towards human extinction, and which I so desperately wish were trying to be smaller and better coordinated).
I think EA Global admissions is one of the few places where we can apply steering on how EA is growing and what kind of culture we are developing, and giving this up seems like a cost, without particularly strong commensurate benefits.
On a more personal level, I do want to be clear that I am glad we had a bigger EA Global this year, but I would probably just stop attending an open-invite EA Global, since I don't expect it would really share my culture or be selected for people I would really want to be around. I think this year's EA Global came pretty close to exhausting my capacity for being thrown into a large group of people with a quite different culture and differing priorities, and I expect less selection would cause me to hit that limit quite reliably.
I do think there are ways to address many of the problems you list in this post by changing the admissions process, which I think is currently pretty far from perfect (in particular, I would like to increase the number of people who don't need to apply to attend, because they are part of some group or have some obvious signal that means they should pass the bar).
On funding, trust relationships, and scaling our community [PalmCone memo]
Huh, I am surprised that no one responded to you on this. I wonder whether I was part of that conversation, and if so, I would be interested in digging into what went wrong.
I definitely would have put Sam into the "unlawful oathbreaker" category, and I have warned many people I have been working with that Sam has a reputation for dishonesty and that we should limit our engagement with him (and more broadly I have complained about an erosion of honesty norms among EA leadership to many of the current leadership, complaints in which I often brought up Sam directly as one of the sources of my concern).
I definitely had many conversations with people in “EA leadership” (which is not an amazingly well-defined category) where people told me that I should not trust him. To be clear, nobody I talked to expected wide-scale fraud, and I don’t think this included literally everyone, but almost everyone I talked to told me that I should assume that Sam lies substantially more than population-level baseline (while also being substantially more strategic about his lying than almost everyone else).
I do want to add to this that in addition to Sam having a reputation for dishonesty, he also had a reputation for being vindictive, and almost everyone who told me about their concerns about Sam did so while seeming quite visibly afraid of retribution from Sam if they were to be identified as the source of the reputation, and I was never given details without also being asked for confidentiality.
My tentative best guess on how EAs and Rationalists sometimes turn crazy
I want to push back on this post. I think, sadly, it suffers from the same problem that 99% of the legal advice people receive suffers from: it is not actually a risk analysis that helps you understand the actual costs of different decisions.
The central paragraph that I think most people will react to is this section:
Being involved in litigation, even as a totally blameless witness—or even a perceived witness who in fact has no relevant knowledge at all—is expensive, time consuming, emotionally taxing, and unpleasant. Even cheap lawyers cost hundreds of dollars an hour these days and often bill in increments of .1 hours for their time. Anyone who gets caught up in court proceedings can expect to pay such a lawyer (or have their employer do so) for many hours of time to help them produce documents and communications (or formally object to having to do so) and then prepare them to be grilled for many more hours by another well-paid professional interlocutor with goals and motives at best orthogonal to their own, if not outright hostile.
Sadly, this post does not indicate how much time a witness should actually expect to spend in litigation.
I don't currently know this number, but my guess is that fewer than 10% of the EA leaders who write publicly about this would end up being involved in any substantial amount of legal proceedings. My guess, based on some independent legal research of my own, is that the actual cost of being involved in this kind of litigation is substantial, but not massive: most likely a few thousand dollars in lawyer fees (for simplicity of calculation, let's say $10k), and probably a few days of being involved in court proceedings (let's say, for simplicity, 5 days at about 5 hours each).
Finishing this very rough estimate, my guess is the actual expected cost of writing publicly about this is around 0.1 × $10k + 0.1 × 25 hours = $1k + 2.5 hours. My guess is this is well below what most EA leaders would be willing to pay to be able to speak openly and publicly about this, and would at most be a minor consideration in whether to actually say anything on the topic.
That said, I am not a legal professional, and my estimates might be totally wrong here. I would really appreciate a legal analysis that actually estimates the cost, since it seems totally plausible to me (and indeed more likely than not) that the expected cost of speaking up is pretty minor, though it is also plausible that it could be major, and settling which one it is is the primary question such an analysis should aim to answer, in my opinion.
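For concreteness, here is the same back-of-the-envelope written out as a tiny Python script; every input is one of my guesses from the paragraphs above, not measured data:

```python
# Rough expected-cost estimate from the comment above; all inputs are guesses.

p_substantial_proceedings = 0.10  # guessed chance a public writer gets pulled in
lawyer_fees_if_involved = 10_000  # guessed lawyer fees (in dollars) if involved
hours_if_involved = 5 * 5         # guessed 5 days of roughly 5 hours each

expected_dollars = p_substantial_proceedings * lawyer_fees_if_involved
expected_hours = p_substantial_proceedings * hours_if_involved

print(f"expected cost: ${expected_dollars:,.0f} plus {expected_hours} hours")
# -> expected cost: $1,000 plus 2.5 hours
```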
For related analysis that captures my feelings on this topic, see also Everyone Should Be Less Afraid of Lawsuits by Alyssa Vance, in particular this section:
“Lawyer” is traditionally a high-status job, and most people look to lawyers with some degree of deference and authority. When a lawyer says “don’t do X, you might get sued”, people usually listen. But lawyers are institutionally trained and personally incentivized to be conservative, largely because of asymmetric justice—if you get sued and the lawyer didn’t speak up, they could be blamed, while lawyers are almost never blamed for shooting down an otherwise high-value idea out of inaccurate risk assessment. Just like airports and missing a flight, an organization that always listens to lawyers is being more cautious than the optimum.
Long-Term Future Fund: April 2019 grant recommendations
Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22]
Yes, I at least strongly support people reaching out to my staff about opportunities that they might be more excited about than working at Lightcone, and similarly I have openly approached other people working in the EA community at other organizations about working at Lightcone. I think the cooperative atmosphere between different organizations, and the trust that individuals are capable of making the best decisions for themselves on where they can have the best impact, is a thing I really like about the EA community.
I thought the previous article by Charlotte Alter on sexual misconduct in EA was pretty misleading in a lot of ways, as the top comments have pointed out: it omitted a lot of crucial context, primarily used examples from the fringes of the community, and omitted various enforcement actions that were taken against the people mentioned. The result was an article with some useful truths in it that nevertheless made it really quite hard for readers to come to a good map of what is actually going on with that kind of thing in EA.
This article, in contrast, does not have, as far as I can tell, any major misrepresentations in it. Of course, I do not know the details of things like the conversations between Will and Tara, since I wasn't there, and I have a bit of a feeling there is some exaggeration in the quotes by Naia, but having done my own investigation and talked to many people about this, the facts and the rough presentation of what happened seem basically correct.
It still has many of the trappings of major newspaper articles, and I think it continues to be optimized more for telling a compelling story than for giving people a clear understanding of the details, but at least in my perception the rough narrative lines up pretty well with what I think indeed happened. When I found out similar details in early 2022, I also had quite a strong reaction that nobody seemed to be acting on all of these warning flags.
I disagree. Or at least I think the reasons in this post are not very good reasons for Bostrom to step down (it is plausible to me he could pursue more impactful plans somewhere else, potentially by starting a new research institution with less institutional baggage and less interference by the University of Oxford).
Bostrom is, as far as I can tell, the primary reason why FHI is a successful and truth-oriented research organization. Making a trustworthy research institution is exceptionally difficult, and its success is not primarily measured in the operational quality of the organization, but in the degree to which it produces important, trustworthy, and insightful research. Bostrom has succeeded at this: the group of people he assembled as the core FHI research team (especially the early FHI cast, including Anders Sandberg, Eric Drexler, Andrew Snyder-Beattie, Owain Evans, and Stuart Armstrong) has made great contributions to many really important questions that I care about, and I cannot think of any other individual who would have been able to do the same (Sean gives a similar perspective in his comment).
I think Bostrom overstretched himself when he let FHI grow to dozens and hundreds of people, which seems to me like it was a mistake: it leaned too hard on skills that are hard to combine with intellectual integrity and vision, and at which Bostrom does not otherwise seem exceptionally strong (like navigating university politics and facilitating operational scaling). I do think that aspect of FHI has recently run into a bunch of problems, but at what I think of as FHI's core responsibility he has done very exceptional work, and I see no suitable replacement for Bostrom at this point.
The value that FHI has produced for the world has been in the unabashed exploration of ideas, with great willingness to follow them wherever they go, even if this involves engaging with ideas that are scary, sound crazy and speculative, or are societally taboo. To me, the core value-add of FHI has always been to provide a beacon, and one of the world's best places to work, for people who want to take ideas seriously and think rigorously about the future. The cultural components needed to create this kind of environment are very rare, and do not exist almost anywhere else in the world (and, for example, IMO very clearly do not exist at GCRI or CSER or CSET or OpenAI or Anthropic or Longview or CLTR).
I think these are the things to pay attention to when evaluating the historical performance of Bostrom's FHI, not the organization's ability to write PR statements, or to scale a large research lab while navigating politically difficult relationships with a 1,000-year-old university. Most well-run large research labs with a squeaky-clean public image do not answer interesting questions that are crucial to humanity's future in a way that a reader can trust is actually driven by the desire to get the right answer, as opposed to pushing some kind of political or intellectual agenda. Indeed, I personally trust people more when they don't spend a huge chunk of their energy trying to maintain a completely clean public image.
For basically any organization other than FHI that pursues similar questions, when I read their takes on macrohistory or the future of humanity, I usually primarily see attempts to spread some ideology, to gain resources for some interest group, or to build a sphere of influence by telling the right kind of macrohistory. In core FHI work, like "Eternity in Six Hours" (one of the papers that has been most influential on my world view), I see what seems to me a genuine interest in figuring out the truth and answering the big questions, instead of secretly trying to trick me into supporting them, get me to buy into their ideology, support their favorite political cause or social movement, or suspiciously shy away from a conclusion whenever that conclusion would be too hard to defend publicly to people who only want to spend 5 minutes on the question.
It is possible, and would be very sad, that FHI cannot continue being this beacon, both because it scaled too quickly and its cultural magic is therefore no longer there, and because it is too deeply entwined with the University of Oxford, which will smother both its operational capacity and its intellectual exploration.
In that case, I think the right choice is not for Bostrom to leave FHI, but for FHI to shut down. FHI is responsible for many of the best intellectual contributions to exploring the future of humanity, and before we do something that would substantially sabotage that legacy, it would be better to close down in a structured manner. I think it would be bad to let FHI fall into the hands of someone interested in making it just another talent funnel, or another machine for producing prestige for Effective Altruism or AI Alignment or the people running FHI, while using up the credibility and intellectual integrity of Bostrom and the many other core researchers who have created one of the highest-integrity research institutions in the world.
I don't know enough details about the FHI situation to have a strong judgement on whether Bostrom should stay and try to right the ship, or shut FHI down, but I think asking Bostrom to step down because of controversies misjudges the value that FHI most provides for the world.
I do think it is plausible that we should consider FHI a lesson in how getting too involved with risk-averse institutions will ultimately bite us, and that we should be more hesitant to embed ourselves deeply in institutions like the University of Oxford. If indeed FHI is irrevocably tied to Oxford, and Oxford is unlikely to give FHI the operational and intellectual independence it needs, then I strongly encourage Bostrom to close down FHI and to start something new with more independence, less need for difficult PR work, and less need to navigate messy university politics.
It's also possible to me that Sean's proposal of finding a co-director to run FHI with might be the best choice, and might allow Bostrom to focus more on producing great research while running a somewhat larger organization at the same time, though of course finding such a co-director is also very difficult (though it does seem easier than finding someone to lead the institution fully on their own).
I really want to be in favor of having a less centralized media policy, and I do think some level of reform is in order, but I also think "don't talk to journalists" is just actually a good and healthy community norm, in a similar way that "don't drink too much" and "don't smoke" are good community norms: I think most journalists are indeed traps, and it's rarely in someone's self-interest to talk to them.
Like, the relationship I want to have to media is not “only the sanctioned leadership can talk to media”, but more “if you talk to media, expect that you might hurt yourself, and maybe some of the people around you”.
I think almost everyone I know who has accepted a request to be interviewed about some community-adjacent thing in the last 10 years has regretted their choice, not because they were punished by the community or something, but because the journalists ended up twisting their words and perspective in a way that both felt deeply misrepresentative and gave the interviewee no way to object or correct anything.
So, overall, I am in favor of some kind of change to our media policy, but also continue to think that the honest and true advice for talking to media is “don’t, unless you are willing to put a lot of effort into this”.
The accusations are public and have already received substantial exposure. TIME itself seems to be leveraging this request for confidentiality to paint an inaccurate picture of what is actually going on, and the request is also making it substantially harder for people to orient towards the actual potential sources of risk in the surrounding community.
I don't currently see a strong argument for not linking to evidence that I was easily able to piece together publicly, and which the accused can probably also figure out. The cost here is really only borne by the people who lack context, who I feel are being substantially misled by the absence of information here.
I'll by default repost the links, and my guess at the identity of the person in question, in 24 hours, unless a forum admin objects or someone makes a decent counterargument.
I don't think I am a great representative of EA leadership, given my somewhat bumpy relationship with and feelings about a lot of EA stuff, but I nevertheless think I have a bunch of the answers you are looking for:
The Coordination Forum is a very loosely structured retreat that’s been happening around once a year. At least the last two that I attended were structured completely as an unconference with no official agenda, and the attendees just figured out themselves who to talk to, and organically wrote memos and put sessions on a shared schedule.
As far as I can tell, basically no decisions get made at the Coordination Forum; its primary purpose is building trust and digging into gnarly disagreements between different people who are active in EA community building and who seem to get along well with the others attending (with some balance between the two).
I think attendance has been decided by CEA, and the criteria have been pretty in flux. My sense has been that a lot of it just depends on who CEA knows well enough to feel comfortable inviting, and who seems obviously worth coordinating with.
I mean, my primary guess here is Carrick. I don’t think there was anyone besides Carrick who “decided” to make the Carrick campaign happen. I am pretty confident Carrick had no boss and did this primarily on his own initiative (though likely after consulting with various other people in EA on whether it was a good idea).
[edit: On more reflection, and after talking to some more people, my guess is there was actually more social pressure involved here than this paragraph implies. I think it was closer to “a bunch of kind-of-but-not-very influential EAs reached out to him and told him that they thought it would be quite impactful and good for the world if he ran”, and my updated model is that Carrick really wasn’t personally attracted to running for office, and the overall experience was not great for him.]
I expressed a desire for it not to happen! Though, to be fair, it wasn’t super obvious to me that it was the wrong call; but a few times when people asked me whether to volunteer for the Carrick campaign, I said that it seemed overall bad for the world. I did not reach out to Carrick with this complaint, since doing anything is already hard, Carrick seemed well-intentioned, and while I think his specific plan was a mistake, it didn’t seem like a bad enough mistake to be worth actively intervening over (and ultimately Carrick can do whatever he wants; I can’t stop him from running for office).
I think it could be a cost-effective use of $3-10 billion (I don’t know where you got the $8-15 billion from; the realistic amounts look closer to $3 billion). My guess is it’s not, but Twitter does sure seem to have a large effect on the world, both in terms of geopolitics and in terms of things like norms for the safe development of technologies, so if you had taken Sam’s net worth at face value at the time, this didn’t seem like a crazy idea to me.
I don’t know why Will vouched so hard for Sam, though; that seems like a straightforward mistake to me. I think it’s likely Will did not consult anyone else, which is his right as a private individual talking to other private individuals.
My guess is because he thought none of them were very good? I also don’t think we should take any of their suggestions on board, and many of them strike me as catastrophic if adopted. I also don’t think any of them would have helped with this whole FTX situation, and my guess is some of them would likely have made it worse.
I don’t know of a ton of things Will has done here. I do think I and others have tried various things over the years to reduce hero worship. On LessWrong and the EA Forum I downvote things that seem hero-worshippy to me, and I have written many comments over the years trying to reduce it. We also designed the frontpage guidelines on LW to reduce some of the associated community dynamics.
I do think this is a bit of a point of disagreement between me and others in the community, in that I have had more concerns in this domain than others have, but my sense is everyone is pretty broadly on board with reducing this. Sadly, I also don’t have a ton of traction on reducing it.
I do think it is indeed really sad that people fear reprisal for disagreement. I think this is indeed a pretty big problem, not really because EA is worse here than the rest of the world, but because I think the standard for success is really high on this dimension, and there is a lot of value in encouraging dissent and pushing back against conformity, far into the tails of the distribution here.
I expect the community health team has discussed this extensively (I have discussed it with them for many hours). Lots of things have been attempted to help with this over the years: we branded one EAG around “keeping EA weird”, we encouraged formats like whiteboard debates at EAG to show that disagreement among highly-engaged people is common, and we added things like disagree-voting (in addition to normal upvoting and downvoting) to encourage a culture where it’s normal and expected that someone can write something many people disagree with, without that thing being punished.
My sense is this all isn’t really enough, and we still kind of suck at it, but I also don’t think it’s an ignored problem in the space. I also think this problem gets harder and harder the more you grow, and larger communities trying to take coordinated action require more conformity to function, and this sucks, and is I think one of the strongest arguments against growth.
Anything I say here is in my personal capacity and not in any way on behalf of EA Funds. I am just trying to use my experience at EA Funds for some evidence about how these things usually go.
At least historically, in my work at EA Funds, this would be the opposite of how I usually evaluate grants. A substantial fraction of my notes consists of complaining that people seem too conformist to me and feel a bit like “EA bots” who somewhat blindly accept EA canon in ways that feel bad to me.
My sense is other grantmakers are less anti-conformity, but in general, at least in my interactions with Open Phil and EA Funds grantmakers, I’ve seen basically nothing that I could meaningfully describe as punishing dissent.
I do think there are secondary things going on here where, de facto, people have a really hard time evaluating ideas that are not expressed in their native ontology, and if you say stuff that seems weird from an EA framework, this can come across as cringe to some people. I do hate a bunch of those cringe reactions, and I think they contribute a lot to conformity. That kind of stuff is indeed pretty bad, though almost all of the people who I’ve seen do this kind of thing would at least in the abstract strongly agree that punishing dissent is quite bad and that we should be really careful in this domain, and they have been excited about actively starting prizes for criticism, etc.
Again, just using my historical experience at EA Funds as evidence. I continue to in no way speak on behalf of funds, and this is all just my personal opinion.
I would have to look through the data, but my guess is about 20% of EA Funds funding is distributed privately, though a lot of that happens via referring grants to private donors (i.e. most of this does not come from the public EA Funds funding). About three-quarters (in terms of dollar amount) of this is to individuals who have a strong preference for privacy, and the other quarter is for stuff that’s more involved in policy and politics where there is some downside risk of being associated with EA in both directions (sometimes the policy project would prefer to not be super publicly associated and evaluated by an EA source, sometimes a project seems net-positive, but EA Funds doesn’t want to signal that it’s an EA-endorsed project).
SFF used to have a policy of allowing grant recommenders to prevent a grant from showing up publicly, but we abolished that power in recent rounds, so now all grants show up publicly.
I personally really dislike private funding arrangements, find them kind of shady, and have pushed back a bunch on them at EA Funds, though I can see the case for them in a quite narrow set of cases. I quite dislike not publicly talking about policy project grants in particular, since I think they are often worth the most scrutiny.
There is no formal government here. If you do something that annoys a really quite substantial fraction of people at EA organizations, or people on the EA Forum, or any other large natural interest group in EA, there is some chance that someone at CEA (or maybe Open Phil) will reach out and ask you to please stop, maybe backed up with some threat involving the Effective Altruism trademark that I think CEA owns.
I think this is a difficult balance, and asking people to please associate less with EA can also easily contribute to a climate of conformity and fear, so I don’t really know what the right balance here is. I think on the margin I would like the world to understand better that EA has no central government, and anyone can basically say whatever they want and claim that it’s on behalf of EA, instead of trying to develop some kind of party-line that all people associated with EA must follow.
I do think this was a quite misleading narrative (though I do want to push back on your statement of it being “completely untrue”), and people made a pretty bad mistake endorsing it.
Up until yesterday I thought that 80k had indeed fucked up pretty badly here, but I talked a bit to Max Dalton, and my guess is the UK EAs knew a lot less about how Sam was living than people here in the Bay Area, so it’s now plausible to me (though still overall unlikely) that Rob just genuinely did not know that Sam was actually living a quite lavish lifestyle in many ways.
When the interview came out, I drafted an angry message to Rob Wiblin that I ended up not sending because it was a bit too angry. It went approximately like: “Why the hell did you tell this story of SBF being super frugal in your interview when you know totally well that he lives in one of the most expensive apartments in the Bahamas and has a private jet?” I now really wish I had sent it. I wonder whether it would have caused Rob to notice something fishy was going on, and while I don’t think it would have flipped this whole situation, I do think it could have made a decent dent in keeping people from being duped by it.