Thanks so much for bringing this degree of honesty, openness and detail about a decision this big. As someone not deeply embroiled in the longtermist/rationalist world, your uncertainty about whether you and others are doing net harm vs good on the AI alignment front is pretty chilling. I’m looking forward to responses, hoping the picture is not quite as bleak as you paint!
One question on something I do know a little about (which could be answered in a couple of sentences or even perhaps a link). What’s your issue with Will MacAskill as a public intellectual? I’ve watched TED talks, heard him do interviews etc. and he seemed on shallow thought to be a good advocate for EA stuff in general.
Over the course of working in EA for the last 8 years, I feel like I’ve seen about a dozen instances where Will made quite substantial tradeoffs, trading off both the health of the EA community and something like epistemic integrity in favor of being more popular and gaining more prestige.
Some examples here include:
When he was CEO while I was at CEA, he basically didn’t really do his job at CEA but handed it off to Tara (who was a terrible choice for many reasons, one of which is that she then co-founded Alameda and after that went on to start another fraudulent-seeming crypto trading firm, as far as I can tell). He then spent like half a year technically being CEO but spending all of his time being on book tours and talking to lots of high net-worth and high-status people.
I think Doing Good Better was already substantially misleading about the methodology that the EA community has actually historically used to find top interventions. Indeed it was very “randomista” flavored in a way that I think really set up a lot of people to be disappointed when they encountered EA and then realized that actual cause prioritization is nowhere close to this formalized and clear-cut.
I think WWOTF is not a very good book because it really fails to understand AI risk and also describes some methodology of longtermism that, again, feels like something someone wrote to sound compelling but just totally doesn’t reflect how any of the longtermist-oriented EAs think about cause-prioritization. This is in contrast to, for example, The Precipice, which seems like a much better book to me (though still flawed) and actually represents a sane way to think about the future.
The only time when Will was really part of a team at CEA was during the time when CEA went through Y Combinator, which I think was kind of messed up (like, he didn’t build the team or the organization or really any of the products up to that point). As part of that, he (and some of the rest of the leadership) decided to refocus all of their efforts on building EA Funds, despite the organization just having gone through a major restructuring to focus on talent instead of money, since with Open Phil there was already a lot of money around. This was explicitly not because it would be the most impactful thing to do, but because focusing on something clear and understandable like money would maximize the chances of CEA getting into Y Combinator. I left the organization when this decision was made.
In general, CEA was a massive shitshow for a very long period of time while Will was a board member (and CEO). He didn’t do anything about it, and often exacerbated the problems, and I think this had really bad consequences for the EA community, as I’ve written about in other comments. Instead he focused on promoting EA as well as his own brand.
Despite Will branding himself as a leader of the EA community, as far as I can tell he is actually just not very respected among almost any of the other intellectual leaders of the community, at least here in the Bay. He also doesn’t participate in any discourse with really anyone else in the community. He never comments on the EA Forum, he doesn’t do panel discussions with other people, and he doesn’t really steer the actions of any EA organizations, while of course curating an image of himself as the clear leader of the community. This feels to me very much like trying to get the benefits of being a leader without actually doing the job of leadership.
Will displayed extremely bad judgement in his engagement with Sam Bankman-Fried and FTX. He was the person most responsible for entangling EA with FTX by publicly endorsing SBF multiple times, despite many warnings he received from many people in the community. The portrayal in this article here seems roughly accurate to me. I think this alone should justify basically expelling him as a leader in the EA community, since FTX was really catastrophically bad and he played a major role in it (especially in its effects on the EA community).
(Edit: See this comment I made with some minor retractions on the above. I do want to note that inasmuch as I did get things wrong, both Will and I agreed that it was likely because people hired and supervised by Will directly lied to both of us, which I think is in substantial part Will’s fault, and as things go is among the more forgivable reasons for getting things wrong. I also think most of the retractions don’t bear that much on my overall assessment, though I did make some minor updates towards the mess at CEA being more “Will being taken advantage of” rather than “Will playing an active role in the advantage-taking”.)
Fwiw I have little private information but think that:
I sense this misses some huge successes in EA getting where it is. Seems we’ve done pretty well all things considered. Wasn’t Will part of that?
Will is a superlative networker
He is a very good public intellectual. Perhaps Ord could be if his books were backed to that extent. Perhaps Will could be better if he wrote different books. But he seems really good at it. I would guess that on the public intellectual side he’s a benefit, not a cost.
If I’d had the ability to direct billions in philanthropy I probably would have, even with nagging doubts.
It seems he’s maybe less good at representing the community or managing orgs. I don’t know if that’s the case, but I can believe it.
If so, it seems possible there is a role for him as a public intellectual associated with EA, but not as the only one.
I feel bad when writing criticism because personally I hope he’s well and I’m very grateful to him.
Also thanks Habryka for writing this. I think surfacing info like this is really valuable and I guess it has personal costs to you.
I agree Will’s made a bunch of mistakes (like yes CEA was messed up), but I find it hard to sign up to a narrative where status seeking is the key reason.
My impression is that Will often finds it stressful and unpleasant to do community leadership stuff, media, talk to VIPs etc. He often seems to do it out of a sense of duty (i.e. belief that it’s the most impactful thing). His ideal lifestyle would be more like being an academic.
Maybe there’s some kind of internal conflict going on, but it seems more complicated than this makes out.
My hot take is that a bunch of the disagreement is about how much to prioritise something like the instrumental values of conventional status / broader appeal vs. proactively saying what you think even if it looks bad / being a highly able niche community.
My impression is that you’re relatively extreme in how much you rate the latter, so it makes sense to me you’d disagree with a bunch of Will’s decisions based on that.
I agree Will’s made a bunch of mistakes (like yes CEA was messed up), but I find it hard to sign up to a narrative where status seeking is the key reason.
My guess is you know Will better, so I would trust your judgement here a decent amount, though I have talked to other people who have worked with Will a decent amount who thought that status-seeking was pretty core to what was going on (for the sake of EA, of course, though it’s hard to disentangle these kinds of things).
My impression is that Will often finds it stressful and unpleasant to do community leadership stuff, media, talk to VIPs etc. He often seems to do it out of a sense of duty (i.e. belief that it’s the most impactful thing). His ideal lifestyle would be more like being an academic.
I think this is a common misunderstanding in things that I am trying to communicate. I think people can optimize for status and prestige for many different reasons, and indeed I think “personal enjoyment of those things” is a decent fraction of the motivations for people who behave that way, but, at least from my experience and the books I’ve tried to read on adjacent topics, it’s substantially less than the majority.
“This seems instrumentally useful” is, I think, the most common reason why people pursue prestige-optimizing strategies (and then having some kind of decision theory or theory of ethics that doesn’t substantially push back against somewhat deceptive/adversarial/zero-sum things like prestige-optimization).
People do things for instrumental reasons. Someone doesn’t need to enjoy doing bad things in order for them to do bad things. I don’t know why Will is pursuing the strategies I see him pursue; I mostly just see the consequences, which seem pretty bad to me.
I think this is a common misunderstanding in things that I am trying to communicate.
Thank you for clarifying. I do really appreciate this and I’m sure others do too.
But as it sounds like this isn’t the first time this has been miscommunicated, one idea going forward might be to ask someone else to check your writing for tone before posting.
For example if you’d asked me, I would have told you that your comment reads to me like “Will is so selfish” rather than “Will and I have major disagreements on the strategies he should pursue but I believe he’s well-intentioned” because of things like:
The large majority of the time when people say that someone harmed others for the sake of their own popularity, they’re accusing them of being selfish (so you should probably clarify if that’s not what you mean).
You chose status-related words (with the negative connotations I just mentioned) when you could have used others, e.g. “being on book tours and talking to lots of high net-worth and high-status people” rather than “promoting EA books and fundraising” (for orgs like yours, incidentally, although of course that ended badly).
It’s a long comment entirely composed of negative comments about Will—you’d forgive a reader for thinking that you don’t think there’s anything good about him. (I don’t think the context of being asked “What’s your issue with Will MacAskill as a public intellectual?” would make readers think “Oh, I guess that’s the reason Habryka is only mentioning negative things.” This is not how professionals tend to talk about each other—especially in public—unless they really don’t think there’s anything positive about someone.)
Similarly, certain word choices and the absence of steel-manning give the impression that you don’t think Will has any decent reasons in favour of making the decisions he does (e.g. calling Doing Good Better “misleading” rather than “simplified” or talking about its emphasis on certain things or what have you, saying “He never comments on the EA Forum” even though that seems to be generally considered a good thing and of course he does a decent amount in any case, and in fact even now saying “I don’t know why Will is pursuing the strategies I see him pursue” rather than “I can see that he might think...”).
Similarly, you claim that he “didn’t do anything about” CEA’s problems for the “very long period of time” he was there (nothing? really?).
The use of accusatory language like “This feels to me very much like trying to get the benefits of being a leader without actually doing the job of leadership”—it’s hard to read this as anything other than an accusation of selfishness.
Describing things in an insulting way (contrasting WWOTF with a “sane way to think about the future”, calling CEA a “massive shitshow”, “expelling him as a leader” etc.).
Not specifying that you mean “intellectual respect” when you say “as far as I can tell he is actually just not very respected among almost any of the other intellectual leaders of the community, at least here in the Bay” (with at least one person responding with what seemed like a very broad interpretation of your comments).
I know a lot of people are hurting right now and I know that EA and especially rationalist culture is unusually public and brutal when it comes to feedback. But my sense is that the kinds of things I’ve mentioned above resulted in a comment that came across as shockingly unprofessional and unconstructive to many people (popular, clearly, but I don’t think people’s upvotes/likes correlate particularly well with what they deem constructive) - especially given the context of one EA leader publicly kicking another while they’re down—and I’d like to see us do better.
[Edit: There are also many things I disagree with in your comment. My lack of disagreement should not be taken as an endorsement of the concrete claims, I just thought it’d be better to focus this comment on the kinds of framings that may be regularly leading to miscommunication (although I’m not sure if I’ll ever get round to addressing the disagreements).]
Personally I have found that getting too attached to the supposed goodness of my intentions as a guide to my moral character has been a distraction, in times when my behavior has not actually been that good.
I’ve not looked into it in great detail, but I think of it as a classically Christian idea to try to evaluate if someone is a good or a bad person internally, and give reward/punishment based on that. In contrast, I believe it’s mostly better to punish people based on their behavior, often regardless of whether you judge them to internally be ‘selfish’ or ‘altruistic’. If MacAskill has repeatedly executed a lot of damaging prestige-seeking strategies and behaved in selfish ways, I think it’s worthwhile to punish the behavior. And in that case I think it’s worthwhile to punish the behavior regardless of whether he is open to change, regardless of whether the behavior is due to fundamental personality traits, and regardless of whether he reflectively endorses the decisions.
Ubuntu writes that they read Habryka as saying “Will is so selfish” rather than “Will and I have major disagreements on the strategies he should pursue but I believe he’s well-intentioned”. But I don’t read Habryka’s comment to be saying either of these. I read the comment to simply be saying “Will has repeatedly behaved in ways that trade off integrity for popularity and prestige”. This is also my read of multiple behaviors of Will’s, and it cost him a great deal of respect from me, both for his personal integrity and as a leader, and this is true regardless of his intentions.
For example if you’d asked me, I would have told you that your comment reads to me like “Will is so selfish” rather than “Will and I have major disagreements on the strategies he should pursue but I believe he’s well-intentioned” because of things like:
I am actively trying to avoid relying on concepts like “well-intentioned”, and I don’t know whether he is well-intentioned, and as such saying “but I believe he’s well-intentioned” would be inaccurate (and also actively distract from my central point).
Like, I think it’s quite plausible Sam Bankman-Fried was also well-intentioned. I do honestly feel confused enough about how people treat “well-intentionedness” that I don’t really know how to communicate around this topic.
I don’t think whether SBF was well-intentioned changes how the community should relate to him that much (though it is of course a cognitively relevant fact about him that might help you predict the details of a bunch of his behavior, but I don’t think that should be super relevant given what a more outside-view perspective says about the benefits of engaging with him).
The best resource I know on this is Nate’s most recent post, “Enemies vs. Malefactors”:
A few times now, I have been part of a community reeling from apparent bad behavior from one of its own. In the two most dramatic cases, the communities seemed pretty split on the question of whether the actor had ill intent.
A recent and very public case was the one of Sam Bankman-Fried, where many seem interested in the question of Sam’s mental state vis-a-vis EA. (I recall seeing this in the responses to Kelsey’s interview, but haven’t done the virtuous thing of digging up links.)
It seems to me that local theories of Sam’s mental state cluster along lines very roughly like (these are phrased somewhat hyperbolically):
Sam was explicitly malicious. He was intentionally using the EA movement for the purpose of status and reputation-laundering, while personally enriching himself. If you could read his mind, you would see him making conscious plans to extract resources from people he thought of as ignorant fools, in terminology that would clearly relinquish all his claims to sympathy from the audience. If there were a camera, he would have turned to it and said “I’m going to exploit these EAs for everything they’re worth.”
Sam was committed to doing good. He may have been ruthless and exploitative towards various individuals in pursuit of his utilitarian goals, but he did not intentionally set out to commit fraud. He didn’t conceptualize his actions as exploitative. He tried to make money while providing risky financial assets to the masses, and foolishly disregarded regulations, and may have committed technical crimes, but he was trying to do good, and to put the resources he earned thereby towards doing even more good.
One hypothesis I have for why people care so much about some distinction like this is that humans have social/mental modes for dealing with people who are explicitly malicious towards them, who are explicitly faking cordiality in attempts to extract some resource. And these are pretty different from their modes of dealing with someone who’s merely being reckless or foolish. So they care a lot about the mental state behind the act.
(As an example, various crimes legally require mens rea, lit. “guilty mind”, in order to be criminal. Humans care about this stuff enough to bake it into their legal codes.)
A third theory of Sam’s mental state that I have—that I credit in part to Oliver Habryka—is that reality just doesn’t cleanly classify into either maliciousness or negligence.
On this theory, most people who are in effect trying to exploit resources from your community, won’t be explicitly malicious, not even in the privacy of their own minds. (Perhaps because the content of one’s own mind is just not all that private; humans are in fact pretty good at inferring intent from a bunch of subtle signals.) Someone who could be exploiting your community, will often act so as to exploit your community, while internally telling themselves lots of stories where what they’re doing is justified and fine.
Those stories might include significant cognitive distortion, delusion, recklessness, and/or negligence, and some perfectly reasonable explanations that just don’t quite fit together with the other perfectly reasonable explanations they have in other contexts. They might be aware of some of their flaws, and explicitly acknowledge those flaws as things they have to work on. They might be legitimately internally motivated by good intent, even as they wander down the incentive landscape towards the resources you can provide them. They can sub- or semi-consciously mold their inner workings in ways that avoid tripping your malice-detectors, while still managing to exploit you.
And, well, there’s mild versions of the above paragraph that apply to almost everyone, and I’m not sure how to sharpen it. (Who among us doesn’t subconsciously follow incentives, and live under the influence of some self-serving blind spots?)
I personally have found that focusing the conversation on whether someone was “well-intentioned” is usually pretty counterproductive. Almost no one is fully ill-intentioned towards other people. People have a story in their head for why what they are doing is good and fair. It’s not like it never happens, but I have never encountered a case within the EA or Rationality community of someone who has caused harm and also didn’t have a compelling inner narrative for why they were actually well-intentioned.
I don’t know what is going on inside of Will. I think he has many good qualities. He seems pretty smart, he is a good conversationalist, and he has done many things that I do think are good for the world. I also think he isn’t a good central figurehead for the EA community and think a bunch of his actions in relation to the EA community have been pretty bad for the world.
This is not how professionals tend to talk about each other—especially in public—unless they really don’t think there’s anything positive about someone.
I don’t think you are the arbiter of what “professionals” do. I am a “professional”, as far as I can tell, and I talk this way. Many professionals I work with daily also communicate more like this. My guess is you are overgeneralizing from a specific culture you are familiar with, and I feel like your comment is trying to create some kind of implicit social consensus against my communication norms by invoking some greater “professionalism” authority, which doesn’t seem great to me.
I am happy to argue the benefits of being careful about communicating negative takes, and the benefits of carefully worded and non-adversarial language, but I am not particularly interested in doing so from a starting-point of you trying to invoke some set of vaguely-defined “professionalism” norms that I didn’t opt-into.
But my sense is that the kinds of things I’ve mentioned above resulted in a comment that came across as shockingly unprofessional and unconstructive to many people (popular, clearly, but I don’t think people’s upvotes/likes correlate particularly well with what they deem constructive) - especially given the context of one EA leader publicly kicking another while they’re down—and I’d like to see us do better.
The incentives against saying things like this are already pretty strong (indeed, I am far from the only person holding roughly this set of opinions, though I do appear to be the only person who has communicated them at all to the broader EA community, despite this seeming really quite highly relevant to a lot of the community that has less access to the details of what is happening in EA than the leadership).
I do think there are bad incentives in this vicinity which result in everyone shit-talking each other all the time as well, but I think on the margin we could really use more people voicing the criticism they have of others, especially ones that are indeed not their hot takes but are opinions that they have extensively discussed and shared with others already, and that seem not to have encountered any obvious and direct refutations, as is the case with my takes above.
Edit: So this has got a very negative reaction, including (I think) multiple strong disagree-votes. I notice I’m a bit confused why; I don’t recognise anything in the post that is beyond the pale. Maybe people think I’m piling on or trying to persuade rather than inform, though I may well have got the balance wrong. Minds are changed through discussion, disagreement, and debate—so I’d like to encourage the downvoters to reply (or DM me privately, if you prefer), as I’m not sure why people disagree, it’s not clear where I made a mistake (if any), and how much I ought to update my beliefs.
My impression is that Will often finds it stressful and unpleasant to do community leadership stuff, media, talk to VIPs etc. He often seems to do it out of a sense of duty (i.e. belief that it’s the most impactful thing). His ideal lifestyle would be more like being an academic.
This makes a lot of sense to me intuitively, and I’d be pretty confident that Will would probably be most effective while being happy, unstressed, and doing what he likes and is good at—academic philosophy! It seems very reminiscent to me of stories of rank-and-file EAs who end up doing things that they aren’t especially motivated by, or especially exceptional at, because of a sense of duty that seems counterproductive.
I guess the update I think ought to happen is that Will trading off academic work to do community building / organisational leadership may not have been correct? Of course, hindsight is 20-20 and all that. But it seems plausible, and I’d be interested to hear the community’s opinion.
In any case, it seems that a good next step would be to find people in the community who are good at running organisations and willing to do the community-leadership/public-facing stuff, so we can remove the stress from Will and let him contribute in the academic sphere? The EA Good Governance Project seems like a promising thing to track in this area.
I didn’t vote either way on your comment, but I take the disagreement to be people thinking (a) Will’s community building work was the right choice given what he and others knew then and/or (b) finding people “who are good at running organisations and willing to do the community-leadership/public-facing stuff” is really hard.
Leaving a comment here for posterity. I just recently had a conversation with Will where we shared some of our experiences working at CEA at the time. I stand by most of my comments here, but want to clear up a few things that I do think I have changed my mind on, after Will gave me more information on what actually happened:
After Will gave me more context on the overall organizational decision-making, and the context of the CEA and GWWC merger, I now don’t think it’s accurate to characterize Will as absent from his job as CEO. Indeed, many things I thought were driven by Tara and Kerry were actually driven by Will instead. More concretely, during the time when I felt like he was quite absent, he was working on the GWWC merger, a lot of staff reorganization, fundraising, getting CEA into YC, and various outreach work as a result of the Doing Good Better launch.
At the least, Will is pretty confident that the CEA/GWWC merger was not announced at a tactically opportune time, since he scheduled it. It’s plausible that either Kerry or Tara suggested that date, and it is indeed the case that my subteam was almost fully blindsided by the merger happening, because Kerry and Tara screened a ton of information from us, but this was more likely an accident or at least something Will wasn’t aware of.
CEA did not apply to YC with EA Funds; CEA applied with general community building, and decided on EA Funds as the main project afterwards. This is important because my impression was that we pivoted towards Funds in order to gain the prestige of being in YC, but that seems to have happened later. (This doesn’t really change that I think this decision was still pretty bad, but I do think it’s less concerning for other reasons.)
It was Nick, without much support from Open Phil, who ended up ramping up his trustee involvement a lot more and then eventually fired a bunch of people from CEA. Open Phil later on got more involved during the search for the new CEO, but the original firing was mostly Nick acting independently (though of course he likely talked through decisions with some people at Open Phil, it still seems important not to characterize what happened as “Open Phil stepped in to fire people”, given my current understanding, though this is still pretty fuzzy).
He then spent like half a year technically being CEO but spending all of his time being on book tours and talking to lots of high net-worth and high-status people.
he (and some of the rest of the leadership) decided to refocus all of their efforts on building EA Funds
He also doesn’t participate in any discourse with really anyone else in the community. He never comments on the EA Forum, he doesn’t do panel discussions
Very uncertain here, but I’m concerned by a dynamic where it’s simply too cheap and easy to comment on how others spend their time, or what projects they prioritise, or how they write books—without trying to empathise or steelman their perspective.
I agree with this in general, though I still think sharing this kind of information can be quite valuable, as long as people appropriately discount it.
For my time at CEA, he was my boss. I agree with you that stuff like this can be pretty annoying coming from random outsiders, but I think if someone worked under someone (though to be clear, with a layer of management between), this gives them enough context to at least say informative things about how that person spends their time.
I also think disgruntled ex-employees are not super uncommon, and I think it makes sense to adjust for that.
For the discourse part, I do feel differently. Like, I don’t care that much about how Will spends his time in detail, but de facto I think he doesn’t really engage in debates or discourse with almost anyone else in EA, and I do think there are just straightforwardly bad consequences as a result of that, and I feel more confident in judging the negative consequences than whether the details of his time allocation are off.
I just want to say I really appreciated you providing this first-hand experience, and for discussing what others in the EA community feel about Will’s leadership from what you have witnessed in the Bay area. I was just talking to someone about this the other day, and I was really unsure about how people in EA actually felt about Will, since, as you said, he rarely comments on the forum and doesn’t seem very engaged with people in the community from what I can see.
I think Doing Good Better was already substantially misleading about the methodology that the EA community has actually historically used to find top interventions. Indeed it was very “randomista” flavored in a way that I think really set up a lot of people to be disappointed when they encountered EA and then realized that actual cause prioritization is nowhere close to this formalized and clear-cut.
I feel like I joined EA for this “randomista” flavored version of the movement. I don’t really feel like the version of EA I thought I was joining exists even though, as you describe here, it gets a lot of lip service (because it’s uncontroversially good and inspiring!!!!). I found it validating for you to point this out.
If it does exist, it hasn’t recruited me despite my pretty concentrated efforts over several years. And I’m not sure why it wouldn’t.
I don’t have a problem with longtermist principles. As far as I’m concerned, maybe the best way to promote longterm good really is to take huge risks at the expense of community health / downside risks / integrity, à la SBF (among others). But I don’t want to spend my life participating in some scheme to ruthlessly attain power and convert it into good, and I sure as hell don’t want to spend my life participating in that as a pawn. I liked the randomista + earn-to-give version of the movement because I could just do things that were definitely good to do in the company of others doing the same. I feel like that movement has been starved out by this other thing wearing it as a mask.
Just curious—do you not feel like GiveWell, Happier Lives Institute, and some of Founders Pledge’s work, for example, count as randomista-flavoured EA?
“It doesn’t exist” is too strong for sure. I consider GiveWell central to the randomista part, and it was my entry point into EA at large. Founders Pledge was also pretty randomista back when I was applying for a job there in college. I don’t know anything about HLI.
There may be a thriving community around GiveWell etc that I am ignorant to. Or maybe if I tried to filter out non-randomista stuff from my mind then I would naturally focus more on randomista stuff when engaging EA feeds.
The reality is that I find stuff like “people just doing AI capabilities work and calling themselves EA” to be quite emotionally triggering, and when I’m exposed to it that’s what my attention goes to (if I’m not, as is more often the case, avoiding the situation entirely). Naturally this probably makes me pretty blind to other stuff going on in EA channels. There are pretty strong selection effects on my attention here.
All of that said, I do think that community building in EA looks completely different than how it would look if it were the GiveWell movement.
I can certainly empathize with the longtermist EA community being hard to ignore. It’s much flashier and more controversial.
For what it’s worth I think it would be possible and totally reasonable for you to filter out longtermist (and animal welfare, and community-building, etc.) EA content and just focus on the randomista stuff you find interesting and inspiring. You could continue following GiveWell, Founders Pledge’s global health and development work, and HLI. Plus, many of Charity Entrepreneurship’s charities are randomista-influenced.
For example, I make heavy use of the unsubscribe feature on the Forum to try and keep my attention focused on the issues I care about rather than what’s most popular (ironically I’m unsubscribed and supposed to be ignoring the ‘Community’ feed lol).
Yeah. (As a note, I am also a fan of the animal welfare stuff.) This is a good suggestion.
I think most of this stuff is too dry to hold my attention by itself. I would like a social environment that was engaging yet systematically directed my attention more often to things I care about. This happens naturally if I am around people who are interesting/fun but also highly engaged and motivated about a topic. As such I have focused on community and community spaces more than, for example, finding a good randomista newsletter or extracting randomista posts from the forums.
Another reason to focus on community interaction is that it is both much more fun and much more useful to help with creative problem solving. But forum posts tend to report the results of problem solving / report news. I would rather be engaging with people before that step, but I don’t know of a place where one could go to participate in that aside from employment. In contrast, I do have a sense of where one could go to participate in this kind of group or community re: AI safety.
Just chiming in here as HLI was mentioned—although this definitely isn’t the most important part of the post. I certainly see us as randomista-inspired (wait, should that be ‘randomista-adjacent’?), but I would say that what we do feels very different from what other EAs, notably longtermists, do. Also, we came into existence about 5 years after Doing Good Better was published.
I also share Habryka’s doubts about how EA’s original top interventions were chosen. The whole “scale, neglectedness, tractability” framework strikes me as a confusing, indeterminate methodology that was developed post hoc to justify the earlier choices. I moaned about the SNT framework at length in chapter 5 (p. 171) of my PhD thesis.
I agree with you about SNT/ITN. I like that chapter of your thesis a lot, and also find John’s post here convincing.
It does seem to me that randomista EA is alive and largely well—GW is still growing, global health still gets the most funding (I think), many of Charity Entrepreneurship’s new charities are randomista-influenced, etc.
There’s a lot of things going on under the “EA” umbrella. HLI’s work feels very different from what other EAs do, but equally a typical animal welfare org’s work will feel very different, and a typical longtermist org’s work will feel very different, because other EAs do a lot of different things now.
Despite Will branding himself as a leader of the EA community, as far as I can tell he is actually just not very respected among almost any of the other intellectual leaders of the community, at least here in the Bay. He also doesn’t participate in any discourse with really anyone else in the community.
Do you mean not very intellectually respected (partly because he rarely participates in discourse with other EAs) or not very respected in general?
And do you mean that they don’t think he’s a big deal like some other EAs seem to or that they have less respect for him than they have for a random stranger?
Do you mean not very intellectually respected (partly because he rarely participates in discourse with other EAs) or not very respected in general?
I do feel like these are quite close in our community. I think people respect him as a relatively competent speaker and figurehead, though also have a bunch of hesitations that would naturally come with that role. I also think he is probably more just straightforwardly respected in the UK, since he had more of a role in things over there.
And people would definitely trust him more than like a random stranger.
Wow, thanks so much for the reply. I didn’t expect that much detail and appreciate it. Thought leaders curating their own fame and sacrificing things (including other people) for it is expected to some degree, but some of this is more extreme than I would expect for the average famous person.
I’ll see if anyone perhaps closer to Will will rebut this at all.
Just to say, since I’ve been critical elsewhere, I think this comment is good and helpful, and I agree with at least the last bullet point, can’t really speak to most of the others.
I personally like Will’s writing and I think he’s a good speaker. But I do find it weird that millions were spent on promoting WWOTF.[1] I find that weird on its own (how can you be so confident it’s impactful?), but even more so when comparing WWOTF to The Precipice, which is in my opinion (and, from my impression, in many others’ opinion as well) a much better and more impactful book. I don’t know if Ben shares these thoughts or if he has any others.
Edit to add: I vaguely remember seeing a source other than Torres. But as long as I can’t find it, you can disregard this comment. I do think promoting the book was/is a lot more likely to be net positive than net negative; I’m still even promoting the book myself. It’s just the amount of money I’m concerned about compared to other causes. But as long as I don’t have a figure, I can’t comment.
Just to be clear, I think marketing spending for a book is pretty reasonable. I think WWOTF was not a very good book, since it was really quite confused about AI risk and described a methodology that basically no one adheres to, and as such gave a lot of people a mistaken impression of how the longtermist part of the EA community actually thinks. But if I were in Will’s shoes and thought it was a really important book and contribution, spending a substantial amount of money on marketing would seem pretty reasonable to me.
It’s not clear where they take the information about an “enormous promotional budget of roughly $10 million” from. Not saying that it is untrue, but also unclear why Torres would have this information.
The implication is also that the promotional spending came out of EA pockets. But part of it might also have been promotional spending by the book publisher.
ETA: I found another article by Torres that discusses the claim in a bit more detail.
MacAskill, meanwhile, has more money at his fingertips than most of us make in a lifetime. Left unmentioned during his “Daily Show” appearance: he hired several PR firms to promote his book, one of which was paid $12,000 per month, according to someone with direct knowledge of the matter. MacAskill’s team, this person tells me, even floated a total promotional budget ceiling of $10 million — a staggering number — thanks partly to financial support from the tech multibillionaire Dustin Moskovitz, cofounder of Facebook and a major funder of EA.
I don’t believe the $10m claim. Indeed, I don’t even see how it would be possible to spend that much without buying a Super Bowl ad. At $12k a month, you would have to hire nearly 140 PR firms for 6 months to add up to $10m. Perhaps someone added an extra zero or two...
Thanks Jeroen, that’s a fair point. I think it was weird too.
Even if the wrong book was plugged, though, it doesn’t feel like a net-harm activity, and surely doesn’t negate his good writing and speaking? I’m sure we’ll hear more!
Thanks so much for bringing this degree of honesty, openness and detail about a decision this big. As someone not deeply embroiled in the longtermist/rationalist world your uncertainty about whether you and others are doing net harm vs good on the AI alignment front is prett chilling. I’m looking forward to responses, hoping the picture is not quite as bleak as you paint!
One question on something I do know a little about (which could be answered in a couple of sentances or even perhaps a link). What’s your issue with Will Mckaskill as a public intellectual? I’ve watched Ted talks, heard him do interviews etc. and he seemed on shallow thought to be a good advocate for EA stuff in general.
Over the course of me working in EA for the last 8 years I feel like I’ve seen about a dozen instances where Will made quite substantial tradeoffs where he traded off both the health of the EA community, and something like epistemic integrity, in favor of being more popular and getting more prestige.
Some examples here include:
When he was CEO while I was at CEA he basically didn’t really do his job at CEA but handed off the job to Tara (who was a terrible choice for many reasons, one of which is that she then co-founded Alameda and after that went on to start another fradulent-seeming crypto trading firm as far as I can tell). He then spent like half a year technically being CEO but spending all of his time being on book tours and talking to lots of high net-worth and high-status people.
I think Doing Good Better was already substantially misleading about the methodology that the EA community has actually historically used to find top interventions. Indeed it was very “randomista” flavored in a way that I think really set up a lot of people to be disappointed when they encountered EA and then realized that actual cause prioritization is nowhere close to this formalized and clear-cut.
I think WWOTF is not a very good book because it really fails to understand AI risk and also describes some methodology of longtermism that again feels like something someone wrote to sound compelling, but just totally doesn’t reflect how any of the longtermist-oriented EAs think about cause-prioritization. This is in-contrast to, for example, The Precipice, which seems like a much better book to me (though still flawed) and actually represents a sane way to think about the future.
The only time when Will was really part of a team at CEA was during the time when CEA went through Y-Combinator, which I think was kind of messed up (like, he didn’t build the team or the organization or really any of the products up to that point). As part of that, he (and some of the rest of the leadership) decided to refocus all of their efforts on building EA funds, despite the organization just having gone through a major restructuring to focus on talent instead of money, since with Open Phil there was already a lot of money around. This was explicitly not because it would be the most impactful thing to do, but because focusing on something clear and understandable like money would maximize the chances of CEA getting into Y-Combinator. I left the organization when this decision was made.
In-general CEA was a massive shitshow for a very long period of time while Will was a board member (and CEO). He didn’t do anything about it, and often exacerbated the problems, and I think this had really bad consequences for the EA community as I’ve written about in other comments. Instead he focused on promoting EA as well as his own brand.
Despite Will branding himself as a leader of the EA community, as far as I can tell he is actually just not very respected among almost any of the other intellectual leaders of the community, at least here in the Bay. He also doesn’t participate in any discourse with really anyone else in the community. He never comments on the EA Forum, he doesn’t do panel discussions with other people, and he doesn’t really steer the actions of any EA organizations, while of course curating an image of himself as the clear leader of the community. This feels to me very much like trying to get the benefits of being a leader without actually doing the job of leadership.
Will displayed extremely bad judgement in his engagement with Sam Bankman Fried and FTX. He was the person most responsible for entangling EA with FTX by publicly endorsing SBF multiple times, despite many warnings he received from many people in the community. The portrayal in this article here seems roughly accurate to me. I think this alone should justify basically expelling him as a leader in the EA community, since FTX was really catastrophically bad and he played a major role in it (especially in its effects on the EA community).
(Edit: See this comment I made with some minor retractions on the above. I do want to note that in as much as I did get things wrong, both me and Will agreed that it was likely because people hired and supervised by Will directly lied to both me and him, which I think is in substantial part Will’s fault, and as things go among the more forgivable reasons for getting things wrong. I also think most of the retractions don’t bear that much on my overall assessment, though I did make some minor updates on the mess at CEA being more “Will being taken advantage of” rather than “Will playing an active role in the advantage-taking”)
Fwiw I have little private information but think that:
I sense this misses some huge successes in EA getting where it is. Seems we’ve done pretty well all things considered. Wasn’t will part of that?
Will is a superlative networker
He is a very good public intellectual. Perhaps Ord could be if his books were backed to that extent. Perhaps Will could be better if he wrote different books. But he seems really good at it. I would guess that on that public intellectual side he’s a benefit not a cost
If I’d had the ability to direct billions in philanthropy I probably would have, even with nagging doubts.
It seems he’s maybe less good at representing the community or managing orgs. I don’t know if thats the case, but I can believe it.
If so, it seems possible there is a role as a public intellectual associated with EA but who isn’t the only one
I feel bad when writing criticism because personally I hope he’s well and I’m very grateful to him.
Also thanks Habryka for writing this. I think surfacing info like this is really valuable and I guess it has personal costs to you.
I agree Will’s made a bunch of mistakes (like yes CEA was messed up), but I find it hard to sign up to a narrative where status seeking is the key reason.
My impression is that Will often finds it stressful and unpleasant to do community leadership stuff, media, talk to VIPs etc. He often seems to do it out of a sense of duty (i.e. belief that it’s the most impactful thing). His ideal lifestyle would be more like being an academic.
Maybe there’s some kind of internal conflict going on, but it seems more complicated than this makes out.
My hot take is that a bunch of the disagreement is about how much to prioritise something like the instrumental values of conventional status / broader appeal vs. proactively saying what you think even if it looks bad / being a highly able niche community.
My impression is that you’re relatively extreme in how much you rate the latter, so it makes sense to me you’d disagree with a bunch of Will’s decisions based on that.
My guess is you know Will better, so I would trust your judgement here a decent amount, though I have talked to other people who have worked with Will a decent amount who thought that status-seeking was pretty core to what was going on (for the sake of EA, of course, though it’s hard to disentangle these kinds of things).
I think this is a common misunderstanding in things that I am trying to communicate. I think people can optimize for status and prestige for many different reasons, and indeed I think “personal enjoyment of those things” is a decent fraction of the motivations for people who behave that way, but at least from my experiences and the books I’ve tried to read on adjacent topics, substantially less than the majority.
“This seems instrumentally useful” is I think the most common reason why people pursue prestige-optimizing strategies (and then having some kind of decision-theory or theory of ethics that doesn’t substantially push-back against somewhat deceptive/adversarial/zero-sum like things like prestige-optimization).
People do things for instrumental reason. Someone doesn’t need to enjoy doing bad things in order for them to do bad things. I don’t know why Will is pursuing the strategies I see him pursue, I mostly just see the consequences, which seem pretty bad to me.
Thank you for clarifying. I do really appreciate this and I’m sure others do too.
But as it sounds like this isn’t the first time this has been miscommunicated, one idea going forward might be to ask someone else to check your writing for tone before posting.
For example if you’d asked me, I would have told you that your comment reads to me like “Will is so selfish” rather than “Will and I have major disagreements on the strategies he should pursue but I believe he’s well-intentioned” because of things like:
The large majority of the time when people say that someone harmed others for the sake of their own popularity, they’re accusing them of being selfish (so you should probably clarify if that’s not what you mean).
You choose status-related words (with the negative connotations I just mentioned) when you could have used others e.g. “being on book tours and talking to lots of high net-worth and high-status people” rather than “promoting EA books and fundraising” (for orgs like yours incidentally, although of course that ended badly).
It’s a long comment entirely composed of negative comments about Will—you’d forgive a reader for thinking that you don’t think there’s anything good about him. (I don’t think the context of being asked “What’s your issue with Will Mckaskill as a public intellectual?” would make readers think “Oh, I guess that’s the reason Habryka is only mentioning negative things.” This is not how professionals tend to talk about each other—especially in public—unless they really don’t think there’s anything positive about someone.)
Similarly, certain word choices and the absence of steel-manning give the impression that you don’t think Will has any decent reasons in favour of making the decisions he does (e.g. calling Doing Good Better “misleading” rather than “simplified” or talking about its emphasis on certain things or what have you, saying “He never comments on the EA Forum” even though that seems to be generally considered a good thing and of course he does a decent amount in any case, and in fact even now saying “I don’t know why Will is pursuing the strategies I see him pursue” rather than “I can see that he might think...”).
Similarly, you claim that he “didn’t do anything about” CEA’s problems for the “very long period of time” he was there (nothing? really?).
The use of accusatory language like “This feels to me very much like trying to get the benefits of being a leader without actually doing the job of leadership”—it’s hard to read this as anything other than an accusation of selfishness.
Describing things in an insulting way (contrasting WWOTF with a “sane way to think about the future”, calling CEA a “massive shitshow”, “expelling him as a leader” etc.).
Not specifying that you mean “intellectual respect” when you say “as far as I can tell he is actually just not very respected among almost any of the other intellectual leaders of the community, at least here in the Bay” (with at least one person responding with what seemed like a very broad interpretation of your comments).
I know a lot of people are hurting right now and I know that EA and especially rationalist culture is unusually public and brutal when it comes to feedback. But my sense is that the kinds of things I’ve mentioned above resulted in a comment that came across as shockingly unprofessional and unconstructive to many people (popular, clearly, but I don’t think people’s upvotes/likes correlate particularly well with what they deem constructive) - especially given the context of one EA leader publicly kicking another while they’re down—and I’d like to see us do better.
[Edit: There are also many things I disagree with in your comment. My lack of disagreement should not be taken as an endorsement of the concrete claims, I just thought it’d be better to focus this comment on the kinds of framings that may be regularly leading to miscommunication (although I’m not sure if I’ll ever get round to addressing the disagreements).]
Personally I have found that getting too attached the supposed goodness of my intentions as a guide to my moral character has been a distraction, in times when my behavior has not actually been that good.
I’ve not looked into it in great detail, but I think of it as a classically Christian idea to try to evaluate if someone is a good or a bad person internally, and give reward/punishment based on that. In contrast, I believe it’s mostly better to punish people based on their behavior, often regardless of whether you judge them to internally be ‘selfish’ or ‘altruistic’. If MacAskill has repeatedly executed a lot of damaging prestige-seeking strategies and behaved in selfish ways, I think it’s worthwhile to punish the behavior. And in that case I think it’s worthwhile to punish the behavior regardless of whether he is open to change, regardless of whether the behavior is due to fundamental personality traits, and regardless of whether he reflectively endorses the decisions.
Ubuntu writes that they read Habryka as saying “Will is so selfish” rather than “Will and I have major disagreements on the strategies he should pursue but I believe he’s well-intentioned”. But I don’t Habryka’s comment to be saying either of these. I read the comment to simply be saying “Will has repeatedly behaved in ways that trade off integrity for popularity and prestige”. This is also my read of multiple behaviors of Will, and cost him a great deal of respect from me for his personal integrity and as a leader, and this is true regardless of the intentions.
I am actively trying to avoid relying on concepts like “well-intentioned”, and I don’t know whether he is well-intentioned, and as such saying “but I believe he’s well-intentioned” would be inaccurate (and also actively distract from my central point).
Like, I think it’s quite plausible Sam Bankman Fried was also well-intentioned. I do honestly feel confused enough about how people treat “well-intentionedness” that I don’t really know how to communicate around this topic.
I don’t think whether SBF was well-intentioned changes how the community should relate to him that much (though it is of course a cognitively relevant fact about him that might help you predict the details of a bunch of his behavior, but I don’t think that should be super relevant given what a more outside-view perspective says about the benefits of engaging with him).
The best resource I know on this is Nate’s most recent post: “Enemies vs. Malefactors”:
I personally have found that focusing the conversation on whether someone was “well-intentioned” is usually pretty counterproductive. Almost no one is fully ill-intentioned towards other people. People have a story in their head for why what they are doing is good and fair. It’s not like it never happens, but I have never encountered a case within the EA or Rationality community, of someone who has caused harm and also didn’t have a compelling inner-narrative for why they were actually well-intentioned.
I don’t know what is going on inside of Will. I think he has many good qualities. He seems pretty smart, he is a good conversationalist and he has done many things that I do think are good for the world. I also think he isn’t a good central figurehead for the EA community and think a bunch of his actions in-relation to the EA community have been pretty bad for the world.
I don’t think you are the arbiter of what “professionals” do. I am a “professional”, as far as I can tell, and I talk this way. Many professionals I work with daily also communicate more like this. My guess is you are overgeneralizing from a specific culture you are familiar with, and I feel like your comment is trying to create some kind of implicit social consensus against my communication norms by invoking some greater “professionalism” authority, which doesn’t seem great to me.
I am happy to argue the benefits of being careful about communicating negative takes, and the benefits of carefully worded and non-adversarial language, but I am not particularly interested in doing so from a starting-point of you trying to invoke some set of vaguely-defined “professionalism” norms that I didn’t opt-into.
The incentives against saying things like this are already pretty strong. Indeed, I am far from the only person holding roughly this set of opinions, though I do appear to be the only one who has communicated them at all to the broader EA community, despite their being really quite relevant to the large part of the community that has less access than leadership does to the details of what is happening in EA.
I do think there are also bad incentives in this vicinity that result in everyone shit-talking each other all the time, but on the margin I think we could really use more people voicing the criticisms they have of others, especially criticisms that are not hot takes but opinions they have already discussed and shared with others extensively, and that seem not to have encountered any obvious and direct refutations, as is the case with my takes above.
Edit: So this has got a very negative reaction, including (I think) multiple strong disagreevotes. I notice I'm a bit confused why; I don't recognise anything in the post that is beyond the pale. Maybe people think I'm piling on, or trying to persuade rather than inform, though I may well have got the balance wrong. Minds are changed through discussion, disagreement, and debate, so I'd like to encourage the downvoters to reply (or DM me privately, if you prefer). As it stands, I'm not sure why people disagree, where I made a mistake (if any), or how much I ought to update my beliefs.
This makes a lot of sense to me intuitively, and I'd be pretty confident that Will would be most effective while being happy, unstressed, and doing what he likes and is good at: academic philosophy! It seems very reminiscent of stories of rank-and-file EAs who end up doing things they aren't especially motivated by, or especially exceptional at, out of a sense of duty that seems counterproductive.
I guess the update I think ought to happen is that Will trading off academic work to do community building / organisational leadership may not have been the right call? Of course, hindsight is 20/20 and all that. But it seems plausible, and I'd be interested to hear the community's opinion.
In any case, it seems that a good next step would be to find people in the community who are good at running organisations and willing to do the community-leadership/public-facing stuff, so we can remove the stress from Will and let him contribute in the academic sphere? The EA Good Governance Project seems like a promising thing to track in this area.
I didn't vote either way on your comment, but I take the disagreement to be people thinking (a) Will's community building work was the right choice given what he and others knew then, and/or (b) finding people "who are good at running organisations and willing to do the community-leadership/public-facing stuff" is really hard.
Leaving a comment here for posterity. I just recently had a conversation with Will where we shared some of our experiences working at CEA at the time. I stand by most of my comments here, but want to clear up a few things that I do think I have changed my mind on, after Will gave me more information on what actually happened:
After Will gave me more context on the overall organizational decision-making, and on the context of the CEA and GWWC merger, I now don't think it's accurate to characterize Will as absent from his job as CEO. Indeed, many things I thought were driven by Tara and Kerry were actually driven by Will. More concretely, during the time when I felt he was quite absent, he was working on the GWWC merger, a lot of staff reorganization, fundraising, getting CEA into YC, and various outreach work following the Doing Good Better launch.
Will, at least, is pretty confident that the CEA/GWWC merger was not announced at a tactically opportune time, since he scheduled it himself. It's plausible that either Kerry or Tara suggested that date, and it is indeed the case that my subteam was almost fully blindsided by the merger, because Kerry and Tara screened a ton of information from us, but that was more likely an accident, or at least something Will wasn't aware of.
CEA did not apply to YC with EA Funds; CEA applied with general community building, and decided on EA Funds as the main project afterwards. This is important because my impression was that we pivoted towards funds in order to gain the prestige of being in YC, but that seems to have happened later (this doesn't really change that I think this decision was still pretty bad, but I do think it's less concerning for other reasons).
It was Nick, without much support from Open Phil, who ended up ramping up his trustee involvement a lot and then eventually fired a bunch of people from CEA. Open Phil later got more involved during the search for the new CEO, but the original firing was mostly Nick acting independently (though of course he likely talked through his decisions with some people at Open Phil; still, given my current understanding, it seems important not to characterize what happened as "Open Phil stepped in to fire people", though this is all still pretty fuzzy).
Very uncertain here, but I’m concerned by a dynamic where it’s simply too cheap and easy to comment on how others spend their time, or what projects they prioritise, or how they write books—without trying to empathise or steelman their perspective.
I agree with this in general, though I still think sharing this kind of information can be quite valuable, as long as people appropriately discount it.
During my time at CEA, he was my boss. I agree with you that stuff like this can be pretty annoying coming from random outsiders, but if someone has worked under a person (though, to be clear, with a layer of management in between), I think that gives them enough context to at least say informative things about how that person spends their time.
I also think disgruntled ex-employees are not super uncommon, and I think it makes sense to adjust for that.
For the discourse part I do feel differently. Like, I don't care that much about how Will spends his time in detail, but de facto I think he doesn't really engage in debates or discourse with almost anyone else in EA, and I do think there are just straightforwardly bad consequences of that, and I feel more confident judging those negative consequences than judging whether the details of his time allocation are off.
I just want to say I really appreciated you providing this first-hand experience and discussing how others in the EA community feel about Will's leadership, based on what you have witnessed in the Bay Area. I was just talking to someone about this the other day, and I was really unsure how people in EA actually felt about Will, since, as you said, he rarely comments on the forum and doesn't seem very engaged with people in the community from what I can see.
I realised while reading your comment that I didn’t actually know what Habryka meant by “not very respected”—he adds color here.
I feel like I joined EA for this “randomista” flavored version of the movement. I don’t really feel like the version of EA I thought I was joining exists even though, as you describe here, it gets a lot of lip service (because it’s uncontroversially good and inspiring!!!!). I found it validating for you to point this out.
If it does exist, it hasn’t recruited me despite my pretty concentrated efforts over several years. And I’m not sure why it wouldn’t.
I don't have a problem with longtermist principles. As far as I'm concerned, maybe the best way to promote long-term good really is to take huge risks at the expense of community health / downside risks / integrity, à la SBF (among others). But I don't want to spend my life participating in some scheme to ruthlessly attain power and convert it into good, and I sure as hell don't want to spend my life participating in that as a pawn. I liked the randomista + earn-to-give version of the movement because I could just do things that were definitely good to do, in the company of others doing the same. I feel like that movement has been starved out by this other thing wearing it as a mask.
Just curious—do you not feel like GiveWell, Happier Lives Institute, and some of Founders Pledge’s work, for example, count as randomista-flavoured EA?
"It doesn't exist" is too strong for sure. I consider GiveWell central to the randomista part, and it was my entry point into EA at large. Founders Pledge was also pretty randomista back when I was applying for a job there in college. I don't know anything about HLI.
There may be a thriving community around GiveWell etc. that I am ignorant of. Or maybe if I tried to filter out non-randomista stuff from my mind, I would naturally focus more on randomista stuff when engaging with EA feeds.
The reality is that I find stuff like "people just doing AI capabilities work and calling themselves EA" quite emotionally triggering, and when I'm exposed to it, that's where my attention goes (if I'm not, as is more often the case, avoiding the situation entirely). Naturally this probably makes me pretty blind to other stuff going on in EA channels. There are pretty strong selection effects on my attention here.
All of that said, I do think that community building in EA looks completely different than how it would look if it were the GiveWell movement.
I can certainly empathize with the longtermist EA community being hard to ignore. It’s much flashier and more controversial.
For what it’s worth I think it would be possible and totally reasonable for you to filter out longtermist (and animal welfare, and community-building, etc.) EA content and just focus on the randomista stuff you find interesting and inspiring. You could continue following GiveWell, Founders Pledge’s global health and development work, and HLI. Plus, many of Charity Entrepreneurship’s charities are randomista-influenced.
For example, I make heavy use of the unsubscribe feature on the Forum to try and keep my attention focused on the issues I care about rather than what’s most popular (ironically I’m unsubscribed and supposed to be ignoring the ‘Community’ feed lol).
Yeah. (As a note, I am also a fan of the animal welfare stuff.)
This is a good suggestion.
I think most of this stuff is too dry to hold my attention by itself. I would like a social environment that was engaging yet systematically directed my attention more often to things I care about. This happens naturally if I am around people who are interesting/fun but also highly engaged and motivated about a topic. As such I have focused on community and community spaces more than, for example, finding a good randomista newsletter or extracting randomista posts from the forums.
Another reason to focus on community interaction is that it is both much more fun and much more useful for helping with creative problem solving. Forum posts tend to report the results of problem solving, or report news. I would rather be engaging with people before that step, but I don't know of a place where one could go to participate in that, aside from employment. In contrast, I do have a sense of where one could go to participate in that kind of group or community re: AI safety.
Just chiming in here as HLI was mentioned, although this definitely isn't the most important part of the post. I certainly see us as randomista-inspired (or should that be 'randomista-adjacent'?), but I would say that what we do feels very different from what other EAs, notably longtermists, do. Also, we came into existence about 5 years after Doing Good Better was published.
I also share Habryka's doubts about how EA's original top interventions were chosen. The whole 'scale, neglectedness, tractability' framework strikes me as a confusing, indeterminate methodology that was developed post hoc to justify the earlier choices. I moaned about the SNT framework at length in chapter 5 (p. 171) of my PhD thesis.
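To be concrete about what I'm objecting to: the framework is usually presented as something like the telescoping product below. This is the standard 80,000 Hours-style factorisation as I remember it, paraphrased rather than quoted from any particular source.

$$
\underbrace{\frac{\text{good done}}{\%\ \text{of problem solved}}}_{\text{scale}}
\times
\underbrace{\frac{\%\ \text{of problem solved}}{\%\ \text{increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\%\ \text{increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
=
\frac{\text{good done}}{\text{extra dollar}}
$$

The factors cancel neatly on paper, which is part of what makes the framework look more determinate than, in my view, it really is.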
I agree with you about SNT/ITN. I like that chapter of your thesis a lot, and also find John’s post here convincing.
It does seem to me that randomista EA is alive and largely well—GW is still growing, global health still gets the most funding (I think), many of Charity Entrepreneurship’s new charities are randomista-influenced, etc.
There’s a lot of things going on under the “EA” umbrella. HLI’s work feels very different from what other EAs do, but equally a typical animal welfare org’s work will feel very different, and a typical longtermist org’s work will feel very different, because other EAs do a lot of different things now.
Do you mean not very intellectually respected (partly because he rarely participates in discourse with other EAs) or not very respected in general?
And do you mean that they don’t think he’s a big deal like some other EAs seem to or that they have less respect for him than they have for a random stranger?
I do feel like these are quite close in our community. I think people respect him as a relatively competent speaker and figurehead, though also have a bunch of hesitations that would naturally come with that role. I also think he is probably more just straightforwardly respected in the UK, since he had more of a role in things over there.
And people would definitely trust him more than like a random stranger.
Wow, thanks so much for the reply. I didn't expect that much detail and I appreciate it. Thought leaders curating their own fame and sacrificing things (including other people) for it is expected to some degree, but some of this is more extreme than I would expect from the average famous person.
Will see if anyone perhaps closer to Will rebuts this at all.
Thanks again.
Just to say, since I've been critical elsewhere, I think this comment is good and helpful. I agree with at least the last bullet point; I can't really speak to most of the others.
I personally like Will's writing and I think he's a good speaker. But I do find it weird that millions were spent on promoting WWOTF.[1] I find that weird on its own (how can you be so confident it's impactful?), but even more so when comparing WWOTF to The Precipice, which is in my opinion (and, from my impression, many others' opinion as well) a much better and more impactful book. I don't know if Ben shares these thoughts or if he has any others.

Edit to add: I vaguely remember seeing a source other than Torres, but as long as I can't find it you can disregard this comment. I do think promoting the book was/is a lot more likely to be net positive than net negative; I'm still promoting the book myself. It's just the amount of money I'm concerned about compared to other causes. But as long as I don't have a figure, I can't comment.
Can’t find the source for this, so correct me if I’m wrong!
Just to be clear, I think marketing spending for a book is pretty reasonable. I think WWOTF was not a very good book, since it was really quite confused about AI risk and described a methodology that basically no one adheres to, and as such gave a lot of people a mistaken impression of how the longtermist part of the EA community actually thinks. But if I had been in Will's shoes and thought it was a really important book and contribution, spending a substantial amount of money on marketing would seem pretty reasonable to me.
The only source for this claim I’ve ever found was Emile P. Torres’s article What “longtermism” gets wrong about climate change.
It's not clear where they got the information about an "enormous promotional budget of roughly $10 million" from. I'm not saying it is untrue, but it's also unclear why Torres would have this information.
The implication is also that the promotional spending came out of EA pockets. But part of it might have been promotional spending by the book publisher.
ETA: I found another article by Torres that discusses the claim in a bit more detail.
That "floated" is so weaselly!
I don’t believe the $10m claim. Indeed, I don’t even see how it would be possible to spend that much without buying a Super Bowl ad. At $12k a month, you would have to hire nearly 140 PR firms for 6 months to add up to $10m. Perhaps someone added an extra zero or two . . .
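For what it's worth, here's a quick back-of-the-envelope check of that arithmetic, taking the $12k/month and 6-month figures above at face value (they're illustrative guesses on my part, not reported numbers):

$$
140 \ \text{PR firms} \times \$12{,}000/\text{month} \times 6 \ \text{months} \approx \$10{,}080{,}000
$$

So the $10m figure really would require something on the order of 140 simultaneous six-month engagements.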
Thanks Jeroen, that's a fair point. I think it was weird too.
Even if the wrong book was plugged, though, it doesn't feel like a net-harm activity, and it surely doesn't negate his good writing and speaking? I'm sure we'll hear more!