Building out research fellowships and public-facing educational programming for lawyers
Mjreard
Some combination of not having a clean thesis I’m arguing for, not actually holding a highly legible position on the issues discussed, and being a newbie writer. Not trying to spare people’s feelings. More just expressing some sentiments, pointing at some things, and letting others take from that what they will.
If there was a neat thesis it’d be:
People who used to focus on global cause prioritization now seem focused on accumulating power within the AI policy world, broadly construed, and that power has become the major determinant of status among this same group
This risks losing track of what is actually best for the world
You, reader, should reflect on this dynamic and the social incentives around it to make sure you’re not losing sight of what you think is actually important, and push back on these when you can.
Admin posted under my name after asking permission. It’s cool they have a system for accommodating people like me who are lazy in this very specific way.
Great write up. I think all three are in play and unfortunately kind of mutually reinforcing, though I’m more agnostic about how much of each.
I think OP and grantees are synced up on xrisk (or at least GCRs) being the terminal goal. My issue is that their instrumental goals seem to involve a lot of deemphasizing that focus to expand reach/influence/status/number of allies in ways that I worry lend themselves to mission/value drift.
Agree on most of this too. I wrote too categorically about the risk of “defunding.” You will be on a shorter leash if you take your 20-30% independent-view discount. I was mostly saying that funding wouldn’t go to zero and crash your org.
I further agree on cognitive dissonance + selection effects.
Maybe the main disagreement is over whether OP is ~a fixed monolith. I know people there. They’re quite EA in my accounting, much like I think of many leaders at grantees. There’s room in these joints. I think current trends are driven by “deference to the vibe” on both sides of the grant-making arrangement. Everyone perceives plain speaking about values and motivations as cringe and counterproductive, and it thereby becomes the reality.
I’m sure org leaders and I have disagreements along these lines, but I think they’d also concede they’re doing some substantial amount of deliberate deemphasis of what they regard as their terminal goals in service of something more instrumental. They do probably disagree with me that it is best all-things-considered to undo this, but I wrote the post to convince them!
I agree with all of this.
My wish here is that specific people running orgs and projects were made of tougher stuff re following funding incentives. For example, it doesn’t seem like your project is at serious risk of defunding if you’re 20-30% more explicit about the risks you care about or what personally motivates you to do this work.
There are probably only about 200 people on Earth with the context x competence for OP to enthusiastically fund for leading on this work – they have bargaining power to frame their projects differently. Yet on this telling, they bow to incentives to be the very-most-shining star by OP’s standard, so they can scale up and get more funding. I would just make the trade-off the other way: be smaller and more focused on things that matter.
I think social feedback loops might bend back around to OP as well if they had fewer options. Indeed, this might have been the case before FTX. The point of the piece is that I see the inverse happening; I just might be more agnostic about whether the source is OP or specific project leaders. Either or both can correct if they buy my story.
The Soul of EA is in Trouble
I hope my post was clear enough that distance itself is totally fine (and you give compelling reasons for that here). It’s ~implicitly denying present knowledge or past involvement in order to get distance that seems bad for all concerned. The speaker looks shifty and EA looks like something toxic you want to dodge.
Responding to a direct question by saying “We’ve had some overlap and it’s a nice philosophy for the most part, but it’s not a guiding light of what we’re doing here” seems like it strictly dominates.
An implicit claim I’m making here is that “I don’t do labels” is kind of a bullshit non-response in a world where some labels are more or less descriptively useful and speakers have the freedom to qualify the extent to which the label applies.
Like I notice no one responds to the question “what’s your relationship to Nazism?” with “I don’t do labels.” People are rightly suspicious when people give that answer and there just doesn’t seem to be a need for it. You can just defer to the question asker a tiny bit and give an answer that reflects your knowledge of the label if nothing else.
Yeah one thing I failed to articulate is how not-deliberate most of this behavior is. There’s just a norm/trend of “be scared/cagey/distant” or “try [too] hard to manage perceptions about your relationship to EA” when you’re asked about EA in any quasi-public setting.
It’s genuinely hard for me to understand what’s going on here. Like, viewed from their current professional vantage point, people have been part of vastly worse ~student groups that don’t induce this much panic. It seems like an EA cultural tic.
EA Adjacency as FTX Trauma
I overstated this, but I still disagree. Overall, very few people have ever heard of EA. In tech, maybe you get up to ~20% recognition, but even there, the amount of headspace people give it is very small and you should act as though this is the case. I agree it’s directionally negative, but evasive comments like these are actually a big part of how we got to this point.
There’s a lesson here for everyone in/around EA, which is why I sent the pictured tweet: it is very counterproductive to downplay what or who you know for strategic or especially “optics” reasons. The best optics are honesty, earnestness, and candor. If you have to explain and justify why your statements that are perceived as evasive and dishonest are in fact okay, you probably did a lot worse than you could have on these fronts.
Also, on the object level, for the love of God, no one cares about EA except EAs and some obviously bad faith critics trying to tar you with guilt-by-association. Don’t accept their premise and play into their narrative by being evasive like this. *This validates the criticisms and makes you look worse in everyone’s eyes than just saying you’re EA or you think it’s great or whatever.*
But what if I’m really not EA anymore? Honesty requires that you at least acknowledge that you *were.* Bonus points for explaining what changed. If your personal definition of EA changed over that time, that’s worth pondering and disclosing as well.
I think people overrate how predictable the effects of our actions on the far future will be (even though they rate that predictability very low in absolute terms); extinction seems like one of the very few (only?) things whose effects will endure throughout a big part of the future. I still buy the theory that moving from 0% to 1% of possible value is as valuable as moving from 98% to 99%; my point is just about tractability.
Donated.
I’ve been hugely impressed by the NT fellows and finalists I came across in my work at 80k and it seems like NT was either their first exposure to EA ideas or the first meaningful opportunity to actively apply the ideas (which can be just as important). I imagine uni groups are well in your debt for your role in helping finalists/fellows connect ahead of starting university too.
You’ve decided to give mostly to established institutions (GWWC, 80k, AMF, GW) – why those over more hits-based approaches (including things that wouldn’t be a burden on your time like giving to AIM or deputizing someone else to make risky grants to promising individuals/small orgs on your behalf)?
How do you think about opportunity costs when it comes to earning to give? Are there roles at other firms or in the US where you would expect to make substantially more (including downside risks), but pass on those for personal reasons?
Same question for roles where you might make less but would pass on them for ETG reasons.
I think earning to give is the correct primary route to impact for the majority of current EAs and a major current shortcoming of the movement is failing to socially reward earning to give relative to pursuing direct work. I worry that this project, if successful, would push this dynamic further in the wrong direction.
The short version of the argument is that excessive praise for ‘direct work’ has caused a lot of people who fail to secure direct work to feel unvalued and bounce off EA. Others have expanded their definitions of what counts as an impactful org to justify themselves by the direct work standard, when they could have more impact ETGing in a conventional job and donating to the very best existing orgs.
All the EA-committed dollars in the world are a tiny drop in the ocean of the world’s problems and it takes really incredible talent to leverage those dollars in a way that would be more effective than adding to them. Finding talent to do that is critical (I do this), but people need to be well calibrated and thoughtful in deciding whether and for how long to pursue particular direct work opportunities vs ETG. I think hurling (competing!) solemn pledges at them is not the way to make this happen.
The trailer for Ada makes me think it falls in a media no man’s land between extremely low-cost but potentially high-virality creator content and high-cost, fully produced series that go out on major networks. Interested to hear how Should We are navigating the (to me) inorganic nature of their approach.
Sounds like Bequest was making a speculative bet on high-cost, fully produced content – which I think is worthwhile. When I think about in-the-water ideas like environmentalism and social justice, my sense is they leveraged media by gently injecting their themes/ideas into independently engaging characters and stories (i.e. the kinds of things for-profit studios would want to produce independent of whether these ideas appeared in the plot).
If your AI work doesn’t ground out in reducing the risk of extinction, I think animal welfare work quickly becomes more impactful than anything in AI. Xrisk reduction can run through more indirect channels, of course, though indirectness generally increases the speculativeness of the xrisk story.