Building out research fellowships and public-facing educational programming for lawyers
Mjreard
The Tiny Flicker of Altruism
So much big picture, so few details
Tons of overlap in how the vlogbrothers think about their impact and EA. Great to see.
In particular, there was one episode of their podcast recently (I think it was “The Green Brothers are Often Wrong”) where they got comically close to describing themselves as EA: remarking that John was the heart (caring a lot about people and “what was most important”) and Hank was the head (really concerned with science, truth, progress, and reasoning).
They are of course aware of EA via large EA participation in their PFA donation drive, but I believe they have a distant, caricatured view of the community itself. I heard of a livestream where they were asked about it and John said something to the effect of “there’s a lot of harm you can do if you only think of people as objects of analysis for you to intervene on when you should be dealing with them directly and empowering them in the ways that they decide they want to be empowered.”
My view is that it’s worth it, because there is a danger of people just jumping into jobs that have “AI” or even “AI security/safety” in the name, without grappling with tough questions around what it actually means to help AGI go well or prioritising between options based on expected impact.
I appreciate the dilemma and don’t want to imply this is an easy call.
For me the central question in all of this is whether you foreground process (EA) or conclusion (AGI go well). It seems like the whole space is uniformly rushing to foreground the conclusion. It’s especially costly when 80k – the paragon of process discourse – decides to foreground the conclusion too. Who’s left as a source of wisdom foregrounding process?
I know you’re trying to do both. I guess you can call me pessimistic that even you (amazing Arden, my total fav) can pull it off.
Thanks Vanessa, I completely agree on the meta level. No one owes “EA” any allegiance because they might have benefitted from it in the past or benefitted from its intellectual progeny and people are of course generally entitled to change their minds and endorse new premises.
Your comment *is a very meta comment though* and leaves open the possibility that you’re post hoc rationalizing following a trend that I see as starting with Claire Zabel’s post “EA and Longtermism, not Cruxes for Saving the World,” which I see as pretty paradigmatic of “the particular ideas that got us here (AI X-safety) no longer [are/feel] necessary, and seem inconvenient to where we are now in some ways, so let’s dispense with them.”
There could be fine object-level reasons for changing your mind on which premises matter of course and I’m extremely interested to hear those. In the absence of those object-level reasons though, I worry!
I’m still trying to direct the non-selfish part of myself towards scope-sensitive welfarism in a rationalist-y way. For me that’s EA. Others, including maybe you, seem to construe it as something narrower than that, and I wonder both what that narrow conception is and whether it’s fair to the public meaning of the term “Effective Altruism.”
If your AI work doesn’t ground out in reducing the risk of extinction, I think animal welfare work quickly becomes more impactful than anything in AI. Xrisk reduction can run through more indirect channels, of course, though indirectness generally increases the speculativeness of the xrisk story.
Some combination of not having a clean thesis I’m arguing for, not actually holding a highly legible position on the issues discussed, and being a newbie writer. Not trying to spare people’s feelings. More just expressing some sentiments, pointing at some things, and letting others take from that what they will.
If there was a neat thesis it’d be:
1. People who used to focus on global cause prioritization now seem focused on power accumulation within the AI policy world, broadly construed, and that power accumulation is now the major determinant of status within this group.
2. This risks losing track of what is actually best for the world.
3. You, reader, should reflect on this dynamic and the social incentives around it to make sure you’re not losing sight of what you think is actually important, and push back on these when you can.
Admin posted under my name after asking permission. It’s cool they have a system for accommodating people like me who are lazy in this very specific way
Great write up. I think all three are in play and unfortunately kind of mutually reinforcing, though I’m more agnostic about how much of each.
I think OP and grantees are synced up on xrisk (or at least GCRs) being the terminal goal. My issue is that their instrumental goals seem to involve a lot of deemphasizing that focus to expand reach/influence/status/number of allies in ways that I worry lend themselves to mission/value drift.
Agree on most of this too. I wrote too categorically about the risk of “defunding.” You will be on a shorter leash if you take your 20-30% independent-view discount. I was mostly saying that funding wouldn’t go to zero and crash your org.
I further agree on cognitive dissonance + selection effects.
Maybe the main disagreement is over whether OP is ~a fixed monolith. I know people there. They’re quite EA in my accounting, much as I think of many leaders at grantees. There’s room in these joints. I think current trends are driven by “deference to the vibe” on both sides of the grant-making arrangement. Everyone perceives plain speaking about values and motivations as cringe and counterproductive, and it thereby becomes the reality.
I’m sure org leaders and I have disagreements along these lines, but I think they’d also concede they’re doing some substantial amount of deliberate deemphasis of what they regard as their terminal goals in service of something more instrumental. They do probably disagree with me that it is best all-things-considered to undo this, but I wrote the post to convince them!
I agree with all of this.
My wish here is that specific people running orgs and projects were made of tougher stuff re following funding incentives. For example, it doesn’t seem like your project is at serious risk of defunding if you’re 20-30% more explicit about the risks you care about or what personally motivates you to do this work.
There are probably only about 200 people on Earth with the context × competence for OP to enthusiastically fund to lead on this work – they have bargaining power to frame their projects differently. Yet on this telling, they bow to incentives to be the very-most-shining star by OP’s standard, so they can scale up and get more funding. I would just make the trade-off the other way: be smaller and more focused on things that matter.
I think social feedback loops might bend back around to OP as well if they had fewer options. Indeed, this might have been the case before FTX. The point of the piece is that I see the inverse happening, I just might be more agnostic about whether the source is OP or specific project leaders. Either or both can correct if they buy my story.
The Soul of EA is in Trouble
I hope my post was clear enough that distance itself is totally fine (and you give compelling reasons for that here). It’s ~implicitly denying present knowledge or past involvement in order to get distance that seems bad for all concerned. The speaker looks shifty and EA looks like something toxic you want to dodge.
Responding to a direct question by saying “We’ve had some overlap and it’s a nice philosophy for the most part, but it’s not a guiding light of what we’re doing here” seems like it strictly dominates.
An implicit claim I’m making here is that “I don’t do labels” is kind of a bullshit non-response in a world where some labels are more or less descriptively useful and speakers have the freedom to qualify the extent to which the label applies.
Like I notice no one responds to the question “what’s your relationship to Nazism?” with “I don’t do labels.” People are rightly suspicious when people give that answer and there just doesn’t seem to be a need for it. You can just defer to the question asker a tiny bit and give an answer that reflects your knowledge of the label if nothing else.
Yeah one thing I failed to articulate is how not-deliberate most of this behavior is. There’s just a norm/trend of “be scared/cagey/distant” or “try [too] hard to manage perceptions about your relationship to EA” when you’re asked about EA in any quasi-public setting.
It’s genuinely hard for me to understand what’s going on here. Like, judged from their current professional outlook, there are vastly worse ~student groups people have been a part of that don’t induce this much panic. It seems like an EA cultural tic.
EA Adjacency as FTX Trauma
I overstated this, but disagree. Overall very few people have ever heard of EA. In tech, maybe you get up to ~20% recognition, but even there, the amount of headspace people give it is very small and you should act as though this is the case. I agree it’s negative directionally, but evasive comments like these are actually a big part of how we got to this point.
There’s a lesson here for everyone in/around EA, which is why I sent the pictured tweet: it is very counterproductive to downplay what or who you know for strategic or especially “optics” reasons. The best optics are honesty, earnestness, and candor. If you have to explain and justify why your statements that are perceived as evasive and dishonest are in fact okay, you probably did a lot worse than you could have on these fronts.
Also, on the object level, for the love of God, no one cares about EA except EAs and some obviously bad faith critics trying to tar you with guilt-by-association. Don’t accept their premise and play into their narrative by being evasive like this. *This validates the criticisms and makes you look worse in everyone’s eyes than just saying you’re EA or you think it’s great or whatever.*
But what if I’m really not EA anymore? Honesty requires that you at least acknowledge that you *were.* Bonus points for explaining what changed. If your personal definition of EA changed over that time, that’s worth pondering and disclosing as well.
Thank you for praising my new hobby, Ozzie.
As an expansion on point 7: consistent, legible, self-motivated output on topics that matter is just a huge signal of value in intellectual work. A problem with basically all hiring is that the vast majority of people just want to “get through the day” in their work rather than push for excellence (or even just improvement). Naturally, in hiring, you’re trying to select for people who will care intrinsically about the quality × quantity of their outputs. There are few stronger signals of that than someone consistently doing something that looks like the work in their personal time for ~no (direct) reward.
Also, if you’re worried you’re not good enough, you’re probably right, but the only way to get good is to start writing bad stuff and make it better. I wrote the first post of my meh blog on this topic to keep me going. It’s sort of helped.