Neel Nanda
I lead the DeepMind mechanistic interpretability team.
Oh, that handle is way better, and not what I took from the post at all!
Thanks a lot for the clarifications. If you agree with my tactical claims and are optimising for growth over a longer time frame, then I agree we probably don’t disagree much on actions, and the actions and cautions you describe seem very reasonable to me. To me, Growth feels like a somewhat unhelpful handle here that pushes me into the mindset of what leads to short-term growth rather than a sustainable, healthy community. But if it feels useful to you, fair enough.
I’m specifically claiming Silicon Valley AI, where I think it’s a fair bit higher?
no one cares about EA except EAs and some obviously bad faith critics trying to tar you with guilt-by-association
I agree with your broad points, but this seems false to me. I think that lots of people have negative associations with EA, especially given SBF, and in the AI and tech space, where e.g. it’s widely (and imo falsely) believed that the OpenAI coup was for EA reasons.
EDIT: Trying to distill my argument: the effect of growth on movement health is unclear, probably positive, but I do not think “optimise for growth” is what I would come up with if I were solely optimising for the strength of the EA community; there seem to be notably more important directions.
Thanks a lot for the detailed and transparent post. I’m a bit surprised by the focus on growth.
While I do agree that feeling like you’re in a buzzing, growing movement can be good for morale, I also think there are costs to morale from growth, like lots of low-context new people being around, feeling like the movement is drifting away from what older participants care about, etc. Having a bunch of competent and excited new people join who do a lot of awesome stuff seems great for morale, but that is significantly harder imo, and requires much more specific plans.
It’s extremely not obvious to me that this is the best way to recover community morale and branding. In particular, under the hypothesis that a lot of the damage was caused by FTX, it seems like a good amount of your effort should be going into things like addressing the substantial concerns this caused among community members, better governance, better trust, better post-mortems and transparency about what went wrong, etc. Far better to rebuild stronger foundations for the existing movement and maintain the existing human capital than to try to patch it up with growth. I could even see some people interpreting the focus on growth as a deliberate lack of acknowledgement of past failures and mistakes. By all means have growth as one of your goals, but it was surprising to me to have it so prominent.
(Note: this is my model of what would be best for resolving community issues and thus having an impact, not a request based on what I personally most care about; the changes I suggest would not make a massive difference to me personally.)
This seems like a great post to exist!
I would say yes, but frame it as “they can help you think about AGI focused options and if they might be a good fit—but obviously there are also other options worth considering!”
Huh, I think this way is a substantial improvement: if 80K had strong views about where their advice leads, far better to be honest about this and let people make informed decisions than to give the mere appearance of openness.
This seems a reasonable update, and I appreciate the decisiveness and clear communication. I’m excited to see what comes of it!
This was a very helpful post, thanks! Do you know of any way for UK donors to give to the rapid response fund? If not, has GWWC considered trying to set that up? (Like I think you have with a bunch of other charities)
Great post! I highly recommend using LLM assistance (especially Claude) here, eg drafting emails, preparing a script for phone calls, etc. Personally I find this all super awkward, and LLMs are much better than me at wording things gracefully. (Though you want to edit it enough that it doesn’t feel like LLM-written slop.)
I think you are conflating your specific cause prioritisation with the general question of how people who care about impact should think. If someone held your cause prioritisation, then they should clearly either work at one of those top organisations, otherwise help with those issues, or earn the highest salary they can and donate it, i.e. earning to give. Working at other impact-focused organisations not focused on those top causes wouldn’t make sense. I think that generally you should optimise for one thing rather than half-heartedly optimising for several.
However, many people do not share your cause prioritisation, which leads to quite different conclusions. I have no regrets about doing direct work myself.
I disagree. I think that if a government causes great harm by accident, or great harm intentionally, either is evidence that it will cause great harm by accident or intentionally in future, respectively, and I just care about the great harm part.
This is quite different from the case I would make for donor lotteries. The argument I would make is just that figuring out what to do with my money takes a bunch of time and effort. If I had 10 times the amount of money, I could just scale up all of my donations by 10 times and the marginal utility would probably be about the same. So I would happily take a 10% chance of 10x-ing my money and a 90% chance of getting zero, and otherwise follow the same strategy, because in expectation the total good done is the same but the effort invested costs only 10% as much, since I won’t bother doing the research if I lose.
Further, if I win, it now makes more sense to invest way more effort, but that’s just a fun bonus. I can still just give the money to EA Funds or whatever if that beats my personal judgement, but I can take a bit more time to look into this, maybe make some other grants if I prefer, etc. And likewise, being 100x or 1000x leveraged is helpful and justifies even more effort in the world where I win.
Notably, this argument works regardless of who else is participating in the lottery. If I just went to Vegas and bet a bunch of my money on roulette, I would get a similar effect. Donor lotteries are just a useful way of doing this where everyone gets the benefit of a small chance of massively increasing their money and a high chance of losing it all, and, unlike roulette, there’s no expected loss.
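To spell out the arithmetic (a rough sketch, writing X for my donation budget and g(·) for the good done, and assuming g is roughly linear at this scale): donating directly gives g(X), while the lottery gives 0.1 × g(10X) + 0.9 × g(0) ≈ 0.1 × 10g(X) = g(X) in expectation, i.e. the same total good, except I only pay the research cost in the 10% of worlds where I win.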
If people said all these things without the word marginal, would you be happy?
This feels less like a disagreement over what the word marginal means and more like you just disagree with people’s theory of impact and what they think the expected value of different actions is?
To me, it seems like it would be much more valuable to have a fully virtual event rather than one where all the in-person people want to prioritise other in-person people. I don’t know how the cost of organising an additional virtual event would compare to the cost of hybridising an in-person event that would happen anyway, however.
Human rights abuses seem much worse in China—this alone is basically sufficient for me
Seems pretty clear that he meant your second statement
There is publicly available and sufficient evidence indicating Charity X did not provide 10K meals to homeless individuals.
Because Sam was engaging in a bunch of behaviour that is highly inappropriate for a CEO, like lying to the board, which is sufficient to justify the board firing him without needing more complex explanations. And this matches the private gossip I’ve heard and the board’s public statements.
Further, Adam d’Angelo is not, to my knowledge, an EA/AI safety person, but he also voted to remove Sam and was a necessary vote, which is strong evidence that there were more legit reasons.