I lead the DeepMind mechanistic interpretability team
Neel Nanda
Yeah this is fair
My reasoning was roughly that the machine learning skill set is also extremely employable in finance, which tends to pay better. Though OpenAI salaries do get pretty high nowadays, and if you value OpenAI and Anthropic equity at notably above their current market value, then plausibly they’re higher paying. Definitely agreed it’s not universal.
I disagree and think that (b) is actually totally sufficient justification. I’m taking as an assumption that we’re using an ethical theory which says people do not have an unbounded ethical obligation to give away everything down to subsistence, and that it is fine to set some kind of boundary on the fraction of your total budget of resources that you spend on altruistic purposes. Many people doing well-paying altruistic careers (e.g. technical AI safety careers) could earn dramatically more money, e.g. at least twice as much, if they were optimising for the highest-paying career they could get. I’m fairly sure I could be earning a lot more than I currently am if that was my main goal. But I consider the value of my labour from an altruistic perspective to exceed the additional money I could be donating, and therefore don’t see myself as having a significant additional ethical obligation to donate (though I do donate a fraction of my income anyway because I want to).
By foregoing a large amount of income for altruistic reasons, I think such people are spending a large amount of their resource budget on altruistic purposes, and that if they still have an obligation to donate more money, then people in higher-paying careers should be obliged to donate far more. Which is a consistent position, but not one I hold.
Neel Nanda’s Quick takes
In light of recent discourse on EA adjacency, this seems like a good time to publicly note that I still identify as an effective altruist, not EA adjacent.
I am extremely against embezzling people out of billions of dollars, and FTX was a good reminder of the importance of “don’t do evil things for galaxy-brained altruistic reasons”. But this has nothing to do with whether or not I endorse the philosophy that “it is correct to try to think about the most effective and leveraged ways to do good and then actually act on them”. And there are many people in or influenced by the EA community who I respect and think do good and important work.
I don’t think the board’s side considered it a referendum. Just because the inappropriate behaviour was about safety doesn’t mean that a high-integrity board member who is not safety-focused shouldn’t fire them!
Amazing, probably my favourite April Fool’s post of the day
Positive feedback: Great post!
Negative feedback: By taking any public actions you make it easier for people to give you feedback, a major tactical error (case in point)
Because Sam was engaging in a bunch of behaviour that was highly inappropriate for a CEO, like lying to the board, which is sufficient to justify the board firing him without any need for more complex explanations. And this matches both private gossip I’ve heard and the board’s public statements.
Further, Adam d’Angelo is not, to my knowledge, an EA/AI safety person, but he also voted to remove Sam and was a necessary vote, which is strong evidence that there were legit reasons beyond safety concerns.
Oh, that handle is way better, and not what I took from the post at all!
Thanks a lot for the clarifications. If you agree with my tactical claims and are optimising for growth over a longer time frame, then I agree we probably don’t disagree much on actions, and the actions and cautions you describe seem very reasonable to me. To me, growth feels like a somewhat unhelpful handle here that pushes me into the mindset of what leads to short-term growth rather than a sustainable, healthy community. But if it feels useful to you, fair enough.
I’m specifically claiming Silicon Valley AI, where I think it’s a fair bit higher?
no one cares about EA except EAs and some obviously bad faith critics trying to tar you with guilt-by-association
I agree with your broad points, but this seems false to me. I think that lots of people have negative associations with EA, especially given SBF, and in the AI and tech space, where e.g. it’s widely (and imo falsely) believed that the OpenAI coup was for EA reasons.
EDIT: Trying to distill my argument: the effect of growth on movement health is unclear, probably positive, but I do not think “optimise for growth” is what I would come up with if I were solely optimising for the strength of the EA community; there seem to be notably more important directions.
Thanks a lot for the detailed and transparent post. I’m a bit surprised by the focus on growth.
While I do agree that feeling like you’re in a buzzing, growing movement can be good for morale, I also think there are costs to morale from growth, like lots of low-context new people around, feeling like the movement is drifting away from what older participants care about, etc. Having a bunch of competent and excited new people join who do a lot of awesome stuff seems great for morale, but that is significantly harder imo, and requires much more specific plans.
It’s really not obvious to me that this is the best way to recover community morale and branding. In particular, under the hypothesis that a lot of the damage was caused by FTX, it seems like a good amount of your effort should be going into things like addressing the substantial concerns this caused among community members, better governance, better trust, and better post-mortems and transparency on what went wrong. Far better to rebuild the foundations of the existing movement and maintain the existing human capital than to try to patch it up with growth. I could even see some people interpreting the focus on growth as a deliberate lack of acknowledgement of past failures and mistakes. By all means have growth as one of your goals, but it was surprising to me to see it so prominent.
(Note: this is my model of what would be best for resolving community issues and thus having an impact, not a request based on what I personally most care about; the changes I suggest would not make a massive difference to me personally.)
This seems like a great post to exist!
I would say yes, but frame it as “they can help you think about AGI-focused options and whether they might be a good fit, but obviously there are also other options worth considering!”
Huh, I think this way is a substantial improvement. If 80K has strong views about where their advice leads, it’s far better to be honest about this and let people make informed decisions than to give the mere appearance of openness.
This seems like a reasonable update, and I appreciate the decisiveness and clear communication. I’m excited to see what comes of it!
My point is that “other people in the income bracket AFTER taking a lower paying job” is the wrong reference class.
Let’s say someone is earning $10mn/year in finance. I totally think they should donate some large fraction of their income. But I’m pretty reluctant to argue that they should donate more than 99% of it. So it seems completely fine to have a post-donation income above $100K, likely far above.
If this person quits to take a job in AI safety that pays $100K/year, because they think this is more impactful than their donations, I think it would be unreasonable to argue that they need to donate some of their reduced salary, because then their “maximum acceptable post-donation salary” has gone down, even though they’re (hopefully) having more impact than if they had donated everything above $100K.
I’m picking fairly extreme numbers to illustrate this, but the key point is that choosing to do direct work should not reduce your “maximum acceptable post-donation salary”, and that, at least according to my values, that maximum post-donation salary is often above what people get paid in their new direct role.