Neel Nanda
I lead the DeepMind mechanistic interpretability team
Advice for Sending Cold Messages to Busy People at EAG
Some takes:
I think Holly’s tweet was pretty unreasonable and judge her for that, not you. But I also disagree with a lot of other things she says and do not at all consider her to speak for the movement.
To the best of my ability to tell (both from your comments and private conversations with others), you and the other Mechanize founders are not getting undue benefit from Epoch funders apart from less tangible things like skills, reputation, etc. I totally agree with your comment below that this does not seem a betrayal of their trust. To me, it seems more a mutually beneficial trade between parties with different but somewhat overlapping values, and I am pro EA as a community being able to make such trades.
AI is a very complex, uncertain, and important space. This means reasonable people will disagree on the best actions, AND that certain actions will look great under some worldviews and pretty harmful under others.
As such, assuming you are sincere about the beliefs you’ve expressed about why you founded Mechanize, I have no issue with you calling yourself an Effective Altruist: it’s about evidence-based ways to do the most good, not about doing good my way.
Separately:
Under my model of the world, Mechanize seems pretty harmful in a variety of ways, in expectation
I think it’s reasonable for people who object to your work to push back against it and publicly criticise it (though agree that much of the actual criticism has been pretty unreasonable)
The EA community implicitly gives help and resources to other people in it. If most people in the community think that what you’re doing is net harmful even if you’re doing it with good intentions, I think it’s pretty reasonable to not want to give you any of that implicit support?
That’s not my understanding of what happened with CAIP. There are various funders who are very happy to disagree with OpenPhil who I know have considered giving to CAIP and decided against. My understanding is that it’s based on actual reasons, not just an information cascade from OpenPhil.
No idea about Apart though
Strong agree—cause neutrality should not at all imply an even spread of investment. I just in fact do think AI is the most pressing cause according to my values and empirical beliefs
Really glad to hear it! (And that writing several thousand words of very in-depth examples was useful!) I’d love to hear if it proves useful longer term.
Socratic Persuasion: Giving Opinionated Yet Truth-Seeking Advice
Agreed with all of the above. I’ll also add that a bunch of orgs do work that is basically useless, and it should not be assumed that, just because an org seems “part of the community”, working there will be an effective way to do good. Public callouts are costly, and community dynamics and knowledge can be hard to judge from the outside.
Thank you for the post. I was a bit surprised by the bulletin board one. What goes wrong with just keeping the forum positioned exactly as it is now, saying you’re not going to do any maintenance or moderation, but without trying to reposition it as a bulletin board? At the very least, I expect the momentum could keep it going for a while. Is the issue that you think you do a lot of active moderation work that sustains healthy discussion norms, which matters a lot for the current forum but would matter less for a bulletin board?
I think we just agree. Don’t donate to politics unless you’re going to be smart about it
I work in AI. Most papers, in peer reviewed venues or not, are awful. Some, in both categories, are good. Knowing whether a work is peer reviewed is weak evidence of quality, since so many good researchers think peer review is dumb and don’t bother (especially in safety). I would generally consider e.g. “comes from a reputable industry lab” to be somewhat stronger evidence. Imo the reason “was it peer reviewed” is a useful signal in some fields is largely because the best researchers there try to get their work peer reviewed, so not being peer reviewed is strong evidence of incompetence. That’s not the case in AI.
So, it’s an issue, but in the same way that all citations are problematic if you can’t check them yourself/trust the authors to do due diligence
Highly Opinionated Advice on How to Write ML Papers
Define “past a certain point”? What fraction of close races in e.g. the US meet this? Especially if you include primaries for either party where one candidate has much more sensible views than the other. Imo donations are best spent on specific interventions or specific close but neglected races, but these can be a big deal.
I do not feel qualified to judge the effectiveness of an advocacy org from the outside: there are a lot of critical considerations, like whether they’re offending people, whether they’re having an impact, whether they’re sucking up oxygen from other orgs in the space, whether their policy proposals are realistic, whether they’re making good strategic decisions, etc., that I don’t really have the information to evaluate. So it’s hard to engage deeply with an org’s case for itself, and I default to this kind of high level prior. Like, the funders can also see this strong case and still aren’t funding it, so I think my argument stands.
I’m sorry to hear that CAIP is in this situation, and this is not at all my area of expertise/I don’t know much about CAIP specifically, so I do not feel qualified to judge this myself.
That said, I will note on the meta level that there is major adverse selection when funding an org in a bad situation that all other major funders have passed on funding, and I would be personally quite hesitant to fund CAIP here without thinking hard about it or getting more info.
Funders typically have more context and private info than me, and with prominent orgs like this there’s typically a reason, but funders are strongly disincentivised from making the criticism public. In this case, one of the stated reasons CAIP quotes, that a funder “had heard from third parties that CAIP was not a valuable funding opportunity”, can be a very good reason if the third party is trustworthy and well informed, and critics often prefer to remain anonymous. I would love to hear more about the exact context here, and why CAIP believes the funders are making a mistake that readers should disregard, to assuage fears of adverse selection.
I generally only recommend donating in a situation like this when:
You’re confident the opportunity is low downside (which seems false in the context of political advocacy)
You have a decent idea of why those funders declined, and you disagree with it
Or you think sufficiently little of all mentioned funders (Open Philanthropy, Longview Philanthropy, Macroscopic Ventures, Long-Term Future Fund, Manifund, MIRI, Scott Alexander, and JueYan Zhang) that you don’t update much
And you feel you have enough context to make an informed judgement yourself, and grant makers are not meaningfully better informed than you
I’m skeptical that the reason is really just that it’s politically difficult for most funders to fund political advocacy. It’s harder, but there are a fair number of risk-tolerant private donors, at least. If that were the reason, I’d expect funders to be back-channelling to other, less constrained funders that CAIP is a good opportunity, or possibly making public that they did not have an important reason to decline/think the org does good work (as Eli Rose did for Lightcone). I would love for any of them to reply to my comment saying this is all paranoia! There are other advocacy orgs that are not in as dire a situation.
It seems like your goal with this post was to persuade EAs like me. I was trying to explain why I didn’t feel like there was much here that I found persuasive. I generally only go and read linked resources if there’s enough to make me curious, so a post that asserts something and links resources but doesn’t summarise the ideas or arguments is not persuasive to me. I’ve tried to be fairly clear about which parts of what you’re saying I think I understand well enough to confidently disagree with, and what parts I predict I would disagree with based on prior experience with other concepts and discourse from this ideological space but have not engaged enough to be confident in—I consider this perfectly consistent with evidence-based judgement. Life is far too short to go and read a bunch of things about every idea that I’m not confident is wrong
I disagree that wealth accumulation causes damage
I’m not super sure what you mean by comprehensive donor education, but I predict I would disagree with it
I’m neither convinced that these orgs effect complex political change, nor that their political goals would be good for the world. For example, as I understand it, degrowth is a popular political view in such circles and I think this would be extremely bad
I’m not familiar with the techniques outlined here, but would guess that the goals and worldview behind such tricky conversations differ a fair bit from mine
This one seems vaguely plausible, but is premised on radical feminism having techniques for getting donors to exert useful non-monetary influence, and on those techniques working for the goals I care about, neither of which is obvious to me
I don’t currently see what the benefit to the EA movement of attempting some form of integration would be, and the differences in worldview seem pretty deep and insurmountable, though I would love to be convinced otherwise! This post felt more like it argued why radical feminism would benefit from EA
Though, my perspective is obviously flavoured by disagreeing with radical feminism on many things, and if you feel differently then naturally integration would seem much better
Interpretability Will Not Reliably Find Deceptive AI
Giving meaningful advance notice of a post that is critical of an EA person or organization should be standard: there are significant upsides if done and a lowered risk of misinformation, while the downside seems pretty negligible if you give notice but don’t agree to substantial back and forth.
I think there’s some speaking past each other due to differing word choices. Holly is prominent, evidenced by the fact that we are currently discussing her. She has been part of the EA community for a long time and appears to be trying to do the most good according to her own principles. So it’s reasonable to call her a member of the EA community. And therefore “prominent member” is accurate in some sense.
However, “prominent member” can also imply that she represents the movement, is endorsed by it, or that her actions should influence what EA as a whole is perceived to believe. I believe this is the sense in which Marcus and Matthew are using it, and I disagree that she fits this definition. She does not speak for me in any way. While I believe she has good intentions, I’m uncertain about the impact of her work, strongly disagree with many of her online statements and the discourse norms she has chosen to adopt, and think these go against EA norms (and would guess they are also negative for her stated goals, but am less sure on this one).