Neel Nanda
I lead the DeepMind mechanistic interpretability team
I work in AI. Most papers, in peer-reviewed venues or not, are awful. Some, in both categories, are good. Whether a work is peer reviewed is weak evidence of quality, since so many good researchers think peer review is dumb and don’t bother (especially in safety). E.g. I would generally consider “comes from a reputable industry lab” to be somewhat stronger evidence. Imo the reason “was it peer reviewed” is a useful signal in some fields is largely that the best researchers there try to get their work peer reviewed, so not being peer reviewed is strong evidence of incompetence. That’s not the case in AI
So, it’s an issue, but in the same way that all citations are problematic if you can’t check them yourself or trust the authors to have done due diligence
Highly Opinionated Advice on How to Write ML Papers
Define “past a certain point”? What fraction of close races in e.g. the US meet this? Especially if you include e.g. primaries for either party where one candidate has much more sensible views than the other. Imo donations are best spent on specific interventions or specific close but neglected races, but these can be a big deal
I do not feel qualified to judge the effectiveness of an advocacy org from the outside. There’s a lot of critical information, like whether they’re offending people, whether they’re having an impact, whether they’re sucking up oxygen from other orgs in the space, whether their policy proposals are realistic, and whether they’re making good strategic decisions, that I don’t really have the information to evaluate. So it’s hard to engage deeply with an org’s case for itself, and I default to this kind of high-level prior. Like, the funders can also see this strong case and still aren’t funding it, so I think my argument stands
I’m sorry to hear that CAIP is in this situation, and this is not at all my area of expertise/I don’t know much about CAIP specifically, so I do not feel qualified to judge this myself.
That said, I will note on the meta level that there is major adverse selection when funding an org in a bad situation that all other major funders have passed on, and I would personally be quite hesitant to fund CAIP here without thinking hard about it or getting more info.
Funders typically have more context and private info than me, and with prominent orgs like this there’s typically a reason, but funders are strongly disincentivized from making the criticism public. In this case, one of the stated reasons CAIP quotes, that a funder “had heard from third parties that CAIP was not a valuable funding opportunity”, can be a very good reason if the third party is trustworthy and well informed, and often critics would prefer to be anonymous. I would love to hear more about the exact context here, and why CAIP believes those funders are making a mistake that readers should ignore, to assuage fears of adverse selection
I generally only recommend donating in a situation like this when:
You are confident the opportunity is low downside (which seems false in the context of political advocacy)
You have a decent idea of why those funders declined, and you disagree with it
Or you think sufficiently little of all the mentioned funders (Open Philanthropy, Longview Philanthropy, Macroscopic Ventures, Long-Term Future Fund, Manifund, MIRI, Scott Alexander, and JueYan Zhang) that you don’t update much
You feel you have enough context to make an informed judgement yourself, and grantmakers are not meaningfully better informed than you
I’m skeptical that the reason is really just that it’s politically difficult for most funders to fund political advocacy. It’s harder, but there are a fair number of risk-tolerant private donors, at least. If that were the reason, I’d expect funders to be backchannelling to other, less constrained funders that CAIP is a good opportunity, or possibly making public that they did not have an important reason to decline/that they think the org does good work (as Eli Rose did for Lightcone). I would love for any funder to reply to my comment saying this is all paranoia! There are other advocacy orgs that are not in as dire a situation.
It seems like your goal with this post was to persuade EAs like me. I was trying to explain why I didn’t feel like there was much here that I found persuasive. I generally only go and read linked resources if there’s enough to make me curious, so a post that asserts something and links resources, but doesn’t summarise the ideas or arguments, is not persuasive to me. I’ve tried to be fairly clear about which parts of what you’re saying I think I understand well enough to confidently disagree with, and which parts I predict I would disagree with based on prior experience with other concepts and discourse from this ideological space but have not engaged with enough to be confident in. I consider this perfectly consistent with evidence-based judgement. Life is far too short to go and read a bunch of things about every idea that I’m not confident is wrong
I disagree that wealth accumulation causes damage
I’m not super sure what you mean by comprehensive donor education, but I predict I would disagree with it
I’m neither convinced that these orgs effect complex political change, nor that their political goals would be good for the world. For example, as I understand it, degrowth is a popular political view in such circles and I think this would be extremely bad
I’m not familiar with the techniques outlined here, but would guess that the goals and worldview behind such tricky conversations differ a fair bit from mine
This one seems vaguely plausible, but is premised on radical feminism having techniques for getting donors to exert useful non-monetary influence, and on these techniques working for the goals I care about, neither of which is obvious to me
I don’t currently see what the benefit to the EA movement of attempting some form of integration would be, and the differences in worldview seem pretty deep and insurmountable, though I would love to be convinced otherwise! This post felt more like it argued why radical feminism would benefit from EA
Though, my perspective is obviously coloured by disagreeing with radical feminism on many things, and if you feel differently then naturally integration would seem much better
Interpretability Will Not Reliably Find Deceptive AI
Giving meaningful advance notice of a post that is critical of an EA person or organization should be
There are significant upsides if done, and a lowered risk of misinformation; the downside seems pretty negligible if you do this but don’t agree to substantial back-and-forth
Sorry to hear that! I would recommend posting a request for funding on Manifund, which makes it easier for people to donate, and I believe it has a mechanism whereby people get refunded unless you meet a minimum funding bar
My Research Process: Understanding and Cultivating Research Taste
Consequentialists should be strong longtermists
I’m skeptical of Pascal’s Muggings
Bioweapons are an existential risk
If this includes AI-created/enhanced bioweapons it seems plausible; without that I’m much less sure, though if there are another few decades of synthetic biology progress but no AGI, it seems plausible too
Some Positions In EA Leadership Should Be Elected
I don’t see a reasonable way to choose the voting population
Should EA avoid using AI art for non-research purposes?
In my opinion, the counterfactual is highly likely to be no art, or free art, rather than paying a human artist. I think art adds value, and the marginal harm of another AI-generated image that does not result in foregone income for an artist is fairly negligible. I think we should have a high bar for trying to create norms in a community against a fairly normal action, and this does not meet that bar
This presupposes that the way something gets to change community direction is by having high karma, while I think it’s actually about being well reasoned and persuasive AND being viewed. Being high karma helps it be viewed, but this is neutral to actively negative if the post is low quality/flawed: that just entrenches people in their positions more/makes them think less of the forum. So in order for this change to help, there must be valuable posts that are low karma that would be high karma if voting were more democratic. I personally think that the current system is better at selecting for quality, and that this outweighs any penalty to dissenting opinions, which I would guess is fairly minor in practice
I think we just agree. Don’t donate to politics unless you’re going to be smart about it