I lead the DeepMind mechanistic interpretability team
Neel Nanda
Fun post! Note that you can see the diagrams here https://web.archive.org/web/20220523235545/https://markxu.com/dropping-out
The Wenar criticism in particular seems laughably bad, such that I find bad faith hypotheses like this fairly convincing. I do agree it’s a seductive line of reasoning to follow in general though, and that this can be dangerous
I got the OpenPhil grant only after the other grant went through (and wasn’t thinking much about OpenPhil when I applied for the other grant). I never thought to inform the other grantmaker after I got the OpenPhil grant, which, in hindsight, maybe I should have done out of courtesy?
This was covering some salary for a fixed period of research, partially retroactive, after an FTX grant fell through. So I guess I didn’t have use for more than X, in some sense (I’m always happy to be paid a higher salary! But I wouldn’t have worked for a longer period of time, so I would have felt a bit weird about the situation)
Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn’t know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.
In theory, you can imagine OpenPhil wanting to fund their “fair share” of a project, evenly split across all other interested grantmakers. But it seems harmful and inefficient to wait for other grantmakers to confirm or deny, so “I’ll give you 100%, but lower that to 50% if another grantmaker is later willing to go in as well” seems a more efficient version.
I can also imagine that they eg think a project is good if funded up to $100K, but worse if funded up to $200K (eg that they’d try to scale too fast, as has happened with multiple AI Safety projects that I know of!). If OpenPhil funds $100K, and the counterfactual is $0, that’s a good grant. But if SFF also provides $100K, that totally changes the terms, and now OpenPhil’s grant is actively negative (from their perspective).
I don’t know what the right social norms here are, and I can see various bad effects on the ecosystem from this behaviour in general: incentivising grantees to be dishonest about whether they have other funding, disincentivising other grantmakers from funding anything they think OpenPhil might fund, etc. I think Habryka’s suggestion of funging, but not to 100%, seems reasonable and probably better to me.
Omg what, this is amazing (though nested bullets not working does seem to make this notably less useful). Does it work for images?
Yes, I presume this is referring to their Responsible Scaling Policy
I liked this, and am happy for this to have been a post. Maybe putting [short poem] in the title could help calibrate people on what to expect?
I’d be curious to hear your or Emma’s case for why it’s notably higher impact for a forum reader to donate via the campaign rather than to New Incentives directly (if they’re inclined to make the donation at all)
To me this post ignores the elephant in the room: OpenPhil still has billions of dollars left and is trying to make funding decisions relative to where they think their last dollar is. I’d be pretty surprised if having the Wytham money liquid rather than illiquid (or even having £15mn out of nowhere!) really made a difference to that estimate.
It seems reasonable to argue that they’re being too conservative, and should be funding the various things you mention in this post, but it also seems plausible to me that they’re acting correctly? More importantly, I think this is a totally separate question from whether to sell Wytham, and requires different arguments. Eg I gather that CEEALAR has several times been considered and passed over for funding before; I don’t have a ton of context for why, but that suggests to me it’s not a slam dunk re being a better use of money.
I also work at Google, and a surprising number of people (including EAs) aren’t aware of the substantial annual donation match! I only noticed by happenstance.
I didn’t know there were useful tools online for this, I agree this seems like a great thing for EA orgs/charities to have on their website if it’s easy to do
It still seems like a mistake to not point out to people that they can substantially increase their donation and thus lives saved, even if it doesn’t count towards the pledge
I think in hindsight the response (with the information I think the board had) was probably reasonable
Reasonable because you were all the same org, or reasonable even if EA Funds was its own org?
Maybe it would have been cleaner if it hadn’t been about Ben, though I don’t think a hypothetical person would have made the lesson as clear, and if Ben wasn’t fair game for having written that article, I don’t know who would be.
Thanks! This line in particular changed my mind about whether it was retributive, I genuinely can’t think of anyone else it would be appropriate to do this for
They were shocked at his lack of concern for her suffering and confirmed that he would probably really hurt her career if she came forward with her information.
Re-reading that section, it was surprisingly consistent with that interpretation, but this line seems to make no sense if it’s about Kat’s experience: if the trauma is publishing the previous post, then “probably really hurt her career if she came forward with her information” doesn’t make sense, because the trauma was a public event
I also think orgs generally should have donor diversity and more independence, so giving more funding to the orgs that OP funds is sometimes good.
I’d be curious to hear more about this—naively, if I’m funding an org, and then OpenPhil stops funding that org, that’s a fairly strong signal to me that I should also stop funding it, knowing nothing more. (since it implies OpenPhil put in enough effort to evaluate the org, and decided to deviate from the path of least resistance)
Agreed re funding things without a track record, that seems clearly good for small donors to do, eg funding people to do independent research or start a small new research group, if you believe they’re promising
Yeah, that intermediate world sounds great to me! (though a lot of effort, alas)
Ah, gotcha. If I understand correctly, you’re arguing for more of a “wisdom of the crowds” analogy? Many donors is better than a few donors.
If so, I agree with that, but I think the major disanalogy is that the big donors are professionals, with more time, experience, and context, while small donors are not: big donors are more like hedge funds, and small donors are more like retail investors in the efficient market analogy
Agreed!
I disagree, because you can’t short a charity, so there’s no way for overhyped charity “prices” to go down
Idk, I do just think that bad faith actors exist, especially in the public sphere. It’s a mistake to assume that all critics are acting in bad faith, but it’s equally naive to assume that none ever are