I misunderstood the order of events, which does change the story in important ways. The way OpenPhil handled this is not ideal for encouraging other funders, but there were no broken promises.
I apologise and I will try to be more careful in the future.
One reason I was too quick on this is that I am concerned about the dynamics that come with having a single overwhelmingly dominant donor in AI Safety (and other EA cause areas), which I don’t think is healthy for the field. But this situation is not OpenPhil’s fault.
Below is the story from someone who was involved. They have asked to stay anonymous; please respect this.
The short version of the story is: (1) we applied to OP for funding, (2) in late 2022/early 2023 we were in active discussions with them, (3) at some point, we received 200k USD via the SFF speculator grants, (4) then OP got back to us confirming that they would fund us with the amount for the “lower end” budget scenario minus those 200k.
My rough sense is similar to what e.g. Oli describes in the comments. It’s roughly understandable to me that they didn’t want to give the full amount they would have been willing to fund without other funding coming in. At the same time, it continues to feel pretty off to me that they let the SFF speculator grant replace their funding 1:1, without even talking to SFF at all. This means that OP got to spend a counterfactual 200k on other things they liked, but SFF did not get to spend additional funding on things they consider high priority.
One thing I regret on my end, in retrospect, is not pushing harder on this, including clarifying to OP that the SFF funding we received was partially unrestricted, i.e. it wasn’t limited to funding only the specific program that OP gave us (earmarked) funding for. But, importantly, I don’t think I made that sufficiently clear to OP, and I can’t claim to know what they would have done if I had pushed for that more confidently.