If a project is partially funded by e.g. Open Philanthropy, would you take that as a strong signal of the project's value (e.g. not worth funding at higher levels)?
Nah, at least in my own process, I don't think Open Phil's evaluations take a large role in my evaluation qua evaluation. That said, LTFF has historically[1] been pretty constrained on grantmaker time, so if OP's evaluation can save us time, obviously that's good.
A few exceptions I can think of:
- I think OP is reasonably good at avoiding the types of downside risks I model OP as caring about (e.g. reputational harm), so I tend to spend less time vetting grants for that downside risk vector when OP has already funded them.
- For grants in technical areas where I think OP has experience (e.g. biosecurity), if a project has already been funded by OP (or sometimes rejected), I might ask OP for a quick explanation of their evaluation. Often they know key object-level facts that I don't.
- In the past, OP has given grants to us. I think OP didn't want to both fund orgs directly and fund us to then fund those same orgs, so we reduced our evaluation of orgs (not individuals) that OP had already funded. Switching over from an "OP gives grants to LTFF" model to an "OP matches external donations to us" model hopefully means this is no longer an issue.
Another factor going forwards is that we'll be trying to increase our epistemic independence and decrease our reliance on OP even further, so I expect to actively reduce how much OP's judgments influence my thinking.
[1] And probably currently as well, though at this very moment funding is a larger concern/constraint. We did make some guest fund manager hires recently, so hopefully we're less time-bottlenecked now. But I won't be too surprised if grantmaker time becomes a constraint again after this current round of fundraising is over.