I really appreciated this list of examples, and it’s updated me towards checking in with LTFF & others a bit more. That said, I’m not sure adverse selection is a problem that Manifund would want to dedicate significant resources to solving.
One frame: is longtermist funding more like “admitting a Harvard class/YC batch” or more like “pre-seed/seed-stage funding”? In the former case, it’s more important for funders to avoid bad grants; the prestige of the program and its peer effects are based on high average quality in each cohort. In the latter case, you are “black swan farming”; the important thing is to not miss out on the one Facebook that 1000xs, and you’re happy to fund 99 duds in the meantime.
I currently think the latter is a better representation of longtermist impact, but 1) impact is much harder to measure than startup financial results, and 2) having high average quality/few bad grants might be better for fundraising...
> In the latter case, you are “black swan farming”; the important thing is to not miss out on the one Facebook that 1000xs, and you’re happy to fund 99 duds in the meantime.
One risk of this framing is that as a seed funder your downside is pretty much capped at “you don’t get any money back”, while with longtermist grantmaking your downside could be much larger. For example, you could fund someone to do outreach who is combative and unconvincing, or someone who will use poor and unilateral judgement around information hazards. The article gives an example of avoiding a grant that could have carried this kind of significant downside risk: the grantmaker “concluded that the applicant has enough integrity or character issues or red flags that I’m not comfortable with recommending funding to them”.
I’ve heard this argument a lot (eg in the context of impact markets), and I agree that the consideration is real, but I’m not sure it should be weighted heavily. I think it depends a lot on what the distribution of impact looks like: how large the best positive outcomes are relative to the worst negative ones, how frequent each is, and how much a given intervention (eg adding screening steps) cuts out negative projects while also discouraging positive ones.
For example, if across 100 projects your outcomes are [1x +1000, 4x −100, 95x ~0], then I think black swan farming still does a lot better than a process that tries to select only the top 10: funding everything nets +1000 − 400 = +600 in expectation, and the +1000 project may not even make your top 10. Meanwhile, if your outcomes look more like [2x +1000, 3x −1000, 95x ~0], funding everything nets +2000 − 3000 = −1000, and careful filtering starts to matter a lot.
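To make that concrete, here’s a toy expected-value calculation (a minimal sketch; the filter’s catch/miss rates are made-up assumptions for illustration, not anything from the article):

```python
# Toy model: "fund everything" vs. a screen that rejects bad projects with
# probability p_catch_bad but also wrongly rejects good ones with p_miss_good.
# The outcome distributions come from the comment above; the filter
# parameters are hypothetical assumptions.

def expected_total(outcomes, p_catch_bad=0.0, p_miss_good=0.0):
    """Expected summed impact over (count, impact) buckets after screening."""
    total = 0.0
    for count, impact in outcomes:
        if impact < 0:
            total += count * impact * (1 - p_catch_bad)  # bad ones that slip through
        else:
            total += count * impact * (1 - p_miss_good)  # good ones that survive screening
    return total

scenario_a = [(1, 1000), (4, -100), (95, 0)]
scenario_b = [(2, 1000), (3, -1000), (95, 0)]

for name, outcomes in [("A", scenario_a), ("B", scenario_b)]:
    fund_all = expected_total(outcomes)
    filtered = expected_total(outcomes, p_catch_bad=1.0, p_miss_good=0.5)
    print(f"{name}: fund everything = {fund_all:+.0f}, filter = {filtered:+.0f}")

# A: fund everything = +600, filter = +500   -> black swan farming wins
# B: fund everything = -1000, filter = +1000 -> careful filtering wins
```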
My intuition is that the best projects are much better than the worst projects are bad, and also that the best projects don’t necessarily look that good at the outset. (To use the example I’m most familiar with, Manifold looked pretty sketchy when we applied for ACX Grants, and got turned down by YC and EA Bahamas; I’m still pretty impressed that Scott figured we were worth funding :P)