Meta Trap #1: Meta Orgs Risk Not Actually Having an Impact
However, we’ve also now constructed a meta-chain that is five steps removed from the actual impact, and a lot can go wrong along it: the chapters could get set up successfully but fail to get enough people to donate; the chapters could fail to get set up at all for reasons unrelated to the mentoring; the mentors could turn out to be no better than if the full-time staff member had just advised chapters directly; or the staff member could simply be bad at recruiting volunteer mentors.
This doesn’t mean the chapter chain doesn’t have high expected value or that it’s not worth doing. It just means that it’s risky, and I’m nervous that as the levels of meta scale up, the additional risk taken on by introducing more ways to break the chain might be much greater than the additional leverage gained by introducing another meta step.
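To make the compounding concrete, here is a minimal sketch of how per-step success probabilities multiply along a meta-chain. All the numbers are illustrative assumptions of mine, not estimates from anyone's actual project, and the steps are treated as independent for simplicity.

```python
def chain_success_probability(step_probs):
    """Probability the whole chain succeeds, assuming the steps
    are independent (a simplifying assumption)."""
    p = 1.0
    for step in step_probs:
        p *= step
    return p

# Five steps, each individually quite likely to succeed...
steps = [0.8, 0.8, 0.8, 0.8, 0.8]

# ...yet the chain as a whole succeeds only about a third of the time.
print(round(chain_success_probability(steps), 3))  # 0.328
```

Even with generous odds at every link, each added meta step shaves off another multiplicative factor, which is exactly why the added leverage has to be weighed against the added fragility.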
We as effective altruists, whether considering launching or supporting meta-projects, need to figure out:

1. How to make calculations of probability chains we actually feel we can rely on. How would we figure this out? I’d guess you’d take lessons from How To Measure Anything, and then get good at Bayesian thinking? I don’t know, though I figure this is something we could seek help from CFAR and/or the rationalist community in figuring out how to do.

2. Actually doing these calculations and publishing them for feedback before any of us launch the project. Where there is a bottleneck, or feedback from the community otherwise convinces founders there is a weak link in the probability chain, they can refine or change the plan to improve the expected odds of success.

3. As funders or supporters of such a meta-charity or meta-project, demanding that a final and meticulous draft, in the form of a report or something similar, building on the calculation in step (2), be published for scrutiny before going forward with funding.