Hi Eric, thanks for your note! Happy to provide some more context on a few things:
You’re right: the 160,000 includes an analysis of finance & accountability that is automated off of 990s; the Impact & Results ratings are not automated. Honestly, the key barrier to “scale” here is smart labor (a team of 3 has been working on this). Certainly, in typical EA terms, many of the nonprofits analyzed are not the most cost-effective. But we also know that standard EA nonprofits are a fraction of the $300 billion nonprofit sector, and a portion of that money has high intra-cause elasticity but low inter-cause elasticity. Impact analysis could be a way of shifting that money, yielding very cost-effective returns. (Again, ImpactMatters spent half a million or so last year to rate $15 billion in nonprofit spending. How much did we actually move? Probably not a lot. But hopefully this acquisition changes that, and we’ll be running experiments over the next year to figure it out.)
If anyone is looking to pitch in on the cost-effectiveness analysis, we’re looking to build a small volunteer team—more
True! But the beauty, as we see it, is that there is now actually a largeish raw dataset that donors can use to apply their own weightings and build benefit-cost analyses. The barrier to benefit/cost has never been the b/c methodology; it has been the raw CEA estimates to feed in.
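Fwiw, the donor-side exercise is trivial once the raw estimates exist. A minimal sketch of what I mean (nonprofit names, field names, and dollar values here are all hypothetical, not actual ratings):

```python
# Illustrative only: combining raw cost-effectiveness estimates with a
# donor's own valuation of the outcome to get benefit/cost ratios.

def benefit_cost_ratios(ratings, value_per_outcome):
    """Turn raw CEA estimates (cost per unit of outcome) into
    benefit/cost ratios using the donor's own outcome valuation."""
    return {r["nonprofit"]: value_per_outcome / r["cost_per_outcome"]
            for r in ratings}

# Hypothetical raw estimates: cost per night of shelter provided.
ratings = [
    {"nonprofit": "Shelter A", "cost_per_outcome": 35.0},
    {"nonprofit": "Shelter B", "cost_per_outcome": 60.0},
]

# A donor who values a night of shelter at $100 can rank directly.
bc = benefit_cost_ratios(ratings, value_per_outcome=100.0)
```

The weighting step is the donor’s, not ours; the hard part was always producing the per-outcome cost estimates that feed it.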
I’m not sure what “incorrect” means in this context. Fwiw, we are working on moving to a continuous scale, which may address some of your critiques. But I don’t think anyone believes cardinal rankings are actually of much use in this space.
Got to disagree! Sorry, but this is incorrect—maybe not the claim about oversimplification, but your summary of our methodology certainly is. Some nonprofits run an emergency shelter as well as other shelter or housing programs, such as transitional housing or permanent supportive housing. Wherever possible, we exclude from our calculation the value of these non-emergency shelter services as well as the costs of providing them. If the nonprofit has not separated out programmatic costs in this way, we apply a standard cost adjustment. The cost adjustment is calculated using HUD’s Housing Inventory Count dataset. The Housing Inventory Count dataset reports the number of individuals sheltered by each nonprofit on a single night in January, broken out by six types of shelter and housing programs, including emergency shelter. This allows us to calculate the number of individuals sheltered as part of a nonprofit’s emergency shelter program as a percentage of the total individuals it sheltered across all programs. We then multiply the proportion by total programmatic costs, yielding an estimate of costs associated only with the nonprofit’s emergency shelter program. See Reference Manual on Data Analysis for more details on this calculation.
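To make the cost adjustment concrete, here is a minimal sketch of the proportional calculation described above (the program labels and numbers are illustrative, not actual Housing Inventory Count data):

```python
# Sketch of the standard cost adjustment: attribute a share of total
# programmatic costs to the emergency shelter program, proportional to
# the individuals sheltered in it on the HIC count night.
# All values below are hypothetical, for illustration only.

def emergency_shelter_cost(total_program_costs, sheltered_by_program):
    """Estimate costs attributable to the emergency shelter program.

    sheltered_by_program maps program type -> individuals sheltered
    on the single count night (emergency shelter, transitional
    housing, permanent supportive housing, etc.).
    """
    total_sheltered = sum(sheltered_by_program.values())
    share = sheltered_by_program["emergency_shelter"] / total_sheltered
    return share * total_program_costs

# Example: 120 of 200 sheltered individuals were in emergency shelter,
# so 60% of $1,000,000 in programmatic costs is attributed to it.
costs = emergency_shelter_cost(1_000_000, {
    "emergency_shelter": 120,
    "transitional_housing": 50,
    "permanent_supportive_housing": 30,
})
```

When a nonprofit has already separated out its emergency shelter costs, we use those directly and skip this adjustment.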
Thanks for engaging here, Elijah, and thanks for your hard work. It means a lot to me and, I am sure, to many others here.