I’m not sure I follow your analysis—what do these numbers represent?
CE incubatee
p5: ~$100K
Median: ~$500K
Mean: ~$6M
p95: ~$23M
I do generally believe that at the point you’re big enough to have a significant impact, it’s quite likely that your investors will pressure you to squeeze money out of it in a way that will likely ruin said impact.
I think this could be a crux. Overall, I’m not very convinced that this effect is particularly strong relative to incentives that apply in the nonprofit case. Some rough heuristics for why I think it’s often overstated:
* Once a company is operating at scale (e.g. Series B) it’s rare to see large pivots, and if a change isn’t large why would it cost a lot of impact?
* Most incubators want people to work on things they are excited about; there’s only so far you can push a founder to change direction before they get fed up.
* I suspect that investors might place a lot of pressure on companies to change the way they operate—but overall (relative to nonprofits) I think that’s probably a good thing.
I think I’m also less convinced than you are that the skillsets for for-profit and non-profit founders are highly transferable. I guess it depends a lot on the particular organisations.
I looked at a handful of mental health startups to inform my guesses on impact. I looked most deeply into BetterHelp, and you can clearly see in their numbers that their prices have almost doubled in 5 years (and steadily, too—this wasn’t a COVID thing). From the research I did, my sense was that the increase wasn’t getting passed on to their counsellors, nor fuelling an increase in growth spending. There’s no way it got twice as expensive to deliver therapy.
I think if we had to point to a single mechanism: once you run out of user growth—as BetterHelp have—your investors push you to find an equilibrium price. That price is necessarily higher than the price that guarantees the broadest impact, and likely higher than the price that maximises (breadth × depth) impact.
My best guess is that regional pricing can act as a crude form of means-testing, but it probably comes with a perverse incentive to ignore the cheaper regions (as BetterHelp have—almost all of their users are in the U.S.).
(All of that goes out the window if you don’t go direct to consumers—I think deeper forms of healthtech might be quite value-aligned!)
Thanks for sharing your analysis! Very interesting to read.
Did you look into whether steward-ownership is a viable strategy for mitigating this risk?
I didn’t. It evidently works—as do cooperatives, which I was also excited to found—but I think the big worry is at the top end. It’s very hard to imagine a FAANG company structured this way, and some of the average-case calculations above are skewed upwards by a handful of top success stories.
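For intuition on how tail-driven those averages are: the summary stats quoted earlier (~$500K median, ~$23M p95, ~$6M mean) are roughly what you’d expect from a heavy-tailed outcome distribution. A rough sketch, under my own assumption that outcomes are approximately lognormal (not a claim from the post):

```python
import math

# Summary stats quoted in the thread
median, p95 = 0.5e6, 23e6

# For a lognormal, p95 = median * exp(1.645 * sigma),
# so sigma follows from the median -> p95 ratio
sigma = math.log(p95 / median) / 1.645

# The implied mean of a lognormal is median * exp(sigma^2 / 2)
implied_mean = median * math.exp(sigma ** 2 / 2)
print(f"sigma ≈ {sigma:.2f}, implied mean ≈ ${implied_mean / 1e6:.1f}M")
```

The implied mean comes out around $7.5M—in the same ballpark as the quoted ~$6M—which is consistent with the point that the average is dominated by a handful of top outcomes (the median founder sees far less).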
At Google, Larry Page and Sergey Brin control 51% of the shareholder vote thanks to their supervoting stock.
At Meta/Facebook, Zuckerberg controls 61% of the vote.
Anthropic is a public benefit corporation.
OpenAI was supposed to be controlled by a nonprofit board though Sam Altman is trying to convert it into a public benefit corporation.
Shareholders are also unlikely to remove Elon Musk from Tesla even if he does a lot of things against Tesla’s interests.
Executives are under intense pressure to make a profit to keep the business from going bankrupt, and perhaps to earn bonuses or reputation, but the pressure to avoid being voted out by shareholders is comparatively weak.
Charities have a lot of the same pressures (minus the bonuses).
I don’t have any expertise, I may be totally wrong.
How much money a CE charity might be able to raise on average. (This assumes that cash deployed by a charity is roughly equivalent to cash donated from Founding to Give, which is what the other numbers represent.)
Did you adjust for the likelihood that some of the funding secured by the charity would have gone to other effective charities in the same cause area?
No, but I think that’s reasonable in most cases (although hard to figure out exactly how to allocate it).
I don’t quite understand why you’re comparing donated money from people earning to give to money consumed by a charity.
I think a better comparison could be to convert everything into impact-adjusted dollars, imagining that you’re able to sell off your impact equity. In this scheme it’s clearer that taking more money from value-aligned funders is bad, whilst taking money from non-aligned funders is roughly neutral and donating a bunch of money is very good. The charity essentially has to “pay back” the value-aligned funder in impact to get itself out of a hole, whereas the for-profit doesn’t need to worry about that (on impact grounds).
To be clear, I think many nonprofits do a lot of good and are well worth funding—I spend most of my time trying to fund them—but it is harder to come out net positive if you’re consuming a bunch of fungible, value-aligned resources.
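A toy way to make this accounting concrete. The discount factors below are illustrative assumptions of mine, not figures from the thread—the point is just the sign of each term:

```python
def impact_adjusted(direct_impact, aligned_raised, unaligned_raised, donated,
                    aligned_discount=1.0, unaligned_discount=0.1):
    """Toy impact accounting in 'impact-adjusted dollars'.

    Money taken from value-aligned funders is charged at (nearly) full
    counterfactual value -- the org must 'pay it back' in impact -- while
    money from non-aligned funders is charged only a small discount, and
    money donated out counts as a straight gain.
    """
    return (direct_impact
            - aligned_discount * aligned_raised
            - unaligned_discount * unaligned_raised
            + donated)

# A charity funded entirely by aligned donors must beat its budget in impact:
print(impact_adjusted(direct_impact=3e6, aligned_raised=2e6,
                      unaligned_raised=0, donated=0))    # 1000000.0
# A for-profit founder donating profits isn't 'charged' for their revenue:
print(impact_adjusted(direct_impact=0, aligned_raised=0,
                      unaligned_raised=0, donated=1e6))  # 1000000.0
```

On this framing, the charity’s first $2M of impact just cancels out what its aligned funders could have done elsewhere, whereas the donor’s $1M counts in full.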
Thanks for the feedback! I’m not really smart enough to figure something like that out tbh, and by the point I’d seen that my realistic options were within an order of magnitude of each other (and both high-risk with high overlap) I was pretty satisfied that my decision was likely gonna hinge on something else.
Maybe your adjustment would take it outside that range, but I think at the point of extreme success these charities would be selling impact at competitive rates (so funders would be getting marginal value out of them), and more than likely drawing on counterfactual funders (e.g. government). Maybe this is truer in GHD, and especially mental health, than in animal welfare, which seems more concentrated. But yeah, at this point I was pretty satisfied that Kaya Guides had minimal risk of substantial funding displacement in a success scenario (I can’t be too specific about this in public), so I picked it.
(Again, maybe—I’m just highlighting my specific scenario; there’s definitely an attempt to generalise here, but I didn’t think it through too hard.)
Makes sense. Sorry I didn’t say this before, but I really appreciate your comments on this post and the high-effort modelling that you did when working out what to work on. I think it’s a great example to set for the community, and shows how seriously you can take these decisions (if you want to).
I think this point is really important. If the charity raises 20 million from GiveWell, that’s great, but the counterfactual impact is unlikely to be very big. If they raise it from other foundations or individuals, though, I would say the counterfactual impact per dollar might be 5–10x as much.
But in general I agree with almost everything @huw nicely points out here.
And yes, I don’t expect @huw to account for this in a calculation. I think it would be nice to have a super well-researched estimate of the value per dollar of GiveWell funding vs. other foundations, and I’m surprised this hasn’t been done.
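For what it’s worth, here’s a toy sketch of how a 5–10x gap can fall out of displacement assumptions. The displacement fractions are purely illustrative (my numbers, not researched figures):

```python
def counterfactual_value(raised, displacement):
    """Value created by a raise after discounting what the money
    would have achieved elsewhere (displacement in [0, 1])."""
    return raised * (1 - displacement)

# Illustrative only: GiveWell money assumed ~90% displaced (it would
# have funded other top charities anyway), other funders ~40%.
givewell = counterfactual_value(20e6, displacement=0.9)
other = counterfactual_value(20e6, displacement=0.4)
print(f"ratio ≈ {other / givewell:.0f}x")  # ratio ≈ 6x
```

So even modest differences in how displaced you think each funding source is can land you inside the 5–10x range guessed above.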
If the data were available, the amount a CE charity might be able to raise on average from funders other than highly aligned ones might work better for someone deploying your analysis on a different decision about whether to found a CE charity vs. earn to give. You’ve mentioned that you were “satisfied that Kaya Guides had minimal risk of substantial funding displacement in a success scenario,” so it makes sense that you wouldn’t adjust for this when making your specific decision.
(The working, rough assumption here is that the average CE charity can put a dollar to use roughly as well as the average GiveWell grantee or ACE-recommended charity—so moving $1 from the latter to the former produces neither a net gain nor a net loss. That’s unlikely to be exactly correct, but it’s probably closer to the actual effect than not adjusting for where the money would have gone counterfactually.)