Looking to advance businesses with charities holding the vast majority of shares. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.
Brad West
Nick, I think you're imagining a different model than what I'm proposing. You're picturing a founder who needs to be driven by altruism instead of greed. That's not the idea.
The model is: a foundation buys an already-successful business from its existing owners and keeps the professional management in place. The managers keep getting paid salaries and bonuses. They keep running the business exactly as before. The only thing that changes is where the profits go after they're generated. This isn't about finding saintly founders. It's about acquisition. Private equity does this constantly. They buy businesses, keep management, extract profits. We're proposing the same thing, just with a charitable foundation as the equity holder instead of a PE fund.
You're right that greed drives startup founders. But startups are a tiny fraction of the economy. Most market share consists of mature companies run by professional managers who are already separated from ownership. They don't know or care whether their shares are held by Vanguard, Blackstone, or a foundation. They come to work, hit their targets, collect their bonus. That's the context where this operates.
This is precisely why this model is scalable. It doesn't require heroes. It just requires a foundation to buy out an existing business and keep the operations the same. In most businesses, management does not hold much equity, so a PFG business can offer the same compensation packages that a normal business would.
Kyle, appreciate the engagement. I think there's a core misread I should clear up: COA doesn't require anyone to pay more. That's the whole point. The thesis isn't "people will pay a premium for charity-owned." It's "at price parity, stakeholders prefer charity-owned, and that preference shows up in conversion, retention, and terms." You don't need customers to pay a charity tax. You need them to choose you over an equivalent competitor. The stated and revealed preference research suggests they will. So your concern about commodity and B2B customers actually supports my thesis. They won't pay more, and they don't have to. In fact, commodities might be the best fit for PFG if the business has the capital required, because it creates a differentiator where there are otherwise none. At equal price and quality, preference tips the balance. Even small advantages in win rates compound on thin margins. A business operating at a 10% margin that improves by 5 percentage points doesn't improve profit by 5%; profit increases by 50%.
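To make that last bit of arithmetic concrete, here is a minimal sketch (the revenue figure is an invented placeholder; the 10% margin and 5-point improvement are the same illustrative numbers as above, not data from any real business):

```python
# Illustrative only: a small absolute margin improvement is a large relative
# change in profit when the baseline margin is thin.
revenue = 1_000_000          # hypothetical annual revenue (placeholder)
baseline_margin = 0.10       # 10% net margin, as in the example above
improved_margin = 0.15       # +5 percentage points from stakeholder preference

baseline_profit = revenue * baseline_margin   # 100,000
improved_profit = revenue * improved_margin   # 150,000

relative_gain = (improved_profit - baseline_profit) / baseline_profit
print(f"Profit rises by {relative_gain:.0%}")  # Profit rises by 50%
```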
On the acquisition mechanics: yes, you're buying profitable businesses at normal multiples. The thesis is that charitable ownership improves margins post-acquisition, not that you're getting a discount upfront. Debt service comes first; charitable distributions come from what remains. If COA improves margins even modestly, the spread over borrowing costs funds both repayment and distributions. Same as any leveraged acquisition, just with a different equity holder. And foundation-owned businesses actually show lower default rates in the data, so lending terms should be competitive or better. The "entire economy" scope follows from the mechanism. The preference operates on profit destination, not product category. And because the preference advantage doesn't come with a clear operating disadvantage, we'd have to look for when a disadvantage might emerge. This could possibly be businesses like startups, where equity incentives for the key early players might outweigh such an advantage. But in most of the economy, ownership and management are separate. In the lower-middle market, where experimental acquisitions might feasibly take place, the kinds of acquisitions that keep operations in place but change ownership (continuity acquisitions) happen all the time.
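As a rough sketch of that order of operations (debt service first, distributions from what remains), here is a toy calculation; every figure is a placeholder I've invented, and it ignores principal amortization and taxes:

```python
# Hypothetical leveraged-acquisition cash flow: all numbers are placeholders.
ebitda = 2_000_000          # pre-acquisition operating profit (placeholder)
purchase_multiple = 5       # placeholder acquisition multiple
debt_share = 0.6            # fraction of the price financed with debt
interest_rate = 0.08        # placeholder borrowing cost
uplift = 0.05               # assumed relative profit improvement from COA

price = ebitda * purchase_multiple        # 10,000,000 purchase price
debt = price * debt_share                 # 6,000,000 borrowed
interest = debt * interest_rate           # 480,000 annual interest

post_acquisition_ebitda = ebitda * (1 + uplift)   # 2,100,000 with assumed uplift
remainder = post_acquisition_ebitda - interest    # covers repayment + charity
print(f"Available after interest (pre-amortization, pre-tax): {remainder:,.0f}")
```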
On the beachhead: agreed, this is what's needed. I'm working toward a fund structure to do instrumented acquisitions. The goal is generating real data, not just arguing from theory. Section 1.1 of the research compilation has more on the preference research if you want to dig in.
EDIT: Re AI timelines, one of the risks (certainly not the only one) is that it will cause wealth to be concentrated among the owners of capital. Having charities be the holders of that capital is likely a better outcome than a very small group who are accountable to no one.
If you're interested in the plausible margin effects, sector selection criteria, and financial projections, you can check out the research compilation that I linked to (Section 1 for stakeholder preference research, Section 4 on the effect of parity (no consumer sacrifice) on adoption, and Sections 9A and 9B on sector selection criteria and financial modeling, respectively).
And Claude helped organize and review the draft, but I wrote it.
Yeah, the downside would be the cost of running the program, which would be very small in relation to the value of the capital (which would be going to charity, so just subject to normal business risks).
If the fund sees differences in post-acquisition performance, it can expand, and other philanthropists will have an incentive to copy the model. If the thesis is generally proven, lenders will have the incentive to finance further acquisitions (leveraged buyouts); the sky, or most of the entire economy (other than perhaps startups, where equity incentives might outweigh COA advantages), is the limit.
Truly absurd that this is not being explored.
Are We Ignoring the Solution to Funding Effective Charities?
It might make sense to have the ability to toggle between a "harm negation" and a "total counterfactual expected difference" calculation. But you're right that a lot of the people to whom offsetting might appeal may not want to investigate these distinctions.
Sorry if I haven't been clear.
I agree that the animal movement, individually and collectively, should take into account the entire counterfactual difference between someone being vegan and someone being an omnivore. This would include the harm caused by being an omnivore through increasing the demand for factory-farmed meat, as well as the absence of the positive effects of being vegan (such as normalizing veganism and increasing demand for vegan products). Ideally, in deciding one's dietary choices, someone concerned with animal welfare would consider both the harm avoided by being vegan and the good that it causes. They would then quantify the cost for animal welfare charities to both commensurately decrease the harm caused and effectuate the good that is not realized. This would probably be a better measure, and one could say, "OK, I'm donating 10% to effective charities already. Is it easier for me to pay the cost of the whole counterfactual difference in addition to what I would otherwise donate? Or is it easier for me to be vegan?"
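To illustrate the shape of that comparison with entirely made-up numbers (none of these figures come from any charity evaluator; they only show the structure of the decision):

```python
# Toy comparison using invented placeholder figures.
cost_to_offset_harm = 300        # $/yr for charities to negate the harm of omnivore demand (placeholder)
cost_to_replicate_good = 200     # $/yr to produce the good veganism would have produced (placeholder)
full_counterfactual_cost = cost_to_offset_harm + cost_to_replicate_good  # 500

personal_cost_of_veganism = 400  # subjective $/yr cost of changing one's diet (placeholder)

if personal_cost_of_veganism < full_counterfactual_cost:
    print("For this person, being vegan is the cheaper way to cover the full difference")
else:
    print("For this person, donating the full counterfactual difference is cheaper")
```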
The other frame for offsetting, however, would be to make it match the psychological appeal of undoing the harm one caused. If this is what is motivating people to donate to animal welfare charities, then it would make more sense to only include the harms that are caused by being an omnivore (i.e., contributing to the demand for factory-farmed meat). People may not feel morally obligated to make the positive difference, just not to cause the harm (or to undo it).
So, definitely for decision-making, both by individuals and within the movement, considering the positives of veganism as well as the negatives it avoids is important. Whether "offsetting" should include the positives is a prudential question that really depends on the psychologies that cause people to offset.
I agree. But current offsetting focuses on just negating the negatives.
The reason not to is that focusing on just the harm negation may accord more with the psychological reasons people offset. The measure we're discussing may go beyond what makes sense to call "offsetting."
An interesting question I have regarding offsetting is whether it should just be measuring the negative aspects of contributing to animal suffering by increasing demand for factory-farmed products, or whether it should also be considering the positives foregone by not being vegan (signaling value, increasing the demand for vegan products, other possible things).
Because if one were considering whether to be vegan or to donate $X, they should probably consider the full counterfactual (positives foregone as well as negatives caused).
I'm not drawing a metaphysical distinction between humans and animals. I care about welfare, full stop.
The difference is empirical, not metaphysical. Human suffering triggers compensatory responses from other humans that multiply the costs. People who learn hospitals might harvest organs stop going to hospitals. Communities that tolerate trafficking erode the trust structures enabling cooperation. Social fabric frays. These system-level effects make the total harm enormous and difficult to quantify. You can't reliably offset what you can't measure.
Farmed animals don't generate these dynamics. A chicken doesn't know some humans eat chickens while others donate to reduce chicken suffering. There's no institutional trust to erode, no behavioral adaptation that cascades through society. The welfare calculus is direct and measurable.
On the organ case: if you modify it enough to truly eliminate the systemic effects (no fear, no institutional erosion, no social knowledge of what occurred) then yes, I bite the bullet. Saving five lives at the cost of one is better than letting five die to keep one alive. If that conclusion seems monstrous, I'd suggest your intuition is tracking the systemic costs you've stipulated away, not the raw welfare math.
But we don't need to resolve exotic hypotheticals here. You're arguing from analogy to human cases where offsetting fails. It fails because of empirical features those cases have, not because human suffering can never be weighed against animal suffering.
Ultimately, for me, it all cashes out in the experiences of beings, whether human, chicken, or digital consciousness. That's what matters.
But there are important consequentialist reasons why the doctor killing patients for their organs fails in the real world. Once you live in a world in which people who go to hospitals get killed and their organs repurposed, people cease going to hospitals.
On the other hand, differences in the treatment of farmed animals are not going to trigger responses from those animals that lead to such knock-on effects. You can simply look at the welfare consequences.
I think of it from the perspective I would have if I knew I would die and immediately be reborn as a chicken. Would I rather there be more Georges in the world, who are vegan and do not contribute directly to the demand that causes my torture, or more Henrys, who are omnivores and thus contribute directly to the torture, but donate an amount that neutralizes the effect and then some?
If we actually care about the welfare of animals more than we care about moral purity, we would rather there be more Henrys than Georges.
Glad to hear about your commitment to utilitarianism!
I would note, re the camper van, that minimizing costs so that you can give more is only one part of the equation. There may be productivity costs associated with putting your own well-being at too low a floor, such that it may make sense to spend a bit more on yourself.
If my recollection is correct, the Mormon church had a similar strategy for showings of The Book of Mormon musical.
I don't know if a good text on this exists, but I think a strong book on using counterfactual thinking would be great for EAs. Might be a great book for someone to consider writing specifically from an EA perspective.
I just asked Claude about good books about counterfactual thinking and I think I might listen to the audiobook of "Thinking in Bets" by Annie Duke, to see whether I would recommend it to EAs.
It only relates to it insofar as someone could view your post (just looking at the title) as implying socialism and EA (at the broadest level, trying to do the most good we can with resources) are at odds. In reality, a lot of critics of EA are addressing the community's choice of priorities rather than EA at the broadest level. I would prefer it if such critics embraced the EA framework explicitly and made the case that their cause area or philosophy is actually the most EA, if this is pretty much what they are doing.
There's a lot of conflation between what the EA community is prioritizing at any given point and EA as a philosophy to guide moral behavior. I think this conflation probably does a lot of damage to EA's ability to proliferate.
I didn't downvote, but the initial title seems to suggest that Effective Altruism and socialism would be at odds. I don't really think that EA and socialism are at odds, nor EA and any ideology per se. If people within EA hold different views about which forms of government, etc., are likely to produce flourishing, we would expect EAs with different political beliefs. To be clear, you made clear in your post that you can be committed to different beliefs and still be EA, but the top-level framing I found a bit jarring. Generally, a lot of the arguments that purport to be against EA are arguments against certain cause prioritizations or perspectives on how to do the most good, which seem like they might make more sense within the tent of EA.
Thanks, Jason. This is exactly the kind of scrutiny I think the idea needs.
On your core point: I basically agree with your descriptive read. Looking at Thankyou's ~4-5% donate-able margin or Newman's ~5% donations on ~5% net margins does not scream "these businesses are crushing the for-profit competition." When I talk about large profit uplifts, I'm not claiming we already see that in their published numbers; I'm saying the mechanism (thin margins + modest stakeholder advantages) could plausibly generate big differences if we ever set this up deliberately and at scale. What we actually have today are a handful of pioneers operating under lousy conditions: low category awareness, no shared certification, and a chronic capital misfit (too "weird" for normal investors, too "businessy" for most philanthropy). In that world, you'd expect "survive and sometimes do well," not "obvious margin dominance."
I also think you're right to flag survivorship and the Paul Newman effect. Newman's Own probably got a brand tailwind few founders can replicate, and we don't have good public data on the PFG attempts that fizzled. That's partly why the article opens by saying "the basic math is compelling, what we lack is rigorous measurement." I'm using Thankyou/Newman's as existence proofs ("this can work at all"), not as clean evidence that the multiplier is already realized in the wild.
On the Kraft question and capital: I don't think the story has to be "small PFG beats the global leader on day one." The more realistic path I have in mind is stepwise:
First, PFG companies grow with philanthropic and mission-aligned capital in niches where capital requirements are tractable and stakeholder preference is strong (ticketing, insurance distribution, hated-fee services, values-expressive categories, etc.). At that stage, the relevant comparison really is similarly sized "profit-for-yacht" firms, not KraftHeinz.
Then, if they show normal or better cash-flow performance, they can access ordinary credit markets. Banks and lenders care about coverage ratios and default risk (a rough sketch of what that looks like follows below), not whether residual profits go to a foundation or to a family office. A PFG firm with solid EBITDA and a boring business model can still borrow to expand, even if 100% of distributable profits ultimately go to charity.
Only much later do we get to the level where you're buying or competing head-to-head with a Kraft-scale incumbent. At that point, you'd likely be using a mix of philanthropic equity, retained earnings, and conventional debt, not trying to fund a $50B play entirely out of grants.
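For what I mean by coverage ratios in that second step, here is a minimal sketch; the EBITDA and debt-service figures are hypothetical, and the 1.25x threshold is just a commonly cited rule of thumb, not a claim about any particular lender:

```python
# Hypothetical debt-service-coverage check: lenders care whether operating cash
# flow comfortably covers debt payments, not where residual profits end up.
ebitda = 1_500_000              # placeholder annual operating profit
annual_debt_service = 900_000   # placeholder principal + interest payments

dscr = ebitda / annual_debt_service   # ~1.67
required_dscr = 1.25                  # commonly cited lending threshold (rule of thumb)

verdict = "meets" if dscr >= required_dscr else "falls below"
print(f"DSCR = {dscr:.2f}, which {verdict} a {required_dscr}x threshold")
```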
So I'm not claiming "PFG firms are already out-earning Kraft" or that we have tidy margin graphs to prove a large edge. I'm claiming: (a) we have decent evidence that stakeholder preferences exist at parity and can matter; (b) the margin arithmetic makes it at least plausible that, in the right contexts, this could translate into big differences in distributable profits; and (c) given that, it's rational for philanthropists to run some careful, sector-specific experiments rather than either assuming PFG can't compete or assuming it already does. If those trials show no advantage once you control for capital and sector, I'll happily update. Right now, the main thing I'm arguing against is staying forever in exactly the "anecdata vs intuition" uncertainty you're highlighting.
On another note, it might be worth engaging, potentially on their community forum, to see where your cruxes are regarding vaping as harm reduction and determine whether there are any areas where your perspectives could inform each other or there is common ground.
Thanks for the thoughtful reply. And yes, I do think this is a pretty serious concern for trust and scale.
The core issue, as I see it, is that for the "we're neutralizing opposing political donations" story to really hold, donors should be doing something like:
"This is money I was otherwise going to use to support the specific zero-sum political cause indicated (or a very close substitute), and I'm now redirecting it instead."
One concrete way to reinforce that would be a short pledge at checkout, e.g.:
"I understand that DuelGood only works if donors genuinely redirect money they would otherwise have used to support the indicated political cause (or a very similar one). I pledge that this donation meets that description."
You could then reserve the strongest "duel/neutralization" framing and stats for donors who sign that pledge, and be transparent about that in the FAQ.
I'd really love to see DuelGood work; turning political deadlock into bednets is a very compelling vision.
My apologies for not having followed the links in your post in the first place.
Yeah, there's the possibility of a double standard: essentially, the PFG is reputationally penalized for competitive choices in ways its normal competitors are not.
It seems the short-term solution to this is selecting contexts that aren't fraught with ethical issues.
And if you succeed in the short term, the long-term solution would be a messaging campaign that addressed this irrational double standard around unpopular competitive business choices.