Explaining Impact Markets

Let’s say you’re a billionaire. You want to have a flibbleflop, so you post a prize:
Make a working flibbleflop — $1 billion.
There begins a global effort to build working flibbleflops, and you see some teams of brilliant people starting to work on flibbleflop engineering. But it doesn’t take long for you to notice that the teams keep running into one specific problem: they need money to start (buy flobble juice, hire deeblers, etc.), money they don’t have.
So, the people who want to build the flibbleflop go and pitch to investors. They offer investors a chunk of their prize money if they end up winning, in exchange for cold hard cash right now to get started building. If the investors think that the team is likely to build a successful flibbleflop and win the billion dollar prize, they invest. If not, not.
If you squint, you could replace “flibbleflop” with highly capable LLMs, quantum computers, or any number of cool and potentially lucrative technologies. But if you stop squinting, and instead add the adjective “altruistic” before “billionaire,” you could replace “flibbleflop” with “malaria vaccine.” Let’s see what happens:
Make a working malaria vaccine — $1 billion.
There begins a global effort to build working malaria vaccines, and you see some teams of brilliant people starting to work on vaccine engineering. But it doesn’t take long for you to notice that the teams keep running into one specific problem: they need money to start (buy lab equipment, hire researchers, etc.), money they don’t have.
So, what should they do?
Obviously, the people who want to build the vaccine should go and pitch to investors. They should offer investors a chunk of their prize money if they end up winning, in exchange for cold hard cash right now to get started building. If the investors think that the team is likely to build a successful malaria vaccine and win the billion dollar prize, they should invest. If not, not.
The prize part of this is how a lot of philanthropy is done. An altruistic billionaire notices a problem and makes a prize for the solution. But the investing part is unusual, and doesn’t happen very often.
Why is this whole setup good? Why would you want the investing thing on the side? Mostly, because it resolves the problem that some teams will be wonderfully capable but horribly underfunded. In exchange for a chunk of their (possible) future winnings, they get to be both wonderfully capable and wonderfully funded. This is how it already works for AI or quantum computing or any other potentially lucrative tech that has high barriers to entry; we can solve the same problem in the same way for the things that altruistic billionaires care about, too.
But backing up a bit, why would an altruistic billionaire want to do this as a prize in the first place? Why not use grants, like how most philanthropy works?
Prizes reward results, not promises. With a prize, you know for a fact that you’re getting what you paid for; when you hand out grants, you get a boatload of promises and sometimes results.
The investors care a lot about not losing their money. They’re also very good at picking which teams are going to win — after all, investors only get rewarded for picking good teams if the teams end up winning.
The issue of figuring out which people are best to work on a problem is totally different from the issue of figuring out which problems to solve. Using a prize system means that you, as a lazy-but-altruistic billionaire, don’t have to solve both issues — just the second one. Investors do the work of figuring out who the good teams are; you just need to figure out what problems they should solve.
If you do this often enough — set up prizes for solutions to problems you care about, then let people build those solutions and get the prizes — you can start making prizes for more and more things, with investors who profit from picking the right teams to work on the right problems. You can also get more and more vague, expecting that teams (and investors) will figure out your preferences as they go. And in the extreme, you can just say “make stuff I value, and I’ll give you a prize to the extent that I value it.” When those things that you’re giving out prizes for are valuable to just you, people call it “capitalism.” When the things you’re giving out prizes for are valuable for the world, we call it an “impact market.”
An impact market looks like a marketplace of philanthropies giving out prizes for the completion of huge, successful, altruistic projects, with investors picking and choosing the best teams for the world’s problems. Impact markets are awesome.
These are not my original ideas. However, every single other writeup of these ideas I’ve ever encountered is filled with bizarre jargon that makes the ideas unnecessarily difficult to understand.
Note: my explanation of when an investor invests isn’t exactly accurate, but it’s close enough for a simple explanation. In reality, investors will choose to invest if they think that [the probability that the team will win, multiplied by the share of the winnings the investors would get if the team won] is sufficiently higher than [the amount of cold hard cash the team needs to get started]; what “sufficiently” means here depends on the other opportunities the investors have for their money.

For example, let’s say the investors think there’s a 10% chance that the team wins; the team promised the investors $500mm of the total $1bn prize if they win; and the team needs $40mm to get started. In this case, the investors expect to make $50mm in revenue (10% probability of success multiplied by $500mm if the team wins) on an initial investment of $40mm. That’s an expected 25% return on their investment, which is traditionally considered pretty great. However, if the investors have another opportunity that they expect to make 30% on, then it’s not sufficiently great.
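To make the note concrete, here’s a minimal sketch of that expected-return arithmetic in Python. The function name and the comparison against a “hurdle rate” are my own framing for illustration; the numbers are just the hypothetical figures from the example above, not anything from a real deal:

```python
def expected_return(win_probability, investor_share_of_prize, upfront_cost):
    """Expected return on investment for backing a prize-seeking team.

    win_probability: investors' estimate that the team wins the prize
    investor_share_of_prize: dollars the investors get if the team wins
    upfront_cost: cold hard cash the team needs to get started
    """
    expected_revenue = win_probability * investor_share_of_prize
    return (expected_revenue - upfront_cost) / upfront_cost

# The example from the note: 10% chance of winning, $500mm promised
# to the investors out of the $1bn prize, $40mm needed to get started.
roi = expected_return(0.10, 500_000_000, 40_000_000)
print(f"Expected return: {roi:.0%}")  # Expected return: 25%

# Investors compare this against their next-best opportunity: a 25%
# expected return looks great on its own, but not next to a 30% one.
hurdle_rate = 0.30
print("Invest" if roi >= hurdle_rate else "Pass")  # Pass
```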