Impact Prizes as an alternative to Certificates of Impact

Epistemic state: quite uncertain

TLDR Example

An EA donor puts up a $50k prize for distribution in 2022. In 2022, several projects that have started since 2019 apply. Their net EA impacts are estimated, and these estimates (versus the total value estimate of all submissions) are eventually used to give them corresponding proportional amounts of the $50k.

Back in 2019, several projects sell “rights” to their prize, and these claims get resold. It’s expected that $1M in estimated total value will apply, so the market value of the claim on every $10 of estimated impact is $0.50. One project sets up an estimation service where it publicly estimates the eventual evaluation of every project, to help make the market more efficient, with the goal of itself getting part of the prize.
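The arithmetic in this example can be sketched in a few lines (a minimal illustration; the function names and project figures are my own, not a proposed design):

```python
# Sketch of the TLDR example: a fixed prize pool is split in proportion
# to each project's estimated impact, and the implied market price of a
# claim follows from the expected total impact. All numbers illustrative.

PRIZE_POOL = 50_000  # the $50k prize

def proportional_payouts(estimated_impacts, pool=PRIZE_POOL):
    """Split the pool in proportion to each project's estimated impact."""
    total = sum(estimated_impacts.values())
    return {name: pool * impact / total
            for name, impact in estimated_impacts.items()}

def implied_price_per_dollar_of_impact(expected_total_impact, pool=PRIZE_POOL):
    """Market value of a claim on $1 of estimated impact, given expectations."""
    return pool / expected_total_impact

payouts = proportional_payouts({"A": 400_000, "B": 350_000, "C": 250_000})
print(payouts)  # {'A': 20000.0, 'B': 17500.0, 'C': 12500.0}
print(implied_price_per_dollar_of_impact(1_000_000) * 10)  # 0.5, i.e. $0.50
```

The $1M of expected total value in the example gives each $10 claim a market value of $0.50, as the post states.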

Impact Prizes

I really like the goal of Certificates of Impact, but personally find them suboptimal in practice. I think Impact Prizes present an interesting alternative. It’s also possible Certificates of Impact could be used with Impact Prizes to gain the advantages of both down the road.

The most basic definition of Impact Prizes is something like,

“Declarations and fulfillment of prizes aimed at public benefit.”

Such a definition would apply to many existing charity prizes. They’ve recently been used with success on LessWrong in the iterations of the AI Alignment Prize.

I think these get more interesting with some extra, less-explored features.

Possible Features


If one group has an expectation of making $2,000 of prize money from a future Impact Prize, they should be able to sell that claim to a third party. This should be really simple, and that third party should be able to easily resell it.

We can call “parts” of this claim “tokens.”[1]

Suppose there’s a single $10,000 prize, to be awarded in 2020, and a specific group has a 20% chance of winning that prize. Then that group has an expected value of $2,000 of prize money. That group creates 100 tokens representing 100% of the claim to that prize. They sell 50 tokens for $1,000.

Later, they do win the $10,000 prize. However, because they only own 50% of the tokens, they only get $5,000. The other $5,000 goes to the token purchaser.
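The token mechanics above can be sketched as a toy illustration (the `Project` class and its methods are my own invention, not a proposed API):

```python
# Minimal sketch of the token mechanics described above: a project mints
# tokens, sells some of them, and a won prize is split in proportion to
# token holdings. Names and figures are illustrative.

class Project:
    def __init__(self, total_tokens):
        self.total_tokens = total_tokens
        self.holdings = {"creator": total_tokens}  # creator starts with 100%

    def sell(self, buyer, n_tokens, price):
        """Transfer tokens from the creator to a buyer for some price."""
        self.holdings["creator"] -= n_tokens
        self.holdings[buyer] = self.holdings.get(buyer, 0) + n_tokens

    def payout_split(self, prize):
        """Split a won prize in proportion to token holdings."""
        return {holder: prize * n / self.total_tokens
                for holder, n in self.holdings.items()}

p = Project(total_tokens=100)
p.sell("early_backer", n_tokens=50, price=1_000)  # 50% of the claim for $1,000
print(p.payout_split(10_000))  # {'creator': 5000.0, 'early_backer': 5000.0}
```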

Proportional Prizes

If only one prize were given, then token purchasers would only be interested in projects that have a chance of being the top submitted project. This seems suboptimal.

Imagine instead that once the prize evaluation session begins, every single project is numerically evaluated for impact. Then each one gets a reward in proportion to the impact that the project was rated as having.


Say the $10,000 prize attracted 20 project entries, each of which was evaluated to have saved 1 life (these were all efficient anti-malaria projects). Each individual project would be awarded $500.

This means that even small projects would receive rewards, and thus, they could effectively issue and sell tokens.

Probabilistic Evaluations

If a random sample of projects was selected to be evaluated, this may not change the expected value for token purchases that much. A smaller proportion would get awards, but they would get proportionally more.

If the distribution of project impacts was considered very long-tailed and the sample very small, then this would disincentivize investments in better projects. Perhaps one partial solution would be to do a first quick round of review, ensure that the highest-potential projects make the cut to be in the prize round, and then randomly select lower-potential projects for the rest of the round.
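A quick simulation can illustrate why random sampling roughly preserves expected payouts (a sketch with a uniform sampling rule; the seed, sample rate, and project count are all assumptions, and this ignores the long-tail concern above):

```python
import random

# Illustrative check that evaluating only a random sample of projects,
# while letting the sampled projects split the full pool, roughly
# preserves each project's expected payout. Numbers are assumptions.

PRIZE = 10_000
SAMPLE_RATE = 0.25
impacts = {f"project_{i}": 1.0 for i in range(20)}  # 20 equal-impact projects

def sampled_payouts(impacts, sample_rate, prize, rng):
    """Evaluate a random subset; the subset splits the whole pool."""
    chosen = {k: v for k, v in impacts.items() if rng.random() < sample_rate}
    if not chosen:
        return {}
    total = sum(chosen.values())
    return {k: prize * v / total for k, v in chosen.items()}

rng = random.Random(0)
n_rounds = 10_000
avg = sum(sum(sampled_payouts(impacts, SAMPLE_RATE, PRIZE, rng).values())
          for _ in range(n_rounds)) / (n_rounds * len(impacts))
print(round(avg, 1))  # close to the $500 per project under full evaluation
```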

Pragmatic Priors

Instead of using probabilistic evaluations, it could make sense to use decent priors. Imagine that all projects start off with wide distributions based on empirical priors. Then evaluators would gradually narrow these down in multiple passes, spending evaluation time roughly in proportion to the impact on the final result.
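One way this multi-pass narrowing could look, as a rough sketch (the interval-halving rule and all numbers are assumptions for illustration, not a proposed evaluation method):

```python
# Rough sketch of the multi-pass idea above: every project starts with a
# wide impact interval from a shared empirical prior, and each pass
# narrows the widest (highest-uncertainty) estimate first, so evaluator
# time goes where uncertainty is greatest. The halving rule is a stand-in
# for a real evaluation step.

def evaluate_in_passes(projects, n_passes):
    """projects: {name: (low, high)} impact intervals from a shared prior."""
    intervals = dict(projects)
    for _ in range(n_passes):
        # Pick the project with the widest remaining interval.
        name = max(intervals, key=lambda k: intervals[k][1] - intervals[k][0])
        low, high = intervals[name]
        mid = (low + high) / 2
        quarter = (high - low) / 4
        # Stand-in for an evaluation: halve the interval around its midpoint.
        intervals[name] = (mid - quarter, mid + quarter)
    return intervals

prior = (0, 1_000)  # wide empirical prior applied to every project
result = evaluate_in_passes({"A": prior, "B": prior, "C": prior}, n_passes=6)
print(result)  # each pass halves the widest remaining interval
```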

Counterfactual Prize Adjustment

I assume the main goal for many Impact Prizes would be to encourage valuable activity. This may not actually correlate that well with total project impact. There could be many submitted projects that would have been done equally well if not for the Impact Prizes.

If this was a concern, it may be reasonable to estimate counterfactual prize value on some scale along with project value. Projects that would have been helped more by marginal prize money could be granted proportionally larger prizes. Say each project is rated on a linear 0–10 scale of “counterfactual effect of prize amount”, and this rating is multiplied by its project impact estimate.


Say project A is a large-scale United Nations effort that created $2 million of value, and project B is a smaller project by an independent organization. It comes out that the United Nations effort would have done the project without any expectation of a reward, while for the independent organization, the reward was a decisive factor. In this case, it seems possibly useful to be able to favor the independent organization in the award outcome.
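A sketch of how such a weighting could combine with the proportional payout scheme (the figures mirror the example above; the multiply-then-normalize rule is the one the text proposes, with all other details assumed):

```python
# Sketch of the counterfactual adjustment above: each project's impact
# estimate is weighted by a 0-10 "counterfactual effect of prize amount"
# score before the pool is split. Figures are illustrative.

PRIZE = 10_000

def adjusted_payouts(projects, prize=PRIZE):
    """projects: {name: (impact_estimate, counterfactual_score_0_to_10)}"""
    weights = {name: impact * score
               for name, (impact, score) in projects.items()}
    total = sum(weights.values())
    return {name: prize * w / total for name, w in weights.items()}

payouts = adjusted_payouts({
    # A large UN effort: huge impact, but would have happened anyway.
    "un_effort": (2_000_000, 1),
    # A smaller independent project for which the prize was decisive.
    "independent": (500_000, 10),
})
print(payouts)  # the independent project receives the larger share
```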

Tooling and Earmarking

Instead of presenting $10,000 for “all projects”, it may make more sense to divide this pool to encourage a few areas. For instance, it may be common practice to earmark 20% for support and evaluation. The idea of this would be to encourage some people to “do good” by doing things that would help the prize. Some things to help could include setting up a Prediction Tournament to establish common knowledge of prize expectations, or web tools to make purchasing and selling more accessible.

Because some prizes would go towards efforts to help the prize system, this could lead to a minor prize-value-promotion economy. As stated above, some people could set up prediction systems, and other people could make predictions of prize outcomes on them. Others may act as police, detecting and reporting on bad actors.

Users could investigate not only bad behavior but also good behavior. In many tournament systems, groups become quite competitive, and indirect services like education or collaboration can be undervalued. If some of these areas hold a lot of value, then pointing that out should itself be evidently valuable. The presence of motivated actors actively investigating and promoting overlooked activity would hopefully lead to more of that activity.

Dealing With Multiple Prizes

One disadvantage of Impact Prizes, compared to Certificates of Impact, is that they could get complicated when there are several different prizes by different donors. A naive implementation of Impact Prizes could demand a unique token minting per project per prize, which would make things very messy. Any given project may have dozens of tokens to worry about and trade, and many exchanges may involve clusters of tokens at a time.

A simpler setup would look something more like Certificates of Impact. Only one token is made per project, but that token can be used for all Impact Prizes. Perhaps there would be a few common standards of tokens for Impact Prizes with different parameters.

Say 50% of the tokens of a project are sold for $2,000, and later that project wins $5,000 from an Impact Prize. In the case of shared tokens, this token holder could expect to possibly win even more money later on from other Impact Prizes as well.

A related technique could be for future donors to often donate to existing Impact Prizes instead of creating new ones. This would mean that Impact Prizes would be lower-bounded (the existing cash pool) but not simply upper-bounded (it’s not clear how much more money will be added).

It’s possible that Certificates of Impact could work effectively as one of these token standards.

Risks and Insurance

One bias that these systems may create is that actors may be motivated to maximize upside risk, but may not care about minimizing downside risks. As long as Impact Prizes can only give out money (rather than demand money), then the lowest one should expect from a highly risky outcome is zero.

One way to get around this would be with formal insurance systems. All projects that create tokens could be required to purchase insurance upon project formation. When it comes to evaluation time, the Impact Prize could request that the insurer pay out for any projects that are evaluated to be net-negative. It’s not obvious how to strike a balance between charging for the entire cost or for a proportional cost.

In the case of multiple prizes, perhaps damages should be handled outside the prize system.


Legal Implications

I think that tokenized Impact Prize systems in particular may be quite legally complicated. Corporate stock systems come with lots of rules, in part because there’s been an established record of people manipulating them in shady ways for personal gain.

If sophisticated financial instruments like shorting became possible, challenges could arise that would normally be addressed by corporate law. For instance, insider trading is regulated, in part, to prevent corporate employees from taking relatively simple actions to short their own stocks and then purposely causing bad things to happen.

If an Impact Prize system was established, it would have to work either within the current legal infrastructure, like stock, or outside of it. Both come with disadvantages.

It’s possible that only accredited investors would be allowed to purchase Impact Prize tokens, though this may be a fine first step.

These considerations would really need to be evaluated by an actual attorney. I suggest anyone considering doing this at scale hire an attorney first.

That said, similar problems would come up with Certificates of Impact if they were done at a similar scale. They also may be adequately addressed by existing cryptocurrency token projects.

Evaluation Costs

The final prize evaluations could be quite costly to produce. A few methods above could help, but significant costs would remain. I feel like there are probably clever ways of thinking about this to incentivize everyone to maximize total value. For example, perhaps the evaluation cost comes out of each project’s value, incentivizing projects not to apply if that total would be below zero, and incentivizing them to make the evaluation easy.
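A sketch of how deducting evaluation costs could change incentives (the netting rule and all figures are assumptions for illustration):

```python
# Sketch of the cost-netting idea above: the evaluation cost is deducted
# from each project's assessed value before the pool is split, so a
# project whose value is below its evaluation cost has no reason to
# apply, and cheap-to-evaluate projects are rewarded. Figures illustrative.

PRIZE = 10_000

def net_payouts(projects, prize=PRIZE):
    """projects: {name: (assessed_value, evaluation_cost)}"""
    net = {name: value - cost for name, (value, cost) in projects.items()}
    positive = {name: v for name, v in net.items() if v > 0}
    total = sum(positive.values())
    return {name: prize * v / total for name, v in positive.items()}

payouts = net_payouts({
    "easy_to_evaluate": (100_000, 2_000),   # cheap to assess: high net value
    "hard_to_evaluate": (100_000, 60_000),  # costly to assess: lower net value
    "marginal":         (1_000, 5_000),     # net-negative: shouldn't apply
})
print(payouts)  # the marginal project gets nothing; easy beats hard
```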

Openness Costs

My current model is that there are a lot of incentives not to make most kinds of evaluations public. Perhaps the best comparison is prizes that are given out based on rubrics, though most of the results of those rubrics are not made public.

The Impact Prize evaluations may be controversial and are likely to be at least somewhat misunderstood. Public evaluations may really require a community that is quite epistemically mature.

Controversy could create liability. If a Twitter war or similar got started, it’s possible there could be enough anger to cancel a prize, or at least to stop future prizes.

For such a system to work well, a decent amount of work may be needed both on technical tooling and on implementation creativity.

Cultural Risks

If Impact Prizes took off, I could imagine some actors being drawn into the ecosystem who are only motivated by making profits. The token system may be looked at as a form of gambling (somewhat similar to the stock market) and may lead to some gambling tendencies. I think there may be some significant downsides here, but I estimate that the upsides will be higher (though this should be tested!). It could obviously be partly combatted using some of the techniques mentioned above.

Comparison to Certificates of Impact

A Philosophical Comparison

Perhaps the main philosophical difference between Impact Prize tokens and Certificates of Impact is that Certificates of Impact, according to Paul Christiano, are supposed to represent causal responsibility. As he writes,

Allocating certificates requires explicit and transparent allocation of causal responsibility, both within teams and between teams and donors.

I personally find the causal responsibility bit unintuitive, and don’t expect a much larger community (especially outside the EA sphere) to accept it.

Impact Prize tokens would be decoupled from this idea.

A Ratio Comparison

I believe Certificates of Impact are supposed to be priced at their expected rates of impact, so $1 worth of certificates means $1 of counterfactual impact.

I think this will prove somewhat inflexible. I question the market viability of a $1 to $1 peg. If demand is much less than what is necessary to sustain a $1 to $1 peg, then I would expect this to result in an illiquid market.

That said, of course a $1 to $1 peg would make things very simple if it works. A variable ratio could be fairly confusing and could require sophisticated purchasers (well, ones that could do two multiplications).


Perhaps the main challenge to Impact Prizes as I discuss them is their additional complexity compared to Certificates of Impact. They may require setting up a prize in advance, and then either doing a bunch of evaluations or figuring out clever ways of decreasing that burden.

Further Work

The feature space is quite large, and I’d like it to be larger. I’d be curious to hear other ideas for features, or modifications to the features above.

One area I’m particularly interested in is how best to structure the openness of evaluations. I think the “Openness Cost” is very significant, and it would be nice to reduce it while still maintaining much of the benefit of the evaluations.

[1] There’s much about the blockchain world I don’t like, but it has used tokens extensively for this specific purpose. I don’t want to use the word “shares” because these parts will not have any voting rights, and legally there are other important distinctions.

Special thanks to Ryan Carey for discussing the concept, providing writing feedback, and suggesting I use the name “Impact Prizes” instead of something more obtuse.