How Can Donors Incentivize Good Predictions on Important but Unpopular Topics?

Altruists often would like to get good predictions on questions that don’t necessarily have great market significance. For example:

  • Will a replication of a study of cash transfers show similar results?

  • How much money will GiveWell move in the next five years?

  • If cultured meat were price-competitive, what percent of consumers would prefer to buy it over conventional meat?

If a donor would like to give money to help make better predictions, how can they do that?

You can’t just pay people to make predictions, because there’s no incentive for their predictions to actually be accurate and well-calibrated. One step better would be to pay out only if their predictions are correct, but that still incentivizes people who may be uninformed to make predictions, because there’s no downside to being wrong.
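The incentive problem can be made concrete with a little arithmetic. Here is a toy calculation (the stakes are invented for illustration) comparing a pay-only-if-correct scheme against a symmetric bet, from the perspective of someone who is just guessing:

```python
def expected_payoff(p_correct, reward, penalty=0.0):
    """Expected payoff of predicting, given probability p_correct of being right."""
    return p_correct * reward - (1 - p_correct) * penalty

# Pay-only-if-correct: even a coin-flipping guesser has positive expected
# value, so nothing discourages uninformed predictions...
print(expected_payoff(0.5, 100))       # 50.0

# ...whereas a symmetric bet (equal upside and downside) gives the
# uninformed guesser zero expected value:
print(expected_payoff(0.5, 100, 100))  # 0.0
```

Only people who think they know something beyond chance have positive expected value under the symmetric scheme, which is the point of moving from prizes to bets.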

Another idea is to offer to make large bets, so that your counterparty can make a lot of money for being right, but they also want to avoid being wrong. That would incentivize people to actually do research and figure out how to make money off of betting against you. This idea, however, doesn’t necessarily give you great probability estimates, because you still have to pick a probability at which to offer a bet. For example, if you offer to make a large bet at 50% odds and someone takes you up on it, that could mean they believe the true probability is 60% or 99%, and you don’t have any great way of knowing which.
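To see why a taken bet is ambiguous, here is a small sketch (stakes again invented for illustration) of the counterparty’s expected profit from taking the “yes” side at your offered odds:

```python
def expected_profit(believed_p, offered_p, stake=100):
    """Expected profit of taking the 'yes' side of a bet offered at offered_p.

    Taking 'yes' at offered_p means risking stake * offered_p in order to
    win stake * (1 - offered_p) if the event happens.
    """
    win = stake * (1 - offered_p)
    loss = stake * offered_p
    return believed_p * win - (1 - believed_p) * loss

# Anyone who believes the probability exceeds 50% profits in expectation
# from taking 'yes' at 50% odds, so the mere fact that the bet was
# accepted doesn't tell you whether they believe 60% or 99%:
print(expected_profit(0.60, 0.50))  # positive
print(expected_profit(0.99, 0.50))  # positive (and larger)
```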

You could get around this by offering lots of bets at varying odds on the same question. That would technically work, but it’s probably a lot more expensive than necessary. A slightly cheaper method would be to determine the “true” probability estimate by binary search: offer to bet either side at 50%; if someone takes the “yes” side, offer again at 75%; if they then take the “no” side, offer at 62.5%; continue until you have reached satisfactory precision. This is still pretty expensive.
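As a sketch, the binary-search procedure might look like this, with a simulated counterparty who takes whichever side is profitable given a private belief (a simplifying assumption; real bettors also care about stake sizes and risk):

```python
def elicit_probability(takes_yes, lo=0.0, hi=1.0, precision=0.01):
    """Binary-search for a counterparty's probability by offering bets.

    takes_yes(p) should return True if the counterparty accepts the 'yes'
    side of a bet offered at probability p (i.e. they think the true
    probability is higher than p), and False if they take the 'no' side.
    """
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if takes_yes(mid):
            lo = mid  # their probability is above the offered odds
        else:
            hi = mid  # their probability is below the offered odds
    return (lo + hi) / 2

# A simulated counterparty who privately believes the probability is 62%.
# The search offers 50%, then 75%, then 62.5%, and so on, just as above:
estimate = elicit_probability(lambda p: 0.62 > p)
print(round(estimate, 2))  # 0.62
```

Each iteration costs you at most one bet, so reaching precision of 1% takes about seven offers rather than a hundred.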

In theory, if you create a prediction market, people will be willing to bet lots of money whenever they think they can outperform the market. You might be able to start up an accurate prediction market by seeding it with your own predictions; then savvy newcomers will come and bet with you; then even savvier investors will come and bet with them; and the predictions will get more and more accurate. I’m not sure that’s how it would work out in practice. And anyway, the biggest problem with this approach is that (in the US and the UK) prediction markets are heavily restricted because they’re considered similar to gambling. I’m not well-informed about the theory or practice of prediction markets, so there might be clever ways of incentivizing good predictions that I don’t know about.

Anthony Aguirre (co-founder of Metaculus, a website for making predictions) proposed paying people based on their track record: people with a history of making good predictions get paid to make more predictions. This incentivizes people to establish and maintain a track record of making good predictions, even though they don’t get paid directly for accurate predictions per se.
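For concreteness, one standard way to score such a track record is a proper scoring rule like the Brier score (this is just an illustration; I don’t know what scheme Metaculus actually has in mind):

```python
def brier_score(forecasts):
    """Mean Brier score: average of (p - outcome)^2, where outcome is 1 or 0.

    Lower is better. Because this is a proper scoring rule, a forecaster
    minimizes their expected score by reporting their honest probability,
    so ranking people by it (and paying the best) rewards genuine accuracy
    rather than gaming.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# On the same three resolved questions, a sharp, well-calibrated
# forecaster scores better (lower) than one who always hedges at 50%:
sharp  = [(0.9, 1), (0.2, 0), (0.8, 1)]
hedged = [(0.5, 1), (0.5, 0), (0.5, 1)]
print(brier_score(sharp))   # about 0.03 (lower = better)
print(brier_score(hedged))  # 0.25
```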

Aguirre has said that Metaculus may implement this incentive structure at some point in the future. I would be interested to see how it plays out and whether it turns out to be a useful engine for generating good predictions.

One practical option, which goes back to the first idea I mentioned, is to pay a group of good forecasters like the Good Judgment Project (GJP). In theory, they don’t have a strong incentive to make good predictions, but they did win IARPA’s 2013 forecasting contest, so in practice it seems to work. I haven’t looked into how exactly to get predictions from GJP, but it might be a reasonable way of converting money into knowledge.

Based on my limited research, it looks like donors may be able to incentivize predictions reasonably effectively with a consulting service like GJP, or perhaps by doing something involving prediction markets, although I’m not sure what. I still have some big open questions:

  1. What is the best way to get good predictions?

  2. How much does a good prediction cost? How does the cost vary with the type of prediction? With the accuracy and precision?

  3. How accurate can predictions be? What about relatively long-term predictions?

  4. Assuming it’s possible to get good predictions, what are the best types of questions to ask, given the tradeoff between importance and predictability?

  5. Is it possible to get good predictions from prediction markets, given the current state of regulations?