How effective are prizes at spurring innovation?

Editorial note

This report is a “shallow” investigation, as described here, and was commissioned by Open Philanthropy and produced by Rethink Priorities. Open Philanthropy does not necessarily endorse our conclusions.

The primary focus of the report is a literature review of the effectiveness of prizes in spurring innovation and what design features of prizes are most effective in doing so. We also spoke to one expert. We mainly focused on large inducement prizes (i.e. prizes that define award criteria in advance to spur innovation towards a pre-defined goal). However, as there is relatively little published literature on this type of prize, we also include and discuss the evidence we found on other types of prizes (such as recognition prizes) and related concepts (such as advance market commitments and Grand Challenges).

We don’t intend this report to be Rethink Priorities’ final word on prizes and we have tried to flag major sources of uncertainty in the report. We hope this report galvanizes a productive conversation about the effectiveness of prizes within the effective altruism community. We are open to revising our views as more information is uncovered.

Key takeaways

  • Recent decades have witnessed a boom in large prizes. Since 1970, the total cash value offered by large (≥ $100,000) recognition and inducement prizes has grown exponentially. This growth has mainly been driven by inducement prizes, which accounted for 78% of the value of large prize purses awarded between 1991 and 2007.

  • We found little quantitative empirical evidence on the effect of prizes on innovation, arguably due to two factors: First, as there is a substantial divergence between economic theory and actual prizes implemented in practice, there is little theory for empirical research to test. Second, it’s difficult to do a counterfactual analysis, and there is an over-reliance on historical case studies, which are often misleading.

  • We found only a few studies on the impact of prizes on innovation and intermediate outcomes that have used a convincing counterfactual analysis (summary table here):

    1. For inducement prizes, we found only one study establishing a causal effect of prizes on innovation, using the number of patents as a proxy, in 19th/early 20th century England (Brunt, Lerner, & Nicholas, 2012). Two other articles focused on intermediate outcomes and found that prizes increased the number and diversity of coauthor collaborations and influenced the direction of research (Sigurdson, 2021).

    2. For recognition prizes, we found stronger evidence on their innovation-related and field-shaping effects, including large positive effects on the number of publications, citations, entrants, and incumbents for prizewinning topics since 1970 (Jin, Ma, & Uzzi, 2021). Moreover, there is evidence of recognition prizes boosting patents in late 19th and early 20th century Japan (Nicholas, 2013). We also found evidence that prizes have become increasingly concentrated among a small group of scientists and ideas in recent decades (Ma & Uzzi, 2018), and that there may be negative spillover effects of prizes on the allocation of attention (Reschke, Azoulay, & Stuart, 2018).

  • Inducement prizes can leverage substantial amounts of private capital, with figures pointing to 2-50 times the amount of private capital relative to the cash rewards. We haven’t vetted these numbers, but our best guess is that the average large inducement prize (≥ $100,000) leverages 2-10 times the amount of private capital relative to the cash rewards (80% confidence interval).

  • There seems to be no consensus in economic theory about when to choose prizes for innovation (over patents or grants). The policy literature provides some rules of thumb: Prizes are most useful (1) when the goal is clear but the path to achieving it is not, and (2) in industries that are susceptible to underproduction of innovation due to market failure (e.g. neglected tropical diseases or climate change interventions).

  • We have found relatively little empirical (or theoretical) evidence on how to effectively design a prize. The available evidence suggests:

    1. There is only a weak relationship between cash amounts and innovative activity and outputs. It appears that (prestigious) medals provide stronger incentives than monetary rewards.

    2. No compensation scheme performed unambiguously better than the others, but a winner-takes-all scheme in a single contest, and a multiple prize scheme in a series of successive contests, could yield more innovative activity and output.

    3. A smaller number of participants leads to higher efforts but reduces the likelihood of finding a particularly good solution. A diverse set of participants seems to be beneficial.

  • Several critiques of prizes exist, including Zorina Khan’s critical review of prizes from a historical perspective highlighting that prizes can fail easily if not well-designed. Moreover, inducement prizes are associated with a number of risks, such as the exclusion of certain population groups in the pool of participants, and the potential of duplicative and wasteful efforts by participants.

  • We provide a list of exemplary recent and large inducement prizes and two case studies of large-scale inducement prizes (Google Lunar X Prize and Auto X Prize).

  • We also reviewed two closely related concepts to prizes:

    1. One idea that has recently gained momentum in the global health and development space is advance market commitments (AMCs), with the pneumococcal pilot AMC yielding promising outcomes. However, the pilot focused only on building supply capacity for an already existing vaccine, and AMCs have not yet been tested as a tool to incentivize R&D activity. AMCs have received some criticism, most notably that the cost-effectiveness of the pilot was low relative to other vaccine interventions ($4,722 per child saved according to one estimate). It is not very clear when AMCs should be used (vs. prizes or other mechanisms), but the choice likely depends on the level of market maturity and the type of market failure. We believe that AMCs are a promising incentive mechanism that deserves further review.

    2. We also looked at Grand Challenges, which appear to be a mixed model in terms of funding: applicants are funded for their grant proposals, but also receive additional support throughout the development pipeline if successful. Additionally, Grand Challenges appear more problem-focused in their calls for applicants, compared to other prizes, which seek a specific solution.

  • In conclusion, we don’t think the evidence supports an indiscriminate use of inducement prizes, but we recommend considering them in specific circumstances (e.g. in the case of market failure and a clear goal but unclear path to success). We also recommend reviewing recognition prizes more closely, as we found them to be associated with more positive outcomes than we anticipated. We are not convinced that very large cash rewards are beneficial and advise focusing more on creating prestige and visibility around a prize instead. We also believe that it would be worthwhile to review AMCs further, especially for novel and untested applications.

  • If we had more time, we would spend more time reviewing the literature on recognition prizes, as we found them to be more promising than we anticipated. We would also want to speak to scholars who have thought deeply about prizes, to check if we missed or mis-weighed any important considerations. Moreover, we would try to come up with more concrete recommendations on what incentive mechanism works best in what context (AMCs vs. inducement prizes vs. grants and other mechanisms). We would also review other potential applications and existing proposals of AMCs.

Types and definitions of prizes

In our understanding, there is no universally agreed-upon typology and set of prize definitions in the literature. According to Everett (2011, p. 7), the simplest distinction between prizes is that made between recognition prizes (also called blue-sky prizes or awards [Kay, 2011, p. 10]) and inducement prizes (also called targeted prizes)[1]. Recognition prizes are awarded ex post and in recognition of a specific or general achievement (e.g. Nobel Prize, Man Booker Prize). Inducement prizes are established ex ante, defining award criteria in advance in order to spur innovation towards a pre-defined goal (e.g. Ansari X Prize).

Within inducement prizes, some authors distinguish between grand innovation prizes and smaller-scale competitions (Murray et al., 2012, p.1), where the former refer to large-scale monetary prizes with no path to success known ex ante and believed to require a breakthrough solution and significant commitment, and the latter refer to challenges for well-defined problems that often require only limited time commitment or involve adapting existing solutions to problems (e.g. Innocentive, Topcoder).

The main focus of this report is on inducement prizes, and in particular, grand innovation prizes. However, as there is relatively little published literature on these types of prizes, we also include and discuss evidence we found on other types of prizes.

Two common features of inducement prizes are that (1) they only pay if a specific goal is achieved, and (2) they do not require the funder to decide how the goal should be met or who is in the best position to meet it (Kalil, 2006, p. 8). This stands in contrast to grants, which require the funding agency to determine who will receive funds to achieve a goal. Moreover, grants pay for efforts and are not tied to outcomes (McKinsey, 2009, p. 36).

Another focus of this report is advance market commitments (AMCs), which are conceptually related to prizes.[2] Advance market commitments offer a prospective guarantee by a donor to purchase a fixed amount of a specified technology at a fixed price (Koh Jun, 2012, p. 87). Advance market commitments are similar to prizes in that both are considered a form of “pull” funding; that is, they guarantee a reward upon an achievement that meets certain criteria (Koh Jun, 2012, p. 86). This is in opposition to “push” funding, which provides grants for the innovator’s investments whether or not they result in a successful product. The distinction between inducement prizes and AMCs is somewhat fuzzy,[3] but two exemplary differences are that (1) prizes are typically paid out in a lump sum, while AMCs are paid out on a per-unit basis, and (2) AMCs aim to induce production, while prizes focus on inducing innovation.

Grand Challenges, a set of initiatives launched in 2003 by the Gates Foundation, are yet another related concept that we investigate in this report.[4] According to the website, it is a “family of initiatives fostering innovation to solve key global health and development problems.” While we had difficulties pinning down the details of the incentive structure, our impression is that these initiatives are a hybrid of push and pull funding mechanisms. Grand Challenges differ from inducement prizes in that they award grants for selected research proposals, i.e. they pay for effort. On the other hand, in at least some cases grantees also have an opportunity to receive more funding based on the outcomes of their research. For example, in Grand Challenges Explorations, grantees can apply for additional support for projects that demonstrate innovative solutions (Grand Challenges Explorations Round 26 Rules and Guidelines, pp. 1, 6).

The recent boom in inducement prizes

The use of inducement prizes to incentivize technological and scientific breakthroughs dates back hundreds of years, and prizes have been proliferating as a tool used by policy-makers, firms, and NGOs in recent decades (Sigurdson, 2021, p. 1).

As Figure 1 shows, the total cash value offered by large (≥ $100,000) recognition and inducement prizes has grown exponentially since 1970 according to a data set of 219 prizes collected by McKinsey (2009, p. 16). They also found that the total number of prizes has increased steeply, with more than 60 of the 219 prizes having been launched since 2000.

Figure 1 - Aggregate prize purse, prizes over $100,000 (McKinsey, 2009, p. 16)

In recent decades, there has been a shift from recognition prizes towards inducement prizes. According to McKinsey’s analysis (2009, p. 17), before 1991, only 3% of the value of large prize purses (≥ $100,000) that were investigated in the report came from inducement prizes (with recognition prizes making up the other 97%). From 1991 to 2007, this number increased to 78% (see Figure 2 below). We have not found any more recent data on the growth of inducement prizes, but we suspect that the trend has continued given the extensive media coverage we found on prizes.

Figure 2 - Growth in inducement prizes (McKinsey, 2009, p. 17)

Little empirical evidence for the effect of prizes on innovation

Despite the recent boom in prizes described in the previous section, we found surprisingly little empirical evidence on the effect of prizes on any dimension of innovation.

According to Jin, Ma, and Uzzi (2021, p. 2), prize research so far has mainly studied how awards change prizewinners’ careers, and it is unclear whether the link between prizes and growth for a single prizewinner’s work extends to changes in the growth of an entire topic. They claimed that current theoretical arguments and empirical work are at a nascent stage.

Sigurdson (2021, p. 6) found no rigorous quantitative studies of the impact of modern large-scale inducement prizes on any dimension of innovation beyond the immediate technical solution to the prize itself. He attributes the difficulty of knowing whether or how inducement prizes impact innovation to two main factors:

  1. There is little theory for empirical research on modern inducement prizes to test, as much of the economic theory on inducement prizes has considered their use mainly as an alternative to patents.[5] (See Burstein and Murray [2016, p. 408] for a good description of the divergence between theoretical and actual prizes.)

  2. There is a lack of counterfactual analysis and an over-reliance on historical case studies, which are often misleading. Sigurdson (2021) found only a handful of studies on prizes that have used counterfactual analysis, and the prizes they examined were in many ways different from the types of prizes offered today, providing limited comparability to modern prizes like the X Prize.

Moreover, Sigurdson (2021) mentioned that due to many different existing forms of prizes, empirical studies are scattered across quite distinct prize forms. This limits the generalizability of findings from one study to another.

According to Burstein and Murray (2016, p. 408):

Modern innovation prizes, as typically implemented, are a scholarly mystery. Three literatures speak to such prizes — economic, policy, and empirical — and yet none adequately justifies the use of innovation prizes in practice, explains when they should be chosen over other mechanisms, or explains whether or why they work. As a result, prizes remain little understood as an empirical matter and poorly justified as a theoretical matter.

Available evidence suggests prizes can shape the trajectory of science and innovation

[Confidence: We have medium confidence that prizes can effectively spur innovation, which is largely based on four quasi-experimental studies and two case studies. We deem it unlikely that a longer review of the literature would yield substantially more insights. The publication of new quasi-experimental studies on the innovation-related effects of recent large prizes might change our view, but we are not aware of any forthcoming studies on this topic.]

We found little high-quality empirical evidence on the effectiveness of prizes in spurring innovation. In this section, we summarize our takeaways based on the most important pieces of available literature we found on the effects of prizes on innovation and intermediate outcomes. Please refer to Appendix 1 for a more thorough discussion of the evidence and Appendix 2 for an overview of the best quantitative studies (in our view) in a table format.

Apart from the studies we describe here, a useful summary of studies on the impact of inducement prizes can be found in Gök (2013).[6] We decided not to include the majority of studies reviewed by Gök in this section, as we deem their quality and internal validity comparatively low.[7] However, we refer to some of those studies in other sections of this report. We also created a list of other, probably less relevant, studies we came across during our research that we either decided not to review or didn’t have time to review here.

Regarding inducement prizes, we found only one article that tried to establish a causal effect of inducement prizes on innovation output (Brunt, Lerner, & Nicholas, 2012). The authors used data on nearly 2,000 awards and 15,000 entries for technological development by the Royal Agricultural Society of England (RASE) at annual competitions between 1839 and 1939. RASE awarded both medals and monetary prizes of more than £1 million. Using negative binomial regressions, the study found a significant and positive effect on the number of patents as a proxy for innovation. For example, in one specification, an additional medal was associated with an 8% increase in the number of patents. We deem the internal validity high enough that we trust at least the sign of the relationship. However, we think its external validity is quite low. We are doubtful whether inducement prizes for agricultural technology in 19th and early 20th century England can teach us much about how effective a modern inducement prize for other technologies or outcomes would be. Moreover, we’re reluctant to put too much weight on this single piece of evidence.
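To make the estimation strategy concrete, here is a minimal, self-contained sketch of a negative binomial patent-count regression in the spirit of this study. All variable names and the simulated data are our own illustration (using Python with numpy, pandas, and statsmodels) and do not reproduce the authors’ actual specification:

```python
# Illustrative negative binomial regression of patent counts on prize variables,
# loosely in the spirit of Brunt, Lerner, & Nicholas (2012). All data are
# simulated and all variable names are hypothetical, not the authors' own.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500  # hypothetical technology-category/year observations

df = pd.DataFrame({
    "medals": rng.poisson(1.5, n),           # medals awarded in a category-year
    "prize_money": rng.gamma(2.0, 50.0, n),  # cash awards, arbitrary units
})

# Simulate overdispersed patent counts (variance > mean), the case the
# negative binomial model is designed for.
mu = np.exp(0.5 + 0.08 * df["medals"] + 0.001 * df["prize_money"])
df["patents"] = rng.negative_binomial(5, 5 / (5 + mu))

X = sm.add_constant(df[["medals", "prize_money"]])
result = sm.GLM(df["patents"], X,
                family=sm.families.NegativeBinomial(alpha=0.2)).fit()
print(result.summary())

# With a log link, a coefficient b on `medals` means one extra medal multiplies
# expected patents by exp(b); exp(0.08) ~ 1.08 corresponds to the ~8% effect
# size quoted above.
```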

Another well-executed set of articles, part of a doctoral dissertation, examined the causal relationship between inducement prizes and intermediate outcomes relevant to innovation (Sigurdson, 2021). The author used data on more than 1,600 participating and control scientists in the context of the 2005 DARPA Grand Challenge, an inducement prize for autonomous vehicles. Using a difference-in-differences approach combined with matching, he established that the prize increased the number and diversity of coauthors that participants collaborated with after the prize. More precisely, prize participants had a 31% increase in the number of unique coauthors per year within the 10-year period after the prize compared to non-participating researchers. Prize participants were also more likely than non-participating researchers to publish work with coauthors from other scientific disciplines. Another interesting finding was that prizes may influence the direction of research by enabling the discovery of breakthrough ideas. However, it is not clear to us how these intermediate outcomes relate to the overall quantity and quality of innovation.
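For readers unfamiliar with this design (which also underlies the Jin, Ma, and Uzzi [2021] study discussed below), here is a toy difference-in-differences sketch on simulated data. We assume matching has already produced comparable controls; nothing here reproduces the authors’ actual specifications:

```python
# Toy difference-in-differences estimate of a prize effect, illustrating the
# design used by Sigurdson (2021). All data are simulated; `treated` scientists
# stand in for prize participants, controls for matched non-participants.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400  # hypothetical scientists, observed once before and once after the prize

df = pd.DataFrame({
    "scientist": np.repeat(np.arange(n), 2),
    "post": np.tile([0, 1], n),                      # before/after the prize year
    "treated": np.repeat(rng.integers(0, 2, n), 2),  # participant vs. matched control
})

# Outcome: e.g. unique coauthors per year; the treatment adds ~30% post-prize,
# echoing the 31% increase reported in the dissertation.
base = np.repeat(rng.normal(10, 2, n), 2)
df["coauthors"] = base * (1 + 0.3 * df["treated"] * df["post"]) + rng.normal(0, 1, 2 * n)

# The coefficient on the treated:post interaction is the DiD estimate of the
# prize effect; standard errors are clustered at the scientist level.
did = smf.ols("coauthors ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["scientist"]}
)
print(did.params["treated:post"], did.bse["treated:post"])
```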

Regarding recognition prizes, there seems to be somewhat stronger evidence for their innovation-related and field-shaping effects. An impressive, high-quality article focused on 405 scientific prizes that were conferred 2,900 times between 1970 and 2007 with respect to 11,000 scientific topics in 19 disciplines (Jin, Ma, & Uzzi, 2021). The data set represents almost all recognition prizes worldwide in that time period. Using a difference-in-differences regression design combined with matching, the authors found huge positive effects on various indicators of research effort for prizewinning topics, such as the number of publications (40%), citations (33%), entrants (37%), and incumbents (55%) in the first 5-10 years after the prize. While this is no direct evidence of effects on innovation, we think it’s plausible to conjecture that the large amount of additional attention a prizewinning topic receives increases the speed and quality of innovation on that topic. Moreover, the sheer scale and comprehensiveness of the data set and, in our view, unusually high internal validity of the study convinced us to trust and put a lot of weight on its findings.

We also found direct evidence of the effects of recognition prizes on innovation from a study focusing on late 19th and early 20th century Japan (Nicholas, 2013). Similarly to Brunt, Lerner, and Nicholas (2012), the author used a negative binomial regression model on a data set combining patent counts and various information on prizes. He found that recognition prizes (mostly non-pecuniary) boosted patents, but only in less technologically developed areas of Japan. However, as the estimated effects had large standard errors and we deem the study’s external validity rather limited, we do not weigh this piece of research strongly in our conclusions.

A study on more than 3,000 recognition prizes in science in diverse disciplines over more than 100 years and in over 50 countries followed the career trajectories of almost 11,000 prizewinners (Ma & Uzzi, 2018). The authors found that prizes were increasingly concentrated among a small and tightly connected elite, suggesting that a small group of scientists and ideas pushed scientific boundaries. However, it is unclear how these networks affected knowledge transfer and innovation.

We also found evidence to suggest that articles written by scientists who later became Howard Hughes Medical Institute investigators (and thus received a significant amount of no-strings-attached funding) received more citations than those who did not, though the effect was small and short-lived (Azoulay, Stuart, & Wang, 2014). Interestingly, a related study found a redistribution effect of scientific attention and recognition (measured by the number of citations) away from researchers whose work is proximate to prizewinners’, suggesting negative spillover effects of prizes on the allocation of attention (Reschke, Azoulay, & Stuart, 2018). Only in comparatively poorly cited (i.e. neglected) fields was this effect canceled out by the extra attention drawn by the prizewinner.

Overall, the available (though scarce) evidence points to prizes having the potential and ability to affect intermediate outcomes (such as collaboration patterns among innovators) and to shape the trajectory of science and innovation. We found the evidence for the field-shaping effects of recognition prizes stronger and more convincing than for inducement prizes, though in both cases it’s difficult to anticipate the extent to which prizes affect the quantity and quality of innovation. Our view is mainly based on the fact that inducement prizes have been less researched, not because recognition prizes are necessarily better.

Prizes can effectively leverage private capital

[Confidence: We have medium confidence that prizes can leverage significant amounts of private capital, based on recent example figures and historical prizes. We deem it unlikely that a further review of the literature would change our view. However, a systematic assessment of the leveraging effect of modern large-scale inducement prizes, which to our knowledge does not exist, would likely reduce our uncertainty.]

We found some empirical evidence that prizes can leverage private sector investment greater than the cash value of the prize. While, to our knowledge, there has been no systematic assessment of the leveraging effect of different prizes, we found some example figures, which point to 2-50 times the amount of private capital leveraged by prizes relative to the cash rewards.

Khan (as quoted in Hayes, 2021) investigated a large number of historical prizes and found that, “in almost all prize competitions, the investments of time and resources on the part of the competitors generally exceed even the absolute value of the award.” Khan’s result for historical prizes seems to also hold for modern grand innovation prizes, as some example figures show, including:

  • The Ansari X Prize stimulated at least $100 million in private capital with a cash value of $10 million (Hoyt & Phills, 2007).

  • The Shell Springboard Prize achieved a return on investment between 200% and 900%, where the return was measured as the total spending from competitors and investment represented the total cost of the competition (Everett, 2011, p. 13).

  • In the NASA Centennial Challenge, competitors pursued prizes whose value represented “about one-third of the amount it takes to win” (McKinsey, 2009, p. 25).

  • In the Northrop Grumman Lunar Lander Challenge, a $2 million prize spurred $20 million total investment (Kay, 2011, p. 87).

  • Brunt, Lerner, and Nicholas (2012, p. 5) found that the costs of technology development were three times higher than the monetary rewards in the RASE prizes.

  • Schroeder (2004) estimated the returns on investment for three different prizes. Strikingly, he found that entrant investments were 40 times higher than the size of the cash purse for the Ansari X Prize, and 50 times higher for the DARPA Grand Challenge.

Kalil (2006, p. 7) explained:

[T]his leverage can come from a number of different sources. Companies may be willing to cosponsor a competition or invest heavily to win it because of the publicity and the potential enhancement of their brand or reputation. Private, corporate dollars that are currently being devoted to sponsorship of America’s Cup or other sports events might shift to support prizes or teams. Wealthy individuals are willing to spend tens of millions of dollars to sponsor competitions or bankroll individual teams simply because they wish to be associated with the potentially historical nature of the prize. Most areas of science and technology are unlikely to attract media, corporate, or philanthropic interest, however.

We would like to note that we haven’t vetted any of the aforementioned figures and we suspect that they were calculated in different ways. Moreover, we don’t know whether these figures are representative, but we think it’s possible that there is a publication bias in the sense that prizes that were less successful or received less participant and media attention were less likely to be studied. Our best guess is that the average large inducement prize (≥ $100,000) leverages 2-10 times the amount of private capital relative to the cash rewards (80% confidence interval).
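As a back-of-the-envelope illustration of how such leverage ratios are computed, the sketch below divides reported private investment by the cash purse for two of the examples above. The figures are the unvetted ones quoted in the sources; note that Schroeder (2004) arrives at a much higher multiple for the Ansari X Prize under a different accounting of entrant investments:

```python
# Back-of-the-envelope leverage ratios: private capital mobilized per dollar
# of cash purse, using the (unvetted) example figures quoted above.
examples = {
    "Ansari X Prize (Hoyt & Phills, 2007)": (100e6, 10e6),
    "Northrop Grumman Lunar Lander Challenge (Kay, 2011)": (20e6, 2e6),
}
for name, (private_capital, purse) in examples.items():
    print(f"{name}: {private_capital / purse:.0f}x leverage")
# Both work out to 10x, at the upper end of our 2-10x best-guess range.
```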

When do prizes work best?

In this section, we first describe in what cases prizes should be used and might be a good choice compared to other innovation incentive mechanisms. We then discuss a number of prize design issues and their implications.

Prizes are effective when there is a clear goal with an unknown path to success, or in cases of market failure

[Confidence: We have relatively low confidence in our assessment of when prizes are effective, which is largely based on rules of thumb from the policy literature. We expect that 10 more hours of research (possibly consisting of a review of the policy literature and an interview with Zorina Khan) would provide us with an understanding of how these rules were derived and whether they are generalizable.]

According to Burstein and Murray (2016), economic theory on innovation incentives does not give concrete answers about when to choose prizes for innovation over patents or grants. Instead, it lists a number of factors that may influence the choice. Kapczynski (2012, p. 19) summarized:

[T]he . . . economics literature has proliferated a series of parameters that influence the comparative efficiency of these different systems, including, most importantly, the competitiveness of the research environment; the cost of research as compared to the value of the reward; the riskiness of research or creativity; the importance of private information about the cost or value of creation; the costs of overseeing effort in the context of contracts; and the comparative costs of rent seeking, uncertainty, and the administration of each system. The information economics literature thus offers no general endorsement of any mechanism.

Burstein and Murray (2016) also explained that the policy literature does not do much better at explaining and justifying prizes than does the economic literature, as it provides only rules of thumb for determining when and how to use prizes. According to the authors, a basic rule seems to be that prizes are most useful when the goal is clear but the path to achieving it is not (e.g. Kalil, 2006, p. 6). They explained that one line of literature suggests that prizes are useful in industries that are particularly susceptible to under-production of innovation because private actors lack a viable market. As two examples, they mentioned the market for pharmaceuticals targeting diseases endemic to the developing world, where ability to pay is lower than the need, and the market for technologies to address climate change where social value far exceeds private value. They found that these are two areas in which prizes have most frequently been proposed.

McKinsey (2009, p. 37) created a flow chart as a tool to decide when to use prizes versus other philanthropic instruments, such as grants or infrastructure investments. Essentially, they recommended using prizes “when a clear goal can attract many potential solvers who are willing to absorb risk. This formula is most obvious in so-called ‘incentive’ prizes. [...] But the formula also holds for good ‘recognition’ prizes like the Nobels.” While this sounds plausible to us, we are not sure where McKinsey derived these recommendations from, as this was not stated in their report.

In her book Inventing Ideas (2020), Zorina Khan, an economic historian and expert on prizes, wrote a list of cases and circumstances under which prizes are potentially effective (see Appendix 4 of the book). We copied this list here:

  • To achieve philanthropic or nonprofit objectives: This might include circumstances where market failure occurs, although the ultimate goal should be to enable markets to work rather than to replace the market mechanism with monopsonies

  • Social objectives: Prizes can help to promote unique, qualitative, social, or technical goals that are not scalable or for which there is no market

  • To publicize or draw attention to otherwise ignored issues: To focus attention, facilitate coordination, or signal quality: however, if the objective is publicity or opportunity to work with/learn from other competitors, then from a social perspective there are likely to be more effective means of marketing

  • As signals of quality: In markets for experience goods and instances where informational costs are high

We are not sure whether these points were derived from theoretical considerations or from empirical findings, as they were listed in the Appendix without any context. We skimmed through her book looking for concrete examples or justifications for these points, but we were unable to find them. A deeper read of the book or a phone call with Khan might provide clarification. Unfortunately, she was unavailable to meet with us.

To summarize, we did not find very concrete answers on when prizes are most effective compared to other mechanisms to incentivize innovation. However, two common rules we encountered were that prizes are effective (1) when the goal is clear, but the path to achieving the goal is not, and (2) when innovators lack a viable market to innovate, such as in the case of neglected tropical diseases or technologies to address climate change.

Prize design influences the ability to spur innovation

[Confidence: We have relatively low confidence in our recommendations regarding the optimal design of prizes, which are largely based on few small-scale lab or field experiments. A further review of the literature is unlikely to change our views, but the publication of new (quasi-) experimental studies on the effects of different design parameters of large-scale prizes might alter our conclusions.]

We have found relatively little empirical (or theoretical, for that matter) evidence on how to effectively design a large prize for innovation. We came across few papers that empirically tested the implications of different design features of prizes, and those we found were largely small experiments with relatively small monetary rewards. While we deem the research to be overall of high quality and high in internal validity, the findings may not necessarily extend to other settings. Moreover, it is not clear how different design features would interact. We summarize our most important findings and recommendations in the following (see Appendix 3 for a more thorough discussion of the studies we reviewed), and would like to emphasize that our recommendations are very tentative given this thin evidence base.

Prestige is a stronger incentive than monetary rewards

We reviewed six empirical studies (two quasi-experimental studies, one field experiment (RCT), two case studies, and one textbook that combines regression analyses with qualitative observations) on the importance of monetary vs. non-monetary prize incentives. The results from these studies are mixed. Some studies found that monetary incentives spurred innovation relative to non-monetary incentives alone (e.g. Boudreau & Lakhani, 2011; Jin, Ma, & Uzzi, 2021), while others found that medals provided stronger incentives than monetary rewards (e.g. Brunt, Lerner, & Nicholas, 2012; Kay, 2011; Khan, 2020; Murray et al., 2012). Overall, there appears to be only a weak relationship between the size of the cash reward and innovative activity and outputs. The balance of the evidence suggests that medals provide stronger incentives than monetary rewards, and that prize participants are typically motivated not only by money (as assumed by economic theory), but by a host of other factors (e.g. prestige, reputation, visibility). One could hypothesize that the prestige of an award is a function of the size of the prize, but this hypothesis is not strongly supported by the data we found: a McKinsey (2009, p. 58) analysis found only a weak correlation between the cash award of a prize and the prize’s exposure (proxied by the number of online mentions in Google search results).

We have not investigated how costly it is to create publicity and visibility around a prize, but according to Jim English,[8] who was interviewed by McKinsey (2009, p. 60), “prizes fail when the sponsor fails to understand how much effort and investment is required beyond the simple ‘economic capital’ of the award itself. A sponsor might imagine that a prize that carries cash value of, say, $50,000 requires around $60,000 or $75,000 a year to run. But depending on the kind of prize and the field of endeavor, the actual costs might be $500,000 or more when you include raising public awareness that a prize exists, inducing people to nominate and apply, mounting a publicity campaign, and administering the whole program.”

Our conclusion from these results is that offering a monetary reward makes sense, but only insofar as it increases the prestige and visibility of the prize. We recommend focusing more on creating prestige around a competition, and offering medals rather than a very large cash reward. We have not investigated the costs of creating publicity, but according to one figure, the total costs (including administrative and publicity costs) required to successfully run a prize might be around 10 times the offered cash award.

A winner-takes-all scheme could generate more novel innovation than a multiple prize scheme in a single contest

We reviewed three studies (one field experiment, one laboratory experiment, and one study combining an online experiment with an empirical analysis of an actual innovation platform) on the effects of different prize structures and compensation schemes. Graff Zivin and Lyons (2021) ran a field experiment with 184 participants in a software innovation contest with a cash purse of up to $15,000, and found that a winner-takes-all compensation scheme generated more novel innovation relative to a multiple prize scheme that rewarded a greater number of contributors. Hofstetter et al. (2017) found the same result in an online experiment and an analysis of 260 innovation contests from an innovation platform. However, they also found that when those schemes are compared in successive contests, the winner-takes-all scheme can have a deterrent effect on participants and decrease the effort and innovativeness of those who had received no reward in the first contest. Brüggeman and Meub (2015) compared a prize for aggregate innovativeness with a prize for the best innovation in a laboratory experiment, and found that neither option is unambiguously “better” than the other.

We believe that these findings do not support an unambiguous recommendation of a particular compensation scheme. However, depending on whether the prize competition is meant to be a single contest or a series of successive contests, we recommend a winner-takes-all scheme or a multiple prize scheme, respectively.

A larger, more diverse pool of competitors is better for high-uncertainty problems

We reviewed three studies (one field experiment, and two regression analyses based on online innovation platforms) on the effects of three different competitor characteristics. Boudreau et al. (2011) ran a regression analysis on 10,000 software competitions from a software competition platform. They found that constraining the number of competitors in a contest increased the effort exerted by participants, but decreased the chance of finding a particularly good solution. However, a larger number of competitors increased overall contest performance for high-uncertainty problems. Jeppesen and Lakhani (2010) found a positive relationship between problem-solving success and marginality (i.e. being distant from the field of a problem) in a regression analysis of 166 scientific challenges from an online platform with over 12,000 participating scientists. One interpretation of this finding could be that the best way to solve problems is to have experts from vastly different fields attempt solutions. A 10-day field experiment of a software contest with over 500 software developers found that allowing contest participants to self-select into different institutional settings can increase effort and the performance of solutions (Boudreau & Lakhani, 2011).

We recommend allowing a larger number of prize entrants if the challenge is around a high-uncertainty problem, and vice versa. Moreover, designing the contest such that it attracts a diverse set of participants will probably lead to better outcomes, but we are not sure how to achieve this in practice. We would advise against assigning entrants to specific teams and settings and recommend letting the participants self-select instead.

Limitations and risks of prizes

[Confidence: We have medium confidence regarding the critical review of prizes from a historical perspective, as our conclusions are largely based on only one researcher’s (Zorina Khan) high-quality body of work. As we are not aware of any other researcher who studied prizes from a historical perspective at a similar level of depth, we think that another 10 hours of research on this topic is unlikely to substantially change our views.]

Critical review of prizes from a historical perspective

Zorina Khan, a professor of economics and a specialist in intellectual property, entrepreneurship, and innovation, has extensively investigated prizes for innovation from a historical perspective. In our impression, she seems to be the most prominent scientist who has voiced skepticism about prizes in numerous instances. In the following, we attempt to summarize her critical perspective on prizes based on her book (2020), two articles (2015; 2017), and a podcast interview (Hayes, 2021), and state our opinions on and takeaways from her reasoning.

Patents, not prizes, fueled the rise of the U.S. economy

In her podcast interview and book, Khan (Hayes, 2021; Khan, 2020) discussed how the United States succeeded in overtaking Europe in the 19th century to become the global technology leader in the 20th century. She argued that patents, not prizes, fueled the rise of the U.S. as a global economic power, and she supported her hypothesis with two lines of reasoning. First, she contended that encouraging innovation by awarding prizes is inferior to granting patents. For example, in her 2017 article, she observed an adverse selection effect in a setting where prizes and patents were substitutes: inventors who had valuable ideas in the marketplace would bypass the prize system and pursue returns from commercialization (i.e. patents), whereas people with “rubbish inventions” would apply for prize awards. Second, she argued that the U.S. patent system, which was the first modern patent system with market-oriented patent policies, was superior to European patent systems and led to the democratization of inventions.[9] However, not all economists share her conclusions. For example, Moser (2016, p. 1) challenged the view that patents were the primary driver of innovation.

Overall, we are not convinced that these arguments provide strong reasons against using prizes. While patents may be superior to prizes in a context where both are substitutes for each other, we don’t think this argument necessarily extends to the modern context, where virtually all countries have patent systems and prizes and patents are complements.

Prizes can create market distortions

Khan (Hayes, 2021) also discussed the case when prizes and patents are complements — that is, when inventors can get both patents and prizes, as is the case today. She explained that this leads to a market distortion because inventors would get overcompensated through what she called “award stacking”: inventors chasing both a prize and a reward in the market. She argued that prizes are monopsonies: the person who is offering the prize is the only buyer. Khan explained a finding from her research that monopsonies can lead to very large social costs, including arbitrary, idiosyncratic outcomes, unjust discrimination, and even corruption. She wrote in her book (2020, p. 397):

Prizes can be effective for private entities who are able to free-ride off the efforts of the entire cohort striving for the award, while only paying for one successful solution; however, social welfare is reduced by the lost resources and investments made by the many losers in the prize competitions. This is especially true if the objective of the competition is highly specific to the grantor and results cannot readily be transferred to other projects. Moreover, the secrecy involved in most prize systems tends to inhibit the diffusion of useful information, especially for outsiders. These net social losses suggest that prize competitions are inappropriate policy instruments for government agencies that should be promoting overall welfare.

According to Khan, patents are different because they are market-oriented incentives. If an invention is valuable, the patentee is rewarded with profits in the market; if an invention is useless, they get nothing. Society also benefits because the patentee discloses all of the information to the public.[10]

More generally, Khan made the point that, whenever possible, the best option is to ensure that well-functioning markets exist. For example, Khan voiced skepticism about the Carbon X Prize, an ongoing $100 million prize funded by Elon Musk to incentivize innovations in carbon removal and, according to the X Prize Foundation, the largest prize in history (Hayes, 2021). She opined that, while the Carbon X Prize attracted a lot of media attention to the problem of excessive carbon emissions, this could have been done much more cheaply.[11] Khan explained that instead of grand innovation prizes, there is a need to set up mechanisms that ensure correct prices for emissions, and there are known ways of doing that (e.g. carbon taxes, futures markets, carbon offset credits). In her view, the best policy would be to auction off carbon rights to firms and facilitate markets for trading in emissions.

Again, we are not convinced that one can generally conclude that social welfare is reduced by the lost resources and investments made by losers in prize competitions. First of all, there are benefits to participants and society beyond winning the prize, such as what participants learn and the potential commercialization of their developments. Moreover, Khan’s line of reasoning here focuses on the case where market-oriented incentives exist and are sufficient to induce innovation. This is not always the case, as there can be market failures for various reasons, such as for medicines for neglected tropical diseases. Our takeaway is that prizes should be avoided in areas where sufficient market-oriented incentives to induce innovation already exist.

Historical prizes often failed

Khan (2015) surveyed and summarized empirical research using samples drawn from Britain, France, and the United States, including “great inventors” and their ordinary counterparts, and prizes at industrial exhibitions. She found that prizes suffered from a number of disadvantages in design and practice, which might be inherent to their non-market orientation. She argued that historical prizes were often much less successful than is often claimed today, and that current debates about prizes tend to center on historical anecdotes and potentially misleading case studies.

In her analysis of data on early prize-granting institutions in the 18th and 19th centuries, she reviewed a number of frequently cited historical prizes (e.g. the Longitude Prize of 1714), and explained what lessons this evidence offers for designing effective mechanisms to incentivize innovation. She argued that the majority of organizations specializing in granting prizes for industrial innovations at that time ultimately became disillusioned with this policy, partly due to the lack of market orientation of prizes. This was for numerous reasons. For example, prizes “were not wholly aligned with the economic value of innovations for the individual industry” (p. 18). Moreover, the majority of offered prizes were never actually granted. There was often a lack of transparency in the judging process, which led to idiosyncratic and inconsistent decisions — prizes were given out in an arbitrary manner, which reduced the incentives for inventors. Khan added that prizes tended to offer private benefits to both the proposer and the winner, largely because they served as advertisements. Winners of such prize awards were generally unrepresentative of the most significant innovations, partly because the market value of useful inventions was typically far greater than any prize that could be offered. She concluded:

This is not to say that administered inducements are never effective, especially in the context of such market failure as occurs in the provision of tropical medicines or vaccines, where significant gaps might exist between private and social returns. However, in distinguishing between the numerous ingenious theoretical prize mechanisms that have been proposed, such transaction costs need to be recognized and incorporated. In particular, governance issues and the potential for rent seeking and corruption should be explicitly addressed, especially in countries where complementary institutions and political control mechanisms are weak or nonexistent. The historical record indicates that the evolution of the institution of innovation prizes over the past three centuries serves as a cautionary tale rather than as a success story (p. 42).

Our conclusion from Khan’s reasoning is not that prizes are necessarily a bad idea, but that prizes may fail if not designed and implemented well. Moreover, as she explained, there might be cases where prizes are potentially effective, such as in the provision of tropical medicines, where private returns do not match the social returns and one can therefore not rely on markets to provide sufficient incentives for innovation.

Overall, our takeaways from Khan’s reasoning are that prizes should not be used carelessly and are certainly not a cure-all mechanism. Ensuring well-functioning markets and correct prices, for example for carbon emissions, might be the first-best option. However, if this is not possible or not realistic in a reasonable time frame, prizes might potentially be a good option, which Khan seems to agree with. Moreover, the failures of historical prizes teach us that prizes need to be very carefully designed and implemented to not risk more harm than good.

Prizes are associated with risks that can likely be alleviated through design

[Confidence: We have relatively little confidence regarding the risks of prizes we outline below, as these are largely based on only one review article from the grey literature. It’s possible that another 10 hours of research might bring our attention to more risks than we found so far, though we deem it unlikely that we would find risks problematic enough to prohibit the use of prizes.]

Roberts, Brown, and Stott (2019, p. 21) provided a summary of the risks associated with innovation inducement prizes as identified in the literature. We summarize these in the following and state our opinions.

Excluding potential participants: Zhang et al. (2015) compared the results of an idea award to promote sexual health in China to the more traditional, expert-led method of designing behavior change communications. They noted that idea awards are typically run via online platforms, which risk excluding certain sections of the population. According to Roberts, Brown, and Stott, “for Social Prizes and those where the participation in the prize itself is expected to confer benefits to the participants, who is excluded, becomes important, and especially so in a development context” (2019, p. 21).

Relatedly, as we explained here, Ma and Uzzi (2018) found that prizes tend to be concentrated within certain groups, and this is especially acute when looking at the number of prizes conferred to women. Women are underrepresented among prizewinners in physics, chemistry, and biology, and those who do win prizes get less money and prestige compared to men (Ma et al., 2019). Moreover, there is evidence of unconscious gender bias in the scientific award process favoring male researchers from Europe and America (Lincoln et al., 2012, p. 1).

Overall, while we believe that gender biases and the unintentional exclusion of certain participants are a risk in prize competitions and awards, we are not sure whether the risk is larger for prizes than for other incentive mechanisms, such as grants, and we have not seen evidence on this question.

Risks experienced by participants: The authors cited Acar (2015), who conducted a survey with participants on the InnoCentive.com online platform and described the risk of opportunism: those who receive the information generated by the prize may use it opportunistically, which can, in turn, make inventors fearful of disclosing knowledge. Acar (2015) found that some participants in science contests experience this fear of opportunism, and noted that female and older participants had significantly less fear of disclosing their scientific knowledge.

We don’t have a clearly formed opinion on this point, but our overall impression is that these issues can be alleviated by taking fear of opportunism into account in the design of prize contests (e.g. via intellectual property protection and compensation structure), as Acar (2015) suggested in their discussion section.

Duplicating resources: Roberts, Brown, and Stott (2019) argued that the multiplier effect of prizes — i.e. there being more than one solver — may have benefits for the funder, but can represent duplicative and potentially wasteful efforts by solvers (citing Lee [2014], who did a legal examination of social innovation). They also argued that the number of solvers can introduce risks in terms of motivation for future prizes, citing Desouza (2012), who warned of the risk of reducing the pool of potential future solvers if prize managers fail to communicate effectively with participants after the prize ends, drawing on survey data among U.S. citizens who participated in government innovation inducement prizes.

We would like to note that this potential demotivating effect of multiple solvers sounds generally plausible to us and has also been found by Boudreau et al. (2011), as we explained here. Thus, the number of prize entrants is a factor that needs to be considered in the prize design.

Power imbalance: Roberts, Brown, and Stott (2019) cited Eagle (2009), an article from social marketing (given the similarities between social prizes and community-based behavior change interventions), observing that criticisms of social marketing include being patronizing and manipulative, appealing to people’s base instincts, and extending the power imbalance between the state and individuals.

As the cited evidence is from social marketing, we are not sure whether these risks extend to prizes.

Overall, while we, by and large, find the points raised plausible, we don’t find the evidence behind them very strong and are not sure whether these risks are unique to prizes or hold equally for other incentive mechanisms, like grants. Moreover, we don’t find the risks problematic enough to prohibit the use of prizes, and we suspect that some of these risks can at least partly be alleviated by designing prizes accordingly (e.g. addressing fears of opportunism).

List of recent large prizes

We provide a list of large innovation inducement prizes here that includes information on a few key features, such as the numbers of entrants, mobilized private capital, and some information about the winners. We included all prizes we found with a cash award of at least $100,000 from the 20th century on.

Based on this list, we created a histogram showing the distribution of prize amounts in Figure 3 below. The largest prizes we found in terms of cash awards were America’s Space Prize at $50 million and the GE Ecomagination Challenge: Powering the Grid at $55 million.

Figure 3 - Distribution of prize amounts based on the list of prizes we assembled (in millions USD)

Two case studies of large-scale inducement prizes

As we discussed above, there is a paucity of rigorous, quantitative studies on the impact of prizes. In this section, we therefore complement the empirical evidence discussed in the previous sections with the findings of two case studies of modern, large-scale prizes. We focus on two examples of the X Prize: the Google Lunar X Prize and the Auto X Prize. According to Murray et al. (2012, p. 4), the X Prize can be assumed typical of contemporary Grand Innovation Prizes[12] in design and implementation, as the approach developed by the X Prize Foundation is emerging as a canonical design and prizes in the X Prize “tradition” seem increasingly common. Each of the X Prizes shares a similar architecture, scale, and scope.

A caveat to these case studies is that there has been no unified framework or clear approach within which to evaluate prizes and undertake comparative analysis (Murray et al., 2012, p. 3). Thus, it is difficult to systematically evaluate the performance of different prizes and to summarize and compare the findings of different case studies.

Google Lunar X Prize

By far the most comprehensive case study on modern prizes we found was done as a public policy dissertation project by Kay (2011). Kay (2011) used an empirical, multiple-case-study methodology to investigate a main case study — the Google Lunar X Prize (see Appendix 4 for a more detailed description of the Google Lunar X Prize) — and two pilot cases: the Ansari X Prize and the Northrop Grumman Lunar Lander Challenge. He then examined four main aspects of these prizes: the motivations of prize entrants, the organization of prize R&D activities, the prize technologies, and the impact of prizes on technological innovation.

The study used different sources of data, such as direct observation, on-site interviews, questionnaires, and document analysis. Kay (2011) triangulated the different data sources with equal weighting for data collected through different methods. We would like to note that we were only able to read a fraction of his almost 400-page-long dissertation, which is very rich and comprehensive. Thus, it is possible that we missed some potentially important and interesting aspects of his analysis.

We summarize his results and conclusions in the following (see Kay [2011, pp. 263-264] for a nice overview of the research questions, hypotheses, variables of interests, results, and conclusions in a table format).

  • Motivations of prize entrants:
    The prizes attracted diverse entrants, including unconventional ones, such as individuals and organizations that were generally uninvolved with the prize technologies. Participants were primarily drawn to the non-monetary benefits of prizes (e.g. visibility, prestige, opportunity to participate in technology development) and the potential market value of the prize technologies. The author found that the monetary reward was less important to participants relative to other incentives, though it still mattered for publicizing the prize itself. Interestingly, the prizes attracted many more people apart from the participants, such as volunteers and partners, who contributed indirectly to the prize and supported official participants.

  • Organization of prize R&D activities:
    Prizes could increase R&D activity and redirect ongoing industry projects to target diverse technological goals. However, he concluded that the development of prize competitions was difficult to predict. The organization of prize R&D activities and participants’ effort depended on the participants’ characteristics (e.g. goals, skills, resources), and could not be directly influenced by a specific competition design. He found interactions between R&D and fundraising activities, which might, in some circumstances, divert the participants’ efforts away from technological development.

  • Prize technologies:
    Prizes could selectively target technologies at different maturity levels (e.g. experimental research, incremental developments, commercialization). However, the quality of the innovation output was difficult to predict.

  • Effect of prizes on technological innovation:
    Prizes could spur innovation beyond what would have happened in their absence. However, the effect of prizes depended highly on the prize entrants’ characteristics and the evolution of the prize competition’s overall context, such as whether the business context is favorable. He found the impact on innovation to be larger for larger prize incentives, more significant technology gaps, and sufficiently open-ended challenge definitions to allow for unconventional approaches. Moreover, he concluded that prizes cannot induce, but only enable technological breakthroughs, and they may require complementary incentives (e.g. commitments to purchase inventions) or support (e.g. seed funding).

Kay (2011) also concluded that prizes are particularly appropriate to, for example (the prizes in parentheses are from Table 8.3 on p. 279):

Furthermore, according to Kay (2011, p. 293f), prizes can selectively focus on specific technologies, and target certain innovators and geographic areas. They can also leverage significant amounts of funding. Relative to other incentive mechanisms (e.g. grants), prizes involve higher programmatic risks, as their outputs are difficult to predict. The incentive power of prizes depends on their uniqueness — that is, a prize in a context with many rival prizes has less incentive power than a similar prize that is held in a context without equivalent (or any) competing prizes. A successful prize design depends on many parameters.[13]

Auto X Prize

In this section, we summarize the findings of a case study of the Auto X Prize conducted by Murray et al. (2012).[14] See Appendix 4 for more detailed background information on the Auto X Prize. The authors provided a systematic examination of this recent Grand Innovation Prize (GIP) by defining three dimensions for GIP evaluation: objectives, design (including ex ante specifications, ex ante incentives, qualification rules, and award governance), and performance. They compared observations from three domains within this framework: empirics, theory, and policy. Their analysis was based on a combination of various data sources: direct observation, interviews, surveys, and extant theory and policy documents.

The authors concluded that the empirical reality of the Auto X Prize deviated substantially from the ideal form of a prize, as described in the theoretical economic literature and advocated in policy-making documents. We find this unsurprising but interesting, as it shows that the theoretical research on prizes done so far is of limited usefulness in understanding modern prizes implemented in practice.

They found the contrasts to be particularly strong with respect to four areas. We copy these points here (p. 13):

  1. “Contrary to the dominant theoretical perspective,[15] which assumes GIPs have a single, ultimate objective – to promote innovative effort – we find that GIPs blend a myriad of complex goals, including attention, education, awareness, credibility and demonstrating the viability of alternatives. Paradoxically, our results suggest that prizes can be successful even when they do not yield a “winner” by traditional standards. Conversely, prizes in which a winner is identified and a prize awarded may still fail to achieve some of their most important design objectives.

  2. We find the types of problems that provide the target for GIPs are not easily specified in terms of a single, universal technical goal or metric.[16] The reality is not nearly as clear or simple as either theorists or advocates have assumed. The complex nature of the mission (e.g., a highly energy efficient vehicle that is both safe to drive and can be manufactured economically), and the systemic nature of the innovations required to solve the stated problem, requires that multiple dimensions of performance be assessed. Some of these dimensions can neither be quantified nor anticipated, while others may change as the competition unfolds. Common metrics used today (e.g., miles per gallon) may be driven by current technical choices (i.e., gasoline engines), and translating them to work for new approaches (e.g., hydrogen fuel cells) may not be easily achieved. If done poorly, this will bias competitions in favor of certain technical choices and away from others. [The Auto X Prize] demonstrates that contemporary GIPs are complex departures from smaller prizes examined by prior researchers, where the competitions involved individuals vying to solve relatively narrow problems (e.g., Lakhani et al., 2007). In those studies, the objective functions for solution providers are much more easily specified, as are the accompanying test procedures and mechanisms for governing and managing the process.

  3. We find a clear divergence between theoretical treatments of the incentive effect of a prize purse and the reality of why participants compete. Critically, there are a variety of non-prize incentives that are just as (if not more) salient to participants, many of which can be realized regardless of whether a team “wins” or not. Some of these broader incentives – publicity, attention, credibility, access to funds and testing facilities – are financial in nature, but not captured by the size of the purse. Others – such as community building – are social in nature and are difficult to measure in terms of the utility they generate for participants. Prior work has tended to view situations where prize participants collectively “spend” substantially more than the prize purse (i.e., in terms of resources) as evidence that prizes are inefficient in terms of inducing the correct allocation of inventive effort. Our observations however, provide an alternative explanation for why this may not be the case. Participants might, in fact, be responding rationally to a broader range of incentives than has been assumed in prior work.

  4. Our work highlights the critical and underappreciated role of prize governance and management, a topic that is notably absent in the theoretical literature. We find that the mechanisms for governance and management must be designed explicitly to suit the particular prize being developed, a costly and time-consuming activity. Furthermore, given the difficulties in specifying ex ante all that can happen, rule modifications and adaptations along the way are to be expected, and these must be handled in a way that respects the rights and opinions of those participants who are already committed to the effort.”

The authors concluded that their results suggest GIPs cannot be viewed as a simple incentive mechanism through which governments and others stimulate innovation where markets have failed. Rather, they are best viewed as a novel type of organization, where a complex array of incentives is considered and managed in order to assure that successful innovation occurs.

A brief review of two related concepts to prizes

In the following, we briefly review two relatively novel concepts related to prizes that have recently gained momentum in the global health and development space. First, we review advance market commitments (AMCs) with a case study on the pneumococcal pilot AMC and a discussion of its critiques. Second, we briefly review the Grand Challenges launched by the Bill & Melinda Gates Foundation.

Advance market commitments (AMCs) have a lot of potential for impact and current critiques and issues may be resolved with more research and experience

[Confidence: We have medium confidence in our conclusions regarding AMCs, which are predominantly based on a case study of the pilot pneumococcal AMC and a conversation with one expert. Given that AMCs have received limited attention in the scientific and grey literature so far, we believe that a further review of the literature is unlikely to change our views. However, conversations with other experts might change our conclusions.]

Brief introduction to AMCs

The idea of advance market commitments is to provide money to guarantee a market for a product. AMCs were first proposed by economics professor Michael Kremer (2000a, 2000b) and gained additional momentum when a report by the Center for Global Development (Levine et al., 2005) expanded on Kremer’s ideas and introduced the concept of the AMC as a financial mechanism that could encourage the production and development of affordable vaccines tailored to the needs of developing countries.

AMCs aim to address two failings of global health markets: First, pharmaceutical companies have little incentive to develop medicines for diseases that are more prevalent in low-income countries due to the low purchasing power of those who are most affected. Thus, private R&D investments into neglected diseases are much lower than the socially desirable levels (e.g. Kremer & Glennerster, 2004). Second, once developed, medicines often reach low-income countries long after their introduction in high-income countries, which leaves many people in poorer countries untreated or unvaccinated despite the existence of products that prevent deaths (MSF, 2020, p. 3).

In the case of vaccines, AMC donors pledge that if a firm develops a specified new vaccine and sets the price close to manufacturing cost, the donors will “top up” the price by a certain amount per dose. This top-up payment strengthens firms’ incentives by increasing the profitability of serving those markets. Moreover, the AMC’s price cap ensures that the vaccine remains affordable for people in poverty (Scherer, 2020).

The first AMC was piloted in 2007 to purchase pneumococcal vaccines, which we detail in the next section. Christopher Snyder, an economics professor who researches AMCs, mentioned to us in a conversation that it is difficult to know exactly how many proposed and ongoing AMCs there are, as some are technically not AMCs but use the term for branding purposes (e.g. COVAX AMC), while others are a mix of different push and pull funding mechanisms (e.g. carbon removal AMC). We provide some examples of proposed and ongoing AMCs and related mechanisms:

  • In 2020, the COVAX AMC was launched to make donor-funded doses of Covid-19 vaccines available to LMICs. Moreover, Operation Warp Speed,[17] a public-private partnership initiated by the US government, had a Covid-19 AMC as one of its components.

  • In 2022, the Frontier fund was launched to mobilize $925 million for carbon removal using an AMC.

  • There is an ongoing eight-year AMC for foot-and-mouth disease vaccines for animals, tailored to the needs of Eastern Africa.

  • The Center for Global Development proposed an advance commitment[18] for tuberculosis (Chalkidou et al., 2020).

  • An advance purchase commitment[19] for an Ebola vaccine was signed by Gavi, the Vaccine Alliance, in 2016.

Insights and lessons learned from the pilot pneumococcal AMC

Program launch

In 2007, Gavi, the Vaccine Alliance, piloted the use of an AMC to purchase pneumococcal vaccines (PCV) for children in the developing world with a total commitment of $1.5 billion by the Gates Foundation and five countries. At the time, the World Health Organization estimated that pneumococcus killed more than 700,000 children under five in developing countries per year (WHO, 2007).

According to Kremer et al. (2020, p. 270), “the design called for firms to compete for 10-year supply contracts capping the price at $3.50 per dose. A firm committing to supply X million annual doses (X/​200 of the projected 200 million annual need) would secure an X/​200 share of the $1.5 billion AMC fund, paid out as a per-dose subsidy for initial purchases. The AMC covered the 73 countries below the income threshold for Gavi eligibility. Country co-payments were set according to standard Gavi rules.”
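To make the payout arithmetic concrete, here is a worked example of the contract formula quoted above; the 40-million-dose commitment is our own illustrative number, not a figure from the program:

$$\text{fund share} = \frac{X}{200} = \frac{40}{200} = 0.20, \qquad 0.20 \times \$1.5\ \text{billion} = \$300\ \text{million}$$

In words: a firm committing 40 million annual doses would secure 20% of the $1.5 billion AMC fund, i.e. $300 million, disbursed as a per-dose subsidy on its initial purchases, on top of the capped price of at most $3.50 per dose paid by Gavi and country co-payments.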

Program outcomes

According to Kremer et al. (2020), by 2016, PCV was distributed in 60 of the 73 eligible countries, with doses sufficient to immunize over 50 million children annually. As Figure 4 below shows, by 2018, nearly half of the target child population in Gavi countries was covered, slightly surpassing the coverage rate in non-Gavi countries.

Figure 4 - PCV coverage in Gavi countries relative to non-Gavi countries (Kremer et al., 2020, online appendix)

According to estimates from Tasslimi et al. (2011), which we did not have time to review, the PCV rollout has been highly cost-effective: at initial program prices, it averted one DALY for every $83 spent. According to Kremer et al. (2020), evidence on the cost-effectiveness of PCV does not prove the cost-effectiveness of the overall AMC because of the lack of a valid counterfactual. However, they argue that the high cost-effectiveness of PCV implies that the AMC would have been worthwhile were there even a small chance that it sped up PCV adoption.

While it is impossible to know for sure whether the AMC sped up PCV adoption, Kremer et al. (2020, p. 5) compared PCV adoption with rotavirus vaccine adoption as an approximate counterfactual.[20] The authors claimed that, as shown in Figure 5 below, the rate of vaccine coverage in Gavi countries converged to the global rate almost five years faster for PCV than for the rotavirus vaccine. They calculated that had PCV coverage increased only at the rate of the rotavirus vaccine (i.e. more slowly), over 12 million DALYs would have been lost. Thus, to the extent that we can consider the rotavirus vaccine a reasonable counterfactual (which we did not have time to investigate), we can estimate the number of DALYs averted by the PCV AMC at roughly 12 million.

Figure 5 - Coverage for vaccines rolled out with and without an AMC (Kremer et al., 2020, online appendix)
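To illustrate the counterfactual logic behind this estimate, here is a minimal sketch in Python; it is our own construction, not the authors’ calculation, and the coverage curves and addressable DALY burden below are hypothetical placeholders rather than the study’s data:

```python
# Hedged sketch of the counterfactual logic: DALYs averted are approximated
# by the yearly gap between actual coverage and a slower, rotavirus-like
# rollout, multiplied by the annual DALY burden the vaccine can address.
# All numbers are hypothetical placeholders, not Kremer et al.'s data.

actual_coverage = [0.05, 0.15, 0.30, 0.45, 0.50]          # PCV rollout, years 1-5
counterfactual_coverage = [0.00, 0.03, 0.08, 0.15, 0.25]  # slower rollout, years 1-5
annual_burden_dalys = 30e6                                # addressable burden per year

dalys_averted = sum(
    (actual - slower) * annual_burden_dalys
    for actual, slower in zip(actual_coverage, counterfactual_coverage)
)
print(f"DALYs averted under these assumptions: {dalys_averted / 1e6:.1f} million")
```

The published estimate presumably incorporates more detail (e.g. cohort sizes and vaccine efficacy), but the basic structure is the same: integrate the coverage gap over time and multiply by the burden at stake.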

When should AMCs be used?

According to Sigurdson (2021, p. 8), while prizes may work well for solving challenges in which innovations can be easily decoupled from implementation, in scenarios where implementation is as or more important than the innovation itself (e.g. vaccine delivery), other mechanisms such as AMCs may be more suitable to incentivize a desired solution.

An independent process and design evaluation report commissioned by Gavi (Chau et al., 2013, p. 81) laid out a few steps to follow in order to determine whether an AMC or a different type of program is appropriate. We provide a copy in Appendix 5. Although the report provides some important points for consideration (e.g. the level of market maturity and the type of market failure), it does not give very concrete guidelines for the choice.

In a conversation with Snyder, he explained that the incentives of an AMC may not align with the best outcome in all contexts. He gave the example of Ebola vaccines, where, in his view, inducement prizes with a fixed payment are better than AMCs tied to sales. He explained that if local vaccinations in an emerging outbreak are very effective, the epidemic is quelled before many units of a vaccine are sold. This limits how lucrative the AMC can be and creates perverse incentives (i.e. a high-performing vaccine that stopped the outbreak very early would be rewarded less than a worse-performing one). In his view, a more appropriate model in this case would be rewarding firms for the social benefit gained or harm averted as a result of their product.[21] Snyder makes this point in a working paper in which he and colleagues designed the optimal mechanism for diseases like Ebola and Covid-19 (Snyder, Hoyt, & Gouglas, 2022).

This approach also comes with drawbacks, as it may be hard to come to a consensus on how much harm averted is strictly attributable to the funded product.

Discussion of critiques of AMCs

We found several critiques of different aspects of AMCs, relating both to the AMC concept in general and to the pneumococcal pilot AMC specifically. A 2020 report[22] published by Médecins Sans Frontières (MSF) provided a critical analysis of the pilot AMC, its impact on access to pneumonia vaccines for populations in need, and the lessons learned. Others found theoretical flaws in the AMC concept (Sonderholm, 2010), or criticized high program costs (e.g. Light, 2005). In this section, we list and explain some of these critiques and provide a brief discussion.

R&D not accelerated

According to MSF (2020, p. 1), “the AMC was flawed from the outset in its selection of pneumococcal disease, which already had a vaccine on the market, since 2000. PCV was virtually inaccessible to developing countries due to its high price, not because of a lack of R&D. The selection of a disease with an existing vaccine provided little, if any, incentive for accelerating R&D timelines of other manufacturers who had already begun development prior to the AMC inception.”

We discussed this point with Christopher Snyder. He explained that while it is technically true that the pilot AMC did not speed up R&D, this was a deliberate design feature of this particular AMC. He explained that while the AMC concept was first proposed with a technologically distant target in mind, particularly to encourage research on vaccines for diseases such as malaria (Kremer & Glennerster, 2004), the concept was later expanded to also encompass technologically close targets in a Center for Global Development working group report (Levine et al., 2005). For a vaccine that is further along in its R&D process, the challenge switches from incentivizing R&D to incentivizing adequate capacity (Kremer et al., 2020, p. 1), which is very expensive and requires substantial investment, even after the R&D process is completed.[23] Technologically distant and close targets require different AMC designs. In our understanding, the AMC concept has not yet been tested for a technologically distant target, as originally envisioned for a malaria vaccine.

High program costs

There have been some concerns about the cost-effectiveness of the AMC. For example, according to Donald Light, a health policy researcher, the estimated cost per child saved under the PCV AMC was $4,722, whereas programs extending vaccines for diseases (such as polio, measles, and yellow fever) to children who don’t receive them would save more lives at a lower cost. Light pointed to multidrug packages for neglected tropical diseases that cost about 40 cents per person per year (Scudellari, 2011; Light, 2005). We haven’t been able to find the direct source of this cost-effectiveness estimate and we don’t know how it was calculated.

According to MSF (2020, p. 2), “a lack of transparency on costs, capacity, and pricing decisions fed criticism that the AMC acted as a vehicle for private companies to make unnecessarily high profits at the expense of broader vaccine access. The AMC design team lacked critical information and sufficient expertise to appropriately negotiate the original price per dose. If more data from the manufacturers on the costs of production and capacity scale-up had been forthcoming, and if more experts with economic or vaccine-industry experience had been involved, the initial price ceiling of $3.50 per dose might have been lower but still sufficient to incentivize manufacturers to participate in the AMC. [...] PCV remains one of the most expensive among the 12 vaccines supported by Gavi.”

We mentioned this critique to Snyder, who responded that he agrees that, for example, the polio vaccine at a couple of cents per dose is much more cost-effective than the pneumococcal AMC. However, he cautioned that average and marginal cost-effectiveness should not be confused: polio might be better on average, but the pneumococcal AMC might still be worth funding on the margin. He also argued that the option value of learning should be factored into the benefits of the program, though he acknowledged that this benefit has diminishing returns.

Moreover, Snyder explained the difficulties in setting the right price due to information asymmetries[24] between firms and AMC funders. AMC designers do not know the manufacturers’ reservation price for installing capacity to supply vaccines. As AMC designers have an asymmetric loss function in setting firm prices (that is, offering firms less than their reservation price risks children not receiving vaccines, which is very costly relative to what donors can save by paying firms somewhat less), maximizing social welfare under uncertainty requires paying firms more than the expected cost of the vaccine (Kremer et al., 2020, p. 3).
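To see why this asymmetry pushes the optimal offer above the expected reservation price, consider a minimal numerical sketch in Python; this is our own toy model, not the authors’ analysis, and all parameter values are hypothetical:

```python
# Toy model of the funder's pricing problem under an asymmetric loss function.
# The funder offers a per-dose price p; the firm's reservation price R is
# unknown, modeled as Uniform(1, 4) dollars per dose; the firm supplies only
# if p >= R. Each delivered dose is worth v = 20 dollars of social value,
# far more than any price in the range, so failing to secure supply is the
# costly mistake. All numbers are hypothetical.
import numpy as np

v = 20.0                                 # social value per dose ($), hypothetical
r_lo, r_hi = 1.0, 4.0                    # support of the reservation price ($)
prices = np.linspace(r_lo, r_hi, 301)    # candidate offers

prob_accept = (prices - r_lo) / (r_hi - r_lo)  # P(R <= p) for Uniform(r_lo, r_hi)
expected_welfare = prob_accept * (v - prices)  # welfare if supplied, 0 otherwise

best_offer = prices[np.argmax(expected_welfare)]
print(f"Expected reservation price: {(r_lo + r_hi) / 2:.2f} $/dose")
print(f"Welfare-maximizing offer:   {best_offer:.2f} $/dose")
```

Because the social value of vaccination dwarfs the price range, the expected-welfare-maximizing offer lands at the top of the plausible reservation-price range ($4.00 here) rather than at its mean ($2.50): underpaying risks losing supply entirely, which is far costlier than overpaying by a dollar or two per dose.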

Kremer et al. (2020, p. 3) (coauthored by Snyder) wrote, “While some activists have argued that the $3.50 paid per dose exceeds manufacturing costs, the relevant issue for AMC designers is not manufacturing costs but firms’ reservation values. Their reservation values may substantially exceed manufacturing costs for several reasons: the AMC top up may not fully defray their capacity costs, or they may fear that offering a low AMC price would lead higher-income countries to press for price reductions. While these factors imply ex ante optimal prices will exceed expected production costs, the facts that both firms participated even though one likely had substantially higher manufacturing costs and that both continued to participate at $2.90 per dose suggests that at least one firm would likely have participated at a lower price. Still, prices for PCV are much lower under the AMC than outside it. Currently, lower-income countries in the Americas pay $12 or more per dose (WHO 2019); the US Centers for Disease Control and Prevention (CDC) pay $137 (CDC 2019). As we show in Figure 3 in the online Appendix, the percentage discount GAVI receives compared to various global price measures is deeper for PCV than for almost any other vaccine.”

No incentive for technology transfer to developing country manufacturers

According to MSF (2020), “no incentive or plan for PCV technology transfer to developing-country manufacturers was included in the design of the AMC. The AMC has yet to prove that it can serve as a model for encouraging long-term, sustainable vaccine production.”

Our understanding from talking to Snyder is that the pilot AMC was not explicitly designed for a PCV technology transfer to developing-country manufacturers, but first and foremost to establish adequate supply capacity. In response to this critique, Snyder explained that so far there is no evidence of the existing PCV supply capacity being withdrawn. Once a firm establishes the vaccine capacity, there is little reason to pull out. Moreover, the pilot suggests that prices went down over time (to $2.90) due to public pressure on the manufacturers. In fact, in 2019, a vaccine developed by the Serum Institute of India qualified for the AMC program, pricing its vaccine at $2 per dose for low-income countries (Kremer et al., 2020, p. 2).

Low competition among manufacturers

MSF (2020, p. 2) wrote: “While the funding was intended to help encourage competition to reduce the overall price of PCV, in reality the bulk of the money essentially served as a subsidy for Pfizer and GlaxoSmithKline (GSK), which until December 2019 were the only two manufacturers of PCV. Of the $1.5 billion, $1.238 billion (82%) was disbursed to Pfizer and GSK. In 2020, a third PCV manufacturer, and the first in a developing country, the Serum Institute of India, was finally awarded a portion of the funding at $75 million (5%).”

According to Kremer et al. (2020), there is a tradeoff regarding the number of firms funded by the AMC. They wrote (p. 4), “A key issue for future AMCs will be whether to split the AMC among multiple suppliers and reserve tenders for future entrants, as did the pilot pneumococcus AMC, or to concentrate incentives on a single supplier, as envisioned in Kremer and Glennerster (2004). Sponsors of the AMC pilot prioritized entry of multiple vaccines because they saw competition as essential for holding down long-run prices and avoiding supply interruptions. Kremer and Glennerster (2004) prioritized the development of a vaccine where none currently existed, relying on the price cap, to which the firm agrees to access AMC funds, to keep prices near marginal cost over the long term. [...] For distant technological targets, incentivizing a sequence of entrants reduces incentives for the first vaccine to enter. Thus, structuring a program to incentivize multiple entrants may substantially increase total costs. On the other hand, the design and enforcement of long-term contracts that hold prices close to marginal cost and assure consistent supply through penalty clauses for supply interruptions may be difficult.

Overall, we think that MSF (2020) raised a fair critique that competition was low for the pilot AMC. However, our impression is that the decision whether to incentivize only one or multiple suppliers is a deliberate design choice that involves trade-offs which need to be carefully considered (e.g. the cost and the importance of avoiding supply interruptions). Thus, we don’t think that low competition among manufacturers is an inevitable consequence of an AMC; rather, it can be prevented by designing the AMC accordingly.

Supply shortages

According to MSF (2020), “Supply capacity of existing manufacturers did not meet full PCV demand: During AMC implementation, demand at times exceeded supply despite the large subsidies given to the manufacturers to scale up production capacity. Pfizer and GSK were conservative in expanding their production capacity to only fulfill the number of doses stipulated in supply agreements, but these agreements were based on initial forecasts that were lower than the actual demand. This resulted in supply shortages of up to 29 million doses from 2012 to 2014, delaying 23 country introductions, and leading to an estimated 26 million children born without access to PCV.”

Unfortunately, we haven’t been able to discuss this critique with Snyder or any other expert. Thus, we do not know to what extent this supply issue was really problematic and could have been resolved.

Ethical dilemma resulting from a theoretical flaw of AMCs

According to Sonderholm (2010, p. 2), the AMC concept has a theoretical flaw, in that an ethical dilemma arises from respecting developing countries’ preferences for culturally acceptable but less effective vaccines. He described a scenario in which several products have been licensed under an AMC scheme, with one product significantly medically superior to the others. He stated that it is not obvious that governments of developing countries will always choose the medically superior product, even if it is priced at a level identical to that of inferior ones, explaining that the medically superior product might have side effects that are culturally offensive in certain countries. According to Sonderholm (2010), “from the point of view of CGD, it is a suboptimal outcome if the best available medicines are not the ones that are purchased. [...] For example, if a vaccine generated side effects that were medically harmless but culturally unacceptable, there might be an unwillingness to use the vaccine. Because an AMC scheme is demand driven, it compels the donors to fund the purchase of medically inferior products when it is such products that are in demand by developing world governments. This theoretical feature of an AMC scheme is an unattractive one that should make policy-makers and donors hesitant about signing up to it.”

Though Sonderholm’s (2010) critique has been mentioned by Kremer et al. (2020, p. 3), we have not come across any explicit reactions to it. We have not been able to discuss this critique with Snyder.

Summary

AMCs are still a fairly novel concept, and the empirical evidence is so far quite limited, coming mainly from the pilot pneumococcal AMC. All in all, we think that the results of the pilot AMC are quite promising, in that it arguably sped up PCV adoption (by about five years if one uses the rotavirus vaccine as a very rough counterfactual). Moreover, the use of AMCs in the context of Covid-19 vaccines and carbon removal has been advocated for by some highly renowned economists whose judgment we trust.

A major limitation of the pilot AMC is that it focused deliberately on a technologically close target where the R&D process had already been completed (thus mainly incentivizing the establishment of supply capacity), not on a distant target as originally envisioned. Thus, it still remains to be tested whether AMCs can be effective tools to incentivize R&D activity, such as in the case of malaria vaccines.

AMCs have also been subject to some criticism, most notably that their cost-effectiveness is comparatively low. According to the only cost-effectiveness estimate we found on the pilot AMC, the cost per child saved under the AMC was $4,722, which is more expensive than extending existing vaccines for diseases like polio, measles, and yellow fever. We also found a cost-effectiveness estimate for the PCV rollout of $83 per DALY averted, but it is unclear how to translate this into the cost-effectiveness of the AMC due to the lack of a valid counterfactual. Moreover, the cost-effectiveness may differ for other outcomes and likely depends on whether the focus is on a technologically distant or close target (as this affects the information asymmetry and thus pricing decisions).

Whether or not AMCs are an appropriate tool in a specific case depends on the level of market maturity and the type of market failure, but we have not encountered very concrete rules on when AMCs should be used. It may be that we missed some literature on this or that it hasn’t been written down clearly yet, but we expect that those involved in the development of the AMC concept (e.g. Michael Kremer, Jonathan Levin, Christopher Snyder, and the Center for Global Development working group on AMCs) can provide valuable support for this decision.

All in all, we believe that AMCs have a lot of potential. We think it’s plausible to assume that many critiques and issues can be at least partially resolved with more research and experience, as the research is still at a nascent stage. One avenue of research we deem promising is to test AMCs in the context of technologically distant targets to learn whether they can also be used to incentivize R&D, as originally conceived. We recommend investigating AMCs further, or at least keeping them on the back burner for further consideration in the future.

Grand Challenges are conceptually closer to grants than to prizes, with a focus on fostering innovation and R&D capacity in low- and middle-income countries

[Confidence: We have little confidence in our brief review of Grand Challenges, as we came across this concept at a late stage of this report and only had time for superficial reading of the Grand Challenges website. We expect that any additional hour of research, especially reviewing third party accounts and publications related to Grand Challenges, might alter our conclusions.]

Grand Challenges, “a family of initiatives fostering innovation to solve key global health and development problems,” were launched by the Bill & Melinda Gates Foundation in 2003 (Grand Challenges website). The Grand Challenges model was initially created to “drive innovation and test novel solutions for health problems that disproportionately affect the world’s poor” and to fund “high-risk, potentially high-reward concepts” (Mundel, 2014). The set of initiatives defines challenges as an “open request for grant proposals” and has awarded more than 3,400 grants in over 110 countries (Grand Challenges Fact Sheet, 2021). The first initiative was called Grand Challenges in Global Health and was followed by the launch of various other initiatives, such as Grand Challenges Explorations, Grand Challenges India, and Grand Challenges for Development (ibid).

Grand Challenges share some similarities with inducement prizes, but also have a few key differences. While most inducement prizes have a clearly defined output, Grand Challenges have much more open-ended and vague criteria.[25] The grants are typically large, ranging from $100,000 to $1 million, with lower amounts for proof-of-concept work, and “clear stage gates for scaling” (Singer, 2014). Unlike most grants, Grand Challenges grants are, in at least some cases, awarded based on the outcome and not only on the proposal. For example, whether follow-up funding is granted can depend on the success of a project (e.g. here). There is usually an explicit requirement or recommendation that the programs are run by, or in conjunction with, local institutions, or involve lead investigators from the relevant context.

To get a sense of the impact and cost-effectiveness of Grand Challenges, we briefly reviewed a Gates Foundation blog post describing how they measure the value of Grand Challenges (Buchsbaum, 2014). Because the impacts of Grand Challenges take a long time to materialize, they do not measure impact in terms of improved or saved lives. Instead, they use a return on investment (ROI) framework as a proxy, combining four different metrics (see Figure 6 below). We describe those four metrics in the following.

Figure 6 - Framework for an intermediate evaluation of the return on investment (ROI) for Grand Challenges (Buchsbaum, 2014)

1. Projects Transitioned to Development: The Gates Foundation defines the main metric of success as “identifying new investment opportunities that ultimately result in lives saved.” It is unclear whether they delineate between projects that result in lives saved at a cost of $1,000 per DALY (disability-adjusted life year) and projects that do so at a cost of $100 per DALY. They also look into whether projects have achieved a proof-of-concept stage and aim to increase investments in order to take them toward development. Notably, they mentioned that there were projects that responded well to the challenge call but subsequently faced other challenges (such as regulatory or commercial barriers) and were unable to transition to development as a result. This emphasizes the need for challenges to take into account all aspects of the path to development, and not just the technical solutions to a problem. In the case of other projects that did not result in an adequate solution, the scientific advancements and knowledge gained may still be helpful for future research and innovation efforts.

2. Strategic Learning and Landscaping: According to Buchsbaum (2014), this metric includes all of the learnings from “extensive consultation with experts” and learning from the various submissions from innovators. These have apparently helped shape and guide the foundation’s thinking and design of future initiatives. The blog post does not explain how exactly this is measured.

3. Increased Funding for Innovation: According to Buchsbaum (2014), the Grand Challenges provide an “attractive platform for building co-funding partnerships and providing new funders a mechanism to increase their portfolios of solution focused investments.” Buchsbaum (2014) explained that there is evidence of an indirect increase in funding for innovations against challenge problem statements. He hypothesized that posting a challenge inspires scientists to develop new ideas and obtain funding even if they are not selected for funding by Grand Challenges. To test this hypothesis, the Grand Challenges team surveyed declined applicants (~97% of all ideas submitted) and found that 43% of respondents suggested that the idea they submitted was “a new idea formulated in response to the challenge topic,” and 8% of respondents were able to obtain funding from other sources. Grand Challenges believes this represents, although indirect, “an important source of funding leverage for new ideas against challenge calls.”

However, it is, at least to us, unclear whether the winning applicants were also “new ideas,” and to what extent there might be overlap between the new ideas and those that were able to find other funding. While this is evidence suggesting the challenges contributed to additional innovation, the extent to which this is indeed an “important source of funding leverage” is, in our view, less clear and depends on other variables that were not clarified in the blog post.

4. Increased Awareness of Global Health and Development: According to Buchsbaum (2014), the Grand Challenges increase awareness of global health and development through the opportunities they provide for applicants, their marketing and outreach, and the media attention that the challenges and awards capture. This was not well quantified in the blog post, and it is not clear to us how this metric is measured.

In a later blog post, Buchsbaum (2015) expanded on the original four metrics by adding two that were previously missing. The first is “the value of advancing knowledge that does not yet directly result in a product or intervention” (though this does appear to be included in the 2014 blog post here). The second is “the value of the research and development capacity” in low- and middle-income countries.

In another blog post on the Gates Foundation website, Buchsbaum and Singer (2014) suggested that the binding constraint has shifted from a dearth of innovation to a bottleneck at the proof-of-concept stage, where many promising innovations fail to achieve scale.[26] To address this, they described the role of a “curator,” who ensures the innovations are “investment ready” and conducts comparative analyses, and the role of a “broker,” who acts as a bridge between innovations and investors. Despite websites that aim to serve this function, the authors placed special emphasis on the human element in this space.

Singer (2014) referenced the rapid spread of Grand Challenges globally, although it is unclear what has driven this spread. Similarly, and more recently, Buchsbaum and Tesfagiorgis (2016) indicated some degree of crowdedness in this space, calling for data sharing and combining resources. This resulted in the Global Innovation Exchange, though it has been defunct since 2021 and now serves only as an online resource.

In summary, Grand Challenges seem to be conceptually closer to grants than to prizes, with a focus on innovations that can be brought to scale and on fostering innovation and R&D capacity in low- and middle-income countries. Based on their own metrics, it is difficult to assess whether a Grand Challenge model would be more or less cost-effective than other models of financing innovation, such as regular grantmaking, prizes, or an AMC model. It is also unclear to what extent Grand Challenges are successful because of the financial rewards, their particular approach or funding model, or prestige/reputation effects. With more time, we could contact experts in this space to see if they have more unpublished data about their impact to more clearly elucidate these nuances.

Conclusion and recommendations

Inducement prizes have received little attention in the empirical and theoretical literature. The only direct evidence we found on inducement prizes having an effect on innovation comes from a study on prizes for agricultural technology in 19th and early 20th century England. While the available literature points somewhat towards inducement prizes having favorable outcomes, we don’t think the evidence provides a strong case for using inducement prizes. A noteworthy finding is that inducement prizes can leverage large amounts of private capital (~2-50x the cash reward), but we don’t know how representative this finding is.

We think that there is somewhat more (and more convincing) evidence in favor of recognition prizes. We would advise having a deeper look into the literature on recognition prizes, though our guess is that there is not much more to be found than what we already have. A good starting point could be to skim further articles we found on recognition prizes but did not have time to review (listed here).

While the literature does not provide very concrete guidelines on when prizes should be used (versus other incentive mechanisms), a rule of thumb seems to be that inducement prizes are most useful (1) when the goal is clear but the path to achieving it is not, and (2) in industries that are susceptible to underproduction of innovation due to market failure (e.g. neglected tropical diseases or technology to address climate change). This holds similarly for AMCs, but it is less clear (at least to us) how to choose between inducement prizes and AMCs.

It is also not very clear how an effective inducement (or recognition) prize should be designed. Most of the evidence comes from field or lab experiments on relatively small-scale prizes, which are very different in nature from grand innovation prizes like the X Prize. One noteworthy general finding is that the prestige and visibility of a prize seem to matter much more than the cash reward in incentivizing participants. The cash reward seems to matter mostly insofar as it increases the prize’s visibility, but there is only a weak correlation between the two. Thus, we would advise against spending millions on a large cash reward and instead recommend thinking carefully about how to create prestige around a prize.

Although the evidence base on prizes is pretty limited, we are doubtful whether funding more research on prizes is worthwhile. Prizes, especially large-scale prizes, are very complex and their success depends on many different parameters. For example, findings that hold for small-scale prizes in one context may not extend to another, and there is always the challenge of finding or constructing an adequate counterfactual. Our current belief is that the amount and cost of research needed to sufficiently resolve key uncertainties is disproportionate to its benefits. Thus, we would not recommend funding prizes just for research purposes.

Advance market commitments, a fairly novel concept, have been endorsed by several highly renowned economists. So far, there has only been one case study, which yielded promising results but also received criticism. While the pneumococcal pilot AMC seems to have been successful in establishing supply capacity and speeding up vaccine rollout in poorer countries, it may not have been very cost-effective relative to other vaccine interventions. Moreover, the pilot focused deliberately only on establishing supply capacity for an already existing vaccine, not on inducing R&D activity. We would recommend looking into the possibility of funding an AMC to test whether it can actually be effective at inducing R&D activity, as originally conceived. Moreover, we would advise having a deeper look at AMCs, particularly to get a better sense of how promising other proposals and applications are.

Overall, we came away somewhat more optimistic about recognition prizes and AMCs and somewhat less optimistic about inducement prizes, relative to our priors. Nonetheless, we believe that inducement prizes are still worth considering, but only in certain circumstances and contexts, with more focus on building prestige than on increasing the cash rewards, and with a very careful design that is informed by what went well and what went wrong in historical prizes and more recent case studies. We don’t have a firm conclusion regarding Grand Challenges, as we only had time to review them very briefly and didn’t encounter anything striking that affected our bottom-line conclusions.

What we would (and wouldn’t) do with more time

  • We would not spend more time investigating the empirical and theoretical literature on inducement prizes. As we found very few high-quality studies on the effects of inducement prizes, we are quite confident that the literature we included in this report gives a fairly complete picture of the evidence base to date.

  • As we found the evidence on the innovation-related and field-shaping effects of recognition prizes somewhat more convincing than the evidence on inducement prizes (yet this report focused mostly on inducement prizes), we would briefly try to review whether there is any other convincing evidence that supports (or contradicts) the huge effects on growth metrics of scientific fields found by Jin, Ma, and Uzzi (2021).

  • Given that prestige has been found to matter more as an incentive in prizes than monetary rewards, we would be curious to know whether and how prestige around a prize can be created. We don’t know whether it’s something that can be directly influenced and how costly it would be, but we think it’s worthwhile investigating, as it’s possible that this can be built cheaply and might be a more cost-effective incentive than very large cash awards.

  • As we haven’t been able to interview any prize experts besides Christopher Snyder on AMCs, we would try to talk to both proponents and opponents of prizes to see whether there are any important considerations that we missed. We haven’t found many academics who strongly support prizes (except AMCs), but Joseph Stiglitz is one exception who has advocated for prizes instead of patents in the developing world (e.g. Stiglitz, 2007). It would be interesting to know what types of prizes (inducement prizes or AMCs) he envisions and in what cases. On the more critical side, we would recommend talking to Zorina Khan in more detail about her views on the potential and pitfalls of prizes and the cases in which they are potentially (in)effective. Moreover, we think that talking to someone who has been involved in designing and implementing a prize could also be worthwhile, particularly someone from a prize that “failed,” though we don’t have a concrete suggestion for this.

  • As we were only able to give relatively vague rules of thumb on when to use inducement prizes or AMCs, we would try to make this more concrete, especially to have a clearer distinction between cases when either of those, a mix of them, or other types of incentives are likely to yield superior results. We suspect that there is not much literature on this question yet, but believe that talking to Christopher Snyder, Jonathan Levin, or Michael Kremer could give more insight into this question.

  • We would review other potential applications and existing proposals of AMCs to get a sense of whether those might be a promising avenue for future funding and research, such as AMCs for technologically distant targets (e.g. vaccines that are still in the R&D process) or for technologies to address climate change.

  • As we only had time for a very short and superficial review of Grand Challenges, we don’t have a firm grasp of their impact and cost-effectiveness, and we are not sure whether sufficient evidence exists to evaluate this. With more time, we could contact experts in this space to see if they have more unpublished data about their impact to more clearly elucidate these nuances.

Acknowledgments

This post is a project of Rethink Priorities, a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. Jenny Kudymowa was the lead researcher and main author of this report. Bruce Tsai contributed to the research and writing of this report. Jason Schukraft and Tom Hird supervised the report. Thanks to Greer Gosnell, Ruby Dickson, and Marcus Davis for helpful comments on drafts. Further thanks to Christopher Snyder for taking the time to speak with us. Open Philanthropy provided funding for this project and we use their general frameworks for evaluating cause areas, but they do not necessarily endorse its conclusions.

If you are interested in Rethink Priorities’ work, please visit our research database and subscribe to our newsletter.

References

Acar, O. A., & van den Ende, J. (2015). Understanding fear of opportunism in global prize-based science contests: Evidence for gender and age differences. PLOS ONE, 10(7), e0134898.

https://doi.org/10.1371/journal.pone.0134898

Azoulay, P., Stuart, T., & Wang, Y. (2014). Matthew: Effect or fable? Management Science, 60(1), 92-109.

https://doi.org/10.1287/mnsc.2013.1755

Boudreau, K. J., Lacetera, N., & Lakhani, K. R. (2011). Incentives and problem uncertainty in innovation contests: An empirical analysis. Management Science, 57(5), 843-863.

https://doi.org/10.1287/mnsc.1110.1322

Boudreau, K. J., & Lakhani, K. R. (2011). “Fit”: Field experimental evidence on sorting, incentives and creative worker performance. Harvard Business School Working Paper No. 11-107.

https://perma.cc/242V-FT36

Brüggemann, J., & Meub, L. (2015). Experimental evidence on the effects of innovation contests. University of Göttingen, Center for European Governance and Economic Development Research Discussion Paper No. 251.

https://perma.cc/A9Z8-7A7U

Brunt, L., Lerner, J., & Nicholas, T. (2012). Inducement prizes and innovation. The Journal of Industrial Economics, 60(4), 657-696.

https://doi.org/10.1111/joie.12002

Buchsbaum, S. (2014, October 3). How do we measure the value of Grand Challenges? Bill & Melinda Gates Foundation.

https://perma.cc/GD8L-HWJ8

Buchsbaum, S. (2015, April 27). How do we measure the impact of Grand Challenges? Bill & Melinda Gates Foundation.

https://perma.cc/36X8-3AXV

Buchsbaum, S., & Singer, P. (2014, September 23). Scaling impact: The next great challenge in global health and development innovation. Bill & Melinda Gates Foundation.

https://perma.cc/A7X6-CQ92

Buchsbaum, S., & Tesfagiorgis, K. (2016, May 3). Building the Grand Challenges community. Bill & Melinda Gates Foundation.

https://perma.cc/9FDV-N8KM

Burstein, M. J., & Murray, F. E. (2016). Innovation prizes in practice and theory. Harvard Journal of Law & Technology, 29(2), 401-453.

https://perma.cc/Z3ZD-GRCJ

Chalkidou, K., Garau, M., Silverman, R., & Towse, A. (2020). Blueprint for a market-driven value-based advance commitment for tuberculosis. Center for Global Development.

https://perma.cc/5A5M-FK6V

Chau, V., Hausman, V., Deelder, W., Rastegar, A., De Monte, M., & Aizenman, Y. (2013). The advance market commitment for pneumococcal vaccines: Process and design evaluation. Gavi, the Vaccine Alliance.

https://perma.cc/7N43-HXR2

Desouza, K. C. (2012). Challenge.gov: Using competitions and awards to spur innovation. IBM Center for the Business of Government.

https://perma.cc/3XF6-PS9N

Eagle, L. (2009). Social marketing ethics. Report prepared for the National Social Marketing Centre.

https://perma.cc/BW82-6SY5

Everett, B., Barnett, C., & Verma, R. (2011). Evidence review — Environmental innovation prizes for development. DFID Resource Centre for Environment, Water and Sanitation.

https://perma.cc/P98G-J8RJ

Gök, A. (2016). The impact of innovation inducement prizes. Nesta Working Paper No. 1318.

https://perma.cc/35CC-A7YH

Graff Zivin, J., & Lyons, E. (2021, May). The effects of prize structures on innovative performance. AEA Papers and Proceedings, 111, 577-581.

https://dx.doi.org/10.1257/pandp.20211119

Hayes, K. (Host). (2021, May 9). Big dollars, big rewards? The roles of prizes in driving innovation [Audio podcast episode]. In Resources Radio.

https://perma.cc/4NVX-5BTN

Hofstetter, R., Zhang, J. Z., & Herrmann, A. (2018). Successive open innovation contests and incentives: Winner-take-all or multiple prizes? Journal of Product Innovation Management, 35(4), 492-517.

https://doi.org/10.1111/jpim.12424

Hoyt, D., & Phills, J. (2007). X PRIZE Foundation: Revolution through competition. Stanford Graduate School of Business Case No. SI-90.

https://perma.cc/8GJD-P8Y7

Jeppesen, L. B., & Lakhani, K. R. (2010). Marginality and problem-solving effectiveness in broadcast search. Organization Science, 21(5), 1016-1033.

https://doi.org/10.1287/orsc.1090.0491

Jin, C., Ma, Y., & Uzzi, B. (2021). Scientific prizes and the extraordinary growth of scientific topics. Nature Communications, 12, 5619.

https://doi.org/10.1038/s41467-021-25712-2

Kalil, T. (2006). Prizes for technological innovation. The Brookings Institution Discussion Paper 2006-08.

https://perma.cc/B8PZ-ETMH

Kapczynski, A. (2012). The cost of price: Why and how to get beyond intellectual property internalism. UCLA Law Review, 59, 970.

https://perma.cc/7LJG-4TB3

Kay, L. (2011). How do prizes induce innovation? Learning from the Google Lunar X-Prize [Doctoral dissertation]. Georgia Institute of Technology.

https://perma.cc/4V25-8N5Y

Khan, B. Z. (2015). Inventing prizes: A historical perspective on innovation awards and technology policy. Business History Review, 89(4), 631-660.

https://doi.org/10.1017/S0007680515001014

Khan, B. Z. (2017). Prestige and profit: The Royal Society of Arts and incentives for innovation, 1750-1850. National Bureau of Economic Research Working Paper No. 23042.

https://doi.org/10.3386/w23042

Khan, B. Z. (2020). Inventing ideas: Patents, prizes, and the knowledge economy. Oxford University Press, USA.

https://perma.cc/NXK6-GU44

Koh Jun, O. (2012). Optimal use of donor funding to incentivize vaccine research & development for neglected diseases: An analysis of different R&D incentive mechanisms. Journal of Public and International Affairs, 22, 80-108.

https://perma.cc/TF4E-BEBU

Kremer, M. (1998). Patent buyouts: A mechanism for encouraging innovation. The Quarterly Journal of Economics, 113(4), 1137-1167.

https://doi.org/10.1162/003355398555865

Kremer, M. (2000a). Creating markets for new vaccines. Part I: Rationale. Innovation Policy and the Economy, 1, 35-72.

https://perma.cc/597W-NDVQ

Kremer, M. (2000b). Creating markets for new vaccines. Part II: Design issues. Innovation Policy and the Economy, 1, 73-118.

https://perma.cc/GEM7-RAFY

Kremer, M. (2002). Pharmaceuticals and the developing world. Journal of Economic Perspectives, 16(4), 67-90.

https://doi.org/10.1257/089533002320950984

Kremer, M., & Glennerster, R. (2004). Strong medicine: Creating incentives for pharmaceutical research on neglected diseases. Princeton University Press.

https://perma.cc/E7HL-F3A4

Kremer, M., Levin, J., & Snyder, C. M. (2020). Advance market commitments: Insights from theory and experience. AEA Papers and Proceedings, 110, 269-273.

https://doi.org/10.1257/pandp.20201017

Lee, P. (2014). Social innovation. Washington University Law Review, 92(1), 1-71.

https://perma.cc/M42V-SK6V

Levine, R., Kremer, M., & Albright, A. (2005). Making markets for vaccines: Ideas to action. The report of the Center for Global Development Advance Market Commitment Working Group. Washington, DC.

https://perma.cc/Y8ZL-NNMR

Light, D. W. (2005). Making practical markets for vaccines: Why I decided that the Center for Global Development Report, Making Markets for Vaccines, offers poor advice to government and foundation leaders. PLoS Medicine, 2(10), e271.

https://perma.cc/9LZ9-PS4M

Lincoln, A. E., Pincus, S., Koster, J. B., & Leboy, P. S. (2012). The Matilda Effect in science: Awards and prizes in the US, 1990s and 2000s. Social Studies of Science, 42(2), 307-320.

https://doi.org/10.1177/0306312711435830

Ma, Y., Oliveira, D. F., Woodruff, T. K., & Uzzi, B. (2019). Women who win prizes get less money and prestige. Nature, 565, 287-288.

https://perma.cc/V9XH-3S2E

Ma, Y., & Uzzi, B. (2018). Scientific prize network predicts who pushes the boundaries of science. Proceedings of the National Academy of Sciences, 115(50), 12608-12615.

https://doi.org/10.1073/pnas.1800485115

McKinsey & Company. (2009). “And the winner is …”: Capturing the promise of philanthropic prizes.

https://perma.cc/VD9A-FLRB

Médecins Sans Frontières (MSF). (2020). Analysis and critique of the advance market commitment (AMC) for pneumococcal conjugate vaccines (PCVs) and impact on access. MSF Briefing Document June 2020.

https://perma.cc/F3MV-4VDV

Moser, P. (2016). Patents and innovation in economic history. Annual Review of Economics, 8, 241-258.

https://doi.org/10.1146/annurev-economics-080315-015136

Mundel, T. (2014). Celebrating ten years of Grand Challenges. Bill & Melinda Gates Foundation.

https://perma.cc/9R4Q-KV2D

Murray, F., Stern, S., Campbell, G., & MacCormack, A. (2012). Grand Innovation Prizes: A theoretical, normative, and empirical evaluation. Research Policy, 41(10), 1779-1792.

https://doi.org/10.1016/j.respol.2012.06.013

Nicholas, T. (2013). Hybrid innovation in Meiji, Japan. International Economic Review, 54(2), 575-600.

https://doi.org/10.1111/iere.12007

Reschke, B. P., Azoulay, P., & Stuart, T. E. (2018). Status spillovers: The effect of status-conferring prizes on the allocation of attention. Administrative Science Quarterly, 63(4), 819-847.

https://doi.org/10.1177/0001839217731997

Roberts, J., Brown, C., & Stott, C. (2019). Using innovation inducement prizes for development: What more has been learned? Ideas to Impact Discussion Paper August 2019.

https://perma.cc/RQ8S-YVMF

Scherer, L. (2020). Price guarantee spurred vaccine development for poor nations. The Digest, 4, April 2020.

https://perma.cc/XXR5-VCK6

Schroeder, A. (2004). The application and administration of inducement prizes in technology. Independence Institute Research Paper IP-11-2004.

https://perma.cc/YB6S-8PEQ

Scudellari, M. (2011). Are advance market commitments for drugs a real advance? Nature Medicine, 17(2), 139.

https://perma.cc/K2YN-6LSJ

Sigurdson, K. (2021). Three essays on the impact of inducement prizes on innovation [Doctoral dissertation]. University of Toronto.

https://perma.cc/9REP-E2KS

Singer, P. (2014). Grand Challenges: Success and surprise. Bill & Melinda Gates Foundation.

https://perma.cc/GWR6-VCRK

Snyder, C. M., Hoyt, K., & Gouglas, D. (2022). An optimal mechanism to fund the development of vaccines against epidemics.

https://perma.cc/Q5JX-T57L

Sonderholm, J. (2010). A theoretical flaw in the advance market commitment idea. Journal of Medical Ethics, 36(6), 339-343.

https://doi.org/10.1136/jme.2009.033092

Stiglitz, J. E. (2007). Prizes, not patents. Post-Autistic Economics Review, 42, 46-47.

https://perma.cc/YSV7-TT3D

Tabarrok, A. (2022). What Operation Warp Speed did, didn’t and can’t do. Marginal Revolution.

https://perma.cc/V764-R63Z

Tasslimi, A., Nakamura, M. M., Levine, O., Knoll, M. D., Russell, L. B., & Sinha, A. (2011). Cost effectiveness of child pneumococcal conjugate vaccination in GAVI-eligible countries. International Health, 3(4), 259-269.

https://​​doi.org/​​10.1016/​​j.inhe.2011.08.003

WHO. (2007). Pneumococcal conjugate vaccine for childhood immunization. World Health Organization position paper. Weekly Epidemiological Record, 82(12), 93-104.

https://​​perma.cc/​​3RD4-2F7T

Williams, H. (2012). Innovation inducement prizes: Connecting research to policy. Journal of Policy Analysis and Management, 31(3), 752-776.

https://​​doi.org/​​10.1002/​​pam.21638

Wright, B. D. (1983). The economics of invention incentives: Patents, prizes, and research contracts. The American Economic Review, 73(4), 691-707.

https://​​perma.cc/​​8XFX-WS3X

Zhang, Y., Kim, J. A., Liu, F., Tso, L. S., Tang, W., Wei, C., … & Tucker, J. D. (2015). Creative contributory contests (CCC) to spur innovation in sexual health: 2 cases and a guide for implementation. Sexually Transmitted Diseases, 42(11), 625-628.

https://​​doi.org/​​10.1097/​​OLQ.0000000000000349

Zuckerman, H. (1992). The scientific elite: Nobel laureates’ mutual influences. In R. S. Albert (Ed.), Genius & Eminence, 2nd ed. (157-170). Pergamon Press.

https://​​perma.cc/​​XZZ5-8F52

Appendix

Appendix 1 - More detailed discussion of the effects of prizes on innovation and intermediate outcomes

In this section, we describe the eight best pieces of quantitative evidence (in our view) on the effects of prizes on innovation and intermediate outcomes.

Jin, Ma, and Uzzi (2021)

Jin, Ma, and Uzzi (2021) conducted a longitudinal analysis of nearly all recognition prizes worldwide to investigate whether scientific prizes predict changes in the growth of a research topic. Although the study covers recognition prizes only (and thus no inducement prizes), we include it nonetheless because the sheer scale of its data set and its methodology struck us as especially convincing relative to the other studies we found.

From various sources, the authors collected data on 405 scientific prizes that were conferred 2,900 times between 1970 and 2007 with respect to more than 11,000 scientific topics in 19 disciplines. They then merged this information with various data on scientific topics and publications. Using a difference-in-differences regression design combined with Dynamic Optimal Matching[27] (that is, the authors matched each prizewinning topic with five non-prizewinning topics that had statistically equivalent growth patterns on six different growth indices in the 10-year period before the prize was conferred), the authors found that topics associated with a scientific prize experience extraordinary growth in productivity, impact, and new entrants. More precisely, relative to matched non-prizewinning topics, prizewinning topics produce 40% more publications and 33% more citations, retain 55% more scientists, and gain 37% and 47% more new entrants and star scientists, respectively, in the first five to 10 years after the prize.
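For intuition, here is a bare-bones Python sketch of this kind of pre-trend matching on simulated data. It is our own illustration of the general idea, not the authors' actual Dynamic Optimal Matching implementation; all dimensions and variable names are assumptions.

```python
# Toy illustration of matching prizewinning topics to control topics on
# pre-prize growth trajectories (not the authors' exact procedure).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Describe each topic by 6 growth indices x 10 pre-prize years = 60 features.
prizewinning = rng.normal(size=(405, 60))    # 405 prizes in the sample
candidates = rng.normal(size=(11_000, 60))   # pool of non-prizewinning topics

# For each prizewinning topic, pick the 5 candidates whose pre-prize growth
# pattern is closest, mirroring the paper's five matched controls per topic.
nn = NearestNeighbors(n_neighbors=5).fit(candidates)
_, idx = nn.kneighbors(prizewinning)
print(idx.shape)  # (405, 5): five matched control topics per prizewinning topic
```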

Figure 7 - Scientific prizes and extraordinary growth (Jin, Ma, & Uzzi, 2021, p. 4)

Figure 7 above plots the average magnitudes of extraordinary growth for all six growth measures during the first 10 years following the prize. The plotted magnitudes are unfortunately not straightforward to interpret, but can be made interpretable with a little transformation. The differences in growth are expressed as Δ_t = log(Y_t) − log(Ỹ_t), where Y_t is the prizewinning topic's growth at time t and Ỹ_t is the same quantity for the matched topic. Figure 7 plots Δ_t. For an intuitive interpretation of the differences in growth after 10 years, we need to calculate e^(Δ_10) − 1 = (Y_10 − Ỹ_10)/Ỹ_10. Thus, for example, as seen in panel a in Figure 7 above, at 10 years after the prize, prizewinning topics are 39.8% more productive in terms of the number of publications than matched topics (Δ_10 = 0.3351, e^(Δ_10) − 1 = 0.3981). As an approximate but less accurate shortcut, one can also interpret Δ_t as a percentage change directly, without using the transformation.[28] [29]
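To make the conversion concrete, here is a minimal Python sketch using the Δ_10 value for publications quoted above; it simply reproduces the arithmetic in the preceding paragraph.

```python
import math

# Log-difference in growth between prizewinning and matched topics
# 10 years after the prize (publications, panel a of Figure 7).
delta_10 = 0.3351

# Exact conversion to a relative difference: e^Δ − 1 = (Y − Ỹ) / Ỹ
exact = math.exp(delta_10) - 1
print(f"{exact:.4f}")     # 0.3981: prizewinning topics are 39.8% more productive

# Approximate shortcut: read Δ directly as a percentage change (33.5%).
# The approximation error grows as |Δ| gets larger.
print(f"{delta_10:.4f}")  # 0.3351
```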

While the authors cannot determine a mechanism by which prizewinning plays a role in the abnormal growth of topics, they explain that their findings are consistent with, for instance, Zuckerman’s (1992) theoretical argument that prizes may act as signals to scientists that a prizewinning topic offers comparatively strong prospects for professional growth.

The authors also found that funding does not account for a prizewinning topic's growth. Rather, growth is positively associated with the degree to which the prize is discipline-specific, is awarded for recent research, or carries prize money. Unfortunately, the coefficients are again not straightforward to interpret intuitively, and the authors make no attempt to do so.

Overall, we found the study design very convincing given the inherent difficulty of constructing a convincing counterfactual for the effects of prizes. The data set on prizes from all over the world is very comprehensive and fairly recent. Moreover, we were impressed that the authors went to extraordinary lengths to collect and cross-validate their data from various sources, and constructed control groups of non-prizewinning topics with growth patterns statistically equivalent to prizewinning topics for 10 years before the prize year on six different growth criteria. Although the study does not claim causality, the excellent quasi-experimental study design convinced us that the results come close to establishing causality. However, as noted above, a major caveat of this study is that it focuses on recognition prizes only; inducement prizes are not part of the sample.[30]

Brunt, Lerner, and Nicholas (2012)

Brunt, Lerner, and Nicholas (2012) estimated the effect of inducement prizes on innovation using data on awards for technological development made by the Royal Agricultural Society of England (RASE) at annual competitions between 1839 and 1939. They detected large impacts of the prizes on competitive entry and, more importantly, they also found an effect on patents (as a proxy for innovation). The authors concluded that prizes encouraged competition and that medals were more important than monetary awards. Moreover, the impact on innovation they observed could not be explained by a mere redirection of existing inventive activity, implying that prizes raised aggregate innovative output.

RASE awarded both substantial monetary prizes (more than £1 million) and its own prestigious medals for machinery and innovative implements. The society asked award-seekers to produce specific improvements and gave them one year before awards were bestowed. The data set comprises about 15,000 entrant inventions that competed for the prizes and nearly 2,000 awards that were made, and was merged with a data set of all British patents from the same time frame.

Interestingly, the authors found that the costs incurred for technological development were higher than the monetary awards received by winners, indicating that the prizes leveraged a significant amount of private capital. On average, the monetary awards covered only around one third of the sale price of a single unit of an implement or machine exhibited by a successful entrant, as shown below in Figure 8.

Figure 8 - Regression plot of prizes awarded against projected sale price of winning innovation (Brunt, Lerner, & Nicholas, 2012, p. 13)

The authors used negative binomial regressions to predict the number of individual entrants and the count of granted patents in different technology categories, conditional on the awards.[31][32] A crucial aspect of the authors’ identification strategy to isolate the effect of prizes on contemporaneous innovative activity is the fact that they found the largest spikes in patenting activity in the year of the show, suggesting an immediate relationship between prizes and patenting in terms of timing (Brunt, Lerner, & Nicholas, 2012, p. 4).
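To illustrate the kind of count-data model this involves, here is a hypothetical Python sketch on simulated data. The variable names, data, and coefficients are our own assumptions, not the authors' actual specification.

```python
# Hypothetical sketch of a negative binomial regression of the kind used by
# Brunt, Lerner, and Nicholas (2012); data and variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "log_prize_money": rng.normal(5, 1, n),  # log of the monetary award offered
    "medals": rng.integers(0, 5, n),         # medals announced in the schedule
})
# Simulate an overdispersed count outcome (e.g. entrants or patents).
mu = np.exp(0.2 + 0.15 * df["log_prize_money"] + 0.10 * df["medals"])
df["entrants"] = rng.negative_binomial(n=5, p=5 / (5 + mu))

model = smf.negativebinomial("entrants ~ log_prize_money + medals", data=df).fit(disp=0)

# Interpreting count-model coefficients: an extra medal multiplies the expected
# count by exp(beta); doubling prize money multiplies it by 2**beta, because
# doubling adds log(2) to the logged regressor.
print(f"Extra medal: {np.exp(model.params['medals']) - 1:+.1%}")
print(f"Doubling prize money: {2 ** model.params['log_prize_money'] - 1:+.1%}")
```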


Table 1 below reports some of their estimates of the entrant equation. Columns 1 and 2 show the effects of monetary and medal awards on entrant counts. A doubling of monetary prizes implied an 11% increase in entrants, and each additional medal announced in the prize schedule increased the expected entrant count by 11%. The authors also investigated whether awards per se had an effect, as distinct from their monetary value. To test this, “column 9 specifies the monetary prizes as variables measuring both the average monetary amount and the number of monetary prizes offered in the schedule. A doubling in the number of awards, controlling for average value, induced a 33 per cent increase in entrants, while higher value prizes, conditioning on the number of awards, are associated with a slightly lower level of entry” (p. 681). According to the authors, “rather than compensating inventors directly for the costs of research and development, the awards provided a ‘seal of quality’ for inventors who could advertise this to potential buyers” (ibid).

Table 1 - Contest entrant regression results (Brunt, Lerner, & Nicholas, 2012, p. 22)

The authors ran similar regressions using the number of patents as an outcome and concluded that prizes boosted patents (as a proxy for innovation; see Table 2 below). However, the prize variables (money prizes and medals) were statistically insignificant in most specifications of the patent regressions (see Panels A and B). Only when the time period was limited to the prize rotation period (1856-1872) did statistically significant associations between the patent count and the prize variables emerge (see Panel C).

During the prize rotation period, prize awards were rotated every three years between three different technology categories. Econometrically, this means that these rotating ex ante prizes were not driven by demand or supply shocks to innovation, because they were announced independently of any cycles of innovation or “hot” technology categories, which alleviates bias concerns (pp. 5-6).

For example, during this period, an additional medal was associated with an 8% increase in patents (column 2) and a gold medal with a 12-15% increase in patents (columns 4 and 5). Doubling the monetary award only resulted in a 1% increase in patents (column 1). The authors' interpretation was that the rotation scheme gave inventors longer lead times, which raised the number of competition entries and the intensity of innovation.

Another result we found very interesting is the positive association between prizes and patents in regressions that included only non-participants of the RASE competitions who patented in agriculture-related areas (we do not show the regression table here). According to the authors, a possible explanation for this result is that the prize schedule signaled to these inventors which areas of technological development were potentially profitable. Thus, RASE's inducement prizes had an effect on aggregate innovative activity.

Nonetheless, we are still somewhat surprised by the fact that the prize variables were insignificant in most specifications of the patent regressions, i.e. when the time period was not restricted to the prize rotation period. As we are not exactly sure how to interpret this finding, we contacted the authors for clarification, but we unfortunately haven’t heard back.

The findings of this study have a number of caveats and limitations. One of them is that we deem the internal validity only moderately high: the results might be biased upward or downward for several reasons, which we skip here. Moreover, it is unclear to what extent these historical results from the 19th and early 20th centuries would extend to current prizes, or to prizes for different outcomes, such as those targeting areas where market failure occurs, as is the case with neglected tropical diseases, for instance.

Table 2 - Patent regression results (Brunt, Lerner, & Nicholas, 2012, p. 23)

Nicholas (2013)

Nicholas (2013) conducted a study with a methodology similar to that of Brunt, Lerner, and Nicholas (2012), examining the effects of recognition prizes on patents in late 19th and early 20th century Japan. Nicholas (2013) used a panel data set composed of patent counts and various data on prizes during the period 1885-1911 in Japan. He concluded that prizes strongly boosted patents, especially in less developed prefectures, and that they also induced large spillovers of technical knowledge in prefectures adjacent to those with prizes.

Similar to the approach used by Brunt, Lerner, and Nicholas (2012), Nicholas (2013) also relied on a negative binomial regression model using within-prefecture variation over time to identify the effect of prizes on patents (see Table 3 below).[33] As the number of patents may not respond immediately to the change in incentives, he used distributed lags of prize variables to estimate the impact on patents.

Here we describe some exemplary results of a simple dummy variable approach he used to identify the presence or absence of prize competitions in a prefecture in a given year (see Table 3 below, columns 5-8). In column 5 (panel A), the size and statistical significance of the coefficients increased from t − 1 to t − 3, though they were, at best, significant at the 10% significance level. Across different specifications (columns 5-8), he found that the t − 3 prize competition dummy was associated with an approximately 11-15% increase in the number of patents.[34] The relationship between prizes and patents was even stronger when only considering less technologically developed prefectures (panels B and C). Summing the distributed-lag coefficients (t − 1 to t − 3) implies that prizes boosted patenting by approximately 17-61%. He explained that these results were consistent with other researchers' accounts indicating that one consequence of the prize competitions was to boost technological development in less advanced areas. However, in our understanding, Nicholas (2013) did not investigate or speculate why the effect was largest in less developed areas, and we also refrain from doing so.
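As a quick illustration of the arithmetic behind that cumulative figure, a short Python sketch; the lag coefficients below are placeholders, not Nicholas's actual estimates.

```python
import math

# Placeholder distributed-lag coefficients on the prize competition dummy
# at t-1, t-2, and t-3 (illustrative values, not Nicholas's estimates).
lag_betas = [0.03, 0.05, 0.11]

# In a negative binomial model, the cumulative effect over the lags
# multiplies expected patents by exp(sum of the coefficients).
print(f"{math.exp(sum(lag_betas)) - 1:.1%}")  # 20.9% implied boost to patenting
```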

Table 3 - Patent regression results (Nicholas, 2013, p. 18)

Nicholas (2013) acknowledged that possible selection effects might bias the results, that is, if the location choice of the prize competitions was correlated with patent outcomes. To counter this concern, he tested for selection effects using prefecture-level characteristics to predict prize competitions. He found little evidence of prize competition selection across different specifications, concluding that selection was unlikely to be a concern.

Interestingly, he also did a cost-benefit assessment of the prize competitions, linking competition expenditures with the expected market value of patents. He estimated an implied cost per patent higher than the value of patents, implying that the cost of the prize competitions was high relative to the output gains. The financial cost looks more reasonable when taking into account the extra patents induced by spillovers and the non-patented innovations that were plausibly generated as a consequence of the prizes.

While we think this study was carefully executed, we deem its internal validity a bit lower than that of Jin, Ma, and Uzzi's (2021) study. The coefficients are reasonably robust across specifications, but have quite large standard errors. Moreover, it is unclear how relevant these results from early 20th century Japan are to modern inducement prizes for innovation in other settings. The prizes in this analysis were mostly non-pecuniary, and inventors could also pursue patents. Moreover, the prizes were not inducement prizes, but were awarded ex post for innovations that had already been developed. Nonetheless, this analysis shows that non-pecuniary prizes can provide effective incentives for inventors, especially in areas at an early stage of technological development.

Sigurdson (2021)

Sigurdson (2021) wrote an empirical PhD thesis in economics aiming to establish a causal relationship between prizes and the rate and direction of innovation. While his thesis does not directly measure the impact of prizes on innovation, it examines intermediate outcomes that might, in turn, affect innovation. In particular, he explored how prizes affect (1) collaboration behavior among innovators at the individual level, (2) the use of particular knowledge or approaches at the field level, and (3) the ability to compete in a subsequent prize at the team level. We find his thesis very carefully executed and the methodology convincing, although it cannot fully address all endogeneity concerns related to the propensity of some researchers to self-select into prize participation.[35] We summarize the findings of the first two studies below and skip the last study, as we deem it less relevant to this report.

Study 1 - Prizes and collaboration behavior among researchers

In the first study, Sigurdson (2021) explored whether inducement prizes increase the relative returns to research collaboration at the level of the individual innovator using the 2005 DARPA Grand Challenge — a prize competition for autonomous vehicles — as the research setting and a data set of more than 1,600 scientists, including prize participants and matched control scientists. He used a difference-in-differences approach combined with matching, tracking publishing activity before and after the prize and comparing university professors who participated in the prize with a control group of non-participant professors.
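A minimal sketch of such a difference-in-differences comparison on simulated data, with hypothetical variable names (not Sigurdson's actual specification):

```python
# Hypothetical difference-in-differences sketch for the coauthor analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for scientist in range(400):
    participant = int(scientist < 100)  # prize participants vs. matched controls
    for post in (0, 1):                 # before vs. after the 2005 prize
        # Simulate unique coauthors per year with a positive interaction effect.
        lam = 10 + 3 * participant + 1 * post + 4 * participant * post
        rows.append({"participant": participant, "post": post,
                     "coauthors": rng.poisson(lam)})
df = pd.DataFrame(rows)

# The coefficient on participant:post is the difference-in-differences
# estimate of the prize's effect on unique coauthors per year.
did = smf.ols("coauthors ~ participant * post", data=df).fit()
print(did.params["participant:post"])  # ~4 by construction
```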

He found that, compared to researchers who did not participate, prize participants experienced a 31% increase in the number of unique coauthors they published with per year (corresponding to four additional coauthors) in the 10-year period after the prize.[36] This effect was partly driven by increases in research productivity (i.e. approximately two additional publications per year, compared to a control group with five publications per year), in returning coauthors (intensive margin), and in new first-time coauthors (extensive margin). He also found that prize participants experienced an increase in coauthor diversity measured by publication subject area — that is, participants were more likely to publish work with coauthors who were more active in other disciplines. These effects were strongest among researchers from higher-prestige universities. Interestingly, the effects were not driven by increased collaboration with other prize participants. Thus, the increase in coauthors of prize participants did not appear to be driven by new contacts met directly during prize competitions.

While Sigurdson (2021) remained rather vague in discussing possible mechanisms[37] for the effects, we find it intuitively plausible that being incentivized to work on a problem that requires interdisciplinary expertise and an expansion of one's typical circle of coauthors (as is arguably the case with autonomous vehicles) increases the propensity to work with more, and more diverse, coauthors on later research projects. Unfortunately, it is not clear whether this result would extend to other settings and prize competitions in which interdisciplinary work is less crucial. Moreover, we are not sure whether and how these effects on collaboration affect the quantity and quality of innovation.

One might be concerned that there is some selection bias in prize participation, which would bias the results. It is possible that some types of researchers are predisposed to participate in the prize, and that the same characteristics that lead to this predisposition also affect the propensity to collaborate after the prize, which could lead to an overestimation of the prize effect. Sigurdson (2021) did two robustness checks to address this concern. First, he combined the difference-in-differences approach with propensity score matching to restrict the control group to a matched sample of non-participants who were most likely to participate in the prize based on pre-prize variables, but did not participate. Second, he dropped researchers from top-performing teams (i.e. highly collaborative outliers) from the sample. Overall, the results from these robustness tests are similar to the results using the full sample (albeit with different magnitudes and statistical significance, though one cannot uniformly say that these changes go in a particular direction), supporting the main result that prize participation increased post-prize collaboration measured via unique coauthors.
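For intuition on the first robustness check, here is a bare-bones propensity-score matching sketch on simulated data; the covariates and variable names are our own illustrative assumptions.

```python
# Hypothetical propensity-score matching step (illustrative, simulated data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "pre_pubs": rng.poisson(5, n),       # pre-prize publications per year
    "pre_coauthors": rng.poisson(8, n),  # pre-prize unique coauthors per year
    "participant": rng.integers(0, 2, n),
})

# 1) Estimate each researcher's propensity to participate from pre-prize data.
X = df[["pre_pubs", "pre_coauthors"]]
df["pscore"] = LogisticRegression().fit(X, df["participant"]).predict_proba(X)[:, 1]

# 2) For each participant, keep the nearest non-participant by propensity score
#    (matching with replacement, so a control can be reused).
treated = df[df["participant"] == 1]
controls = df[df["participant"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_controls = controls.iloc[idx.ravel()]

# 3) Rerun the difference-in-differences on treated plus matched_controls.
```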

Study 2 - Prizes and the identification of breakthrough ideas in science

In the second study, Sigurdson (2021) tested the hypothesis that inducement prizes are an effective mechanism for identifying breakthrough ideas in science. To explore this hypothesis, he again used data from the 2005 DARPA Grand Challenge for autonomous vehicles and investigated how the prize affected the relative salience of a subfield of research within robotics that was targeted by the prize.

His analysis was based on a difference-in-differences approach and a by-product of the prize: a special issue of a scientific journal consisting of articles written by prize participants. Specifically, he used the citations in this journal issue, as he assumed them to be a representative sample of robotics knowledge used by prize participants. More precisely, he compared the citation rates of the literature cited in this journal issue (i.e. the prize-relevant literature) after the prize with a control group of robotics literature in the same field and time period, but not directly related to the prize. To reduce endogeneity concerns related to literature published in response to the prize, he limited his analysis to research published before the prize was announced.

According to Sigurdson (2021), the difference in future citation rates between these two groups of research (i.e. prize-relevant research and control research) can be interpreted as a measure of the impact of the prize on the trajectory of research in the field of robotics. He found that the prize-relevant literature experienced an 18% rise in future citations relative to the literature in the control group, and a 36% rise when only focusing on journal articles, but not conference papers or reviews. However, the standard errors are very large. Sigurdson’s (2021) interpretation of these results was that inducement prizes may provide a mechanism for identifying breakthrough ideas in science and for helping these ideas to take hold in a field “by virtue of highly visible, objective benchmarks for the evaluation of different and competing approaches” (p. 70).

Ma and Uzzi (2018)

Ma and Uzzi (2018) used large-scale data on scientific recognition prizes to examine the growth dynamics of prizes and the connections prizes and prizewinners make within and across disciplines. More precisely, they studied how genealogical and coauthorship networks were associated with prizewinning and the concentration of prizes among scientists. The authors used data on more than 3,000 different scientific prizes across diverse disciplines and the career trajectories of almost 11,000 prizewinners worldwide for over 100 years. Their analysis combined different methodologies, such as ordered logit regressions for a scientist’s propensity for winning multiple prizes and different network analysis methods.

They found several links between prizes and scientific advances. First, despite a proliferation of prizes over time[38] and across the globe, prizes were more concentrated within a smaller and smaller group of tightly interconnected scientific elites. This suggested that the boundaries of science were advanced by a relatively small number of ideas and scholars. For example, 64% of prizewinners have won two prizes and 14% have won five prizes or more. Second, certain prizes created interlocks between disciplines and thereby created pathways by which knowledge spread across disciplines. This means that certain prizewinners and their ideas connected disciplines via the prize network. For example, winners of the Howard Hughes Medicine Institute Award were more highly cited across disciplines and combined more novel lines of research than a control group of equally accomplished, non-prizewinning researchers. Third, genealogical and coauthorship networks predicted a scientist’s propensity for winning multiple prizes.

It is unclear how these networks affect knowledge transfer and innovation. According to Ma and Uzzi (2018), two effects are possible. On the one hand, if these networks work like other social networks, they may provide better divisions of specialized labor, continuous learning opportunities, and support for risk-taking. On the other hand, the small groups and tightly interconnected elites may be vulnerable to in-group thinking or could create in-group biases. It’s unclear which effect is dominant.

Azoulay, Stuart, and Wang (2014) & Reschke, Azoulay, and Stuart (2018)

Azoulay, Stuart, and Wang (2014) compared the citations of papers written by scientists who ended up becoming Howard Hughes Medical Institute (HHMI) investigators (and thus received a significant source of no-strings-attached funding) with those of papers by scientists who did not. The papers from future HHMI winners received more citations, but the effect was small and short-lived.

Interestingly, there seems to be a redistribution effect: using a difference-in-differences study design, Reschke, Azoulay, and Stuart (2018) showed that scientists in the same field as an HHMI winner received fewer citations after the announcement of the award. However, this effect depended on the number of citations a particular topic was getting prior to the award. For example, for a topic that was already well cited, someone else in the field receiving an HHMI appointment appeared to have a negative effect on citations of neighboring articles.[39] On the other hand, for topics with minimal citations, someone else in the field receiving an HHMI appointment had a positive effect on neighboring articles. However, this effect seems to be most relevant for topics that were very poorly cited (i.e. in the 10th decile). According to Reschke, Azoulay, and Stuart (2018, p. 1):

This pattern reflects more than the trivial transfer of attention from non-winners to winners: once prizes are announced, actors cede scientific territory to prize winners and pursue other opportunities. These negative spillover effects are moderated or even reversed by scientists’ social connections and by the novelty and stature of scientific domains.

We came across these two articles at a very late stage in this report. Thus, we didn’t have time to review their quality and methodology in detail.

Appendix 2 - Summary table of evidence of effects of prizes on innovation and intermediate outcomes

Appendix 3 - More detailed discussion of design issues

Monetary vs. non-monetary incentives

  • Jin, Ma, and Uzzi (2021) conducted a longitudinal analysis of nearly all scientific recognition prizes worldwide, covering over 11,000 scientific topics from 19 disciplines. They found that topics associated with a scientific prize experienced extraordinary growth in productivity, impact, and new entrants (we describe the paper in more detail here). More importantly, they found that this growth is higher if the prize has prize money. Unfortunately, the effect sizes for the relationship between prize money and growth are not straightforward to interpret, but our interpretation is that they are rather modest. Moreover, as prize money is operationalized as a binary variable, it's not clear whether the magnitude of the prize money matters. A major caveat is that the study focuses exclusively on recognition prizes; thus, the findings may not extend to inducement prizes.

  • Boudreau and Lakhani (2011) conducted a 10-day field experiment of a contest in which over 500 software developers prepared solutions to computational algorithmic problems. They conducted this randomized controlled trial in collaboration with Topcoder — a company that administers computer programming contests. The competitors were randomized into two groups, one of which could win a cash prize of $1,000, while the other group received no cash prize. Their results suggest that the cash prize nearly doubled problem-solving performance.[40] Interestingly, the authors found that the effect of cash incentives was significantly greater for higher-skilled participants (as measured by ex ante skill, prior to the experiment) relative to lower-skilled participants, consistent with higher-skilled workers having more of a chance of winning the prize. Again, it is not clear whether a higher cash amount would have increased problem-solving performance even further.

  • Brunt, Lerner, and Nicholas (2012) investigated in an econometric study whether prize competitions run by the Royal Agricultural Society of England in the 19th and early 20th centuries created competitive entry and spurred patents. As we explained here, while they found that both monetary prizes and medals have a positive relationship with the number of patents, medals were more important than monetary rewards in increasing patent numbers. Tom Nicholas, one of the authors of this study, was interviewed by McKinsey (2009, p. 31), and stated, “People are much more induced by winning a medal award than by winning a monetary award.” He hypothesized that “it's much easier to market a product having won a medal.”

  • Kay (2011) used an empirical, multiple case-study methodology and various data sources to investigate three cases of recent aerospace technology prizes: a main case study, the Google Lunar X Prize; and two pilot cases, the Ansari X Prize and the Northrop Grumman Lunar Lander Challenge (we describe the paper in more detail here). Entrants were generally attracted by the non-monetary benefits of participation (e.g. reputation, visibility) and the potential market value of the technologies involved in competitions. Kay concluded that the monetary reward was not as important as other prize incentives, yet it was still important to position and disseminate the idea of the prize.

  • Murray et al. (2012) systematically examined the use of a grand innovation prize (GIP) in action (see Types and definitions of prizes for a very brief explanation of GIPs), the Auto X Prize (we describe the paper in more detail here). They compared observations of GIPs from three domains — empirical reality, theory, and policy — to better understand their function as an incentive mechanism for encouraging new solutions to large-scale social challenges. They found that while a core assumption of economic theory is that the prize incentive is entirely determined by the monetary prize value, in practice, prize participants are motivated by a much broader set of incentives. They recommended that prize organizers consider incentives as extending beyond the money and intellectual property rights to encompass media attention, reputation, and education (p. 9).

  • Khan (2020) used extensive archival data to study technological change across time, countries, and political economic systems. She found that in many prizes, the monetary payout was merely a windfall rather than an incentive. Moreover, many contestants incurred expenditures that exceeded the award itself, and benefited from returns other than the cash prize — e.g. returns in adjacent markets, celebrity advertising, product differentiation, learning gained from the projects of other participants, and the prospect of pursuing additional prizes or grants.

One could hypothesize that visibility, prestige, and media coverage are a function of the size of the prize. While Kay (2011, p. 267) concluded that a monetary reward is important to disseminate the idea of a prize in the media, McKinsey (2009) found only a small correlation between the cash reward of prizes and the exposure they received (proxied by the frequency of online mentions in Google search results), even when correcting for the longevity of a prize (see Figure 9 below). For example, Pulitzer Prizes, which award “only” $10,000 to each winner, receive more exposure than any other prize in the United States, as approximated by the number of online mentions. Thus, if there is a relationship between cash rewards and visibility, it seems rather weak.

Figure 9 - Award size of a prize and exposure (McKinsey, 2009, p. 58)

Prize structures and compensation schemes

  • Graff Zivin and Lyons (2021) conducted a field experiment in partnership with a life sciences company to compare how two different compensation schemes affected innovation performance. They ran a 54-hour software innovation contest within the firm, in which 184 participants could win a prize of up to $15,000. A winner-take-all compensation scheme generated significantly more novel[41] innovation relative to a compensation scheme that offered the same total compensation shared across the 10 best innovations. Moreover, they found that the winner-take-all compensation scheme did not reduce output levels[42] on average, but increased them when innovators were working in teams.

  • Hofstetter et al. (2018) ran an experiment by inviting a cohort of innovators to participate in two successive contests and randomly varied the incentive structure. Half of the participants were allocated into winner-take-all contests, and the other half into contests with a multiple-prize structure in which the top 20 innovators would receive a prize (the total prize money being identical across groups). The winner-take-all contests yielded significantly better ideas than the multiple-prize contests in the first round. However, this result flipped when the innovators were asked to participate again in the second contest. While 50% of those in the multiple-prize contest group chose to participate again, only 37% did so from the winner-take-all group. Moreover, innovators who had received no reward in the first contest showed significantly lower effort in the second contest and generated fewer ideas. In the second contest, the multiple-prize contests generated better ideas than the winner-take-all contests. Confirming these findings, the authors found similar effects in an empirical investigation of over 260 contests and 6,000 innovators from the open innovation platform Atizo.com. Most importantly, these data show that innovator churn could be reduced by adding more (albeit smaller on average) rewards.

  • Brüggemann and Meub (2015) investigated in a laboratory experiment the effects of two different innovation contests on subjects' innovativeness. Subjects were randomly allocated into two different contests: one in which a prize could be won for aggregate innovativeness, and another in which a prize was awarded for the best innovation.[43] The experiment consisted of a creative real-effort task simulating a sequential innovation process, in which subjects set royalty fees (which also served as a measure of cooperation, as they are the prices for building upon others' prior innovations[44]) for the products they created. The authors found that, relative to a benchmark condition without an innovation contest, both contest conditions reduced the subjects' willingness to cooperate. With respect to innovativeness, neither a prize for aggregate innovativeness nor a prize for the best innovation had a positive overall impact. The authors therefore concluded that neither type of contest can unambiguously be recommended as an effective policy instrument.

Number, background, and sorting of competitors

  • Boudreau et al. (2011) used a data set of almost 10,000 software competitions related to the solution of 645 problems from Topcoder, a contest platform for elite software developers, to investigate the implications of different numbers of competitors. The number of competitors in the competitions ranged between 10 and 20 and was plausibly exogenously determined. The authors found that a larger number of competitors reduced the effort exerted by each participant, but increased the probability of extreme-value solutions (i.e. particularly good solutions). The effort-reducing effect of greater rivalry dominated for less uncertain problems, whereas the effect on the extreme value dominated for more uncertain problems. Thus, the authors concluded that adding competitors systematically increases overall contest performance for high-uncertainty problems (see the toy simulation after this list for an illustration of this extreme-value logic). They did not, however, discuss ways of increasing rivalry in practice, but merely mentioned that rivalry can be constrained by admitting only a limited number of entrants.

  • Jeppesen and Lakhani (2010) analyzed results from online scientific problem-solving challenges hosted at InnoCentive.com to explore the relationship between a contest winner’s area of expertise and the focal field of the problem. They used a data set of 166 science challenges, originating from the R&D labs of 26 firms in 10 countries, involving over 12,000 scientists. They found a positive relationship between marginality (being distant from the field of the problem) and problem-solving success. Both technical marginality (i.e. a solver’s self-assessed technical expertise distance from the problem field) and social marginality (proxied by being a female scientist-solver, as women have been shown to be in the “outer circle” of the sciences) related independently to successful problem resolution. The authors state that one interpretation of the findings would be that the best way to solve problems is to have experts from vastly different fields attempt solutions; however, they urged caution in extrapolating the findings to the extreme.

  • Boudreau and Lakhani (2011) conducted a 10-day field experiment of a contest in which over 500 software developers prepared solutions to computational algorithmic problems, as described earlier in this section. The main focus of this experiment was on the effects of allowing participants to self-select into competitive versus team-based regimes. The goal was to evaluate whether allowing workers to sort into different institutional settings based on their intrinsic preferences would increase performance. Participants were randomly assigned into two groups with identical skills distributions and exposed to the same competitive institutional setting. The “sorted” group was composed of individuals who preferred the competitive regime to a team-based outside option. The “unsorted” group had population-average preferences for working in the regime or the outside option. The authors found that sorting on the basis of institutional preferences doubled both effort and the performance of solutions. Gök (2013) interpreted these findings as showing that while teamwork is an important facilitating factor for innovation performance, it should be voluntary and natural; prizes whose rules strictly enforce teamwork might decrease innovative performance.
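As referenced in the Boudreau et al. (2011) item above, the extreme-value logic can be seen in a toy simulation: adding competitors can lower each entrant's effort yet still raise the expected best solution when outcomes are highly uncertain. The functional forms below are stylized assumptions of ours, not the authors' model.

```python
# Toy simulation: more competitors -> less effort each, but the maximum of
# more noisy draws can still be higher when uncertainty is large.
import numpy as np

rng = np.random.default_rng(0)

def expected_best(n_competitors: int, uncertainty: float, sims: int = 100_000) -> float:
    effort = 1.0 / np.sqrt(n_competitors)  # stylized: effort falls with rivalry
    quality = effort + uncertainty * rng.standard_normal((sims, n_competitors))
    return quality.max(axis=1).mean()      # expected quality of the best entry

for n in (10, 20):
    print(n, round(expected_best(n, uncertainty=1.0), 3))
# With high uncertainty, the expected best solution is higher with 20
# competitors than with 10, despite lower effort per competitor.
```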

Effective design features based on learnings from historical prizes

Khan (2020, Appendix 4) listed a number of design features of potentially effective prizes, which we copy here.

  • “Design of prize system:

    • Transparency and accountability in rules and decision-making

    • Projects with short completion times:

      • Since the revenue is fixed, a longer time period increases uncertainty and costs to participants, which reduces expected profits

    • Finance is staggered, with follow-up monitoring

    • Coordination among prize-granting agencies to prevent duplicative efforts

    • Specific mechanisms in place for scaling, commercialization to meet consumer needs

    • Rules to eliminate rent-seeking; overcompensation through multiple sources

    • Governance issues are explicitly recognized and addressed”

These design features are stated in the appendix of Khan's book without any context or further explanation; thus, we put rather little weight on them. We suspect that Khan derived those recommendations from her analysis of what went wrong with historical prizes, which we discuss in more detail in Problems with and lessons learned from historical prizes, but we are unsure.

Appendix 4 - Descriptions of Google Lunar X Prize and Auto X Prize

Description of Google Lunar X Prize

In this section, we summarize the description of the Google Lunar X Prize in Kay (2011, p. 96ff) and the Wikipedia article. The Google Lunar X Prize was a $30 million competition that started in 2007, organized by the X Prize Foundation and sponsored by Google. The competition required prize entrants to land a robot on the surface of the Moon, among other secondary goals, by December 2015. According to the X Prize Foundation, this was the largest prize competition in terms of cash purse and was designed to “accelerate technology developments supporting the commercial creation of multiple systems capable of reaching the lunar surface and performing operations over an extended period of time.” More broadly, the purpose of the competition was to: educate the global public about the benefits of opening up space and exploring the Moon; inspire and excite the world about science, technology, math, and engineering; enable and qualify a new generation of engineers and entrepreneurial companies able to design, build, deliver, and operate space hardware; and open the space frontier to new ideas and new participants by lowering the costs by a factor of 30.

The challenge posed by this prize required launching a spacecraft from Earth to the Moon, landing on the Moon, deploying a rover to traverse 500 meters, and collecting and sending back to Earth high-definition video footage. The cash purse was divided into a Grand Prize (valued at $20 million and awarded to the first team to complete all of the mission requirements), Second Place Prize (valued at $5 million), and other Bonus Prizes (valued at $4 million). At the discretion of the X Prize Foundation, the prize might also be awarded (as a “consolation prize”) to a team that accomplished most of the requirements to win the Grand Prize but, due to unforeseen circumstances, ultimately failed to meet all the mission requirements. Teams could also earn additional money by completing additional tasks beyond the baseline requirements, such as traveling 10 times the baseline distance requirement, or capturing images of man-made objects on the Moon.

To enter the Google Lunar X Prize, teams had to register between 2007 and 2010, submit an application package with diverse information about the team and its members, finances, and mission plan, and pay a registration fee of $10,000, which was later raised to $50,000. The participating teams owned all the intellectual property associated with the design, manufacture, and operation of their spacecraft. By 2011, 35 teams from 17 countries had entered the Google Lunar X Prize, of which six had withdrawn or merged. The actual number of teams and participating countries exceeded the initial target of the X Prize Foundation, which was about a dozen teams from a few countries. Many more potential entrants demonstrated interest in this competition: the X Prize Foundation received more than 2,500 inquiries from individuals, companies, and universities from 96 different countries.

The original deadline was the end of 2014, and it was later extended to 2018. By 2018, five teams remained in the competition. However, the X Prize Foundation announced that “no team would be able to make a launch attempt to the Moon by the [31 March 2018] deadline… and the US$30 million Google Lunar X Prize will go unclaimed.” The foundation went on to announce that the prize would continue as a non-cash competition. In 2019, a participating spacecraft crashed while attempting to land on the Moon. The team was awarded a $1 million “Moonshot Award” by the foundation in recognition of touching the Moon's surface.

Description of Auto X Prize

In this section, we summarize the descriptions of the Auto X Prize in Murray et al. (2012) and Burstein and Murray (2016). The Progressive Automotive Insurance X Prize, also called the Auto X Prize, was a $10 million prize for a highly efficient vehicle. The Auto X Prize was launched in 2006 and known as the Automotive X Prize until 2008, when Progressive Insurance was announced as title sponsor. It was launched with the broad purpose of providing incentives for “teams from around the world to focus on a single goal of [building] viable, super fuel-efficient vehicles that give people more car choices and make a difference in their lives.” The basic goal of the prize was stated simply: “A ten million dollar cash purse will be awarded to the teams that win a long-distance stage race for clean, production-capable vehicles that exceed 100 miles-per-gallon energy equivalent.”

The goal of the prize translated into a wide range of requirements. Entrants had to demonstrate compliance with certain safety standards, and that their vehicles could be manufactured at scale and in accordance with a sustainable business plan. Moreover, the vehicles had to appeal to consumers, incorporating all of the usual features of modern cars — so that an average person without special knowledge could drive the car. The initial prize proposal called for two divisions — Mainstream and Alternative — with the “same requirements for fuel economy and emissions, but different design constraints.” A “winner-takes-all” design awarded $5 million per division to the team with the fastest vehicle achieving fuel efficiency in excess of 100 miles-per-gallon equivalent around a course.

The Auto X Prize also had goals beyond the development of new automotive technologies. The U.S. Department of Energy contributed a $3.5 million national education program for school students in conjunction with the Auto X Prize. The prize organizers also sought publicity for the prize, with the intention of using it as a way to start a broader national conversation about energy efficiency and to create an industry for fuel-efficient vehicles. Thus, the organizers sought to “provide many opportunities for recognition so that it’s worthwhile to compete, and not just for first place,” and to “make heroes out of the competitors and winner(s) through widespread exposure, media coverage and a significant cash reward” (Burstein & Murray, 2016, p. 420).

The prize attracted a wide range of competitors, such as auto industry professionals working for startup companies with venture capital financing, hobbyists who self-financed their entries, students from universities and a high school, and engineers from other industries. The entrants brought a range of technical expertise to the competition, including mechanical engineering, electrical engineering, computer science, materials science, and aerospace engineering.

The competition was conducted in a series of stages, each stage designed to winnow the field. Registration was easy: a team provided an application with basic technical information about the vehicle, paid a $5,000 entry fee, and signed an agreement. “The X Prize administrators applied a light screen to registrations, weeding out only those applicants that were ‘clearly unqualified.’ By the February 2009 deadline, 111 teams registered a total of 136 vehicles for judging in the next stage. The registered teams then competed in a ‘design judging’ stage, in which they provided detailed data submissions to demonstrate that their vehicles were production-capable. The Auto X Prize provided contestants with broad outlines of the minimal design requirements, and then convened panels of experts with broad discretion to determine which cars would qualify for the on-track events. These expert panels — judging submissions on safety and emissions, manufacturability and cost, features, and business plan — met and considered submissions over the course of several days” (Burstein & Murray, 2016, p. 421).

Forty-three teams representing 56 vehicles passed the design judging stage and qualified for the on-track race events. The competitors were provided with additional technical requirements and safety checks on their vehicles. Only 33 vehicles eventually entered the race events, which were conducted in several stages. Ultimately, nine teams competed in the final races. Two winners were announced in September 2010. The mainstream class winner was a group of automobile engineers from Virginia, whose gasoline-powered car was significantly lighter than any car on the market.

Appendix 5 - Steps to decide whether an AMC is appropriate

In the following, we provide a copy of two proposed steps to determine whether an AMC or a different type of program is appropriate (Chau et al., 2013, p. 81):

1. “Evaluate the current market context and challenges.

Future program designers should begin by identifying the level of market maturity and type of market failure. Market failures can exist in many forms and across many points of a product life cycle. Below are a few examples:

  • New product development: Nascent markets where a product has not yet been developed and research and development is required

  • Product launch: Late-stage markets where a product has been developed or nearly developed, but has not launched in desired markets or capacity is lacking

  • Secondary supplier entry: Developed markets where additional suppliers should be incentivized to enter an existing market with a revised or improved product offering

  • Lack of product uptake: Markets in which a product exists but has not been utilized effectively or demand has not materialized on a large scale

2. Determine the best approach for addressing the market challenge.

The “Making Markets for Vaccines” report from the CGD working group outlined two separate conceptions of an AMC: early-stage programs for products that require intensive R&D, and late-stage initiatives for products much closer to market. The two scenarios require very different approaches to pricing and structure. In many cases, particularly those where products are very near to market, an AMC as originally conceived may not be the approach best suited to the particular market failure. In these cases, program designers should feel free to deviate from the original AMC concept and borrow approaches from other forms of market-shaping mechanisms. For instance, manufacturers have stressed their preference for individual purchase guarantees to offset the risks they run in making large upfront investments; though these may not be suitable in all contexts, intermediate approaches that improve the situation for all sides may be possible. Designers of future programs should, from the start, take into account the pragmatic realities of a market and design tailored, nuanced solutions accordingly.”

Footnotes

  1. ^

    According to Roberts, Brown, and Stott (2019, p. 13), there are many different terms that describe some form of an inducement prize, such as challenge prize, (social) innovation competition, innovation contest, innovation challenge, or research tournaments. In this report, we simply refer to all of those prizes as inducement prizes, as their distinction seems rather blurred to us.

  2. ^

    We have also seen advance market commitments referred to as a type of inducement prize (e.g. Williams, 2012, p. 12), though most articles we came across treated inducement prizes and advance market commitments as separate concepts (e.g. Koh Jun, 2012, pp. 86-87).

  3. ^

This was confirmed in a conversation with Christopher Snyder.

  4. ^

    Note that Grand Challenges is not a protected term and has been used in many different contexts, meaning very different things, e.g. an initiative to improve STEM education in the U.S. Here we only refer to the set of initiatives launched by the Gates Foundation.

  5. ^

Nowadays, there are patent systems in all countries (Hayes, 2021). Thus, the prizes implemented in recent decades (like those run by the X Prize Foundation, or the Netflix Prize) cannot be considered alternatives to patents, but rather complements, and share little in common with the hypothetical prizes analyzed in theory (Sigurdson, 2021, p. 3).

  6. ^

    Gök’s (2013) annex (pp. 16-20) provides a nice summary of the evidence in a table format.

  7. ^

    Most of the studies reviewed by Gök (2013) are based on ex ante assessments and case studies, which we believe provide little insight on the causal impact of prizes.

  8. ^
  9. ^

    She argued in her podcast, “It’s not hyperbole to say that the American economy has been the most successful in all of human history. The question is, how did we get to this point? […] In Europe, innovation policies were the opposite of inclusive. They felt that only privileged people with wealth or status were capable of recognizing and making valuable contributions. And my data show that rewards were based primarily on the identities of inventors, rather than the productivity of their discoveries. The American model was completely different. It was based on the principle that diversity of ideas mattered most. [...] So if you want a pithy concluding statement, it would be that the United States’ economic success was due more to patents for paper clips than prizes for starships” (Hayes, 2021).

  10. ^

    In her podcast interview, she cited the Google Lunar X Prize as an example where misallocation of resources was evident: “For 10 years, this competition went on, and Google got 10 years of free publicity and insights and information from the competitors for the prize. Then Google canceled the award, so nobody actually got the $30 million. So my take on this is that prize awards are great for the monopsonist who offers the award. Markets are generally better for the rest of us. In the prize system, one person wins a prize. In the market, everyone can get a prize” (Hayes, 2021).

  11. ^

    From the same interview: “Now, as for Elon Musk, his prize certainly attracted a lot of media attention to the problem of excessive carbon emissions. But I think he could have done this far more cheaply by turning cartwheels all the way from Mountain View to Palo Alto” (Hayes, 2021).

  12. ^

    Murray et al. (2012, p. 1) define Grand Innovation Prizes as “large monetary prizes awarded to the innovator(s) providing the best or first solution to a predetermined set of significant new performance goals with no path to success known ex ante and believed to require significant commitment and a breakthrough solution.”

  13. ^

    Kay (2011, p. 281) wrote: “Program managers should consider the following points. First, a significant part of the effort to implement programs has to be devoted to attracting serious entries with diverse profiles. Second, the prize design should focus on the appropriate definition of the prize challenge and incorporate expert insights. Third, the costs of the prize program may exceed significantly the cash purse if additional support (e.g. seed funding) is offered to entrants. And, fourth, the success of prize programs is context-specific and competitions have implementation time frames that are more appropriate than others.”

  14. ^

    Burstein and Murray (2016) wrote another extensive case study of the Auto X Prize. We did not have time to review it for this report.

  15. ^

    According to theory, prizes are likely to be effective relative to alternative incentive mechanisms under two conditions: first, when their objectives are focused on problems for which there is adequate information to enable a “social planner” to define the properties of the solution, but with little understanding about who has the information to develop such a solution (Wright, 1983); and second, when there is no “upside” to the development of a particular solution, i.e. when the prize constitutes the full value of the solution, as is the case for social challenges where markets are poorly functioning (e.g. Kremer, 1998, 2002).

  16. ^

    Interestingly, Murray et al. (2012, p. 7) found that prize definition can itself be a key goal and value of a GIP; that is, prize specification can be both an objective of a GIP and an input to its design. Early on in the development of the Auto X Prize, the Foundation’s design specification committee requested feedback from the public and potential participants on a preliminary draft of the prize guidelines.

  17. ^

    See Tabarrok (2022) for an interesting write-up of Operation Warp Speed.

  18. ^

    More precisely, the proposal is for a “market-driven, value-based advance commitment” that “builds on the advance market commitment (AMC) mechanism [...] with several important innovations and improvements” (Chalkidou et al., 2020, p. x). We have not investigated how exactly this concept differs from AMCs.

  19. ^

    According to an email exchange with Christopher Snyder, advance purchase commitments are not the same as AMCs. We have not investigated the exact similarities and differences between these concepts.

  20. ^

    Kremer et al. (2020, p. 5) explained the choice of the rotavirus vaccine from the six global vaccine initiatives under way around that time as follows: “Three of them (IPV, second dose of measles, birth dose of hepatitis) involved early-vintage rather than new vaccines. The yellow-fever vaccine was not rolled out in many high-income countries, leaving no good base rate for coverage speed comparison. We conjecture the results would be stronger using HPV, the remaining candidate apart from rotavirus, for comparison, but any slow rollout of HPV vaccine in GAVI countries could be attributed to its administration to older children, slowing coverage expansion.”

  21. ^

    Snyder makes this point in a working paper in which he and colleagues design the optimal mechanism for diseases like Ebola and Covid-19 (Snyder, Hoyt, & Douglas, 2022).

  22. ^

    The report draws from two evaluations commissioned by the Gavi AMC Secretariat (Chau et al., 2013; Boston Consulting Group, 2015) as well as Gavi’s Pneumococcal AMC Annual Reports.

  23. ^

    He contrasted expensive vaccine capacity building with treatments, which can often be produced very cheaply and with little investment once the research has been done.

  24. ^

    Snyder explained that this information asymmetry is especially relevant for technologically close targets, where the asymmetry might be larger. Technologically distant targets, on the other hand, may have a more symmetrical information gap.

  25. ^

    For example, one challenge sought interventions for the following targets:

    - “Increasing access to cesarean section where it is currently inadequate

    - Increasing quality and safety of cesarean section to reduce iatrogenic harm to both mothers and newborns

    - Reducing rates of non-medically indicated cesarean section.”

  26. ^

    Definition found here: “Scaling up expands, replicates, adapts, and sustains successful policies, programs, or projects to reach a greater number of people. It is part of a broader process of innovation and learning.”

  27. ^

    More precisely, the authors matched prizewinning topics with five non-prizewinning topics that had statistically equivalent growth patterns in six different growth indices in the 10-year period before a prize was conferred.
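    To make the matching step concrete, here is a minimal sketch in Python (our own illustration using nearest-neighbor matching on standardized pre-prize growth indices; the authors’ actual procedure tested for statistical equivalence, and all names and data below are hypothetical):

    ```python
    import numpy as np

    def match_controls(prize_topic, candidate_topics, n_controls=5):
        """Return the indices of the candidate (non-prizewinning) topics
        whose pre-prize growth profiles are closest to the prizewinning
        topic's profile."""
        candidates = np.asarray(candidate_topics, dtype=float)
        target = np.asarray(prize_topic, dtype=float)

        # Standardize each growth index across candidates so that no
        # single index dominates the distance metric.
        mu = candidates.mean(axis=0)
        sigma = candidates.std(axis=0)
        sigma[sigma == 0] = 1.0  # guard against constant columns
        z_candidates = (candidates - mu) / sigma
        z_target = (target - mu) / sigma

        # Euclidean distance in standardized growth-index space.
        distances = np.linalg.norm(z_candidates - z_target, axis=1)
        return np.argsort(distances)[:n_controls]

    # Hypothetical data: one prizewinning topic and 100 candidate topics,
    # each described by six growth indices over the pre-prize decade.
    rng = np.random.default_rng(0)
    prize = rng.normal(size=6)
    pool = rng.normal(size=(100, 6))
    print(match_controls(prize, pool))  # indices of the 5 closest controls
    ```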

  28. ^

    We can use this approximation when two variables, say x2 and x1, are close to each other (i.e. when x2 / x1 ≈ 1): the percent change (x2 − x1) / x1 approximates the log difference log(x2) − log(x1). The further x2 / x1 gets from 1, the worse the approximation. A nice explanation of why this holds can be found here.
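    To spell out why this holds (our own worked derivation, using the footnote’s notation):

    ```latex
    \log(x_2) - \log(x_1)
      = \log\!\left(\frac{x_2}{x_1}\right)
      = \log\!\left(1 + \frac{x_2 - x_1}{x_1}\right)
      \approx \frac{x_2 - x_1}{x_1},
    ```

    since log(1 + u) ≈ u for small u. For instance, with x1 = 100 and x2 = 110, the log difference log(1.1) ≈ 0.095 is close to the 10% change, while with x2 = 200 the log difference log(2) ≈ 0.693 is far from the 100% change.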

  29. ^

    It is unclear whether and how these growth effects continue beyond the 10-year post-prize period studied in this paper. Given the slightly concave growth patterns we see in Figure 7, we speculate that extraordinary growth tapered off over time but continued even beyond the studied time horizon.

  30. ^

    The article does not explicitly state what types of prizes were included in the analysis. Prof. Brian Uzzi confirmed to us via email that the study focused only on recognition prizes and not on inducement prizes.

  31. ^

    The specifications included year and technology category fixed effects and technology category time trends (Brunt, Lerner, & Nicholas, 2012, p. 19).
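    Schematically, a regression with these components might look as follows (our own illustrative notation, not the authors’ exact equation):

    ```latex
    y_{ct} = \beta \,\mathrm{Prize}_{ct} + \gamma_c + \delta_t + \lambda_c t + \varepsilon_{ct}
    ```

    where y_ct is patenting activity in technology category c and year t, γ_c and δ_t are category and year fixed effects, and λ_c t are category-specific linear time trends.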

  32. ^

    A crucial aspect of the authors’ identification strategy to isolate the effect of prizes on contemporaneous innovative activity is the fact that they found the largest spikes in patenting activity in the year of the show, suggesting an immediate relationship between prizes and patenting in terms of timing (Brunt, Lerner, & Nicholas, 2012, p. 4).

  33. ^

    His models included prefecture, year, and region-by-year fixed effects and additional controls (p. 4).

  34. ^

    Here we report the coefficients directly as an approximation of the effect size. For a more precise interpretation of the coefficients, we need to transform them as follows: (e^α − 1) × 100. See footnote 28 of the current document for an explanation.
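    As a worked example (our own arithmetic): a coefficient of α = 0.25 implies

    ```latex
    (e^{0.25} - 1) \times 100 \approx (1.2840 - 1) \times 100 \approx 28.4\%,
    ```

    noticeably larger than the 25% read directly off the coefficient; for small coefficients (e.g. α = 0.05, implying about 5.1%), the two are nearly identical.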

  35. ^

    There is no random assignment of participants to the prizes, which makes it difficult to rule out the possibility that unobserved characteristics of those who participate in an inducement prize may influence the observed after-effects (p. 8).

  36. ^

    The control group in the post-prize period had 12.8 unique coauthors (p. 32).

  37. ^

    He explained: “Researchers who learn the vocabularies and norms of research production from the diverse peers they are required to work with during a prize may transfer these learnings to future collaborations, generating effects that might be of value to policy makers and other prize sponsors depending on their innovation-related goals” (p. 59).

  38. ^

    Ma and Uzzi (2018, p. 2) found that before 1980, disciplines and prizes proliferated at a similar rate, although there were nearly twice as many scientific disciplines as prizes. After 1980, prizes continued to proliferate and by 2015 outnumbered scientific fields at a 2:1 ratio.

  39. ^

    “Neighbors” was defined as “individuals who work in economic, intellectual, or artistic domains that are proximate to prize winners”.

  40. ^

    Problem-solving performance was measured as the quality of each algorithm/solution, calculated with an automated test suite.

  41. ^

    Contest submissions were judged by six industry experts on a five-point scale across five equally weighted categories (Graff Zivin & Lyons, 2021, p. 2).

  42. ^

    The quantity of innovative output was measured as an indicator of whether or not participants submitted a proposal for evaluation by the judges (Graff Zivin & Lyons, 2021, p. 2).

  43. ^

    According to Brüggeman and Meub (2015, p. 3), in the treatment with the prize for aggregate innovativeness, “we implement a contest with a relative payoff-scheme disproportionately rewarding the most innovative subject. In the treatment with the prize for the best innovation, subjects are paid proportionally for each innovation while an additional bonus is awarded to the subject who has created the most valuable innovation. In the benchmark treatment, subjects are merely paid proportionally to their innovations”.

  44. ^

    According to the authors, “Subjects who are reluctant to cooperate will ask for higher royalty fees, while those interested in cooperation choose lower fees and might expect some reciprocal behavior” (p. 12).