A proposal for a small inducement prize platform

Inducement prizes are cash prizes awarded to people who accomplish a particular feat specified ahead of time. Inducement prizes offer advantages over traditional hiring practices since the prize is allocated according to a post-hoc evaluation of performance, rather than upfront to a specific worker. Here, I propose a specific model for a small inducement prize platform intended to facilitate the creation of high-quality effective altruism research.

The problem

Inducement prizes have a long history; accordingly, economics literature going back over a century provides empirical evidence that, in certain circumstances, inducement prizes can spur innovation more efficiently than hiring researchers or engineers directly (for some examples, see this paper and this one).

Many existing organizations, such as the X Prize Foundation, have enjoyed some success by facilitating the creation of large inducement prizes. However, most inducement prizes are not run on dedicated platforms. Rather, companies, non-profits, governments, and wealthy individuals interested in creating inducement prizes generally administer them through their own channels, as with the Brain Preservation Technology Prize and the Methuselah Mouse Prize.

Running your own inducement prize contest makes sense for large bounties, but I’m currently unaware of any platforms dedicated to facilitating smaller prizes, such as those under $10,000.

The closest platform I’m currently aware of is the private Facebook group Bountied Rationality. However, there are a number of problems I see with Bountied Rationality, which should indicate how I think an alternative platform could improve on it:

  • The group’s visibility is set to private, which makes it harder to share the research value created inside the group.

  • The integrity of the group is held up by the personal trust of those within the rationalist/effective altruism community, rather than by a market-driven model of reputation.

  • There is no means of arbitrating disputes, and therefore there is no guarantee that people will be paid fairly for accomplishing the task as specified.

I still think that the group Bountied Rationality creates a lot of value. But the issues I’ve outlined above plausibly limit its ability to grow larger, and hamper its status as a reliable engine for producing outsourced insights and research.

The proposal

My proposed alternative is a public, market-driven bounty system for small inducement prizes, targeted at the effective altruism community. Below, I’ll list a specific set of features that I think could help the platform thrive.

Public

The first main difference between my model and the group Bountied Rationality is that content on the platform would be public. This feature makes the platform less suitable for personal requests, but more suitable for public research, such as inducing mathematics results, well-crafted bibliographies, well-sourced research summaries for a given topic, and in-depth investigations into potential interventions.

The Effective Altruism Forum, LessWrong, and Stack Exchange already allow for something similar, in that users can ask public questions, and community answers are then curated via upvotes and downvotes. This model has been helpful to many, but in my experience, people are often hesitant to provide long-form and well-sourced answers to questions, probably due to the lack of strong incentives offered to those who give good answers.

Escrow and arbitration

The second main feature I propose is a requirement that bounty offerors put their money in escrow and name someone as the arbitrator for the bounty. Escrow ensures that offerors cannot simply keep their money long after someone has satisfied the conditions of the bounty (a problem I have personally been acquainted with).

Requiring people to name an arbitrator serves a similar purpose to escrow. By naming a trusted third party to settle disputes, bounty offerors would be encouraged to outline very specific conditions under which they want their bounty to be distributed. This incentive, together with the fact that offerors cannot simply refuse to pay unfairly, gives bounty hunters assurance that they will get paid if they perform the task successfully.
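To make the mechanics concrete, here is a minimal sketch of the bounty lifecycle I have in mind, written in Python with hypothetical names (Bounty, BountyState, and so on are mine, not any existing platform’s API): funds are escrowed when the bounty is posted, the offeror can accept a claim directly, and a rejected claim goes to the pre-named arbitrator rather than remaining at the offeror’s discretion.

```python
from dataclasses import dataclass
from enum import Enum, auto


class BountyState(Enum):
    OPEN = auto()          # funds held in escrow, claims accepted
    UNDER_REVIEW = auto()  # a claim is being evaluated by the offeror
    DISPUTED = auto()      # offeror rejected the claim; the arbitrator decides
    PAID = auto()          # escrowed funds released to the claimant
    REFUNDED = auto()      # arbitrator ruled for the offeror; escrow returned


@dataclass
class Bounty:
    offeror: str
    arbitrator: str          # named up front, before any claims are made
    escrowed_amount: float   # deposited with the platform when the bounty is posted
    conditions: str          # the specific conditions the arbitrator will enforce
    state: BountyState = BountyState.OPEN
    claimant: str | None = None

    def submit_claim(self, claimant: str) -> None:
        assert self.state is BountyState.OPEN
        self.claimant = claimant
        self.state = BountyState.UNDER_REVIEW

    def offeror_accepts(self) -> None:
        assert self.state is BountyState.UNDER_REVIEW
        self.state = BountyState.PAID  # the platform, not the offeror, releases the escrow

    def offeror_rejects(self) -> None:
        assert self.state is BountyState.UNDER_REVIEW
        self.state = BountyState.DISPUTED  # the decision now rests with the arbitrator

    def arbitrator_rules(self, pay_claimant: bool) -> None:
        assert self.state is BountyState.DISPUTED
        self.state = BountyState.PAID if pay_claimant else BountyState.REFUNDED
```

The key property is that once the money is in escrow, the offeror never has a unilateral way back to keeping it: every path out of the disputed state runs through the arbitrator.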

My own experience on Metaculus made me realize just how important it is for platforms to build solid mechanisms for establishing community trust. Even though people on Metaculus are not trading with real money, disputes can become agonizing, and people can get angry when questions do not resolve the way they thought they would. As a result of these issues, Metaculus moderators and admins have become very careful in the way they write questions, to ensure that questions resolve unambiguously whenever possible.

One market-driven way of ensuring community trust is to openly allow users to bid to become arbitrators of particular bounties. In effect, the role of arbitrator could be something like a paid position: arbitrators would provide trust and reliability as a service, which could then flow through the platform, promising a fair environment for bounty offerors and bounty hunters.
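As a rough illustration of how such a market could work, and not a worked-out design, here is a small sketch (the names ArbitratorBid and rank_bids are invented for this example): arbitrators post a fee and carry a public track record, and the platform ranks bids so that offerors can weigh price against reputation when picking an arbitrator.

```python
from dataclasses import dataclass


@dataclass
class ArbitratorBid:
    arbitrator: str
    fee: float              # paid when the bounty settles, e.g. out of the escrowed amount
    disputes_handled: int   # public track record, visible to offerors and hunters
    parties_satisfied: int  # crude reputation signal; a real platform would need richer data


def rank_bids(bids: list[ArbitratorBid]) -> list[ArbitratorBid]:
    """Order bids so that experienced, reasonably priced arbitrators appear first.

    The weighting here is arbitrary; the point is only that reputation and price
    can both enter the ranking, which is what makes arbitration a market-driven role.
    """
    def score(bid: ArbitratorBid) -> float:
        reliability = (bid.parties_satisfied / bid.disputes_handled
                       if bid.disputes_handled else 0.0)
        return reliability - 0.01 * bid.fee

    return sorted(bids, key=score, reverse=True)
```

An offeror might then choose one of the top-ranked bids when posting a bounty, with the arbitrator’s fee folded into the escrowed amount.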

Targeted at the effective altruism community

My limited research suggests that some bounty-like services already exist. The most common are bug-bounty platforms, which at the moment far eclipse the Facebook group Bountied Rationality in size.

However, even though some of these platforms exist, and perhaps one even uses the arbitration system I described above, a large potential drawback comes from network effects. If an EA tried to induce complex research using an existing platform, they would be unlikely to attract the people best suited to doing that research. As a result, an EA would be better off just trying to induce the research more informally, either by asking people in the community to collaborate with them, or by hiring someone to perform the research directly.

My hope is that creating a platform that facilitates small-scale inducement prize contests would help solve this problem, better allowing EAs in need of research solutions to target people most likely to provide them.