Building a successful economy for collaborative cognitive work with high externalities


[Epistemic status: Quite confident. Lots of this seems obvious from first principles. Though it’s far from exhaustive. Wary of carrying costs and the planning fallacy, I publish this post rough and incomplete, rather than not at all.]

Global markets are currently only (somewhat) efficient at incentivising problem-solving in areas where the benefits can be internalised, such as by earning a profit from the product one has built. ^[Ignoring various market failures such as long time horizons, large coordination problems, high initial costs, and more.]

Several people in the EA community have suggested that we should be able to use monetary mechanisms to gain similar benefits in areas with large externalities. Suggested mechanisms include prizes, bounties ([1], [2]), impact certificates and grants. (I will not focus on grants in this post, as there’s a ton of content about them on this site already.)

This spreadsheet summarises these efforts, in order to 1) allow people looking to do freelance cognitive work to find good opportunities, and 2) allow people interested in making prizes work to survey the history of approaches and why they failed/​succeeded.

The current post is an attempt to analyse what is needed to make prizes work—that is, to effectively change some people’s behaviour in a way which directly optimises for improving the long-term future (or some other goal we care about).

Examples of such behaviour changes include:

  • Someone spending a year living off their savings, learning how to summarise comment threads, with the expectation that people will pay well for this ability in the following years

  • A competent literature-reviewer gathering 5 friends to teach them the skill, in order to scale their reviewing capacity to earn more prize money

  • A college student building up a strong forecasting track-record, and then being paid enough for a few hours of forecasting work each week that they can pursue their own projects over the summer instead of having to work full-time

  • A college student dropping out to work full-time on answering questions on LessWrong, expecting this to provide a stable funding stream for 2+ years

  • A professional with a stable job, a family, and a hard time making changes to their life situation, taking 2 hours/week off from work to do skilled cost-effectiveness analyses while being fairly compensated

  • Some people starting a “Prize VC” or “Prize market maker”, which attempts to find potential prize winners and connect them with prizes (or vice versa), while taking a cut somehow

Etc. etc. (I expect the above to be a small subset of the space of exciting optimisation that emerges when you manage to get the incentives right.)

There are at least four main ways in which incentives affect behaviour:

  1. Conscious motivation: people deliberately change their behaviour to benefit from the incentives

  2. Reinforcement learning: people unconsciously change their behaviour in line with the incentives, due to the positive reinforcement that gives them

  3. Selection effects: people whose behaviour aligns with the incentives will tend to be more successful and influential than people whose behaviour does not, absent any actual changes in a single person’s behaviour

  4. Memetics: people or entire communities share tips, tricks, memes, norms and more to enable others to benefit from the incentives

(I have written more about these here.)

Each has separate implications for how to make prizes successful. I don’t think I have exhausted each sub-mechanism, and I look forward to collaboratively making more progress in the comments.


1. Conscious motivation

How can one ensure that individuals will consciously choose strategies to optimise for winning the prizes?

Clarity: It must be clear to people what they are optimising for

This property fails at the tails. Sometimes it’s better for the prize giver to be more rational than the prize taker, so that it’s too hard for the latter to goodhart on the desires of the former; the prize taker is instead left to simply do the best work they can and to treat the prize signal as an objective evaluation of that work.

Stability: People must be able to change their behaviour in expectation

It’s not sufficient that I get a one-time prize for something I did a year ago. I must expect that there will be a stable funding stream in the future, such that I can condition my future plans on it (e.g. dropping out of college, not satisficing on a job offer when the alternative is prize work, or building a skill in demand by prize-givers), while prize-givers in turn condition their actions on my availability (e.g. setting aside resources to manage applications, giving feedback, building supporting infrastructure, and making sure funds are available).

One might split this into:

  1. Precommitments/​reliable expectations of future funding

  2. Common knowledge (between both sides of the two-sided market): for a community to build infrastructure and make plans resting upon prizes, the existence and broad rules of the prizes should be common knowledge. This enables would-be producers (e.g. potential college dropouts) and would-be consumers (e.g. EA orgs investing time into turning research questions into an outsourceable format) to move in lockstep to a Nash equilibrium where they successfully trade resources via prizes (see the toy model below).
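To make the coordination point concrete, here is a toy two-player game (the payoffs are purely illustrative and not from the original post): would-be prize workers and prize givers each choose whether to invest in the prize ecosystem, and each side profits only if the other invests too.

```latex
% Toy coordination ("stag hunt") game with illustrative payoffs.
% Row player: would-be prize workers; column player: prize givers.
% "Invests" = builds skills / turns questions into outsourceable
% form and commits funding; payoffs are (worker, giver).
\begin{array}{r|cc}
                      & \text{Giver invests} & \text{Giver doesn't} \\ \hline
\text{Worker invests} & (3,\, 3)             & (-1,\, 0)            \\
\text{Worker doesn't} & (0,\, -1)            & (0,\, 0)
\end{array}
% Both (invest, invest) and (don't, don't) are Nash equilibria;
% common knowledge of the prizes' existence and rules is what lets
% both sides move together to the higher-payoff equilibrium.
```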


2. Reinforcement learning (unconscious motivation)

How can one ensure that prizes unconsciously affect behaviour?

Quick and smooth payout

Insofar as humans are hyperbolic discounters (whether we want to be or not), avoiding irritating paperwork and long payout delays is likely to make the reward more reinforcing.
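As a rough illustration of why payout speed matters (the model is the standard one, but the numbers below are my own illustrative assumptions, not from the post): under hyperbolic discounting, the subjective value of a reward of size A received after a delay D falls off as

```latex
% Standard hyperbolic discounting model (Mazur, 1987):
%   A = nominal reward, D = delay until payout,
%   k = individual discounting parameter (illustrative value below).
V(D) = \frac{A}{1 + kD}
% Example with k = 0.1 per day: a $1000 prize paid after 60 days is
% subjectively worth about 1000 / (1 + 0.1 \times 60) \approx \$143,
% while the same prize paid the next day is worth roughly \$909.
```

The exact numbers don’t matter; the point is that the curve is steepest at short delays, so shaving days or weeks off the payout process buys more reinforcement than it naively seems.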

Clear credit assignment

In order for someone to do more of what worked, they need to have a good sense of what aspect of their work is being rewarded (“Did I write good comments? Did I make good predictions? Was the summary good? Was the research topic novel and interesting?” etc.)

Incentivise exploration (avoid an overly sparse reward signal); otherwise prize workers won’t learn the most effective strategies

Effectively balance intrinsic and extrinsic motivation

Section 1.5 of Kraut and Resnick’s “Building Successful Online Communities” has a useful discussion of this, including this diagram from a 2001 meta-analysis of when extrinsic rewards harm (-) vs. enhance (+) intrinsic motivation:

[Diagram omitted]


3. Selection effects

How can one ensure that those with a comparative advantage in doing certain prize work will tend to be the ones doing that work?

I believe that to a large extent selection effects will be present whether one wants them to or not, so the main question is rather whether there are effective ways of choosing which selection effects one wants to amplify.

Here are some examples of unwanted selection effects:

  • The people who win prizes are those most eager to work for prizes, not the ones who had the highest comparative advantage in doing so

  • The incentive dynamics are set by the prize givers who are most generous in giving out prizes, rather than by those with the best ideas about what should be funded (e.g. suppose foundation X is less careful in thinking about the opportunity costs of money than foundation Y, and so decides to award 5x as much prize money, which on the margin disincentivises the more valuable work preferred by Y)

  • The people who work for a particular prize are those who thought it was interesting/​a good idea (e.g. awarding a prize for responses to “Does God exist?” and only having theologians put in the work, almost all of whom answer “Yes”)

Beyond listing these, I am uncertain about what action-guiding advice there is here.


4. Memetics

How can one ensure that people successfully communicate things like: the existence of prizes, good strategies and heuristics for prize work, promising prize workers, norms and best practices for prize design, etc.?

As with selection effects, memetic dynamics will likely have several adverse effects on a prize (for example, messages tend to lose nuance as they spread between many people, since there are more ways to misunderstand a claim than to understand it), and it might be hard to deliberately intervene to prevent them.

“Memeifying” prizes/​creating a conceptual handle

Having a simple name for something makes the difference between:

Without meme

Alice: “Hey, you seem pretty well off lately, but I haven’t noticed you getting a job or anything? What happened?”

Bob: “Oh, it’s because of [20 minute explanation of the ideas behind having a market for impact via prizes]”

And

With meme

Alice: “Hey, you seem pretty well off lately, but I haven’t noticed you getting a job or anything? What happened?”

Bob: “Yeah, I did some cognitive prize work!”

Using this conceptual handle, Alice can now quickly ask other friends “Do you know anything about how to be successful at ‘cognitive prize work’?”, she can easily Google or search her favourite blogs for posts about “cognitive prize work”, and more.

For a real-life example, Wei Dai writes:

For both of the AI alignment related bounties, when a friend or acquaintance asks me about my “work”, I can now talk about these prize that I recently won, which sounds a lot cooler than “oh, I participate on this online discussion forum”. :)

Producing sharable material

Ensuring there is a key public reference write-up of the memes one wants to spread, e.g. what traits caused certain prize winners to receive their prizes (“X has the skill of being both brief and accurate”, “Y used a Guesstimate model in a helpful way”), as well as ongoing public discussion of the work (“Three things I did to improve as a prize worker”, “OpenPhil recommendations for aspiring prize workers”).

A great example here is the April 2019 EA Long-Term Future Fund write-up.