[Question] How might better collective decision-making backfire?

I’ve started work on a project that aims to do something like “improving our collective decision-making.”[1] Broadly, it’s meant to enable communities of people who want to make good decisions to collaboratively work out what those decisions should be. Individual rationality is helpful for that but not the focus.

Concretely, it’s meant to make it easier to collaborate on probabilistic models in areas where we don’t have data. You can read more about the vision in Ozzie Gooen’s LessWrong post on Squiggle. But please set that aside for the purposes of this question: I hope to make the question and its answers useful to a broader audience by not focusing narrowly on the concrete thing I’m doing. Other avenues to improving collective decision-making include improving prediction markets and developing better metrics for the things we care about.
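(For readers who want a rough picture of what a probabilistic model “in an area where we don’t have data” can look like, here is a minimal illustrative sketch in Python. It is not the project’s actual tooling, and the quantities, names, and numbers are invented; the point is only that several people’s subjective interval estimates can be combined by Monte Carlo sampling into one uncertain estimate.)

```python
# Illustrative sketch only (hypothetical names and numbers, not the project's actual tooling).
# Two collaborators each contribute a subjective 90% interval for a quantity we have no data on;
# Monte Carlo sampling combines the judgments into an overall uncertain estimate.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def lognormal_from_90ci(low, high, size):
    """Turn a subjective 90% interval (low, high) into lognormal samples."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)  # 90% of a normal lies within +/- 1.645 sigma
    return rng.lognormal(mu, sigma, size)

# Hypothetical inputs: Alice estimates how many people an intervention reaches,
# Bob estimates the benefit per person reached.
people_reached = lognormal_from_90ci(1_000, 50_000, N)
benefit_per_person = lognormal_from_90ci(0.1, 5.0, N)

total_benefit = people_reached * benefit_per_person
print(f"median: {np.median(total_benefit):,.0f}")
print(f"90% interval: {np.percentile(total_benefit, 5):,.0f} to {np.percentile(total_benefit, 95):,.0f}")
```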

Before I put a lot of work into this, I would like to check whether this is a good goal in the first place – ignoring tractability and opportunity costs.[2] By “good” I mean something like “robustly beneficial,” and by “robustly beneficial” I mean something like “beneficial across many plausible worldviews and futures, and morally cooperative.”

My intuition is that it’s about as robustly positive as it gets, but I could easily be wrong because I haven’t engaged with the question for very long. LessWrong and CFAR seem to have similar goals, though I perceive a stronger focus on individual rationality there. So I like to think that people have already thought, and maybe written publicly, about the major risks in what they do.

I would like to treat this like a Stack Exchange question, where the asker can also submit their own answers.[3] But I imagine there are many complementary answers to this question, so I’m hoping that people will add more, upvote the ones they find particularly concerning, important, or otherwise noteworthy, and refine them in the comments.

For examples of precisely the type of answer I’m looking for, see my answers titled “Psychological Effects” and “The Modified Ultimatum Game.”

A less interesting answer is my “Legibility” answer. I’m less interested in it here because it describes a way in which a system can fail to achieve the goal of improving decision-making, rather than a way in which the successful realization of that goal backfires. I include it as a mild counterexample.

If you have ideas for further answers, it would be interesting if you could also think of ways to work around them. When a project has some way of backfiring, it is usually better not to abandon it but to work out how the failure mode can be avoided without sacrificing all of the project’s positive effects.

You can also message me privately if you don’t want to post your answer publicly.

Acknowledgements: Thanks to Sophie Kwass and Ozzie Gooen for feedback and ideas!

(I’ve followed up on this question on my blog.)


  1. ↩︎

    How this should be operationalized is still fairly unclear. Ozzie plans to work out more precisely what it is we seek to accomplish. You might as well call it “collaborative truth-seeking,” “improving collective epistemics,” “collaborating on better predictions,” etc.

  2. ↩︎

    I’ve considered a wide range of contributions I could make. Given my particular background, this currently seems to me like my top option.

  3. ↩︎

    My answers contain a few links. These should not be interpreted as endorsements of the linked articles or websites.
