Open Sourcing Metaculus


This is a linkpost for an announcement by Metaculus CEO Deger Turan, originally published June 5th, 2024. In addition to spreading the word here, we are interested in gathering feedback and ideas from the EA community.

We make reasoning and coordination tools for the public benefit, and that requires the trust and participation of the wider forecasting community. That’s why we’re going open source. GitHub issues, pull requests, and community-sourced forecasts will dictate much of what we build and how. Ultimately, forecasters will benefit from better infrastructure, the Metaculus team will benefit from more idea diversity and greater capacity, and readers will get more accurate aggregates.

What’s next

Metaculus will go open source in Q3. There will be opportunities to audit, critique, and improve on our efforts. Creating a better epistemic environment is a collective endeavor.

As part of open-sourcing Metaculus, we’ll help implement our best features into other open-source forecasting codebases. We’ll make our roadmap and building process public, and we’ll give more visibility into our decision-making criteria and discussions. Our codebase is far from perfect, but we can improve it more effectively together, out in the open.

In a future post I’ll write more about how others can contribute to this project, including specifications for how code can be modified, used, and distributed, as well as guidelines for coding standards and our submission process. We’ll also describe how pull requests will be handled and how community submissions will be implemented.

We want to be transparent that we may still offer proprietary products targeting particular enterprise and commercial use cases: we believe that part of making forecasting more useful and more widely used is productizing it and generating revenue.

Why isn’t Metaculus already open source?

Before I joined as CEO, I wondered why Metaculus wasn’t already open source. I discovered the idea was being discussed, but that there were concerns about platform security, about the accessibility of the codebase, and about the potential for exploits that would allow some forecasters to game the system and create an uneven playing field. Now that we’ve been able to observe the new scoring system at work for some time with no issues, we’re confident the benefits outweigh the risks. While the codebase is nowhere near perfect, we believe improving the platform is better done in the open, with feedback from the larger ecosystem guiding us explicitly.

We’re looking for

  • Researchers who want to backtest their aggregation algorithms on our data (a rough sketch of what that could look like follows this list)

  • UX specialists who see better ways to translate beliefs into forecasts

  • Nonprofits who want to spin up their own internal forecasting engines built on our architecture

  • Developers who can help add new features and enhance existing ones
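
For researchers curious about that first item, here is a minimal sketch of what backtesting an aggregation rule might look like. The data structure, field names, and toy numbers below are purely illustrative assumptions, not the actual Metaculus export format or scoring pipeline:

```python
# A minimal, illustrative backtest of two aggregation rules on hypothetical
# binary-question data. The data structure and field names are assumptions
# for illustration only, not the real Metaculus export format.

from statistics import mean, median

# Hypothetical data: each question has individual probability forecasts
# (for the "Yes" outcome) and a resolution (1 = Yes, 0 = No).
questions = [
    {"forecasts": [0.60, 0.72, 0.55, 0.80, 0.65], "resolution": 1},
    {"forecasts": [0.30, 0.25, 0.40, 0.20],       "resolution": 0},
    {"forecasts": [0.90, 0.85, 0.95, 0.70, 0.88], "resolution": 1},
]

def brier(p: float, outcome: int) -> float:
    """Brier score for a single binary forecast (lower is better)."""
    return (p - outcome) ** 2

def backtest(aggregate) -> float:
    """Average Brier score of an aggregation rule across all questions."""
    return mean(
        brier(aggregate(q["forecasts"]), q["resolution"]) for q in questions
    )

print(f"mean aggregate:   {backtest(mean):.4f}")
print(f"median aggregate: {backtest(median):.4f}")
```

With real platform data, the same structure could be pointed at whatever export format we end up publishing, and the Brier score swapped for other proper scoring rules such as the log score.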

As our plan evolves I’ll provide more details. Until then, let me know what you think: What resources and documentation would you like to see? What would make contributing easier?
