Here’s a list of links and people I have found on this topic:
Paal Kvarberg has scoped this idea and got feedback in this document, but I don’t think he pursued it further.
Ozzie Gooen got funding from the LTFF in 2019 and is now involved in foretold.io, an open-source prediction market. This looks similar to, but less ambitious than, what I’m trying to do; the notebooks part in particular doesn’t look like it has adoption. He also started Guesstimate, a probabilistic spreadsheet app which can do things like run Drake’s equation with probability distributions (see the sketch after this list). He also has a LessWrong collection of posts on “Prediction-driven collaborative reasoning systems”.
These people from Manifold Markets are building “charity prediction markets”, which allow the use of real money in a prediction market if the money will be donated to charity.
The Quantified Uncertainty Research Institute (QURI, pronounced “query”), which has a variety of projects (which you can see on their public AirTable), including foretold.io (mentioned before) and a probabilistic programming language called Squiggle.
QURI’s current active project is Metaforecast, a site that collects and links to predictions and estimates from other platforms such as Metaculus.
Metaculus is also currently working on a similar idea (causal graphs). Here are some more people who are thinking about or working on related ideas (who might also appreciate your post): Adam Binks, David Manheim and Arieh Englander (see their MTAIR project).
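For anyone unfamiliar with the Guesstimate-style modelling mentioned above, here is a minimal sketch of what “Drake’s equation with probability distributions” means in practice. The priors are made up purely for illustration, and Guesstimate itself uses a spreadsheet interface rather than code:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo samples

# Hypothetical priors for each Drake factor (illustrative only)
R_star = rng.lognormal(np.log(2), 0.5, N)     # star formation rate per year
f_p    = rng.uniform(0.5, 1.0, N)             # fraction of stars with planets
n_e    = rng.lognormal(np.log(0.4), 1.0, N)   # habitable planets per system
f_l    = rng.uniform(0.0, 1.0, N)             # fraction that develop life
f_i    = rng.uniform(0.0, 1.0, N)             # fraction that develop intelligence
f_c    = rng.uniform(0.0, 1.0, N)             # fraction that become detectable
L      = rng.lognormal(np.log(1e4), 1.5, N)   # civilisation lifetime in years

N_civ = R_star * f_p * n_e * f_l * f_i * f_c * L
print(np.percentile(N_civ, [5, 50, 95]))      # an uncertainty interval, not a point estimate
```

The output is a distribution over the number of civilisations, which is the kind of object these tools pass around instead of single point estimates.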
Yeah I think Metaculus would be the best people to go talk to about this. I know Gaia (Metaculus’s CEO) is super excited about causal graphs; happy to intro you two if you’d like!
Also: Max, what’s your background? Do you have the capacity to do direct work on this (i.e. build this out on your own)? If so, what are your bottlenecks (e.g. a year of funding for this project)?
My background is as a software developer, with professional web-dev experience. I’m currently doing a research master’s in ML (transformers), and last year I did a project surveying the field of probabilistic programming languages.
Because of my master’s, I don’t have capacity to work on this right at the moment, but come September this year it’s absolutely a candidate for things I would work on. I do have the skills, and will have the capacity, to work on this on my own if I think it’s my best option for impact. From September onward, I have two bottlenecks: 1) funding, and 2) finding the best use of my time among many options.
FWIW we’d love something like this in Manifold too, but that’s probably a bit farther out; Metaculus is much better developed in terms of complex in-depth estimation/forecasting, while Manifold is trying to focus on being as simple as possible.
Happy to see more discussion on these topics.
Much of this is part of what both some of the EA forecasting community and we at [QURI](https://quantifieduncertainty.org/) are working on.
I think the full thing is much more work than you think it is. I suggest trying to take one subpart of this problem and doing it very well, instead of taking the entire thing on at once.
I’ve been thinking about which sub-parts to tackle, but I think that the project just isn’t very valuable until it has all three of:
A prediction/estimation aggregation tool
Up-to-date causal models (using a simplified probabilistic programming language)
Very good UX, needed for adoption.
It’s a lot of work, yes, but that doesn’t mean it can’t happen. I’m not sure there’s a better way to split it up and still have it be valuable. I think the MVP for this project is a pretty high bar.
Ways to split it up:
Do the probabilistic programming language first. This isn’t really valuable on its own; it’s a research project that no one will use.
Do the prediction aggregation part first. This is Metaculus.
Do the knowledge graph part first. This is maybe a good start—it’s a wiki with better UX? I’m sure someone is scoping this out / doing it.
These things empower each other.
It’s hard, but I’d nevertheless estimate it at no more than 3 person-years of effort for the following things:
A snappy, good-looking prediction/estimation (web) interface.
A causal model editor with a graph view.
A backend that can update the distributions with Monte Carlo simulations (see the sketch after this list).
Rich-text comments and posts attached to models, bets and “markets” (still need a better name than “markets”)
Iframes for people to embed the UI elsewhere.
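To make the Monte Carlo backend point concrete, here is a toy sketch under assumptions of my own; the node names and priors are hypothetical, and a real backend would of course need a proper graph representation:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000  # samples per update pass

# Hypothetical two-node causal model: "funding secured" -> "project ships within 2 years"
p_funded         = rng.beta(4, 6, N)   # uncertainty about securing funding
p_ship_if_funded = rng.beta(8, 4, N)   # P(ship | funded)
p_ship_if_not    = rng.beta(1, 9, N)   # P(ship | not funded)

# Propagate uncertainty through the graph by sampling
funded = rng.random(N) < p_funded
ships  = rng.random(N) < np.where(funded, p_ship_if_funded, p_ship_if_not)

print("P(project ships) ~", ships.mean())
# When someone edits an upstream node (say, new evidence about funding),
# the backend re-runs this sampling pass and refreshes every downstream distribution.
```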
What do you estimate?
I’d love to make an aggregate estimate of how much work this project would take.
I think this idea is really cool (albeit hard to pull off successfully)! You’re definitely not the first person to think of it, but I don’t know of any comparable efforts that have turned into actual products yet. I think the biggest challenge will be maintaining the right ratio of models to estimators, as you could very easily have the former outrun the latter without some kind of subsidy for people’s time. There’s already a challenge around recruiting and retaining talented forecasters, and this sort of estimation may be in some ways more cognitively demanding / less rewarding for the participants. So you might want to have a setup where the impact models are tightly curated and there are incentives to attract estimators, at least at the beginning.
If you end up moving forward with a prototype, I’d be interested in providing input on the product design as an alpha user.
Actually, come to think of it, the S-Process used for the Survival and Flourishing Fund is an implementation of one version of this idea.
Thanks, that’s good feedback. I will check out the linked video. If you know anyone to get in touch with, I’d be keen to talk to them.
My guess is that this idea has been independently thought about many times, and if it’s not bad for some reason, already funded.
I’ve found a similar project by Paal Kvarberg, described in this document here: https://docs.google.com/document/d/1An-NGrQUPSJ4v8HdsZO-BcAq5NpyrEqv32rdk-N4guo/edit#
I think he didn’t pursue it—the document hasn’t been updated for a year.
Seems like I forgot to change “last updated 04.january 2021” to “last updated 04. january 2022” when I made changes in January, haha.
I am still working on this. I agree with Ozzie’s comment below that doing a small part of this well is the best way to make progress. We are currently looking at the UX part of things. As I describe under this heading in the doc, I don’t think it is feasible to expect many non-expert forecasters to enter a platform to give their credences on claims. And the expert forecasters are, as Ian mentions below, in short supply. Therefore, we are trying to make it easier to give credences on issues in the same place you read about them. I tested this idea out in a small experiment this fall (with Google Docs), and it does seem like motivated people who would not enter prediction platforms to forecast issues might give their takes if elicited this way. Right now we are investigating this idea further through an MVP of a browser extension that lets users give credences on claims found in texts on the web. We will experiment some more with this during the fall. A more tractable version of the long doc is likely to appear on the Forum at some point.
I’m not wedded to the concrete ideas presented in the doc, I just happen to think they are good ways to move closer to the grand vision. I’d be happy to help any project moving in that direction:)
I’m in favour of the project, but here’s a consideration against: making people in the community more confident about what the community thinks about a subject can potentially be harmful.
Testimonial evidence is the stuff you get purely because you trust another reasoner (Aumann-agreement fashion), and technical evidence is everything else (observation, math, argument).
Making people more aware of testimonial evidence will also make them more likely to update on it, if they’re good Bayesians. But this also reduces the relative influence that technical evidence has on their beliefs. So although you are potentially increasing the accuracy of each member’s beliefs, you are also weakening the link between community opinion and technical evidence, and that leaves us more prone to information cascades and slower to update on new discoveries/arguments.
But this is mainly a problem for uniform communities where everyone assigns the same amount of trust to everyone else. If, on the other hand, we have “thought leaders” (highly trusted researchers who severely distrust others’ opinions and stubbornly refuse to update on anything other than technical evidence), then their technical-evidence grounded beliefs can filter through to the rest of the community, and we get the best of both worlds.
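To make the worry concrete, here is a toy simulation under strong assumptions of my own: every agent puts a fixed weight on the visible community aggregate, and the numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
truth = 0.7                  # what the underlying technical evidence points at
n_agents = 200
w_testimonial = 0.8          # weight each agent puts on the visible community aggregate

# Agents start from noisy private (technical) estimates of the same quantity
beliefs = np.clip(truth + rng.normal(0, 0.2, n_agents), 0, 1)

for _ in range(30):          # repeated exposure to the aggregate
    beliefs = (1 - w_testimonial) * beliefs + w_testimonial * beliefs.mean()

print("belief spread after aggregation:", beliefs.std())  # near zero: strong consensus

# A new, strong piece of technical evidence now reaches only a handful of agents
before = beliefs.mean()
beliefs[:5] = 0.95
print("shift in community aggregate:", beliefs.mean() - before)  # tiny: slow to correct
```

In the thought-leader scenario above, some agents would effectively set w_testimonial near zero, which is what keeps the aggregate anchored to technical evidence.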
One of the approaches here is to A) require people to sign up, and B) not show them aggregated predictions until they have posted their own.
I think that’s a good idea to reduce groupthink! Also, I think it can be helpful to uncover if specific individuals and sub-groups think a proposal is promising based on their estimates, since rarely will an entire group view something similarly. This could bring individuals together to further discuss and potentially support/execute the idea.
Thanks for writing up this idea in such a succinct and forceful way. I think the idea is good, and would like to help any way I can. However, I would encourage thinking a lot about the first part “If we get the EA community to use a lot of these”, which I think might be the hardest part.
I think that there are many ways to do something like this, and that it’s worth thinking very carefully about details before starting to build. The idea is old, and there is a big graveyard of projects aiming for the same goal. That being said, I think a project of this sort has amazing upsides. There are many smart people working on this idea, or very similar ideas right now, and I am confident that something like this is going to happen at some point.
I think this is an excellent idea and one that I’ve wanted to exist for quite a few years now! My interest in this area stems from wanting to surface compelling strategies and projects that don’t receive sufficient attention because they’re not currently “trending” in the movement. By explicitly laying out theories of change and crowdsourcing estimates for each part of those theories of change, this would make it much easier for people to understand proposals, identify how various people and groups in the community think about a proposal, and compare the expected impact of various strategies and projects against each other.
Right now, people submit ideas and projects on the EA Forum, but that doesn’t clearly translate to action. But if it’s pretty clear that specific individuals, sub-groups in the community, or the community at large think a theory of change is promising, I think this has the potential to greatly increase awareness and the likelihood of execution of promising proposals.