Can you point to any examples of GiveWell numbers that you think a crowd would have a good chance of answering more accurately? A lot of the figures on the sheets either come from deep research/literature reviews or from subjective moral evaluation, both of which seem to resist crowdsourcing.
If you want to see what forecasting might look like around GiveWell-ish questions, you could reach out to the team at Metaculus and suggest they include some on their platform. They are, to my knowledge, the only EA-adjacent forecasting platform with a good-sized userbase.
Overall, the amount of community participation in similar projects has historically been pretty low (e.g. no "EA wiki" has ever gotten mass participation going), and I think you'd have to find a way to change that before you made substantial progress with a crowdsourcing platform.
Audience size is a big challenge here. There might be a few thousand people who are interested enough in EA to participate in the community at all (beyond donating to charity or joining an occasional dinner with their university group). Of those, only a fraction will be interested in contributing to crowdsourced intellectual work.
By contrast, StackOverflow has a potential audience of millions, and Wikipedia's is larger still. And yet, the most active 1% of editors might account for… half, maybe, of the total content on those sites? (Couldn't quickly find reliable numbers.)
If we extrapolate to the EA community, our most active 1% of contributors would be roughly 10 people, and I'm guessing those people already find EA-focused ways to spend their time (though I can't say how those uses compare to creating content on a website like the one you proposed).

Has anyone ever tried making an EA Stack Exchange?
I am not sure I can think of obvious numbers that a crowd couldn't answer with a similar level of accuracy. (There is also the question of accuracy compared to what? Future GiveWell evaluations?) Consider Metaculus's record vs. any other paid experts. I think your linked point about crowd size is the main one: how large a community could you mobilise to guess these things?
Metaculus produces world-class answers off a user base of 12,000. How many users does this forum have? I guess if you ran an experiment here you'd be pretty close. If you ran it elsewhere you might get 1-10% buy-in. I think even 3 orders of magnitude off isn't bad for an initial test, and if it worked it seems likely you could be within 1 order of magnitude pretty quickly.
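To put the orders-of-magnitude talk in concrete terms, here is a minimal back-of-envelope sketch; apart from Metaculus's 12,000, every figure (forum size, buy-in rate) is an assumption I've made up for illustration:

```python
import math

# Fermi sketch of the user-base gap. All figures except Metaculus's
# 12,000 are illustrative assumptions, not measured data.
metaculus_users = 12_000
forum_users = 2_000     # assumed: active users of this forum
buy_in = 0.05           # assumed: roughly the middle of the 1-10% guess

scenarios = {
    "experiment on this forum": forum_users,
    "experiment run elsewhere": forum_users * buy_in,
}

for label, participants in scenarios.items():
    gap = math.log10(metaculus_users / participants)
    print(f"{label}: ~{participants:.0f} participants, "
          f"~{gap:.1f} orders of magnitude below Metaculus")
```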
I suggest the difference between this and an EA wiki would be that it would be answering questions.

Given the value that GiveWell offers, testing this seems very valuable.
I would say "comparing the crowd's accuracy to reality" would be best, but "future GiveWell evaluations" is another reasonable option.
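For concreteness, here is one minimal way either comparison could be scored once answers resolve; every number below is invented, and relative error is just one plausible metric among several:

```python
import statistics

# Toy scoring sketch: compare a crowd median against reality and against
# a later GiveWell estimate. All figures are invented for illustration.
questions = {
    "cost per bednet distributed ($)": {
        "crowd_guesses": [4.2, 5.0, 4.6, 5.5],
        "reality": 4.85,
        "later_givewell_estimate": 4.70,
    },
    "deworming cost per child ($)": {
        "crowd_guesses": [0.9, 1.3, 1.1, 1.0],
        "reality": 1.02,
        "later_givewell_estimate": 1.10,
    },
}

for name, q in questions.items():
    crowd = statistics.median(q["crowd_guesses"])  # simple crowd aggregate
    for benchmark in ("reality", "later_givewell_estimate"):
        rel_error = abs(crowd - q[benchmark]) / q[benchmark]
        print(f"{name}: crowd {crowd:.2f} vs {benchmark} = "
              f"{q[benchmark]:.2f} (relative error {rel_error:.1%})")
```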
Consider Metaculus's record vs. any other paid experts.

Metaculus produces world-class answers off a user base of 12,000.
I don't know what Metaculus's record is against "other paid experts," and I expect it would depend on which experts and which topic was up for prediction. I think the average researcher at GiveWell is probably much, much better at probabilistic reasoning than the average pundit or academic, because GiveWell's application process tests this skill and working at GiveWell requires that the skill be used frequently.
I also don't know where your claim that "Metaculus produces world-class answers" comes from. Could you link to some evidence? (In general, a lot of your comments make substantial claims without links or citations, which can make it hard to engage with them.)
Open Philanthropy has contracted with Good Judgment Inc. for COVID forecasting, so this idea is definitely on the organization's radar (and by extension, GiveWell's). Have you tried asking them why they don't ask questions on Metaculus or make more use of crowdsourcing in general? I'm sure they'd have a better explanation for you than anything I could hypothesize :-)
Noted on the lack of citations.

I don't feel like Open Philanthropy would answer my speculative emails. Now that you point it out, they might, but in general I don't feel worthy of their time.

(Originally I wrote this beauty of a sentence: "previously I don't think I'd have thought they thought me worthy of their time.")
If you really think GiveWell or Open Philanthropy is missing out on a lot of value by failing to pursue a certain strategy, it seems like you should aim to make the most convincing case you can for their sake!
(Perhaps it would be safer to write a post specifically about this topic, then send it to them; that way, even if there's no reply, you at least have the post and can get feedback from other people.)
Also, there's possibly room for a "request citation" button. When you talk in different online communities, it's not clear how much citing you should do. An easy way to request and add citations would remove the need for extra comments asking for sources.