All the most important models should also have crowdsourced answers.
I *think* GiveWell uses models to make decisions. It would be possible to crowdsource numbers for each step. I predict you would get better answers if you did this. The wisdom of crowds is a thing. It breaks down when the crowd doesn’t understand the model, but if you are getting them to guess individual parts of a model, it works again.
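To make the “guess individual parts of a model” idea concrete, here is a minimal sketch of how it could work, assuming a toy two-parameter cost-effectiveness model; the parameter names and numbers are invented for illustration and are not GiveWell’s:

```python
import statistics

# Hypothetical crowd guesses for each input of a toy cost-effectiveness model.
# Parameter names and numbers are invented for illustration; they are not GiveWell's.
crowd_guesses = {
    "cost_per_net_usd": [4.2, 5.0, 4.5, 6.1, 4.8],
    "nets_per_death_averted": [900, 1200, 1500, 800, 1100],
}

# Aggregate each input separately (the crowd guesses the parts, not the whole model),
# using the median to damp outliers.
aggregated = {name: statistics.median(values) for name, values in crowd_guesses.items()}

# The model itself combines the aggregated inputs.
cost_per_death_averted = aggregated["cost_per_net_usd"] * aggregated["nets_per_death_averted"]
print(f"Crowd estimate: ${cost_per_death_averted:,.0f} per death averted")
```

The point is that no individual needs to understand the whole model; each person guesses one part, and the model does the combining.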
Linked to the Stack Overflow point I made, I think there could easily be a site for crowdsourcing the answers to GiveWell’s questions. I think there is a 10% chance that, with 20k, you could build a site that comes up with better answers, if EAs enjoyed making guesses for fun. Wikipedia is the best encyclopaedia in the world because it leverages the free time and energy of *loads* of nerds. GiveWell could do the same.
Can you point to any examples of GiveWell numbers that you think a crowd would have a good chance of answering more accurately? A lot of the figures on the sheets either come from deep research/literature reviews or from subjective moral evaluation, both of which seem to resist crowdsourcing.
If you want to see what forecasting might look like around GiveWell-ish questions, you could reach out to the team at Metaculus and suggest they include some on their platform. They are, to my knowledge, the only EA-adjacent forecasting platform with a good-sized userbase.
Overall, the amount of community participation in similar projects has historically been pretty low (e.g. no “EA wiki” has ever gotten mass participation going), and I think you’d have to find a way to change that before you made substantial progress with a crowdsourcing platform.
Audience size is a big challenge here. There might be a few thousand people who are interested enough in EA to participate in the community at all (beyond donating to charity or joining an occasional dinner with their university group). Of those, only a fraction will be interested in contributing to crowdsourced intellectual work.
By contrast, Stack Overflow has a potential audience of millions, and Wikipedia’s is larger still. And yet, the most active 1% of editors might account for… half, maybe, of the total content on those sites? (Couldn’t quickly find reliable numbers.)
If we extrapolate to the EA community, our most active 1% of contributors would be roughly 10 people, and I’m guessing those people already find EA-focused ways to spend their time (though I can’t say how those uses compare to creating content on a website like the one you proposed).
Has anyone ever tried making an EA Stack Exchange?
I am not sure I can think of obvious numbers that a crowd couldn’t answer with a similar level of accuracy. (There is also the question of accuracy compared to what? Future GiveWell evaluations?) Consider Metaculus’s record vs any other paid experts. I think your linked point about crowd size is the main one: how large a community could you mobilise to guess these things?
Metaculus produces world-class answers off a user base of 12,000. How many users does this forum have? I guess if you ran an experiment here you’d be pretty close. If you ran it elsewhere you might get 1-10% buy-in. I think even 3 orders of magnitude off isn’t bad for an initial test. And if it worked, it seems likely you could be within 1 order of magnitude pretty quickly.
I suggest the difference between this and an EA wiki would be that it would be answering questions.
Given the value that GiveWell offers, testing this seems very valuable.
I would say “comparing the crowd’s accuracy to reality” would be best, but “future GiveWell evaluations” is another reasonable option.
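If it helps make that comparison concrete, here is a minimal sketch of one way to score it, assuming you have a crowd aggregate and some later reference figure (either an observed outcome or a future GiveWell estimate); the numbers are invented:

```python
import math

def factor_off(crowd_estimate: float, reference: float) -> float:
    """How far off the crowd was, in orders of magnitude (0 = exact, 1 = factor of 10)."""
    return abs(math.log10(crowd_estimate / reference))

# Invented numbers: the crowd's aggregate vs a later reference figure
# (an observed outcome, or a future GiveWell evaluation).
print(factor_off(3_000, 4_500))  # ~0.18 orders of magnitude off
```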
I don’t know what Metaculus’s record is against “other paid experts,” and I expect it would depend on which experts and which topic was up for prediction. I think the average researcher at GiveWell is probably much, much better at probabilistic reasoning than the average pundit or academic, because GiveWell’s application process tests this skill and working at GiveWell requires that the skill be used frequently.
I also don’t know where your claim that “Metaculus produces world-class answers” comes from. Could you link to some evidence? (In general, a lot of your comments make substantial claims without links or citations, which can make it hard to engage with them.)
Open Philanthropy has contracted with Good Judgment Inc. for COVID forecasting, so this idea is definitely on the organization’s radar (and by extension, GiveWell’s). Have you tried asking them why they don’t ask questions on Metaculus or make more use of crowdsourcing in general? I’m sure they’d have a better explanation for you than anything I could hypothesize :-)
Noted on the lack of citations.
I don’t feel like Open Philanthropy would answer my speculative emails. Now that you point it out, they might, but in general I don’t feel worthy of their time.
(Originally I wrote this beauty of a sentence: “previously I don’t think I’d have thought they thought me worthy of their time.”)
If you really think GiveWell or Open Philanthropy is missing out on a lot of value by failing to pursue a certain strategy, it seems like you should aim to make the most convincing case you can for their sake!
(Perhaps it would be safer to write a post specifically about this topic, then send it to them; that way, even if there’s no reply, you at least have the post and can get feedback from other people.)
Also, there is possibly room for a “request citation” button. When you talk in different online communities, it’s not clear how much citing you should do. An easy way to request and add citations would avoid the need for additional comments.