Here’s a provocative take on your experience that I don’t really endorse, but I’d be interested in hearing your reaction to:
In response to such a comment, I might say that GiveWell actually had much more reason than GWWC to think AMF was indeed one of the most cost-effective charities, that Peter Singer’s recommendations were good but substantially less cost-effective (and that the improvement is clearly worth it), and that the above illustration of the wicked-problem experience is useful because it applies more strongly in other areas (e.g. AI forecasting). But I’m curious about your response.
Apologies for chiming in so late!
I believe GWWC’s recommendation of the Against Malaria Foundation was based on GiveWell’s recommendation (otherwise they might’ve recommended another bednet charity). And Peter Singer generally did not recommend the charities that GiveWell ranks highly before GiveWell ranked them highly.
I don’t want to deny, though, that for any given research project you might undertake, there’s often a much quicker approach that gets you part of the way there. I think the process you described is a fine way to generate some good initial leads (I think GWWC independently recommended Schistosomiasis Control Initiative before GiveWell did, for example). As the stakes of the research rise, though, I think it becomes more valuable and important to get a lot of the details right—partly because so much money rides on it, partly because quicker approaches seem more vulnerable to adversarial behavior/Goodharting of the process.
I’m going to point aspiring researchers who ask me what it’s like to work at an EA think tank to this article. This is exactly my experience for many projects where the end result is an article. It’s a bit different when the end result is a decision like “what charity to start”.
This link is broken.
Fixed, thanks!
I think descriptions like this of the challenges that doing good research poses are really helpful! The description definitely resonates with me.
I might have a suggestion here:
How about writing a draft with imperfect arguments, then getting a few people from your target audience to read it while you watch (for example, over a video call with screen sharing), so you can hear their questions/thoughts/pushback?
I think about this like “user testing”: the pushback people actually have is often different from what I’d guess myself.
Meta: I’m expecting you to have all sorts of pushback to this suggestion; I could probably explain my thoughts and experiences here over 10 pages. But I’m not going to! I’m going to hope you tell me which part you care about, and I’ll address only that part.
(Big fan btw!)
I think this is often a good approach!
This is the kindest way anyone has ever told me that I didn’t help ;) <3 <3
If anyone’s interested, I just posted about this idea yesterday: https://www.lesswrong.com/posts/8BGexmqqAx5Z2KFjW/how-to-make-your-article-more-persuasive-spoiler-do-user
Would you consider asking representative samples of populations about their priority problems, coordinating local experts to develop solutions that best address those priorities, and then asking those experts to find the organizations and individuals who can deploy the solutions at the lowest (marginal) cost?
This way, you don’t depend on data posted online (which may not be representative of what all the relevant organizations and individuals can do), you work much more efficiently (by coordinating experts who have already spent substantial time learning about their domains), and you get much better cost-effectiveness (by combining solutions and using local prices).
Feel free to review my initial reactions to this piece.