Interesting. I’m sorry to hear that the system is so fucked up. I really hope you’ll be able to improve it.
I agree with the two commenters below. I wouldn’t trust him very much on climate change, but the CCC’s work on GHD is of a different nature. The CCC has many researchers, of whom Bjørn is only one. I would also add: if he can get Nobel Prize-winning economists Vernon Smith, Tom Schelling, Finn Kydland and Douglass North to work with him, multiple times each, that’s a good indicator that the CCC’s research is solid.
I agree with your points about R&D and e-procurement (some of which are mentioned in the report). Thanks for your input.
It’s really cool that your wife works in land tenure! The philosophical framework I have in mind for land tenure reminds me of the one for other estimates. As Scott Alexander put it—IF IT’S WORTH DOING, IT’S WORTH DOING WITH MADE-UP STATISTICS. Essentially, it’s better to at least have some information in your land registration system, even if not very accurate, than none. What do you think about this?
As for education, I don’t know.
It’s true that he never identified as an EA. I mean EA in the sense of using reason and evidence to perform cost-effectiveness analysis across different causes, and choosing the best, regardless of what comes up. These are IMO the core characteristics of an EA; everything else is a bonus.
Also, the first occurrences of something will usually differ from what it eventually develops into. Was Hippocrates a licensed MD? Did he rely on evidence-based medicine? Did Galileo ever do a PhD in physics?
I would rate him decently on truth-seeking. He brought some of the world’s best economists into his think-tank, multiple times over. It would surely have been easier to invite less esteemed economists. You might think he did that only to raise the status of the CCC (and by extension himself), but that’s too cynical in my opinion.
Copenhagen Consensus Center’s newest research on global poverty—we should be talking about this
I might’ve used too strong language in my original post, such as the talk about being a sucker. For me it’s useful to think about donations as a product I’m buying, but I probably took it too far. And I don’t think I properly emphasized my main message, which was (as I added later): the explore-exploit tradeoff for causes is really hard if you don’t know how far exploration could take you. Honestly, I’m most interested in your take on that. I initially only used GiveWell and CEARCH to demonstrate that argument and show how I got to it.
The drug analogy is interesting, although I prefer the start-up analogy. Drug development is more binary—some drugs can just flat-out fail in humans, while start-ups are more of a spectrum (the ROI might be smaller than thought etc.). I don’t see a reason to think of CEARCH recommended programs or for most other exploratory stuff as binary. Of course lobbying could flat-out fail, but it’s unlikely we’ll have to update our beliefs that this charity would NEVER work, as might happen in drug development. And obviously with start-ups, there’s also a lot of difference between the initial market research and the later stages (as you said).
GiveWell has a lot of flaws for cause exploration. They really focus on charity research, not cause research. By design, they’re biased towards existing causes and charities: the charities must be interested and cooperate with GiveWell, and they look for a track record, so charities operating in high-risk, low-tractability areas such as policy have a harder time. In most cases that makes sense; sometimes it can miss great opportunities.
Yes, they’ve funded some policy focused charities, but they might’ve funded much more if they were more EV maximizing instead of risk-averse. Seeing the huge leverage such options provide, it’s entirely possible.
Also, they aren’t always efficient—look at GiveDirectly. Their bar for top charities was 10x GiveDirectly for years, yet they kept GiveDirectly as a top charity until last year. This is not some small, hard-to-notice inefficiency. It’s literally their consistent criterion for their flagship charities. Can you imagine a for-profit company telling its investors “well, we believe these other channels have an ROI of at least 10x, but please also consider investing in this channel with x ROI”, for multiple years? I can’t. Let alone putting that less efficient channel forward as one of the best investments…
That’s exactly what I mean when I say altruism, even EA, can have gross inefficiency in allocations. It’s not special to GiveWell, I’m just exemplifying.
If GiveWell can make such gross mistakes, then probably others can. Another example was their relative lack of research on family planning, which I’ve written about. They’re doing A LOT of great things too. But I must say I am a bit skeptical of their decision making sometimes.
Keep in mind, CEARCH would have to be EXTREMELY optimistic for us to say it hasn’t found a couple of causes 10x as effective as GiveWell’s top charities. We are talking about 40x optimistic. That might be the case, but IMO it’s a strong enough assertion to require proof. Do you have examples of anything close to 40x optimism in cost-effectiveness estimates?
I agree that a lot of the difference in EAs’ donations can come from differing perspectives—probably most. But I think even utilitarian, EV-maximizing, zero-future-discount, animal-equalist EAs donate to different causes (or any other set of shared beliefs). It’s definitely not impossible.
As for other examples of 10x GiveWell cost-effectiveness in global health:
CE has estimated another charity yields $5.62 per DALY.
An Israeli non-profit, which, in cooperation with EA Israel, produced an estimate of $4.3 per QALY. A volunteer told me he believed it was about 3x too optimistic, but that’s still around 10x GiveWell.
Also, here is an example of a 4x disagreement between GiveWell and Founders Pledge, and an even bigger disagreement with RP, on a mass media campaign for family planning. Even the best in the business can disagree.
Sorry for this being a bit of a rant.
Thanks for the reply!
If I understand your main arguments correctly, you’re basically saying that high cost-effectiveness options are rare, uncertain, and have a relatively small funding gap that is likely to be closed anyway. Also, new charities are likely to fail, and can be less effective. And smart EAs won’t waste their money.
Uncertainty and rarity: Assume that CEARCH is on average 5x too optimistic about their high-confidence reports, and 20x too optimistic about their low-confidence ones (that’s A LOT). Still, out of the 17 causes they researched, 4 remain over 10x as effective as top GiveWell charities—almost a quarter. They were probably lucky—Rethink Priorities, CE and the like don’t have such a high rate (it would be an interesting topic for analysis). But still, their budgets are minuscule. RP has spent around $14m in its entire lifetime. CEARCH consists of only 2 full-time workers and was founded less than 2 years ago. CE had a total income of £775k in 2022. The cost of operations for this kind of work is tiny compared to the amount we spend on direct work.
Small funding gap, likely to be closed anyway: Let’s say that on average, finding such a cause costs $5m (which seems overblown given the aforementioned figures). Assume these causes are on average 20x as effective as top GiveWell charities, and the funding gap is indeed small—only $10m on average. That’s $15m that would’ve done as much good as $200m for GiveWell. So by finding and funding 2.6 such causes a year, we can equal the impact GiveWell had in 2021. Those funding gaps aren’t that likely to be closed—it took more than 10 years after the inception of EA for CEARCH to find those causes. In the stock market, a 2% mispricing may be closed within a few hours. In altruism, a 500% misallocation will never be closed without deliberate challenge.
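A quick sketch of the arithmetic above (all dollar figures and multipliers are the illustrative assumptions from this comment, not real data):

```python
# Back-of-the-envelope check of the argument above.
# All figures are the illustrative assumptions from the text, not real data.

search_cost = 5_000_000    # assumed average cost of finding one such cause
funding_gap = 10_000_000   # assumed average funding gap per cause
multiplier = 20            # assumed effectiveness vs. top GiveWell charities

total_spent = search_cost + funding_gap   # $15m per cause
equivalent = funding_gap * multiplier     # $200m of GiveWell-equivalent impact

# The text compares against GiveWell's 2021 impact; 2.6 causes/year implies
# a baseline of roughly 2.6 * $200m = $520m directed that year.
causes_per_year = 520_000_000 / equivalent

print(f"${total_spent:,} spent = ${equivalent:,} GiveWell-equivalent; "
      f"{causes_per_year:.1f} causes/year to match 2021")
```

Note the leverage: the $5m search cost barely changes the picture, since almost all the value comes from the 20x multiplier on the funding gap itself.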
And these causes are pretty easy to find. CEARCH started in 2022 and has already found 4 causes 10x GiveWell under my aforementioned pessimistic assumptions. CE and RP have found more. There are big funding gaps, because there are many causes like this—there are many large world governments to lobby. We should aim to close the funding gaps as soon as possible, because that would help more people.
New charities likely to fail, and to be less effective: CE’s great work shows that might not be true—a substantial number of their charities report significant success. Also, I assume that’s taken into account in exploratory research. Even if it diminishes the impact by 50%, it won’t matter to the overall picture.
EAs won’t waste their money on bad donations: If that was true, then all EAs seeking to maximize expected value would roughly agree on where to donate their money. Rather, we see the community being split into 4 main parts (global H&P, animals, existential risk, meta). Some people in EA simply don’t and won’t donate to some of these parts. This shows that at least a part of the community might donate to worse charities.
Imagine you have two investments, each of which returns your money only after 10 years:
a safe investment that returns Y.
a risky start-up that you expect to return 10Y in expectation.
What would you choose? I bet the start-up. With altruism there’s no reason to be loss-averse, so the logic is even more solid.
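The risk-neutral logic can be made explicit with a toy calculation (the success probability is a made-up illustration; the start-up’s payoff is set so its expected value is 10Y by construction):

```python
# Toy expected-value comparison of the two hypothetical investments above.
# p_success is an illustrative assumption; the start-up's payoff is chosen
# so that its expected value equals 10Y regardless of p_success.

Y = 1.0                    # normalized return of the safe investment
p_success = 0.2            # assumed probability the start-up succeeds
payoff_if_success = 10 * Y / p_success

ev_safe = Y
ev_startup = p_success * payoff_if_success   # = 10Y

print(ev_safe, ev_startup)   # 1.0 10.0
```

For a personal investor, loss aversion might justify the safe option; a donor optimizing total good done has no such reason, which is the point of the analogy.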
I guess my conclusions are that we should spend more on cause prioritization and on supporting new charities (akin to CE). But then—when do we know we’ve found a decent cause? The exploration-exploitation trade-off is impossible to navigate if you don’t know how far exploration will take you.
EA is the smartest, most open community I know. I’m sure it will explore this.
[Question] What’s the Limit for Cost-Effectiveness?
Replicato: Building a Website to Find Research Replications & Retractions
It’s an interesting point, but they’re just reviewing the evidence…
A better exercise for avoiding self-deception is ‘mental contrasting’, in which you first think about achieving your goals, and then about the obstacles that stand in your way and how to overcome them. It might also help with goal achievement, especially in combination with a technique called ‘implementation intention’.[1]
- ^
Wang G, Wang Y and Gai X (2021) A Meta-Analysis of the Effects of Mental Contrasting With Implementation Intentions on Goal Attainment. Front. Psychol. 12:565202. doi: 10.3389/fpsyg.2021.565202
Thanks, that proposal is indeed very interesting!
I agree, it seems like a very good idea to post it here. I assume it’s seen by dozens of times more people than within your team. Also, the EA forum is definitely less biased about any given thing than organizations that work on that thing—just the way humans work.
That’s great, thank you. It might be valuable to also write on MCII—mental contrasting and implementation intentions. A meta-analysis showed it is quite effective[1] (g = 0.336), and it mentioned 2 papers that found it to be more effective than either of its components—implementation intention or mental contrasting—alone.[2][3]
- ^
Wang G, Wang Y and Gai X (2021) A Meta-Analysis of the Effects of Mental Contrasting With Implementation Intentions on Goal Attainment. Front. Psychol. 12:565202. doi: 10.3389/fpsyg.2021.565202
- ^
Adriaanse, M. A., Oettingen, G., Gollwitzer, P. M., Hennes, E. P., De Ridder, D. T. D., and De Wit, J. B. F. (2010). When planning is not enough: fighting unhealthy snacking habits by mental contrasting with implementation intentions (MCII). Eur. J. Soc. Psychol. 40, 1277–1293. doi: 10.1002/ejsp.730
- ^
Kirk, D., Oettingen, G., and Gollwitzer, P. M. (2013). Promoting integrative bargaining: mental contrasting with implementation intentions. Int. J. Conflict Manage. 24, 148–165. doi: 10.1108/10444061311316771
That’s neat. May I ask why you’ve published this script, but not the script for the GiveDirectly episode?
Hi, as other commenters said, I think that from a purely EA perspective, there might be better ideas to cover. But it’s really good that you published the draft here, as it is likely to increase your positive impact and production quality in general (they’re already top-notch though). With the size of your channel, it’s probably cost-effective for other EAs to help you anyway (it’s not common to be able to leverage exposure to >100k people). Also, not every idea you cover needs to be 100% EA; it’s very important to just do interesting stuff every now and then.
As for my suggestion, maybe you could cover charter cities in the broader context of evidence-based policy and decision-making, with charter cities as one extreme and interesting example of policy research. Talking about evidence-based policy could also raise the question of effectiveness vs. intuition (obviously of value for EA), as social programs are near impossible to assess intuitively, and most don’t really work—https://80000hours.org/articles/effective-social-program/.
There are other good resources on evidence-based policy, for example:
Social Programs That Work—https://evidencebasedprograms.org/
The Campbell Collaboration—https://www.campbellcollaboration.org
I agree that charter cities are very interesting and likely a good way to get people interested in policy in general. Although I don’t think this video 100% aligns with EA, it still seems to have great potential and value, so good luck!
P.S. - y’all are doing a great job with the channel. You are really interesting and explain complicated subjects concisely and elegantly. Keep it up :)
Cool. Thanks for sharing and good luck!
Thanks, I see it’s very hard to think of something that hasn’t been already thought of by EAs. By coincidence, I’ve just seen a post by ‘Rethink Priorities’ on the subject.
As a secular Jew, I loved it :) (and was also frightened by the memories of my Jewish legacy lessons in middle school).
Also now that I have a small chance for Scott’s attention, I recommend you take a look at table 2 in this study. It’s about physical and mental diseases caused by mismatch from our current environments to the ones we primarily developed in, highly interesting IMO (return to caves—new cause area?).