I research a wide variety of issues relevant to global health and development. I also consult as a researcher for GiveWell. I’m always happy to chat—if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!
Karthik Tadepalli
I redirected my giving from GiveWell to EA Animal Welfare Fund. I had been meaning to for a while (since the donation election), so wouldn’t necessarily call it marginal, but it was the trigger.
Interestingly, the Metaculus forecast on this was off by more than an order of magnitude (15% vs 300-400%). Only three people forecasted, so I wouldn’t read too much into it, but it is a wide gap.
I’ve had a substantive/technical conversation with Emmanuel over Zoom, can confirm he is not a scammer.
I’m curious how many people actually split their individual giving across cause areas. It seems like a strange decision, for all the reasons you outline.
Sorry for demanding the spoon-feeding, but where do I find a list of such organizations?
I might do this. What organizations would you be most interested in seeing this for?
The CE of redirecting money is simply (dollars raised per dollar spent) * (difference in CE between your use of the money and the counterfactual use). So if GD raises $10 from climate mitigation funders for every $1 it spends, and that money would otherwise have been neutral, that’s a cost-effectiveness of 10x in GiveWell units.
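As a sketch, the formula above with the GiveDirectly-style numbers from the example (the function name and figures are illustrative, not from any actual GiveWell model):

```python
def redirect_ce(raised_per_dollar, ce_new_use, ce_counterfactual):
    # CE of redirecting = (dollars raised per dollar spent)
    #                     * (CE of your use - CE of counterfactual use)
    return raised_per_dollar * (ce_new_use - ce_counterfactual)

# $10 raised per $1 spent; the redirected money does 1x good (GiveWell
# units) while its counterfactual use is neutral (0x) -> 10x overall.
assert redirect_ce(10, 1.0, 0.0) == 10.0
```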
There’s nothing complicated about estimating the value of leverage. The problem is actually doing leverage. Everyone is trying to leverage everyone else. When there is money to be had, there are a bunch of organizations trying to influence how it is spent. Melinda French Gates is likely deluged with organizations trying to pitch her for money. The CEAP shutdown post you mentioned puts it perfectly:
The core thesis of our charity fell prey to the 1% fallacy. Within any country, much of the development budget is fixed and difficult to move. For example, most countries will have made binding commitments spanning several years to fund various projects and institutions. Another large chunk is going to be spent on political priorities (funding Ukraine, taking in refugees, etc.) which is also difficult for an outsider to influence.
What is left is fought over by hundreds, if not thousands of NGOs all looking for funding. I can’t think of any other government budget with as many entities fighting over as small a budget. The NGOs which survive in this space, are those which were best at getting grants. Like other industries dependent on government subsidies, they fight tooth and nail to ensure those subsidies stay put.
This doesn’t mean that leverage is impossible. It just means that leverage opportunities tend to be specific and limited. We have to take them on opportunistically, rather than making leverage a theory of impact.
During the animal welfare vs global health debate week, I was very reluctant to make a post or argument in favor of global health, the cause I work in and that animates me. Here are some reflections on why, that may or may not apply to other people:
Moral weights are tiresome to debate. If you (like me) do not have a good grasp of philosophy, it’s an uphill struggle to grasp what RP’s moral weights project means exactly, and where I would or would not buy into its assumptions.
I don’t choose my donations/actions based on impartial cause prioritization. I think impartially within GHD (e.g. I don’t prioritize interventions in India just because I’m from there, I treat health vs income moral weights much more analytically than species moral weights) but not for cross-cause comparison. I am okay with this. But it doesn’t make for a persuasive case to other people.
It doesn’t feel good to post something that you know will provoke a large volume of (friendly!) disagreement. I think of myself as a pretty disagreeable person, but I am still very averse to posting things that go against what almost everyone around me is saying, at least when I don’t feel 100% confident in my thesis. I have found previous arguments about global health vs animal welfare to be especially exhausting and they did not lead to any convergence, so I don’t see the upside that justifies the downside.
I don’t fundamentally disagree with the narrow thesis that marginal money can do more good in animal welfare. I just feel disillusioned with the larger implications that global health is overfunded and not really worth the money we spend on it.
I’m deliberately focusing on emotional/psychological inhibitions as opposed to analytical doubts I have about animal welfare. I do have some analytical doubts, but I think of them as secondary to the personal relationship I have with GHD.
This looks like an exceptionally promising list of charities. Good luck to all the founders!
This is a restatement of the law of iterated expectations. The LIE says E[X] = E[E[X|Y]]. Replace X with an indicator variable for whether some hypothesis H is true, and interpret Y as an indicator for binary evidence E about H. Then this immediately gives you a conservation of expected evidence: if P(H|E) > P(H), then P(H|¬E) < P(H), since P(H) is a weighted average of the two of them so it must be in between them.
You could argue this is just an intuitive connection of the LIE to problems of decisionmaking, rather than a reinvention. But there’s no acknowledgement of the LIE anywhere in the original post or comments. In fact, it’s treated as a consequence of Bayesianism, when it actually follows from the axioms of probability alone. (Though one comment does point this out.)
To see it formulated in a context explicitly about beliefs, see Box 1 in these macroeconomics lecture notes.
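A quick numeric check of that identity, with made-up probabilities:

```python
# Conservation of expected evidence: P(H) = P(E)*P(H|E) + P(not E)*P(H|not E),
# so if observing E would raise your credence in H, its absence must lower it.
p_e = 0.3              # P(E): chance of observing the evidence (hypothetical)
p_h_given_e = 0.9      # P(H | E)
p_h_given_not_e = 0.2  # P(H | not E)

# Law of iterated expectations applied to indicator variables:
p_h = p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e

assert p_h_given_not_e < p_h < p_h_given_e  # P(H) lies between the two
```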
I agree with Linch that the idea that “a game can have multiple equilibria that are Pareto-rankable” is trivial. Once a game has multiple equilibria, players can automatically get trapped in a suboptimal one – after all, that’s what an equilibrium is.
What specific element of “coordination traps” goes beyond that core idea?
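For concreteness, a minimal stag hunt has exactly this structure: two Nash equilibria, one Pareto-dominating the other. (The payoff numbers below are a standard textbook-style illustration, not from the post.)

```python
# Symmetric 2x2 stag hunt. Payoff to the row player for each action pair.
payoffs = {
    ("stag", "stag"): 4,  # coordinate on stag: best for both
    ("stag", "hare"): 0,  # hunt stag alone: worst outcome
    ("hare", "stag"): 3,  # hare is safe regardless of the other player
    ("hare", "hare"): 3,
}

def is_equilibrium(a, b):
    # Nash equilibrium: neither player gains by deviating unilaterally.
    row_ok = all(payoffs[(a, b)] >= payoffs[(d, b)] for d in ("stag", "hare"))
    col_ok = all(payoffs[(b, a)] >= payoffs[(d, a)] for d in ("stag", "hare"))
    return row_ok and col_ok

assert is_equilibrium("stag", "stag")  # the Pareto-superior equilibrium
assert is_equilibrium("hare", "hare")  # stable, but worse for both players
```

Both profiles are equilibria, so nothing in the game’s internal logic pushes players from (hare, hare) to (stag, stag) – that is the whole trap.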
I solve this problem by prioritizing my own needs over doing good. This is not necessarily appealing to the scrupulous people who face this dilemma.
People often justify a fast AI takeoff by pointing to how fast AI could improve beyond some point. But The Great Data Integration Schlep is an excellent LW post about the absolute sludge of trying to do data work inside corporate bureaucracies. The key point is that even when companies would seemingly benefit from much deeper insight into their work, a whole slew of incentive problems and managerial foibles prevents this from being realized. She applies this to argue for skepticism about AI takeoff:
If you’re imagining an “AI R&D researcher” inventing lots of new technologies, for instance, that means integrating it into corporate R&D, which primarily means big manufacturing firms with heavy investment into science/engineering innovation (semiconductors, pharmaceuticals, medical devices and scientific instruments, petrochemicals, automotive, aerospace, etc). You’d need to get enough access to private R&D data to train the AI, and build enough credibility through pilot programs to gradually convince companies to give the AI free rein, and you’d need to start virtually from scratch with each new client. This takes time, trial-and-error, gradual demonstration of capabilities, and lots and lots of high-paid labor, and it is barely being done yet at all.
This is also the story of computers, and the story of electricity. A transformative new technology was created, but it took decades for its potential to be realized because of all the existing infrastructure that had to be upended to maximize its impact.
In general, even if AI is technologically unprecedented, the social infrastructure through which AI will be deployed is much more precedented, and we should consider those barriers as actually slowing down AI impacts.
Growth theory for EAs – reading list and summary
One thing I’ve never seen people who are bullish on management interventions talk about (myself included) is why the corresponding interventions for microenterprises are so much less effective (McKenzie 2014, McKenzie 2020). Microenterprises and self-employed entrepreneurs also don’t adopt simple business practices like keeping accounts, but even heavy-touch interventions pushing them to do so have zero to small effects. What’s going on?
I think this is a bit of a roundabout argument. From the Philippines study:
We examine the effects of the policy changes on enrollment and graduation in other degree programs to determine whether increased migration prospects for nurses spurred new students to obtain postsecondary education or, instead, caused students to shift from other fields of study. While these results are relatively imprecise, they suggest that nursing enrollees primarily switched to nursing from other fields. This result helps to explain our large enrollment effects by clarifying that we are not estimating the elasticity of overall education to migration opportunities. Rather, the policy changes examined here were occupation specific, and individuals might be more elastic in switching between fields of study than making the extensive margin decision to enroll in higher education… While the enrollment effects were driven primarily by students switching from other degree types, students persisted to graduation at higher rates, leading to an overall increase in college graduates in the Philippines.
So they are finding exactly what you suggest: people switch into the sector from other fields. But they also find that these switchers persisted to graduation at higher rates than they otherwise would have. So if you count an increase in the overall stock of college graduates as a positive effect, the program did have an overall positive effect.
But in general, I don’t think you even need to appeal to that kind of reasoning, because brain drain usually happens in jobs that are among the most valuable possible jobs for the country. (This is likely because those jobs are both the ones rich countries want to import and well-paid enough that workers have the means to emigrate.) Medical workers are extremely valuable; so are engineers. It seems a little contrived to imagine that the sectors that lost workers were comparably socially valuable.
(xpost)
Really excited to see where this substack goes, but I have to start off with some disagreements! The remittances point is fine, as is return migration. But the literature on brain gain has always seemed pretty uncompelling.
The most obvious problem is that increasing the supply of skilled workers requires both increasing demand for education (which emigration possibilities do) and increasing the supply of education. The latter is not a given in any country. Expanding college enrolment is hard. New colleges need staff, instructors, and administrators, all of which are scarce. Government colleges need to be established by a bureaucracy, private colleges need to be regulated and quality-controlled, both of which require a lot of governance capacity by the country. We can’t just handwave the claim that if more people want to become doctors, more people can become doctors.
So I’m concerned that there’s a site selection bias in the countries studied in this literature. People are writing papers about the countries that did manage to successfully pull off a large educational expansion, so they find that emigration boosted human capital. But for countries that can’t pull it off, emigration really might be a brain drain.
How large could this site selection bias be? I pulled some data on college enrolment rates and emigration (net, not for any skill group) and compared India and the Philippines to the rest. (There was no data on Cabo Verde, the other country you cited.) Among the top 20 emigrant-sending developing countries, India had one of the highest increases in college enrolment (21 pp) between 1990 and 2015, while the Philippines was a bit above average (13 pp). As a specific contrasting example, Nigeria had only a 7 pp increase in college enrolment during this period. (data, graph)
A similar picture emerges when comparing India and the Philippines to developing countries as a whole. India has close to the highest enrolment growth over this period, and the Philippines is still above average, while Nigeria is still below average. (graph) So we should not expect Nigeria’s brain gain to be anywhere close to that of India or the Philippines.
You could argue that India and the Philippines had higher growth because emigration incentives increased the supply of education. But:
- The emigration incentives studied by Khanna/Morales and Abarcar/Theoharides are not India-specific or Philippines-specific (this fact is necessary for their estimates to be causal!), so we shouldn’t expect these countries to have larger increases in enrolment just from emigration incentives.
- Even if emigration incentives have a causal effect on the supply of colleges that was for some reason higher in India and the Philippines, I would expect that effect to be small relative to other factors that make governments want to supply more colleges (domestic political projects, trying to attract foreign companies, trying to spur industrial growth). So heterogeneous effects of emigration incentives can’t explain much of the difference between these two countries and other developing countries.
In general, I wish there was more nuance around the brain gain hypothesis. I would speculate it has such immediate acceptance because it resolves our conflicting commitments as cosmopolitans: we want people to be able to pursue a better life, we want high-income countries to have more open immigration policy, and we want low-income countries to grow faster. The brain gain hypothesis is alluring because it promises that we can have all of the above. But I think that relies on other things going right that absolutely don’t have to go right. And I wish there was more acceptance of that nuance.
I don’t mean to say that risk preferences in general are unimpeachable and beyond debate. I was only saying that I personally do not put my risk preferences up for debate, nor do I try to convince others about their risk preferences.
In any debate about different approaches to ethics, I place a lot of weight on intuitionism as a way to resolve debates. Considering the implications of different viewpoints for what I would have to accept is the way I decide what I value. I do not place a lot of weight on whether I can refute the internal logic of any viewpoint.
Cash transfers are not targeted (lots of recipient households don’t have young children) and are very expensive relative to other ways to avert child deaths ($1000 per transfer vs a few dollars for a bednet). Cost varies over more orders of magnitude than the child mortality effect does, so it dominates the calculation.
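A back-of-the-envelope version of that comparison. Only the $1000-vs-a-few-dollars cost gap comes from the comment; the mortality effects are placeholders to show how the cost gap dominates:

```python
cash_cost = 1000.0   # $ per household transfer (from the comment)
net_cost = 5.0       # $ per bednet ("a few dollars")

# Hypothetical deaths averted per unit delivered; these differ by ~5x
# while costs differ by ~200x, so cost drives the final ratio.
cash_deaths_averted = 0.01   # per transfer (placeholder)
net_deaths_averted = 0.002   # per net (placeholder)

cash_cost_per_death = cash_cost / cash_deaths_averted  # $100,000
net_cost_per_death = net_cost / net_deaths_averted     # $2,500

assert net_cost_per_death < cash_cost_per_death
```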