I research a wide variety of issues relevant to global health and development. I also consult as a researcher for GiveWell. I’m always happy to chat—if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!
Karthik Tadepalli
Makes sense!
Thanks for the helpful answer!
In some orgs, internships are sort of a training/test period or lead-in to a full-time role, but my understanding is that’s mostly a thing for bigger organizations that hire for the same roles every year, and definitely not realistic for us.
Can you elaborate on why it’s not realistic for small organizations? It seems like a small organization could use an internship as a lead-in for a full-time role, too. You can get a clearer signal about whether someone is a good fit for a full-time job without committing to hire them upfront.
[Question] Why does your organization not offer internships?
I don’t care about population ethics, so don’t take this as a good-faith argument. But doesn’t astronomical waste imply that saving lives earlier can compete on the same order of magnitude as x-risk?
I do think they are a sufficient condition to kick off an arms race. Export controls are a declaration of hostility, and they force the two countries to decouple from each other. China and the US being decoupled makes the downside of an arms race much lower and the upside much higher.
It’s hard to justify export controls unless you believe that we actually are in an arms race, sooner or later. If you wanted to prevent an arms race you couldn’t pick a worse policy to put your weight behind. That leads me to conclude that AI policy people who back export controls find an arms race to be acceptable.
Do you see advocating for export controls as fundamentally different from an arms race? Because it seems like export controls are pretty popular among AI policy people.
Cash transfers are not targeted (i.e. lots of households that don’t have young children receive transfers) and are very expensive relative to other ways to avert child deaths ($1000 vs a few dollars for a bednet). Cost varies over more orders of magnitude than child mortality effects do, so it dominates the calculation.
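To make the orders-of-magnitude point concrete (the $5 bednet figure is my illustrative stand-in for “a few dollars”):

$$\frac{\$1000 \text{ per transfer}}{\$5 \text{ per bednet}} \approx 200$$

so a transfer’s effect on child mortality would need to be roughly 200 times a bednet’s to compete per dollar, and mortality effects don’t plausibly vary by that much.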
I redirected my giving from GiveWell to EA Animal Welfare Fund. I had been meaning to for a while (since the donation election), so wouldn’t necessarily call it marginal, but it was the trigger.
Interestingly, the Metaculus forecast on this was off by an order of magnitude (15% vs 300-400%). Only three people forecast it, so I wouldn’t read too much into it, but it is a wide gap.
I’ve had a substantive/technical conversation with Emmanuel over Zoom, can confirm he is not a scammer.
I’m curious how many people actually split their individual giving across cause areas. It seems like a strange decision, for all the reasons you outline.
Sorry for demanding the spoon-feeding, but where do I find a list of such organizations?
I might do this. What organizations would you be most interested in seeing this for?
The CE of redirecting money is simply (dollars raised per dollar spent) * (difference in CE between your use of the money vs the counterfactual use). So if GD raises $10 from climate mitigation for every $1 it spends, and that money would otherwise have been neutral, that’s a cost-effectiveness of 10x in GiveWell units.
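In symbols, using the numbers from that example (and my assumption that GD’s own use of the money is the 1x GiveWell baseline):

$$\text{CE}_{\text{redirect}} = \frac{\$10 \text{ raised}}{\$1 \text{ spent}} \times \big(\underbrace{1}_{\text{GD's use}} - \underbrace{0}_{\text{counterfactual}}\big) = 10 \text{ (in GiveWell units)}$$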
There’s nothing complicated about estimating the value of leverage. The problem is actually doing leverage. Everyone is trying to leverage everyone else. When there is money to be had, there are a bunch of organizations trying to influence how it is spent. Melinda French Gates is likely deluged with organizations trying to pitch her for money. The CEAP shutdown post you mentioned puts it perfectly:
The core thesis of our charity fell prey to the 1% fallacy. Within any country, much of the development budget is fixed and difficult to move. For example, most countries will have made binding commitments spanning several years to fund various projects and institutions. Another large chunk is going to be spent on political priorities (funding Ukraine, taking in refugees, etc.) which is also difficult for an outsider to influence.
What is left is fought over by hundreds, if not thousands, of NGOs all looking for funding. I can’t think of any other government budget with as many entities fighting over as small a budget. The NGOs which survive in this space are those which were best at getting grants. Like other industries dependent on government subsidies, they fight tooth and nail to ensure those subsidies stay put.
This doesn’t mean that leverage is impossible. It just means that leverage opportunities tend to be specific and limited. We have to take them on opportunistically, rather than making leverage a theory of impact.
During the animal welfare vs global health debate week, I was very reluctant to make a post or argument in favor of global health, the cause I work in and that animates me. Here are some reflections on why, that may or may not apply to other people:
Moral weights are tiresome to debate. If you (like me) do not have a good grasp of philosophy, it’s an uphill struggle to grasp what RP’s moral weights project means exactly, and where I would or would not buy into its assumptions.
I don’t choose my donations/actions based on impartial cause prioritization. I think impartially within GHD (e.g. I don’t prioritize interventions in India just because I’m from there, and I treat health-vs-income moral weights much more analytically than species moral weights), but not for cross-cause comparison. I am okay with this. But it doesn’t make for a persuasive case to other people.
It doesn’t feel good to post something that you know will provoke a large volume of (friendly!) disagreement. I think of myself as a pretty disagreeable person, but I am still very averse to posting things that go against what almost everyone around me is saying, at least when I don’t feel 100% confident in my thesis. I have found previous arguments about global health vs animal welfare to be especially exhausting and they did not lead to any convergence, so I don’t see the upside that justifies the downside.
I don’t fundamentally disagree with the narrow thesis that marginal money can do more good in animal welfare. I just feel disillusioned with the larger implications that global health is overfunded and not really worth the money we spend on it.
I’m deliberately focusing on emotional/psychological inhibitions as opposed to analytical doubts I have about animal welfare. I do have some analytical doubts, but I think of them as secondary to the personal relationship I have with GHD.
This looks like an exceptionally promising list of charities. Good luck to all the founders!
This is a restatement of the law of iterated expectations. LIE says $E[E[X \mid Y]] = E[X]$. Replace $X$ with an indicator variable for whether some hypothesis $H$ is true, and interpret $Y$ as an indicator for binary evidence $E$ about $H$. Then this immediately gives you a conservation of expected evidence: if $P(H \mid E) > P(H)$, then $P(H \mid \lnot E) < P(H)$, since $P(H)$ is an average of the two of them so it must be in between them.
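Spelling out the averaging step:

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E)$$

i.e. the prior $P(H)$ is a convex combination of the two posteriors, so if one of them lies above the prior, the other must lie below it.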
You could argue this is just an intuitive connection of the LIE to problems of decision-making, rather than a reinvention. But there’s no acknowledgement of the LIE anywhere in the original post or comments. In fact, it’s treated as a consequence of Bayesianism, when it follows from the probability axioms alone. (Though one comment does point this out.)
To see it formulated in a context explicitly about beliefs, see Box 1 in these macroeconomics lecture notes.
I agree with Linch that the idea that “a game can have multiple equilibria that are Pareto-rankable” is trivial. Given multiple Pareto-rankable equilibria, it automatically follows that players can get trapped in a suboptimal one – after all, an equilibrium is precisely a state no player can improve on by deviating unilaterally.
What specific element of “coordination traps” goes beyond that core idea?
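As a minimal sketch of that core idea (the payoffs are the textbook stag hunt, my choice rather than anything from the thread): two pure-strategy Nash equilibria exist, one Pareto-dominates the other, yet the worse one is still self-enforcing.

```python
# Stag hunt: a 2x2 game with two Pareto-rankable pure Nash equilibria.
# Actions: 0 = Stag, 1 = Hare. payoffs[(row, col)] = (row payoff, col payoff).
payoffs = {
    (0, 0): (4, 4),  # both hunt stag: the Pareto-dominant outcome
    (0, 1): (0, 3),  # lone stag hunter gets nothing
    (1, 0): (3, 0),
    (1, 1): (3, 3),  # both hunt hare: worse for both, but safe
}

def is_nash(r, c):
    # Nash: neither player can do strictly better by deviating unilaterally.
    row_best = all(payoffs[(r, c)][0] >= payoffs[(d, c)][0] for d in (0, 1))
    col_best = all(payoffs[(r, c)][1] >= payoffs[(r, d)][1] for d in (0, 1))
    return row_best and col_best

print([p for p in payoffs if is_nash(*p)])
# [(0, 0), (1, 1)] -- (Hare, Hare) is an equilibrium despite being worse for
# everyone: once there, no single player gains by switching to Stag alone.
```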
It’s definitely true that, all else equal, uncertainty inflates CEAs of funded grants, for the reasons you identify. (This is an example of the optimizer’s curse.) However:
This risk is lower when the variance in true CE is large, especially if it’s larger than the variance due to measurement error. To the extent we think this is true of the opportunities we evaluate, this reduces the quantitative contribution of measurement error to CE inflation (see the simulation sketch below). More elaboration in this comment.
Good CEAs are conservative in their choices of inputs for exactly this reason. The goal should be to establish the minimal conditions for a grant to be worth making, as opposed to providing precise point estimates of CE.
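A quick simulation of both points, with all numbers invented for illustration: fund the grants with the top estimated CE, then compare estimated to true CE among those funded. The inflation is large when measurement error dominates the true spread, and small when it’s the reverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n, top_k = 10_000, 100  # candidate grants evaluated; grants actually funded

def selection_inflation(true_sd, noise_sd):
    """Mean (estimated - true) CE among the top-k grants ranked by estimate."""
    true_ce = rng.normal(10, true_sd, n)             # true cost-effectiveness
    estimate = true_ce + rng.normal(0, noise_sd, n)  # CEA with measurement error
    funded = np.argsort(estimate)[-top_k:]
    return (estimate[funded] - true_ce[funded]).mean()

# Measurement error dominates true variation: funded grants look far better
# than they are (the optimizer's curse at full strength).
print(selection_inflation(true_sd=1, noise_sd=5))
# True variation dominates measurement error: selection mostly finds genuinely
# good grants, and the inflation is much smaller.
print(selection_inflation(true_sd=5, noise_sd=1))
```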