Can you explain how, in practice, one would choose between similar interventions within global health under this lens?
Thanks for the question, John. I’m not sure how much weight to put on “similar” in your question. In general, you’d be looking to minimize the greatest strength-weighted complaint that someone might have. Imagine a simple case where all the individuals in two equally-sized populations you might help are at risk of dying, which means that the core content of each complaint would be the same. Then, we just have the strength-weighting to worry about. The three key parts of that (at least for present purposes) would be the probability of harm, your probability of impact, and the magnitude of the impact you can have. So, we multiply through to figure out who has the strongest claim. In a case like this, intervention prioritization looks very similar to what we already do in EA.

However, in cases where the core contents of the complaints are different (death vs. quality-of-life improvements, say), the probabilities might not end up mattering. And in cases where your action would have high EV only because you’re aggregating over a very large population where each individual faces a very low chance of harm, it could easily work out that, according to EAC, you should accept less EV in order to benefit individuals who are exposed to a much greater risk of harm. So the core process can sometimes be similar, but with these anti-aggregative (or partially-aggregative) side constraints.
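To make the arithmetic concrete, here’s a minimal sketch of the decision rule I have in mind, with made-up names and toy numbers (none of the figures come from anywhere; they’re purely illustrative). It multiplies through the three strength-weighting factors for each individual, then picks the intervention that leaves the weakest strongest complaint standing:

```python
# A toy sketch of strength-weighted complaints under EAC.
# Each person's claim strength = P(harm) * P(your intervention helps) * magnitude.
# EAC then says: choose the option that minimizes the strongest complaint
# among the people you leave unaided (a minimax, not a sum over the population).

from dataclasses import dataclass

@dataclass
class Claim:
    p_harm: float      # probability the person suffers the harm
    p_impact: float    # probability your intervention actually helps them
    magnitude: float   # size of the benefit you could provide (e.g., QALYs)

    def strength(self) -> float:
        return self.p_harm * self.p_impact * self.magnitude

# Two equally-sized populations facing the same core harm (death),
# so only the strength-weighting differs. All numbers are hypothetical.
population_a = [Claim(p_harm=0.9, p_impact=0.5, magnitude=30.0)] * 1000
population_b = [Claim(p_harm=0.4, p_impact=0.8, magnitude=30.0)] * 1000

def strongest_complaint(unaided: list[Claim]) -> float:
    """The greatest strength-weighted complaint among those left unaided."""
    return max(c.strength() for c in unaided)

# Helping A leaves B's complaints standing, and vice versa. Pick the option
# whose strongest leftover complaint is smallest.
if strongest_complaint(population_b) < strongest_complaint(population_a):
    print("Help population A (the complaints left standing are weaker).")
else:
    print("Help population B (the complaints left standing are weaker).")
```

Note that taking a max over individual claims, rather than summing across each population, is exactly what builds in the anti-aggregative constraint: a huge population of people each facing a tiny risk never generates a stronger complaint than one person facing a grave risk, however large the aggregate EV.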