1 & 2 might normally be answered by the von Neumann–Morgenstern utility theorem.*
In the case you mentioned, you can try to calculate the impact of an education throughout the beneficiaries’ lives. I’d expect it to be mostly an increase in future wages, but also some other positive externalities. Then you look at the willingness to trade time for money, or the willingness to trade years of life for money, or the goodness and badness of life at different earning levels, and you come up with a (very uncertain) comparison.
If you want to look at an example of this, you might want to look at GiveWell’s evaluations in general, or at their evaluation of deworming charities in particular.
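To make the shape of that calculation concrete, here’s a toy sketch in Python. Every number in it (cost_per_pupil, wage_gain_per_year, dollars_per_life_year, etc.) is a hypothetical placeholder made up for illustration, not a real estimate, and GiveWell’s actual models are far more careful:

```python
# Toy cost-effectiveness sketch for an education intervention.
# All inputs are hypothetical placeholders, not real estimates.

cost_per_pupil = 50.0        # $ spent per pupil on the programme
wage_gain_per_year = 120.0   # $ of extra annual income attributed to it
working_years = 30           # years over which the wage gain accrues
discount_rate = 0.04         # annual discount rate on future income

# Present value of the lifetime wage increase.
pv_wage_gain = sum(
    wage_gain_per_year / (1 + discount_rate) ** t
    for t in range(1, working_years + 1)
)

# Convert dollars into a common unit via a (very uncertain) value
# judgement: how many dollars of income trade against one extra year
# of healthy life. This is exactly the kind of trade-off mentioned above.
dollars_per_life_year = 500.0  # hypothetical willingness-to-pay figure

life_year_equivalents = pv_wage_gain / dollars_per_life_year
print(f"PV of wage gain per pupil: ${pv_wage_gain:,.0f}")
print(f"≈ {life_year_equivalents:.1f} life-year equivalents per pupil")
print(f"≈ {life_year_equivalents / cost_per_pupil:.3f} life-years per $")
```

The point is the pipeline (benefit stream → present value → common unit → cost per unit), which is what lets you compare an education charity against, say, a health charity measured in the same unit. Each input carries huge uncertainty, which is why the final comparison is only very rough.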
I hope that’s enough to point you in some directions that might answer your questions.
* But, e.g., for negative utilitarians, axioms 3 and 3′ wouldn’t apply in general (because they prefer avoiding suffering infinitely more than promoting happiness; to see this, consider L = some suffering, M = non-existence, N = some happiness), though the axioms would still apply in the particular case where they’re trading off between different quantities of suffering. In any case, even if negative utilitarians represent the world with two numbers (total suffering, total happiness), they still have a way of comparing possible worlds: choose the one with the least suffering, then the one with the most happiness if suffering is equal.
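For reference, here is how those two axioms are usually stated (I’m assuming the standard numbering, as in the Wikipedia article, where L, M, N are lotteries and ≺, ⪯, ∼ denote strict preference, weak preference, and indifference):

```latex
% Axiom 3 (Continuity): some mixture of the two extremes is exactly
% as good as the middle lottery.
L \preceq M \preceq N \;\Longrightarrow\;
  \exists\, p \in [0,1] \text{ such that } pL + (1-p)N \sim M

% Axiom 3' (Archimedean): no lottery is infinitely better or worse
% than another; small enough probabilities cannot flip a strict preference.
L \prec M \prec N \;\Longrightarrow\;
  \exists\, \varepsilon \in (0,1) \text{ such that }
  (1-\varepsilon)L + \varepsilon N \prec M \prec \varepsilon L + (1-\varepsilon)N
```

With L = some suffering, M = non-existence, N = some happiness, a strict negative utilitarian regards every mixture pL + (1−p)N with p > 0 as worse than non-existence, so (on that view) no p satisfies continuity. And the lexical rule at the end is just a tuple ordering; a minimal sketch with made-up world values:

```python
# Lexicographic comparison of worlds described as (suffering, happiness):
# minimise total suffering first; break ties by maximising happiness.
# The worlds and their numbers are hypothetical.
worlds = {
    "A": (10.0, 50.0),
    "B": (10.0, 80.0),
    "C": (12.0, 500.0),
}

# Key: suffering ascending, then happiness descending.
best = min(worlds, key=lambda name: (worlds[name][0], -worlds[name][1]))
print(best)  # "B": ties with A on suffering and wins on happiness;
             # C's huge happiness cannot offset its extra suffering.
```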
Thanks very much. I am going to spend some time thinking about the von Neumann–Morgenstern theorem. Despite my huge in-built bias towards believing things labelled “von Neumann”, at an initial scan I found that only one of the axioms (transitivity) felt obviously “true” to me for things like “how good is the whole world?”. They all seem true when actually playing games of chance for money, of course, which seems to often be the model. But I intend to think about that harder.
On GiveWell, I think they’re doing an excellent job of trying to answer these questions. I guess I tend to get a bit stuck at the value-judgement level (e.g. how to decide what fraction of a human life a chicken life is worth). But it doesn’t matter much in practice because I can then fall back on a gut-level view and yet still choose a charity from their menu and be confident it’ll be pretty damn good.