Suppose I’m the intended recipient of a philanthropic intervention by an organization called MaxGood. They are considering two possible interventions: A and B. If MaxGood choose according to “decision utility” then the result is equivalent to letting me choose, assuming that I am well-informed about the consequences. In particular, if it were in my power to decide which measure they use to choose their intervention, I would definitely choose decision-utility. Indeed, making MaxGood choose according to decision-utility is guaranteed to be the best choice according to decision-utility, assuming MaxGood are at least as well informed about things as I am, since by definition I’m making my choices according to decision-utility.
On the other hand, letting MaxGood choose according to my answer on a poll is… Well, if I knew, when answering it, how the poll would be used, I could use it to achieve the same effect. But in practice, this is not the context in which people answer those polls (even if they know the poll is used for philanthropy, this philanthropy usually doesn’t target them personally, and even if it did, individual answers would have tiny influence[1]). Therefore, the result might be what I actually want, or it might be, e.g., an intervention which influences society in a direction that makes reporting higher numbers culturally expected, or which lowers the baseline expectations w.r.t. which I’m implicitly calculating this number[2].
Another issue with polls is, how do we know the answer is utility rather than some monotonic function of utility? The difference is important if we need to compute expectations. But this is the least of the problems IMO.
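To see why this matters for expectations, here is a minimal sketch (with hypothetical lotteries invented for illustration, not taken from the comment): expected values are not preserved under monotone transforms, so a reported score that is merely a monotone function of utility can rank risky interventions in the wrong order.

```python
# Sketch: a monotone transform of utility can flip which option
# has the higher expectation. All numbers here are hypothetical.
import math

def expectation(outcomes):
    """Expected value of a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Intervention A: utility 0 or 100 with probability 1/2 each.
# Intervention B: utility 49 for sure.
A = [(0.5, 0.0), (0.5, 100.0)]
B = [(1.0, 49.0)]

# Under the true utility, A is better in expectation: 50 > 49.
assert expectation(A) > expectation(B)

# Under a monotone transform (sqrt), the ranking flips: 5 < 7.
A_t = [(p, math.sqrt(v)) for p, v in A]
B_t = [(p, math.sqrt(v)) for p, v in B]
assert expectation(A_t) < expectation(B_t)
```

So if poll answers are, say, roughly the square root of utility, an expected-value comparison over the answers can prefer the wrong intervention even though the ordering of sure outcomes is unchanged.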
Now, in reality it is not in the recipient’s power to decide on that measure. Hence MaxGood are free to decide in some other way. But, if your philanthropy is explicitly going against what the recipient would choose for themself[3], well… From my perspective (as Vanessa this time), this is not even altruism anymore. This is imposing your own preferences on other people[4].
A similar situation arises in voting, and I indeed believe this causes people to vote in ways other than optimizing the governance of the country (specifically, vote according to tribal signalling considerations instead).
Although in practice, many interventions have limited predictable influence on these kinds of factors, which might mean that poll-based measures are usually fine. It might still be difficult to see the signal through the noise in this measure. And, we need to be vigilant about interventions that don’t fall into this class.
It is ofc absolutely fine if e.g. MaxGood are using a poll-based measure because they believe, with rational justification, that in practice this is the best way to maximize the recipient’s decision-utility.
But, if your philanthropy is explicitly going against what the recipient would choose for themself, well… From my perspective (as Vanessa this time), this is not even altruism anymore. This is imposing your own preferences on other people
Would this also apply to e.g. funding any GiveWell top charity besides GiveDirectly, or would that fall into “in practice, this is the best way to maximize the recipient’s decision-utility”?
I don’t think most recipients would buy vitamin supplementation or bednets themselves, given cash. I guess you could say that it’s because they’re not “well informed”, but then how could you predict their “decision utility when well informed” besides assuming it would correlate strongly with maximizing their experience utility?
A bit off-topic, but I found GiveWell’s staff documents on moral weights fascinating for deciding how much to weigh beneficiaries’ preferences, from a very different angle.
I don’t know much about supplements/bednets, but AFAIU there are economies of scale which make it easier for e.g. AMF to supply bednets compared with individuals buying bednets for themselves.
As to how to predict “decision utility when well informed”, one method I can think of is to look at people who have been selected for being well-informed while similar to target recipients in other respects.
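A toy sketch of what that method could look like in practice, assuming we had survey data of this shape (the data, field names, and groups below are all invented for illustration): within each stratum of covariates, take the choice distribution of the well-informed respondents as a proxy for what similar recipients would choose if well informed.

```python
# Hypothetical sketch: proxy "decision utility when well informed"
# by the observed choices of well-informed people matched to target
# recipients on other covariates. All data here is invented.
from collections import Counter, defaultdict

survey = [
    # (covariate_group, well_informed, chosen_intervention)
    ("rural-low-income", True,  "bednets"),
    ("rural-low-income", True,  "cash"),
    ("rural-low-income", True,  "bednets"),
    ("rural-low-income", False, "cash"),
    ("urban-low-income", True,  "cash"),
]

def informed_choice_shares(rows):
    """Within each covariate group, the choice distribution among
    well-informed respondents only."""
    by_group = defaultdict(Counter)
    for group, informed, choice in rows:
        if informed:
            by_group[group][choice] += 1
    return {g: {c: n / sum(cnt.values()) for c, n in cnt.items()}
            for g, cnt in by_group.items()}

shares = informed_choice_shares(survey)
# For a "rural-low-income" recipient, 2/3 of matched well-informed
# respondents chose bednets -- the proxy prediction for that group.
```

Of course, the hard part this sketch glosses over is the selection itself: people who became well-informed may differ from target recipients in unobserved ways, which is exactly the kind of assumption that would need defending.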
But, I don’t at all claim that I know how to do it right, or even that life satisfaction polls are useless. I’m just saying that I would feel better about research grounded in (what I see as) more solid starting assumptions, which might lead to using life satisfaction polls or to something else entirely (or a combination of both).
I’m ignoring animals in this entire analysis, but this doesn’t matter much since the poll methodology is inapplicable to animals anyway.