He also mentions that cost-effectiveness analysis ignores the significance of ‘iteration effects’ (page 12).
Gabriel uses ‘iterate’ in his ultra-poverty example, so I’m fairly certain his usage there is what he was referring to here:
‘Therefore, they would choose the program that supports literate men. When this pattern of reasoning is iterated many times, it leads to the systematic neglect of those at the very bottom, a trend exemplified by how EAs systematically neglect the very bottom in the first world. This is unjust.’ (with my edits)
It’s the same with using the DALY to assess cost-effectiveness. His concern is that if you scale up or replicate a program that is cost-effective on DALY calculations, iteration effects mean a subset of those receiving the treatment might be systematically neglected, and that this goes against principles of justice and equality. Therefore, according to Gabriel, using cost-effectiveness to decide what is good or which charity to fund is on morally shaky ground. This is how I understood him.
Thanks for this. I have two comments. Firstly, I’m not sure he’s making a point about justice and equality in the ‘quantification bias’ section. If his criticism of DALYs works, then it works on straightforward consequentialist grounds: DALYs are the wrong metric of welfare. (On this, see our footnote 41.)
Secondly, the claim about iteration effects is neither necessary nor sufficient to get to his conclusion. If the DALY metric inappropriately ignores hope, then it doesn’t really matter whether a decision about healthcare resource distribution on the basis of DALYs is made once or is iterated. Either way, DALYs would ignore an important component of welfare.
Put this way, I change my mind and agree it is unclear. However, to make the paper stronger, I would have included something akin to what you just wrote, to make clear why you think Gabriel’s use of “iteration effects” is unclear and not the same as his usage in the ‘priority’ section.
I’m not sure how important clarifying something like this is for philosophical argumentation, but for me, this was the one nagging kink in what is otherwise fast becoming one of my favourite “EA-defense” papers.
Thanks for the feedback. From memory, our thinking at the time was that since that reading didn’t do any work in his argument, it couldn’t have been what he meant.