Thanks for this. I have two comments. Firstly, I’m not sure he’s making a point about justice and equality in the ‘quantification bias’ section. If his criticism of DALYs works, then it works on straightforward consequentialist grounds—DALYs are the wrong metric of welfare. (On this, see our footnote 41.)
Secondly, the claim about iteration effects is neither necessary nor sufficient to get to his conclusion. If the DALY metric inappropriately ignores hope, then it doesn’t really matter whether a decision about healthcare resource distribution on the basis of DALYs is made once or is iterated. Either way, DALYs would ignore an important component of welfare.
Put this way, I change my mind and agree it is unclear. However, to make the paper stronger, I would have included something akin to what you just wrote, to make it clear why you think Gabriel’s use of “iteration effects” is unclear and not the same as his usage in the ‘priority’ section.
I’m not sure how important clarifying something like this is for philosophical argumentation, but for me, this was the one nagging kink in what is otherwise fast becoming one of my favourite “EA-defense” papers.
Thanks for the feedback. From memory, at the time we thought that since it didn’t do any work in his argument, that couldn’t be what he meant by it.