If they read superficially, yes. Would you prefer he explicitly say in the abstract “I think it’s bad if everyone dies”?
ælijah
I apologize. I meant my comment to say that the paper wouldn’t be misunderstood in that way by its readership as a whole if it were read carefully.
On further thought, I think it could be reasonably argued that the abstract actually should explicitly say “I think it’s bad if everyone dies”.
I tried to make this comment before, but for some reason it isn’t visible, so I’m reposting it.
I think this is an interesting paper. I gave it an upvote.
One comment: It is misleading to say that on total utilitarianism + longtermism “the axiological difference between S1 and S2 is negligible”. It may be negligible compared to the difference between either and utopia, but that doesn’t mean it’s negligible in absolute terms. Saying that the disvalue of a single terrible thing happening to one person is “negligible” compared to the total disvalue in the world over the course of ten years doesn’t necessarily mean one is callous about the former.
I also agree, and would like to see discussion of hedonistic/preference negative utilitarianism (NU) and suffering-focused ethics (SFE) more generally.
It seems as though some of the discussion assumes classical utilitarianism (or at least uses CU as a synecdoche for utilitarian theories as a whole?). But, as the authors themselves acknowledge, some utilitarian theories aren’t hedonistic or totalist (or symmetrical, another unstated difference between CU and other utilitarian theories).
It is also a bit misleading to say that “many effective altruists are not utilitarians and care intrinsically about things besides welfare, such as rights, freedom, equality, personal virtue and more.” On some theories, these things are components of welfare.
And it is not necessarily true that “Utilitarians would reason that if there are enough people whose headaches you can prevent, then the total wellbeing generated by preventing the headaches is greater than the total wellbeing of saving the life, so you are morally required to prevent the headaches.” The increase in wellbeing from saving the life might be lexically superior to the increase in wellbeing from preventing the headache.
It would be interesting to survey responses to the sorts of interventions that provoke more negative reactions (e.g., supporting the reduction of wild-animal habitats as a pro-WAW intervention, or a hypothetical “reprogramming predators” scenario; of course, the latter is very different insofar as it isn’t currently technically feasible).