Good post! I mostly agree with sections (2) and (4) and would echo other comments that various points made are under-discussed.
My “disagreement”—if you can call it that—is that I think the general case here can be made more compelling by using assumptions and arguments that are weaker/more widely shared/more likely to be true. Some points:
The uncertainty Will fails to discuss (in short, the Very Repugnant Conclusion) can be framed as fundamental moral uncertainty, but I think it’s better understood as the more prosaic, sorta-almost-empirical question “Would a self-interested rational agent with full knowledge and wisdom choose to experience every moment of sentience in a given world over a given span of time?”
I personally find this framing more compelling because it puts one in the position of answering something more along the lines of “would I live the life of a fish that dies by asphyxiation?” than “does some (spooky-seeming) force called ‘moral outweighing’ exist in the universe?”
Even a fully-committed total utilitarian, who would maintain that all amounts of suffering are in principle outweighable, can have this kind of quasi-empirical uncertainty about where the moral balance lies.
Moreover, utilitarians of all types would agree that it’s better to create less future suffering, all else equal, which is a point I don’t recall Will directly addressing.
Maybe this is relying too much on generalizing from my own intuitions/beliefs, but I’d also guess that objecting to the belief that it’s “good to make happy people” both weakens the argument as a whole and distracts from its most compelling points.
You can agree with Will on all the explicit claims he makes in the book (as I think I do?) and still think he committed a pretty major sin of omission by failing to discuss whether the creation of happiness can/does/will cause and justify the creation of suffering.