[Question] Is it possible, and if so how, to arrive at ‘strong’ EA conclusions without the use of utilitarian principles?

All arguments I have seen to date for the EA philosophy (in particular for some of its ‘nastier’ consequences) have derived from broadly utilitarian principles. Can anyone point me to alternatives grounded in deontology, virtue ethics, or other frameworks? I identify as utilitarian, so this is more out of curiosity than anything else.
