Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.
I’ve also done economic modelling for some animal welfare issues.
I’m at ~100% credence that there’s no non-instrumental benefit from creating moral patients, and at pretty high credence that there’s no non-instrumental value in creating new desires, preferences, values, etc. within existing moral patients. (I try to develop and explain my views in this sequence.)
I haven’t thought much about tradeoffs between suffering and other things, including pleasure, within moral patients who would exist anyway. I could see these tradeoffs going the way they would for a classical utilitarian, if we hold an individual’s dispositions fixed.
To be clear, I’m a moral anti-realist (subjectivist), so I don’t think there’s any stance-independent fact about how asymmetric things should be.
Also, I’m curious whether we can explain why you react this way.
Some ideas: complexity of value but not of disvalue? Or that the urgency of suffering is explained by the intensity of desire rather than by unpleasantness? Do you have any ideas?