TLDR: Very helpful post. Do you have any rough thoughts on how someone would pursue moral weighting research?
Wanted to say, first of all, that I found this post really helpful in crystallizing some thoughts I’ve had for a while. I’ve spent about a year researching population axiologies (admittedly at the undergrad level) and have concluded that something like a critical-level utilitarian view is close enough to a correct view that there’s not much left to say. So, in trying to figure out where to go from there (and especially whether to pursue a career in philosophy), I’ve been trying to think of just what sort of questions would make a substantive difference in how we ought to approach EA goals. I couldn’t think of anything, but it still seemed like there was some gap between the plausible arguments that have been presented so far and how to actually go about accomplishing those goals. I think you’ve clarified here, with “moral weighting,” the gap that was bothering me. It seems similar to the “neutrality intuition” Broome talks about, where we don’t want to (but basically have to) say there’s a discrete threshold where a life goes from worth living to not worth living.
At any rate, moral weighting is the sort of work I hope to be able to contribute to. Are there any other articles/papers/posts you think would be relevant to the topic? Do you have any rough thoughts on the sort of considerations that would be operative here? Do any particular fields seem closest to you? I had been considering a wellbeing metric like the QALY or DALY in public health (a la the work Derek Foster posted a little while ago) as a promising direction.
Thanks!
Glad to hear you found it helpful. Unfortunately, I don’t think I have a lot to add at the moment re: how to actually pursue moral weighting research, beyond what I gestured at in the post (e.g., trying to solicit lots of your own/other people’s intuitions across lots of cases, trying to make them consistent, that kind of thing). Re: articles/papers/posts, you could also take a look at GiveWell’s process here, and the moral weight post from Luke Muehlhauser I mentioned has a few references at the end that might be helpful (though most of them I haven’t engaged with myself). I’ll also add, FWIW, that I actually think the central point of the post is more applicable outside the EA community than inside it, as I think of EA as fairly “basic-set oriented” (though there are definitely some questions in EA where weightings matter).