The moral weight tag covers posts about differences in moral status, capacity for welfare, or other factors that might determine differences in the characteristic moral value of the lives, interests, or experiences of different types of potential moral patients (such as individuals of different species).
Further reading
Fischer, Bob (2022) Moral Weight Project Sequence, Effective Altruism Forum, October 30.
Muehlhauser, Luke (2017) 2017 report on consciousness and moral patienthood, Open Philanthropy, June.
The idea of “moral weights” is also addressed briefly in a few places:
Muehlhauser, Luke (2018) Preliminary thoughts on moral weight, LessWrong, August 14.
Schukraft, Jason (2020) Moral weight series, Effective Altruism Forum, October 19.
The tag seems focused on how much weight should be assigned to different moral patients. But some people and posts use the phrase “moral weight” to refer to the relative importance of different outcomes, e.g. how much should we value increased consumption versus saving a life? Examples include:
https://forum.effectivealtruism.org/posts/fY5EFXzv7yLDNGHRo/new-research-on-moral-weights
https://forum.effectivealtruism.org/posts/FTLKg6WrFFbaACBQr/sogive-s-moral-weights-please-take-part
Should we include both under this wiki-tag and broaden the definition? Or should we make a new tag and disambiguate between the two?
Thanks for tagging all these posts! We already have a moral patienthood entry, though unfortunately it was “wiki-only”, so you couldn’t use it as a tag. I have now removed this restriction and re-tagged all the articles. For the two articles above, I used the moral uncertainty tag instead, which seems more appropriate. Feel free to review my changes, and if you are satisfied with them, I would suggest deleting this tag.
Do you mean removing this tag from the two articles David mentions, or deleting this tag entirely?
It seems possible to me that this tag should be deleted, but I’m also not sure. If it is deleted, I think it should still exist as a redirect to the moral patienthood tag, and that that tag should include a section on moral weight.
(FWIW, I also agree that the term “moral weight” should probably be reserved for the usage given in this tag description, rather than also covering things like how much to value changes in consumption vs saving a life.)
I was suggesting deleting the tag entirely, and replacing it with a redirect. As I understand the distinction, ‘moral patienthood’ focuses on which beings matter morally, whereas ‘moral weights’ focuses on the degree to which they matter morally. I think these two questions are best discussed in the same article, since they are intimately related. However, instead of deleting the tag, we could have a disambiguation page, noting the two different senses in which the expression ‘moral weights’ is used. On reflection, I think I now lean towards this option, though only very weakly. (If we decide to delete the tag, we could instead have a brief disambiguation section in the ‘moral patienthood’ article itself.) What do you think?
Ignoring the wiki function for a second, from a tag perspective only, I think we should include two topics as part of the same tag in cases where:
1. A large portion of people looking for posts about one topic would also be interested in posts about the other topic
2. It is not the case that a large portion of people who are looking for posts on one of the topics would find the tag less useful if it were also cluttered up with posts on the other topic
(That’s of course vague, since “large portion” is vague, but it seems a somewhat useful principle anyway.)
So for example, for the topics dystopia and existential risk, 1 is true but 2 is not; just looking at the existential risk tag isn’t a good substitute for looking at the dystopia tag, since the former has loads of posts and the latter is a clearly defined sub-area.
Here, I think 1 and 2 are probably both true. I think most people interested in posts on moral weight would be happy to see posts on moral patienthood at the same time, and vice versa.
But as for the wiki function, I’m less sure. It seems like moral weight is a quite important and fairly distinct topic, so maybe it warrants its own entry. But it also seems ok to just make it a big section in a moral patienthood article, and mention moral weight somewhere in the lead as well.
Or maybe the entry should just be called “Moral patienthood & moral weight”. That might also be better from a tag perspective (since many of the posts—particularly Jason’s—were framed as about moral weight, not moral patienthood).
If the moral weight article is turned into basically just a redirect, then I like the idea of having it look like a disambiguation page.
To be clear, my view is not that ‘moral patienthood’ is more important than ‘moral weights’, or that the latter should be relegated to a section in an article primarily about the former. Rather, my view is that ‘moral patienthood’ and ‘moral weights’ are two facets of the same topic, primarily concerned with identifying the properties that make an entity count morally to the degree that it does, and that for this reason we should have just one article rather than two. (It’s possible that there is a deeper distinction that I’m failing to grasp, and one that would justify having separate articles.)
Also, I now think that, if we do keep this entry (which I now vote in favour of), we should include a sentence or two about the alternative meaning used by IDinsight and SoGive. And we should maybe keep this tag on those posts. This is because it seems decently likely that someone would come to this tag because they care about that meaning of the term, having heard it from e.g. IDinsight or SoGive.
Hmm, I think unfortunately it’s messy and complicated, in a way I’d kind of forgotten about. Jason Schukraft has described moral weight as covering differences in moral status, capacity for welfare, and average realized welfare.
My previous comment was focused pretty much just on the moral status part of that. I think moral status is closely related to moral patienthood, and we could perhaps see questions about moral patienthood as a special case of questions about moral status: asking “zero or nonzero?” is a special case of asking “how high, on a scale starting at zero and ending who knows where?”
But I think capacity for welfare and average realized welfare aren’t as closely related to moral patienthood. So I now retract my earlier comment, and think we should keep this entry as a distinct entry (while still having it point to moral patienthood and vice versa).
I also really wish it weren’t the case that some people use “moral weight” to refer to any of the three things Jason mentions, while others seem to use it just to mean moral status, and still others use it the way IDinsight and SoGive did. But that’s outside of our control. And it seems to me that Jason is basically the leading authority on this within EA, so we should defer to his usage. (He’s now a colleague of mine, but I formed this view before joining RP.)
---
Also, here’s a comment I started writing before I remembered the above stuff, and have now adapted so that I say “moral status” instead of “moral weight” when what I mean is really just moral status:
I think of moral patienthood as basically about “should this entity get zero or nonzero moral status?”, and moral status as about “how much moral status should this entity get, on a potentially continuous scale from zero upwards?” (though I think Jason’s posts or comments said that whether moral status is binary, discrete, or continuous is itself a debated topic), or maybe “given that this entity gets *some nonzero amount of* moral status, how much should it get?”
So in some sense, maybe moral patienthood is best thought of as a subset of the topic of moral status.
But I think the two topics are often discussed separately, and that there’s a fairly large literature on each. And I think it’s plausible that the considerations relevant to “zero vs nonzero” are fairly different from those relevant to “how much (given that it’s nonzero)”. (Not sure, though.)