I think I disagree with the core premise that animal welfare is the odd one out. That animals have moral worth is a much smaller buy than the beliefs needed to accept longtermism.
For reference, I think the strongest case for longtermism comes when you accept claims that humanity has a non-zero chance of colonising the universe with digital human beings. It makes perfect sense to me that someone would accept that animals have high moral worth, but not the far-future stuff.
I don’t think a justice explanation predicts why EAs care about animals better than the object-level arguments for caring about them do.
I mostly agree. I don’t think I was super clear in my initial post, and I’ve edited it to clarify what I mean by the “odd one out”. To respond to your point more specifically: I also agree that the reason for caring in the first place is just the strong arguments in favor of caring about non-humans, and I even agree that the formal arguments for caring about non-human animals are probably more philosophically robust than those for caring about future generations (at least in the “theory X”, no-difference-made-by-identity way longtermists usually argue). I think the reason the cause area is the odd one out on the formal-arguments side is different from the reason it is the odd one out when describing EA to outsiders. My point is just that when an outsider finds the cause area weird on the list, it becomes hard to respond if the formal arguments are also less well developed about which dimension factory farming dominates the other three cause areas on. I hope this clarifies my position somewhat.