[This point is unrelated to the paper’s main arguments]
It seems like the paper implicitly assumes that humans are the only moral patients (which I don’t think is a sound assumption, or an assumption the authors themselves would actually endorse).
I think it's reasonable for the paper to focus on humans, since it typically makes sense for a given paper to tackle just one thorny issue (and in this instance that issue is, well, the case for strong longtermism)
But I think it would’ve been good for the paper to at least briefly acknowledge that this is just a simplifying assumption
Perhaps just in a footnote
Otherwise the paper comes across as implying that the authors really do take it as a given that humans are the only moral patients
And I think it's good to avoid feeding into that implicit assumption, which is already very common among people in general (particularly outside of EA)