On page 27, you clarify that many concepts in the article are not core to EA, but are “specific ideas contingently associated with EA, such as earning to give and life-affirming longtermism” that could be rejected “while still embracing the core of effective altruism.” I think it would be helpful to distinguish core commitments from non-core issues early in the article.
I would also consider toning down some of the strong rhetorical claims, like “every decent person.” Substantiating a claim at that level of confidence would require far more space to address every potential objection to EA’s philosophical underpinnings. Moreover, the reader knows they are reading a journal volume on philosophical issues in EA, which implies that the journal editors, at least, think there are plausible philosophical criticisms. The reader likely also knows that other contributors have identified what they take to be substantial philosophical problems, and that some EA principles do not align with the assumptions a reader new to EA is likely to hold.
All that is to say: a tone of “the core concepts are obviously right, and no decent person would argue otherwise” would lead most neutral readers to conclude either (1) that you are setting up strawmen, or (2) that you are defining the “core ideas” broadly enough to be almost truisms, leaving much of the heavy lifting to be done by unclearly defined “details of implementation.”