To clarify, what I object to here is not a claim like “very strong consequence-focused impartiality is most plausible all things considered”, or “alternative views also have serious problems”. What I push back against is what I see as an implied brittleness of the general project of effective altruism (broadly construed), along the lines of “it’s either very strong consequence-focused impartiality or total bust” when it comes to working on EA causes/pursuing impartial altruism in some form.