My idea of EA's essential beliefs are:

Some possible timelines are much better than others
What “feels” like the best action often won’t result in anything close to the best possible timeline
In such situations, it’s better to disregard our feelings and go with the actions that get us closer to the best timeline.
This doesn't commit you to a particular moral philosophy. You can rank timelines by whatever aspects you want: your moral rule can tell you to consider only your own actions and to disregard their effects on other people's behaviour. I could consider such a person an effective altruist, even though they'd be a non-consequentialist. While I think it's fair to say that, after the beliefs above, consequentialism is fairly core to EA, I think the whole EA community could switch away from consequentialism without having to rebrand itself.
The critique targets effective altruists’ tendency to focus on single actions and their proximate consequences and, more specifically, to focus on simple interventions that reduce suffering in the short term.
But she also says EA has a "god's eye moral epistemology". This seems contradictory. Even if we suppose that most EAs focus on proximate consequences, that's not a fundamental failing of the philosophy; it's a failed application of it. If many people fail to implement the philosophy accurately, that doesn't imply the philosophy is bad[1]: there's a difference between a "criterion of right" (what makes an action good) and a "decision procedure" (how you actually go about deciding what to do). In any case, many EAs are longtermists who essentially use entire timelines as the unit of moral analysis. That is clearly not a focus on "proximate consequences"; that focus is more the domain of non-consequentialists (e.g. "Are my actions directly harming anyone?").
The article’s an incoherent mess, even ignoring the Communist nonsense at the end.
[1] This is in contrast with a policy being bad because no one can implement it with the desired consequences.