You might be interested in something from within the EA community that I think might be one of the deepest possible cuts against consequentialism: Logical Decision Theory (or any solution to Newcomb's problem). As far as I know, though, no one has written about this angle on it, probably because it's arguable that LDT is just advocating for a different kind of consequentialism.
But I don't totally buy that framing: LDT sometimes advocates doing things that will have bad outcomes in the situation at hand, whenever being the kind of person (or running the kind of decision theory) who would do those things gets better outcomes on average across all possible worlds. In human-level application, this ends up looking more like an advanced virtue ethics than like consequentialism, to me. On the other hand, I've seen it argued that regular consequentialism ends up looking like virtue ethics too.
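To make the tension concrete, here's a minimal sketch of the standard Newcomb payoffs. The $1M/$1k box values and the 99% predictor accuracy are the conventional illustrative numbers, not anything specific to LDT; the point is just that the disposition to one-box wins in expectation, even though two-boxing strictly dominates once the boxes are already filled.

```python
# Minimal Newcomb's-problem sketch (assumed illustrative values: $1M opaque
# box, $1k transparent box, predictor accuracy p = 0.99). The predictor fills
# the opaque box iff it predicts you will one-box.

def expected_payoff(one_boxer: bool, accuracy: float = 0.99) -> float:
    """Expected dollars for an agent whose disposition the predictor reads."""
    big, small = 1_000_000, 1_000
    if one_boxer:
        # Opaque box is filled with probability `accuracy`.
        return accuracy * big
    # Two-boxer: opaque box is filled only when the predictor errs,
    # but you always collect the transparent $1k.
    return (1 - accuracy) * big + small

print(expected_payoff(one_boxer=True))   # 990000.0
print(expected_payoff(one_boxer=False))  # 11000.0
```

Causally, the two-boxer's choice "adds" $1k in every filled-or-empty world; it's only the prior fact of *being* a one-boxer that earns the ~$990k. That gap between evaluating acts and evaluating dispositions is the whole disagreement.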