I kind of feel this way, except that I think the target criteria can differ between people, and are often underdetermined. (As you point out in some comment, things also depend on which parts of one’s psychology one identifies with.)
I think that you were referring to this?
Normative realism implies identification with system 2
I find this very interesting because locating personal identity in system 1 feels conceptually impossible or deeply confusing. No matter how much rationalization goes on, it never seems intuitive to identify myself with system 1. How can you identify with the part of yourself that isn’t doing the explicit thinking, including the decision about which part of yourself to identify with? It reminds me of Nagel’s The Last Word.
My point here was that if you are a realist about normativity of any kind, you have to identify with system 2 as that is what makes the (potentially correct) judgements about what you ought to do.
But that’s not to say that if you are an antirealist, you have to identify with system 1. If you are an antirealist, then in some sense (the realist sense) you don’t have to identify with anything. How easy and natural it is to identify with system 2, though, depends on how much importance you place on coherence among your values, which in turn depends on how coherent and universalizable your values actually are—you can be an antirealist and still accept that some fairly strong degree of convergence does occur in practice, for whatever reason. This:
target criteria can differ between people, and are often underdetermined
seems to imply that you don’t expect much convergence in practice, and that we shouldn’t feel strong pressure to reach high-level agreement on moral questions, since such a project is never going to succeed.
I think this is part of the motivation for your ‘case for suffering-focused ethics’ - even though any asymmetry between preventing suffering and producing happiness falls victim to the absurd conclusion and the paralysis argument, I’m assuming that this wouldn’t bother you much.
In that post I talk about why I think this is an unstable position, regardless of whether realism is true.
AFAIK the paralysis argument is about the implications of non-consequentialism, not about downside-focused axiologies. In particular, it’s about the implications of a pair of views. As Will says in the transcript you linked:
“but this is a paradigm nonconsequentialist view endorses an acts/omissions distinction such that it’s worse to cause harm than it is to allow harm to occur, and an asymmetry between benefits and harms where it’s more wrong to cause a certain amount of harm than it is right or good to cause a certain amount of benefit… And if you have those two claims, then you’ve got to conclude [along the lines of the paralysis argument]”.
Also, I’m not sure how Lukas would reply, but I think one way of defending his claim which you criticize—namely, that “the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled”—is by appealing to the existence of impossibility theorems in ethics. If such theorems hold, we truly won’t be able to avoid counterintuitive results (see e.g. Arrhenius 2000, Greaves 2017). This also shouldn’t surprise us too much if we accept the evolved nature of some of our moral intuitions.