Yeah, I think you’re right. I didn’t need to actually reference Piaget (it just prompted the thought). To be clear, I wasn’t trying to imply that Piaget/Kohlberg’s theories were correct or sound, but rather applying the model to another issue. I didn’t make that very clear. I don’t think my argument really requires the empirical implications of the model (especially because I wasn’t trying to imply a moral judgement that one moral circle is necessarily better or worse). However, I didn’t flag this. [Meta note: I also posted it pretty quickly and didn’t think it through much, since it’s a short form.]
I broadly agree with all your points.
I think my general point about x, 10x, 100x makes more sense if you’re looking along one axis (e.g. a class of beings, like future humans) rather than at all the ways you can expand your moral circle, which I also think might be better thought of as a sphere or a more complex shape to account for the different dimensions/axes.
I was thinking about the more concrete cases where you go from cats and dogs → pigs and cows, or from people in my home country → people in other countries.
Re the other reasons you gave:
“Sometimes belief x1 itself gives a person epistemic reason to believe x2”
I think this is close to what I was trying to say: there can be some important incremental movement here. (Of course, if x2 is very different from x1, then maybe not.)
“Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things”
This is an interesting point I haven’t thought much about.
“Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA) making you more likely to believe x3 which is also associated with that group”
I think this is probably the strongest non-step-wise reason.