My sense is that the idea of sequential stages for moral development is exceedingly likely to be false, and in the case of the most prominent theory of this kind, Kohlberg’s, completely debunked in the sense that there was never any good evidence for it (I find the social intuitionist model much more plausible). So I don’t see much appeal in trying to understand cause selection in these terms.
That said, I’m sure there’s a rough sense in which people tend to adopt less weird beliefs before they adopt more weird ones, and I think that framing this in terms of more/less weird beliefs is likely more informative than framing it in terms of more/less distant areas in a “moral circle”.
I don’t think there’s a clear, non-subjective sense in which causes are more or less weird, though. For example, there are many EAs who value the wellbeing of non-actual people in the distant future but not that of suffering wild animals, and vice versa, so which is weirder or more distant from the centre of this posited circle? I hear people assume conflicting answers to this question from time to time (people tend to assume their own area is less weird).
I would also agree that getting people to accept beliefs that are closer to what they currently believe can make them more positively inclined to subsequently adopt related beliefs that are further from their current beliefs. It seems like there are a bunch of non-competing reasons why this could be the case, though. For example:
Sometimes belief x1 itself gives a person epistemic reason to believe x2
Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things
Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA), making you more likely to believe x3, which is also associated with that group
Notably, none of these requires that we assume anything about moral circles or general sequences of belief.
Yeah, I think you’re right. I didn’t need to actually reference Piaget (it just prompted the thought). To be clear, I wasn’t trying to imply that Piaget’s or Kohlberg’s theories were correct or sound, but rather to apply the model to another issue. I didn’t make that very clear. I don’t think my argument really requires the empirical implications of the model (especially because I wasn’t trying to imply a moral judgement that one moral circle is necessarily better/worse). However, I didn’t flag this. [meta note: I also posted it pretty quickly and didn’t think it through much, since it’s a short form]
I broadly agree with all your points.
I think my general point about x, 10x, 100x makes more sense if you’re looking along one axis (e.g. a class of beings like future humans) rather than at all the ways you can expand your moral circle, which I also think might be better thought of as a sphere or a more complex shape, to account for different dimensions/axes.
I was thinking about the more concrete cases where you go from cats and dogs → pigs and cows, or from people in my home country → people in other countries.
Re the other reasons you gave:
Sometimes belief x1 itself gives a person epistemic reason to believe x2
I think this is kind of what I was trying to say: there can be some important incremental movement here. (Of course, if x2 is very different from x1, then maybe not.)
Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things
This is an interesting point I haven’t thought much about.
Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA), making you more likely to believe x3, which is also associated with that group
I think this is probably the strongest non-step-wise reason.