Some thoughts on stage-wise development of moral circle
Status: Very rough, I mainly want to know if there’s already some research/thinking on this.
Jean Piaget, an early childhood psychologist prominent in the 1960s, proposed a stage-sequential model of childhood development. He suggested that we progress through different levels of development, and that each stage is necessary to reach the next.
Perhaps we can make a similar argument for moral circle expansion. In other words: you cannot run if you don’t know how to walk. If you ask someone to believe X, then X+1, then X+2, this makes some sense. If you instead jump from X to 10X to 10000X (they may even perceive 10000X as Y, an entirely different thing which makes no sense to them), it becomes much harder for them to adjust over a short period of time.
Anecdotally, this seems true of a number of EAs I’ve spoken to who have updated towards longtermism over time.
For most people, changing one’s beliefs and moral circles takes time, so we need to create a movement which can accommodate this. Peter Singer sums it up quite well: “there are people who come into the animal movement because of their concern for cats and dogs who later move on to understand that the number of farm animals suffering is vastly greater than the number of cats and dogs suffering and that typically the farm animals suffer more than the cats and dogs, and so they’ve added to the strength of the broader, and as I see more important, animal welfare organizations or animal rights organizations that are working for farm animals. So I think it’s possible that something similar can happen in the EA movement.”
A risk to the movement is that we lose people who could have become EAs because we turn them off by making the movement too “weird”.
Further research on this topic that could test my hypothesis:
Studying changes in moral attitudes regarding other issues such as slavery, racism, and LGBT rights over time, and how long it took individuals/communities to change their attitudes (and behaviors).
My sense is that the idea of sequential stages for moral development is exceedingly likely to be false. The most prominent theory of this kind, Kohlberg’s, has been completely debunked in the sense that there was never any good evidence for it (I find the social intuitionist model much more plausible). So I don’t see much appeal in trying to understand cause selection in these terms.
That said, I’m sure there’s a rough sense in which people tend to adopt less weird beliefs before they adopt more weird ones, and I think that framing this in terms of more/less weird beliefs is likely more informative than framing it in terms of more/less distant areas of a “moral circle”.
I don’t think there’s a clear, non-subjective sense in which causes are more or less weird, though. For example, many EAs value the wellbeing of non-actual people in the distant future but not that of suffering wild animals, and vice versa, so which is weirder or more distant from the centre of this posited circle? I hear people assume conflicting answers to this question from time to time (people tend to assume their own area is less weird).
I would also agree that getting people to accept beliefs which are closer to what they currently believe can make them more positively inclined to subsequently adopt related beliefs which are further from their current beliefs. It seems like there are a bunch of non-competing reasons why this could be the case, though. For example:
Sometimes belief x1 itself gives a person epistemic reason to believe x2
Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things
Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA) making you more likely to believe x3 which is also associated with that group
Notably, none of these requires that we assume anything about moral circles or general sequences of belief.
Yeah, I think you’re right. I didn’t need to reference Piaget (it just prompted the thought). To be clear, I wasn’t trying to imply that Piaget’s/Kohlberg’s theories were correct or sound, but rather applying the model to another issue; I didn’t make that very clear. I don’t think my argument really requires the empirical implications of the model (especially because I wasn’t trying to imply a moral judgement that one moral circle is necessarily better/worse), but I didn’t flag this. [meta note: I also posted this pretty quickly and didn’t think it through much, since it’s a short form]
I broadly agree with all your points.
I think my general point of x, 10x, 100x makes more sense if you’re looking along one axis (e.g. a class of beings like future humans) rather than at all the ways you can expand your moral circle, which I also think might be better thought of as a sphere or more complex shape to account for the different dimensions/axes.
I was thinking about the more concrete cases where you go from cats and dogs → pigs and cows, or from people in my home country → people in other countries.
Re the other reasons you gave:
Sometimes belief x1 itself gives a person epistemic reason to believe x2
I think this is kind of what I was trying to say: there can be some important incremental movement here. (Of course, if x2 is very different from x1, then maybe not.)
Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things
This is an interesting point I haven’t thought much about.
Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA) making you more likely to believe x3 which is also associated with that group
I think this is probably the strongest non-step-wise reason.
If longtermism is one of the latest stages of moral circle development, then your anecdotal data suffers from major selection effects.