I’d also like to mention that, of course, a cellular-automata simulation of different evolutionary strategies is very different from the complex behavior of real human societies. There are definitely lots of forces that push towards tribalistic fighting between coalitions (ethno-nationalist, religious, political, class-based, and otherwise), but there are also forces that push towards cooperation and universalism:
The real world, unlike the fixed grid of the simulation, can be positive-sum thanks to new technologies that create material abundance. Today’s world might be more peaceful than the past because, after the industrial revolution, peace and cooperation (which is better for economic growth) became more profitable than conquest.
In the simulation, what’s called “humanism” is really “cooperate with anyone you interact with, never defect”, which sounds more like gullibility to me. In real life, societies have many ways to build trust—like tracking people’s reputations and meritocratically promoting trustworthy players, or unifying around a common set of ideological beliefs. I think that in real life it’s possible to have a high-trust, humanist, but non-gullible society: one that uses meritocracy and good judgement to avoid getting scammed, and that spends enough resources supporting itself and staying competitive (even while maintaining a background commitment to universalism) that it doesn’t get overtaken by other groups.
In the simulation, after the ethno-nationalist squares take over, I guess they will fight it out between the different ethnic groups, and then the final winning ethnic group will live happily ever after as a dominant monoculture? But real life doesn’t work this way—by the very logic that helped the nationalist squares in the first place, a real-life monoculture would tend to fracture into subgroups who would then proceed to fight with each other as before. This tendency towards internecine fighting is a drag on the forces of division and (in theory) could be a boon to the forces of universalism.
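To make the kind of simulation discussed above concrete, here is a minimal Python sketch of a spatial-strategy model of this general flavor. The payoff numbers, the copy-the-best-neighbour update rule, and the strategy names (“humanist” = always cooperate, “nationalist” = cooperate with in-group only, “defector”) are my own illustrative assumptions, not details of the actual simulation being discussed.

```python
# Prisoner's-dilemma payoffs for the row player (illustrative values).
PAYOFF = {
    (True, True): 3,    # both cooperate
    (True, False): 0,   # cooperate against a defector (sucker's payoff)
    (False, True): 5,   # defect against a cooperator
    (False, False): 1,  # both defect
}

def cooperates(cell, other):
    """Does `cell` choose to cooperate when meeting `other`?"""
    if cell["strategy"] == "humanist":
        return True                          # cooperate with everyone
    if cell["strategy"] == "nationalist":
        return cell["tag"] == other["tag"]   # cooperate with in-group only
    return False                             # unconditional defector

def step(grid):
    """One generation on a torus: each cell plays its four neighbours,
    then copies the strategy of the highest-scoring cell in its
    neighbourhood (keeping its own strategy on ties)."""
    n = len(grid)
    offsets = ((1, 0), (-1, 0), (0, 1), (0, -1))
    def nbrs(i, j):
        return [((i + di) % n, (j + dj) % n) for di, dj in offsets]
    scores = [[sum(PAYOFF[(cooperates(grid[i][j], grid[x][y]),
                           cooperates(grid[x][y], grid[i][j]))]
                   for x, y in nbrs(i, j))
               for j in range(n)] for i in range(n)]
    new = []
    for i in range(n):
        row = []
        for j in range(n):
            candidates = [(scores[i][j], grid[i][j])]
            candidates += [(scores[x][y], grid[x][y]) for x, y in nbrs(i, j)]
            row.append(dict(max(candidates, key=lambda c: c[0])[1]))
        new.append(row)
    return new

# A lone defector among humanists: it exploits its cooperating neighbours,
# outscores them, and converts them in a single generation.
humanist = {"strategy": "humanist", "tag": "A"}
defector = {"strategy": "defector", "tag": "A"}
grid = [[dict(humanist) for _ in range(3)] for _ in range(3)]
grid[1][1] = dict(defector)
grid = step(grid)
defector_count = sum(cell["strategy"] == "defector"
                     for row in grid for cell in row)
print(defector_count)  # 5: the centre plus its four immediate neighbours
```

This is exactly the dynamic where unconditional cooperation reads as gullibility: the all-cooperate strategy has no defence against exploitation, so under a copy-the-winner rule the defection spreads.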
“I think there are a lot of problems with the idea of directly pushing for moral circle expansion as a cause area—for starters, moral philosophy might not play a large role in actually driving moral progress.”
Could you please explain to me why, in your view, this is a “problem” with moral circle expansion as a cause area? Thanks!
I like the idea of expanding people’s moral circle, I’m just not sure what interventions might actually work. The straightforward strategy is “just tell people they should expand their moral circle to include Group X”, but I’m doubtful that this strategy will win converts and lead to lasting change.
For example, my impression is that things like the rise and decline of slavery were mostly fueled by changing economic fundamentals, rather than by people first deciding that slavery was okay and then later realizing that it was bad. If you wanted to affect people’s moral circles, it might be better to influence those fundamentals than to persuade people directly. But others have studied these things in much greater depth: https://forum.effectivealtruism.org/posts/o4HX48yMGjCrcRqwC/what-helped-the-voiceless-historical-case-studies
By analogy, I would expect that creating tasty, cost-competitive plant-based meats will probably do more to expand people’s moral concern for farmed animals than trying to persuade them directly about the evils of factory farming.
Since I think people’s cultural/moral beliefs are basically downstream of the material conditions of society (“moral progress not driven by moral philosophy”), I think that pushing directly for moral circle expansion (via persuasion, philosophical arguments, appeals to empathy, etc) isn’t a great route towards actually expanding people’s moral circles.
Ah okay, thanks for explaining. Sounds like by “pushing for moral circle expansion as a cause area”, you meant “pushing for moral circle expansion via direct advocacy” or something more specific like that. When I and others have talked about “moral circle expansion” as something that we should aim for, we’re usually including all sorts of more or less direct approaches to achieving those goals.
(For what it’s worth, I do think that direct moral advocacy is an important component, but it doesn’t have to be the only, or even the main, one for you to think moral circle expansion is a promising cause area.)
I think there are a lot of problems with the idea of directly pushing for moral circle expansion as a cause area—for starters, moral philosophy might not play a large role in actually driving moral progress. But I see the concept of moral circle expansion as a goal worth working towards (sometimes indirectly towards!), and I think the discussion over moral circle expansion has been beneficial to EA—for example, explorations of some ways our circle might be narrowing over time rather than expanding.