Morality as “Coordination” vs “Altruism”
Reworked version of a shortform comment. This is still not the optimal version of this post but it’s the one I had time to re-publish.
I’ve spent the past few years trying to get a handle on what it means to be moral. In particular, to be moral in a robust way, that holds up in different circumstances.
A year-ish ago, while arguing about what standards scientists should hold themselves to, I casually noted that I wasn’t sure whether, if I were a scientist in a field rife with dishonesty, it would be better for me to focus on becoming more honest than the average scientist, or to focus on Some Other Cause, such as avoiding eating meat.
A bunch of arguments ensued, and elucidating my current position on the entire discourse would take a lot of time. But, I do think there was something important I was missing when I first wondered about that. I think a lot of Effective Altruism types miss this, and it’s important.
The folk morality I was raised with would generally rank the following crimes in ascending order of badness:
Lying
Stealing
Killing
Torturing people to death (I’m not sure if torture-without-death is generally considered better/worse/about-the-same-as killing)
But this conflates a few different things. One axis I was ignoring was “morality as coordination tool” vs “morality as ‘doing the right thing because I think it’s right.’” And these are actually quite different. And, importantly, you don’t get to spend many resources on morality-as-doing-the-right-thing unless you have a solid foundation of morality-as-coordination-tool. (This seems true whether “doing the right thing” looks like helping the needy, or “doing God’s work”, or whatever.)
There’s actually a 4x3 matrix here: you can plot each of lying/stealing/killing/torture-killing against three categories of victim:
Harming the ingroup
Harming the outgroup (who you may benefit from trading with)
Harming powerless people who can’t trade or collaborate with you
And I think you need to tackle these mostly in this order. If you live in a world where even people in your tribe backstab each other all the time, you won’t have spare resources to spend on the outgroup or the powerless until your tribe has gotten its basic shit together and figured out that lying/stealing/killing each other sucks.
If your tribe has its basic shit together, then maybe you have the slack to ask the question: “hey, that outgroup over there, who we regularly raid and steal sheep from, maybe it’d be better if we traded with them instead of stealing their sheep?” and then begin to develop cosmopolitan norms.
If you eventually become a powerful empire, you may notice that you’re going around exploiting or conquering and… maybe you just don’t actually want to do that anymore? Or maybe, within your empire, there’s an underclass of people who are slaves or slave-like, instead of being formally traded with. And maybe this is locally beneficial. But… you just don’t want to do that anymore, because of empathy, or because you’ve come to believe in principles that say to treat all humans with dignity. Sometimes this is because the powerless people would actually be more productive if they were free builders/traders, but sometimes it just seems like the right thing to do.
Avoiding harm to the ingroup and the productive outgroup is something you’re locally incentivized to do, because cooperation is very valuable. In an iterated strategy game, these are things you’re incentivized to do all the way along.
Avoiding harming the powerless is something that you are limited in your ability to do, until the point where it starts making sense to cash in your victory points.
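To make that incentive gradient concrete, here’s a minimal sketch in Python (my own illustration, not from the original post), using standard textbook prisoner’s dilemma payoffs: defecting against a partner who can retaliate loses out over repeated rounds, while defecting against someone who can’t retaliate is locally profitable every single round.

```python
# A toy illustration (assumed textbook payoff numbers, not the post's):
# a standard prisoner's dilemma, played repeatedly.

def payoff(me, them):
    """My single-round payoff. 'C' = cooperate, 'D' = defect."""
    return {
        ("C", "C"): 3,  # mutual cooperation
        ("C", "D"): 0,  # I get exploited
        ("D", "C"): 5,  # I exploit them
        ("D", "D"): 1,  # mutual defection
    }[(me, them)]

def score_vs_retaliator(my_move, rounds=100):
    """Total payoff against a tit-for-tat partner: someone who CAN retaliate."""
    total, their_move = 0, "C"  # tit-for-tat opens with cooperation
    for _ in range(rounds):
        total += payoff(my_move, their_move)
        their_move = my_move  # next round, they mirror what I just did
    return total

def score_vs_powerless(my_move, rounds=100):
    """Total payoff against someone who can't retaliate (always 'cooperates')."""
    return rounds * payoff(my_move, "C")

# Against the ingroup/outgroup (who can retaliate), cooperation wins:
print(score_vs_retaliator("C"))  # 300
print(score_vs_retaliator("D"))  # 104 (one exploitative win, then mutual defection)

# Against the powerless, exploitation is locally profitable every round --
# which is why restraint here has to come from empathy or principle:
print(score_vs_powerless("D"))   # 500
print(score_vs_powerless("C"))   # 300
```

The specific numbers don’t matter; what matters is the shape of the incentives. Cooperation with potential retaliators pays for itself, while restraint toward the powerless doesn’t, which is exactly why it belongs in the “doing the right thing” bucket rather than the coordination bucket.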
I think this is often non-explicit in most discussions of morality/ethics/what-people-should-do. It seems common for people to conflate “actions that are bad because they ruin our ability to coordinate” and “actions that are bad because empathy and/or principles tell me they are.”
I’m not making a claim about exactly how all of this should influence your decision-making. The world is complex. Cause prioritization is hard. But, while you’re cause-prioritizing, and while you are deciding on strategy, make sure you keep this distinction in mind.