From an evolution / selfish gene’s perspective, the reason I or any human has morality is so we can win (or at least not lose) our local virtue/status game. Given this, it actually seems pretty wild that anyone (or more than a handful of outliers) tries to be impartial. (I don’t have a good explanation of how this came about. I guess it has something to do with philosophy, which I also don’t understand the nature of.)
BTW, I wonder if EAs should take the status game view of morality more seriously, e.g., when thinking about how to expand the social movement, and predicting the future course of EA itself.
What might EAs taking the status game view more seriously look like, more concretely? I’m a bit confused, since from my outside-ish perspective it seems the usual markers of high status are already all there (e.g. institutional affiliation, large funding, [speculatively] OP’s CJR work, etc.), so I’m not sure what doing more on the margin might look like. Alternatively, I may just be misunderstanding what you have in mind.
One, be more skeptical when someone says they are committed to impartially doing the most good, and keep in mind that even if they’re totally sincere, that commitment may well not hold when their local status game changes, or if their status gradient starts diverging from actual effective altruism. Two, form a more explicit and detailed model of how status considerations + philosophy + other relevant factors drive the course of EA and other social/ethical movements, test this model empirically (basically, do science on this), and use it to make predictions and inform decisions in the future. (Maybe one or both of these could have helped avoid some of the mistakes/backlashes EA has suffered.)
One tricky consideration here is that people don’t like to explicitly think about status, because it’s generally better for one’s status to appear to do everything for its own sake, and any explicit talk about status kind of ruins that appearance. Maybe this can be mitigated somehow, for example by keeping some distance between the people thinking explicitly about status and EA in general. Or maybe, for the long-term epistemic health of the planet, we can somehow make it generally high status to reason explicitly about status?
Hey Wei, I appreciate you responding to Mo, but I found myself still confused after reading this reply. This isn’t purely down to you: a lot of LessWrong writing refers to ‘status’, but it never clearly defines what it is or where the evidence and literature for it is.[1] To me, it seems to function as a magic word that can explain anything and everything. The whole concept of ‘status’ as I’ve seen it used on LW seems incredibly susceptible to being part of ‘just-so’ stories.
I’m highly sceptical of this though, like I don’t know what a ‘status gradient’ is and I don’t think it exists in the world? Maybe you mean an abstract description of behaviour? But then a ‘status gradient’ is just describing what happened in a social setting, rather than making scientific predictions. Maybe it’s instead a kind of non-reductionist sense of existing and having impact, which I do buy, but then things like ‘ideas’, ‘values’, and ‘beliefs’ should also exist in this non-reductionist way and be as important for considering human action as ‘status’ is.
It also tends to lead to explanations like this:
One tricky consideration here is that people don’t like to explicitly think about status, because it’s generally better for one’s status to appear to do everything for its own sake
Which to me is dangerously close to saying “if someone talks about status, it’s evidence it’s real. If they don’t talk about it, then they’re self-deceiving in a Hansonian sense, and this is evidence for status”, which sets off a lot of epistemological red flags for me.

[1] In fact, one of the most cited works about it isn’t a piece of anthropology or sociology, but a book about improv acting!
a lot of LessWrong writing refers to ‘status’, but it never clearly defines what it is or where the evidence and literature for it is
Two citations that come to mind are Geoffrey Miller’s Virtue Signaling and Will Storr’s The Status Game (maybe also Robin Hanson’s book although its contents are not as fresh in my mind), but I agree that it’s not very scientific or well studied (unless there’s a body of literature on it that I’m unfamiliar with), which is something I’d like to see change.
Maybe it’s instead a kind of non-reductionist sense of existing and having impact, which I do buy, but then things like ‘ideas’, ‘values’, and ‘beliefs’ should also exist in this non-reductionist way and be as important for considering human action as ‘status’ is.
Well sure, I agree with this. I probably wouldn’t have made my suggestion if EAs talked about status roughly as much as ideas, values, or beliefs.
Which to me is dangerously close to saying “if someone talks about status, it’s evidence it’s real. If they don’t talk about it, then they’re self-deceiving in a Hansonian sense, and this is evidence for status”, which sets off a lot of epistemological red flags for me.
It seems right that you’re wary about this, but on reflection I think the main reason I think status is real is not because people talk or don’t talk about it, but because I see human behavior that seems hard to explain without invoking such a concept. For example, why are humans moral but our moralities vary so much across different communities? Why do people sometimes abandon or fail to act according to their beliefs/values without epistemic or philosophical reasons to do so? Why do communities sometimes collectively become very extreme in their beliefs/values, again without apparent epistemic or philosophical justification?
why are humans moral but our moralities vary so much across different communities? Why do people sometimes abandon or fail to act according to their beliefs/values without epistemic or philosophical reasons to do so? Why do communities sometimes collectively become very extreme in their beliefs/values, again without apparent epistemic or philosophical justification?
I think “status” plays some part in the answers to these, but only a fairly small one.
Why do moralities vary across different communities? Primarily because people are raised in different cultures with different prevalent beliefs. We then modify those beliefs from that baseline as we encounter new ideas and new events, and often end up seeking out other people with shared values to be friends with. But the majority of people aren’t just pretending to hold those beliefs to fit in (although that does happen); the majority legitimately believe what they say.
Why do communities get extreme? Well, consult the literature on radicalisation; there are a ton of factors. A vivid or horrible event or ongoing trauma sometimes triggers an extreme response. Less radical members of a group might leave, making the average more radical, so even more moderates leave or split, until the group is just radicals.
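As a purely illustrative toy sketch of that last dynamic (made-up numbers, nothing from the actual radicalisation literature): if members well below the group’s average level of radicalism keep leaving, the average drifts up and pushes out the next band of moderates, even though no individual’s views ever change.

```python
# Toy "evaporative cooling" sketch with made-up numbers: moderates keep
# leaving, so the group shrinks and its average drifts toward the radicals,
# even though each remaining member's own "radicalism" score never changes.
import random

random.seed(0)
# Each member's radicalism on an arbitrary 0-1 scale.
group = [random.random() for _ in range(1000)]

for step in range(10):
    avg = sum(group) / len(group)
    print(f"step {step}: size={len(group)}, average radicalism={avg:.2f}")
    # Members far enough below the current average get fed up and leave.
    group = [r for r in group if r > avg - 0.2]
```

Running it, the group shrinks and the average climbs each step, purely through changes in composition.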
As to why people fail to act according to their values: people generally have competing values, including self-preservation and instincts, and are not perfectly rational. Sometimes the primal urge to eat a juicy burger overcomes the calculated belief that eating meat is wrong.
These are all amateur takes; a sociologist could probably answer better.
From an evolution / selfish gene’s perspective, the reason I or any human has morality is so we can win (or at least not lose) our local virtue/status game.
If you’re talking about status games at all, then not only have you mostly rounded the full selective landscape off to the organism level, you’ve also taken a fairly low resolution model of human sociality and held it fixed (when it’s properly another part of the phenotype). Approximations like this, if not necessarily these ones in particular, are of course necessary to get anywhere in biology—but that doesn’t make them any less approximate.
If you want to talk about the evolution of some complex psychological trait, you need to provide a very clear account of how you’re operationalizing it and explain why your model’s errors (which definitely exist) aren’t large enough to matter in its domain of applicability (which is definitely not everything). I don’t think rationalist-folk-evopsych has done this anywhere near thoroughly enough to justify strong claims about “the” reason moral beliefs exist.
I agree that was too strong or oversimplified. Do you think there are other evolutionary perspectives from which impartiality is less surprising?

I don’t think it’s possible to give an evolutionary account of impartiality in isolation, any more than you can give one for algebraic geometry or Christology or writing or common-practice tonality. The underlying capabilities (e.g. intelligence, behavioral plasticity, language) are biological, but the particular way in which they end up expressed is not. We might find a thermodynamic explanation of the origin of self-replicating molecules, but a thermodynamic explanation of the reproductive cycle of ferns isn’t going to fit in a human brain. You have to move to a higher level of organization to say anything intelligible. Reason, similarly, is likely the sort of thing that admits a good evolutionary explanation, but individual instances of reasoning can only really be explained in psychological terms.
It seems like you’re basically saying “evolution gave us reason, which some of us used to arrive at impartiality”, which doesn’t seem very different from the thinking I alluded to in my opening comment (except that I used “philosophy” instead of “reason”). Does that seem fair, or am I rounding you off too much, or otherwise missing your point?
Yes and no: “evolution gave us reason” is the same sort of coarse approximation as “evolution gave us the ability and desire to compete in status games”. What we really have is a sui generis thing which can, in the right environment, approximate ideal reasoning or Machiavellian status-seeking or coalition-building or utility maximization or whatever social theory of everything you want to posit, but which most of the time is trying to split the difference.
People support impartial benevolence because they think they have good pragmatic reasons to do so and they think it’s correct and it has an acceptable level of status in their cultural environment and it makes them feel good and it serves as a signal of their willingness to cooperate and and and and. Of course the exact weights vary, and it’s pretty rare that every relevant reason for belief is pointing exactly the same way simultaneously, but we’re all responding to a complex mix of reasons. Trying to figure out exactly what that mix is for one person in one situation is difficult. Trying to do the same thing for everyone all at once in general is impossible.