What might EAs taking the status game view more seriously look like, more concretely? I’m a bit confused, since from my outside-ish perspective the usual markers of high status already all seem to be there (e.g. institutional affiliation, large funding, [speculatively] OP’s CJR work, etc.), so I’m not sure what doing more on the margin might look like. Alternatively, I may just be misunderstanding what you have in mind.
One, be more skeptical when someone says they are committed to impartially doing the most good, and keep in mind that even if they’re totally sincere, that commitment may well not hold when their local status game changes, or if their status gradient starts diverging from actual effective altruism. Two, form a more explicit and detailed model of how status considerations + philosophy + other relevant factors drive the course of EA and other social/ethical movements, test this model empirically (basically, do science on this), and use it to make predictions and inform decisions in the future. (Maybe one or both of these could have helped avoid some of the mistakes/backlashes EA has suffered.)
One tricky consideration here is that people don’t like to explicitly think about status, because it’s generally better for one’s status to appear to do everything for its own sake, and any explicit talk about status kind of ruins that appearance. Maybe this can be mitigated somehow, for example by keeping some distance between the people thinking explicitly about status and EA in general. Or maybe, for the long-term epistemic health of the planet, we can somehow make it generally high status to reason explicitly about status?
Hey Wei, I appreciate you responding to Mo, but I found myself still confused after reading this reply. This isn’t purely down to you: a lot of LessWrong writing refers to ‘status’, but never clearly defines what it is or points to the evidence and literature for it.[1] To me, it seems to function as a magic word that can explain anything and everything. The whole concept of ‘status’ as I’ve seen it used on LW seems incredibly susceptible to being part of ‘just-so’ stories.
I’m highly sceptical of this, though: I don’t know what a ‘status gradient’ is, and I don’t think it exists in the world. Maybe you mean an abstract description of behaviour? But then a ‘status gradient’ just describes what happened in a social setting, rather than making scientific predictions. Maybe it’s instead a kind of non-reductionist sense of existing and having impact, which I do buy, but then things like ‘ideas’, ‘values’, and ‘beliefs’ should also exist in this non-reductionist way and be as important for considering human action as ‘status’ is.
It also tends to lead to explanations like this:
One tricky consideration here is that people don’t like to explicitly think about status, because it’s generally better for one’s status to appear to do everything for its own sake
Which to me is dangerously close to saying “if someone talks about status, it’s evidence it’s real. If they don’t talk about it, then they’re self-deceiving in a Hansonian sense, and this is evidence for status”, which sets off a lot of epistemological red flags for me.

[1] In fact, one of the most cited works about it isn’t a piece of anthropology or sociology, but a book about improv acting???
a lot of LessWrong writing refers to ‘status’, but never clearly defines what it is or points to the evidence and literature for it
Two citations that come to mind are Geoffrey Miller’s Virtue Signaling and Will Storr’s The Status Game (maybe also Robin Hanson’s book, although its contents are not as fresh in my mind), but I agree that it’s not very scientific or well studied (unless there’s a body of literature on it that I’m unfamiliar with), which is something I’d like to see change.
Maybe it’s instead a kind of non-reductionist sense of existing and having impact, which I do buy, but then things like ‘ideas’, ‘values’, and ‘beliefs’ should also exist in this non-reductionist way and be as important for considering human action as ‘status’ is.
Well sure, I agree with this. I probably wouldn’t have made my suggestion if EAs talked about status roughly as much as ideas, values, or beliefs.
Which to me is dangerously close to saying “if someone talks about status, it’s evidence it’s real. If they don’t talk about it, then they’re self-deceiving in a Hansonian sense, and this is evidence for status”, which sets off a lot of epistemological red flags for me.
It seems right that you’re wary of this, but on reflection, I think the main reason I think status is real is not that people talk or don’t talk about it, but that I see human behavior that seems hard to explain without invoking such a concept. For example, why are humans moral but our moralities vary so much across different communities? Why do people sometimes abandon or fail to act according to their beliefs/values without epistemic or philosophical reasons to do so? Why do communities sometimes collectively become very extreme in their beliefs/values, again without apparent epistemic or philosophical justification?
why are humans moral but our moralities vary so much across different communities? Why do people sometimes abandon or fail to act according to their beliefs/values without epistemic or philosophical reasons to do so? Why do communities sometimes collectively become very extreme in their beliefs/values, again without apparent epistemic or philosophical justification?
I think “status” plays some part in the answers to these, but only a fairly small one.
Why do moralities vary across different communities? Primarily because people are raised in different cultures with different prevalent beliefs. They then modify those beliefs from that baseline as they encounter new ideas and new events, and often end up seeking out other people with shared values to be friends with. But the majority of people aren’t just pretending to hold those beliefs to fit in (although that does happen); the majority legitimately believe what they say.
Why do communities get extreme? Well, consult the literature on radicalisation; there are a ton of factors. A vivid or horrible event or ongoing trauma sometimes triggers an extreme response. Less radical members of groups might leave, making the average more radical, so even more moderates leave or split, until the group is just radicals (a toy simulation of this exit cascade is sketched below).
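To make that second mechanism concrete, here’s a minimal sketch of the exit cascade. The uniform starting positions, the group size, and the TOLERANCE parameter are all made-up illustrative assumptions on my part, not anything from the radicalisation literature:

```python
import random

# Toy model: members hold positions on a 0-to-1 "radicalism" scale.
# Anyone who ends up too far below the group mean exits; each wave of
# exits raises the mean, which can push the next-most-moderate members
# out in turn -- the self-reinforcing filtering cascade described above.

random.seed(0)
members = [random.random() for _ in range(1000)]  # initial positions
TOLERANCE = 0.25  # how far below the group mean a member will tolerate being

while True:
    mean = sum(members) / len(members)
    stayers = [m for m in members if m >= mean - TOLERANCE]
    if len(stayers) == len(members):
        break  # nobody else wants to leave; the cascade has stopped
    members = stayers

print(f"{len(members)} of 1000 remain; mean position {sum(members) / len(members):.2f}")
```

With these particular numbers the group roughly halves and its mean position drifts from about 0.5 to about 0.75 before the cascade stops; shrink TOLERANCE and it runs much further.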
As to why people fail to act according to their values: people generally have competing values, including self-preservation and instincts, and are not perfectly rational. Sometimes the primal urge to eat a juicy burger overcomes the calculated belief that eating meat is wrong.
These are all amateur takes; a sociologist could probably answer better.