Thanks for your message Jackson. A few thoughts:
First thing is that, if intersectionality seems vague or poorly defined, then that’s likely a fault of my writing rather than of the idea. To clarify—“intersectionality” is the idea that individuals encounter overlapping types of disadvantage, and that these disadvantages combine in ways that cannot be easily explained by looking at any one kind of disadvantage in isolation. This means that finding solutions to problems at the intersection of several axes of disadvantage often requires explicitly considering how those axes interact.
Its (potential) relevance to EA comes from the fact that a lot of EA cause areas deal with multiple axes of disadvantage in tandem; the term ‘intersectionality’ could help to bring conceptual clarity to these discussions. To return to the animal welfare example, evelyciara has highlighted that there have been several recent posts about non-human animals being neglected along several axes simultaneously. Given the multiple disadvantages faced by future animals, intersectionality predicts that we will need to come up with novel solutions to protect them; just trying to (a) protect the long-term future and (b) promote animal welfare is unlikely to achieve this goal. Guy Raveh highlights a similar example below in global health. I think the language of intersectionality is a neat way to explain what’s going on here, and why we might need to bring a fresh approach to these issues.
I don’t think animal welfare is the only cause area where intersectionality could bring conceptual clarity and improve our thinking. For example, engaging with how best to advance the welfare of digital people might benefit from an intersectional framing. It seems plausible that digital societies might end up with social ills similar to those we currently suffer from—status games, inequality, ‘poverty’, and so on. However, it’s unlikely that the standard EA development strategy (read: health interventions) would be at all useful in dealing with these issues. Again, that’s because this is an intersectional issue, with multiple disadvantages (being digital, being poor) combining to create novel problems. If you agree with me that this seems obvious, then I suspect our disagreement has more to do with the use of the particular term ‘intersectionality’. This brings me to my next point.
Even if intersectionality comes with intellectual baggage, I don’t think we should shy away from using the term if it improves clarity. EAs already use terms that carry significant ideological baggage, because they’re useful and help to express important ideas. The term ‘nonhuman animals’ is a good example here—EAs use it to indicate that the moral distinction between humans and other animals is illusory. But this term (and much of the language around veganism) is morally charged, indicating a set of beliefs that many outside of EA perceive as an indictment of meat-eaters. Similarly, EAs on the forum often discuss political liberalism or cosmopolitanism, and many leading EAs explicitly identify as neoliberals. All three terms are highly politically charged, identifying fuzzily defined sets of policy stances that are controversial on both sides of the political spectrum. Nonetheless, in all of the cases I’ve just outlined, we use these terms because they’re a helpful way of concisely expressing our ideas. I don’t think intersectionality differs in any unique way from the terms I’ve just described.

That said, I now think that this comment is right, inasmuch as it’s worth starting a new language game given the baggage that comes with the term.

I think this covers most of your comments, but please let me know if there’s anything I can clarify. I expect our crux of disagreement is how useful it is to introduce a politically charged term like intersectionality into EA discourse, and I’m happy to engage more on that topic.