I broadly agree, but in my view the important part to emphasize is what you said in the final thoughts (about seeking to ask more questions about this of ourselves and as a community), and less the intervention recommendations.
Is EA really all about taking every question and twisting it back to malaria nets…? …what we want is to tackle systemic racism at a national level (e.g. in the US, or the UK).
I bite this bullet. I think you do ultimately need to circle back to the malaria nets (especially if you are talking more about directing money than about directing labor). I say this as someone who considers myself as much a part of the social justice movement as I do part of the EA movement.

Realistically, I don't think it's plausible that tackling issues in high-income countries is going to be more morally important than malaria-net-type activities, at least when it comes to fungible resources such as donations (the picture gets more complex with respect to direct work, of course). It's good to think about what the cost-effective ways to improve matters in high-income countries might be, but I bet that once you start crunching numbers you will find that malaria-net-type activities should still be the top priority by a wide margin where fungible resources are concerned. I think the logical conclusions of anti-racist/anti-colonialist thought converge on this as well. In my view, the things that social justice activists are fighting for ultimately do come down to the basics of food, shelter, and medical care, and the scale of that fight has always been global, even if the more visible portion generally plays out in one's more local circles.
However, I still think putting thought into how one would design such interventions should be encouraged, because:
our doubts about the malign influence of institutional prejudice...should reach ourselves as well.
I agree with this, and would encourage more emphasis on it. The EA community (especially the rationality/LessWrong part of the community) puts a lot of effort into getting rid of cognitive biases. But when it comes to acknowledging and internally correcting for the types of biases which result from growing up in a society which is built upon exploitation, I don't really think the EA community does better than any other randomly selected group of people who are from a similar demographic (let's say, randomly selected people who went to prestigious universities). And that's kind of weird. We're a group of people who are trying to achieve social impact. We're often people who wield considerable resources and have to work with power structures all the time. It's a bit concerning that the community level of knowledge of the bodies of work that deal with these issues is just average.

I don't really mean this as a call to action (realistically, given the low current state of awareness, attempting action seems likely to result in misguided or heavy-handed solutions). What I do suggest is this: many of you spend some of your spare time reading and thinking about cognitive biases, trying to better understand yourself and the world, and consider this a worthwhile activity. I think it would be worth applying a similar spirit to spending time really understanding these issues as well.
But when it comes to acknowledging and internally correcting for the types of biases which result from growing up in a society which is built upon exploitation, I don't really think the EA community does better than any other randomly selected group of people who are from a similar demographic (let's say, randomly selected people who went to prestigious universities).
What are some of the biases you’re thinking of here? And are there any groups of people that you think are especially good at correcting for these biases?
My impression of the EA bubble is that it leans left-libertarian; I’ve seen a lot of discussion of criminal justice reform and issues with policing there (compared to e.g. the parts of the mainstream media dominated by people from prestigious universities).
I suppose the average EA might be more supportive of capitalism than the average graduate of a prestigious university, but I struggle to see that as an example of bias rather than as a focus on the importance of certain outcomes (e.g. average living standards vs. higher equity within society).
What are some of the biases you’re thinking of here? And are there any groups of people that you think are especially good at correcting for these biases?
The longer answer: I am not sure how to give a productive answer to this question. In the classic "cognitive bias" literature, people tend to immediately accept that the biases exist once they learn about them (as long as you don't point them out at the very moment they are engaged in them). That is not the case for these issues.
I had to think carefully about how to answer, because (when speaking to the aforementioned "randomly selected people who went to prestigious universities", as well as when speaking to EAs) such issues can be controversial and trigger defensiveness. These topics are political and cannot be de-politicized; I don't think there is any bias I can simply state that isn't going to be upvoted by those who agree and dismissed as a controversial political opinion by those who don't already agree, which isn't helpful.
It's analogous to walking into a random town hall and proclaiming "There's a lot of anthropomorphic bias going on in this community, for example look at all the religiosity" or "There's a lot of speciesism going on in this community, look at all the meat eating". You would not necessarily make any progress on getting people to understand; the only people who would understand are those who know exactly what you mean and already agree with you. In some circles, the level of understanding would be such that people would get it. In others, such statements would produce minor defensiveness and hostility. The level of "understanding" vs. "defensiveness and hostility" in the EA community regarding these issues is similar to that of randomly selected prestigious university students (that is, much more understanding than the population average, but less than ideal). As with "anthropomorphic bias" and "speciesism", there are some communities where certain concepts are implicitly understood by most people and need no explanation, and some where they aren't. It comes down to what someone's point of view is.
Acquiring an accurate point of view, and moving a community towards an accurate point of view, is a long process of truth-seeking. It is a process of un-learning a lot of things that you very implicitly hold true. It wouldn't work to just list biases. If I started listing out things like the (unfortunately poorly named) "privilege-blindness" and the (unfortunately poorly named) "white-fragility", I doubt it would have any positive effect other than to make people who already agree nod to themselves, while some roll their eyes and others google the terms and then roll their eyes. Criticizing things in a way that actually gets through is pretty hard.
The productive process involves talking to individual people, hearing their stories, having first-hand exposure to things, and reading a variety of writings on the topic and evaluating them. I think a lot of people think of these issues as "identity-political topics", "topics that affect those less fortunate", or "poorly formed arguments to be dismissed". I think progress occurs when we frame-shift towards thinking of them as "practical everyday issues that affect our lives", "how can I better articulate these real issues to myself and others", and "important factors in generating global inequality and suffering, an issue which affects us all".
Something which might be a useful contribution from someone familiar with the topic would be to write about it in EA-friendly terms. Practical everyday issues don't have to be expressed in "poorly formed arguments". If the material could be expressed in well-formed arguments (or in arguments which the EA community can recognise as well formed), I think it would gain a lot more traction in the community.
I’ve seen a lot of discussion of criminal justice reform
Well, I do think discussion of it is good, but if you're referring to resources directed to the cause area... it's not that I want EAs to redirect resources away from low-income countries towards solving disparities in high-income countries, and I don't necessarily consider this related to the issue of self-criticism as a community. I haven't really looked into this area, but on prior intuition I'd be surprised if American criminal justice reform compared very favorably in cost-effectiveness to e.g. GiveWell top charities, reforms in low-income countries, or reforms regarding other issues. (Of course, prior intuitions aren't a good way to make these judgements, so right now that's just a "strong opinion, weakly held".)
My stance is basically no on redirecting resources away from basic interventions in low-income countries and towards other things, but yes on advocating that each individual try to become more self-reflective and knowledgeable about these issues.
I suppose the average EA might be more supportive of capitalism than the average graduate of a prestigious university, but I struggle to see that as an example of bias
I agree, that's not an example of bias. This is one of those situations where a word gets too big to be useful: "supportive of capitalism" has come to stand for a uselessly large range of concepts. The same person might be critical of private property, or think it has sinister/exploitative roots, and also support sensible growth-focused economic policies which improve outcomes via market forces.
I think the fact that EA has common-sense appeal to a wide variety of people with various ideas is a great feature. If you are actually focused on doing the most good, you will become less abstractly ideological and more practical, and I think that is the right way to be. (Although I think a lot of EAs unfortunately stay abstract and end up supporting anything that's labeled "EA", which is also wrong.)
My main point is that if someone is serious about doing the most good, and is working on a topic that requires a broad knowledge base, then a reasonable understanding of the structural roots of inequality (including how gender, race, class, and geopolitics play into it) should be one part of their practical toolkit. In my personal opinion, while a good understanding of this sort of thing generally does lead to a certain political outlook, it's really more about adding things to your conceptual toolbox than about which -ism you rally around.
What are some of the biases you’re thinking of here?
This is a tough question to answer properly, both because it is complicated and because I think not everyone will like the answer. There is a short answer and a long answer.
Here is the short answer. I’ll put the long answer in a different comment.
Refer to Sanjay's statement above:
There are some who would argue that you can’t tackle such a structural issue without looking at yourselves too, and understanding your own perspectives, biases and privileges...But I worried that tackling the topic of racism without even mentioning the risk that this might be a problem risked seeming over-confident.
At the time of writing, this is sitting at −5 karma. Maybe it won't stay there, but this innocuous comment was sufficiently controversial that it's there now. Why is that? Is anything written there wrong? I think it's a very mild comment pointing out an obviously true fact: that a community should also be self-reflective and self-critical when discussing structural racism. Normally EAs love self-critical, skeptical behavior. What is different here? Even people who believe that "all people matter equally" and "racism is bad" are still very resistant to having self-critical discussions about it.
I think that understanding the psychology of defensiveness surrounding the response to comments such as this one is the key to understanding the sorts of biases I'm talking about here. (And to be clear, I don't think this pushback against this line of criticism is specific to the EA community; I think the EA community is responding as any demographically similar group would. In other words, this is general civilizational inadequacy at work, not something about EA in particular.)
"It's a bit concerning that the community level of knowledge of the bodies of work that deal with these issues is just average"—I do think there are valuable lessons to be drawn from the literature; unfortunately, a) lots of the work is low quality or under-evidenced, and b) discussion of these issues often ends up being highly divisive whilst not changing many people's minds.
a) Well, I think the "most work is low quality" point is true, but it's also fully general: it applies to almost everything (even EA). Engagement requires doing that filtering process.
b) I don't think it's possible to avoid being "divisive" here. Issues of inequality on global scales and ethnic tension on local scales are in part caused by some groups of humans using violence to lock other groups of humans out of access to resources. Even pointing that out is inherently divisive. Those who feel aligned with the higher-power group will tend to feel defensive and will wish not to discuss the topic, while those who feel aligned with lower-power groups, as well as those who have fully internalized that all people matter equally, will tend to feel resentful about the state of affairs and will keep bringing up the topic. The process of changing minds is slow, but I think if one tries to let go of in-group biases (especially by recognizing that the biases exist) and internalizes that everyone matters equally, one will tend to shift in attitude.
What do you think about the fact that many in the field are pretty open that they are pursuing enquiry aimed at advancing an ideology rather than neutral enquiry (using lines like "all fields are ideological whether they know it or not")?