Not the most charitable tone, I think. And I disagree strongly with your points.
You compare DEI initiatives with interventions in global health and animal suffering—but this post doesn’t argue for such a comparison. This post suggests that the EA community already values diversity, inclusion, etc. and a greater understanding of intersectionality could help further those values. The applications considered in the post are how intersectionality can offer new insights or perspectives on existing cause areas, and how intersectionality might improve communications. You are attacking the post as if DEI was being proposed as a cause area, which is disingenuous.
Second, the special obligations that people of colour might feel more strongly may not attach to skin colour only, or at all. Racism and its effects are complex, so special obligations might be more specific to a particular culture, ethnicity, geography, or history. For example, perhaps an obligation is due to a particular way an ethnicity is treated in a specific place, such as Cuban-Americans in Florida, or on the basis of a particular historic relationship, such as the slave trade. It’s simplistic to assume that special obligations felt more strongly by people of colour must be based on skin colour, and so alleviated by helping other people of a similar skin colour.
I have yet to see sufficient evidence that DEI or intersectionality is utilitarian in the slightest.
Correct. It is a quite different ethical theory. You seem to believe that concepts must be utilitarian to be included in or considered by effective altruism. Given that you are an ‘oldschoolEA’ you probably don’t need to be directed to the EA FAQ on this question, but for other readers it’s worth quoting that:
Utilitarians are usually enthusiastic about effective altruism. But many effective altruists are not utilitarians and care intrinsically about things other than welfare, such as violation of rights, freedom, inequality, personal virtue and more. In practice, most people give some weight to a range of different ethical theories.
The only ethical position necessary for effective altruism is believing that helping others is important. Unlike utilitarianism, effective altruism doesn’t necessarily say that doing everything possible to help others is obligatory, and doesn’t advocate for violating people’s rights even if doing so would lead to the best consequences.
Ibid. for your last point, which seems to claim that unless something is quantifiable it is epistemically suspect. I think there’s a big range of ideas worth considering when thinking about how to do the most good in the world. Not all of those ideas are easily quantifiable.
Finally, you say
Only their beliefs, actions, and values matter for utility maximization. Not the color of their skin, gender, or sexuality.
EAs are humans, not utility-maximising machines. And human psychology is complex. You can’t capture who someone is by asking them to write down all their beliefs, values, and/or actions. Because we can’t write them all down or test them or even know about them all, it’s worth being interested in gaining perspectives from lots of different people, who have lived different kinds of lives.
For a simple example, we want people from a range of academic disciplines. Say our community was almost entirely economists. Even if we were sure it was a really smart bunch of economists, it would be wise to try and get the perspective of some other disciplines. Similarly, if our community is particularly homogenous with respect to gender, ethnicity, culture, or class, it would be worth trying to get more involvement and ideas from people from underrepresented gender/ethnicity/culture/class. This is because how and what we think is quite contingent on the lives we have led. That is, in addition to any arguments from justice or representation, diversity also has epistemic benefits.
An uncharitable tone? Perhaps I should take it as a compliment. Being uncharitably critical is a good thing.
This post suggests that the EA community already values diversity, inclusion, etc. and a greater understanding of intersectionality could help further those aims.
When I first became an EA a decade ago and familiarized myself with (blunt and iconoclastic) EA concepts and ideas in the EA handbooks and other relevant writings, there was no talk of diversity, righting historic wrongs with equity, inclusion, or intersectionality. These were not the values the community sought to maximize or the domains of knowledge meant to be understood. They had nothing to do with increasing utility and combating disutility. Granted, not every EA was utilitarian. But EA grew out of utilitarianism and utilitarian philosophers like Singer and MacAskill. The consequentialist focus was on maximizing good via high-impact philanthropy, on how one could do good better, relative to QALYs and DALYs. EA wasn’t very inclusive either: it was (necessarily) harsh towards any and all who rejected an evidence-based, quantifiable, doing-good-better approach, irrespective of their backgrounds.
There was extreme methodological, data-driven rigor. If you suggested that there was a pressing need to follow in the footsteps of intersectionalist activists and fight racial discrimination and injustice in the US, adopting the jargon and flawed ideas of the intersectionalists, you’d be laughed at… or at least critiqued at an EA meeting. That cause, whilst noble, was far from a tractable priority. People, animals, and countless other sentient beings were out there in the world, dying and suffering. What are the 300 or so people who die at the hands of American police brutality annually compared to the 300 kids in Africa who die every hour…
Things like seeing-eye-dog campaigns and giving to art museums were deemed ineffective. Today we have DEI campaigns and other sorts of ineffective altruism that have crept up and infiltrated the main EA sphere. Perhaps today we should replace the classic “give $1 to AMF or to training a seeing-eye dog” experiment with “give $1 to AMF or to a DEI educational or instructional campaign.” One is effective, the other not so much.
DEI would be fine if there were evidence that maximizing DEI was good for EA ends, but frankly, I see no evidence that this is the case. The focus on community building in EA shifted from growing the EA community to complaining that the EA community was somehow inherently in the wrong, or discriminatory, or evil, for ending up mostly male, white, secular, tech-based, etc. That couldn’t stand, so there was a push to make EA more diverse, open, and inclusive.
Which is great, and had my initial support. But it comes with the risk that those who might not share EA values and methodologies will become EAs and, over time, shift EA’s values and priorities as these individuals become more numerous, influential, and rise to leadership positions. EA became increasingly big-tent, in part because of this.
I initially supported this outreach, but didn’t expect the epistemic baggage and prioritized non-EA values of others to in turn infiltrate and alter EA from the inside out. Previously, I found EA had a stronger ideological unity and sense of purpose. No one cared about your gender or race; that wasn’t important. Only your beliefs, values, epistemologies, and deeds mattered. And what mattered more was discourse-driven consensus among EAs, but consensus and what we all share have given way to inclusion and a relativistic diversity of thought. Look at the criteria for what it means to be an EA; look how vague and non-specific they have become :(
Today the EA community is one where diversity and “equity” and “justice” have become intrinsic, disseminated values, rather than potential or circumstantial instrumental ones for previously lauded ends. I’ve watched this sad and slow evolution take place. And it saddens the inner utilitarian in me.
So DEI has become a cause area within a cause area, and we are all aware of it.
Intersectionality is not just a flawed, non-quantitative epistemology. It is the very means by which DEI initiatives are maximized and implemented.
After all, if your goal is to maximize diversity, then you need intersectionality to draw up the dozens of (imo irrelevant) demographic categories (racial, religious/lack thereof, ethnic, gender, sex, health status, socioeconomic, sexual, age, level of education, citizenship status, etc.), and then try to make sure you have people who match all the combinations and criteria. Then you have to make sure equity is there, so all historical wrongs have to be accounted for. Then you have to shame people for making assumptions or holding beliefs about those who are part of other categories.
For example, intersectionalists claim it’s pointless for a male to study female psychology, because a male will never understand what it’s like to be female and should therefore have no voice in the conversation.
Second, the special obligations that people of colour might feel more strongly may not attach to skin colour only, or at all. Racism and its effects are complex, so special obligations might be more specific to a particular culture, ethnicity, geography, or history. For example, perhaps an obligation is due to a particular way an ethnicity is treated in a specific place, such as Cuban-Americans in Florida, or on the basis of a particular historic relationship, such as the slave trade.
These obligations, if they exist, are not EA. Period. They are not effective. Yes, they may be forms of altruism, but they are ineffective ones based on kinship, greenbeard effects, localism, etc. They aren’t EA. They aren’t neutral. We as a community used to take a harsher stance against these, because the money goes further overseas. Has that been lost?
I’m White and Asian, and I’ve experienced discrimination and dislike from people who adopt tribalistic mentalities. I’m no stranger to racism, but I recognize that culture and history can turn people into the opposite of what EAs strive for: cause neutrality.
It’s simplistic to assume that special obligations felt more strongly by people of colour must be based on skin colour, and so alleviated by helping other people of a similar skin colour.
Having spoken to plenty of PoC intersectional activists, I can say there is often an emphasis on color, and I find it delusional to deny it.
One such campaign (for example) is the “buy from this business, it’s Black-owned” type, or “support this charity because it is run entirely by PoC and is fully diverse.” These campaigns argue there is a moral obligation to support charities or businesses based on the demographic characteristics of their owners or leaders. I don’t find this justifiable, relative to other charities or initiatives.
While you are not wrong in pointing out that (today) one doesn’t have to be a utilitarian to be an EA, back in the day, it was rare to find an EA who wasn’t utilitarian or an adherent to the utilitarian moral prescriptions of Singer and the like.
Ibid. for your last point, which seems to claim that unless something is quantifiable it is epistemically suspect. I think there’s a big range of ideas worth considering when thinking about how to do the most good in the world. Not all of those ideas are easily quantifiable.
I agree, but EA’s strength is its focus on what is quantifiable.
EAs are humans, not utility-maximising machines. And human psychology is complex. You can’t capture who someone is by asking them to write down all their beliefs, values, and/or actions. Because we can’t write them all down or test them or even know about them all, it’s worth being interested in gaining perspectives from lots of different people, who have lived different kinds of lives.
Humans are utility-maximizing machines, though we are often very bad at it. You can get a good and workable approximation of someone based on their values, beliefs, and actions.
it’s worth being interested in gaining perspectives from lots of different people, who have lived different kinds of lives.
Gonna have to disagree there. The perspectives of those training seeing-eye dogs, caring about art, or volunteering at the local theater are not worth considering. What I liked about EA was that some perspectives are more important than others, and we can hone the perspectives that matter over those that are not morally or epistemically relevant.
Similarly, if our community is particularly homogenous with respect to gender, ethnicity, culture, or class, it would be worth trying to get more involvement and ideas from people from underrepresented gender/ethnicity/culture/class.
This seems to assume that people from underrepresented genders/ethnicities/cultures/classes are incapable of generating the same ideas as everyone else, and that they somehow have ideas that differ from those of the homogenous majority.
Or at minimum, if these ideas are in fact different, it assumes those ideas are better than what the majority has come up with (which I find unlikely, given the rarity of EA methodological rigor).
Frankly (for example), I can’t tell the difference between a female/white/American/working-class hedonistic utilitarian and a male/Black/French/middle-class hedonistic utilitarian.
As far as I’m concerned, both are hedonistic utilitarians with the same (or highly similar) hedonistic utilitarian ideas. Their sex or gender or race doesn’t change that.
That’s… a lot to unpack. I think we probably disagree on a lot, and I’m not sure further back-and-forth will be all that productive. I trust other readers to assess whose responses were substantive or convincing.
Two final comments:
1) As mentioned in McMahan’s ‘Philosophical Critiques of Effective Altruism’, the earliest arguments by Singer and Unger were based on intuition to a thought experiment and consistency, and “there is no essential dependence of effective altruism on utilitarianism.”
2) Even if we grant that early EA was 100% and whole-heartedly utilitarian, does it follow that EA today should be?
The 2019 EA survey found that the clear majority of EAs (80.7%) identified with consequentialism, especially utilitarian consequentialism. Their moral views color and influence how EA functions. So the lack of dependence of effective altruism on utilitarianism is a weak argument, historically and presently.
Yes, EA should still uphold data-driven consequentialist principles and methodologies, like those seen in contemporary utilitarian calculus.
I agree that most EAs identify with consequentialism, and that proportion was likely higher in the past. I also lean consequentialist myself. But that’s not what we disagree about. You move from “the majority of EAs lean consequentialist” to “the only ideas EA should consider seriously are utilitarian ones”, and that is what I disagree with.
Moral Uncertainty is a book about what to do given that there are multiple plausible ethical theories, written by two of EA’s leading lights, Toby Ord and Will MacAskill (along with Krister Bykvist). Perhaps you could consider it.
The change over time from a simplistic, first-order theory of effective altruism is warranted and natural. You describe a set of rules of thumb for utilitarianism, but the thing is: over time we get better at discussing how to adapt to different situations and what it even is that we want to maximise. You may prefer to keep the old ways, but that doesn’t make it the “correct” EA formalism.
over time we get better at discussing how to adapt to different situations and what it even is that we want to maximise.
Over time, EA has become increasingly big-tent and has ventured into offering opinions on altruistic initiatives it would previously have criticized or deemed ineffective.
That is to say, the concern is that EA is becoming merely A, over time.