I see Helen’s post as being more prescriptive than descriptive. It’s something to aspire to, and declaring that “Effective Altruism is an Ideology” feels like giving up. Instead of “defending” against “competing” ideological perspectives, why not adopt the best of what they have to offer?
I also think you’re being a little unfair. Time and attention for evaluating ideas and publishing analysis are limited, and in several cases there is work you don’t seem to be aware of.
I’ll grant that EA may have an essentially consequentialist outlook (though even on this point, I’d argue many EAs are too open to other moral philosophies to qualify for the adjective “ideological”; see e.g. the discussion of non-consequentialist ethics in this podcast with EA co-founder Will MacAskill).
But some of your other claims feel too strong to me. For example, even if it’s true that no EA organization has ever made use of ethnography, I don’t think that’s because we’re ideologically opposed to ethnography in the way that, say, libertarians are ideologically opposed to government coercion. As anonymous_ea points out, ethnography was just recently a topic of interest here on the forum. It seems plausible to me that we’re talking about and making use of ethnography at about the same rate as the research world at large (that is to say, not very much).
Similarly, using phenomenology to determine the value of different types of life sounds like Qualia Research Institute, and I believe CEA has examined historical case studies related to social movements. Just because you aren’t personally aware of it doesn’t mean someone in EA isn’t doing it, and it certainly doesn’t mean EA is ideologically opposed to it.
With regard to “devising and implementing alternatives to global capitalism”, 80k did a podcast on that. This is the sort of podcast I’d expect to see in the world where EA is a question, and 80k is always talking to experts in different areas, exploring new possible cause areas for EA. Here’s a post on socialism you might be interested in.
Similarly, there is an effective environmentalism group with hundreds of members in it. Here is a post on an EA blog attempting to address more or less exactly the issue you outline (“serious evidence-based research into the specific questions I present is highly neglected, even if the broader areas are not”) with regard to environmentalism. And at a recent EA conference, I attended a presentation which argued that global warming should be a higher priority for EAs.
It doesn’t feel to me like EAs are ideologically opposed to environmentalism with anything like the vigor with which feminists and libertarians ideologically oppose things. Instead it seems like EAs investigate environmentalism, and some folks argue for it and work on it, but those arguments haven’t been strong enough to make environmentalism the primary focus of most EAs. 80k places global warming under the category of “areas that are especially important but somewhat less neglected”.
Anyway, an argument that uniquely picks out AI safety is: If we can solve AI safety and create a superintelligent FAI, it can solve all the other problems on your list. I don’t think this argument is original to me; I suspect it came up when FHI did research on which existential risks to focus on many years ago. A quick look at the table of contents of this book shows FHI spent plenty of time considering existential risks unrelated to new technologies. I think OpenPhil did their own broad research and ended up coming to conclusions similar to FHI’s.
With regard to the Global Priorities Institute, and the importance of x-risk, longtermism has received a fair amount of discussion. Nick Beckstead wrote an entire PhD thesis on it.
Regarding the claim that emerging technologies are EA’s main focus, I want to highlight these results from the EA Survey which found that “Global Poverty remains the most popular single cause in our sample as a whole”. Note that the fourth most popular cause is cause prioritization. You write: “My point is not that the candidate causes I have presented actually are good causes for EAs to work on”. However, if we’re trying to figure out whether we should devote even more resources to investigating unexplored causes to do the most good, beyond what’s currently going into cause prioritization, the ease of finding good causes which are currently ignored seems like an important factor. In other words, I don’t see a compelling argument here that cause prioritization should receive more attention than it currently receives.
In addition to being a question, EA is also a community and a memeplex. It’s important to listen to people outside the community in case people are self-selecting in or out based on incidental factors. And I believe in upvoting uncommon perspectives on this forum to encourage a diversity of opinions. But let’s not give up and start calling ourselves an ideology. I would rather have an ecosystem of competing ideas than a body of doctrine—and luckily, I think we’re already closer to an ecosystem, so let’s keep it that way.
It’s important to listen to people outside the community in case people are self-selecting in or out based on incidental factors.
Yet anything framed as an attack on or critique of EA is itself something that causes people to self-select in or out of the community. If someone says “EA has a statistics ideology,” then people who don’t like statistics won’t join. This becomes an entrenched problem through founder effects, a sort of self-fulfilling prophecy.
What is helpful is to showcase people who actually work on things like ethnography. That’s something that makes EA more methodologically diverse.
But stuff like this is just as apt to make anyone who isn’t cool with utilitarianism / statistics / etc say they want to go elsewhere.
People who aren’t “cool with utilitarianism / statistics / etc” already largely self-select out of EA. I think my post articulates some of the reasons why this is the case.
I’ve met a great number of people in EA who disagree with utilitarianism, and many people who aren’t particularly statistically minded. Of course their prevalence doesn’t match the base rates in the general population, but I don’t really see philosophically dissecting moderate differences as productive for the goal of increasing movement growth.
If you’re interested in ethnography, sociology, case studies, etc., then consider how other movements have effectively overcome similar issues. For instance, the contemporary American progressive political movement is heavily driven by middle- and upper-class whites, and faces dissent from substantial portions of racial minorities and women. Yet it has been very effective in seizing institutions and public discourse surrounding race and gender issues. Did it accomplish this by critically interrogating itself about its social appeal? No, it set such doubts aside and focused on hammering home its core message as strongly as possible.
If we want to assist movement growth, we need to take off our philosopher hats, and put on our marketer and politician hats. But you didn’t write this essay with the framing of “how to increase the uptake of EA among non-mathematical (etc) people” (which would have been very helpful); eschewing that in favor of normative philosophy was your implicit, subjective judgment of which questions are most worth asking and answering.