Effective Altruism is an Ideology, not (just) a Question

Introduction

In a widely-cited article on the EA forum, Helen Toner argues that effective altruism is a question, not an ideology. Here is her core argument:
What is the definition of Effective Altruism? What claims does it make? What do you have to believe or do, to be an Effective Altruist? I don’t think that any of these questions make sense. It’s not surprising that we ask them: if you asked those questions about feminism or secularism, Islamism or libertarianism, the answers you would get would be relevant and illuminating. Different proponents of the same movement might give you slightly different answers, but synthesising the answers of several people would give you a pretty good feeling for the core of the movement. But each of these movements is answering a question. Should men and women be equal? (Yes.) What role should the church play in governance? (None.) What kind of government should we have? (One based on Islamic law.) How big a role should government play in people’s private lives? (A small one.) Effective Altruism isn’t like this. Effective Altruism is asking a question, something like: “How can I do the most good, with the resources available to me?”
In this essay I will argue that her view of effective altruism as a question and not an ideology is incorrect. In particular, I will argue that effective altruism is an ideology, meaning that it has a particular (if somewhat vaguely defined) set of core principles and beliefs, and associated ways of viewing the world and interpreting evidence. After first explaining what I mean by ideology, I proceed to discuss the ways in which effective altruists typically express their ideology, including by privileging certain questions over others, applying particular theoretical frameworks to answer these questions, and privileging particular answers and viewpoints over others. I should emphasise at the outset that my purpose in this article is not to disparage effective altruism, but to try to strengthen the movement by helping EAs to better understand the actual intellectual underpinnings of the movement.
What is an ideology?
The first point I want to explain is what I mean when I talk about an ‘ideology’. Basically, an ideology is a constellation of beliefs and perspectives that shape the way adherents of that ideology view the world. To flesh this out a bit, I will present two examples of ideologies: feminism and libertarianism. Obviously these will be simplified since there is considerable heterogeneity within any ideology, and there are always disputes about who counts as a ‘true’ adherent of any ideology. Nevertheless, I think these quick sketches are broadly accurate and helpful for illustrating what I am talking about when I use the word ‘ideology’.
First consider feminism. Feminists typically begin with the premise that the social world is structured in such a manner that men as a group systematically oppress women as a group. There is a richly structured theory about how this works and how this interacts with different social institutions, including the family, the economy, the justice system, education, health care, and so on. In investigating any area, feminists typically focus on gendered power structures and how they shape social outcomes. When something happens, feminists ask ‘what effect does this have on the status and place of women in society?’ Given these perspectives, feminists typically are uninterested in and highly sceptical of any accounts of social differences between men and women based on biological differences, or attempts to rationalise differences on the basis of social stability or cohesion. This way of looking at things, this focus on particular issues at the expense of others, and this set of underlying assumptions constitute the ideology of feminism.
Second consider libertarianism. Libertarians typically begin with the idea that individuals are fundamentally free and equal, but that governments throughout the world systematically step beyond their legitimate role of protecting individual freedoms by restricting those freedoms and violating individual rights. In analysing any situation, libertarians focus on how the actions of governments limit the free choices of individuals. Libertarians have extensive accounts as to how this occurs through taxation, government welfare programs, monetary and fiscal policy, the criminal justice system, state-sponsored education, the military industrial complex, and so on. When something happens, libertarians ask ‘what effect does this have on individual rights and freedoms?’ Given these perspectives, libertarians typically are uninterested in and highly sceptical of any attempts to justify state intervention on the basis of increasing efficiency, increasing equality, or improving social cohesion. This way of looking at things, this focus on particular issues at the expense of others, and this set of underlying assumptions constitute the ideology of libertarianism.
Given the foregoing, here I summarise some of the key aspects of an ideology:
1. Some questions are privileged over others.
2. There are particular theoretical frameworks for answering questions and analysing situations.
3. As a result of 1 and 2, certain viewpoints and answers to questions are privileged, while others are neglected as being uninteresting or implausible.
With this framework for what an ideology is in mind, I now want to apply it to the case of effective altruism. In doing so, I will consider each of these three aspects of an ideology in turn, and see how they relate to effective altruism.
Some questions are privileged over others
Effective altruism, according to Toner (and many others), asks a question something like ‘How can I do the most good, with the resources available to me?’. I agree that EA does indeed ask this question. However, it doesn’t follow that EA isn’t an ideology, since, as we have just seen, ideologies privilege some questions over others. In this case we can ask: what other similar questions could effective altruism ask? Here are a few that come to mind:
What moral duties do we have towards people in absolute poverty, animals in factory farms, or future generations?
What would a virtuous person do to help those in absolute poverty, animals in factory farms, or future generations?
What oppressive social systems are responsible for the most suffering in the world, and what can be done to dismantle them?
How should our social and political institutions be structured so as to properly represent the interests of all persons, or all sentient creatures?
I’ve written each with a different ethical theory in mind. In order these are: deontology, virtue ethics, Marxist/postcolonial/other critical theories, and contractarian ethics. While some readers may phrase these questions somewhat differently, my point is simply to emphasise that the question you ask depends upon your ideology.
Some EAs may be tempted to respond that all my examples are just different ways, or more specific ways, of asking the EA question ‘how can we do the most good’, but I think this is simply wrong. The EA question is the sort of question that a utilitarian would ask, and presupposes certain assumptions that are not shared by other ethical perspectives. These assumptions include things like: that there is (in principle) some way of comparing the value of different causes, that it is of central importance to consider maximising the positive consequences of our actions, and that historical connections between us and those we might try to help are not of critical moral relevance in determining how to act. EAs asking this question need not explicitly believe all these assumptions, but I argue that in asking the EA question instead of other questions they could ask, they are implicitly relying upon tacit acceptance of these assumptions. To assert that these are beliefs shared by all other ideological frameworks is simply to ignore the differences between different ethical theories and the worldviews associated with them.
Particular theoretical frameworks are applied
In addition to the questions they ask, effective altruists tend to have a very particular approach to answering these questions. In particular, they tend to rely almost exclusively on experimental evidence, mathematical modelling, or highly abstract philosophical arguments. Other theoretical frameworks are generally not taken very seriously or simply ignored. Theoretical approaches that EAs tend to ignore include:
Sociological theory: potentially relevant to understanding the causes of global poverty, how group dynamics operate, and how social change occurs.
Ethnography: potentially highly useful in understanding causes of poverty, efficacy of interventions, how people make dietary choices regarding meat eating, the development of cultural norms in government or research organisations surrounding safety of new technologies, and other such questions, yet I have never heard of an EA organisation conducting this sort of analysis.
Phenomenology and existentialism: potentially relevant to determining the value of different types of life and what sort of society we should focus on creating.
Historical case studies: there is some use of these in the study of existential risk, mostly relating to nuclear war, but otherwise this method is ignored as a potential source of information about social movements, improving society, and assessing catastrophic risks.
Regression analysis: potentially highly useful for analysing effective causes in global development, methods of political reform, or even the ability to influence AI or nuclear policy formation, but largely neglected in favour of either experiments or abstract theorising.
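To make concrete what I have in mind with the last of these methods, here is a minimal sketch of a regression analysis on simulated data. Everything in it (the variables, the coefficients, the data) is invented purely for illustration and does not come from any actual EA study or dataset:

```python
# Toy regression: does a hypothetical intervention predict an outcome index,
# controlling for baseline income? All data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200
baseline_income = rng.normal(0, 1, n)   # standardised control variable (invented)
intervention = rng.normal(0, 1, n)      # hypothetical exposure of interest (invented)
outcome = 0.5 * intervention + 0.3 * baseline_income + rng.normal(0, 1, n)

# Ordinary least squares with an intercept column in the design matrix
X = np.column_stack([np.ones(n), intervention, baseline_income])
coefficients, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print("intercept, intervention effect, income effect:", np.round(coefficients, 2))
```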
If readers disagree with my analysis, I would invite them to investigate the work published on EA websites, particularly research organisations like the Future of Humanity Institute and the Global Priorities Institute (among many others), and see what sorts of methodologies they utilise. Regression analysis and historical case studies are relatively rare, and the other three techniques I mention are virtually unheard of. This represents a very particular set of methodological choices about how to best go about answering the core EA question of how to do the most good.
Note that I am not taking a position on whether it is correct to privilege the types of evidence or methodologies that EA typically does. Rather, my point is simply that effective altruists seem to have very strong norms about what sorts of analysis are worth doing, despite the fact that relatively little time is spent in the community discussing these issues. GiveWell does have a short discussion of their principles for assessing evidence, and there is a short section in the appendix of the GPI research agenda about harnessing and combining evidence, but overall the amount of time spent discussing these issues in the EA community is very small. I therefore contend that these methodological choices are primarily the result of ideological preconceptions about how to go about answering questions, and not an extensive analysis of the pros and cons of different techniques.
Certain viewpoints and answers are privileged
Ostensibly, effective altruism seeks to answer the question ‘how to do the most good’ in a rigorous but open-minded way, without ruling out any possibilities at the outset or making assumptions about what is effective without proper investigation. It seems to me, however, that this is simply not an accurate description of how the movement actually investigates causes. In practice, the movement seems heavily focused on the development and impacts of emerging technologies. Though not so pertinent in the case of global poverty, this is somewhat applicable in the case of animal welfare, given the increasing focus on the development of in vitro meat and plant-based meat substitutes. This technological focus is most evident in the focus on far future causes, since all of the main far future cause areas focused on by 80,000 hours and other key organisations (nuclear weapons, artificial intelligence, biosecurity, and nanotechnology) relate to new and emerging technologies. EA discussions also commonly feature discussion and speculation about the effects that anti-aging treatments, artificial intelligence, space travel, nanotechnology, and other speculative technologies are likely to have on human society in the long-term future.
By itself the fact that EAs are highly focused on new technologies doesn’t prove that they privilege certain viewpoints and answers over others – maybe a wide range of potential cause areas have been considered, and many of the most promising causes just happen to relate to emerging technologies. However, from my perspective this does not appear to be the case. As evidence for this view, I will present as an illustration the common EA argument for focusing on AI safety, and then show that much the same argument could also be used to justify work on several other cause areas that have attracted essentially no attention from the EA community.
We can summarise the EA case for working on AI safety as follows, based on articles such as those from 80,000 hours and CEA (note this is an argument sketch and not a fully-fledged syllogism):
Most AI experts believe that AI with superhuman intelligence is certainly possible, and has a nontrivial probability of arriving within the next few decades.
Many experts who have considered the problem have advanced plausible arguments for thinking that superhuman AI has the potential for highly negative outcomes (potentially even human extinction), but there are current actions we can take to reduce these risks.
Work on reducing the risks associated with superhuman AI is highly neglected.
Therefore, the expected impact of working on reducing AI risks is very high.
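To make the structure of this reasoning explicit, here is a toy expected-value calculation in the spirit of the argument above. Every number in it is a made-up placeholder chosen only to show the form of the reasoning, not an estimate that any EA organisation has endorsed:

```python
# Toy illustration of the expected-value logic behind the AI safety argument.
# All numbers are invented placeholders; only the structure of the calculation matters.

p_problem = 0.1            # hypothetical probability of the catastrophic outcome
impact_if_it_occurs = 1e9  # hypothetical badness of the outcome (arbitrary units)
current_workers = 100      # hypothetical number of people already working on the problem

# Crude neglectedness adjustment: assume the marginal value of an extra worker
# falls in proportion to the number of people already working on the problem.
expected_value_per_extra_worker = (p_problem * impact_if_it_occurs) / (current_workers + 1)

print(f"Expected value of one additional worker: {expected_value_per_extra_worker:,.0f} units")
```

The same template, with different placeholder numbers, generates the parallel arguments I present below, which is precisely my point.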
The three key aspects of this argument are expert belief in plausibility of the problem, very large impact of the problem if it does occur, and the problem being substantively neglected. My argument is that we can adapt this argument to make parallel arguments for other cause areas. I shall present three: overthrowing global capitalism, philosophy of religion, and resource depletion.
Overthrowing global capitalism
Many experts on politics and sociology believe that the institutions of global capitalism are responsible for extremely large amounts of suffering, oppression, and exploitation throughout the world.
Although there is much work criticising capitalism, work on devising and implementing practical alternatives to global capitalism is highly neglected.
Therefore, the expected impact of working on devising and implementing alternatives to global capitalism is very high.
Philosophy of religion
A sizeable minority of philosophers believe in the existence of God, and at least some very intelligent and educated philosophers are adherents of a wide range of different religions.
According to many religions, humans who do not adopt the correct beliefs and/or practices will be destined to an eternity (or at least a very long period) of suffering in this life or the next.
Although religious institutions have extensive resources, the amount of time and money dedicated to systematically analysing the evidence and arguments for and against different religious traditions is extremely small.
Therefore, the expected impact of working on investigating the evidence and arguments for the various religions is very high.
Resource depletion
Many scientists have expressed serious concern about the likely disastrous effects of population growth, ecological degradation, and resource depletion on the wellbeing of future generations and even the sustainability of human civilization as a whole.
Very little work has been conducted to determine how best to respond to resource depletion or degradation of the ecosystem so as to ensure that Earth remains habitable and human civilization is sustainable over the very long term.
Therefore, the expected impact of working on investigating long-term responses to resource depletion and ecological collapse is very high.
Readers may dispute the precise way I have formulated each of these arguments or exactly how closely they all parallel the case for AI safety; however, I hope they will see the basic point I am driving at. Specifically, if effective altruists are focused on AI safety essentially because of expert belief in the plausibility of the problem, the large scope of the problem, and the neglectedness of the issue, a similar case can be made with respect to working on overthrowing global capitalism, conducting research to determine which religious belief (if any) is most likely to be correct, and efforts to develop and implement responses to resource depletion and ecological collapse.
One response that I foresee is that none of these causes are really neglected because there are plenty of people focused on overthrowing capitalism, researching religion, and working on environmentalist causes, while very few people work on AI safety. But remember, outsiders would likely say that AI safety is not really neglected because billions of dollars are invested into AI research by academics and tech companies around the world. The point is that there is a difference between working in a general area and working on the specific subset of that area that is highest impact and most neglected. In much the same way as AI safety research is neglected even if AI research more generally is not, likewise in the parallel cases I present, I argue that serious evidence-based research into the specific questions I present is highly neglected, even if the broader areas are not.
Potential alternative causes are neglected
I suspect that many of my readers will at this point be mentally marshalling additional arguments as to why AI safety research is in fact a more worthy cause than the other three I have mentioned. Doubtless there are many such arguments that one could present, and probably I could devise counterarguments to at least some of them – and so the debate would progress. My point is not that the candidate causes I have presented actually are good causes for EAs to work on, or that there aren’t any good reasons why AI safety (along with other emerging technologies) is a better cause. My point is rather that these reasons are not generally discussed by EAs. That is, the arguments generally presented for focusing on AI safety as a cause area do not uniquely pick out AI safety (and other emerging technologies like nanotechnology or bioengineered pathogens), but EAs making the case for AI safety essentially never notice this because their ideological preconceptions bias them towards focusing on new technologies, and away from the sorts of causes I mention here. Of course EAs do go into much more detail about the risks of new technologies than I have here, but the core argument for focusing on AI safety in the first place is not applied to other potential cause areas to see whether (as I believe it does) it also applies to those other causes.
Furthermore, it is not as if effective altruists have carefully considered these possible cause areas and come to the reasoned conclusion that they are not the highest priorities. Rather, they have simply not been considered. They have not even been on the radar, or at best barely on the radar. For example, I searched for ‘resource depletion’ on the EA forums and found nothing. I searched for ‘religion’ and found only the EA demographics survey and an article about whether EA and religious organisations can cooperate. A search for ‘socialism’ yielded one article discussing what is meant by ‘systemic change’, and one article (with no comments and only three upvotes) explicitly outlining an effective altruist plan for socialism.
This lack of interest in other cause areas can also be found in the major EA organisations. For example, the stated objective of the Global Priorities Institute is:
To conduct foundational research that informs the decision-making of individuals and institutions seeking to do as much good as possible. We prioritise topics which are important, neglected, and tractable, and use the tools of multiple disciplines, especially philosophy and economics, to explore the issues at stake.
On the face of it this aim is consistent with all three of the suggested alternative cause areas I outlined in the previous section. Yet the GPI research agenda focuses almost entirely on technical issues in philosophy and economics pertaining to the long-termism paradigm. While AI safety is not discussed extensively, it is mentioned a number of times, and much of the research agenda appears to be developed around the related questions in philosophy and economics that the long-termism paradigm gives rise to. Religion and socialism are not mentioned at all in this document, while resource depletion is mentioned only indirectly, by two references in the appendix under ‘indices involving environmental capital’.
Similarly, the Future of Humanity Institute focuses on AI safety, AI governance, and biotechnology. Strangely, it also pursues some work on highly obscure topics such as the aestivation solution to the Fermi paradox and the probability of Earth being destroyed by microscopic black holes or metastable vacuum states. At the same time, there is nothing about any of the potential new problem areas I have mentioned.
Under their problem profiles, 80,000 hours does not mention having investigated anything relating to religion or overthrowing global capitalism (or even substantially reforming global economic institutions). They do link to an article by Robert Wiblin discussing why EAs do not work on resource scarcity; however, this is not a careful analysis or investigation, just his general views on the topic. Although I agree with some of the arguments he makes, the depth of analysis is very shallow relative to the potential risks and the concern raised about this issue by many scientists and writers over the decades. Indeed, I would argue that, as a rebuttal of resource depletion as a cause area, this article has about as much substance as the typical article dismissing AI fears as exaggerated and hysterical.
In yet another example, the Foundational Research Institute states that:
Our mission is to identify cooperative and effective strategies to reduce involuntary suffering. We believe that in a complex world where the long-run consequences of our actions are highly uncertain, such an undertaking requires foundational research. Currently, our research focuses on reducing risks of dystopian futures in the context of emerging technologies. Together with others in the effective altruism community, we want careful ethical reflection to guide the future of our civilization to the greatest extent possible.
Hence, even though it seems that in principle socialists, Buddhists, and ecological activists (among others) are highly concerned about reducing the suffering of humans and animals, FRI ignores the topics that these groups would tend to focus on, and instead focuses its attention on the risks of emerging technologies. As in the case of FHI, it also seems to find room for some topics of highly dubious relevance to any of EA’s goals, such as this paper about the potential for correlated actions with civilizations located elsewhere in the multiverse.
Outside of the main organisations, there has been some discussion of socialism as an EA cause, for example on r/EffectiveAltruism and by Jeff Kaufman. I was able to find little else about either of the other two potential cause areas I outline.
Overall, on the basis of the foregoing examples I conclude that the amount of time and energy spent by the EA community investigating the three potential new cause areas that I have discussed is negligible compared to the time and energy spent investigating emerging technologies. This is despite the fact that most of these groups were not ostensibly established with the express purpose of reducing the harms of emerging technologies, but have simply chosen this cause area over other possibilities that would also potentially fulfill their broad objectives. I have not found any evidence that this choice is the result of early investigations demonstrating that emerging technologies are far superior to the cause areas I mention. Instead, it appears to be mostly the result of disinterest in the sorts of topics I identify, and a much greater ex ante interest in emerging technologies over other causes. I present this as evidence that the primary reason effective altruism focuses so extensively on emerging technologies over other speculative but potentially high-impact causes is the privileging of certain viewpoints and answers over others. This, in turn, is the result of the underlying ideological commitments of many effective altruists.
What is EA ideology?
If many effective altruists share a common ideology, then what is the content of this ideology? As with any social movement, this is difficult to specify with any precision and will obviously differ somewhat from person to person and from one organisation to another. That said, on the basis of my research and experiences in the movement, I would suggest the following core tenets of EA ideology:
The natural world is all that exists, or at least all that should be of concern to us when deciding how to act. In particular, most EAs are highly dismissive of religious or other non-naturalistic worldviews, and tend to assume without further discussion that views like dualism, reincarnation, or theism cannot be true. For example, the map of EA concepts lists, under ‘important general features of the world’, pages on ‘possibility of an infinite universe’ and ‘the simulation argument’, yet makes no mention of the possibility that anything could exist beyond the natural world. It requires a very particular ideological framework to regard the simulation argument as more important or pressing than non-naturalism.
The correct way to think about moral/ethical questions is through a utilitarian lens in which the focus is on maximising desired outcomes and minimising undesirable ones. We should focus on the effect of our actions on the margin, relative to the most likely counterfactual. There is some discussion of moral uncertainty, but outside of this, deontological, virtue-ethics, contractarian, and other approaches are rarely applied in philosophical discussions of EA issues. This marginalist, counterfactual, optimisation-based way of thinking is largely borrowed from neoclassical economics, and is not widely employed by many other disciplines or ideological perspectives (e.g. communitarianism).
Rational behaviour is best understood through a Bayesian framework, incorporating key results from game theory, decision theory, and other formal approaches (a toy illustration of a Bayesian update follows this list). Many of these concepts appear in the idealised decision making section of the map of EA concepts, and are widely applied in other EA writings.
The best way to approach a problem is to think very abstractly about it, construct computational or mathematical models of the relevant problem area, and ultimately (if possible) test these models using experiments. The model here appears to be that of research in physics, with some influence from analytic philosophy. The methodologies of other disciplines are largely ignored.
The development and introduction of disruptive new technologies is a more fundamental and important driver of long-term change than socio-political reform or institutional change. This is clear from the overwhelming focus on technological change of top EA organisations, including 80,000 hours, the Center for Effective Altruism, the Future of Humanity Institute, the Global Priorities Project, the Future of Life Institute, the Centre for the Study of Existential Risk, and the Machine Intelligence Research Institute.
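Here is a toy illustration of the kind of Bayesian updating referred to in the third tenet above. The prior and likelihoods are arbitrary numbers chosen for the sake of the example, not anyone’s actual credences:

```python
# Toy Bayesian update: revise credence in a hypothesis H after observing evidence E.
# The prior and likelihoods below are arbitrary illustrative numbers.

prior_h = 0.2              # P(H): prior credence in the hypothesis
p_e_given_h = 0.8          # P(E | H): probability of the evidence if H is true
p_e_given_not_h = 0.3      # P(E | not H): probability of the evidence if H is false

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(f"Posterior credence in H after observing E: {posterior_h:.2f}")
```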
I’m sure others could devise different ways of describing EA ideology that potentially look quite different to mine, but this is my best guess based on what I have observed. I believe these tenets are generally held by EAs, particularly those working at the major EA organisations, but are generally not widely discussed or critiqued. That this set of assumptions is fairly specific to EA should be evident if one reads various criticisms of effective altruism from those outside the movement. Although they do not always express their concerns using the same language that I have, it is often clear that the fundamental reason for their disagreement is the rejection of one or more of the five points mentioned above.
Conclusion
My purpose in this article has not been to contend that effective altruists shouldn’t have an ideology, or that the current dominant EA ideology (as I have outlined it) is mistaken. In fact, my view is that we can’t really get anywhere in rational investigation without certain starting assumptions, and these starting assumptions constitute our ideology. It doesn’t follow from this that any ideology is equally justified, but how we adjudicate between different ideological frameworks is beyond the scope of this article.
Instead, all I have tried to do is argue that effective altruists do in fact have an ideology. This ideology leads them to privilege certain questions over others, to apply particular theoretical frameworks to the exclusion of others, and to focus on certain viewpoints and answers while largely ignoring others. I have attempted to substantiate my claims by showing how different ideological frameworks would ask different questions, use different theoretical frameworks, and arrive at different conclusions to those generally found within EA, especially the major EA organisations. In particular, I argued that the typical case for focusing on AI safety can be modified to serve as an argument for a number of other cause areas, all of which have been largely ignored by most EAs.
My view is that effective altruists should acknowledge that the movement as a whole does have an ideology. We should critically analyse this ideology, understand its strengths and weaknesses, and then to the extent to which we think this set of ideological beliefs is correct, defend it against rebuttals and competing ideological perspectives. This is essentially what all other ideologies do – it is how the exchange of ideas works. Effective altruists should engage critically in this ideological discussion, and not pretend they are aloof from it by resorting to the refrain that ‘EA is a question, not an ideology’.