Agreed with Mathias that the authors have a good grasp of what EA is and what causes EAs prioritize, and I appreciate how respectful the article is. Also like Mathias, I feel like I have some pretty fundamental worldview differences from the authors, so I’m not sure how well I can explain my disagreements. But I’ll try my best.
The article’s criticism seems to focus on the notion that EA ignores power dynamics and doesn’t address the root cause of problems. This is a pretty common criticism. I find it a bit confusing, and I don’t really understand what the authors consider to be root causes. For example, efforts to create cheap plant-based or cultured meat seem to address the root cause of factory farming because, if successful, they will eliminate the need to farm and kill sentient animals. AI safety work, if successful, could eliminate the root causes of all suffering and bring about an unimaginably good utopia. But the authors don’t seem to agree with me that these qualify as “addressing root causes”. I don’t understand how they distinguish between the EA work that I perceive as addressing root causes and the things they consider to be root causes. Critics like these authors seem to want EAs to do something that they’re not doing, but I don’t understand what it is.
[W]ealthy EA donors [do] not [go] through a (potentially painful) personal development process to confront and come to terms with the origins of their wealth and privilege: the racial, class, and gender biases that are at the root of a productive system that has provided them with financial wealth, and their (often inadvertent) role in maintaining such systems of exploitation and oppression.
It seems to me that if rich people come to terms with the origins of their wealth, they might conclude that they don’t “deserve” it any more than poor people in Kenya, and decide to distribute the money to them (via GiveDirectly) instead of spending it on themselves. Isn’t that ultimately the point? What outcome would the authors like to see from this self-reflection, if not using their wealth to help disadvantaged people?
EAs spend more time than any other group I know talking about how they are among the richest people in the world and how they should use their wealth to help the less fortunate. But this doesn’t seem to count in the authors’ eyes.
This article argues that EAs fixate too much on “doing the most good”, and then appears to argue that people should focus on addressing root causes/grassroots activism/power dynamics/etc. because that will do the most good—or maybe I’m misinterpreting the article because I’m seeing it through an EA lens. Sometimes it seems like the authors disagree with EAs about fundamental principles like maximizing good, and other times it seems like they just disagree about what does the most good. I wasn’t clear on that.
If they do agree in principle that we should do as much good as possible, then I would like to see a more rigorous justification for why the authors’ favored causes do more good than EA causes. I realize these causes are not as amenable to cost-effectiveness analysis as GiveWell’s top charities, but I would like to see at least some attempt at a justification.
For example, many EAs prioritize existential risk. There’s no rigorous cost-effectiveness analysis of x-risk, but you can at least make an argument that it’s more cost-effective than other things (a toy version is sketched after the list):
(1) Extinction is way worse than anything else.
(2) Extinction is not that unlikely.
(3) We can probably make significant progress on reducing extinction risk.
Bostrom basically makes this argument in Existential Risk Prevention as Global Priority.
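To make the shape of that argument concrete, here is a toy expected-value calculation. Every number in it is invented purely for illustration; the point is only that the three premises multiply together, so when the stakes are extreme, even a modest probability and modest tractability can leave a large expected value per dollar.

```python
# Toy sketch of the x-risk cost-effectiveness argument above.
# All numbers are invented for illustration only.

future_value = 1e15     # (1) value lost to extinction dwarfs everything else
p_extinction = 0.01     # (2) extinction is not that unlikely
risk_reduction = 0.001  # (3) fraction of that risk a program could plausibly remove
cost = 1e9              # program cost in dollars

ev_per_dollar = future_value * p_extinction * risk_reduction / cost
print(ev_per_dollar)  # ~10 units of value per dollar, on these made-up inputs
```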
My impression is there’s a worldview difference between people who think it’s possible in principle to make decisions under uncertainty, and people who think it’s not. I don’t have much to say in defense of the former position except to vaguely gesture in the direction of Phil Tetlock and the proven track record of some people’s ability to forecast uncertain outcomes.
More broadly, I would have an easier time understanding articles like these if they gave more concrete examples of what they consider to be the best things to work on, and why—something more specific than “grassroots activism”. For example (not saying I think the authors believe this, just that this is the general sort of thing I’d like to see):
We should support community groups that organize meetups where they promote the idea of the fundamental unfairness of global wealth inequality. We believe that once sufficiently many people worldwide are paying attention to this problem, people will develop and move toward a new system of government that will redistribute wealth and provide basic services to everyone. We aren’t sure what this government structure will look like, but we’re confident that it’s possible because [insert argument here]. We also believe this plan has a good chance of getting broad support because [insert argument here], and that once it has broad support, it has a good chance of actually getting implemented, because [insert argument here].
As for the question of “what do the authors consider to be root causes,” here’s my reading of the article. Consider the case of factory farming. Probably all of us agree that the following are all necessary causes:
(1) There’s lots of demand for meat.
(2) Factory farming is currently the technology that can produce meat most efficiently and cost-effectively.
(3) Producers of meat just care about production efficiency and cost-effectiveness, not animal suffering.
I suspect you and other EAs focus on item (2) when you are talking about “root causes.” In this case, you are correct that creating cheap plant-based meat alternatives will solve (2). However, I suspect the authors of this article think of (3) as the root cause. They likely think that if meat producers cared more about animal suffering, then they would stop doing factory farming or invest in alternatives on their own, and philanthropists wouldn’t need to support them. They write:
if all investment was directed in a responsible way towards plant-based alternatives, and towards safe AI, would we need philanthropy at all
Furthermore, they think that since the cause of (3) is a focus on cost-effectiveness (in the sense of minimizing cost per pound of meat produced), then focusing on cost-effectiveness (in the sense of minimizing cost per life saved, or whatever) in philanthropy promotes more cost-effectiveness-focused thinking, which makes (3) worse. And they think lots of problems have something like (3) as a root cause. This is what they mean when they talk about “values of the old system” in this quote:
By asking these questions, EA seems to unquestioningly replicate the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites.
As for the other quote you pulled out:
[W]ealthy EA donors [do] not [go] through a (potentially painful) personal development process to confront and come to terms with the origins of their wealth and privilege: the racial, class, and gender biases that are at the root of a productive system that has provided them with financial wealth, and their (often inadvertent) role in maintaining such systems of exploitation and oppression.
and the following discussion:
To be more concrete, I suspect what they’re talking about is something like the following. Consider a potential philanthropist like Jeff Bezos; the authors likely believe that Amazon has harmed the world through its business practices. Let’s say Jeff Bezos wanted to spend $10 billion of his wealth on philanthropy. There might be two ways of doing that:
(1) Donate $10 billion to worthy causes.
(2) Change Amazon’s business practices such that he makes $10 billion less money, but Amazon has a more positive (or less negative) impact on the world.
My reading is that the authors believe (2) would be of higher value, but Bezos (and others like him) would be biased toward (1) for self-serving reasons: Bezos would get more direct credit for doing (1) than (2), and Bezos would be biased toward underestimating how bad Amazon’s business practices are for the world.
---
Overall, I agree with you that if my interpretation accurately describes the authors’ viewpoint, the article does not do a good job of arguing for it. But I’m not really sure about the relevance of your statement:
My impression is there’s a worldview difference between people who think it’s possible in principle to make decisions under uncertainty, and people who think it’s not. I don’t have much to say in defense of the former position except to vaguely gesture in the direction of Phil Tetlock and the proven track record of some people’s ability to forecast uncertain outcomes.
Do you think that the article reflects a viewpoint that it’s not possible to make decisions under uncertainty? I didn’t get that from the article; one of their main points is that it’s important to try things even if success is uncertain.
Thanks, this comment makes a lot of sense, and it makes it much easier for me to conceptualize why I disagree with the conclusion.
Do you think that the article reflects a viewpoint that it’s not possible to make decisions under uncertainty?
I think so, because the article includes some statements like,
“How could anyone forecast the recruitment of thousands of committed new climate activists around the world, the declarations of climate emergency and the boost for NonViolentDirectAction strategies across the climate movement?”
and
“[C]omplex systems change can most often emerge gradually and not be pre-identified ‘scientifically’.”
Maybe instead of “make decisions under uncertainty”, I should have said “make decisions that are informed by uncertain empirical forecasts”.
I can get behind your initial framing, actually. It’s not explicit—I don’t think the authors would define themselves as people who don’t believe decision-making under uncertainty is possible—but I think it’s a core element of the view of social good professed in this article and others like it.
A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.
These people and groups select causes based only on perceived scale. They don’t necessarily think that malaria and AI risk aren’t important; they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.
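A minimal sketch of the contrast, with values and probabilities invented purely for illustration: the same two options get ranked oppositely depending on whether you weight by probability of success or ignore it.

```python
# Two ranking rules over the same invented options.
# Each entry: name -> (value if the intervention succeeds, probability of success)
causes = {
    "avert 100 malaria infections": (100, 0.9),
    "overthrow the global capitalist system": (1_000_000, 0.00001),
}

# EA-style rule: rank by probability-weighted (expected) value.
best_by_ev = max(causes, key=lambda c: causes[c][0] * causes[c][1])

# Alternative rule: rank by value conditional on success, ignoring probability.
best_by_conditional_value = max(causes, key=lambda c: causes[c][0])

print(best_by_ev)                 # avert 100 malaria infections (EV 90 vs 10)
print(best_by_conditional_value)  # overthrow the global capitalist system
```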
To me, this is not necessarily reflective of innumeracy or a lack of comfort with probability. It seems more like a really radical second- and third-order uncertainty about the value of certain kinds of reasoning: a deep-seated mistrust of numbers, science, experts, data, etc. I think the authors of the posted article lay their cards on the table in this regard:
the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites
These are people who associate the conventions and methods of science and rationality with their instrumental use in a system that they see as inherently unjust. As a result of that association, they’re hugely skeptical about the methods themselves, and aren’t able or willing to use them in decision-making.
I don’t think this is logical, but I do think it is understandable. Many students, in particular American ones (though I recognize that Guerrilla is a European group), have been told repeatedly, for many years, that the central value of learning science and math lies in getting a good job in industry. I think it can be hard to escape this habituation and see scientific thinking as a tool for civilization instead of as some kind of neoliberal astrology.
A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.
These people and groups select causes based only on perceived scale. They don’t necessarily think that malaria and AI risk aren’t important; they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.
I agree it would be good to have a diagnosis of the thought process that generates these sorts of articles, so we can respond in a targeted manner that addresses the model behind their objections, rather than in one which simply satisfies us that we have rebutted them. And this diagnosis is a very interesting one! However, I am a little sceptical, for two reasons.
EAs often break cause evaluation down into Scope, Tractability and Neglectedness, which is elegant as these correspond to three factors which can be multiplied together. You’re basically saying that these critics ignore (or consider unquantifiable) Neglectedness and Tractability. However, it seems perhaps a little bit of a coincidence that the factors they are missing just happen to correspond to terms in our standard decomposition. After all, there are many other possible decompositions! But maybe this decomposition just really captures something fundamental to all people’s thought processes, in which case this is not so much of a surprise.
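For readers unfamiliar with why the three factors “can be multiplied together”: defined with compatible units (this is roughly the 80,000 Hours framing), the intermediate terms cancel and the product is good done per extra dollar. A sketch with invented numbers:

```python
# Why Scope/Scale, Tractability and Neglectedness can be multiplied:
# with compatible units, the intermediate terms cancel, leaving good per dollar.
# All numbers are invented for illustration only.

scale = 1e6           # good done per % of the problem solved
tractability = 0.5    # % of the problem solved per % increase in resources
neglectedness = 2e-8  # % increase in resources bought per extra dollar

good_per_dollar = scale * tractability * neglectedness
print(good_per_dollar)  # ~0.01 units of good per marginal dollar
```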
But more importantly, this theory seems to give some incorrect predictions about cause focus. If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is they are not. Similarly, I would be very surprised if they were dismissive of e.g. residential recycling, or US criminal justice, as being too small-scale an issue to warrant much concern.
I think scale/scope is a pretty intuitive way of thinking about problems, which is I imagine why it’s part of the ITN framework. To my eye, the framework is successful because it reflects intuitive concepts like scale, so I don’t see too much of a coincidence here.
If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is they are not. Similarly, I would be very surprised if they were dismissive of e.g. residential recycling, or US criminal justice, as being too small-scale an issue to warrant much concern.
This is a good point. I don’t see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice. Still, you’re right that my “straw activist” would probably scoff at AI risk, for example.
I guess I’d say that the way of thinking I’ve described doesn’t imply an accurate assessment of problem scale, and since skepticism about the (relatively formal) arguments on which concerns about AI risk are based is core to the worldview, there’d be no reason for someone like this to accept that some of the more “out there” global catastrophic risks (GCRs) are GCRs at all.
Quite separately, there is a tendency among all activists (EAs included) to see convergence where there is none, and I think this goes a long way toward neutralizing legitimate but (to the activist) novel concerns. Anecdotally, I see this a lot—the proposition, for instance, that international development will come “along for the ride” when the U.S. gets its own racial justice house in order, or that the end of capitalism necessarily implies more effective global cooperation.
I don’t see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice.
It seems a lot depends on how you group together things into causes then. Is my recycling about reducing waste in my town (a small issue), preventing deforestation (a medium issue), fighting climate change (a large issue) or being a good person (the most important issue of all)? Pretty much any action can be attached to a big cause by defining an even larger, and even more inclusive problem for it to be part of.
A more charitable interpretation of the authors’ point might be something like the following:
(1) Since EAs look at quantitative factors like the expected number of lives saved by an intervention, they need to be able to quantify their uncertainty.
(2) It’s harder to quantify the results of interventions that target large, interconnected systems than of interventions that target individuals. For instance, consider health-improving interventions. The intervention “give medication X to people who have condition Y” is easy to test with an RCT. However, the intervention “change the culture to make outdoor exercise seem more attractive” is much harder to test: it’s harder to target cultural change to a particular area (and thus harder to run a well-controlled study), and the causal pathways are a lot more complex (e.g. it’s not just that people get more exercise; it might also encourage changes in land-use patterns, which would affect traffic and pollution, etc.), so it would be harder to identify which effects were due to the change.
(3) Thus, EA approaches that focus on quantifying uncertainty are likely to miss interventions targeted at systems. Since most of our biggest problems are caused by large systems, EA will miss the highest-impact interventions.
This is certainly a charitable reading of the article, and you are doing the right thing by trying to read it as generously as possible. I think they are indeed making this point:
the technocratic nature of the approach itself will only very rarely result in more funds going to the type of social justice philanthropy that we support with the Guerrilla Foundation – simply because the effects of such work are less easy to measure and they are less prominent among the Western, educated elites that make up the majority of the EA movement
This criticism is more than fair. I have to agree with it and simultaneously point out that of course this is a problem that many are aware of and are actively working to change. I don’t think that they’re explicitly arguing for the worldview I was outlining above. This is my own perception of the motivating worldview, and I find support in the authors’ explicit rejection of science and objectivity.
I think leftists are primarily concerned with oppression, exploitation, hierarchy and capitalism as root causes. That seems to basically be what it means to be a leftist. Poverty and environmental destruction are the result of capitalist greed and exploitation. Factory farming is the result of speciesist oppression and capitalism.
Oppression, exploitation, hierarchy and capitalism are also seen as causes of many of the worst ills in the world, perhaps even most of them.
EDIT: I’m not claiming this is an accurate view of the world; this is my (perhaps inaccurate) impression of the views of leftists.