Thanks, this comment makes a lot of sense, and it makes it much easier for me to conceptualize why I disagree with the conclusion.
Do you think that the article reflects a viewpoint that it’s not possible to make decisions under uncertainty?
I think so, because the article includes some statements like,
“How could anyone forecast the recruitment of thousands of committed new climate activists around the world, the declarations of climate emergency and the boost for NonViolentDirectAction strategies across the climate movement?”
and
“[C]omplex systems change can most often emerge gradually and not be pre-identified ‘scientifically’.”
Maybe instead of “make decisions under uncertainty”, I should have said “make decisions that are informed by uncertain empirical forecasts”.
I can get behind your initial framing, actually. It’s not explicit—I don’t think the authors would define themselves as people who don’t believe decision-making under uncertainty is possible—but I think it’s a core element of the view of social good professed in this article and others like it.
A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.
These people and groups select causes based only on perceived scale. They don’t necessarily think that malaria and AI risk aren’t important; they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.
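For concreteness, the two optimization rules can be sketched in a few lines. All the numbers here are hypothetical, chosen only to make the contrast visible:

```python
# Toy comparison of two cause-prioritization rules (all numbers are hypothetical).
causes = {
    # cause: (probability of success, value if successful)
    "avert 100 malaria infections": (0.95, 100),
    "overthrow the global capitalist system": (1e-7, 1_000_000),
}

# EA-style rule: rank by probability-weighted (expected) value.
by_expected_value = max(causes, key=lambda c: causes[c][0] * causes[c][1])

# Scale-only rule: rank by value conditional on success, ignoring probability.
by_conditional_value = max(causes, key=lambda c: causes[c][1])

print(by_expected_value)     # -> avert 100 malaria infections
print(by_conditional_value)  # -> overthrow the global capitalist system
```

Note that the scale-only rule is exactly what you get from expected value if you assign every cause the same probability of success, which is the equivalence described above.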
To me, this is not necessarily reflective of innumeracy or a lack of comfort with probability. It seems more like a really radical second- and third-order uncertainty about the value of certain kinds of reasoning—a deep-seated mistrust of numbers, science, experts, data, etc. I think the authors of the posted article lay their cards on the table in this regard:
the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites
These are people who associate the conventions and methods of science and rationality with their instrumental use in a system that they see as inherently unjust. As a result of that association, they’re hugely skeptical about the methods themselves, and aren’t able or willing to use them in decision-making.
I don’t think this is logical, but I do think it is understandable. Many students, in particular American ones (though I recognize that Guerrilla is a European group) have been told repeatedly, for many years, that the central value of learning science and math lies in getting a good job in industry. I think it can be hard to escape this habituation and see scientific thinking as a tool for civilization instead of as some kind of neoliberal astrology.
A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.
These people and groups select causes based only on perceived scale. They don’t necessarily think that malaria and AI risk aren’t important; they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.
I agree it would be good to have a diagnosis of the thought process that generates these sorts of articles, so we can respond in a targeted manner that addresses the model behind their objections, rather than one which simply satisfies us that we have rebutted them. And this diagnosis is a very interesting one! However, I am a little sceptical, for two reasons.
EAs often break cause evaluation down into Scope, Tractability and Neglectedness, which is elegant as these correspond to three factors that can be multiplied together. You’re basically saying that these critics ignore (or consider unquantifiable) Neglectedness and Tractability. However, it seems a bit of a coincidence that the factors they are missing just happen to correspond to terms in our standard decomposition. After all, there are many other possible decompositions! But maybe this decomposition really captures something fundamental to people’s thought processes, in which case this is not so much of a surprise.
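The multiplicative decomposition referred to here can be sketched as follows. The function name and numbers are my own illustration, with units following the usual 80,000 Hours presentation of the framework:

```python
# The ITN decomposition as a product of three factors whose units cancel,
# leaving good done per extra dollar. Illustrative only; all numbers hypothetical.
def marginal_impact(scale, tractability, neglectedness):
    """Good done per extra dollar, as the product of:
    scale         = good done per % of the problem solved
    tractability  = % of the problem solved per % increase in resources
    neglectedness = % increase in resources per extra dollar
    """
    return scale * tractability * neglectedness

# The critics described above effectively treat tractability and neglectedness
# as constant (or unquantifiable), which reduces the ranking to scale alone.
impact = marginal_impact(scale=1000, tractability=0.01, neglectedness=0.001)
```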
But more importantly, this theory seems to give some incorrect predictions about cause focus. If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is that they are not. Similarly, I would be very surprised if they were dismissive of, e.g., residential recycling or US criminal justice as too small-scale to warrant much concern.
I think scale/scope is a pretty intuitive way of thinking about problems, which I imagine is why it’s part of the ITN framework. To my eye, the framework is successful because it reflects intuitive concepts like scale, so I don’t see too much of a coincidence here.
If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is that they are not. Similarly, I would be very surprised if they were dismissive of, e.g., residential recycling or US criminal justice as too small-scale to warrant much concern.
This is a good point. I don’t see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice. Still, you’re right that my “straw activist” would probably scoff at AI risk, for example.
I guess I’d say that the way of thinking I’ve described doesn’t imply an accurate assessment of problem scale, and since skepticism about the (relatively formal) arguments on which concerns about AI risk are based is core to the worldview, there’d be no reason for someone like this to accept that some of the more “out there” GCRs are GCRs at all.
Quite separately, there is a tendency among all activists (EAs included) to see convergence where there is none, and I think this goes a long way toward neutralizing legitimate but (to the activist) novel concerns. Anecdotally, I see this a lot—the proposition, for instance, that international development will come “along for the ride” when the U.S. gets its own racial justice house in order, or that the end of capitalism necessarily implies more effective global cooperation.
I don’t see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice.
It seems a lot depends on how you group things into causes, then. Is my recycling about reducing waste in my town (a small issue), preventing deforestation (a medium issue), fighting climate change (a large issue), or being a good person (the most important issue of all)? Pretty much any action can be attached to a big cause by defining an even larger, even more inclusive problem for it to be part of.
A more charitable interpretation of the author’s point might be something like the following:
(1) Since EAs look at quantitative factors like the expected number of lives saved by an intervention, they need to be able to quantify their uncertainty.
(2) The results of interventions that target large, interconnected systems are harder to quantify than the results of interventions that target individuals. For instance, consider health-improving interventions. The intervention “give medication X to people who have condition Y” is easy to test with an RCT. However, the intervention “change the culture to make outdoor exercise seem more attractive” is much harder to test: it’s harder to target cultural change to a particular area (and thus harder to run a well-controlled study), and the causal pathways are much more complex (it’s not just that people get more exercise; the change might also encourage shifts in land-use patterns, which would affect traffic and pollution, and so on), so it would be harder to identify which effects were due to the intervention.
(3) Thus, EA approaches that focus on quantifying uncertainty are likely to miss interventions targeted at systems. Since most of our biggest problems are caused by large systems, EA will miss the highest-impact interventions.
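The argument in (1)–(3) can be caricatured in a few lines. The interventions and numbers below are hypothetical, and the “measurable” flag just stands in for whether an RCT-style estimate exists:

```python
# Sketch of point (3): an evaluator that only funds well-quantified interventions.
# All names and numbers are hypothetical.
interventions = {
    # intervention: (estimated lives saved per $1M, measurable?)
    "give medication X for condition Y": (50, True),
    "shift culture toward outdoor exercise": (200, False),
}

# Rule: only consider interventions whose effects can be rigorously quantified.
fundable = {name: est for name, (est, measurable) in interventions.items() if measurable}
best = max(fundable, key=fundable.get)
print(best)  # -> "give medication X for condition Y", despite the lower estimate
```

If the systemic intervention really does have the higher true impact, this selection rule misses it by construction, which is the worry the charitable reading attributes to the article.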
This is certainly a charitable reading of the article, and you are doing the right thing by trying to read it as generously as possible. I think they are indeed making this point:
the technocratic nature of the approach itself will only very rarely result in more funds going to the type of social justice philanthropy that we support with the Guerrilla Foundation – simply because the effects of such work are less easy to measure and they are less prominent among the Western, educated elites that make up the majority of the EA movement
This criticism is more than fair. I have to agree with it, while also pointing out that this is, of course, a problem many in the community are aware of and are actively working to change. I don’t think that they’re explicitly arguing for the worldview I was outlining above. This is my own perception of the motivating worldview, and I find support in the authors’ explicit rejection of science and objectivity.