A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.
These people and groups select causes based only on perceived scale. They don’t necessarily think that malaria and AI risk aren’t important, they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.
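The contrast between the two optimization strategies can be made concrete with a toy calculation. This is only a sketch: all probabilities and values below are invented for illustration, and the cause names just echo the examples above.

```python
# Two hypothetical causes, ranked two ways: by expected value
# (probability of success x value if successful) versus by value
# conditional on success alone. All numbers are made up.
causes = {
    "avert 100 malaria infections": {"p_success": 0.9, "value": 100},
    "overthrow global capitalism": {"p_success": 1e-9, "value": 1e9},
}

def top_cause(score):
    """Return the cause that maximises the given scoring function."""
    return max(causes, key=lambda name: score(causes[name]))

# Expected-value optimiser (the stereotypical EA approach):
ev_pick = top_cause(lambda c: c["p_success"] * c["value"])

# Conditional-on-success optimiser (perceived scale only):
scale_pick = top_cause(lambda c: c["value"])

print(ev_pick)     # avert 100 malaria infections
print(scale_pick)  # overthrow global capitalism
```

The two optimizers agree on which outcome would be bigger; they disagree only because one of them multiplies by the probability of getting there.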
I agree it would be good to have a diagnosis of the thought process that generates these sorts of articles, so we can respond in a targeted manner that addresses the critics' own model of their objections, rather than one which simply satisfies us that we have rebutted them. And this diagnosis is a very interesting one! However, I am a little sceptical, for two reasons.
EAs often break cause evaluation down into Scope, Tractability and Neglectedness, which is elegant because the three correspond to factors that can be multiplied together. You’re basically saying that these critics ignore (or consider unquantifiable) Neglectedness and Tractability. However, it seems a bit of a coincidence that the factors they are missing just happen to correspond to terms in our standard decomposition. After all, there are many other possible decompositions! But maybe this decomposition really does capture something fundamental to all people’s thought processes, in which case this is not so much of a surprise.
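For reference, the multiplication works roughly as follows. This sketch uses the common informal unit-cancellation presentation of the framework, with invented numbers and units:

```python
# The three ITN factors chain so that the intermediate units cancel,
# leaving "good done per extra unit of resources". Numbers invented.
importance = 1000.0   # good done per percent of the problem solved
tractability = 0.02   # percent solved per doubling of resources invested
neglectedness = 0.5   # doublings gained per extra unit of resources

marginal_cost_effectiveness = importance * tractability * neglectedness
print(marginal_cost_effectiveness)  # good done per extra unit of resources
```

Dropping Tractability and Neglectedness from the product is exactly what leaves a ranking by Importance alone.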
But more importantly, I think this theory gives some incorrect predictions about cause focus. If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is that they are not. Similarly, I would be very surprised if they were dismissive of, e.g., residential recycling or US criminal justice as issues of too small a scale to warrant much concern.
I think scale/scope is a pretty intuitive way of thinking about problems, which is I imagine why it’s part of the ITN framework. To my eye, the framework is successful because it reflects intuitive concepts like scale, so I don’t see too much of a coincidence here.
If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is that they are not. Similarly, I would be very surprised if they were dismissive of, e.g., residential recycling or US criminal justice as issues of too small a scale to warrant much concern.
This is a good point. I don’t see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice. Still, you’re right that my “straw activist” would probably scoff at AI risk, for example.
I guess I’d say that the way of thinking I’ve described doesn’t imply an accurate assessment of problem scale. And since skepticism about the (relatively formal) arguments behind AI risk concerns is core to the worldview, there’d be no reason for someone like this to accept that some of the more “out there” GCRs are GCRs at all.
Quite separately, there is a tendency among all activists (EAs included) to see convergence where there is none, and I think this goes a long way toward neutralizing legitimate but (to the activist) novel concerns. Anecdotally, I see this a lot—the proposition, for instance, that international development will come “along for the ride” when the U.S. gets its own racial justice house in order, or that the end of capitalism necessarily implies more effective global cooperation.
I don’t see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice.
It seems a lot depends on how you group things together into causes, then. Is my recycling about reducing waste in my town (a small issue), preventing deforestation (a medium issue), fighting climate change (a large issue), or being a good person (the most important issue of all)? Pretty much any action can be attached to a big cause by defining an ever larger, ever more inclusive problem for it to be part of.