It becomes clear that there’s a lot of value in nailing down your intervention as best you can: having tons of different reasons to think something will work. In this case, we’ve got:
It’s common sense that not being bitten by mosquitoes is nice, all else equal.
The global public health community has clearly accomplished lots of good for many decades, so their recommendation is worth a lot.
Lots of smart people recommend this intervention.
There are strong counterarguments to all the relevant objections, and these objections are mostly shaped like “what about this edge case” rather than taking issue with the central premise.
Even if one of these fails, there are still the others. You’re very likely to be doing some good, both probabilistically and in a more fuzzy, hard-to-pin-down sense.
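The “even if one of these fails” point can be made concrete with a toy probability calculation. This is a deliberately simplified sketch (it treats the lines of support as independent, which real arguments rarely are, and the numbers are purely illustrative):

```python
# Toy model of redundant support: an intervention backed by several
# partly independent lines of argument. If each line holds with
# probability p, the chance that at least one holds is 1 - (1 - p)^n.

def prob_some_support_holds(p: float, n: int) -> float:
    """Probability that at least one of n independent lines of support holds."""
    return 1 - (1 - p) ** n

# A single 70%-confident argument fails 30% of the time...
single = prob_some_support_holds(0.7, 1)  # 0.7
# ...but four such (idealized, independent) lines almost never all fail.
four = prob_some_support_holds(0.7, 4)    # 0.9919
```

The independence assumption does real work here, which is exactly why the signs of robustness below (people with very different backgrounds agreeing, predictions holding across different areas) matter: they are evidence the lines of support are genuinely distinct rather than one argument in four costumes.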
Sequence thinking involves making a decision based on a single model of the world: breaking down the decision into a set of key questions, taking one’s best guess on each question, and accepting the conclusion that is implied by the set of best guesses (an excellent example of this sort of thinking is Robin Hanson’s discussion of cryonics). It has the form: “A, and B, and C … and N; therefore X.” Sequence thinking has the advantage of making one’s assumptions and beliefs highly transparent, and as such it is often associated with finding ways to make counterintuitive comparisons.
Cluster thinking – generally the more common kind of thinking – involves approaching a decision from multiple perspectives (which might also be called “mental models”), observing which decision would be implied by each perspective, and weighing the perspectives in order to arrive at a final decision. Cluster thinking has the form: “Perspective 1 implies X; perspective 2 implies not-X; perspective 3 implies X; … therefore, weighing these different perspectives and taking into account how much uncertainty I have about each, X.” Each perspective might represent a relatively crude or limited pattern-match (e.g., “This plan seems similar to other plans that have had bad results”), or a highly complex model; the different perspectives are combined by weighing their conclusions against each other, rather than by constructing a single unified model that tries to account for all available information.
A key difference with “sequence thinking” is the handling of certainty/robustness (by which I mean the opposite of Knightian uncertainty) associated with each perspective. Perspectives associated with high uncertainty are in some sense “sandboxed” in cluster thinking: they are stopped from carrying strong weight in the final decision, even when such perspectives involve extreme claims (e.g., a low-certainty argument that “animal welfare is 100,000x as promising a cause as global poverty” receives no more weight than if it were an argument that “animal welfare is 10x as promising a cause as global poverty”).
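One way to picture the “sandboxing” move is as a cap on how much influence any single low-certainty perspective can exert, no matter how extreme its claim. The sketch below is hypothetical (the scoring scheme, the cap, and all the numbers are mine, not Holden’s) and just illustrates the qualitative behavior described above:

```python
# Hypothetical sketch of cluster thinking with "sandboxed" perspectives.
# Each perspective gives a verdict for X (+1) or against (-1), with a
# strength and a certainty. Low-certainty perspectives have their
# effective strength capped, so extreme but shaky claims can't dominate.

from dataclasses import dataclass

@dataclass
class Perspective:
    verdict: int      # +1 supports X, -1 opposes X
    strength: float   # how strongly this perspective pushes its verdict
    certainty: float  # 0..1, how robust (non-Knightian) this perspective is

def cluster_decision(perspectives, cap: float = 2.0) -> int:
    """Weigh perspectives, capping the influence of low-certainty ones."""
    total = 0.0
    for p in perspectives:
        # Sandbox: below a certainty threshold, strength is capped.
        effective = min(p.strength, cap) if p.certainty < 0.5 else p.strength
        total += p.verdict * p.certainty * effective
    return +1 if total > 0 else -1

# A shaky "100,000x" claim gets no more weight than a shaky "10x" claim:
shaky_extreme = Perspective(verdict=+1, strength=100_000, certainty=0.2)
shaky_modest  = Perspective(verdict=+1, strength=10, certainty=0.2)
solid_against = Perspective(verdict=-1, strength=5, certainty=0.9)
```

Under this scheme, one solid perspective against X outweighs the shaky extreme perspective for it. Sequence thinking, by contrast, would let the 100,000x strength flow straight through the multiplication and dominate the conclusion.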
I really liked this framing, and think it could be a post on its own! It points at something fundamental and important, like “Prefer robust arguments”.
You might visualize an argument as a toy structure built out of building blocks. Some kinds of arguments are structured as towers: one conclusion piled on top of another, capable of reaching tremendous heights. But: take out any one block and the whole thing comes crumbling down.
Other arguments are like those Greek temples with multiple supporting columns. They take a bit more time to build, and might not go quite as high; but no single column has to hold the entire weight. I call such arguments “robust”.
One example of a robust argument that I particularly liked: the case for cutting meat out of your diet. You can make a pretty good argument for it from a bunch of different angles:
Animal suffering
Climate/reducing emissions
Health and longevity
Financial cost (price of food)
By preferring robustness, you are more likely to avoid Pascalian muggings, more likely to work on true and important areas, more likely to have your epistemic failures be graceful.
Some signs that an argument is robust:
Many people who think hard about this issue agree
People with very different backgrounds agree
The argument does a good job predicting past results across a lot of different areas
Robustness isn’t the only, or even the main, quality of an argument; there are some conclusions you can only reach by standing atop a tall tower! Longtermism feels shaped this way to me. But this also suggests that you can do valuable work by shoring up the foundations and assumptions implicit in a tower-like argument, e.g. by red-teaming the assumption that future people are likely to exist conditional on us doing a good job.
Yeah! This was actually the first post I tried to write. But it petered out a few times, so I approached it from a different angle and came up with the post above instead. I definitely agree that “robustness” is something that should be seen as a pillar of EA—boringly overdetermined interventions just seem a lot more likely to survive repeated contact with reality to me, and I think as we’ve moved away from geeking out about RCTs we’ve lost some of that caution as a community.
The excerpt above is from an excellent old GiveWell blogpost by Holden Karnofsky on exactly this topic, Sequence Thinking vs Cluster Thinking.
Holden also linked other writing heavily overlapping with this idea:
Haha thanks for pointing this out! I’m glad this isn’t an original idea; you might say robustness itself is pretty robust ;)