I agree with your points, but from my perspective they somewhat miss the mark.
Specifically, your discussion seems to assume that we have a fixed, exogenously given set of propositions or factors X, Y, …, and that our sole task is to establish relations of correlation and causation between them. In that context, I agree that “wide surveys” etc. are preferable.
However, doing research also requires the following tasks:
Identify which factors X, Y, … to consider in the first place.
Refine the meaning of the considered factors X, Y, … by clarifying their conceptual and hypothesized empirical relationships to other factors.
Prioritize which of the myriad possible correlational or causal relationships between the factors X, Y, … to test.
I think that depth can help with these three tasks in ways in which breadth can’t.
For instance, in Will’s example, my guess is that the main value of considering the history of Objectivism does not come from moving my estimate for the strength of the hypothesis “X = romantic involvement between movement leaders → Y = movement collapses”. Rather, the source of value is including “romantic involvement between movement leaders” in the set of factors I’m considering in the first place. Only then am I able to investigate its relation to outcomes of interest, whether by a “wide survey of cases” or otherwise. Moreover, I might only have learned about the potential relevance of “romantic involvement between movement leaders” by looking in some depth at the history of Objectivism. (I know very little about Objectivism, and so don’t know if this is true in this instance; it’s certainly possible that the issue of romantic involvement between Objectivist leaders is so well known that it would be mentioned in any 5-sentence summary one would encounter during a breadth-first process. But it also seems possible that it’s not, and I’m sure I could come up with examples where the interesting factor was buried deeply.)
My model here squares well with your observation that a “common feature among superforecasters is they read a lot”, and in fact makes a more specific prediction: I expect we’d find that superforecasters read a fair amount (say, >10% of their total reading) of deep, small-n case studies, for example historical accounts of a single war or a particular economic policy, or biographies.
[My guess is that my comment is largely just restating Will’s points from his comment above in other words.]
(FWIW, I think some generators of my overall model here are:
Frequently experiencing disagreements with others, especially around AI timelines and takeoff scenarios, as noticing a thought like “Uh… I just think your overall model of the world lacks depth and detail” rather than “Wait, I’ve read about 50 similar cases, and only 10 of them are consistent with your claim”.
Semantic holism, or at least some of the arguments usually given in its favor.
Some intuitive and fuzzy sense that, in the terminology of this Julia Galef post, being a “Hayekian” has worked better for me than being a “Planner”, including for making epistemic progress.
Some intuitive and fuzzy sense of what I’ve gotten out of “deep” versus “broad” reading. E.g., my sense is that reading Robert Caro’s monumental, >1,300-page biography of New York City planner Robert Moses has had a significant impact on my model of how individuals can attain political power, albeit by adding a bunch of detail and drawing my attention to factors I previously wouldn’t have considered rather than by providing evidence for any particular hypothesis.)