I think having a roadmap, and choosing subproblems as close as possible to the final problem, are often good strategies, perhaps in a large majority of cases.
However, I think there are at least three important types of exceptions:
1. When it’s not possible to identify any clear subproblems, or when their closeness to the final problem is unclear (perhaps AI alignment is an example, though I think this is less true today than it was, say, 10 years ago, at least if you buy e.g. Paul Christiano’s broad agenda).
2. When the close subproblems, or even all known subproblems, have resisted solution for a long time, e.g. the Riemann hypothesis.
3. When one needs tools/subproblems that seem closely related only after having invested a lot of effort investigating them, rather than in advance. E.g. squaring the circle: “if you want to understand constructions with ruler and compass, do a lot of constructions with ruler and compass” was a bad strategy. Though admittedly it’s unclear whether one can identify examples of this type in advance unless they are also examples of one of the previous two types.
Also, I of course acknowledge that there are limits to the idea of exploring subproblems that are less closely related. For example, no matter what mathematical problem you want to solve, it would be a very bad strategy to study dung beetles or to become a priest. And to be fair, at least in hindsight, the idea of studying close subproblems will almost always appear to have been correct. To return to the example of squaring the circle: once people had realized that the set of points you can construct with ruler and compass is closed under basic algebraic operations in the complex plane, it was relatively easy to see how certain problems in algebraic number theory were closely related. So the problem was less that it’s intrinsically better to focus on less related subproblems, and more that people didn’t properly understand what would count as helpfully related.
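To make the squaring-the-circle example concrete, here is the standard textbook sketch of why the connection to algebra settles the problem (this is the classical argument, not something claimed in the comment above):

```latex
% Sketch: impossibility of squaring the circle.
%
% Starting from a unit segment, the lengths constructible with ruler
% and compass form a field $K \subseteq \mathbb{R}$ that is closed
% under taking square roots. In particular, every $x \in K$ is
% algebraic over $\mathbb{Q}$ (of degree a power of $2$).
%
% Squaring the unit circle means constructing a square of area $\pi$,
% i.e. a segment of length $\sqrt{\pi}$. If $\sqrt{\pi}$ were
% constructible it would be algebraic, and hence so would
% $\pi = \left(\sqrt{\pi}\right)^2$. But Lindemann (1882) proved that
% $\pi$ is transcendental, so
% \[
%   \sqrt{\pi} \notin K,
% \]
% and the construction is impossible.
```

The point for the discussion: the decisive subproblem (field extensions, transcendence) looked unrelated to compass constructions until the algebraic reformulation was in hand.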
Regarding the first two types, I think it’s practically never the case that no progress can be made: one can always make progress, even if that progress consists of work on analogies or on heuristically relevant techniques. The Riemann hypothesis is actually a great example of this; there are many paths currently being pursued to help us understand it better, even if there aren’t any especially promising reductions (I’m not sure whether that’s the case). But I guess your point here is that these are distinct markers for how easy it is to make progress.
What is the alternative strategy you are suggesting in those exceptional cases? Is it to work on problems that are only weakly related, whose connection to the final problem is unclear, but which are more tractable?
If so, I think two alternative strategies are to try harder to find something more related, or to move to a different project altogether. Of course, this all lies on a continuum, so it’s a matter of degree.