I just read Jacob Steinhardt's Research as a Stochastic Decision Process, found it very interesting, and realised that it seems relevant here as well (in particular in relation to Section 2.1). Some quotes:
In this post I will talk about an approach to research (and other projects that involve high uncertainty) that has substantially improved my productivity. Before implementing this approach, I made little research progress for over a year; afterwards, I completed one project every four months on average. Other changes also contributed, but I expect the ideas here to at least double your productivity if you aren't already employing a similar process.
Below I analyze how to approach a project that has many somewhat independent sources of uncertainty (we can often think of these as multiple "steps" or "parts" that each have some probability of success). Is it best to do these steps from easiest to hardest? From hardest to easiest? From quickest to slowest? We will eventually see that a good principle is to "reduce uncertainty at the fastest possible rate". [...]
Suppose you are embarking on a project with several parts, all of which must succeed for the project to succeed. [Note: This could be a matter of whether the project will "work" or of how valuable its results would be.] For instance, a proof strategy might rely on proving several intermediate results, or an applied project might require achieving high enough speed and accuracy on several components. What is a good strategy for approaching such a project? For me, the most intuitively appealing strategy is something like the following:
(Naive Strategy) Complete the components in increasing order of difficulty, from easiest to hardest.
This is psychologically tempting: you do what you know how to do first, which can provide a good warm-up to the harder parts of the project. This used to be my default strategy, but often the following happened: I would do all the easy parts, then get to the hard part and encounter a fundamental obstacle that required scrapping the entire plan and coming up with a new one. For instance, I might spend a while wrestling with a certain algorithm to make sure it had the statistical consistency properties I wanted, but then realize that the algorithm was not flexible enough to handle realistic use cases.
The work on the easy parts was mostly wasted: it wasn't that I could replace the hard part with a different hard part; rather, I needed to re-think the entire structure, which included throwing away the "progress" from solving the easy parts. [...]
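To make the ordering point concrete, here's a minimal sketch (my own toy numbers and framing, not taken from Steinhardt's post): a three-part project in which each part takes some time, independently succeeds with some probability, and work stops as soon as any part fails.

```python
import itertools

def expected_time(parts):
    """Expected time spent on a project whose parts are attempted in the
    given order, where work stops as soon as a part fails.
    parts: sequence of (time, success_probability) tuples.
    E[time] = t_1 + p_1*t_2 + p_1*p_2*t_3 + ...
    """
    total, p_alive = 0.0, 1.0
    for t, p in parts:
        total += p_alive * t   # this part is only reached if all earlier parts succeeded
        p_alive *= p
    return total

# Hypothetical parts: (time in days, probability of success) -- illustrative numbers only.
parts = {
    "easy":   (2, 0.95),   # quick, almost certain to work
    "medium": (5, 0.80),
    "hard":   (10, 0.40),  # slow, likely to fail
}

for order in itertools.permutations(parts):
    e = expected_time([parts[name] for name in order])
    print(f"{' -> '.join(order):28s} E[time] = {e:.2f} days")
```

On these numbers, easiest-first (easy -> medium -> hard) has the highest expected time (about 14.4 days), because you pay for the easy parts in full even in the worlds where the hard part later kills the project, while hardest-uncertainty-first (hard -> medium -> easy) has the lowest (about 12.6 days). Ordering parts by ascending t_i / (1 - p_i), i.e. roughly "most uncertainty resolved per unit time first", is what minimises expected time here, which is one way of cashing out "reduce uncertainty at the fastest possible rate".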
I expect that, on the current margin in longtermism:
fundamental research will tend to reduce uncertainty at a faster rate than intervention research
somewhat prioritising fundamental research would result in fewer hours "wasted" on relatively low-value efforts than somewhat prioritising intervention research would
(Though those are empirical and contestable claims, rather than being true by definition, and Steinhardt's post wasn't specifically about fundamental vs intervention research.)