How to do theoretical research, a personal perspective
“Where do new [algorithms] come from? I keep reading about someone who invented [an algorithm] to do something-or-other but there’s no mention of how.”
A shrug of robed shoulders. “Where do new books come from, Mr. Potter? Those who read many books sometimes become able to write them in turn. How? No one knows.”
“There are books on how to write—”
“Reading them will not make you a famous playwright. After all such advice is accounted for, what remains is mystery. The invention of new [algorithms] is a similar mystery of purer form.”
Most of the content in this document came out of extensive conversations with Paul Christiano.
(This document describes one way of thinking about how to do one particular type of research. There are other ways to productively do this kind of research, and other productive kinds of research. Consider this document peppered with phrases like “from my perspective”, “I think”, “my sense is”, etc.)
A lot of people have a vague mental picture of what empirical research looks like, which consists of exploring data, articulating hypotheses about the data, and running experiments that potentially falsify the hypothesis. I think people lack a similarly mechanistic picture of what theoretical research looks like, which results in them not knowing how to do theory, being skeptical of the possibility of theoretical progress, etc. I do think the difficulty of getting high-quality real-world feedback makes theoretical research more difficult than empirical research, but I think it’s possible to get enough real-world feedback when doing theory that you can still expect to make steady progress.
Currently, I think many people think of theory as “someone sits in a room and has a brilliant insight to solve the problem.” Instead, I think a more accurate picture is very similar to the picture one has for empirical research: the theorist explores some data, gradually builds an intuitive sense of what’s going on, articulates hypotheses that capture their intuition, and falsifies their hypotheses by testing them against data, all the while iteratively building up their understanding.
The key difference between the empirical researcher and the theoretical researcher is that while the empirical researcher can build intuition and falsify hypotheses by considering real-world data, the theorist, although ultimately grounded in the real world, must build intuition and falsify hypotheses by considering thought experiments, simple toy examples, computations, etc. Even mathematics, the purest of intellectual pursuits, roughly follows this process of iteration. Terence Tao:
[A]ctual solutions to a major problem tend to be arrived at by a process more like the following (often involving several mathematicians over a period of years or decades, with many of the intermediate steps described here being significant publishable papers in their own right):
1. Isolate a toy model case x of major problem X.
2. Solve model case x using method A.
3. Try using method A to solve the full problem X.
4. This does not succeed, but method A can be extended to handle a few more model cases of X, such as x’ and x”.
5. Eventually, it is realised that method A relies crucially on a property P being true; this property is known for x, x’, and x”, thus explaining the current progress so far.
6. Conjecture that P is true for all instances of problem X.
7. Discover a family of counterexamples y, y’, y”, … to this conjecture. This shows that either method A has to be adapted to avoid reliance on P, or that a new method is needed.
8. Take the simplest counterexample y in this family, and try to prove X for this special case. Meanwhile, try to see whether method A can work in the absence of P.
9. Discover several counterexamples in which method A fails, in which the cause of failure can be definitively traced back to P. Abandon efforts to modify method A.
10. Realise that special case y is related to (or at least analogous to) a problem z in another field of mathematics. Look up the literature on z, and ask experts in that field for the latest perspectives on that problem.
11. Learn that z has been successfully attacked in that field by use of method B. Attempt to adapt method B to solve y.
12. After much effort, an adapted method B’ is developed to solve y.
13. Repeat the above steps 1-12 with A replaced by B’ (the outcome will of course probably be a little different from the sample storyline presented above). Continue doing this for a few years, until all model special cases can be solved by one method or another.
14. Eventually, one possesses an array of methods that can give partial results on X, each having their own strengths and weaknesses. Considerable intuition is gained as to the circumstances in which a given method is likely to yield something non-trivial or not.
15. Begin combining the methods together, simplifying the execution of these methods, locating new model problems, and/or finding a unified and clarifying framework in which many previous methods, insights, results, etc. become special cases.
16. Eventually, one realises that there is a family of methods A^* (of which A was the first to be discovered) which, roughly speaking, can handle all cases in which property P^* (a modern generalisation of property P) occurs. There is also a rather different family of methods B^* which can handle all cases in which Q^* occurs.
17. From all the prior work on this problem, all known model examples are known to obey either P^* or Q^*. Formulate Conjecture C: all cases of problem X obey either P^* or Q^*.
18. Verify that Conjecture C in fact implies the problem. This is a major reduction!
19. Repeat steps 1-18, but with problem X replaced by Conjecture C. (Again, the storyline may be different from that presented above.) This procedure itself may iterate a few times.
20. Finally, the problem has been boiled down to its most purified essence: a key conjecture K which (morally, at least) provides the decisive input into the known methods A^*, B^*, etc. which will settle conjecture C and hence problem X.
21. A breakthrough: a new method Z is introduced to solve an important special case of K.
22. The endgame: method Z is rapidly developed and extended, using the full power of all the intuition, experience, and past results, to fully settle K, then C, and then at last X.
23. The technology developed to solve major problem X is adapted to solve other related problems in the field. But now a natural successor question X’ to X arises, which lies just outside of the reach of the newly developed tools… and we go back to Step 1.
How to do research
The Research methodology section of the ELK report and Paul’s My research methodology both articulate a high-level picture of the basic theoretical research loop, but lack details about what steps one takes besides “propose solutions” and “generate counterexamples”.
To lend more color to these vague descriptions and provide examples comprehensible to readers without an extensive background in number theory, I will describe what I think of as “modes” of research, articulate some key questions that get asked, and provide (stylized) historical examples related to ELK. I will approach this from the perspective of designing an algorithm, but this basic description will apply to many possible endeavors.
Suppose I’m trying to design an algorithm to accomplish task T (e.g. elicit latent knowledge) over a variety of situations S (e.g. ways my AI could look internally). My goal as a theorist is to develop a sufficiently accurate, unified, and precise intuition of how I hope to accomplish task T in every situation S such that I can just formalize the rules I’m using in my head into an algorithm and my problem has been solved. You might also say that the goal of someone trying to discover the laws of nature is to develop a precise enough model of nature in their head that they can just write it down and they have a law of nature.
This process of algorithm development roughly proceeds as follows:
Figure out what I want to happen in real-world cases
Figure out what I want to happen in simpler cases
Articulate a general algorithm that does what I want in simple cases
Search for examples where the algorithm does something that I don’t want
Become tentatively convinced that a certain class of algorithms can’t work
Refine my sense of what I want to happen in simpler cases
Articulate another general algorithm that does what I want in simple cases
Search for examples where the algorithm does something that I don’t want
...
I think of this process as roughly having four key modes:
Figuring out what you want to happen in real-world cases
Translating what you want in real-world cases into desiderata for simple cases
Articulating an algorithm for solving simple cases
Finding cases where your algorithm doesn’t do what you want
I will describe these modes as happening in sequence, but in practice they’re all happening at the same time, just with different amounts of emphasis. It’s common to spend a day or two in a particular mode. I would begin worrying if I spent more than ~three days trying to do a particular step in isolation, without the feedback loop that comes from transitioning between steps.
Figuring out what you want to happen in real-world cases
Ultimately, your intuition for what you want your algorithm to do has to be anchored on what you actually wanted to happen in the real world. The way to develop this intuition is to roughly “solve” the task by hand in many examples, and let the examples wash over you and percolate into your intuition. The key move in this research mode is asking questions of the form “Suppose the world was in situation S; what would accomplishing task T look like?”
Questions that I find helpful to ask myself:
Can I be more specific? Can I write pseudocode for my situation/solution?
Example: the Game of Life Example (a minimal code sketch of this kind of specificity appears after this list)
Specificity is important because imprecise thinking often hides non-trivial structure that your algorithm needs to exploit. See Mundane solutions to exotic problems for some examples.
Suppose I think that outcome X is bad. Can I tell a story where X actually leads to a bad outcome?
Example: a story where sensor tampering leads to humans being murdered.
Can I articulate a set of assumptions such that Y happening leads to good outcomes?
Can I think of a situation that only has part of the difficulty and handle that case separately?
Example: splitting off ontology identification as a base case, and not being concerned with recursive cases
Example: “Narrow” elicitation and why it might be sufficient
What if [weird thing] was going on? What would I want to happen then?
Example: Avoiding subtle manipulation
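To illustrate the level of specificity the first question above is asking for, here is a minimal, self-contained sketch of the Game of Life update rule in Python. This is only the bare cellular automaton; the Game of Life Example in the ELK report wraps much more around it (a predictor, sensors, a question about latent facts), so treat this as an illustration of the “write pseudocode for your situation” move rather than as the actual example.

```python
# Minimal sketch: one update step of Conway's Game of Life on a finite 2D grid.
# Cells outside the grid are treated as dead; this is just one possible convention.

def life_step(grid: list[list[int]]) -> list[list[int]]:
    """Apply one step of the Game of Life rules to a grid of 0s and 1s."""
    rows, cols = len(grid), len(grid[0])
    new_grid = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live neighbors (cells outside the grid count as dead).
            live_neighbors = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows
                and 0 <= c + dc < cols
            )
            if grid[r][c] == 1:
                # A live cell survives with exactly 2 or 3 live neighbors.
                new_grid[r][c] = 1 if live_neighbors in (2, 3) else 0
            else:
                # A dead cell becomes alive with exactly 3 live neighbors.
                new_grid[r][c] = 1 if live_neighbors == 3 else 0
    return new_grid

# Example: a "blinker" oscillates between a horizontal and a vertical bar.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(life_step(blinker))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Writing even this much down forces you to commit to details (boundary conditions, the exact update rule) that a phrase like “a simple cellular-automaton world” leaves hidden.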
Your goal in this mode is to develop an extremely precise sense of exactly what you do/don’t want to happen in a handful of cases to serve as the final arbiter for whether you’ve succeeded at developing an algorithm. It’s generally okay to be extremely unsure of what you want to happen in a large number of cases/not know exactly how to handle a lot of scenarios. However, it’s often worth spending some time trying to articulate high-level hopes for various cases that you’re confused about how to handle (e.g. Indirect normativity: defining a utility function)
ELK Examples
Historically, we started by considering cases like:
What if your AI is modeling the world as a low-level physics simulation? What do you want to happen?
What if your AI is reasoning about the world in terms of logical sentences and rules of inference? What do you want to happen?
A small breakthrough was when we articulated the Game of Life Example, which gave us an extremely precise sense of what we wanted to happen in at least one case.
Other cases that one might consider:
What if your predictor learned variational inference/Gibbs sampling/belief propagation/etc.? What would you want to happen?
What if your predictor learned minimax? What if it was literally Stockfish?
Potshot algorithms
Ideally, once you’ve developed a precise sense of what you want to happen for real-world cases, you can iterate on algorithms by checking to see if they do what you want. Unfortunately, real-world cases are often too complicated, which means that:
It’s really hard to articulate algorithms that handle real-world cases without first iterating on simple cases. Most complicated problems need to be broken down into parts and handled separately.
Example: writing complicated software consists of writing modular functions and composing them together.
Even if you articulate an algorithm that might work, it’s really hard to tell what it actually does. You can still discard algorithms if you think you can tell a plausible story of that algorithm failing, but this requires a lot of judgment about what counts as “plausible.”
Example: It’s very hard to tell what “train a model to imitate what a human would say, hope it generalizes naturally” will actually do in the real world, so you can’t iterate on that directly. You can try to iterate on stories about bad things that might happen, e.g. Bad behavior: do inference in the human Bayes net, but this is still less than ideal.
It’s still often worth trying to directly solve real-world cases by proposing “potshot” algorithms. Here are two reasons:
Often you’re not sure how hard your problem actually is. If it’s easy, you might just be able to solve it without developing substantial intuition about what the solution will look like.
Example: Often you can solve simple math problems via brute symbol manipulation, without really understanding what the proof “means”.
It’s worth developing a sense for why “naive” algorithms fail, to get a sense of what the key barriers to coming up with a working algorithm are. It’s also often worth specifying these algorithms and counterexamples fully to get maximum surface area, even though you don’t expect adding more details to change your bottom line about whether an algorithm fails.
Example: you can think of most of the proposals in the ELK report as these sorts of potshots. We had a non-trivial probability that one of them would work, but in the end none of them did and we gained significant intuition about what the key difficulties were likely to be.
Translating what you want in real-world cases into desiderata for simple cases
Once you’re satisfied that potshot approaches are unlikely to work, the hard work of iteration can begin. Since you can’t iterate against real world cases, you must develop a sense of what you want to happen in cases simple enough that you can work through by hand. The way to do this is to try to build simple toy models of real-world situations, and then transfer your hard-won intuition about what you want to happen in those cases onto the simple toy cases.
Questions that I find helpful to ask myself:
Can I be more specific? Can I specify literally every possible detail about the case?
Example: write down the case as an explicit boolean circuit.
Can I find two toy models of a single real-world situation? When I try to extend my intuition, do I get the same answer?
Example: a AND b is the same as ¬(¬a OR ¬b), so my intuition had better say the same thing in both cases.
How does this toy model relate to the overall picture of what I want to happen in the real world?
Can I articulate local rules for how combinations of cases should work? If I know what I want in a and what I want in b, what do I want in a AND b?
Example: Let f be the “what I want” function. We can conjecture that f(a AND b) = f(a) AND f(b).
Can I find places where my intuitions about local rules contradict? Can I derive impossibility results?
Example: my local rules for OR, NOT, and AND had better obey De Morgan’s laws (a small consistency check of this kind is sketched after this list).
If you have an argument for why a simplified model of a real world case is impossible, then you often want to check back in with the real world case. Do you still think it’s possible to solve? What structure does your simplified model lack that makes the simplified model impossible?
Example: it’s often possible to solve instances of NP-hard problems extremely quickly. What structure did those problems have that made this possible?
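Returning to the local-rules question above: here is a small brute-force consistency check. The rule f(a AND b) = f(a) AND f(b) comes from the example above; the particular rules for OR and NOT are hypothetical stand-ins. The point is only that syntactically different but logically equivalent circuits must receive the same answer.

```python
from itertools import product

# Hypothetical local rules for how "what I want" (f) composes across gates.
# These specific rules are just an illustration, not a proposal from the post.
def f_and(fa: bool, fb: bool) -> bool:
    return fa and fb          # conjecture from above: f(a AND b) = f(a) AND f(b)

def f_or(fa: bool, fb: bool) -> bool:
    return fa or fb           # analogous stand-in rule for OR

def f_not(fa: bool) -> bool:
    return not fa             # analogous stand-in rule for NOT

# Consistency check: "a AND b" and "NOT(NOT a OR NOT b)" are the same circuit,
# so the local rules had better give the same answer for every value of f(a), f(b).
for fa, fb in product([False, True], repeat=2):
    via_and = f_and(fa, fb)
    via_de_morgan = f_not(f_or(f_not(fa), f_not(fb)))
    assert via_and == via_de_morgan, (fa, fb)

# A broken rule, e.g. f_or(fa, fb) = fa, would fail this check; that kind of
# contradiction is exactly what the impossibility question above is after.
print("local rules are consistent with De Morgan on these cases")
```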
Your goal in this mode is to develop an extremely precise sense of what you want to happen for a handful of examples that are simple enough for you to write down formally, evaluate by hand, etc. Again, it’s generally okay to be extremely unsure of what you want to happen in all but a handful of cases.
ELK Example
I’m currently considering cases like:
Let x and y represent cameras.
Let d represent the diamond.
Let hx and hy represent hacking of camera x and y respectively.
Suppose that
x := d OR hx
y := d OR hy
It seems clear that ‘d’ would be a good direct translator in this case, while ‘x AND y’ would be bad. But is d AND ¬hx AND ¬hy also an acceptable direct translator?
This example can be extended in a few ways:
What if d, hx, hy were all the output of some complicated circuit C? How would you deal with the fact that hx and d were not literally independent?
Let ox and oy represent whether there is anything obstructing camera x and y respectively.
Suppose that:
x := (d AND ¬ox) OR hx
y := (d AND ¬oy) OR hy
What is the direct translator here?
What if hx and hy were strongly correlated?
In the limit, if hx := h and hy := h, then we would have:
x := d OR h
y := d OR h
By symmetry, it’s going to be impossible to distinguish between the diamond being present and hacking occurring. What structure am I missing from this simplified model?
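A minimal sketch that makes the toy case above fully explicit: it enumerates every setting of (d, hx, hy), compares the candidate “direct translators” discussed above against the actual presence of the diamond, and then checks the correlated-hacking variant. The encoding below is just one way to write the example down, not a canonical one.

```python
from itertools import product

# Toy circuit from above: each sensor reads "diamond present" if the diamond
# is actually there OR the corresponding camera has been hacked.
def sensors(d: bool, hx: bool, hy: bool) -> tuple[bool, bool]:
    x = d or hx
    y = d or hy
    return x, y

# Candidate "direct translators": functions of the latent variables that are
# supposed to report whether the diamond is really there.
candidates = {
    "d":                 lambda d, hx, hy: d,
    "x AND y":           lambda d, hx, hy: (d or hx) and (d or hy),
    "d AND not hx AND not hy": lambda d, hx, hy: d and not hx and not hy,
}

for name, f in candidates.items():
    # Settings on which the candidate disagrees with the actual diamond status.
    mismatches = [
        (d, hx, hy)
        for d, hx, hy in product([False, True], repeat=3)
        if f(d, hx, hy) != d
    ]
    print(name, "disagrees with d on:", mismatches or "no settings")

# Correlated hacking variant (hx = hy = h): the settings (d=True, h=False) and
# (d=False, h=True) produce identical sensor readings, so no function of the
# sensors alone can distinguish "diamond present" from "both cameras hacked".
print(sensors(True, False, False) == sensors(False, True, True))  # True
```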
Articulating an algorithm for solving simple cases
Once you have a sufficiently precise sense of what you want to happen, you want to articulate a general algorithm that doesn’t special-case the cases where you know what you want, but nevertheless has the desired behavior anyway. The way to do this is to consider the reasoning that led you to decide what you wanted and try to develop underlying rules or natural generalizations. Often, the process of trying to articulate a general algorithm will point out ambiguities in your sense of what you want to happen, leading to substantial revision/sharpening of your intuition.
Questions that I find helpful to ask myself:
How did you know what you wanted in case X? What about a very similar case X’?
If the answers for X and Y are different, what about the cases makes them different? Can I describe a sequence of cases that transforms X into Y?
For a given case X, can I describe two different real-world cases that X might capture? Do I think different things should happen in those two real-world cases?
Do all the cases I’m considering have some property P on which my algorithm crucially relies? Is P true of all cases?
A problem in machine learning is that complicated ML algorithms often do well on a problem only because linear regression (or some other simple baseline) does well, and the complicated algorithm succeeds by emulating linear regression. Similarly, sometimes your complicated algorithm does well only because it amounts to a simplicity prior, and all of the examples you’ve considered are solved by a simplicity prior.
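One practical guard against this is to always run the simple baseline on your examples. A minimal sketch, with made-up linearly generated data and 1-nearest-neighbour standing in for “some complicated method” (the data, dimensions, and choice of models here are all hypothetical illustrations):

```python
import numpy as np

# Toy data where the labels are a (noisy) linear function of the features.
# If a "complicated" method does well here, that alone tells you very little,
# because the simple linear baseline already does well.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
true_w = rng.normal(size=10)
y = (X @ true_w + 0.1 * rng.normal(size=500) > 0).astype(float)

X_train, X_test = X[:400], X[400:]
y_train, y_test = y[:400], y[400:]

# Simple baseline: linear classifier fit by least squares on +/-1 targets.
w = np.linalg.lstsq(X_train, 2 * y_train - 1, rcond=None)[0]
baseline_acc = ((X_test @ w > 0) == y_test).mean()

# "Complicated" method: 1-nearest-neighbour, standing in for any fancier model.
def one_nn_predict(x):
    return y_train[np.argmin(np.linalg.norm(X_train - x, axis=1))]

fancy_acc = np.mean([one_nn_predict(x) == yt for x, yt in zip(X_test, y_test)])

print(f"linear baseline accuracy: {baseline_acc:.2f}")
print(f"'complicated' method accuracy: {fancy_acc:.2f}")
# If both are high, your examples may only be testing "can you solve a linearly
# solvable problem", not whatever the complicated method is supposed to add.
```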
Often, when you try to articulate the general rules behind what you’re doing to “solve” simple cases, you’ll find that you were accidentally special-casing one of those simple cases, and you can’t quite see the connection between what you did in case 1 and what you did in case 2. In these situations, you can:
Try to articulate a precise distinction between “cases of type 1” and “cases of type 2”, and split the problem into two subproblems
Try to see how what you’re doing for case 1 is secretly the same as what you’re doing for case 2
Go “back to the drawing board” and meditate/stew on examples until your intuition has sharpened/unified enough for you to extract an algorithm
Generally, I think that if you can intuitively “solve” every case, then your intuition must be reliably executing some algorithm that solves every case. Your job is to just sharpen your intuition until it has unified, and extract the algorithm it’s executing.
ELK Examples
Unfortunately, describing examples in detail would require too much context for me to write down :(.
Finding cases where your algorithm doesn’t do what you want
After you have an algorithm, you want to articulate a case where it doesn’t do what you want. This is generally much easier than other parts of the process, because you have a precise sense of what you want and a precise algorithm.
Questions that I find helpful to ask myself:
Can I be more specific about the case?
Often it’s tempting to discard an algorithm because of a vague story for why it can’t work, but you generally learn a lot by articulating exactly why it doesn’t work. You generally want to be skeptical of an algorithm being “intuitively broken” and want to iterate on that intuition until you have a precise statement of exactly why it’s broken.
Can I (even more) precisely state what I want out of my algorithm?
If you have a sufficiently precise sense of what you want, then it’s often extremely easy to find a case where your algorithm fails.
Example: if you want a sorting algorithm that uses fewer than 2 n log2(n) comparisons, you can probably find this out by considering the list [4, 2, 1, 3]. It’s harder to tell whether an algorithm is O(n log n), because there is a bit of imprecision in the constants, but it’s still much easier than checking “fast enough.” (A small sketch of this kind of check appears after this list.)
Does the algorithm fail in an analogous real-world case? If it does, can I describe a simplified model of that case that also breaks the algorithm?
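As a sketch of how a precise spec makes failure-finding mechanical (referenced above), here is the sorting example with comparisons instrumented. Insertion sort is just a stand-in candidate; the 2 n log2(n) budget and the list [4, 2, 1, 3] come from the text.

```python
import math

# Candidate algorithm (insertion sort, used only as a stand-in) with its
# comparisons counted, so it can be tested against a precise requirement:
# "sorts any list of length n in fewer than 2 * n * log2(n) comparisons".
def insertion_sort_with_count(xs):
    xs = list(xs)
    comparisons = 0
    for i in range(1, len(xs)):
        j = i
        while j > 0:
            comparisons += 1                 # one comparison per loop test
            if xs[j - 1] <= xs[j]:
                break
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs, comparisons

case = [4, 2, 1, 3]
_, used = insertion_sort_with_count(case)
budget = 2 * len(case) * math.log2(len(case))
print(f"small case: used {used} comparisons, budget {budget:.1f}")  # 5 vs 16.0

# One small case is only a spot check. Because the spec is precise, a slightly
# larger adversarial input exposes this candidate immediately: a reversed list
# of length 100 needs 4950 comparisons against a budget of ~1328.8.
worst = list(range(100, 0, -1))
_, used_worst = insertion_sort_with_count(worst)
print(f"reversed case: used {used_worst}, budget {2 * 100 * math.log2(100):.1f}")
```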
ELK Examples
Again, unfortunately all the detailed examples I can think of require too much context for me to write down. Eliciting Latent Knowledge has many “worst-case” counterexamples to “potshot” algorithms that might give a general feel, but you typically want to be working more precisely than that.
Other random tips
Be as specific and concrete as possible!
This is generally the most important thing in research, made even more important by the lack of high-quality empirical feedback in theoretical research.
A lot of people tend to turn to real-world examples to make things more concrete. This is sometimes good, but many real-world examples have complicated structure that gets abstracted away. Something like “suppose that you have pictures of animals, some of which are dogs” is worse than “suppose that you have sequences of bits of length 1000 generated by taking the pointwise sign of samples from a multivariate Gaussian with mean 0 and covariance matrix Σ. Call a sequence a “dog” if the first 50 bits are 1.”
You want to bear in mind the intuitive connection your example has to the real world, but the actual example you’re working with should be as precisely specified as possible.
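A minimal sketch of the precisely specified “dog” example above. The length-1000 bit strings and the “first 50 bits are 1” rule are from the text; the particular covariance matrix is a placeholder, since the text leaves Σ unspecified.

```python
import numpy as np

# Precisely specified toy "dog" dataset: length-1000 bit strings obtained by
# taking the pointwise sign of samples from a multivariate Gaussian with mean 0
# and covariance Sigma; a sequence is a "dog" iff its first 50 bits are all 1.
rng = np.random.default_rng(0)
dim = 1000

# Placeholder covariance: identity plus mild uniform correlation. The text
# leaves Sigma unspecified; any positive semi-definite matrix would do.
sigma = np.eye(dim) + 0.3 * np.ones((dim, dim))

samples = rng.multivariate_normal(mean=np.zeros(dim), cov=sigma, size=200)
bits = (np.sign(samples) > 0).astype(int)        # pointwise sign -> {0, 1}
is_dog = bits[:, :50].all(axis=1)                # "dog" iff first 50 bits are 1

print(bits.shape, int(is_dog.sum()), "dogs out of", len(is_dog))
```

The precise version is clunkier to say out loud, but every property of the data you might later rely on is now on the page rather than hidden in the word “pictures.”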
Have courage
If figuring out what you want, finding counterexamples, or searching for algorithms leads you to a tedious and difficult place, oftentimes you want to just follow through and do it.
Halmos writes: “Another notable and enviable trait of von Neumann’s was his mathematical courage. If, in the middle of a search for a counterexample, an infinite series came up, with a lot of exponentials that had quadratic exponents, many mathematicians would start with a clean sheet of paper and look for another counterexample. Not Johnny! When that happened to him, he cheerfully said: “Oh, yes, a theta function...”, and plowed ahead with the mountainous computations. He wasn’t afraid of anything.”
Before doing computations/considering examples, ask “what do I hope to learn?”
Generally, unguided exploration is seldom that useful. Maintaining a sharp sense of what you’re hoping to get out of what you’re doing is important in cutting off research avenues that are fun to think about, but ultimately not that productive.
Even if you’re “trying to understand what’s going on,” you should be trying to focus on a specific algorithm/example that you don’t understand, have a clear goal that understanding will help you achieve, etc. Avoid things that smell like philosophy at great cost (but not all cost, for sometimes it will be necessary).
If there are multiple “next-steps”, breadth-first search is typically better.
There’s often cross-pollination between different research directions.
Breadth-first prevents you from being stuck in a dead-end for too long, assuming one of the steps is a good step forward.
You should be making progress
On timescales of days and weeks, you should be able to point to concrete examples/algorithms that constitute “units of progress” towards your final goal. Don’t be content with a sense that you understand more of what’s going on that isn’t grounded in concrete algorithms or examples.
You generally want to be in “no holds barred” optimization world
That is, you generally want to know what type of object you’re searching for (e.g. have a precise sense of what counts as an algorithm/counterexample), and a set of formal desiderata for that object, such that you can just optimize for finding an object that meets those desiderata.
A common example of such an object is a proof of a particular theorem statement.
Coming from an undergrad background in social science/liberal arts, I was honestly kind of let down by this discussion of “theory.” At least for me, when people are comparing between “empirical” vs. “theoretical” research, one of the key features of theoretical research is that one doesn’t get to just “[falsify] their hypotheses by testing them against data” or look for simple counterexamples to their “algorithm.”
For example, if I’m trying to determine something like “did the creation/usage of machine guns asymmetrically benefit insurgent forces against counterinsurgents” (my thesis) I can’t just go out, grab some data, throw it at the hypothesis, then get a reliable result: there just isn’t sufficient granular, reliable, relevant data. (I literally had an entire ~3-page subsection in my thesis on why even a single great case pair seemed inherently unlikely to exist in the recorded history of warfare.)
Thus, most of the “testing” in theoretical research has to be done at the theoretical level, which in my view is distinguished by its reliance on inference (rather than more-deductive arguments), non-mechanical calculations (i.e., in your head or on paper with current methods, which I think ought to change!)—including argument clash, oversimplified simulations, etc. These differences matter often because they involve a lower degree of legibility[1], transparency, reproducibility/reliability, reputation stake [2], etc.—compared to traditional empirical methods like statistical hypothesis testing/randomized controlled trials, engineering prototypes, high-fidelity simulations, direct observation tests (e.g., spectrometry), etc.
Ultimately, I don’t think this post tells people how to do that theoretical research; in my perhaps-overly-cynical evaluation, it basically just seems to say “think about the problem until you can figure out how to test it with traditional empirical methods.” (A lot of this post literally seems to be about engineering and mathematics problems!)
But perhaps I’m just misunderstanding something; could you elaborate on how you distinguish theoretical vs. empirical research?
i.e., ease of understanding arguments, including what is or isn’t being assumed.
For example, it might be harder to detect that a writer is strawmanning their intellectual opponents or such mischaracterizations might even be welcomed by some readers, whereas fraudulent data in STEM is egregious. (Part of the issue is that people might have more plausible deniability for strawmanning: it’s harder to determine that they were acting in bad faith, and/or it might even be hard to determine that they were in fact misrepresenting others’ views.)
I understood “theoretical” to mean your ‘data’ comes from thought experiments—eg “How would I, as a counterinsurgent general, use machine guns in creative ways against insurgents?”
I’m a bit confused by your distinction: the question “Did (as opposed to Would) machine guns asymmetrically benefit insurgents?” seems entirely empirical to me. If you can’t find reliable data, that just makes it hard, not theoretical.
Well, yeah, what else would you expect? The post describes how you might use argument clashes and oversimplified simulations in thinking about the problem.
The use of “did” vs “would” wasn’t very intentional or precise.
As to the empirical vs. theoretical nature of my hypothesis, it is indeed claiming that certain relationships empirically existed (and, with a lot of caveats, may continue into the future). However, my point was that the research methods I used were much more “theoretical”: I couldn’t do a large-N empirical analysis or controlled experiments to even establish meaningfully-controlled correlation (let alone causation) between the dependent, control, and independent variables, and instead had to rely on lines of reasoning such as:
Hypothetical scenarios (e.g., imagine comparing an ambush where both parties have machine guns vs. one where neither side has machine guns), which are impractical to test clinically/experimentally (i.e., with high fidelity to reality)
More-qualitative (and somewhat subjective) comparison of case studies, using a large amount of argumentation/theoretical reasoning to deal with the many gaps and flaws in the case comparison (given that, as I noted, there didn’t seem to be any good case comparison pair in the historical record)
Agreement with existing theoretical and/or empirical concepts in the literature, such as Biddle’s Modern System.
Again, perhaps I was being a bit too imprecise with my language? My point is that for some questions (arguably including my thesis), theoretical argumentation has to bear a lot of the analytical burden. This analytical burden can include things like:
Explaining why variables Q, K, and W—none of which you could experimentally control for—probably do or don’t affect the relationship;
Explaining why your very limited sample size can probably be extrapolated to some other cases;
Explaining why some metric is probably a decent proxy for what you actually are trying to measure;
Reasoning about hypothetical scenarios which will not actually empirically occur.
(Caveat: all of those activities can be supported by direct reference to supporting data in some situations, but not always.)
In contrast, it seems that much of the “theoretical” research methods described in this post are basically just “use lots of thinking to figure out how to test this empirically against data [at which point these empirical methods do almost all the legwork.]”
There is perhaps some debate to be had over the meaning of “theoretical” research methods: do mathematical proofs or algorithms count as theory? While I’m not universally opposed to using the term in such a context, I think it is much less helpful to use the term “theory” when you’re trying to juxtapose it with empirical methods. This especially feels true if a major reason you support a mathematical proof or algorithm is based on your determination that “this empirically works every single time.” When teaching research methods, I think it’s important to emphasize the differences that I described previously (e.g., legibility/transparency, reliability/consistency, reputation stake) which, in my view, have tended to make empirical methods so much more effective when they can be used.
I think this post applies mainly to mathematical theoretical research, I’m not sure it generalizes well to other kinds of theory work.
If that was the intention, I think the title and content should have more clearly expressed that. I’m unclear on what the significant difference is between theory and empirics in math; I think most of the value in distinguishing between theoretical and empirical research comes from highlighting the inability to simply “use data or counter-examples to falsify/test a hypothesis”—but in this case, that doesn’t apply.
Thanks, this was helpful. I often hear from people who don’t really know what it means to do certain types of theoretical research, and I think this will go a decent way towards addressing their uncertainty.
Great selection of sources on the work of theorizing in the social sciences, if of interest:
https://twitter.com/LarsJohannessen/status/1559221205441323013
https://twitter.com/LarsJohannessen/status/1560169368146903042