Learnings about literature review strategy from research practice sessions
Earlier this year, a few other early-career researchers and I decided to set up sessions where we could deliberately practice certain research skills in a high-feedback environment, building intuitions and refining techniques. This grew out of frustration with the slow feedback loops typical of research, the exciting prospect of improving skills that should pay off consistently across an entire career, and a lack of good resources on how to actually do research.
This document collects some of the lessons learned from our first research practice experiment: a series of five literature review practice sessions with 3-6 longtermist researchers each. Note, though, that many of the benefits came from improved intuitions or small details about how to do literature reviews, and were hard to boil down into concrete advice. For example, we found this to be a useful environment to explore the finer details of research that do not otherwise come up easily in conversation (e.g. how many searches on Google Scholar do you run before you start reading papers?), to build intuitions on research taste (e.g. quickly judging the trustworthiness of a paper), and to better understand what we actually do when we do literature reviews (think for a moment: do you actually know what steps you take when starting a review?).
As such, I think it is likely worthwhile to try the exercise for yourself with a group of peers. The structure we used was as follows:
Identify an interesting question (before starting): The question can be almost anything, so long as it is well structured (e.g. try to avoid packing two questions into one). It’s worth playing around with the type of question too and seeing how that affects the literature review strategy.
Do a quick literature review on the topic (50-60 minutes): Focus on answering the question and producing a readable output. A good framing for the output is to produce something you could confidently come back to after a month of not thinking about the topic and be ready to continue the project/review.
Read and comment on each other’s work (15 minutes): While reading and critiquing others’ work, each person also focused on what they could have done better, given what they saw others had accomplished.
Come together to discuss (45 minutes): This would typically touch on what the others found, how they found it, where we got stuck, and potential ways to improve the techniques we used. Once the reviews were done, we generally spent little time on the subject matter, keeping the focus on methods rather than content.
The following are techniques we identified as useful. We believe these should apply to literature reviews on almost any topic. Note, though, that we focused primarily on questions within the social sciences, so take this advice with a grain of salt if you are doing technical work.
Focus on breadth first! It’s best to have a pile of papers ready to look at before diving too deep into any one of them. Some papers are far more valuable than others, and if you dive into the first one you find, you might miss the highest-value papers. Searching smartly is often more effective than going down the citation trail (unless you find the right meta-analysis, that is). Be aware, however, that it’s possible to spend too much time on this step (like any of the others), though we tend to think people usually spend too little. It might even be worth setting a timer for this step to make sure you don’t cut it off too soon or too late. For the exercise I’d suggest (with high uncertainty) 15 minutes; for a real literature review it really depends on context.
Try a variety of search terms: It’s almost always useful to spend a few minutes early on generating synonyms and other potentially useful search terms, and to keep thinking about good search terms throughout the early stages of your lit review. It’s easy to assume you’ve searched for all the relevant terms once you’ve found something a bit promising, so think a bit outside the box early on and keep looking for new terms to search throughout the process. Some search terms yield much better results than others, sometimes surprisingly so. For example, we once found that “private tutoring effect” yielded irrelevant papers while “one-on-one tutoring effect” gave exactly what we were looking for.
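If you want a quick way to eyeball how different phrasings perform, here is a minimal sketch assuming the third-party `scholarly` Python package (my assumption; it was not part of our sessions, and Google Scholar may rate-limit automated queries). It just prints the first few titles returned for each phrasing so you can compare which query surfaces the relevant literature.

```python
# Minimal sketch: compare how different search phrasings perform on Google
# Scholar, assuming the third-party `scholarly` package is installed.
from scholarly import scholarly

# The two phrasings from the example above: one surfaced mostly irrelevant
# papers, the other matched the intended literature.
QUERIES = [
    "private tutoring effect",
    "one-on-one tutoring effect",
]

for query in QUERIES:
    print(f"\n=== {query} ===")
    results = scholarly.search_pubs(query)
    # Skim only the first few titles per phrasing; the goal here is breadth,
    # not reading any single paper in depth.
    for _, pub in zip(range(5), results):
        print("-", pub["bib"].get("title", "(no title)"))
```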
Think before you search: For some questions it can help to first sketch a rough model or write-up of how you would answer the question, or to break the question down into its constituent parts, so that you can search through the topic in a way that makes sense to you and/or answers questions of interest posed by the model. E.g. if you are researching what makes people good researchers, you might find it useful to first write out what you think the answer is, which factors seem most important, etc. While this may seem to carry the risk of biasing your search, I’d argue that it’s better to have explicit, known possibilities of bias than to let implicit assumptions drive your search without your being aware of them. This way you can:
Be wrong, notice that you were wrong, and update your thinking. Otherwise you might read through new information, think you already knew it, and fail to incorporate it well.
Consciously look for information that contrasts with what you think (rather than letting an implicit bias steer your search).
Find useful search terms by breaking the area down into component parts that seem more likely to have a solid academic literature around them.
Connectedpapers.com is a useful search tool on almost all occasions once you’ve found a relevant paper to feed into it. Keep in mind, though, that it sometimes fails to make the right connections, so it’s good practice to generate graphs from a few different papers.
To find meta-analyses with this tool, it can be helpful to use the derivative works tab once you’ve generated a graph from a useful paper.
To summarize findings, I (Alex) generally make a list of potential answers to the question and include the citations supporting each. This might be worth trying, though we haven’t established that it is reliably the best way of consolidating a literature review.
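As a purely hypothetical illustration of that format (placeholder answers and citations, not anything from our reviews), the structure amounts to something like:

```python
# Hypothetical illustration of the "list of potential answers, each with its
# supporting citations" summary format; all entries are placeholders.
summary = {
    "Candidate answer A": ["Author (2019)", "Author & Coauthor (2021)"],
    "Candidate answer B": ["Other Author (2020)"],
}

for answer, citations in summary.items():
    print(f"{answer}: {', '.join(citations)}")
```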
Thanks to Nora Amman, Jan Brauner, and Ben Snodin for feedback. Thanks to Nora Amman, Ondrej Bajgar, Jan Brauner, Lukas Finnveden, Chris van Merwijk, and others for attending the sessions, coming up with some of the above ideas, and helping improve the process!
This article on doing systematic reviews well might also be of interest if you want to refine your process to produce a publishable review. It’s written by environmental researchers, but I think the ideas should be fairly general (for instance, they point to Cochrane for medical reviews).
I’d also recommend having a look at Iris.ai. It is somewhat similar to ConnectedPapers but works off a concept map (I think) rather than a citation map, so it can discover semantic linkages between your paper of interest and others that aren’t directly connected through reference links. I’ve just started looking at it this week and have been quite impressed with the papers it suggested.
The idea of doing deliberate practice on research skills is great. I agree that learning to do good research is difficult and poor feedback mechanisms certainly don’t help. Which other skills are you aiming to practice?
Iris.ai sounds potentially useful, I’ll definitely check it out!
So far we’ve done some sessions on inspectional note-taking, finding the logical argument structure of articles, and breaking down questions into subquestions. I’m not too sure what the next big thing will be, though. Some other ideas have been to practice finding flaws in articles (but that takes a bit too long for a two-hour session and is too field-specific), abstract writing, making figures, and picking the right research question.
I haven’t been spending much time on this recently, though, so the ideas for actually implementing these aren’t top of mind.
Yes! It’s hard to convey that you need to have already done a literature search to know what you need to search in the first place.
I second “Focus on breadth first!”. Googling is cheap. Search longer than you think you need to. An additional good paper can be decisive in forming a view on a new topic.
I think “going down the citation trail” can often be very fruitful, especially if you search citations within a foundational article. E.g.,
Also: a good template can help you organize and focus your search. I only sorted the studies I found by their most salient features (the 4 colored 0/1 columns) after I’d gathered quite a few.
I did not know about http://connectedpapers.com/. Seems useful!
Thanks!
Yes! You’re totally right that going down the citation trail with the right paper can be better than search, I just edited to reflect that.
This spreadsheet seems great. So far we’ve only found ways to practice the early parts of a literature review, so we never created anything that sophisticated, but it seems like a good method.
“Searching smartly is often more effective than going down the citation trail” I’d love more detail / clarification on this if you’re happy to share? I think I pretty much exclusively go down the citation trail.
Relatedly, what’s the benefit of having “a pile of papers ready to look at” before you start reading them? Unless you’re trying to be systematic and comprehensive (in which case you might as well gather them all first), it seems to me that reading through papers as you go helps you realise if you need to adjust your search terms or add new ones, or if you’re just hitting diminishing returns on the review generally. I pretty much just do a Google Scholar search and start reading the first item that comes up.
Yeah, maybe I should change some text… but I guess I have an assumption built in that when finding papers which seem relevant, you’d be reading the abstract, getting a basic idea of what they’re about, and then adjusting search terms.
The reason having a pile of papers is useful is that the value of papers is extremely uneven for any given question, and having a pile gives you a better feel for the range of what people say about a topic before diving into one perspective. With regard to the first point, I’d argue that in most cases there are one or two papers which would be perfect for getting an overview. Reading those might be 100x more valuable than reading something which is just kind of related (which is what you are likely to find on the first search). If that’s true, it’s clearly worth spending a lot of time looking around for the perfect paper rather than jumping into the first one you find. Obviously this can be overdone, but I expect most people err toward too little search. Note that you might also find the perfect paper by skimming through an imperfect one. I tend to see this as another way of searching, since you can look for that without actually ‘reading’ the paper, just by skimming through its lit review or intro.