Introduction
I used a recent Ask-Me-Anything (AMA) of Rethink Priorities to ask a series of questions about research in general (not limited to Rethink Priorities).
I’m posting these here severally to make them more visible. I’m not personally looking for more answers at this point, but if you think that readers would benefit from another perspective, I’d be delighted if you could add it.
Question
I imagine that virtually any research project, successful and unsuccessful, starts with some inchoate thoughts and notes. These will usually seem hopelessly inadequate but they’ll sometimes mature into something amazingly insightful. Have you ever struggled with mental blocks when you felt self-conscious about these beginnings, and have you found ways to (reliably) overcome them?
Jason Schukraft
Yep, I am intimately familiar with hopelessly inchoate thoughts and notes. (I’m not sure I’ve ever completed a project without passing through that stage.) For me at least, the best way to overcome this state is to talk to lots of people. One piece of advice I have for young researchers is to come to terms with sharing your work with people you respect before it’s polished. I’m very grateful to have a large network of collaborators willing to listen to and read my confused ramblings. Feedback at an early stage of a project is often much more valuable than feedback at a later stage.
Holly Elmore
Personally, I’m very self-conscious about my work and tend to wait too long to share it. But the culture of RP seems to fight that tendency – which I think is very productive!
Alex Lintz
Idk if this fits exactly but when I started my research position I tried to have the mindset of, “I’ll be pretty bad at this for quite a while.” Then when I made mistakes I could just think, “right, as expected. Now let’s figure out how to not do that again.” Not sure how sustainable this is but it felt good to start! In general it seems good to have a mindset of research being nearly impossibly hard. Humans are just barely able to do this thing in a useful way and even at the highest levels academics still make mistakes (most papers have at least some flaws).
Michael Aird
This is his answer to the questions about self-consciousness and “Is there something interesting here?”
These questions definitely resonate with me, and I imagine they’d resonate with most/all researchers.
I have a tendency to continually wonder if what I’m doing is what I should be doing, or if I should change my priorities. I think this is good in some ways. But sometimes I’d make better decisions faster if I just actually pursued an idea more “confidently” for a bit, to get more info on whether it’s worth pursuing, rather than just “wondering” about it repeatedly and going back and forth without much new info to work with. Basically, I might do too much self-doubt-style armchair reasoning, with too little actual empirical info.
Also, pursuing an idea more “confidently” for a bit will not only inform me about whether to continue pursuing it further, but also might result in outputs that are useful for others. So I try to sometimes switch into “just commit and focus mode” for a given time period, or until I hit a given milestone, and mostly minimise reflection on what I should prioritise during that time. But so far this has been like a grab bag of heuristics and habits I use, rather than a more precise guideline for myself.
See also When to focus and when to re-evaluate.
Things that help me with this, and/or some scattered related thoughts, include:
Talking to others and getting feedback, including on early-stage ideas
I liked David and Jason’s remarks on this in their comments
A sort-of minimum viable product and quick feedback loop approach has often seemed useful for me – something like:
First getting verbal feedback from a couple people on a messy, verbal description of an idea
Then writing up a rough draft about the idea and circulating it to a couple more people for a bit more feedback
Then polishing and fleshing out that draft and circulating it to a few more people for more feedback
Then posting publicly
(But only proceeding to the next step if evidence from the prior one – plus one’s own intuitions – suggested this would be worthwhile)
Feedback has often helped me determine whether an idea is worth pursuing further, feel more comfortable/motivated with pursuing an idea further (rather than being mired in unproductive self-doubt), develop the idea, work out which angles of it are most worth pursuing, and work out how to express it more clearly
Reminding myself that I haven’t really gathered any new info since the last time I thought “Should this really be what I spend my time on?,” so thinking about that again is unlikely to reveal new insights, and is probably just a stupid part of my psychology rather than something I’d endorse.
I might think to myself something like “If a friend was doing this, you’d think it’s irrational, and gently advise them to just actually commit for a bit and get new info, right? So shouldn’t you do the same yourself?”
Remembering Algorithms to Live By drawing an analogy to a failure mode in which computers continually reprioritise tasks and the reprioritisation takes up just enough processing power to mean no actual progress on any of the tasks occurs, and this can just cycle forever. And the way to get out of this is to at some point just do tasks, even without having confidence that these “should” be top priority.
This is just my half-remembered version of that part of the book, and might be wrong somehow.
Remembering that I’d be deeply uncertain about the “actual” value of any project I could pursue, because the world is very complicated and my ambitions (contribute to improving the long-term future) are pretty lofty. The best I can do is something that seems good in expected value but with large error bars. So the fact I feel some uncertainty and doubt provides basically no evidence that this project isn’t worth pursuing. (Though feeling an unusually large amount of uncertainty and doubt might.)
Remembering that, if the idea ends up seeming to have not been important but there was a reasonable ex ante case that it might’ve been important, there’s a decent chance someone else would end up pursuing it if I don’t. So if I pursue it, then find out it seems to not be important, then write about what I found, that might still have the effect of causing an important project to get done, because it might cause someone else to do that important project rather than doing something similar to what I did.
Examples to somewhat illustrate the last two points:
This year, in some so-far-unpublished work, I wrote about some ideas that:
I initially wasn’t confident about the importance of
Seemed like they should’ve been obvious to relevant groups, but seemed not to have been discussed by them. And that generally seems like (at least) weak evidence that an idea either (a) actually isn’t important or (b) has been in essence discussed in some other form or place that I just am not familiar with.
So when I had the initial forms of these ideas and wasn’t sure how much time (if any) to spend on them, I took roughly the following approach:
I developed some thoughts on some of the ideas. Then I shared those thoughts verbally or as very rough drafts with a small set of people who seemed like they’d have decent intuitions on whether the ideas were important vs unimportant, somewhat novel vs already covered, etc.
In most cases, this early feedback indicated that it was at least plausible that the ideas were somewhat important and somewhat novel. This – combined with my independent impression that these ideas might be somewhat important and novel – seemed to provide sufficient reason to flesh those ideas out further, as well as to flesh out related ideas (which seemed like they’d probably also be important and novel if the other ideas were, and vice versa).
So I did so, then shared that slightly more widely. Then I got more positive feedback, so I bothered to invest the time to polish the writings up a bit more.
Meanwhile, when I fleshed one of the ideas out a little, it seemed like that one turned out to probably not be very important at all. So with that one, I just made sure that my write-up made it clear early on that my current view was that this idea probably didn’t matter, and I neatened up the write-up just a bit, because I still thought the write-up might be a bit useful either to:
Explain to others why they shouldn’t bother exploring the same thing
Make it easy for others to see if they disagreed with my reasoning for why this probably didn’t matter, because I might be wrong about that, and it might be good for others to quickly check that reasoning
Having spent time on that idea sort-of felt in hindsight silly or like a mistake. But I think I probably shouldn’t see that as having been a bad decision ex ante, given that:
It seems plausible that, if not for my write-up, someone else would’ve eventually “wasted” time on a similar idea
This was just one out of a set of ideas that I tried to flesh out and write up, many/most of which still (in hindsight) seem like they were worth spending time on
So maybe it’s very roughly like I gave 60% predictions for each of 10 things, and decided that that’d mean the expected value of betting on those 10 things was good, and then 6 of those things happened, suggesting I was well-calibrated and was right to bet on those things
(I didn’t actually make quantitative predictions)
And some of the other ideas were in between – no strong reason to believe they were important or that they weren’t – so I just fleshed them out a bit and left it there, pending further feedback. (I also had other things to work on.)
In a reply, I referred to this related blog post of mine. Michael replied:
It’s also important to be transparent about one’s rigor and to make the negative results findable for others. The second is obvious. The first is because the dead end may not actually be a dead end but only looked that way given the particular way in which you had resolved the optimal stopping problem of investigating it (even) further.
I agree with these points, and think that they might sometimes be under-appreciated (both in and outside of EA).
To sort-of restate your points:
I think it’s common for people to not publish explorations that turned out to seem to “not reveal anything important” (except of course that this direction of exploration might be worth skipping).
Much has been written about this sort of issue, and there can be valid reasons for that behaviour, but sometimes it seems unfortunate.
I think another failure mode is to provide some sort of public info of your belief that this direction of exploration seems worth skipping, but without sufficient reasoning transparency, which could make people rule this out too much/too early.
Again, there can be valid reasons for this (if you’re sufficiently confident that it’s worth ruling out this direction and you have sufficiently high-value other things to do, it might not be worth spending time on a write-up with high reasoning transparency), but sometimes it seems unfortunate.
In the same context, I also brought up a bit of CFAR lore:
Part of this reminds me a lot of CFAR’s approach here (I can’t quite tell whether Julia Galef is interviewer, interviewee, or both):
For example, when I’ve decided to take a calculated risk, knowing that I might well fail but that it’s still worth it to try, I often find myself worrying about failure even after having made the decision to try. And I might be tempted to lie to myself and say, “Don’t worry! This is going to work!” so that I can be relaxed and motivated enough to push forward.
But instead, in those situations I like to use a framework CFAR sometimes calls “Worker-me versus CEO-me.” I remind myself that CEO-me has thought carefully about this decision, and for now I’m in worker mode, with the goal of executing CEO-me’s decision. Now is not the time to second-guess the CEO or worry about failure.
Your approach of gathering feedback and iterating on the output – refining it with every iteration while also deciding whether it’s worth another one – sounds great!
I think a lot of people aim for such a process, or will want to after reading your comment, but will be held back from showing their first draft to their first round of reviewers. They may worry that the reviewers will think badly of them for addressing a topic of this particular level of perceived difficulty or relevance (maybe it’s too difficult or too irrelevant in the reviewer’s opinion), or for a particular wording, or because the reviewers think they should’ve anticipated a negative effect of writing about the topic and not done so (e.g., some complex acausal trade or social dynamics thing that didn’t occur to them), or they may just generally have diffuse fears holding them back. Such worries are probably disproportionate, but still, overcoming them will probably require particular tricks or training.
Michael replied:
I like that “Worker-me versus CEO-me” framing, and hadn’t heard of it or seen that page, so thanks for sharing that. It does seem related to what I said in the parent comment.
I share the view that it’ll be decently common for a range of disproportionate worries to hold people back from striking out into areas that seem good in expected value but very uncertain and with real counterarguments, and from sharing early-stage results from such pursuits. I also think there can be a range of good reasons to hold back from those things, and that it can be hard to tell when the worries are disproportionate!
I imagine it’d be hard (though not impossible) to generate advice on this that’s quite generally useful without being vague/littered with caveats. People will probably have to experiment to some extent, get advice from trusted people on their general approach, and continue reflecting, or something like that.
(If one of the answers is yours, you can post it below, and I’ll delete it here.)
Chi
I’m not a very experienced researcher, but I think in my short research career, I’ve had my fair share of dealing with self-consciousness. Here are some things I’ve found:
Note that I mostly refer to the “I’m not worth other people’s time”, “This/I am dumb”, “This is bad, others will hate me for it” type of self-consciousness. There might be other types, e.g. “I’m nervous I’m not doing the optimal thing and feel bad because then more morally horrible things will happen”, that are genuinely not related to self-confidence, self-esteem, etc., and to which my experience won’t apply. This is apart from the obvious fact that different things work for different people.
Some general thoughts:
As my disclaimer might indicate, I see research self-consciousness as not bound to research. Before/outside of research/work/EA in general, I didn’t notice that I had any issues with self-confidence. But for me at least, I think research/work/EA just activated some self-esteem/self-confidence issues I had before. (That is not to say that self-esteem etc. are not domain-specific, but I still think there’s some more general thing going on as well.) So I approach research self-consciousness quite holistically, as character development, and try to improve not strictly in the research domain, though that will hopefully also help with research self-consciousness. Because I never looked at it from a pure research-domain lens, some things might seem a bit off, but I’ll try to make them relevant.
The goal of the things I do to improve on self-consciousness is not primarily to get to a state where I think my research is great, but to get to a state where I can do research, think it’s great or not, be wrong, and be okay with it. I sometimes have to remind myself of that. On the occasions when I do, and decouple my self-esteem from the research, it lowers the stakes: if my research really is crap, at least that doesn’t mean I’m crap.
Things I do to improve:
In the moment of feeling self-conscious: I would second Jason that talking to others about the object-level is magic
I also have a rule of talking to others about my research, or sharing write-ups, whenever it feels most uncomfortable to do so. Those are often the moments when I’m most hard-stuck: an anxious mind doesn’t research well, and exchanging with others really helps, but my anxiety traps me in that bad state!
I do something easy to commit myself to something that seems scary but important for research progress, so in the moment of self-consciousness I can’t just back out again. Examples:
Social accountability is great for this. When I’m in a really self-conscious, can’t-work state, I sometimes commit to sending someone something I haven’t even started yet in 30 minutes, no matter what state it’s in.
I also often find it way easier to say “yes, I’ll do this talk/discussion round on date X” or to message another person “Hey, I have this idea I wanted to discuss, can I send you a doc?” (even though I don’t have a good doc yet, because I think the idea is crap) than to do the thing itself. So whenever I feel able to do the first, I do it, and future Chi has to deal with it, no matter how self-conscious she is.
Often, just starting is the hardest thing. At least for me, that’s where feeling super self-conscious often happens and stops me from doing anything. I sometimes set a timer for 5 minutes to do work. That’s short enough that it feels ridiculous not to be able to do it, and afterwards I often feel way less self-conscious and can just continue.
I think the above examples are partly me doing iterated exposure therapy on myself. (I never put it in those terms before.) It’s uncomfortable and sucky, but it helps (and I get some perverse enjoyment out of it). I look for the thing I’m most scared of that feels related to my research self-consciousness and seems barely doable, and try to do it, until it becomes easier and I can go “to the next level”. E.g. maybe at some point I want to practice sharing my opinions publicly on a platform, but can only do so if I run my opinion by a friend beforehand and then excessively qualify it when writing, emphasising how dumb and wrong everything could be. And that’s fine – you can “cheat” a bit. But after a while you hopefully feel comfortable enough to challenge yourself to cut the excessive qualifiers and stop running things by your friend. Ideally, you’d do this with research-related things (e.g. iterated exposure with all your crappy research stuff, then with you confidently backing it, then on some topic that scares you, then with scarier people, etc. – not suggesting that this order makes sense for everyone), but we don’t always have the opportunity to iterate quickly on research, because research takes time. I have the goal of improving on this generally anyway, but even without that goal, some things are close enough substitutes to be relevant to research self-consciousness. Examples:
For self-consciousness reasons, I struggle with saying “Yes, I think this is good and promising” about something I work on, which makes me useless at analyzing whether e.g. a cause area is promising, which is incidentally exactly my task right now. So I looked for things that felt similar and uncomfortable in the same way and settled for trying to post at least one opinion/idea a weekday in pre-specified channels. (I had to give up after a week, but I think it was really good and I want to continue once I have more breathing room.)
For the same reason as above, I deliberately go through my messages and delete all anxious qualifiers. I can’t always do that in all contexts, because leaving the qualifiers out makes me too self-conscious, and I allow myself that.
I appreciate that the above self-exposure-therapy examples might be too difficult for some people and might seem intimidating. (I’ve definitely been at “I’d never, ever, ever write a comment on the forum!” – I’m still self-conscious about what I up- and downvote, and no one can even see that.) But you can also make progress on a lower level: just try whatever seems manageable and be gentle with yourself. (And back off if you notice you bit off too much.) However, it can still be pretty daunting, and it might not always be possible to do the above completely independently. (E.g. I think I only got started when I spent several weeks at a research organisation I respect a lot, felt terrible for much of it, but couldn’t back out and just had to do or die, and had a really good environment. I’m not sure “sticking through” would have been possible for me without that organisational context.)
I personally benefited a lot from listening to other people’s stories and general content on failing, self-esteem, etc. I’m not sure how applicable that is to others trying to improve on research self-consciousness, because I never looked at it from a pure research lens, but it’s motivating to have a positive ideal as well, and not just “self-consciousness is bad”. I usually consume non-EA podcasts and books for this.
On positive motivation:
Related to the last point of positive ideals: Recently, I found it really inspiring to look at some people who just seem to have no fear of expressing their ideas, enthusiasm, think things through themselves without anyone giving them permission etc. And I think about how valuable just these traits are apart from the whole “oh, they are so smart” thing. I find that a lot more motivating than the smartness-ideal in EA, and then I get really motivated to also become cool like that!
I guess for me there’s also a gender thing, where the idea of becoming a kickass woman is doubly motivating. I think I also have the feeling that I want to make progress on this on behalf of other self-conscious people who struggle more than me. I’m not really sure why I think that benefits them, but I just somehow do. (Maybe I could investigate that intuition at some point.) And that also gives me some positive motivation.
This seems like a great bundle of ideas, advice, and perspectives!
This reminds me of the Eliezer Yudkowsky post Working hurts less than procrastinating, we fear the twinge of starting.
Woah! Thank you for all these very concrete tips! A lot of great ideas for me to pick from. :-D