I’ve been very impressed with your work, and I’m looking forward to you hopefully making similarly impressive contributions to probing longtermism!
But when it comes to questions: You did say “anything,” so may I ask some questions about productivity when it comes to research in particular? Please pick and choose from these to answer any that seem interesting to you.
Thinking vs. reading. If you want to research a particular topic, how do you balance reading the relevant literature against thinking yourself and recording your thoughts? I’ve heard second-hand that Hilary Greaves recommends thinking first so as to be unanchored by the existing literature and the existing approaches to the problem. Another benefit may be that you start out reading the literature with a clearer mental model of the problem, which might make it easier to stay motivated and to remain critical/vigilant while reading. Would you agree or do you have a different approach?
Self-consciousness. I imagine that virtually any research project, successful and unsuccessful, starts with some inchoate thoughts and notes. These will usually seem hopelessly inadequate but they’ll sometimes mature into something amazingly insightful. Have you ever struggled with mental blocks when you felt self-conscious about these beginnings, and have you found ways to (reliably) overcome them?
Is there something interesting here? I often have some (for me) novel ideas, but then it turns out that whether true or false, the idea doesn’t seem to have any important implications. Conversely, I’ve dismissed ideas as unimportant, and years later someone developed them – through a lot of work I didn’t do because I thought it wasn’t important – into something that did connect to important topics in unanticipated ways. Do you have rules of thumb that help you assess early on whether a particular idea is worth pursuing?
Survival vs. exploratory mindset. I’ve heard of the distinction between survival mindset and exploratory mindset, which makes intuitive sense to me. (I don’t remember where I learned of these terms, but I tried to clarify how I use them in a comment below.) I imagine that for most novel research, exploratory mindset is the more useful one. (Or would you disagree?) If it doesn’t come naturally to you, how do you cultivate it?
Optimal hours of work per day. Have you found that a particular number of hours of concentrated work per day works best for you? By this I mean time you spend focused on your research project, excluding time spent answering emails, AMAs, and such. (If hours per day doesn’t seem like an informative unit to you, imagine I asked “hours per week” or whatever seems best to you.)
Learning a new field. I don’t know what I mean by “field,” but probably something smaller than “biology” and bigger than “how to use Pipedrive.” If you need to get up to speed on such a field for research that you’re doing, how do you approach it? Do you read textbooks (if so, linearly or more creatively?) or pay grad students to answer your questions? Does your approach vary depending on whether it’s a subfield of your field of expertise or something completely new?
Hard problems. I imagine that you’ll sometimes have to grapple with problems that are sufficiently hard that it feels like you didn’t make any tangible progress on them (or on how to approach them) for a week or more. How do you stay optimistic and motivated? How and when do you “escalate” in some fashion – say, discuss hiring a freelance expert on some other field?
Emotional motivators. It’s easy to be motivated on a System 2 basis by the importance of the work, but sometimes that fails to carry over to System 1 when dealing with some very removed or specific work – say, understanding some obscure proof that is relevant to AI safety along a long chain of tenuous probabilistic implications. Do you have tricks for how to stay System 1 motivated in such cases – or when do you decide that a lack of motivation may actually mean that something is wrong with the topic and you should question whether it is sufficiently important?
Typing speed. I have this pet theory that a high typing speed is important for some forms of research that involves a lot of verbal thinking (e.g., maybe not maths). The idea is that our memory is limited, so we want to take notes of our thoughts. But handwriting is slow, and typing is only mildly faster, so unless one thinks slowly or types very fast, there is a disconnect that causes continual stalling, impatience, forgotten ideas, and prevents the process from flowing. Does that make any intuitive sense to you? Do you have any tricks (e.g., dictation software)?
Obvious questions. Nate Soares has an essay on “obvious advice.” Michael Aird mentioned that in many cases he just wanted to follow up on some obvious ideas. They were obvious in hindsight, but evidently they hadn’t been obvious to anyone else for years. Is there a distinct skill of “noticing the obvious ideas” or “noticing the obvious open questions”? And can it be trained or turned into a repeatable process?
Tiredness, focus, etc. We sometimes get tired or have trouble focusing. Sometimes this happens even when we’ve had enough sleep (just to get an obvious solution out of the way: sleep/napping). What are your favorite things to do when focusing seems hard or you feel tired? Do you use any particular nootropics, supplements, air quality monitor, music, or exercise routine?
Meta. Which of these questions would you like to see answered by more people because you are interested in the answers too?
Thank you kindly! And of course just pick out the questions you think are interesting for you or other readers to answer. :-)
I can answer 6, as I’ve been doing it for Wild Animal Welfare since I was hired in September. WAW is a new and small field, so it is relatively easy to learn the field, but there’s still so much! I started by going backwards (into the Welfare Biology movement of the 80s and 90s) and forwards (into the WAW EA orgs we know today) from Brian Tomasik, consulting the primary literature over various specific matters of fact. A great thing about WAW being such a young field (and so concentrated in EA) is that I can reach out to basically anyone who’s published on it and have a real conversation. It’s a big shortcut!
I should note that my background is in Evolutionary Biology and Ecology, so someone else might need a lot more background in those basics if they were to learn WAW.
Lots of really good questions here. I’ll do my best to answer.
Thinking vs reading: I think it depends on the context. Sometimes it makes sense to lean toward thinking more and sometimes it makes sense to lean toward reading more. (I wouldn’t advise focusing exclusively on one or the other.) Unjustified anchoring is certainly a worry, but I think reinventing the wheel is also a worry. One could waste two weeks groping toward a solution to a problem that could have been solved in an afternoon just by reading the right review article.
Self-consciousness: Yep, I am intimately familiar with hopelessly inchoate thoughts and notes. (I’m not sure I’ve ever completed a project without passing through that stage.) For me at least, the best way to overcome this state is to talk to lots of people. One piece of advice I have for young researchers is to come to terms with sharing your work with people you respect before it’s polished. I’m very grateful to have a large network of collaborators willing to listen to and read my confused ramblings. Feedback at an early stage of a project is often much more valuable than feedback at a later stage.
Is there something interesting here?: Yep, this also happens to me. Unfortunately, I don’t have any particular insight. Oftentimes the only way to know whether an idea is interesting is to put in the hard exploratory work. Of course, one shouldn’t be afraid to abandon an idea if it looks increasingly unpromising.
Survival vs. exploratory mindset: Insofar as I understand the terms, an exploratory mindset is an absolute must. Not sure how to cultivate it, though.
Optimal hours of work per day: I work between 4 and 8 hours a day. I don’t find any difference in my productivity within that range, though I imagine if I pushed myself to work more than 8, I would pretty quickly hit diminishing returns.
Learning a new field: I can’t emphasize enough the value of just talking to existing experts. For me at least, it’s by far the most efficient way to get up-to-speed quickly. For that reason, I really value having a large network of diverse people I can contact with questions. I put a fair amount of effort into cultivating such a network.
Hard problems: I’m fortunate that my work is almost always intrinsically interesting. So even if I don’t make progress on a problem, I continue to be motivated to work on it because the work itself is so very pleasant. That said, as I’ve emphasized above, when I’m stuck, I find it most helpful to talk to lots of people about the problem.
Emotional motivators: When I reflect on my life as a whole, I’m happy that I’m in a career that aims to improve the world. But in terms of what gets me out of bed in the morning and excited to work, it’s almost never the impact I might have. It’s the intrinsically interesting nature of my work. I almost certainly would not be successful if I did not find my research to be so fascinating.
Typing speed: No idea what my typing speed is, but it doesn’t feel particularly fast, and that doesn’t seem to handicap me. I’ve always considered myself a slow thinker, though.
Obvious questions: Yeah, I think there is a general skill of “noticing the obvious.” I don’t think I’m great at it, but one thing I do pretty often is reflect on the sorts of things that appear obvious now that weren’t obvious to smart people ~200 years ago.
Tiredness, focus, etc.: Regular exercise certainly helps. Haven’t tried anything else. Mostly I’ve just acclimated to getting work done even though I’m tired. (Not sure I would recommend that “solution,” though!)
Meta: I’d like to see others answer questions 1, 3, 6, 7, and 10.
Your advice to talk to people is probably most important to me! I haven’t tried that a lot, but when I did, it was very successful. One hurdle is not wanting to come off as too stupid to the other person (but there are also people who make me feel sufficiently at ease that I don’t mind coming off as stupid) and another is not wanting to waste people’s time. So I want to first be sure that I can’t just figure it out myself within ~10x the time. Maybe that’s a bad tradeoff. I also sometimes worry that people would actually like to chat more, but my reluctance to waste their time interferes with both our interest to chat. (Maybe they have the same reluctance, and both of us would be happier if we didn’t have it. Can we have a Reciprocity.io for talking about research, please? ^^)
Typing speed: Haha! You can test it here for example: https://10fastfingers.com/typing-test/english. I’ve been stagnating at ~60 WPM for years now. Maybe there’s some sort of distinction where some brains are more optimized toward (e.g., because of worse memory) or incentivized to optimize toward (e.g., through positive feedback) fewer low-level concepts, and others more toward high-level concepts. So when it comes to measures of performance that have time in the denominator, the first group hits diminishing marginal returns early while the second keeps speeding up for a long time. Maybe the second group is, in turn, less interested in understanding from first principles, which might make them less innovative. Just random speculation.
Obvious questions: Yeah, I’ve been wondering how it can be that now a lot of people come up independently with cases for nonhuman rights and altruism regardless of distance, but a century ago seemingly almost no one did. Maybe it’s just that I don’t know because most of those are lost in history and those that are not, I just don’t know about (though I can think of some examples). Or maybe culture was so different that a lot of the frameworks weren’t there that these ideas attach to. So if moral genius is, say, normally distributed, then values-spreading could have the benefit that it increases the number of people that use relevant frameworks and thereby also increases the absolute number of moral geniuses who work within those frameworks. The values would have to be sufficiently cooperative not to risk zero-sum competition between values. I suppose that’s similar to Bostrom’s Megaearth scenario except with people who share certain frameworks in their thinking rather than the pure number of people.
Getting work done when tired: Well, to some degree I noticed that I over-update on tiredness, and then get into a negative feedback loop where I give up on things too quickly because I think I’m too tired to do them. At that point I’m usually not actually particularly tired.
Regarding talking to people to get early feedback, get up to speed in a field, etc., you might find this post useful (if you haven’t already seen it).
I also sometimes worry that people would actually like to chat more, but my reluctance to waste their time interferes with both our interest to chat.
I find this relatable. Relatedly, in the above-linked post, Michelle Hutchinson (the author) wrote:
Try to make the conversation concise, and to avoid going over the time allocated. I really appreciate when people do this when I’m talking to them, because it means I can focus on thinking through the ideas rather than also making sure that we’re sticking to the agenda and get to everything.
I commented that I’d slightly push back on that passage, saying:
I think it makes sense for this to be the default way one approaches conversations in which one is seeking advice. But I think a decent portion of advice-givers would either be ok with or actually prefer a more loose / lengthy / free-wheeling / non-regimented conversation.
There have been a few times when I’ve arranged to talk to someone I perceived as very busy and important, and so I’ve tried to be very conscious of their time and give them opportunities to wrap things up, but they repeatedly opted to keep talking for a surprisingly long time. And they seemed genuinely happy with this, and I ended up getting a lot of extra value out of that extra time.
So I think it’s probably good to be open to signs that one’s conversation partner is ok with or prefers a longer conversation, even if one shouldn’t assume they are.
Thanks! Yeah, I sometimes wonder about that. I suppose in rationality-adjacent circles I can just ask what someone’s preference is (free-wheeling chat or no-nonsense and to the point). Maybe that’d be a faux pas or weird in general, but I think it should be fine among most EAs?
Personally, I’m very self-conscious about my work and tend to wait too long to share it. But the culture of RP seems to fight that tendency, which I think is very productive!
Thanks! This is something I sometimes struggle with, I think. Is the culture just all about sharing early and often and helping each other, or are there also other aspects to the culture that I may not anticipate that help you overcome this self-consciousness? :-)
Another benefit of thinking before reading is that it can help you develop your research skills. Noticing some phenomenon and then developing a model to explain it is a super valuable exercise. If it turns out you reproduce something that someone else has already done and published, then great, you’ve gotten experience solving some problem and you’ve shown that you can think through it at least as well as some expert in the field. If it turns out that you have produced something novel, then it’s time to see how it compares to existing results in the literature and get feedback on how useful it is.
This said, I think this is more true for theoretical work than applied work, e.g. the value of doing this in philosophy > in theoretical economics > in applied economics. A fair amount of EA-relevant research is summarising and synthesising what the academic literature on some topic finds and it seems pretty difficult to do that by just thinking to yourself!
3. Is there something interesting here?
I mostly try to work out how excited I am by this idea and whether I could see myself still being excited in 6 months, since for me having internal motivation to work on a project is pretty important. I also try to chat about this idea with various other people and see how excited they are by it.
4. Survival vs. exploratory mindset.
I also haven’t heard these terms before, but from your description (which frames a survival mindset pretty negatively), an exploratory mindset comes fairly naturally to me and therefore I haven’t ever actively cultivated it. Lots of research projects fail so extreme risk aversion in particular seems like it would be bad for researchers.
5. Optimal hours of work per day.
I typically aim for 6-7 hours of deep work a day and a couple of dedicated hours for miscellaneous tasks and meetings. Since starting part-time at RP I’ve been doing 6 days a week (2 RP, 4 PhD), but before that I did 5. I find RP deep work less taxing than PhD work. 6 days a week is at the upper limit of manageable for me at the moment, so I plan to experiment with different schedules in the new year.
6. Learning a new field.
I’m a big fan of textbooks and schedule time to read a couple of textbook chapters each week. LessWrong’s “best textbooks on every subject” thread is pretty good for finding them. I usually make Anki flashcards to help me remember the key facts, but I’ve recently started experimenting with Roam Research to take notes, which I’m also enjoying, so my “learning flow” is in flux at the moment.
8. Emotional motivators.
My main trick for dealing with this is to always plan my day the night before. I let System 2 Dave work out what is important and needs to be done and put blocks in the calendar for these things. When System 1 Dave is working the next day, his motivation doesn’t end up mattering so much because he can easily defer to what System 2 Dave said he should do. I don’t read too much into a lack of System 1 motivation; it happens, and I haven’t noticed that it’s particularly correlated with how important the work is. It’s more correlated with things like how scary it is to start some new task, and with irrelevant things like how much sunlight I’ve been getting.
9. Typing speed.
I struggle to imagine typing speed being a binding constraint on research productivity since I’ve never found typing speed to be a problem for getting into flow, but when I just checked my wpm was 85 so maybe I’d feel different if it was slower. When I’m coding the vast majority of my time is spent thinking about how to solve the problem I’m facing, not typing the code that solves the problem. When I’m writing first drafts, I think typing speed is a bit more helpful for the reasons you mention, but again more time goes into planning the structure of what I want to say and polishing, than the first pass at writing where speed might help.
11. Tiredness, focus, etc.
My favourite thing to do is to stop working! Not all days can be good days and I became a lot happier and more productive when I stopped beating myself up for having bad days and allowed myself to take the rest of the afternoon off.
12. Meta.
The questions I didn’t answer were because I didn’t have much to say about them so I’d be happy to see answers to them!
Thank you! Using the thinking vs. reading balance as a feedback mechanism is an interesting take, and in my experience it’s also most fruitful in philosophy, though I can’t compare with those branches of economics.
Survival mindset: I suppose it serves its purpose when you’re in a very low-trust environment, but it’s probably not necessary most of the time for most aspiring EA researchers.
Thanks for linking that list of textbooks! It’s also been helpful for me in the past. :-D
Planning the next day the evening before also seems like a good thing to try for me. Thanks!
I wonder whether you all have such fairly high typing speeds simply because you all type a lot or whether 80+ WPM is a speed threshold that is necessary to achieve before one ceases to perceive typing speed as a limiting factor. (Mine is around 60 WPM.)
I hope you can get your work hours down to a manageable level!
#9 Typing speed: I think my own belief is that typing speed is probably less important than you appear to believe, but I care enough about it that I logged 53 minutes of typing practice on keybr this year (usually during moments where I’m otherwise not productive and just want to get “in flow” doing something repetitive), and I suspect I still can productively use another 3-5 hours of typing practice next year even if it trades off against deep work time (and presumably many more hours than that if it does not).
#10 Obvious questions. I suspect that while sometimes ignoring/not noticing “obvious questions/advice” etc is coincidental unforced errors, more often than not there is some form of motivated reasoning going on behind the scenes (eg because this story will invalidate a hypothesis I’m wedded to, because it involves unpleasant tradeoffs, because some beliefs are lower prestige, because it makes the work I do seem less important, etc). I think training myself carefully to notice these things has been helpful, though I suspect I still miss a lot of obvious stuff.
#11 Tiredness, focus, etc.: I haven’t figured this out yet and am keen to learn from my coworkers and others! Right now I take a lot of caffeine, and I suspect that if I were more careful about optimization I should be cycling drugs on a weekly basis rather than taking the same one every day (especially a drug like caffeine that has tolerance and withdrawal symptoms).
Typing speed: Interesting! What is your typing speed?
Obvious questions: Thanks, I’ll keep that in mind. It seems unlikely to be the case for me, but I haven’t tried to observe such a connection either. I’ve observed the opposite tendency in myself, in the sense that I worry about being wrong and so probe all the ways in which I may be wrong a lot. That has had the unintended negative effect that I’m too likely to abandon old approaches in favor of ones I’ve heard of only very recently, because for the latter I haven’t yet come up with as many counterarguments. I also find rehearsing stuff that I already believe to be yucky and boring in ways that rehearsing counterarguments is not. But of course I might be falling for both traps in different contexts.
Typing speed: Interesting! What is your typing speed?
Only 57.9 according to keybr. I suspect a) typing practice will be less helpful for me if my typing speed is higher (like David’s) and b) my current typing speed is below average for programmers (not sure about researchers).
(It’s probably relevant/bad that my default typing style on those typing-test layouts (26 characters + space) only uses about 5 fingers. I think I go up to 8 on a more normal paragraph like this one that also uses shift/return/slash/number pad. If I were focused on systematic rather than incremental changes to my typing speed, I’d try to figure out how to force myself to use all 10 fingers.)
Obvious questions: Hmm, I think a lot of people have motivated reasoning of the form I describe, but I don’t know you well enough to say, and I definitely don’t think all people are like this.
There is certainly a danger as well of being too contrarian or self-critical.
Have you tried calibration practice?
Maybe also make an explicit effort to write down key beliefs and numerical probabilities (or even just words for felt senses) to record and eventually correct for overupdating on new arguments/evidence (if this is indeed your issue).
Do you use the guided lessons of Keybr or a custom text? I think the guided lessons are geared toward your weaknesses, which probably leads to a lower speed than what you’d achieve with the average text.
my current typing speed is below average for programmers
That’s something where I’ve never felt bottlenecked by my typing speed. Learning to touch-type was very useful, though, because it gave me a lot more freedom with screen configurations. (As was switching to a keyboard layout other than German, where most brackets are super hard to reach. I use a customized Colemak.)
Have you tried calibration practice?
Yeah, it’s on my list of things I want to practice more, but the few times I did some tests I was mostly well-calibrated already (with the exception of one probability level, or whatever they’re called). There’s surely room for improvement, though. Maybe I’ll do worse if the questions are from an area that I think I know something about. ^^
Maybe I’m also too impressionable by people who speak with an air of confidence. I might be falling for some sort of typical mind fallacy and assume that when someone doesn’t use a lot of hedges, they must be so sure that they’re almost certain to be right, and then update strongly on that. But I’m not quite convinced by that theory either. That probably happens sometimes, but at other times I also overupdate on my own new ideas. I’m pretty sure I overupdate whenever people use guilt-inducing language, though.
Thanks again for these questions. I’ll share my answers in a few comments. This context and disclaimer—including that I only started with Rethink a month ago—should be borne in mind.
1. Thinking vs reading
I don’t think I really have explicit policies regarding balancing reading against thinking myself and recording my thoughts. Maybe I should.
I’m somewhat inclined to think that, on the margin and on average (so not in every case), EA would benefit from a bit more reading of relevant literatures (or talking to more experienced people in an area, watching of relevant lectures, etc.), even at the expense of having a bit less time for coming up with novel ideas.
I feel like EA might have a bit too much a tendency towards “think really hard by oneself for a while, then kind-of reinvent the wheel but using new terms for it”. It might be that, often, people could get to similar ideas faster and in a way that connects to existing work better (making it easier for others to find, build on, etc.) by doing some extra reading first.
Note that this is not me suggesting EAs should increase how much they defer to experts/others/existing work. Instead, I’m tentatively suggesting spending more time learning what experts/others/existing work has to say, which could be followed by agreeing, disagreeing, critiquing, building on, proposing alternatives, striking out in a totally different direction, etc.
I often feel like I might be spending more time reading up-front than is worthwhile, as a way of procrastinating, or maybe out of a sort-of perfectionism (the more I read, the lower the chance that, once I start writing, what I write is mistaken or redundant). And I sort-of scold myself for that.
But then I’ve repeatedly heard people remark that I have an unusually large amount of output. (I sort-of felt like the opposite was true, until people told me this, which is weird since it’s such an easily checkable thing!) And I’ve also got some feedback that suggested I should move more in the direction of depth and expertise, even at the cost of breadth and quantity of output.
So maybe that feeling that I’m spending too much time reading up-front is just mistaken. And as mentioned, that feeling seems to conflict with what I’d (tentatively) tend to advise others, which should probably make me more suspicious of the feeling. (This reminds me of asking “Is this how I’d treat a friend?” in response to negative self-talk [source with related ideas].)
(Just my personal, current, non-expert thoughts, as always. Also, I’m not sure I’m addressing precisely the question you had in mind.)
A summary of my recommendations in this vicinity:
If people want to do research and want a menu of ideas/questions to work on, including ideas/questions that seem like they obviously should have a bunch of work on them but don’t yet, they could check out this central directory for open research questions, and/or an overlapping 80,000 Hours post.
If people want to discover “new” instances of such ideas/questions, one option might be to just try to notice ideas/variables/assumptions that seem important to some people’s beliefs, but that seem debatable and vague, have been contested by others, and/or haven’t been stated explicitly and fleshed out.
One way to do this might be to have a go at rigorously, precisely writing out the arguments that people seem to be acting as if they believe, in order to spot the assumptions that seem required but that those people haven’t stated/emphasised.
One could then try to explore those assumptions in detail, either just through more fleshed-out “armchair reasoning”, or through looking at relevant empirical evidence and academic work, or through some mixture of those things.
I think this is a big part of what I’ve done this year.
Here’s one example of a piece of my own work which came from roughly that sort of process.
I’ll add more detailed thoughts below.
---
I interpret this question as being focused on cases in which an idea/open question seems like it should’ve been obvious, or seems obvious in retrospect, yet it has been neglected so far. (Or the many cases we should assume still exist in which the idea/question is still neglected, but would—if and when finally tackled—seem obvious.)
It seems to me that there are two major types of such cases:
Unnoticed: Cases in which the ideas/open questions haven’t even been noticed by almost anyone
Or at least, almost anyone in the relevant community/field.
So I’d still say an idea counts as “unnoticed” for these purposes even if, for example, a very similar idea has been explored thoroughly in sociology, but no one in longtermism has noticed that that idea is relevant to some longtermist issue, nor independently arrived at a similar idea.
Noticed yet neglected: Cases in which the ideas/open questions have been noticed, but no one has really fleshed them out or tackled them much
E.g., a fair number of longtermists have noticed the question of how likely various types of recovery are from various types of civilizational collapse. But as far as I’m aware, there was nothing even approaching a thorough analysis of the question until some recent still-in-progress work, and there’s still room for much more work here.
Another example is questions related to how likely global, stable totalitarianism is; what factors could increase or decrease the odds of that; and what to do about this. Some people have highlighted such questions (including but not only in the context of advanced AI), but I’m not aware of any detailed work on them.
This is really more a continuum than a binary distinction. In almost all cases, there’s probably been someone in a relevant community who’s at least briefly noticed something relevant. But sometimes it’ll just be that something kind-of relevant has been discussed verbally a few times and then forgotten, while other times it’ll be that people have prominently highlighted pretty precisely the relevant open question, yet no one has actually worked on it. (And of course there’ll be many cases in between.)
---
For “noticed yet neglected” ideas/questions, recommendation 1 from above will be more relevant: people could find many ideas/questions of this type in this central directory for open research questions, and just get cracking on them.
That directory is like a map pointing the way to many trees that might be full of low-hanging fruit that would’ve been plucked by now in a better world. And I really would predict that a lot of EAs could do valuable work by just having a go at those questions. (I’m less confident that this is the most valuable thing lots of EAs could be doing, and each person would have to think that through for themselves, in light of their specific circumstances. See also.)
So we don’t necessarily need all EA-aligned researchers to try to cultivate a skill of “noticing the ideas that should’ve been tackled/fleshed out already” (though I’m sure some should). Some could just focus on actually exploring the ideas that have been noticed but still haven’t been tackled/fleshed out.
---
For “unnoticed” ideas/questions, recommendation 2 from above will be more relevant.
I think this dovetails somewhat with Ben Garfinkel calling for[1] more people to just try to rigorously write up more detailed versions of arguments about AI risk that often float around in sketchier or briefer form. (Obviously brevity is better than length, all else held equal, but often a few pages isn’t enough to give an idea proper treatment.)
---
There are at least two other approaches for finding “unnoticed” ideas/questions which seem to have sometimes worked for me, but which I’m less sure would often be useful for many people, and less sure I’ll describe clearly. These are:
Trying to sketch out causal diagrams of the pathway to something (e.g., an existential catastrophe) happening
I think that doing something like this has sometimes helped me notice that there are:
assumptions or steps missing in the standard/fleshed-out stories of how something might happen,
alternative pathways by which something could happen.
Trying to define things precisely, and/or to precisely distinguish concepts from each other, and seeing if anything interesting falls out
Here’s an abstract example, but one which matches various real examples that have happened for me:
I try to define X, but then notice that that definition would fail to cover some cases of what I’d usually think of as X, and/or that it would cover some cases of what I’d usually think of as Y (which is a distinct concept).
This makes me realise that X and/or Y might be able to take somewhat different forms, or occur via different pathways, from what was typically considered, or that there’s actually an extra requirement for X or Y to happen that was typically ignored.
I feel like it’d be easy to misinterpret my stance here.
I actually think that definitions will never or almost never really be “perfect”, and I agree with the ideas in this post (see also family resemblance). And I think that many debates over definitions are largely nitpicking and wasting time.
But I also think that, in many cases, being clearer about definitions can substantially benefit both thought and communication.
---
I should again mention that I’m only ~1.5 years into my research career, so maybe I’ll later change my mind about a bunch of those points, and there are probably a lot of useful things that could be said on this that I haven’t said.
[1] See the parts of the transcript after Howie asks “Do you know what it would mean for the arguments to be more sussed out?”
I don’t work at Rethink Priorities, but I couldn’t resist jumping in with some thoughts, as I’ve been doing a lot of thinking on some of these questions recently.
Thinking vs. reading. I’ve been playing around with spending 15-60 min sketching out a quick model of what I think of something before starting in on the literature (by no means a consistent thing I do though). I find it can be quite nice and help me ask the right questions early on.
Self-consciousness. Idk if this fits exactly but when I started my research position I tried to have the mindset of, ‘I’ll be pretty bad at this for quite a while’. Then when I made mistakes I could just think, ‘right, as expected. Now let’s figure out how to not do that again’. Not sure how sustainable this is but it felt good to start! In general it seems good to have a mindset of research being nearly impossibly hard. Humans are just barely able to do this thing in a useful way and even at the highest levels academics still make mistakes (most papers have at least some flaws).
Optimal hours of work per day. I tend to work about 4-7 hours per day, including meetings and everything. Counting only mentally intensive tasks, I probably get around 4-5 hours a day. Sometimes I’m able to get more if I fall into a good rhythm with something. Looking around at estimates (RescueTime says the average is just ~3 hours per day of productive work), it seems clear I’m hitting a pretty solid average. I still can’t shake the feeling that everyone else is doing more work. Part of this is because people claim they do much more work. I assume this is mostly exaggeration, though, because hours worked is used as a signal of status and of being a hard worker. But still, it’s hard to shake the feeling.
Learning a new field. I just do a lot of literature review. I tend to search for the big papers and meta-analyses, skim lots of them, and try to make a map of what the key questions are and what the answers proposed by different authors are for each question (noting citations for each answer). This helps to distill the field, I think, and serves as something relatively easy to reference. Generally there’s a lot of restructuring that needs to happen as you learn more about a topic area and see that some questions you used were ill-posed or that some papers answer somewhat different questions. In short, this gets messy, but it seems like a good way to start, and sometimes it works quite well for me.
Hard problems. I have a maybe-controversial take that research (even in LT space) is motivated largely by signalling and status games. From this view the advice many gave about talking to people about it sounds good. Then you generate some excitement as you’re able to show someone else you’re smart enough to solve it, or they get excited to share what they know, etc. I think if you had a nice working group on any topic, no matter how boring, everyone would get super excited about it. In general, connecting the solution to a hard problem to social reward is probably going to work well as a motivator by this logic.
Emotional motivators. I’ve been thinking a lot recently about what I’m calling ‘incentive landscaping’. The basic idea is that your system 2 has a bunch of things it wants to do (e.g. have impact). Then you can shape your incentive landscape such that your system 1 is also motivated to do the highest impact things. Working for someone who shares your values is the easiest way to do this as then your employer and peers will reward you (either socially or with promotions) for doing things which are impact-oriented. This still won’t be perfectly optimized for impact but it gets you close. Then you can add in some extra motivators like a small group you meet with to talk about progress on some thing which seems badly motivated, or ask others to make your reward conditional on you completing something your system 2 thinks is important. Still early days for me on this though and I think it’s a really hard thing to get right.
Typing speed. At least when I’m doing reflections or broad thinking, I often circumvent this by doing a lot of voice notes with Dragon. That way I can type at the speed of thought. It’s never perfect, but ~97% of it is readable, so it’s good enough. Then if you want to actually have good notes, you go through and summarize your long jumble of semi-coherent thoughts into something decent-sounding. This has the side effect of some spaced repetition learning as well!
Tiredness, focus, etc. I’ve had lots of ongoing and serious problems with fatigue and have tried many interventions. Certainly caffeine (ideally with l-theanine) is a nice thing to have, but tolerance is an issue. Right now what seems to work for me (no idea why) is a greens powder called Athletic Greens. I’m also trying pro/prebiotics, which might be helping. Magnesium supplementation also might have helped. A medication I was taking was causing some problems as well, occasionally giving me really intense fatigue (again, probably…). It’s super hard to isolate cause and effect in this area, as there are so many potential causes.

I’d say it’s worth dropping a lot of money on different supplements and interventions and seeing what helps. If you can consistently increase energy by 5-10% (something I think is definitely on the table for most people), that adds up really quickly in terms of the amount of work you can get done, happiness, etc. Ideally you’d introduce one intervention at a time for 2-4 weeks each. I haven’t had the patience for that and am currently just trying a few things at once; then I figure I can cut them out one at a time and see what helped. Things I would loosely recommend trying (aside from exercise, sleep, etc.): prebiotics, good multivitamins, checking for food intolerances, and checking whether any pills you take are having adverse effects.

I do also work through tiredness sometimes and find it helpful to do some light exercise (for me, games in VR) to get back some energy. That also works as a decent gauge for whether I’ll be able to push past the tiredness: if playing 10 min of Beat Saber feels like a chore, I probably won’t be able to work. How you rest might also be important. E.g., you might need time with little input so your default mode network can do its thing. No idea how big a deal this is, but I’ve found going for more walks with just music (or silence) to maybe be helpful, especially in that I get more time for reflection.
I’ve also been experimenting with measuring heart rate variability using an app called Welltory. That’s been kind of interesting in terms of raising some new questions though I’m still not sure how I feel about it/how accurate it is for measuring energy levels.
Yeah, I think that perspective on self-consciousness is helpful!
Work hours: I also wonder how much this varies between professions. Maybe that’s worth a quick search and writeup for me at some point. When you go from a field where it’s generally easy to concentrate for a long time every day to a field where it’s generally hard, that may seem disproportionately discouraging when you don’t know about that general difference.
“Try to make a map of what the key questions are and what the answers proposed by different authors are”: Yeah, combining that with Jason’s tips seems fruitful too: When talking to a lot of people, always also ask what those big questions and proposed answers are. More nonobvious obvious advice! :-D
I may try out social incentives and dictation software, but social things are usually draining and sometimes scary for me, so there’d be a tradeoff between the motivation and my energy. And I feel like I think in a particular and particularly useful way while writing but can often not think new thoughts while speaking, but that may be just a matter of practice. We’ll see! And even if it doesn’t work, these questions and answers are not (primarily) for me, and others probably find them brilliantly useful!
I’ve bought some Performance Lab products (following a recommendation from Alex in a private conversation). They have better reviews on Vaga and are a bit cheaper than the Athletic Greens.
“Default mode network”: Interesting! I didn’t know about that.
Hi Denis, thanks for these questions. I’ll give my answers to a bunch of them tomorrow. Just jumping in early with a clarifying question: Could you explain what you mean by “Survival vs. exploratory mindset”, and/or provide a link that explains that distinction? I haven’t heard those terms before, and Google didn’t immediately show me anything that looked relevant.
(Is it perhaps related to exploring vs exploiting?)
Hi Michael! Huh, true, those terms seem to be vastly less commonly used than I had thought.
By survival mindset I mean: extreme risk aversion, fear, distrust toward strangers, little collaboration, isolation, guarded interaction with others, hoarding of money and other things, seeking close bonds with family and partners, etc., but I suppose it also comes with modesty and contentment, equanimity in the face of external catastrophes, vigilance, preparedness, etc.
By exploratory mindset I mean: risk neutrality, curiosity, trust toward strangers, collaboration, outgoing social behavior, making oneself vulnerable, trusting partners and family without much need for ritual, quick reinvestment of profits, etc., but I suppose also a bit lower conscientiousness, lacking preparedness for catastrophes, gullibility, overestimating how much others trust you, etc.
Those categories have been very useful for me, but maybe they’re a lot less useful for most other people? You can just ignore that question if the distinction makes no intuitive sense this way or doesn’t quite fit your world models.
This distinction reminds me of the “survival values vs self-expression values” dimension of the World Values Survey. I’m a bit rusty on those terms, but from skimming a Wikipedia page, I think the “survival” part lines up decently with what you describe as “survival mindset”, but the self-expression part might not line up well with “exploratory mindset”:
Survival values place emphasis on economic and physical security. They are linked with a relatively ethnocentric outlook and low levels of trust and tolerance.
Self-expression values give high priority to subjective well-being, self-expression and quality of life.[1] Some values more common in societies that embrace these values include environmental protection, growing tolerance of foreigners, gays and lesbians and gender equality, rising demands for participation in decision-making in economic and political life (autonomy and freedom from central authority), interpersonal trust, political moderation, and a shift in child-rearing values from emphasis on hard work toward imagination and tolerance.[1]
As for your question: I haven’t thought in terms of survival vs exploratory mindset before, so I don’t think I have a strong view on which is more useful for research (or the situations in which this differs), how often I adopt each mindset, or how I cultivate them. My tentative guess is that the exploratory mindset tends to be more useful and tends to be what I have, but I’m not sure.
I think parts of Rationality: From AI to Zombies (aka “the sequences”) and Harry Potter and the Methods of Rationality have quite useful advice—and a way of making it stick psychologically—that feels somewhat relevant here. E.g., the repeated emphasis and elaboration on “that which can be destroyed by the truth should be”. I have a sense that someone who’s struggling to adopt useful facets of the exploratory mindset might benefit from reading (or re-skimming) one or both of those things.
Yeah, I agree about how well or not well those concepts line up. But I think insofar as I still struggle with probably disproportionate survival mindset, it’s about questions of being accepted socially and surviving financially rather than anything linked to beliefs (maybe indirectly in a few edge cases, but that feels almost irrelevant).
If this is not just my problem, it could mean that a universal basic income could unlock more genius researchers. :-)
I find that being tired makes my mind wander a lot when reading longform things (e.g., papers, posts, not things like Slack messages or emails), so when I’m tired I usually try to do things other than reading.
If I’m just a bit or moderately tired, I usually find I’m still about as able to write as normal. If I’m very tired, I’ll still often be able to write quickly, but then when I later read what I wrote I’ll feel that it was unclear, poorly structured, and more typo-strewn than usual. So when very tired, I try to avoid writing longform things (e.g., actual research outputs).
Things I find I’m still pretty able to do when tired include commenting on documents people want input on (I think I’m more able to focus on this than on regular reading because it’s more “interactive” or something), writing things like EA Forum comments, replying to emails and Slack messages and the like, doing miscellaneous admin-y tasks, and reflecting on the last week/month and planning the next. So I often do a disproportionate amount of such tasks during evenings or during days when I’m more tired than normal, and at other times do a disproportionate amount of reading and “substantive” writing.
Also, I’m fortunate enough to have flexible hours. So sometimes I just work less on days when I’m tired (perhaps spending more time with my wife), and then make up for it on other days.
2 and 3. Self-consciousness and Is there something interesting here?
These questions definitely resonate with me, and I imagine they’d resonate with most/all researchers.
I have a tendency to continually wonder if what I’m doing is what I should be doing, or if I should change my priorities. I think this is good in some ways. But sometimes I’d make better decisions faster if I just actually pursued an idea more “confidently” for a bit, to get more info on whether it’s worth pursuing, rather than just “wondering” about it repeatedly and going back and forth without much new info to work with. Basically, I might do too much self-doubt-style armchair reasoning, with too little actual empirical info.
Also, pursuing an idea more “confidently” for a bit will not only inform me about whether to continue pursuing it further, but also might result in outputs that are useful for others. So I try to sometimes switch into “just commit and focus mode” for a given time period, or until I hit a given milestone, and mostly minimise reflection on what I should prioritise during that time. But so far this has been like a grab bag of heuristics and habits I use, rather than a more precise guideline for myself.
Things that help me with this, and/or some scattered related thoughts, include:
Talking to others and getting feedback, including on early-stage ideas
I liked David and Jason’s remarks on this in their comments
A sort-of minimum viable product and quick feedback loop approach has often seemed useful for me—something like:
First getting verbal feedback from a couple people on a messy, verbal description of an idea
Then writing up a rough draft about the idea and circulating it to a couple more people for a bit more feedback
Then polishing and fleshing out that draft and circulating it to a few more people for more feedback
Then posting publicly
(But only proceeding to the next step if evidence from the prior one—plus one’s own intuitions—suggested this would be worthwhile)
Feedback has often helped me determine whether an idea is worth pursuing further, feel more comfortable/motivated with pursuing an idea further (rather than being mired in unproductive self-doubt), develop the idea, work out which angles of it are most worth pursuing, and work out how to express it more clearly
Reminding myself that I haven’t really gathered any new info since the last time I thought “Should this really be what I spend my time on?”, so thinking about that again is unlikely to reveal new insights, and is probably just a stupid part of my psychology rather than something I’d endorse.
I might think to myself something like “If a friend was doing this, you’d think it’s irrational, and gently advise them to just actually commit for a bit and get new info, right? So shouldn’t you do the same yourself?”
Remembering that Algorithms to Live By draws an analogy to a failure mode in which computers continually reprioritise tasks, and the reprioritisation takes up just enough processing power that no actual progress on any of the tasks occurs; this can cycle forever. The way to get out of this is to at some point just do tasks, even without having confidence that these “should” be top priority.
This is just my half-remembered version of that part of the book, and might be wrong somehow.
Remembering that I’d be deeply uncertain about the “actual” value of any project I could pursue, because the world is very complicated and my ambitions (contribute to improving the long-term future) are pretty lofty. The best I can do is something that seems good in expected value but with large error bars. So the fact I feel some uncertainty and doubt provides basically no evidence that this project isn’t worth pursuing. (Though feeling an unusually large amount of uncertainty and doubt might.)
Remembering that, if the idea ends up seeming to have not been important but there was a reasonable ex ante case that it might’ve been important, there’s a decent chance someone else would end up pursuing it if I don’t. So if I pursue it, then find out it seems to not be important, then write about what I found, that might still have the effect of causing an important project to get done, because it might cause someone else to do that important project rather than doing something similar to what I did.
Examples to somewhat illustrate the last two points:
This year, in some so-far-unpublished work, I wrote about some ideas that:
I initially wasn’t confident about the importance of
Seemed like they should’ve been obvious to relevant groups, but seemed not to have been discussed by them. And that generally seems like (at least) weak evidence that an idea either (a) actually isn’t important or (b) has been in essence discussed in some other form or place that I just am not familiar with.
So when I had the initial forms of these ideas and wasn’t sure how much time (if any) to spend on them, I took roughly the following approach:
I developed some thoughts on some of the ideas. Then I shared those thoughts verbally or as very rough drafts with a small set of people who seemed like they’d have decent intuitions on whether the ideas were important vs unimportant, somewhat novel vs already covered, etc.
In most cases, this early feedback indicated that it was at least plausible that the ideas were somewhat important and somewhat novel. This—combined with my independent impression that these ideas might be somewhat important and novel—seemed to provide sufficient reason to flesh those ideas out further, as well as to flesh out related ideas (which seemed like they’d probably also be important and novel if the other ideas were, and vice versa).
So I did so, then shared that slightly more widely. Then I got more positive feedback, so I bothered to invest the time to polish the writings up a bit more.
Meanwhile, when I fleshed one of the ideas out a little, it seemed like that one turned out to probably not be very important at all. So with that one, I just made sure that my write-up made it clear early on that my current view was that this idea probably didn’t matter, and I neatened up the write-up just a bit, because I still thought the write-up might be a bit useful either to:
Explain to others why they shouldn’t bother exploring the same thing
Make it easy for others to see if they disagreed with my reasoning for why this probably didn’t matter, because I might be wrong about that, and it might be good for others to quickly check that reasoning
Having spent time on that idea sort-of felt in hindsight silly or like a mistake. But I think I probably shouldn’t see that as having been a bad decision ex ante, given that:
It seems plausible that, if not for my write-up, someone else would’ve eventually “wasted” time on a similar idea
This was just one out of a set of ideas that I tried to flesh out and write up, many/most of which still (in hindsight) seem like they were worth spending time on
So maybe it’s very roughly like I gave 60% predictions for each of 10 things, and decided that that’d mean the expected value of betting on those 10 things was good, and then 6 of those things happened, suggesting I was well-calibrated and was right to bet on those things
(I didn’t actually make quantitative predictions)
And some of the other ideas were in between—no strong reason to believe they were important or that they weren’t—so I just fleshed them out a bit and left it there, pending further feedback. (I also had other things to work on.)
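The reprioritisation failure mode from Algorithms to Live By that I mentioned above can be sketched as a toy simulation. All the numbers and the `run` function here are invented purely for illustration:

```python
# Toy model of the "thrashing" failure mode described in Algorithms to
# Live By: if reprioritising eats your whole time budget each cycle,
# no task ever progresses. All numbers here are made up for illustration.

def run(cycles, budget_per_cycle, replan_cost, task_sizes):
    """Return the total units of real work completed."""
    done = 0
    remaining = list(task_sizes)
    for _ in range(cycles):
        budget = budget_per_cycle - replan_cost  # pay the replanning tax first
        if budget <= 0 or not remaining:
            continue  # the whole cycle was consumed by reprioritising
        work = min(budget, remaining[0])  # work on the current top task
        remaining[0] -= work
        done += work
        if remaining[0] == 0:
            remaining.pop(0)
    return done

# Always replanning: zero real work. Just committing: steady progress.
print(run(cycles=10, budget_per_cycle=1, replan_cost=1, task_sizes=[5, 5, 5]))  # 0
print(run(cycles=10, budget_per_cycle=1, replan_cost=0, task_sizes=[5, 5, 5]))  # 10
```

The point, of course, is just the qualitative contrast between the two runs, not the specific numbers.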
Yeah, I even mentioned this idea (about preventing someone from “wasting” time on a dead end you already explored) in a blog post a while back. :-D
Much of the bulk of the iceberg is research, which has the interesting property that negative results – if they come from a high-quality, sufficiently powered study – can often be useful. If the 100 EAs from the introduction (under 1.a.) are researchers who know that one of the plausible ideas has to be right, and 99 of those ideas have already been shown not to be useful, then the final EA researcher can eliminate 99% of the work with very little effort by relying on what the others have already done. The bulk of that impact iceberg was thanks to the other researchers. Insofar as research is a component of the iceberg, it’s a particularly strong investment.
It’s also important to be transparent about one’s rigor and to make the negative results findable for others. The second is obvious. The first is because the dead end may not actually be a dead end but only looked that way given the particular way in which you had resolved the optimal stopping problem of investigating it (even) further.
I agree with these points, and think that they might sometimes be under-appreciated (both in and outside of EA).
To sort-of restate your points:
I think it’s common for people to not publish explorations that turned out to seem to “not reveal anything important” (except of course that this direction of exploration might be worth skipping).
Much has been written about this sort of issue, and there can be valid reasons for that behaviour, but sometimes it seems unfortunate.
I think another failure mode is to provide some sort of public info of your belief that this direction of exploration seems worth skipping, but without sufficient reasoning transparency, which could make people rule this out too much/too early.
Again, there can be valid reasons for this (if you’re sufficiently confident that it’s worth ruling out this direction and you have sufficiently high-value other things to do, it might not be worth spending time on a write-up with high reasoning transparency), but sometimes it seems unfortunate.
Part of this reminds me a lot of CFAR’s approach here (I can’t quite tell whether Julia Galef is interviewer, interviewee, or both):
For example, when I’ve decided to take a calculated risk, knowing that I might well fail but that it’s still worth it to try, I often find myself worrying about failure even after having made the decision to try. And I might be tempted to lie to myself and say, “Don’t worry! This is going to work!” so that I can be relaxed and motivated enough to push forward.
But instead, in those situations I like to use a framework CFAR sometimes calls “Worker-me versus CEO-me.” I remind myself that CEO-me has thought carefully about this decision, and for now I’m in worker mode, with the goal of executing CEO-me’s decision. Now is not the time to second-guess the CEO or worry about failure.
Your approach of gathering feedback and iterating on the output, refining it with every round while also deciding whether another round is worth it, sounds great!
I think a lot of people aim for such a process, or want to after reading your comment, but will be held back from showing their first draft to their first round of reviewers. They may worry that the reviewers will think badly of them for addressing a topic of this particular level of perceived difficulty or relevance (maybe it’s too difficult or too irrelevant in the reviewer’s opinion), or for a particular wording, or because they think you should’ve anticipated a negative effect of writing about the topic and not done so (e.g., some complex acausal trade or social dynamics thing that didn’t occur to you), or they may just generally have diffuse fears holding them back. Such worries are probably disproportionate, but still, overcoming them will probably require particular tricks or training.
I like that “Worker-me versus CEO-me” framing, and hadn’t heard of it or seen that page, so thanks for sharing that. It does seem related to what I said in the parent comment.
I share the view that it’ll be decently common for a range of disproportionate worries to hold people back from striking out into areas that seem good in expected value but very uncertain and with real counterarguments, and from sharing early-stage results from such pursuits. I also think there can be a range of good reasons to hold back from those things, and that it can be hard to tell when the worries are disproportionate!
I imagine it’d be hard (though not impossible) to generate advice on this that’s quite generally useful without being vague/littered with caveats. People will probably have to experiment to some extent, get advice from trusted people on their general approach, and continue reflecting, or something like that.
I wish I had a better answer to the first one than “become good at writing”. My own pathway was reading loads and loads, and writing loads and loads, and then essentially mimicking the writing that I liked (mainly Pratchett tbh) until eventually I noticed that I’d stopped doing that and had a recognisable style of my own. I sometimes go through my old emails from before I was a journalist and see I’ve just written needlessly long show-offy emails to friends, which I cringe about a bit now, but they were clearly practice for when I had to do it for real.
Actually, also, I did philosophy at uni and MA, and I found that the way I learnt to structure an argument in those essays has been really helpful.
Oh and this might sound silly but become good at typing. If you can type as fast as you think then when the ideas are flowing quickly then they just sort of appear on the page. I used to work as a medical secretary for a long time and I swear that helped me an awful lot, not least in transcribing interviews but also just in being able to get ideas down quickly.
As for getting it published: pitch! Ideally start by developing a relationship with some editor somewhere. It might be a good idea to blog as well, so that you can point people to stuff you’ve written. [emphasis added]
I’m not actually sure if the precise problem you’re describing resonates with me. I definitely often feel very uncertain about:
whether the goal I’m striving towards really matters at all
even if so, whether it’s a goal worth prioritising
whether I should prioritise it (is it my comparative advantage?)
whether anything I produce in pursuing this goal will be of any use to anyone
But I’m not sure there have been cases where, for a week or more, I didn’t feel like I was at least progressing towards:
having the sort of output I had planned or now planned to produce (setting aside the question of whether that output will be useful to anyone), and/or
deciding (for good reason) to not bother trying to create that sort of output
Note that I’d count as “progress” cases where I explored some solutions/options that I thought might work/be useful for X, and all turned out to be miserable wastes of time, so I can at least rule those out and try something else next week. I’d also count cases where I learned other potentially useful things in the process of pursuing dead ends, and that knowledge seems likely to somehow benefit this or other projects.
It is often the case that my estimate of how many remaining days something will take is longer at the end of the week than it was at the beginning of the week. But this is usually coupled with me thinking that I have made some sort of progress—I just also realised that some parts will be harder than I thought, or that I should do a more thorough job than I’d planned, or something like that.
(But I feel like maybe I’m just interpreting your question differently to what you intended.)
In a private conversation we figured out that I may tend too much toward setting specific goals and then only counting achievement of those goals as success, ignoring all the little things that I learn along the way. If the goal is hard to achieve, I have to learn a lot of little things on the way, and that takes time; but if I don’t count those little things as little successes, my feedback gets too sparse and I lose motivation. So noticing little successes seems valuable.
(Disclaimer: I’m just reporting on my own experience, and think people will vary a lot in this sort of area, so none of the following is even slightly a recommendation.)
In general:
Personally, I seem to just find it pretty natural to spend a lot of hours per week doing work-ish things
I tend to be naturally driven to “work hard” (without it necessarily feeling much like working) by intellectual curiosity, by a desire to produce things I’m proud of, and by a desire for positive attention (especially but not only from people whose judgement I particularly respect)
That third desire in particular can definitely become a problem, but I try to keep a close eye on it and ensure that I’m channeling that desire towards actions I actually endorse on reflection
I do get run down sometimes, and sometimes this has to do with too many hours per week for too many weeks in a row. But the things that seem more liable to run me down are feeling that I lack sufficient autonomy in what I do, how, and when; or feeling that what I’m doing isn’t valuable; or feeling that I’m not developing skills and knowledge I’ll use in future
That last point means that one type of case in which I do struggle to be motivated is cases where I know I’m going to switch away from a broad area after finishing some project, and that I’m unlikely to use the skills involved in that project again.
In these cases, even if I know that finishing that project to a high standard would still be valuable and is worth spending time on, it can be hard for me to be internally motivated to do so, because it no longer feels like doing so would “level me up” in ways I care about.
I seem to often become intensely focused on a general area in an ongoing way (until something switches my focus to another area), and just continually think about it, in a way that feels positive or natural or flow-like or something
This happened for stand-up comedy, then for psychology research, then for teaching, then for EA stuff (once I learned about EA)
(The other points above likewise applied during each of those four “phases” of my adult life)
Luckily, the sort of work I do now:
is very intellectually stimulating
involves producing things I’m (at least often!) proud of
can bring me positive attention
allows me a sufficient degree of autonomy
seems to me to be probably the most valuable thing I could realistically be doing at the moment (in expectation, and with vast uncertainty, of course)
involves developing skills and knowledge I expect I might use in future
That means it’s typically been relatively easy for me to stay motivated. I feel very fortunate both to have the sort of job and “the sort of psychology” I’ve got. I think many people might, through no fault of their own, find it harder to be emotionally motivated to spend lots of hours doing valuable work, even when they know that that work would be valuable and they have the skills to do it. Unfortunately, we can’t entirely choose what drives us, when, and how.
(There’s also a scary possibility that my tendency so far to be easily motivated to work on things I think are valuable is just the product of me being relatively young and relatively new to EA and the areas I’m working in, and that that tendency will fade over time. I’d bet against that, but could be wrong.)
Awesome! For me, the size of an area plays a role in how long I have a high level of motivation for it. When you’re studying a board game, there are only a few activities, they’re all quite similar, and once you’ve tried them all you might run out of motivation within a year. This happened to me with Othello. But computer science or EA are so wide that if you lose motivation for some subfield of decision theory, you can move on to another subfield of decision theory, or to something else entirely, like history. And there are probably a lot of such subareas where potentially impactful investigations are waiting to be done. So it makes sense to me to be optimistic about having long-sustained motivation for such a big field.
My motivation did shift a few times, though. I think before 2012 it was more a “This is probably hopeless, but I have to at least try on the off-chance that I’m in a world where it’s not hopeless.” 2012–2014 it was more “Someone has to do it and no one else will.” After March 28, 2014, it was carried a lot by the sudden enormous amount of hope I got from EA. On October 28, 2015, I suddenly lost an overpowering feeling of urgency and became able to consider more long-term strategies than a decade or two. Even later, I became increasingly concerned with coordination and risk from regression to the (lower) mean.
I’d be surprised if typing speed was a big factor explaining differences in how much different researchers produce, or in their ability to produce certain types of output. (But of course, that claim is pretty vague—how surprised would I be? What do I mean by “big factor?”)
But I just did a typing test, and got 92wpm (with “medium” words, and 1 typo), which is apparently high. So perhaps I’m just taking that for granted and not recognising how a slower typing speed could’ve limited me. Hard to say.
I don’t know if I have a great, well-chosen, or transferable method here, so I think people should pay more attention to my colleagues’ answers than mine. But FWIW, I tend to do a mixture of:
reading Wikipedia articles
reading journal article abstracts
reading a small set of journal articles more thoroughly
listening to podcasts
listening to audiobooks
watching videos (e.g., a Yale lecture series on game theory)
talking to people who are already at least sort-of in my network (usually more to get a sounding board or “generalist feedback”, rather than to leverage specific expertise of theirs)
Whether I take many notes depends on whether I’m just learning about a field because I think it might be useful in some way in future for me to know about that field, or because I have at least a vague idea of a project I might work on within that field (e.g., “how bad would various possible types of nuclear wars be, from a longtermist perspective?”). In the latter case, I’ll take a lot of notes as I go in Roam, beginning to structure things into relevant sub-questions, things to learn more about, etc.
Since leaving university, I haven’t really made much use of textbooks, flashcards, or reaching out to experts who aren’t already in my network. It’s not that I actively chose not to make much use of these things (it’s just that I never actively chose to make much use of them), and I think it’s plausible that I should use them more. I’ll very likely talk to a bunch of experts for my current or upcoming research projects.
Wow! Thanks for all the insightful answers, everyone!
Would anyone mind if I transfer these into a post on my blog (or a separate post in the EA Forum) that is linear in the sense that there is one question and then all answers to it, then the next question and all answers to it, and so on? That may also generate more attention for these answers. :-)
I think it would be valuable to publish these as a sequence of questions on the forum and let others chime in and have a more thorough discussion. Perhaps even spaced out over time, say one or two per week.
I’ve been very impressed with your work, and I’m looking forward to you hopefully making similarly impressive contributions to probing longtermism!
But when it comes to questions: You did say “anything,” so may I ask some questions about productivity when it comes to research in particular? Please pick and choose from these to answer any that seem interesting to you.
Thinking vs. reading. If you want to research a particular topic, how do you balance reading the relevant literature against thinking yourself and recording your thoughts? I’ve heard second-hand that Hilary Greaves recommends thinking first so as to be unanchored by the existing literature and the existing approaches to the problem. Another benefit may be that you start out reading the literature with a clearer mental model of the problem, which might make it easier to stay motivated and to remain critical/vigilant while reading. Would you agree or do you have a different approach?
Self-consciousness. I imagine that virtually any research project, successful and unsuccessful, starts with some inchoate thoughts and notes. These will usually seem hopelessly inadequate but they’ll sometimes mature into something amazingly insightful. Have you ever struggled with mental blocks when you felt self-conscious about these beginnings, and have you found ways to (reliably) overcome them?
Is there something interesting here? I often have some (for me) novel ideas, but then it turns out that whether true or false, the idea doesn’t seem to have any important implications. Conversely, I’ve dismissed ideas as unimportant, and years later someone developed them – through a lot of work I didn’t do because I thought it wasn’t important – into something that did connect to important topics in unanticipated ways. Do you have rules of thumb that help you assess early on whether a particular idea is worth pursuing?
Survival vs. exploratory mindset. I’ve heard of the distinction between survival mindset and exploratory mindset, which makes intuitive sense to me. (I don’t remember where I learned of these terms, but I tried to clarify how I use them in a comment below.) I imagine that for most novel research, exploratory mindset is the more useful one. (Or would you disagree?) If it doesn’t come naturally to you, how do you cultivate it?
Optimal hours of work per day. Have you found that a particular number of hours of concentrated work per day works best for you? By this I mean time you spend focused on your research project, excluding time spent answering emails, AMAs, and such. (If hours per day doesn’t seem like an informative unit to you, imagine I asked “hours per week” or whatever seems best to you.)
Learning a new field. I don’t know what I mean by “field,” but probably something smaller than “biology” and bigger than “how to use Pipedrive.” If you need to get up to speed on such a field for research that you’re doing, how do you approach it? Do you read textbooks (if so, linearly or more creatively?) or pay grad students to answer your questions? Does your approach vary depending on whether it’s a subfield of your field of expertise or something completely new?
Hard problems. I imagine that you’ll sometimes have to grapple with problems that are sufficiently hard that it feels like you didn’t make any tangible progress on them (or on how to approach them) for a week or more. How do you stay optimistic and motivated? How and when do you “escalate” in some fashion – say, discuss hiring a freelance expert on some other field?
Emotional motivators. It’s easy to be motivated on a System 2 basis by the importance of the work, but sometimes that fails to carry over to System 1 when dealing with some very removed or specific work – say, understanding some obscure proof that is relevant to AI safety along a long chain of tenuous probabilistic implications. Do you have tricks for how to stay System 1 motivated in such cases – or when do you decide that a lack of motivation may actually mean that something is wrong with the topic and you should question whether it is sufficiently important?
Typing speed. I have this pet theory that a high typing speed is important for some forms of research that involve a lot of verbal thinking (e.g., maybe not maths). The idea is that our memory is limited, so we want to take notes of our thoughts. But handwriting is slow, and typing is only mildly faster, so unless one thinks slowly or types very fast, there is a disconnect that causes continual stalling, impatience, and forgotten ideas, and prevents the process from flowing. Does that make any intuitive sense to you? Do you have any tricks (e.g., dictation software)?
Obvious questions. Nate Soares has an essay on “obvious advice.” Michael Aird mentioned that in many cases he just wanted to follow up on some obvious ideas. They were obvious in hindsight, but evidently they hadn’t been obvious to anyone else for years. Is there a distinct skill of “noticing the obvious ideas” or “noticing the obvious open questions”? And can it be trained or turned into a repeatable process?
Tiredness, focus, etc. We sometimes get tired or have trouble focusing. Sometimes this happens even when we’ve had enough sleep (just to get an obvious solution out of the way: sleep/napping). What are your favorite things to do when focusing seems hard or you feel tired? Do you use any particular nootropics, supplements, air quality monitor, music, or exercise routine?
Meta. Which of these questions would you like to see answered by more people because you are interested in the answers too?
Thank you kindly! And of course just pick out the questions you think are interesting for you or other readers to answer. :-)
I can answer 6, as I’ve been doing it for Wild Animal Welfare since I was hired in September. WAW is a new and small field, so it is relatively easy to learn, but there’s still so much! I started by going backwards (into the Welfare Biology movement of the 80s and 90s) and forwards (into the WAW EA orgs we know today) from Brian Tomasik, consulting the primary literature on various specific matters of fact. A great thing about WAW being such a young field (and so concentrated in EA) is that I can reach out to basically anyone who’s published on it and have a real conversation. It’s a big shortcut!
I should note that my background is in Evolutionary Biology and Ecology, so someone else might need a lot more background in those basics if they were to learn WAW.
Hi Denis,
Lots of really good questions here. I’ll do my best to answer.
Thinking vs reading: I think it depends on the context. Sometimes it makes sense to lean toward thinking more and sometimes it makes sense to lean toward reading more. (I wouldn’t advise focusing exclusively on one or the other.) Unjustified anchoring is certainly a worry, but I think reinventing the wheel is also a worry. One could waste two weeks groping toward a solution to a problem that could have been solved in an afternoon just by reading the right review article.
Self-consciousness: Yep, I am intimately familiar with hopelessly inchoate thoughts and notes. (I’m not sure I’ve ever completed a project without passing through that stage.) For me at least, the best way to overcome this state is to talk to lots of people. One piece of advice I have for young researchers is to come to terms with sharing your work with people you respect before it’s polished. I’m very grateful to have a large network of collaborators willing to listen to and read my confused ramblings. Feedback at an early stage of a project is often much more valuable than feedback at a later stage.
Is there something interesting here?: Yep, this also happens to me. Unfortunately, I don’t have any particular insight. Oftentimes the only way to know whether an idea is interesting is to put in the hard exploratory work. Of course, one shouldn’t be afraid to abandon an idea if it looks increasingly unpromising.
Survival vs. exploratory mindset: Insofar as I understand the terms, an exploratory mindset is an absolute must. Not sure how to cultivate it, though.
Optimal hours of work per day: I work between 4 and 8 hours a day. I don’t find any difference in my productivity within that range, though I imagine if I pushed myself to work more than 8, I would pretty quickly hit diminishing returns.
Learning a new field: I can’t emphasize enough the value of just talking to existing experts. For me at least, it’s by far the most efficient way to get up-to-speed quickly. For that reason, I really value having a large network of diverse people I can contact with questions. I put a fair amount of effort into cultivating such a network.
Hard problems: I’m fortunate that my work is almost always intrinsically interesting. So even if I don’t make progress on a problem, I continue to be motivated to work on it because the work itself is so very pleasant. That said, as I’ve emphasized above, when I’m stuck, I find it most helpful to talk to lots of people about the problem.
Emotional motivators: When I reflect on my life as a whole, I’m happy that I’m in a career that aims to improve the world. But in terms of what gets me out of bed in the morning and excited to work, it’s almost never the impact I might have. It’s the intrinsically interesting nature of my work. I almost certainly would not be successful if I did not find my research to be so fascinating.
Typing speed: No idea what my typing speed is, but it doesn’t feel particularly fast, and that doesn’t seem to handicap me. I’ve always considered myself a slow thinker, though.
Obvious questions: Yeah, I think there is a general skill of “noticing the obvious.” I don’t think I’m great at it, but one thing I do pretty often is reflect on the sorts of things that appear obvious now that weren’t obvious to smart people ~200 years ago.
Tiredness, focus, etc.: Regular exercise certainly helps. Haven’t tried anything else. Mostly I’ve just acclimated to getting work done even though I’m tired. (Not sure I would recommend that “solution,” though!)
Meta: I’d like to see others answer questions 1, 3, 6, 7, and 10.
Your advice to talk to people is probably the most important to me! I haven’t tried it a lot, but when I did, it was very successful. One hurdle is not wanting to come off as stupid to the other person (though there are also people who put me sufficiently at ease that I don’t mind coming off as stupid), and another is not wanting to waste people’s time. So I want to first be sure that I can’t just figure it out myself within ~ 10x the time. Maybe that’s a bad tradeoff. I also sometimes worry that people would actually like to chat more, but my reluctance to waste their time gets in the way of both our interests in chatting. (Maybe they have the same reluctance, and both of us would be happier if neither of us had it. Can we have a Reciprocity.io for talking about research, please? ^^)
Typing speed: Haha! You can test it here, for example: https://10fastfingers.com/typing-test/english. I’ve been stagnating at ~ 60 WPM for years now. Maybe there’s some sort of distinction where some brains are more optimized toward (e.g., because of worse memory), or more incentivized to optimize toward (e.g., through positive feedback), fewer low-level concepts, while others lean more toward high-level concepts. So when it comes to measures of performance that have time in the denominator, the first group hits diminishing marginal returns early while the second keeps speeding up for a long time. Maybe the second group is, in turn, less interested in understanding things from first principles, which might make them less innovative. Just random speculation.
Obvious questions: Yeah, I’ve been wondering how it can be that nowadays a lot of people independently come up with cases for nonhuman rights and for altruism regardless of distance, while a century ago seemingly almost no one did. Maybe such people did exist, but most are lost to history, and I just don’t know about those who aren’t (though I can think of some examples). Or maybe culture was so different that a lot of the frameworks these ideas attach to weren’t there. So if moral genius is, say, normally distributed, then values-spreading could have the benefit that it increases the number of people who use the relevant frameworks, and thereby also increases the absolute number of moral geniuses who work within those frameworks. The values would have to be sufficiently cooperative not to risk zero-sum competition between values. I suppose that’s similar to Bostrom’s Megaearth scenario, except with people who share certain frameworks in their thinking rather than the pure number of people.
Getting work done when tired: Well, to some degree I noticed that I over-update on tiredness, and then get into a negative feedback loop where I give up on things too quickly because I think I’m too tired to do them. At that point I’m usually not actually particularly tired.
(Sorry for barging in on this thread :D)
Regarding talking to people to get early feedback, get up to speed in a field, etc., you might find this post useful (if you haven’t already seen it).
I find this relatable. Relatedly, in the above-linked post, Michelle Hutchinson (the author) wrote:
I commented that I’d slightly push back on that passage, saying:
Thanks! Yeah, I sometimes wonder about that. I suppose in rationality-adjacent circles I can just ask what someone’s preference is (free-wheeling chat or no-nonsense and to the point). Maybe that’d be a faux pas or weird in general, but I think it should be fine among most EAs?
Personally, I’m very self-conscious about my work and tend to wait too long to share it. But the culture of RP seems to fight that tendency, which I think is very productive!
Thanks! This is something I sometimes struggle with, I think. Is the culture just all about sharing early and often and helping each other, or are there other aspects of the culture that I may not anticipate that help you overcome this self-consciousness? :-)
1. Thinking vs. reading.
Another benefit of thinking before reading is that it can help you develop your research skills. Noticing some phenomenon and then developing a model to explain it is a super valuable exercise. If it turns out you’ve reproduced something that someone else has already done and published, then great: you’ve gotten experience solving a problem and shown that you can think through it at least as well as some expert in the field. If it turns out that you’ve produced something novel, then it’s time to see how it compares to existing results in the literature and get feedback on how useful it is.
This said, I think this is more true for theoretical work than applied work, e.g. the value of doing this in philosophy > in theoretical economics > in applied economics. A fair amount of EA-relevant research is summarising and synthesising what the academic literature on some topic finds and it seems pretty difficult to do that by just thinking to yourself!
3. Is there something interesting here?
I mostly try to work out how excited I am by this idea and whether I could see myself still being excited in 6 months, since for me having internal motivation to work on a project is pretty important. I also try to chat about this idea with various other people and see how excited they are by it.
4. Survival vs. exploratory mindset.
I also haven’t heard these terms before, but from your description (which frames a survival mindset pretty negatively), an exploratory mindset comes fairly naturally to me, and therefore I haven’t ever actively cultivated it. Lots of research projects fail, so extreme risk aversion in particular seems like it would be bad for researchers.
5. Optimal hours of work per day.
I typically aim for 6-7 hours of deep work a day and a couple of dedicated hours for miscellaneous tasks and meetings. Since starting part-time at RP I’ve been doing 6 days a week (2 RP, 4 PhD), but before that I did 5. I find RP deep work less taxing than PhD work. 6 days a week is at the upper limit of manageable for me at the moment, so I plan to experiment with different schedules in the new year.
6. Learning a new field.
I’m a big fan of textbooks and schedule time to read a couple of textbook chapters each week. LessWrong’s “best textbooks on every subject” thread is pretty good for finding them. I usually make Anki flashcards to help me remember the key facts, but I’ve recently started experimenting with Roam Research for taking notes, which I’m also enjoying, so my “learning flow” is in flux at the moment.
8. Emotional motivators.
My main trick for dealing with this is to always plan my day the night before. I let System 2 Dave work out what is important and needs to be done, and put blocks in the calendar for these things. When System 1 Dave is working the next day, his motivation doesn’t end up mattering so much, because he can easily defer to what System 2 Dave said he should do. I don’t read too much into a lack of System 1 motivation: it happens, and I haven’t noticed that it’s particularly correlated with how important the work is. It’s more correlated with things like how scary it is to start some new task, and with irrelevant things like how much sunlight I’ve been getting.
9. Typing speed.
I struggle to imagine typing speed being a binding constraint on research productivity, since I’ve never found it to be a problem for getting into flow. But when I just checked, my WPM was 85, so maybe I’d feel differently if it were slower. When I’m coding, the vast majority of my time is spent thinking about how to solve the problem I’m facing, not typing the code that solves it. When I’m writing first drafts, I think typing speed is a bit more helpful, for the reasons you mention, but again more time goes into planning the structure of what I want to say, and into polishing, than into the first pass at writing where speed might help.
11. Tiredness, focus, etc.
My favourite thing to do is to stop working! Not all days can be good days and I became a lot happier and more productive when I stopped beating myself up for having bad days and allowed myself to take the rest of the afternoon off.
12. Meta.
I skipped the questions I didn’t have much to say about, so I’d be happy to see answers to them from others!
Thank you! Using the thinking vs. reading balance as a feedback mechanism is an interesting take, and in my experience it’s also most fruitful in philosophy, though I can’t compare with those branches of economics.
Survival mindset: I suppose it serves its purpose when you’re in a very low-trust environment, but it’s probably not necessary most of the time for most aspiring EA researchers.
Thanks for linking that list of textbooks! It’s also been helpful for me in the past. :-D
Planning the next day the evening before also seems like a good thing to try for me. Thanks!
I wonder whether you all have such fairly high typing speeds simply because you all type a lot or whether 80+ WPM is a speed threshold that is necessary to achieve before one ceases to perceive typing speed as a limiting factor. (Mine is around 60 WPM.)
I hope you can get your work hours down to a manageable level!
It was interesting to read, thanks for the answers :)
A small remark, which may be of use as you said you used Anki and now using Roam—The Roam Toolkit add-on allows you to use spaced-repetition in Roam.
#9 Typing speed: My own belief is that typing speed is probably less important than you seem to think. But I care enough about it that I logged 53 minutes of typing practice on keybr this year (usually during moments where I’m otherwise not productive and just want to get “in flow” doing something repetitive), and I suspect I could still productively use another 3–5 hours of typing practice next year, even if it trades off against deep work time (and presumably many more hours than that if it doesn’t).
#10 Obvious questions: I suspect that while ignoring or not noticing “obvious” questions and advice is sometimes a coincidental unforced error, more often than not there is some form of motivated reasoning going on behind the scenes (e.g., because the obvious story would invalidate a hypothesis I’m wedded to, because it involves unpleasant tradeoffs, because some beliefs are lower prestige, or because it makes the work I do seem less important). I think carefully training myself to notice these things has been helpful, though I suspect I still miss a lot of obvious stuff.
#11 Tiredness, focus, etc.: I haven’t figured this out yet and am keen to learn from my coworkers and others! Right now I take a lot of caffeine, and I suspect that if I were more careful about optimization I should be cycling drugs on a weekly basis rather than taking the same one every day (especially a drug like caffeine, which has tolerance and withdrawal effects).
Typing speed: Interesting! What is your typing speed?
Obvious questions: Thanks, I’ll keep that in mind. It seems unlikely to be the case for me, but I haven’t tried to observe such a connection either. I’ve observed the opposite tendency in myself: I worry about being wrong, so I probe all the ways in which I may be wrong a lot, which has had the unintended negative effect that I’m too likely to abandon old approaches in favor of ones I’ve only just heard of, because for the latter I haven’t yet come up with as many counterarguments. I also find rehearsing stuff that I already believe yucky and boring in ways that rehearsing counterarguments is not. But of course I might be falling for both traps in different contexts.
Only 57.9, according to keybr. I suspect that a) typing practice would be less helpful for me if my typing speed were higher (like David’s), and b) my current typing speed is below average for programmers (not sure about researchers).
(It’s probably relevant/bad that my default typing style on those typing-test layouts (26 characters + space) only uses about 5 fingers. I think I go up to 8 on a more normal paragraph like this one that also uses shift/return/slash/number pad. If I were focused on systematic rather than incremental changes to my typing speed, I’d try to figure out how to force myself to use all 10 fingers.)
Obvious questions
Hmm, I think a lot of people have motivated reasoning of the form I describe, but I don’t know you well enough, and I definitely don’t think all people are like this.
There is certainly a danger as well of being too contrarian or self-critical.
Have you tried calibration practice?
Maybe also make an explicit effort to write down key beliefs and numerical probabilities (or even just words for felt senses) to record and eventually correct for overupdating on new arguments/evidence (if this is indeed your issue).
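That record-and-eventually-correct idea can be done with pen and paper, but for anyone who prefers a script, here is a minimal sketch of what I mean (my own toy illustration, not a tool anyone in this thread mentioned; the function name and the 0.1-wide buckets are arbitrary choices):

```python
# Toy calibration tracker: log (stated probability, actual outcome) pairs,
# then compare stated confidence against observed frequency per bucket.
from collections import defaultdict

def calibration_by_bucket(predictions):
    """predictions: iterable of (stated_probability, outcome_bool) pairs.
    Returns {bucketed_probability: observed_frequency}."""
    buckets = defaultdict(list)
    for prob, outcome in predictions:
        # Group predictions into 0.1-wide buckets by rounding.
        buckets[round(prob, 1)].append(1 if outcome else 0)
    return {p: sum(hits) / len(hits) for p, hits in sorted(buckets.items())}

# Hypothetical belief log: each entry is (confidence when recorded, what happened).
log = [
    (0.9, True), (0.9, True), (0.9, False),  # claims stated at ~90%
    (0.6, True), (0.6, False),               # claims stated at ~60%
]
print(calibration_by_bucket(log))  # observed frequency per stated probability
```

If the observed frequency for your “90%” bucket comes out well below 0.9 specifically for beliefs you adopted right after hearing a new argument, that would be some evidence of the overupdating pattern discussed above.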
Do you use the guided lessons of Keybr or a custom text? I think the guided lessons are geared toward your weaknesses, which probably leads to a lower speed than what you’d achieve with the average text.
That’s something where I’ve never felt bottlenecked by my typing speed. Learning to touch-type was very useful, though, because it gave me a lot more freedom with screen configurations. (As was switching to a keyboard layout other than German, where most brackets are super hard to reach. I use a customized Colemak.)
Yeah, it’s on my list of things I want to practice more, but the few times I did some tests I was mostly well-calibrated already (with the exception of one probability level, or whatever they’re called). There’s surely room for improvement, though. Maybe I’ll do worse if the questions are from an area that I think I know something about. ^^
Maybe I’m also too easily swayed by people who speak with an air of confidence. I might be falling for some sort of typical-mind fallacy: assuming that when someone doesn’t use a lot of hedges, they must be so sure that they’re almost certain to be right, and then updating strongly on that. But I’m not quite convinced by that theory either. It probably happens sometimes, but at other times I also overupdate on my own new ideas. I’m pretty sure I overupdate whenever people use guilt-inducing language, though.
I filled in Brian Tomasik’s list of beliefs and values on big questions at one point. :-D
Hi Denis,
Thanks again for these questions. I’ll share my answers in a few comments. This context and disclaimer—including that I only started with Rethink a month ago—should be borne in mind.
1. Thinking vs reading
I don’t think I really have explicit policies regarding balancing reading against thinking myself and recording my thoughts. Maybe I should.
I’m somewhat inclined to think that, on the margin and on average (so not in every case), EA would benefit from a bit more reading of relevant literatures (or talking to more experienced people in an area, watching of relevant lectures, etc.), even at the expense of having a bit less time for coming up with novel ideas.
I feel like EA might have a bit too much a tendency towards “think really hard by oneself for a while, then kind-of reinvent the wheel but using new terms for it”. It might be that, often, people could get to similar ideas faster and in a way that connects to existing work better (making it easier for others to find, build on, etc.) by doing some extra reading first.
Note that this is not me suggesting EAs should increase how much they defer to experts/others/existing work. Instead, I’m tentatively suggesting spending more time learning what experts/others/existing work has to say, which could be followed by agreeing, disagreeing, critiquing, building on, proposing alternatives, striking out in a totally different direction, etc.
(On this general topic, I liked the post The Neglected Virtue of Scholarship.)
Less important personal ramble:
I often feel like I might be spending more time reading up-front than is worthwhile, as a way of procrastinating, or maybe out of a sort-of perfectionism (the more I read, the lower the chance that, once I start writing, what I write is mistaken or redundant). And I sort-of scold myself for that.
But then I’ve repeatedly heard people remark that I have an unusually large amount of output. (I sort-of felt like the opposite was true, until people told me this, which is weird since it’s such an easily checkable thing!) And I’ve also got some feedback that suggested I should move more in the direction of depth and expertise, even at the cost of breadth and quantity of output.
So maybe that feeling that I’m spending too much time reading up-front is just mistaken. And as mentioned, that feeling seems to conflict with what I’d (tentatively) tend to advise others, which should probably make me more suspicious of the feeling. (This reminds me of asking “Is this how I’d treat a friend?” in response to negative self-talk [source with related ideas].)
10. “Obvious questions”
(Just my personal, current, non-expert thoughts, as always. Also, I’m not sure I’m addressing precisely the question you had in mind.)
A summary of my recommendations in this vicinity:
If people want to do research and want a menu of ideas/questions to work on, including ideas/questions that seem like they obviously should have a bunch of work on them but don’t yet, they could check out this central directory for open research questions, and/or an overlapping 80,000 Hours post.
If people want to discover “new” instances of such ideas/questions, one option might be to just try to notice ideas/variables/assumptions that seem important to some people’s beliefs, but that seem debatable and vague, have been contested by others, and/or haven’t been stated explicitly and fleshed out.
One way to do this might be to have a go at rigorously, precisely writing out the arguments that people seem to be acting as if they believe, in order to spot the assumptions that seem required but that those people haven’t stated/emphasised.
One could then try to explore those assumptions in detail, either just through more fleshed-out “armchair reasoning”, or through looking at relevant empirical evidence and academic work, or through some mixture of those things.
I think this is a big part of what I’ve done this year.
Here’s one example of a piece of my own work which came from roughly that sort of process.
I’ll add more detailed thoughts below.
---
I interpret this question as being focused on cases in which an idea/open question seems like it should’ve been obvious, or seems obvious in retrospect, yet it has been neglected so far. (Or the many cases we should assume still exist in which the idea/question is still neglected, but would—if and when finally tackled—seem obvious.)
It seems to me that there are two major types of such cases:
Unnoticed: Cases in which the ideas/open questions haven’t even been noticed by almost anyone
Or at least, almost anyone in the relevant community/field.
So I’d still say an idea counts as “unnoticed” for these purposes even if, for example, a very similar idea has been explored thoroughly in sociology, but no one in longtermism has noticed that that idea is relevant to some longtermist issue, nor independently arrived at a similar idea.
Noticed yet neglected: Cases in which the ideas/open questions have been noticed, but no one has really fleshed them out or tackled them much
E.g., a fair number of longtermists have raised the question of how likely various types of recovery are from various types of civilizational collapse. But as far as I’m aware, there was nothing even approaching a thorough analysis of the question until some recent still-in-progress work, and there’s still room for much more work here.
More thoughts and notes on this here and here.
Another example is questions related to how likely global, stable totalitarianism is; what factors could increase or decrease the odds of that; and what to do about this. Some people have highlighted such questions (including but not only in the context of advanced AI), but I’m not aware of any detailed work on them.
This is really more a continuum than a binary distinction. In almost all cases, there’s probably been someone in a relevant community who’s at least briefly noticed something relevant. But sometimes it’ll just be that something kind-of relevant has been discussed verbally a few times and then forgotten, while other times it’ll be that people have prominently highlighted pretty precisely the relevant open question, yet no one has actually worked on it. (And of course there’ll be many cases in between.)
---
For “noticed yet neglected” ideas/questions, recommendation 1 from above will be more relevant: people could find many ideas/questions of this type in this central directory for open research questions, and just get cracking on them.
That directory is like a map pointing the way to many trees that might be full of low-hanging fruit that would’ve been plucked by now in a better world. And I really would predict that a lot of EAs could do valuable work by just having a go at those questions. (I’m less confident that this is the most valuable thing lots of EAs could be doing, and each person would have to think that through for themselves, in light of their specific circumstances. See also.)
So we don’t necessarily need all EA-aligned researchers to try to cultivate a skill of “noticing the ideas that should’ve been tackled/fleshed out already” (though I’m sure some should). Some could just focus on actually exploring the ideas that have been noticed but still haven’t been tackled/fleshed out.
---
For “unnoticed” ideas/questions, recommendation 2 from above will be more relevant.
I think this dovetails somewhat with Ben Garfinkel calling for[1] more people to just try to rigorously write up more detailed versions of arguments about AI risk that often float around in sketchier or briefer form. (Obviously brevity is better than length, all else held equal, but often a few pages isn’t enough to give an idea proper treatment.)
---
There are at least two other approaches for finding “unnoticed” ideas/questions which seem to have sometimes worked for me, but which I’m less sure would often be useful for many people, and less sure I’ll describe clearly. These are:
Trying to sketch out causal diagrams of the pathway to something (e.g., an existential catastrophe) happening
I think that doing something like this has sometimes helped me notice that there are:
assumptions or steps missing in the standard/fleshed-out stories of how something might happen,
alternative pathways by which something could happen, and/or
alternative/additional outcomes that may occur
See also
Trying to define things precisely, and/or to precisely distinguish concepts from each other, and seeing if anything interesting falls out
Here’s an abstract example, but one which matches various real examples that have happened for me:
I try to define X, but then notice that that definition would fail to cover some cases of what I’d usually think of as X, and/or that it would cover some cases of what I’d usually think of as Y (which is a distinct concept).
This makes me realise that X and/or Y might be able to take somewhat different forms or occur via different pathways to what was typically considered, or that there’s actually an extra requirement for X or Y to happen that was typically ignored.
I feel like it’d be easy to misinterpret my stance here.
I actually think that definitions will never or almost never really be “perfect”, and I agree with the ideas in this post (see also family resemblance). And I think that many debates over definitions are largely nitpicking and wasting time.
But I also think that, in many cases, being clearer about definitions can substantially benefit both thought and communication.
---
I should again mention that I’m only ~1.5 years into my research career, so maybe I’ll later change my mind about a bunch of those points, and there are probably a lot of useful things that could be said on this that I haven’t said.
[1] See the parts of the transcript after Howie asks “Do you know what it would mean for the arguments to be more sussed out?”
I don’t work at Rethink Priorities but I couldn’t resist jumping in with some thoughts as I’ve been doing a lot of thinking on some of these questions recently
Thinking vs. reading. I’ve been playing around with spending 15-60 min sketching out a quick model of what I think of something before starting in on the literature (by no means a consistent thing I do though). I find it can be quite nice and help me ask the right questions early on.
Self-consciousness. Idk if this fits exactly but when I started my research position I tried to have the mindset of, ‘I’ll be pretty bad at this for quite a while’. Then when I made mistakes I could just think, ‘right, as expected. Now let’s figure out how to not do that again’. Not sure how sustainable this is but it felt good to start! In general it seems good to have a mindset of research being nearly impossibly hard. Humans are just barely able to do this thing in a useful way and even at the highest levels academics still make mistakes (most papers have at least some flaws).
Optimal hours of work per day. I tend to work about 4-7 hours per day including meetings and everything. Counting only mentally intensive tasks, I probably get around 4-5 a day. Sometimes I’m able to get more if I fall into a good rhythm with something. Looking around at estimates (RescueTime says just ~3 hours per day average of productive work), it seems clear I’m hitting a pretty solid average. I still can’t shake the feeling that everyone else is doing more work. Part of this is because people claim they do much more work. I assume this is mostly exaggeration, though, because hours worked is used as a signal of status and being a hard worker. But still, it’s hard to shake the feeling.
Learning a new field. I just do a lot of literature review. I tend to search for the big papers and meta-analyses, skim lots of them, and try to make a map of what the key questions are and what the answers proposed by different authors are for each question (noting citations for each answer). This helps to distill the field, I think, and serves as something relatively easy to reference. Generally there’s a lot of restructuring that needs to happen as you learn more about a topic area and see that some questions you used were ill-posed or some papers answer somewhat different questions. In short this gets messy, but it seems like a good way to start, and sometimes it works quite well for me.
Hard problems. I have a maybe-controversial take that research (even in LT space) is motivated largely by signalling and status games. From this view the advice many gave about talking to people about it sounds good. Then you generate some excitement as you’re able to show someone else you’re smart enough to solve it, or they get excited to share what they know, etc. I think if you had a nice working group on any topic, no matter how boring, everyone would get super excited about it. In general, connecting the solution to a hard problem to social reward is probably going to work well as a motivator by this logic.
Emotional motivators. I’ve been thinking a lot recently about what I’m calling ‘incentive landscaping’. The basic idea is that your system 2 has a bunch of things it wants to do (e.g. have impact). Then you can shape your incentive landscape such that your system 1 is also motivated to do the highest impact things. Working for someone who shares your values is the easiest way to do this as then your employer and peers will reward you (either socially or with promotions) for doing things which are impact-oriented. This still won’t be perfectly optimized for impact but it gets you close. Then you can add in some extra motivators like a small group you meet with to talk about progress on some thing which seems badly motivated, or ask others to make your reward conditional on you completing something your system 2 thinks is important. Still early days for me on this though and I think it’s a really hard thing to get right.
Typing speed. At least when I’m doing reflections or broad thinking I often circumvent this by doing a lot of voice notes with Dragon. That way I can write at the speed of thought. It’s never perfect but ~97% of it is readable so it’s good enough. Then if you want to actually have good notes you go through and summarize your long jumble of semi-coherent thoughts into something decent sounding. This has the side effect of some spaced repetition learning as well!
Tiredness, focus, etc. I’ve had lots of ongoing and serious problems with fatigue and have tried many interventions. Certainly caffeine (ideally with l-theanine) is a nice thing to have but tolerance is an issue. Right now what seems to work for me (no idea why) is a greens powder called Athletic Greens. I’m also trying pro/prebiotics which might be helping. Magnesium supplementation also might have helped. A medication I was taking was also causing some problems, including some really intense fatigue on occasion (again, probably…). It’s super hard to isolate cause and effect in this area as there are so many potential causes. I’d say it’s worth dropping a lot of money on different supplements and interventions and seeing what helps. If you can consistently increase energy by 5-10% (something I think is definitely on the table for most people), that adds up really quickly in terms of the amount of work you can get done, happiness, etc. Ideally you’d do this by introducing one intervention at a time for 2-4 weeks each. I haven’t had patience for that and am currently just trying a few things at once, then I figure I can cut out one at a time and see what helped. Things I would loosely recommend trying (aside from exercise, sleep, etc): prebiotics, good multivitamins, checking for food intolerances, checking if any pills you take are having adverse effects.
I do also work through tiredness sometimes and find it helpful to do some light exercise (for me, games in VR) to get back some energy. That also works as a decent gauge for whether I’ll be able to push past the tiredness. If playing 10 min of Beatsaber feels like a chore, I probably won’t be able to work.
How you rest might also be important. E.g., you might need time with little input so your default mode network can do its thing. No idea how big of a deal this is but I’ve found going for more walks with just music (or silence) to maybe be helpful, especially in that I get more time for reflection.
I’ve also been experimenting with measuring heart rate variability using an app called Welltory. That’s been kind of interesting in terms of raising some new questions though I’m still not sure how I feel about it/how accurate it is for measuring energy levels.
Whee! Thank you too!
Yeah, I think that perspective on self-consciousness is helpful!
Work hours: I also wonder how much this varies between professions. Maybe that’s worth a quick search and writeup for me at some point. When you go from a field where it’s generally easy to concentrate for a long time every day to a field where it’s generally hard, that may seem disproportionately discouraging when you don’t know about that general difference.
“Try to make a map of what the key questions are and what the answers proposed by different authors are”: Yeah, combining that with Jason’s tips seems fruitful too: When talking to a lot of people, always also ask what those big questions and proposed answers are. More nonobvious obvious advice! :-D
I may try out social incentives and dictation software, but social things are usually draining and sometimes scary for me, so there’d be a tradeoff between the motivation and my energy. And I feel like I think in a particular and particularly useful way while writing but can often not think new thoughts while speaking, but that may be just a matter of practice. We’ll see! And even if it doesn’t work, these questions and answers are not (primarily) for me, and others probably find them brilliantly useful!
I’ve bought some Performance Lab products (following a recommendation from Alex in a private conversation). They have better reviews on Vaga and are a bit cheaper than the Athletic Greens.
“Default mode network”: Interesting! I didn’t know about that.
Hi Denis, thanks for these questions. I’ll give my answers to a bunch of them tomorrow. Just jumping in early with a clarifying question: Could you explain what you mean by “Survival vs. exploratory mindset”, and/or provide a link that explains that distinction? I haven’t heard those terms before, and Google didn’t immediately show me anything that looked relevant.
(Is it perhaps related to exploring vs exploiting?)
Hi Michael! Huh, true, those terms seem to be vastly less commonly used than I had thought.
By survival mindset I mean: extreme risk aversion, fear, distrust toward strangers, little collaboration, isolation, guarded interaction with others, hoarding of money and other things, seeking close bonds with family and partners, etc., but I suppose it also comes with modesty and contentment, equanimity in the face of external catastrophes, vigilance, preparedness, etc.
By exploratory mindset I mean: risk neutrality, curiosity, trust toward strangers, collaboration, outgoing social behavior, making oneself vulnerable, trusting partners and family without much need for ritual, quick reinvestment of profits, etc., but I suppose also a bit lower conscientiousness, lacking preparedness for catastrophes, gullibility, overestimating how much others trust you, etc.
Those categories have been very useful for me, but maybe they’re a lot less useful for most other people? You can just ignore that question if the distinction makes no intuitive sense this way or doesn’t quite fit your world models.
This distinction reminds me of the “survival values vs self-expression values” dimension of the World Values Survey. I’m a bit rusty on those terms, but from skimming a Wikipedia page, I think the “survival” part lines up decently with what you describe as “survival mindset”, but the self-expression part might not line up well with “exploratory mindset”:
As for your question: I haven’t thought in terms of survival vs exploratory mindset before, so I don’t think I have a strong view on which is more useful for research (or the situations in which this differs), how often I adopt each mindset, or how I cultivate them. I’d probably guess that exploratory mindset tends to be more useful and tends to be what I have, but I’m not sure.
I think parts of Rationality: From AI to Zombies (aka “the sequences”) and Harry Potter and the Methods of Rationality have quite useful advice—and a way of making it stick psychologically—that feels somewhat relevant here. E.g., the repeated emphasis and elaboration on “that which can be destroyed by the truth should be”. I have a sense that someone who’s struggling to adopt useful facets of the exploratory mindset might benefit from reading (or re-skimming) one or both of those things.
Yeah, I agree about how well or not well those concepts line up. But I think insofar as I still struggle with probably disproportionate survival mindset, it’s about questions of being accepted socially and surviving financially rather than anything linked to beliefs (maybe indirectly in a few edge cases, but that feels almost irrelevant).
If this is not just my problem, it could mean that a universal basic income could unlock more genius researchers. :-)
11. Tiredness, focus, etc.
I find that being tired makes my mind wander a lot when reading longform things (e.g., papers, posts, not things like Slack messages or emails), so when I’m tired I usually try to do things other than reading.
If I’m just a bit or moderately tired, I usually find I’m still about as able to write as normal. If I’m very tired, I’ll still often be able to write quickly, but then when I later read what I wrote I’ll feel that it was unclear, poorly structured, and more typo-strewn than usual. So when very tired, I try to avoid writing longform things (e.g., actual research outputs).
Things I find I’m still pretty able to do when tired include commenting on documents people want input on (I think I’m more able to focus on this than on regular reading because it’s more “interactive” or something), writing things like EA Forum comments, replying to emails and Slack messages and the like, doing miscellaneous admin-y tasks, and reflecting on the last week/month and planning the next. So I often do a disproportionate amount of such tasks during evenings or during days when I’m more tired than normal, and at other times do a disproportionate amount of reading and “substantive” writing.
Also, I’m fortunate enough to have flexible hours. So sometimes I just work less on days when I’m tired (perhaps spending more time with my wife), and then make up for it on other days.
2 and 3. Self-consciousness and Is there something interesting here?
These questions definitely resonate with me, and I imagine they’d resonate with most/all researchers.
I have a tendency to continually wonder if what I’m doing is what I should be doing, or if I should change my priorities. I think this is good in some ways. But sometimes I’d make better decisions faster if I just actually pursued an idea more “confidently” for a bit, to get more info on whether it’s worth pursuing, rather than just “wondering” about it repeatedly and going back and forth without much new info to work with. Basically, I might do too much self-doubt-style armchair reasoning, with too little actual empirical info.
Also, pursuing an idea more “confidently” for a bit will not only inform me about whether to continue pursuing it further, but also might result in outputs that are useful for others. So I try to sometimes switch into “just commit and focus mode” for a given time period, or until I hit a given milestone, and mostly minimise reflection on what I should prioritise during that time. But so far this has been like a grab bag of heuristics and habits I use, rather than a more precise guideline for myself.
See also When to focus and when to re-evaluate.
Things that help me with this, and/or some scattered related thoughts, include:
Talking to others and getting feedback, including on early-stage ideas
I liked David and Jason’s remarks on this in their comments
A sort-of minimum viable product and quick feedback loop approach has often seemed useful for me—something like:
First getting verbal feedback from a couple people on a messy, verbal description of an idea
Then writing up a rough draft about the idea and circulating it to a couple more people for a bit more feedback
Then polishing and fleshing out that draft and circulating it to a few more people for more feedback
Then posting publicly
(But only proceeding to the next step if evidence from the prior one—plus one’s own intuitions—suggested this would be worthwhile)
Feedback has often helped me determine whether an idea is worth pursuing further, feel more comfortable/motivated with pursuing an idea further (rather than being mired in unproductive self-doubt), develop the idea, work out which angles of it are most worth pursuing, and work out how to express it more clearly
Reminding myself that I haven’t really gathered any new info since the last time I thought “Should this really be what I spend my time on?”, so thinking about that again is unlikely to reveal new insights, and is probably just a stupid part of my psychology rather than something I’d endorse.
I might think to myself something like “If a friend was doing this, you’d think it’s irrational, and gently advise them to just actually commit for a bit and get new info, right? So shouldn’t you do the same yourself?”
Remembering Algorithms to Live By drawing an analogy to a failure mode in which computers continually reprioritise tasks and the reprioritisation takes up just enough processing power to mean no actual progress on any of the tasks occurs, and this can just cycle forever. And the way to get out of this is to at some point just do tasks, even without having confidence that these “should” be top priority.
This is just my half-remembered version of that part of the book, and might be wrong somehow.
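As a toy illustration of that thrashing failure mode (this little scheduler and all its numbers are made up for illustration, not taken from the book): if re-deciding priorities costs as much as a work slice itself, a large share of the time budget goes into reprioritising rather than progress.

```python
def run_scheduler(tasks, time_slice=1.0, reprioritise_cost=1.0, budget=10.0):
    """Simulate a scheduler that re-sorts its task list before every slice.

    tasks: dict of task name -> remaining units of work.
    Returns remaining work per task once the time budget is spent.
    """
    remaining = dict(tasks)
    while budget > 0 and any(w > 0 for w in remaining.values()):
        budget -= reprioritise_cost  # overhead of re-deciding priorities
        if budget <= 0:
            break  # the whole remaining budget went into reprioritising
        top = max(remaining, key=remaining.get)  # pick the "most urgent" task
        work = min(time_slice, budget, remaining[top])
        remaining[top] -= work
        budget -= work
    return remaining

tasks = {"a": 3.0, "b": 3.0}
# With cheap reprioritising, everything gets done within the budget:
print(run_scheduler(tasks, reprioritise_cost=0.1))
# With reprioritising as costly as a work slice, half the budget is overhead
# and some work is left undone:
print(run_scheduler(tasks, reprioritise_cost=1.0))
```

The fix the book suggests maps onto simply lowering (or batching) the reprioritisation cost: decide less often, then just do tasks.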
Remembering that I’d be deeply uncertain about the “actual” value of any project I could pursue, because the world is very complicated and my ambitions (contribute to improving the long-term future) are pretty lofty. The best I can do is something that seems good in expected value but with large error bars. So the fact I feel some uncertainty and doubt provides basically no evidence that this project isn’t worth pursuing. (Though feeling an unusually large amount of uncertainty and doubt might.)
Remembering that, if the idea ends up seeming to have not been important but there was a reasonable ex ante case that it might’ve been important, there’s a decent chance someone else would end up pursuing it if I don’t. So if I pursue it, then find out it seems to not be important, then write about what I found, that might still have the effect of causing an important project to get done, because it might cause someone else to do that important project rather than doing something similar to what I did.
Examples to somewhat illustrate the last two points:
This year, in some so-far-unpublished work, I wrote about some ideas that:
I initially wasn’t confident about the importance of
Seemed like they should’ve been obvious to relevant groups, but seemed not to have been discussed by them. And that generally seems like (at least) weak evidence that an idea either (a) actually isn’t important or (b) has been in essence discussed in some other form or place that I just am not familiar with.
So when I had the initial forms of these ideas and wasn’t sure how much time (if any) to spend on them, I took roughly the following approach:
I developed some thoughts on some of the ideas. Then I shared those thoughts verbally or as very rough drafts with a small set of people who seemed like they’d have decent intuitions on whether the ideas were important vs unimportant, somewhat novel vs already covered, etc.
In most cases, this early feedback indicated that it was at least plausible that the ideas were somewhat important and somewhat novel. This—combined with my independent impression that these ideas might be somewhat important and novel—seemed to provide sufficient reason to flesh those ideas out further, as well as to flesh out related ideas (which seemed like they’d probably also be important and novel if the other ideas were, and vice versa).
So I did so, then shared that slightly more widely. Then I got more positive feedback, so I bothered to invest the time to polish the writings up a bit more.
Meanwhile, when I fleshed one of the ideas out a little, it seemed like that one turned out to probably not be very important at all. So with that one, I just made sure that my write-up made it clear early on that my current view was that this idea probably didn’t matter, and I neatened up the write-up just a bit, because I still thought the write-up might be a bit useful either to:
Explain to others why they shouldn’t bother exploring the same thing
Make it easy for others to see if they disagreed with my reasoning for why this probably didn’t matter, because I might be wrong about that, and it might be good for others to quickly check that reasoning
Having spent time on that idea sort-of felt in hindsight silly or like a mistake. But I think I probably shouldn’t see that as having been a bad decision ex ante, given that:
It seems plausible that, if not for my write-up, someone else would’ve eventually “wasted” time on a similar idea
This was just one out of a set of ideas that I tried to flesh out and write up, many/most of which still (in hindsight) seem like they were worth spending time on
So maybe it’s very roughly like I gave 60% predictions for each of 10 things, and decided that that’d mean the expected value of betting on those 10 things was good, and then 6 of those things happened, suggesting I was well-calibrated and was right to bet on those things
(I didn’t actually make quantitative predictions)
And some of the other ideas were in between—no strong reason to believe they were important or that they weren’t—so I just fleshed them out a bit and left it there, pending further feedback. (I also had other things to work on.)
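For what it’s worth, the rough calibration check gestured at above (did about 60% of my 60%-confidence bets pan out?) can be sketched in a few lines; the sample predictions here are made up purely to mirror the “6 of 10 things at 60%” example.

```python
from collections import defaultdict

def calibration_table(predictions):
    """predictions: list of (stated_probability, outcome_bool) pairs.

    Groups predictions by stated probability and returns, per group,
    the count and the observed frequency of the outcome occurring.
    """
    buckets = defaultdict(list)
    for p, happened in predictions:
        buckets[p].append(happened)
    return {p: (len(v), sum(v) / len(v)) for p, v in sorted(buckets.items())}

# Ten 60% predictions, of which six came true, as in the rough example above:
preds = [(0.6, True)] * 6 + [(0.6, False)] * 4
print(calibration_table(preds))  # -> {0.6: (10, 0.6)}: well calibrated
```

(Of course, with only 10 predictions per bucket the evidence for good calibration is weak; the point is just that recording predictions makes this check possible at all.)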
Yeah, I even mentioned this idea (about preventing someone from “wasting” time on a dead end you already explored) in a blog post a while back. :-D
It’s also important to be transparent about one’s rigor and to make the negative results findable for others. The second is obvious. The first is because the dead end may not actually be a dead end but only looked that way given the particular way in which you had resolved the optimal stopping problem of investigating it (even) further.
I agree with these points, and think that they might sometimes be under-appreciated (both in and outside of EA).
To sort-of restate your points:
I think it’s common for people to not publish explorations that turned out to seem to “not reveal anything important” (except of course that this direction of exploration might be worth skipping).
Much has been written about this sort of issue, and there can be valid reasons for that behaviour, but sometimes it seems unfortunate.
I think another failure mode is to provide some sort of public info of your belief that this direction of exploration seems worth skipping, but without sufficient reasoning transparency, which could make people rule this out too much/too early.
Again, there can be valid reasons for this (if you’re sufficiently confident that it’s worth ruling out this direction and you have sufficiently high-value other things to do, it might not be worth spending time on a write-up with high reasoning transparency), but sometimes it seems unfortunate.
Part of this reminds me a lot of CFAR’s approach here (I can’t quite tell whether Julia Galef is interviewer, interviewee, or both):
Your approach to gathering feedback and iterating on the output, making it more and more refined with every iteration but also deciding whether it’s worth another iteration, that process sounds great!
I think a lot of people aim for such a process, or will want to after reading your comment, but will be held back from showing their first draft to their first round of reviewers. They may worry the reviewers will think badly of them for addressing a topic at this particular level of perceived difficulty or relevance (maybe it’s too difficult or too irrelevant in the reviewer’s opinion), or for a particular wording, or because the reviewers think they should’ve anticipated a negative effect of writing about the topic and refrained from doing so (e.g., some complex acausal trade or social dynamics thing that didn’t occur to them); or they may just have diffuse fears holding them back. Such worries are probably disproportionate, but still, overcoming them will probably require particular tricks or training.
I like that “Worker-me versus CEO-me” framing, and hadn’t heard of it or seen that page, so thanks for sharing that. It does seem related to what I said in the parent comment.
I share the view that it’ll be decently common for a range of disproportionate worries to hold people back from striking out into areas that seem good in expected value but very uncertain and with real counterarguments, and from sharing early-stage results from such pursuits. I also think there can be a range of good reasons to hold back from those things, and that it can be hard to tell when the worries are disproportionate!
I imagine it’d be hard (though not impossible) to generate advice on this that’s quite generally useful without being vague/littered with caveats. People will probably have to experiment to some extent, get advice from trusted people on their general approach, and continue reflecting, or something like that.
Regarding your Typing speed question, Tom Chivers (a journalist) was asked in a recent EA Forum AMA “How one should go about learning how to write high-quality material? And what is the way to get it published?”
His reply:
Heh, great find! :-D
7. Hard problems
I’m not actually sure if the precise problem you’re describing resonates with me. I definitely often feel very uncertain about:
whether the goal I’m striving towards really matters at all
even if so, whether it’s a goal worth prioritising
whether I should prioritise it (is it my comparative advantage?)
whether anything I produce in pursuing this goal will be of any use to anyone
But I’m not sure there have been cases where, for a week or more, I didn’t feel like I was at least progressing towards:
having the sort of output I had planned or now planned to produce (setting aside the question of whether that output will be useful to anyone), and/or
deciding (for good reason) to not bother trying to create that sort of output
Note that I’d count as “progress” cases where I explored some solutions/options that I thought might work/be useful for X, and all turned out to be miserable wastes of time, so I can at least rule those out and try something else next week. I’d also count cases where I learned other potentially useful things in the process of pursuing dead ends, and that knowledge seems likely to somehow benefit this or other projects.
It is often the case that my estimate of how many remaining days something will take is longer at the end of the week than it was at the beginning of the week. But this is usually coupled with me thinking that I have made some sort of progress—I just also realised that some parts will be harder than I thought, or that I should do a more thorough job than I’d planned, or something like that.
(But I feel like maybe I’m just interpreting your question differently to what you intended.)
In a private conversation we figured out that I may tend too much toward setting specific goals and then only counting achievement of those goals as success, ignoring all the little things that I learn along the way. If the goal is hard to achieve, I have to learn a lot of little things on the way, and that takes time; but if I don’t count these little things as little successes, my feedback gets too sparse, and I lose motivation. So noticing little successes seems valuable.
8. Emotional motivators
(Disclaimer: I’m just reporting on my own experience, and think people will vary a lot in this sort of area, so none of the following is even slightly a recommendation.)
In general:
Personally, I seem to just find it pretty natural to spend a lot of hours per week doing work-ish things
I tend to be naturally driven to “work hard” (without it necessarily feeling much like working) by intellectual curiosity, by a desire to produce things I’m proud of, and by a desire for positive attention (especially but not only from people whose judgement I particularly respect)
That third desire in particular can definitely become a problem, but I try to keep a close eye on it and ensure that I’m channeling that desire towards actions I actually endorse on reflection
I do get run down sometimes, and sometimes this has to do with too many hours per week for too many weeks in a row. But the things that seem more liable to run me down are feeling that I lack sufficient autonomy in what I do, how, and when; or feeling that what I’m doing isn’t valuable; or feeling that I’m not developing skills and knowledge I’ll use in future
That last point means that one type of case in which I do struggle to be motivated is cases where I know I’m going to switch away from a broad area after finishing some project, and that I’m unlikely to use the skills involved in that project again.
In these cases, even if I know that finishing that project to a high standard would still be valuable and is worth spending time on, it can be hard for me to be internally motivated to do so, because it no longer feels like doing so would “level me up” in ways I care about.
I seem to often become intensely focused on a general area in an ongoing way (until something switches my focus to another area), and just continually think about it, in a way that feels positive or natural or flow-like or something
This happened for stand-up comedy, then for psychology research, then for teaching, then for EA stuff (once I learned about EA)
(The other points above likewise applied during each of those four “phases” of my adult life)
Luckily, the sort of work I do now:
is very intellectually stimulating
involves producing things I’m (at least often!) proud of
can bring me positive attention
allows me a sufficient degree of autonomy
seems to me to be probably the most valuable thing I could realistically be doing at the moment (in expectation, and with vast uncertainty, of course)
involves developing skills and knowledge I expect I might use in future
That means it’s typically been relatively easy for me to stay motivated. I feel very fortunate both to have the sort of job and “the sort of psychology” I’ve got. I think many people might, through no fault of their own, find it harder to be emotionally motivated to spend lots of hours doing valuable work, even when they know that that work would be valuable and they have the skills to do it. Unfortunately, we can’t entirely choose what drives us, when, and how.
(There’s also a scary possibility that my tendency so far to be easily motivated to work on things I think are valuable is just the product of me being relatively young and relatively new to EA and the areas I’m working in, and that that tendency will fade over time. I’d bet against that, but could be wrong.)
Awesome! For me, the size of an area plays a role in how long I maintain a high level of motivation for it. When you’re studying a board game, there are only a few activities, they are quite similar, and if you try all of them, you might run out of motivation within a year. This happened to me with Othello. But computer science or EA are so wide that if you lose motivation for some subfield of decision theory, you can move on to another subfield of decision theory, or to something else entirely, like history. And there are probably a lot of such subareas where potentially impactful investigations are waiting to be done. So it makes sense to me to be optimistic about sustaining motivation for such a big field over a long time.
My motivation did shift a few times, though. I think before 2012 it was more a “This is probably hopeless, but I have to at least try on the off-chance that I’m in a world where it’s not hopeless.” 2012–2014 it was more “Someone has to do it and no one else will.” After March 28, 2014, it was carried a lot by the sudden enormous amount of hope I got from EA. On October 28, 2015, I suddenly lost an overpowering feeling of urgency and became able to consider more long-term strategies than a decade or two. Even later, I became increasingly concerned with coordination and risk from regression to the (lower) mean.
9. Typing speed
I’d be surprised if typing speed was a big factor explaining differences in how much different researchers produce, or in their ability to produce certain types of output. (But of course, that claim is pretty vague—how surprised would I be? What do I mean by “big factor?”)
But I just did a typing test and got 92 wpm (with “medium” words and 1 typo), which is apparently high. So perhaps I’m just taking that for granted and not recognising how a slower typing speed could’ve limited me. Hard to say.
6. Learning a new field
I don’t know if I have a great, well-chosen, or transferable method here, so I think people should pay more attention to my colleagues’ answers than mine. But FWIW, I tend to do a mixture of:
reading Wikipedia articles
reading journal article abstracts
reading a small set of journal articles more thoroughly
listening to podcasts
listening to audiobooks
watching videos (e.g., a Yale lecture series on game theory)
talking to people who are already at least sort-of in my network (usually more to get a sounding board or “generalist feedback”, rather than to leverage specific expertise of theirs)
I’ve also occasionally used free online courses, e.g. the Udacity Intro to AI course. (See also What are some good online courses relevant to EA?)
Whether I take many notes depends on whether I’m just learning about a field because I think it might be useful in some way in future for me to know about that field, or because I have at least a vague idea of a project I might work on within that field (e.g., “how bad would various possible types of nuclear wars be, from a longtermist perspective?”). In the latter case, I’ll take a lot of notes as I go in Roam, beginning to structure things into relevant sub-questions, things to learn more about, etc.
Since leaving university, I haven’t really made much use of textbooks, flashcards, or reaching out to experts who aren’t already in my network. It’s not really that I actively chose to not make much use of these things (it’s just that I never actively chose to make much use of these things), and think it’s plausible that I should make more use of these things. I’ll very likely talk to a bunch of experts for my current or upcoming research projects.
These are fascinating, I would love to see answers to all of these questions!
Wow! Thanks for all the insightful answers, everyone!
Would anyone mind if I transfer these into a post on my blog (or a separate post in the EA Forum) that is linear in the sense that there is one question and then all answers to it, then the next question and all answers to it, and so on? That may also generate more attention for these answers. :-)
Yeah, this would be nice to have! It’s a lot of text to digest as it is now, and I guess most people won’t see it here going forward.
Sure, in general feel free to assume that anything I write that’s open to the public internet is fair game.
Yeah, same for me.
That’s fine by me!
I think it would be valuable to publish these as a sequence of questions on the Forum and let others chime in and have a more thorough discussion. Perhaps even separated in time, say one or two per week.