How to make independent research more fun (80k After Hours)

Link post

For the 80k After Hours podcast, Luisa Rodriguez and I had a conversation about:

  • how she wrote her justly acclaimed nuclear war report while struggling with imposter syndrome and anxiety about her work

  • how to handle the emotional difficulty of reasoning under huge uncertainty

  • day-to-day productivity tricks

  • knowing when to fix your environment and process, versus changing what you’re doing more drastically

  • bottlenecks to the research process, especially one that I have struggled with: never feeling like projects are ‘done’ and thus not sharing my work enough

  • the close relationship between intellectual virtues and emotional virtues

  • the fine line between (a) being ambitious and holding yourself to high standards and (b) being paralyzed by perfectionism

  • the fine line between (a) prioritizing impact and thinking rationally (not just ‘following your passion’) and (b) unhealthily coercing yourself into doing things you don’t intrinsically care about

  • how Luisa and I have helped each other with these and other issues

I really enjoyed the conversation. I learned a lot from Luisa about how she produced her excellent reports on nuclear war and civilizational collapse. I hope and expect that many people will recognize the challenges we have faced in our work.

And a heartfelt thank you to the amazing 80k podcast production/transcript team: Keiran Harris, Ben Cordell, Milo McGuire, and Katy Moore!

Transcript

Luisa’s nuclear war report [00:01:33]

Robert Long: So when you did the nuclear war report, and you came up with a Guesstimate model and probabilities, I’m sure as you input those, a lot of it felt extremely uncertain and it felt in some sense wrong to even be putting a number on stuff?

Luisa Rodriguez: It felt horrible. Yep.

Robert Long: How did you get over that feeling? And also, in retrospect, do you endorse having done that? I’m guessing you do.

Luisa Rodriguez: Yeah. One thing that was nice about the nuclear project is that surprisingly, many of the model inputs were not guesses about probabilities that felt really unknowable. Some of the inputs were just like, “How many nuclear weapons might be used in a nuclear war?” The maximum is the number that exists. The minimum is one.

And then, I mean, I used probability distributions in part because I got to be like, “I don’t really know which side it’s going to be on.” There are some theoretical reasons to think it’s either going to be very few, because that’s a particular kind of military strategy, or very many, because that’s a different kind. And there are fewer stories to tell about why it’s in the middle. So: “I’m going to draw a curve that’s big at the small end and big at the upper end.”

Robert Long: “Bimodal” is what the cool kids call that, I think.

Luisa Rodriguez: Thank you, yeah. I mean, exactly. Like I didn’t even know it was called bimodal when I was doing it, which made me feel extra terrible about it.

Robert Long: But you had a reason to do it.

Luisa Rodriguez: Yeah.

Robert Long: Oh, but then you shouldn’t have felt terrible.

Luisa Rodriguez: Well, I felt very impostery. Like I felt like I should know more maths. I should know more probability. I should know more about how probability distributions worked. But basically, whenever I was like incredibly uncertain, I’d try a uniform distribution—which is basically where you put equal probability on all of the possible outcomes. And then I was like, “Do I really believe that’s true?” And if the answer was no, I’d try to add some probability to the things I think are more likely.

But plenty of my distributions are either very close to uniform—so, close to saying I’m just totally uncertain about which outcome it might be—or they’re like, I have one theory about how this works, and it’s something like, “Probability goes up over time.” Or like, “If we’re in the kind of world where we use this kind of nuclear targeting, then we’re also in the kind of world where we use this other thing, and so these things are correlated.” And so that would change some things about the distributions, but I rarely felt like I was putting numbers on things. And maybe you’d feel much better about a version where you were starting from uniform probability?

Robert Long: And seeing if I ever want to make it a little higher somewhere. Yeah.

Luisa Rodriguez: A little bit higher somewhere. And even if your probability is still between 0% and 99%, then that is something. And you probably will make it even narrower—and that is better than I could do on consciousness, so that would be information to me.

So I think I was partly just very lucky that there was like actual concrete information for me to draw on—or like, not just lucky, but I am much more drawn toward projects with empirical data for this very reason. I think I’d find it way too uncomfortable to be like, “What’s my guess at the probability that this argument about consciousness is right?” That just sounds impossible to me.
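To make the “start from uniform, then nudge” approach concrete, here is a minimal Monte Carlo sketch of the kind of sampling a Guesstimate-style model does. The arsenal size, mixture weight, and distribution shapes below are illustrative placeholders, not inputs from Luisa’s actual report:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000        # Monte Carlo samples
ARSENAL = 12_000   # placeholder for the number of weapons that exist

# Starting point when totally uncertain: a uniform distribution over the
# possible number of weapons used (minimum 1, maximum the whole arsenal).
uniform_draws = rng.integers(1, ARSENAL + 1, size=N)

# "Nudged" bimodal version: theory suggests a war uses either very few
# weapons (a limited strike) or very many (an all-out exchange), with
# less probability mass in the middle.
few = rng.integers(1, 100, size=N)                      # limited-exchange mode
many = rng.integers(ARSENAL // 2, ARSENAL + 1, size=N)  # all-out mode
use_many = rng.random(N) < 0.5                          # placeholder mixture weight
bimodal_draws = np.where(use_many, many, few)

print("uniform mean:", uniform_draws.mean())
print("bimodal mean:", bimodal_draws.mean())
```

A histogram of `bimodal_draws` has the “big at the small end and big at the upper end” shape described above, while `uniform_draws` is the flat starting point you would then nudge away from.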

Robert Long: Right. There’s even more—way, way, way more—model uncertainty in consciousness. I mean, I’m guessing there also is in the nuclear war case, right?

Luisa Rodriguez: I mean, eventually I’d get to some kinds of inputs that were super uncertain and weird, about like politics and game theory. And those made me incredibly uncomfortable. And I basically just, again, did my best to start from like, “Do I know nothing about this? No, I know something. So I should put more probability on something than just say it’s like a totally random guess.”

And then also, I think I just took a tonne of comfort in knowing that I was going to explain why I put a certain probability where I put it—and if someone thought that reasoning was bad, I was going to link them to the Guesstimate model and encourage them to put in a different probability. I mean, the fear is that people are going to be like, “You idiot. You think the probability is X?” And for me, it was really comforting to be like --

Robert Long: “Make your own damn model!”

Luisa Rodriguez: Yeah. “I’ve done all the work of setting up this model and explaining my reasoning. And you are super welcome to make counterarguments and put in your own numbers and have some other view that pops out at the end. And by all means, try to convince me, or try to convince other people.” That seems just objectively good.

And a lot of the time, I didn’t have full access to that motivation. A lot of the time I was just like, “This is terrifying and I hate it.” But you only need one time, which is publishing time. I just had to be like, “I hate this, but it’s time to publish. I said I would. And I convinced myself that there are good reasons to be transparent about this. So I’m gonna hit publish and feel tortured about it. But I believe in it.”

Robert Long: It’s extremely important. Yeah, I really like this idea of reducing the anxiety of putting the probabilities into a certain bin. I think it’s great, actually. I think there’s probably a very natural human instinct to be like, “I’m uncertain about this. So it’s like uniform, who can say? Uniform distribution. Everything’s equally likely. It’s absurd to say anything else.” I like the idea of starting from that and being like, “No, come on. There’s like a little bump here.”

Luisa Rodriguez: Like, “No, no, come on.” Exactly.

Robert Long: Yeah, there’s at least some bump. But was there a different kind of anxiety that came from setting up the model? I assume it’d be a little bit more threatening if someone was like, “This is not how the risk of nuclear winter relates to the risk of collapse at all. It’s the wrong nodes. What you’ve said is independent is not even close to independent. It’s completely misguided in a way that just makes the whole thing confused.” I’m guessing you had like a manager checking that—and also, you’re talented enough to do it yourself—but for the anxiety, that’s what I would really want double-checked.

Luisa Rodriguez: Yeah. I think part of the answer is, again, I was thinking about really concrete empirical questions in the nodes in the model. I mean, I really started like super, super simply. I think partly because I just felt really dumb about the issue of nuclear war and nuclear winter. Like I didn’t even pay enough attention to current events to have cached intuitions about how likely certain nuclear wars were. Or I didn’t know much about the Cold War.

So I felt so dumb that I felt like I could learn something by being like, “How many nuclear weapons are there? And what is the population of the US and Russia? If we allocate each of those nuclear weapons to a different city, how many people could die?” That was informative to me. It started there and I was like, “If I’m going to learn something, plausibly other people will learn something too.”

And maybe you worry that it’s actually really misleading. And if you have that gut intuition about it, then listen to that and try to figure out why—maybe it leads to some good ways to make the model more complex and nuanced and interesting and say more true things. But I think I really just did start from like, “I have no real intuitions about this. What would give me anything to latch on to, to be a bit clearer on how bad nuclear war would be?”

And then I did add things like the assumption that cities would be targeted in order of how big they were. And when you do that, that makes the number even bigger, so that made my model a bit more nuanced. And then I did get to use a lot of research papers that like, looked at how much smoke is lofted into the atmosphere when a nuclear bomb detonates in a city. I just took those inputs, and I think maybe I widened them a bit for uncertainty, but I’m not even sure I did that. And that was also just like, “Cool. I had absolutely no idea how much smoke we’d get if all of the current day’s nuclear weapons were going to be detonated in cities all at once. Now I have a number. And it is roughly less than this paper says is required for nuclear winter. So that’s interesting and something I didn’t know before.”
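As a rough illustration of that first, very simple calculation (allocate warheads to the largest cities, then add up deaths and smoke), here is a toy sketch. Every number below is an invented placeholder rather than a figure from the report or the papers it draws on:

```python
# Toy version of the simplest calculation described above: one warhead per
# city, largest cities first, with rough fatality and smoke estimates.
# All numbers are invented placeholders for illustration only.

warheads = 1_500                      # hypothetical number of weapons used
city_populations = sorted(
    [10_000_000, 8_500_000, 4_000_000, 2_700_000, 1_900_000] + [500_000] * 2_000,
    reverse=True,
)

fatality_rate_per_city = 0.3          # placeholder: share of a targeted city's population killed
smoke_per_detonation_tg = 0.02        # placeholder: teragrams of smoke lofted per detonation
nuclear_winter_threshold_tg = 50      # placeholder threshold for comparison

targeted = city_populations[:warheads]   # cities targeted in order of size
deaths = sum(pop * fatality_rate_per_city for pop in targeted)
smoke = len(targeted) * smoke_per_detonation_tg

print(f"Estimated deaths: {deaths:,.0f}")
side = "above" if smoke > nuclear_winter_threshold_tg else "below"
print(f"Smoke lofted: {smoke:.1f} Tg ({side} the placeholder winter threshold)")
```

Luisa’s actual model is far more detailed and uses probability distributions rather than point estimates, but the skeleton is the same: concrete, checkable inputs feeding a calculation that anyone can inspect and rerun with their own numbers.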

Robert Long: Here’s a possible lesson from this. I’ll see if you agree. Reasoning transparency is great. It helps the reader know what they can take from your report. It also seems maybe really helpful for the writer because you’re making your brain remember, “My job here is not to say the final certain word on this topic forever.”

That’s never what you were doing. But speaking for myself, I sit down and I’m like, “The goal of this is to look into these questions, find some evidence, say how you would think about it.” Some part of my brain just immediately forgets that, and starts saying, “You must solve it.” It’s funny, maybe we just need posters above our computers that remind us of like, “What are you doing when you sit down to do this? Are you proving that you’re the greatest genius of all time? And in three months, you’re going to solve every perplexing question about consciousness and AI?” Um, no.

Imposter syndrome [00:12:08]

Luisa Rodriguez: It does seem like a good lesson, partly because I think that we both probably have… well, I don’t know if you’d consider yourself someone who has imposter syndrome. I think you have some issues with self-belief around your work as a philosopher. Like, are you good enough to be doing really, really important work in philosophy? And your bar is really high for that. And so it’s not that you’ve got low self-confidence—I think I’m closer to the low self-confidence side of things—you’ve mostly just got really high standards.

So I’m coming at research questions with the background idea that I’m dumb and that I know nothing, but probably I could learn a little something about it. And you’re coming with a background belief of like, “I’m pretty smart”—which you are—“and it’d be really cool if I could solve this philosophical problem of consciousness and AI systems.” And you think that’s possible—which is both maybe true, but also, I mean, it’s such a crazy ambition to me.

Robert Long: Yeah, I think there must be some art to taking the good parts of that possibly preposterous belief. I think the rationality community is sometimes good at this. I think one of their principles is like, “Don’t pretend at the outset that you couldn’t possibly know anything or solve the problem. Maybe you just can. And don’t be embarrassed about that. Don’t take the social consensus that this is impossible too seriously. Maybe it just hasn’t properly been tried.”

Luisa Rodriguez: Because a bunch of other people were like “I probably can’t solve it” as well.

Robert Long: Yeah. You want that belief without something that says you can’t publish this until you know literally everything and have perfectly solved everything.

Luisa Rodriguez: Totally. Or like you’ve only succeeded if it turns out that you were the one person in the world who could solve everything.

Robert Long: Exactly.

Luisa Rodriguez: Yeah, I agree that those seem like two beliefs that often don’t come together, but that would probably make for a great researcher.

Robert Long: I think you see a lot of that in some of the olden days. Like here’s a kind of preposterous belief: I can just figure out, a lot better than people have, which charities are more effective. That’s an insanely complicated question, right? And you’re just going to waltz in as an outsider and do a better job. But then at the same time, there’s clearly something going on with those people, where they’re like, “Well, we’re allowed to release some provisional conclusions and say what would change our minds. And we definitely haven’t settled that question—that question will probably never be 100% settled—but here’s something to go on.”

Luisa Rodriguez: Totally. Do you think you could access that mindset a bit more, I don’t know, either mid-project or preparing to publish something? Like, “I’m really grateful that Elie Hassenfeld and Holden Karnofsky were brave and ambitious enough to try to think about how to do charity much better, and publish preliminary results, which they’ve since updated many times. I could totally do that with consciousness.”

Robert Long: I think the gratitude angle might help. I think more just remembering that the best I’m doing will be enough. Because I think something I find deeply intrinsically motivating and pleasurable is the process of talking to other people about what a weird and confusing world we live in, and reporting what we’ve found out about it and what our current best guesses are.

I think Steven Pinker, in his book on writing, has some remark to the effect that the best writing is just pointing at something in the world that is interesting and describing it. And that’s kind of the opposite of being in your own head about, “Am I good enough? What does this say about me?” So I think that I (and the listener, if this applies to you) can remember that the world is out there, and it’s confusing. You’re trying to get less confused, and other people would also like to get less confused. You’re helping them do that by reporting your own journey. And then I still think you do need a dash of, “But also, success is possible.”

Luisa Rodriguez: “Maybe we could do it.”

Robert Long: Yeah, yeah, exactly.

Luisa Rodriguez: Yeah, I mean, this does just all feel very related to the fact that we’ve had this shared experience, despite not talking that much about the content of the research we’ve done in the past. But we have talked a bunch about the experience of it, and I think we’ve both found it really, really hard. And there are plenty of things that are very different about our jobs, but the things in common being very independent research and research on very hard and interdisciplinary questions. Is there more you want to say on what that’s been like for you?

Robert Long: Yeah, I think you’re also able to do this. It’s not finding it really hard that’s unpleasant. Of course it should be hard.

Luisa Rodriguez: Right. They are hard questions.

Robert Long: And I think both of us really like hard work. I guess it’s remembering not to berate yourself for it being hard. I have had a few humorous times this year where I’m like, “Oh yeah, I’m working on consciousness and AI. Of course it’s hard.”

Luisa Rodriguez: Totally.

Robert Long: I mean, as you know from being my friend, I have tonnes of stuff to say on this subject. Here’s an interesting balance. I think there’s a lot of emotional stuff that you can get sorted out that is very helpful for research. And I think sometimes people maybe don’t realise the extent to which the intellectual virtues are in some sense like emotional and character virtues of equanimity, and not being blinded by pride or fear.

Luisa Rodriguez: Totally. Status. Yeah.

Robert Long: But then on the flip side, a lot of that stuff is definitely not sufficient, and not all of it is necessary if you just fix the incentive structure around you, or the environment around you.

Luisa Rodriguez: Yeah. I do think of you as someone who is especially wise and deliberate about how you set up your environment and the incentives you want to create for yourself to make sure you do good research and share your research. Do you mind saying some stuff about the kinds of stuff you found helpful? And the specific kinds of struggles that those strategies were trying to help you overcome?

Robert Long: Yeah, definitely. I think for anyone doing research, the pit you really want to avoid is your work not making contact with the world—or, more importantly, with other people at regular intervals.

Luisa Rodriguez: Oh yeah. Gosh. I mean, the number of times I’ve been like, “This isn’t ready to share; I need to make it better first.” And then you spend a year of your life writing something that would have looked totally different—and much better—if you’d asked anyone to give you thoughts on the direction it was going.

Robert Long: I think for a certain class of researcher with probably pretty common human experiences, maybe the most important thing you could have is a friend or manager who says, “You are sharing that now. Send that to person X. Just send it, just ship it.”

Luisa Rodriguez: Yeah. “It doesn’t matter that there are unfinished sentences. That is fine. Just get feedback on what you have.”

Robert Long: And having a collaborator is a great way to get out of this pit, and just like never be in it to begin with. I’ve recently been writing a paper where my coauthors and I have only written together in Zoom meetings.

Luisa Rodriguez: Wow. Wild.

Robert Long: It’s extremely fun. I certainly won’t claim that it’s sustainable or optimal for all projects, but what it eliminates is really any chance of getting in your own head.

Luisa Rodriguez: Right.

Robert Long: Or procrastinating. And it’s just so interactive and...

Luisa Rodriguez: It sounds really social. It sounds like maybe you’ve got some nerves about typing your thoughts in front of your colleagues in real time. But once you start doing it, you get all of the gains of your smart colleagues helping you make your ideas better. And if they’re doing it with you, they probably have some respect for your ideas in the first place. And probably you’ll have some good ideas, and some ideas that could be better. And you’ll just find all of that out really quickly.

Robert Long: Yeah. So that’s one thing: having the right environment, where things are shared and social. Of course, I think there are times when everyone needs to go off and just sit in a room and rack their brains over stuff. I think for most important problems, you get a lot of that time. So I’m not saying tweet every thought you have and make sure you get feedback on it and stuff.

Luisa Rodriguez: Sure. But that said, just to put even more emphasis on this idea, I don’t think I ever needed time alone in a room when I was really doing independent research. I think I basically always benefitted from having sometimes daily management check-ins, where I was like, “Here’s what I did today. Here’s the part I’m stuck on. And I want you to be my rubber duckie.” Sometimes I did it with unwilling housemates. I’ve just always found that talking to people about the research thing I’m thinking about, in that moment, on the daily, is better than waiting. As long as people are up for it.

Robert Long: One thing I’ll be very interested to see, and excited if it works, is to what extent large language models and chatbots can serve this role.

Luisa Rodriguez: Totally, yeah.

Robert Long: Yeah. I suspect there will be things that you don’t get from it because I think it is important to know that a real human sometimes is saying, “Yeah, good job” and “These ideas are interesting.” But for like rubber ducking and brainstorming, I think maybe just the impression of actually being in conversation with someone could do a lot.

Luisa Rodriguez: Right. Have you ever used anything like that?

Robert Long: Yeah, I’ve been using ChatGPT. I have it open a lot now while I’m researching, just to be like, “Yeah, I’m thinking about...” I sometimes just sort of put in my day plan, or like what could be my day plan. And ChatGPT has this very kind of HR, feel-good persona because of its training. So it usually says something innocuous, like, “Yeah, just remember to take breaks,” and “It’s important to drink enough water,” and “You can do it with a little faith in yourself.” But you know, it’s good to know. It’s good to be reminded of.

Luisa Rodriguez: That’s good advice.

Robert Long: Yeah. But I think you probably already could be getting a lot more out of current language models in terms of actual feedback. And I’m sure future ones will provide a lot of that.

Luisa Rodriguez: It’s funny, I felt surprised to hear you say that, but I did ask GPT-3 for questions to ask you in this interview.

Robert Long: Right. And I bet it did a decent job too. That would be my guess.

Luisa Rodriguez: It did. It totally did.

Strategies for making independent research more fun and productive [00:23:29]

Luisa Rodriguez: Are there any other strategies that have helped you make independent research more fun or more productive?

Robert Long: I guess we’ve been being very philosophical and emotional, but a nuts-and-bolts thing I can recommend is the app Cold Turkey Blocker. There are other internet blockers as well, but basically I have a block on Cold Turkey that will block everything except Google Docs—and that’ll do it. I mean, if you put your phone away and you put that on, you’re going to have to get some words into your Google Docs. I highly recommend that. That might be all for nuts-and-bolts type stuff for now.

Another philosophical one is, so sometimes you’re feeling bad at work because you haven’t been sharing your work enough. And if you had a friend telling you to share the work, then things would just instantly get a lot better. Or maybe you’re not sleeping enough, or you’re iron deficient, and if you sorted that out, everything would be fine.

And there are some times when you’re just consistently feeling bad doing the work, because you fundamentally don’t care about it. And that’s a sign that you shouldn’t be doing it. As a disclaimer, that has not been the case with this consciousness in AI work.

Luisa Rodriguez: Nice. But has it been the case with you before?

Robert Long: Oh yeah. Yeah, for sure. I think sometimes I’ve overdone the self-help and the environment and the “let’s try a different note-taking app” and “let’s try a different Pomodoro length.” I’ve just overdone that instead of being like, “Oh, the reason I can’t focus on this is I don’t care about it.”

Luisa Rodriguez: Yeah, yeah, yeah. I feel like I’ve been coming to grips with what has felt like a very sad realisation to me. Which is I feel like, when I got more involved with EA—and learned some things about rationality, learned some things about mental health, learned some things about strategies for doing good—I just developed this sense that a lot about my experience was changeable. Like, you can donate a bunch of your salary and not really miss it—you can still be roughly as happy. Or you can choose a research topic that is really valuable, even if it doesn’t totally intrinsically interest you, or hasn’t historically.

And that’s because you can get really interested in it by some mechanism—of like, you think it’s important, so you’ll be motivated to work on it. Or even, you can use some tools and rationality to make it so that your System 1, so to speak—your kind of intuitive or emotional side of your brain—comes to grips with some things your System 2 thinks, just by like, I don’t know, talking yourself into it in some structured way or something.

For me, a very specific example—that is not related to work or careers or research—is I think I felt like I could feel less imposter syndrome. There are a bunch of feelings that I thought I could feel less of, because of stuff like not just self-help, but rationality tools, and arguments about impact, and arguments from philosophers about careers and stuff.

And I feel like I’m coming to terms with the fact that less about me is changeable than I thought. And probably there are a bunch of research topics that are really important—like really, genuinely are instrumentally important to doing a bunch of good—that I probably couldn’t force myself to do, even if I had a great manager and a bunch of friends giving me support. And yeah, I think it’s been years that I thought, “I just need the right structures and then I can do the most impactful thing.” And I think I’m slowly realising that probably I can’t just overhaul all of my intrinsic interests and motivations. I probably have to work with them some.

Robert Long: That sounds like a very important lesson for you, and I’m very happy to hear that you’re coming to it—as your friend, but also for any listeners who feel like they’re constantly fighting against some sort of intuitive sense or intuitive excitement. It’s tricky because there is some amount that we do endorse—like not doing what’s immediately the most fun and exciting—but there definitely is a point at which you’re just bullying yourself, and you’re going to limit your long-term impact and happiness.

I guess one heuristic is it should feel like a negotiated decision between the parts of you that do and don’t want to do something.

Luisa Rodriguez: Interesting, yeah. Say more?

Robert Long: There’s maybe some integrated consensus between the unexcited part and the excited part, where the unexcited part really has been brought on board, and its concerns have been heard that there are some things that are boring about this, but like...

Luisa Rodriguez: And not just like, “Shut up, unexcited part, you get no seat at this table. The part of us that only cares about impact is the only person.”

Robert Long: It’s taking the wheel. Exactly.

Luisa Rodriguez: I mean, that’s just very much how I was making career decisions for a very long time, or topics for research, or yeah, just a bunch of choices. And I think I’m still working on getting all those parts integrated, and the part of me that has intrinsic motivations and interests that don’t necessarily line up with impact—I’m still working on letting that part of me have a seat at the table. It still feels very uncomfortable, and I still feel a strong desire to banish it.

Robert Long: Yeah, I think a certain failure mode of otherwise helpful techniques—like cognitive behavioural therapy, and rationality tools of really considering the impact and stuff—is using that to bully the resistive parts, and not to like have a conversation with them about the bigger picture.

Luisa Rodriguez: Totally. Yeah. Oh man, that just sounds very familiar. Do you feel like you’ve been able to do that?

Robert Long: Yeah, I’ve been working on it. I think it helped that I did kind of realise I was doing what I’m describing. Like I was using CBT in helpful ways, but I think I had this internal narrative of the rational part that really needs to show up and tell everyone who’s boss. And going back to the brain, that’s just not that much of your brain. And a lot of people should do CBT and recognise distorted thinking, but there just is a limit to how much I think you should be really coercing yourself to do stuff. I think, as with all things, there is balance here. Like I’ve seen certain corners of Twitter and certain intellectual communities that seem to have this very strong belief that in some sense you should never be doing anything that looks like self-coercion.

Luisa Rodriguez: Wow.

Robert Long: I think that’s just probably gotta be wrong. I would be very surprised if that’s true. But there’s some truth, and I think being very wary of stuff that makes you feel like you’re killing or locking away a core part of yourself...

Luisa Rodriguez: Yeah. I feel like part of my coming to terms with this is just realising that I think I had the belief that I could lock it away. And to make it more concrete, what is an example of a thing I’ve locked away? I think there was just a long time where I wasn’t considering what kinds of work I enjoyed in deciding what careers to consider.

Robert Long: Which, I’ll add, I do think is counter to official 80,000 Hours advice. Like it should be a factor. I know they do say, correctly, don’t just follow whatever pops into your head as your passion. But anyway, just pointing that out.

Luisa Rodriguez: Totally. No, I agree. I totally oversimplified it. And I think it was because I was unrealistic. You know, they said, don’t follow your passion, consider your personal fit as one consideration. And I was like, “Well, it seems better if I don’t consider personal fit, and I just do the highest-impact thing, regardless of whether I enjoy it or not. And that’s a sacrifice I’m willing to make.”

And the mistake here was thinking that I could make that sacrifice. It turns out, I think I just literally cannot. And I thought maybe I’d do it and suffer, and that would just be like a choice I could make. But instead, the part that was like, “No, we want to do work that’s interesting sometimes”—or it’s not even interesting; I think I’ve always been very lucky to do interesting work, but to do work that scratches some other creative itch, or plays more to like things I naturally enjoy and find really stimulating and motivating—that part apparently just demands to be heard eventually.

And so I think for me, it played a role in bouncing between jobs a lot. And probably to some extent that’s meant I had less impact. Or, it’s a confusing question, but it totally seems possible that that kind of thing could happen. And I think it really boils down to the fact that I just was wrong about whether realistically I could make as many sacrifices as I thought I wanted to.

Robert Long: I think one thing that can really complicate this process too is—it sounds like you were at least aware of the existence of this inclination, or maybe you had really locked it away—but I just want to point out sometimes people have motivations that it would be painful for them to even acknowledge that they have. And then they’re really kind of steering a ship with a broken rudder, because that makes it impossible to even come to a reasoned consensus between these different motivations.

Luisa Rodriguez: Yeah. Oh, interesting. Well, I do feel very inspired, because I do think of you as someone who’s thought about this very wisely in your own career. I have basically seen you be like, “I could work on this philosophical problem. I don’t want to, and I’m not going to force myself to. I’m going to find things at the intersection of what I really want to work on and what is valuable for the world.”

Robert Long: I hope that approach is working somewhat. Sorry, that’s being too modest. Clearly some things are going right. And some things are not going right, because that is also life—and you and I both need to remind ourselves that the gold standard is not 100% happy, frictionless impact every waking hour.

Luisa Rodriguez: Yes, that is another lesson I do need regularly. I think that’s the lesson I need above my desk. Yours can say “You don’t need to solve philosophy now” and mine will say “You cannot and will not be happy and maximising your impact all the time.”

Mistakes researchers often make [00:34:58]

Luisa Rodriguez: A question I wanted to ask that’s kind of related, because these are, I guess, pitfalls that we’re falling into. Do you think there are mistakes you’ve made in your career that listeners might learn from?

Robert Long: Yeah, absolutely. I’d be curious to hear your thoughts on this, because I think you’ve been part of this process this year. I think one of the weakest points in my work process is like month-to-month or quarter-to-quarter focus. I think I’ve made a lot of improvements in my own life to have very good day-to-day focus—I know to put on that blocker, I know to put away my phone—and so I can show up day in, day out and do some deep work, and Cal Newport and all that.

I think this is related to the question of management and accountability. One of my biggest weaknesses or bottlenecks has been something like long-term or medium-term plans and prioritising, “OK, let’s just take the next two months to finish this and then we’ll move on to the next thing.” So, again, life happens and I think everyone probably always feels like they’re not optimally segmenting and focusing stuff, and you know, you get five projects going at once and now they’re all blurring together. But yeah, I can definitely say I’ve made mistakes in not finishing stuff in the right sequence or at the right time.

Luisa Rodriguez: What do you think has made that hard for you?

Robert Long: I think one is sort of a natural disposition towards being interested in a lot of stuff, which I think is a great strength of mine in some ways, but it makes this process harder. I think some of it was lack of self-knowledge and not realising that that’s where the trouble was coming in and focusing more on the daily stuff. I think a lot of it is the environment and the nature of what I’m doing. You know, this probably wouldn’t be an issue in a job that has like quarterly deliverables, and you just have to hit them.

So as with other stuff, I think this can be fixed with a mix of better self-knowledge and also the right environment and the right management. And when I say the right management, I have you in particular in mind, because I brought you on board as my “What are you finishing this month?” accountability buddy. I’d be curious to hear your thoughts on that question.

Luisa Rodriguez: I mean, we’ve both actually done this for each other at different points in our lives. You did this for me in 2019, when I was working on civilisational collapse research and was feeling so impostery that I couldn’t type anything on a page. For months we had daily calls, and I set accountability goals with money tied to them—and I think I paid you hundreds of dollars. So it’s very nice to get to repay the favour this year.

And my sense is that, from my perspective, a lot of what was going on was you having an extremely high standard for what it meant for a project to be done. So there were other projects that seemed exciting to you, and it would seem good to me if you wrapped up the one you were doing and moved on to the next one. Like prioritisation seemed hard for you—partly because it didn’t seem like at least any of the solo projects you were working on were ever meeting your standards, were ever good enough to be done.

So I feel like a big part of the value I added was just being like, “Let’s set a delivery date. You’ve worked on this for several months now. From the outside, it seems like if you’ve worked on something for six to 12 months, it’s sensible to share it with the public, or to share it with whoever commissioned you to write this report. So we’re calling that date the end date. And we’re just doing what we can between now and then to wrap it up.” Whereas I think your kind of default perspective was something like, “It’s not done until I’ve solved it.” And that’s a really ridiculous goal that you could have probably tried to chip away at for like years and years and years.

Robert Long: Yeah. And I think it’s important that I knew, in some sense, that was not the goal. And I could have told you that and written that down. But I think by recognising that my brain was going to keep pulling me towards that, that’s when you need outside help—to have a sane friend just be like, “You’re doing it again.”

Luisa Rodriguez: Right, exactly. We had the same conversation so many times. It wasn’t like I was telling you things you didn’t know, or even like each time we checked in was troubleshooting a different problem. Very often, you were like, “I need to share this thing in two weeks.” Then one week went by and you were like, “I’m not going to do that. I’m going to share it in another two weeks.” And I was like, “Nope, you said you were going to do it in two weeks, which is next week. So you have to do that.” And we talked about the reasons we endorsed, and they made sense to you, and then you did it.

Robert Long: Yeah, I guess the listeners can gauge the advice. I do think some of this is an especially raw problem given the way I am. Do you think this is probably something most people will face if they’re not, you know, working for a line manager with deliverables that show up every quarter? Yeah, who will this apply to?

Luisa Rodriguez: My guess is like… Honestly, it feels like fewer than like 10% of the people I know would just decide to write something for the EA Forum, and then get all the way to completion in as short a time as they would endorse. Or like, I don’t know, sharing it for feedback as soon as they would endorse. I think almost no one does that.

I certainly didn’t. The only reason I ever published anything on the EA Forum is because I was paid to and we set a hard deadline of a certain day in a certain month. And when it got to that time, I felt like it was unacceptable that we were publishing it, and it just felt horrible. Like I felt like it was the wrong thing to do so strongly. And I think I could have worked on that for like years and years more, before I actually wanted to share it or publish it or whatever. I think I just basically know no one who doesn’t have some of that. I guess some people do struggle with it more than others. And that’s probably like impostery people, perfectionisty people, anxious people. Does that sound kind of right?

Robert Long: Yeah, for sure. Another point is—and this goes to the point of not beating yourself up for struggling—one of the best questions someone asked me in the last calendar year was when I was talking about the pitfalls I felt like I was in, and she just asked me, “What do researchers need? Like in general, what do they need?” And I was like, “Oh, well, you know, they need deadlines. They need people to check in on them. They need regular feedback. They need breaks sometimes.”

Luisa Rodriguez: And it was totally obvious to you that researchers in general need that. However, you...

Robert Long: I think I was thinking about this 10% of people that you mentioned, who somehow weirdly won some lottery where they don’t need that. And those people do exist, and hats off to them. But if you’re not, you need to try to get those things.

Luisa Rodriguez: And you probably shouldn’t hate yourself for it.

Robert Long: Probably, yeah.

Keiran’s outro [00:42:51]

Keiran Harris: If you enjoyed that one, you’ll be delighted to know we’ve got plenty more Luisa content coming up over the next few months.

If you want to hear more from Rob, you can follow him on Twitter at @rgblong and subscribe to his Substack, Experience Machines.

And a reminder that you can listen to Luisa and Rob’s excellent episode on artificial sentience over on the original 80,000 Hours Podcast feed — that’s number 146.

All right, audio mastering and technical editing for this episode by Ben Cordell and Milo McGuire.

Full transcripts and an extensive collection of links to learn more are available on our site and put together by Katy Moore.

And I produce the show.

Thanks for listening.
