Writing about my job: Research Fellow, FHI

Following Aaron Gertler’s prompt, I am writing about my job as a researcher at the Future of Humanity Institute: the path that led to me applying for it, the application itself, and what it’s like to do the job. See also the 80,000 Hours guides on academic research and philosophy academia.

The basics

Research fellow, Future of Humanity Institute (FHI) at Oxford University

October 1, 2020 - present

When I started this position, I was still working on a PhD in philosophy at New York University; I’m now finishing up my dissertation while working for FHI full-time (here’s my FHI page).

Background and path to applying

I graduated from Harvard in 2011 with a degree in Social Studies (comparable to the UK’s PPE). I did a master’s in philosophy at Brandeis University and started a PhD at NYU in fall 2015.

EA Global 2016

My path to FHI can be directly traced back to my desire, in the summer of 2016, to get my travel to EA Global reimbursed.

I got interested in EA around 2015 and took the Giving What We Can Pledge in summer 2016. Flushed with enthusiasm, I looked into going to EA Global 2016, which was in Berkeley.

Michelle Hutchinson organized an academic poster session for that EAG; somehow I ended up on a list of people who got an email encouraging them to submit a poster. It occurred to me that NYU’s Center for Mind, Brain, and Consciousness reimburses travel expenses for PhD students who are giving talks and presentations in philosophy of mind. Driven in no small measure by this pecuniary motive, I hastily threw together a poster presentation at the intersection of EA and philosophy of mind.

The most important thing about the poster is simply that it got me to the conference.[1] That’s where I first met Michelle Hutchinson; I surmise that meeting Michelle, and being a presenter, got me on a list of EA academics.

GPI

As a result (I think), about a year later I was invited to be part of the Global Priorities Institute’s first group of summer fellows, in the summer of 2018. For my project, I worked on applying Lara Buchak’s work on risk aversion to longtermism and cause prioritization.[2] That summer I met lots of people at FHI, with whom we shared an office and a kitchen. Most notable for the purposes of this post: Katja Grace and Ryan Carey.

AI Impacts

Meeting Katja Grace in summer 2018 led to me doing research for AI Impacts in the summer of 2019. Also in summer 2019, Ryan Carey messaged me to encourage me to apply for the FHI Research Fellow role.

All told, that’s all three of my EA gigs—GPI, AI Impacts, FHI—that stemmed from my decision to go to EA Global 2016 and my cheeky quest to get it reimbursed.[3]

PhD research

Throughout this time, I was doing my PhD research. It was during my PhD that I wrote a paper on fairness measures in machine learning that I would eventually use as my writing sample for FHI. My PhD research also gave me enough familiarity with AI to work on AI-related topics at AI Impacts and eventually FHI.[4] I also ran a reading group on philosophy and AI.

The application

Materials and process

The application required, if I recall correctly: a cover letter, CV/resume, research proposal, writing sample, and two references. The process involved a timed work test (two hours, maybe three?) and two rounds of interviews.

My research proposal, inspired by issues I had been thinking about at AI Impacts, outlined ways to get evidence for or against the Prosaic AGI thesis. In an interview, the selection committee made it clear that they were not especially excited about this research direction. I also discussed my work in AI ethics.

My references were Katja Grace and my dissertation supervisor, David Chalmers.

My writing sample was the aforementioned paper on fairness in machine learning. It was of sufficient interest to GovAI that I was invited (separately from the application process) to give comments at a workshop they were hosting, so I figure it was an asset.

More generally, as I understand it, the features that made me an attractive candidate were: my work running the NYU AI and philosophy reading group, my background in philosophy of mind and AI, and general writing and research strengths. FHI wanted me to help start a research effort on digital minds and AI consciousness, even though I had not really “officially” worked on these topics—that said, I had a decent background from grad school classes and had absorbed a good deal by osmosis, just by being around NYU. I agreed to take a crack at this problem, and I got an offer.

Other things I applied for and did not get

Lessons

My path to FHI seems haphazard to me even in hindsight. But some tentative remarks:

  1. Going to events and meeting people can be extremely valuable; it has high upside risk.

  2. It can be very high-impact to nudge people to apply to things. I’m not sure I would have applied to FHI without Ryan Carey’s encouragement.

  3. It can be useful to finish things to a high level of quality even if you come to believe they are not the highest-impact thing you could be doing. My fairness and machine learning paper had felt like an utter slog for many months by the time my FHI application came up, but I had kept revising and improving it. I’ve been told that the clarity of that piece helped my application stand out.

  4. It can be very difficult to know in advance which of your activities might end up being most professionally useful. For example, I suspect that running the reading group, as much as my “official” research, made me an attractive candidate for my current position.

Doing the job

The main thing I’ve been thinking about since starting at FHI is consciousness in AI systems: how we might know when AI systems are conscious (or if indeed they already are), and how we might make progress on this incredibly difficult question.[5] I also think about the relationship between cognitive science and AI, especially in light of the trend that recent AI progress has come from scaling up large machine learning systems, with relatively little inspiration from cognitive science.

Days and weeks

These are long-term projects where it’s not always clear how to proceed. Much like blogging, research is very independent: it is up to you to figure out what is most important to work on, how to tackle it, and when to do it. On any given day there will be little that must be done immediately, and hardly anyone to make you do it. Setting a schedule and keeping myself accountable are challenging but essential; my friends are all too familiar with the jumble of high-strung techniques I have cobbled together: deadlines with money accountability, group pomodoros, et alia.

On an ideal day, I’ll do three or so hours of deep work, reading and writing on my most important research project, starting in the morning. Lighter tasks, like meetings, replying to emails, and organization, are reserved for the afternoon. (See: Cal Newport, Gwern on morning writing.)

Recently I’ve been working from home in London. I’ll usually start work sometime between 9 and 10:30am, and stop sometime between 6 and 8pm, taking liberal breaks for lunch and for working out. I try to unwind during the evenings: reading, music, hanging out with friends. Research jobs make it hard to “clock out”; there’s always more you could be doing. But clocking out is important. With some jobs, it may be possible and helpful to work more or less all the time; research is not such a job, at least not for me.[6] It takes practice to make sure you keep some kind of regular workday and work week.

I’m not very good at tracking hours, so can’t give a detailed breakdown of a typical week. But here are some things on my to-do list for this week:

  • read ‘The Meta-Problem of Consciousness’: take notes, make flashcards about it, and make a handout for reading group

  • email several academics to ask if they will take a meeting with me, and whether they are interested in visiting the digital minds group in the future

  • revise a draft of a paper I’m working on

  • meet with a colleague about a paper we are collaborating on

  • attend FHI events: digital minds reading group, the Research Progress Meeting

  • meet with an FHI colleague, a DeepMind research scientist, and my mentee for SERI’s summer research internship

Skills developed

-Writing clearly

-Reading academic papers effectively

First, there’s deciding what to read and how deeply to read it. Often this relies on a hard-to-articulate intuition about quality, which develops gradually with familiarity with a field or a literature. Then there’s the reading itself. If a paper seems especially important, I will read it carefully, take extensive notes, and make 10-20 Anki flashcards about it. At the best of times, I will also write a (low-effort) prose summary.

-Learning new maths and empirical literatures as needed

Things relevant to AI consciousness include neuroscience, deep learning, philosophy, and ethology. There’s a huge amount to learn, which is one of the greatest struggles of working on interdisciplinary questions. It is also one of the greatest rewards.

-Networking and field-building

I’m fairly extroverted for a researcher, and I really enjoy meeting and talking to people (including you, the reader! See below). I try to leverage this and make my work as social as possible.

-Self-management

See above. See also Lynette Bye.

Pros and cons

The major advantage of the job is that I have a lot of freedom to work on extremely challenging problems in whatever way I think best. I get to do this surrounded by fascinating people from a variety of disciplines. I look forward to the joys of in-person office life: when grabbing a protein bar from the FHI kitchen, you’re liable to find Anders Sandberg ebulliently holding forth on the physical constraints that govern possible intergalactic civilizations, or overhear some alarming fact about the history of nuclear weapons.

The major drawback of the job is that I have a lot of freedom to work on extremely challenging problems in whatever way I think best. I never feel like I know enough, or that I am up to the challenge: in general, but especially when working on something as genuinely bewildering as consciousness. Cluelessness means I rarely have the satisfaction of knowing I have moved things in the right direction. Imposter syndrome flares up not infrequently. Doing a PhD is a known risk factor for anxiety and depression; I would imagine the same is true for many EA research jobs, which can be similar to graduate research in some key dimensions.

Get in touch

All told, I enjoy my job and consider myself very privileged to have it, especially considering my somewhat fortuitous path to it.

My path to FHI was unique in many ways, with luck, privilege, timing, and personal idiosyncrasies all playing a role. Still, I hope you found this post helpful. Here are a variety of ways to get in touch with me, which would delight me.

Acknowledgements: thank you to the NYU Center for Mind, Brain, and Consciousness for supporting my presentation at EAG 2016. And to Molly Strauss, Sophie Rose, and Stephen Clare for comments on a draft of this post.

Notes


  1. ↩︎

    It’s worth noting that, in my current opinion, the poster itself was not very good (and this is not false modesty). It more or less consisted of two relatively trivial observations: a) that it’s important for cause prioritization to know which systems are conscious, and b) that if the Integrated Information Theory of consciousness, or something like it, is correct, perhaps moral patienthood scales with the “amount” of consciousness. Plausible thoughts, but many other posters presented substantive, polished papers.

  2. ↩︎

    Unfortunately for my work, but fortunately for the world, not long after this the question was taken up by someone far more capable and far more familiar with Lara Buchak’s work: Lara Buchak.

  3. ↩︎

    EA Global 2016 is also when I performed the action that will probably outweigh the rest of my career impact combined: I successfully invited my friend Arden Koehler, who at the time was not involved in effective altruism at all, to attend.

  4. ↩︎

    It might be of interest that I began to work on AI issues more from philosophical curiosity than from any EA considerations—at the time I was skeptical of AI safety as a cause area and of the ‘longtermist turn’ in EA more generally. Nor was I seriously considering EA research as a career at that point.

  5. ↩︎

    This has meant working on the following papers: (1) “The problem of minimal instantiations”, with Jonathan Simon, on the discomfiting fact that very simple computational systems can satisfy the criteria for consciousness proposed by basically all of the leading scientific theories. (2) I am scheming a paper on how illusionists about consciousness should think about AI suffering. (3) I also need to write “AI consciousness: an overview for EAs” and post it to the EA Forum! Please get in touch if you are interested in any of these topics.

  6. ↩︎

    Cf. Paul Graham: “[I]n many kinds of work there’s a point beyond which the quality of the result will start to decline. That limit varies depending on the type of work and the person. I’ve done several different kinds of work, and the limits were different for each. My limit for the harder types of writing or programming is about five hours a day. Whereas when I was running a startup, I could work all the time.”