A freshman year during the AI midgame: my approach to the next year
I recently spent some time reflecting on my career and my life, for a few reasons:
It was my 29th birthday, an occasion which felt like a particularly natural time to think through what I wanted to accomplish over the course of the next year 🙂.
It seems like AI progress is heating up.
It felt like a good time to reflect on how Redwood has been going, because we’ve been having conversations with funders about getting more funding.
I wanted to have a better answer to these questions:
What’s the default trajectory that I should plan for my career to follow? And what does this imply for what I should be doing right now?
How much urgency should I feel in my life?
How hard should I work?
How much should I be trying to do the most valuable-seeming thing, vs engaging in more playful exploration and learning?
In summary:
For the purposes of planning my life, I’m going to act as if there are four years before AGI development progresses enough that I should substantially change what I’m doing with my time, and then there are three years after that before AI has transformed the world unrecognizably.
I’m going to treat this phase of my career with the urgency of a college freshman looking at their undergrad degree—every month is 2% of their degree, which is a nontrivial fraction, but they should also feel like they have a substantial amount of space to grow and explore.
The AI midgame
I want to split the AI timeline into the following categories.
The early game, during which interest in AI is not mainstream. I think this ended within the last year 😢
The midgame, during which interest in AI is mainstream but before AGI is imminent. During the midgame:
The AI companies are building AIs that they don’t expect will be transformative.
The alignment work we do is largely practice for alignment work later, rather than an attempt to build AIs that we can get useful cognitive labor from without them staging coups.
For the purpose of planning my life, I’m going to imagine this as lasting four more years. This is shorter than my median estimate of how long this phase will actually last.
The endgame, during which AI companies conceive of themselves as actively building models that will imminently be transformative, and that pose existential takeover risk.
During the endgame, I think that we shouldn’t count on having time to develop fundamentally new alignment insights or techniques (except maybe if AIs do most of the work? I don’t think we should count on this); we should be planning to mostly just execute on alignment techniques that involve ingredients that seem immediately applicable.
For the purpose of planning my life, I’m going to imagine this as lasting three years. This is about as long as I expect this phase to actually take.
I think this division matters because several aspects of my current work seem like they’re optimized for the midgame, and I should plausibly do something very differently in the endgame. Features of my current life that should plausibly change in the endgame:
I’m doing blue-sky alignment research into novel alignment techniques; during the endgame, it might be too late to do this.
I’m working at an independent alignment org and not interacting with labs that much. During the endgame, I probably either want to be working at a lab or doing something else that involves interacting with labs a lot. (I feel pretty uncertain about whether Redwood should dissolve during the AI endgame.)
I spend a lot of my time constructing alignment cases that I think are analogous to difficulties that we expect to face later. During the endgame, you probably have access to the strategy “observe/construct alignment cases that are obviously scary in the models you have”, which seems like it partially obsoletes this workflow.
I’m doing research that is practice rather than an actual attempt at aligning models or safely extracting cognitive labor from them. Some of the work I expect to want takeover-concerned people to do during the endgame is probably very practical/empirical. But I expect us to also want to do some difficult-to-empirically-ground work to answer questions like “How could this particular model be scarily misaligned? How might our alignment strategy have failed such that this particular model will try to kill us?”
One core question here is: How is my impact distributed between work I do in the midgame vs the endgame? (As in, how much of my career value do I expect to lose if I suddenly die at the end of the midgame?)
Midgame impact:
The main mechanism here is that I think I (as part of Redwood) have a shot at developing alignment techniques (or other knowledge) that make a serious difference to the alignment plan, and that can then be adopted by AI labs with no further action from me/Redwood.
Secondarily, I think we have a shot at developing an alignment research methodology and/or organizational structure that allows Redwood and maybe the broader alignment community to do much more good work during the midgame.
Other midgame impacts I can have:
Helping other people to gain skills such that they have a really big impact in the endgame.
Setting up an organization which lasts into the endgame and is really productive at doing something crucial.
Endgame impact:
I have been assuming that most of my impact would come in the midgame, but after thinking about it more over the last few weeks I’m now less sold on that.
Mechanisms by which Redwood-style backchained alignment researchers might have a lot of impact in the endgame:
Being an expert on some particular thing which is extremely useful when trying to align AIs or safely extract cognitive labor from them. When I’m thinking about this, I’m mostly trying to imagine the head of the alignment department at an AI lab being ecstatically delighted to be able to hire me, and saying “I am so glad we are able to work with Buck; it’s so incredibly convenient that he has been obsessively focused on this topic for the last four years and he’s already understood a bunch of the ins and outs; if it wasn’t for him we’d be yoloing stuff way more.”
This could be a class of techniques, e.g. interpretability or MAD.
This could be a certain aspect of the alignment situation, from a more technique-agnostic perspective. For example, it seems pretty plausible to me that when we’re using AI to make lots of autonomous decisions, we’ll have a limited oversight budget, such that we can only afford to review something like 0.1% of those decisions, and we’d really like to be smart about which 0.1% we review so that we maximize the probability of catching bad actions. There are lots of different approaches you might use here; e.g. you probably want to try some kind of anomaly detection, and you probably also want to concentrate oversight on examples where the AI action was higher stakes. If I/some Redwood people obsessively focused on the version of this problem that we expect to happen with TAI, I can imagine us being incredible hires due to having so much relevant experience and cached thinking. (A toy sketch of this kind of review-selection policy appears after this list.)
As another example, if I decided to just focus on anti-adversarial training, it’s easy to imagine being way more familiar with that problem than anyone at an AI lab.
Having a bunch of practice at thinking about AI alignment in principle, which might be really useful for answering difficult-to-empirically-resolve questions about the AIs being trained.
Being well-prepared to use AI cognitive labor to do something useful, by knowing a lot about some research topic that we end up wanting to put lots of AI labor into. Maybe you could call this “preparing to be a research lead for a research group made up of AIs”. Or “preparing to be good at consuming AI research labor”.
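To make the oversight-budget idea above a little more concrete, here is a minimal Python sketch of one way you might pick which ~0.1% of AI actions to send for human review, assuming you already have a per-action anomaly score and a per-action stakes estimate. The function name, the product-of-signals priority rule, and the toy data are all illustrative choices of mine, not anything Redwood or a lab actually uses.

```python
import numpy as np

def select_actions_to_review(anomaly_scores, stakes, review_budget=0.001):
    """Pick which AI actions to send to human oversight under a tiny review budget.

    anomaly_scores: per-action scores from some anomaly detector (higher = more unusual).
    stakes: per-action estimates of how costly a bad action would be (higher = worse).
    review_budget: fraction of actions we can afford to review (e.g. 0.001 for 0.1%).

    Hypothetical sketch: prioritize actions that are both anomalous and high-stakes.
    """
    anomaly_scores = np.asarray(anomaly_scores, dtype=float)
    stakes = np.asarray(stakes, dtype=float)

    # One simple way to combine the two signals; a real system would
    # presumably tune or learn this combination rather than just multiplying.
    priority = anomaly_scores * stakes

    # Review the top `review_budget` fraction of actions by priority.
    n_to_review = max(1, int(review_budget * len(priority)))
    return np.argsort(priority)[-n_to_review:]

# Toy usage: 10,000 actions, flag roughly the 10 most suspicious high-stakes ones.
rng = np.random.default_rng(0)
anomaly = rng.random(10_000)
stakes = rng.random(10_000)
print(len(select_actions_to_review(anomaly, stakes)), "actions flagged for review")
```

Of course, the hard parts are getting anomaly scores and stakes estimates you can trust and deciding how to combine them under a fixed budget, which is exactly the kind of thing it seems valuable to have cached thinking about before the endgame.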
Pacing: a freshman year
I think I want to treat my next year with the pacing of a freshman year in a US undergrad degree, for someone who wants to go into startups and thinks there’s some chance that they’ll want to graduate college early. I think that people going into their freshman year should be thinking a little bit about what they want to do after college. They should understand the things they need to do during college in order to be set up well for their post-college activities (e.g. they probably want to do some research as an undergrad, and they probably need to eventually learn various bits of math). But meeting those requirements probably isn’t going to be where most of their attention goes.
Similarly, I think that I should be thinking a bit about my AI endgame plans, and make sure that I’m not failing to do fairly cheap things that will set me up for a much better position in the endgame. But I should mostly be focusing on succeeding during the midgame (at some combination of doing valuable research and at becoming an expert in topics that will be extremely valuable during the endgame).
When you’re a freshman, you probably shouldn’t feel like you’re sprinting all the time. You should probably believe that skilling up can pay off over the course of your degree. Every month is about 2% of your degree.
I think that this is how I want to feel. In a certain sense, four years is a really long time. I spent a reasonable amount of the last year feeling kind of exhausted and wrecked and rushed, and my guess is that this was net bad for my productivity. I think I should feel like there is real urgency, but also real amounts of space to learn and grow and play.
I went back and forth a lot on how I wanted to set up this metaphor; in particular, I was pretty tempted to suggest that I should think of this as a sophomore year rather than a freshman year. I think that freshmen should usually mostly ignore questions about career planning, whereas I think I should e.g. spend at least some time talking to labs about the possibility of them working with me/Redwood in various ways. I ended up choosing freshman rather than sophomore because I think that planning around three remaining years is less reasonable than planning around four.
And so, my plan is something like:
Put a bit of work into setting up my AI endgame plans.
E.g. talk to some people who are at labs and make sure they don’t think that my vague aspirations here are insane. I’m interested in more suggestions along these lines.
I think that if I feel more like I’ve deliberated once about this, I’ll find it easier to pursue my short-term plans wholeheartedly.
Mostly (like with 70% of my effort), push hard on succeeding at my midgame plans.
Spend about 20% of my effort on learning things that don’t have immediate benefits.
For example, I’ve spent some time over the last few weeks learning about generative modeling, and I plan to continue studying this. I have a few motivations here:
Firstly, I think it’s pretty healthy for me to know more about how ML progress tends to happen, and I feel much more excited about this subfield of ML than about most other subfields. I feel intuitively really impressed by and admiring of the researchers in this field, and it seems healthy for me to have a research field with researchers who I look up to and who I wholeheartedly believe I can learn a lot from.
Secondly, I have a crazy take that the kind of reasoning that is done in generative modeling has a bunch of things in common with the kind of reasoning that is valuable when developing algorithms for AI alignment.