I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into the details too much publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that's like: well, in fact I heard about the AIS job from 80k. If I ensure even two (additional) people hear about AIS jobs by working at 80k, isn't it possible that going to 80k could be even better for AIS than doing the job would be?
In that form, the argument is naive and implausible. But I don't think I know what the "sophisticated" argument that replaces it is. Here are some thoughts:
Working in AIS also promotes growth of AIS. It would be a mistake to only consider the second-order effects of a job when you're forced to by the lack of first-order effects.
OK, but focusing on org growth full-time surely seems better for org growth than having it be a side effect of the main thing you're doing.
One way to think about this is to compare two strategies for improving talent at a target org: "try to find people and move them into roles in the org, as part of cultivating a whole talent pipeline into the org and related orgs", versus "put all of your full-time effort into having a single person, i.e. you, do a job at the org". It seems pretty easy to imagine that the former would be the better strategy?
I think this is the same intuition that makes pyramid schemes seem appealing (something like: surely I can recruit at least 2 people into the scheme, and surely they can recruit more people, and surely the norm is actually that you recruit a tonne of people). It's really only by looking at the mathematics of the population as a whole that you can see it can't possibly work, and that it's necessarily the case that most people in the scheme will recruit exactly zero people ever.
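A minimal counting sketch of that population-level point, with an assumed scheme size (the specific number is arbitrary and only illustrative):

```python
# In any finite scheme of N members, exactly N - 1 people were ever recruited
# (everyone except the founder), so the mean number of recruits per member is
# (N - 1) / N < 1, no matter how large the scheme gets. If everyone who recruits
# anyone recruits at least 2 people, then a majority must recruit nobody at all.
N = 1_000_000                        # assumed scheme size; the conclusion holds for any N
total_recruits = N - 1               # each non-founder was recruited by exactly one person
mean_recruits = total_recruits / N   # just under 1
max_recruiters = (N - 1) // 2        # upper bound if every successful recruiter brings in >= 2
share_recruiting_nobody = (N - max_recruiters) / N
print(f"mean recruits per member: {mean_recruits:.6f}")                   # ~0.999999
print(f"minimum share who recruit nobody: {share_recruiting_nobody:.2%}") # > 50%
```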
Maybe a pyramid scheme is the extreme of "what if literally everyone in EA worked at 80k", and serves as a reductio ad absurdum for always going meta, but it doesn't tell you precisely when to stop going meta. It could simultaneously be the case that pyramid schemes stop far too late, but everyone else stops significantly too early.
OK, so perhaps the thing to do is try to figure out what the bottleneck is in recruitment, and when it would flip from "not enough people are working on recruitment" to "there isn't more work to do in recruitment", e.g. because you've already reached most of the interested population, and the people you've already reached don't need more support, or in practice giving them more support doesn't improve outcomes.
OK, so when recruiting stops working, stop doing it (hardly a shocking revelation). But even that is much more recruiting-heavy in its implications than the norm. Surely you shouldn't stop only when recruiting is useless, but when it is less useful than your best alternative.
In a lot of for-profit organisations, you need to stop recruiting because you need to start making money to pay the recruiters and the people they recruit. Even if continuing to recruit would be worth more than it costs, you can't pay for it, so you have to do some object-level work to prove to people that you deserve the confidence to keep recruiting.
It's natural to think that if you could have arbitrary lines of credit, you'd want to do lots more recruitment up front, and this would eventually pay off. But there are people out there with money to lend, and they systematically don't want to lend it to you. Are they irrational, or are you over-optimistic?
It seems like the conclusion is that forcing you to do object-level work is actually rational: lots of people with optimistic projections of how much recruitment will help are just wrong, and we need to find that out before burning too much money on recruiting. This discipline is enforced by the funding structure of for-profit firms.
The efficient market hypothesis as applied here suggests that systematically demanding less or more object-level work from firms won't allow you to do better than the status quo, so the existing overall level of skepticism is correct-ish. But non-profit "recruiting" is wildly different in many ways, so I don't know how much we should feel tethered to what works in the for-profit world.
The overall principle that we need people doing object-level work in EA in order to believe that meta-level work is worth doing seems correct. In the for-profit world the balance is struck basically by your equity holders or creditors seeking assurance from you that your growth will lead to more revenue in the future. In the non-profit world I suppose your funders can provide the same accountability. In both cases the people with money are not wizards and are not magically better at finding the truth than you are, though at least in the for-profit world if they repeatedly make bad calls then they run out of money to make new bad calls with (while if they make good calls they get more money to make more / bigger calls with), so that acts as something of a filter. But being under budget discipline only makes it more important that you figure out whether to grow or execute; it doesn't really make you better at figuring it out.
I suppose the complement to the naive thing I said before is: "80k needs a compelling reason to recruit people to EA, and needs EA to be compelling to the people it recruits; by doing an excellent job at some object-level work, you can grow the value of 80k recruiting, both by making it easier to do and by making the outcome more valuable. Perhaps this might be even better for recruiting than doing recruiting."
This feels less intuitively compelling, but it's useful to notice that it exists at all.
This take is increasingly non-quick, so I think I'm going to post it and meditate on it somewhat, and then think about whether to write more or edit this one.
My first reaction is that working on AI safety directly is more specialised and niche, so it might be your comparative advantage, while the 80k role might be filled by candidates from a wider range of backgrounds.
I think this depends on what the specific role is. I think the one I'm going for is not easily replaceable, but I'm mostly aiming not to focus on the specific details of my career choice in this thread, instead trying to address the broader questions about meta work generally.
It feels like when I'm comparing the person who does object-level work to the person who does meta-level work that leads to (say) 2 people doing object-level work, the latter really does seem better, all else equal. But the intuition that calls this model naive is driven by a sense that it's going to turn out not to "actually" be 2 additional people: that additionality is going to be lower than you think, that the costs of getting that result are higher than you think, etc.
But this intuition is not as clear as I'd like on what the extra costs / reduced benefits are, and how big a deal they are. Here are the first ones I can think of:
Perhaps the people that you recruit instead aren't as good at the job as you would have been.
If your org's hiring bottleneck is not finding great people, but having the management capacity to onboard them or the funding capacity to pay for them, then doing management or fundraising, or work that supports the case for fundraising, might matter more.
But 80k surely also needs good managers, at least as a general matter.
I think when an org hires you, there's an initial period of onboarding where you consume more staff time than you produce, especially if you weight by seniority. Different roles differ strongly on where their break-even point is. I've worked somewhere that thought their number was like 6-18 months (I forget what they said exactly, but in that range), and I can imagine cases where it's more like... day 2 of employment. Anyway, one way or another, if you cause object-level work to happen by doing meta-level work, you're introducing another onboarding delay before stuff actually happens. If the area you're hoping to impact is time-sensitive, this could be a big deal? But usually I'm a little skeptical of time-sensitivity arguments, since people seem to make them at all times.
It's easy to inadvertently take credit for a person going into a role that they would actually have gone into anyway, or not to notice when you guide someone into a role that's worse (or not better, or not so much better) than what they would have done otherwise (80k are clearly aware of this and try to measure it in various ways, but it's not something you can do perfectly).
I think that this:
> but the intuition that calls this model naive is driven by a sense that it's going to turn out not to "actually" be 2 additional people: that additionality is going to be lower than you think, that the costs of getting that result are higher than you think, etc.
is most of the answer. Getting a fully counterfactual career shift (that person's expected career value without your intervention is ~0, but instead they're now going to work at [job you would otherwise have taken, for at least as long as you would have]) is a really high bar to meet. If you did expect to get 2 of those, at equal skill levels to you, then I think the argument for "going meta" basically goes through.
In practice, though:
- People who fill [valuable role] after your intervention probably had a significant chance of finding out about it anyway.
- They also probably had a significant chance of ending up in a different high-value role had they not taken the one you intervened on.
How much of a discount you want to apply for these things is going to depend a lot on how efficiently you expect the [AI safety] job market to allocate talent. In general, I find it easier to arrive at reasonable-seeming estimates for the value of career/trajectory changes by modelling them as moving the change earlier in time rather than causing it to happen at all. How valuable you expect the acceleration to be depends on your guesses about time-discounting, which is another can of worms, but I think it is plausibly significant, even with no pure rate of time preference.
(This is basically your final bullet, just expanded a bit.)
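A minimal sketch of that acceleration framing, with made-up numbers for the discount rate, career length, and years of acceleration (none of these are estimates from the thread):

```python
# Value of a career change modelled as an acceleration: the person would have ended
# up in the role anyway; the intervention just moves their start date earlier.
# With a constant annual discount rate d, a researcher-year at time t is worth (1 - d) ** t.
def discounted_value(start_year: float, duration_years: float, d: float, steps: int = 1000) -> float:
    """Approximate discounted value of one person doing direct work from start_year
    for duration_years, at annual discount rate d."""
    dt = duration_years / steps
    return sum((1 - d) ** (start_year + i * dt) * dt for i in range(steps))

d = 0.10            # assumed annual discount rate on a researcher-year
career_length = 10  # assumed years the person stays in the role
acceleration = 2    # assumed years earlier they start because of the intervention

accelerated = discounted_value(0, career_length, d)
counterfactual = discounted_value(acceleration, career_length, d)
print(f"value credited to the intervention: {accelerated - counterfactual:.2f} discounted researcher-years")
```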
I feel like the time-sensitivity argument is a pretty big deal for me. I expect that even if the meta role does cause >1 additional person-equivalent doing direct work, that might take at least a few years to happen. I think you should have a nontrivial discount rate for when the additional people start doing direct work in AI safety.
I'm not sure the onboarding delay is relevant here, since it happens in either case?
One crude way to model this is to estimate:
- the discount rate for "1 additional AI Safety researcher" over time
- the rate of generating counterfactual AI Safety researchers per year by doing meta work
If I actually try to plug in numbers here, the meta role seems better, although this doesn't match my overall gut feeling.
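For concreteness, here is one way that crude model could be written down; the discount rate, the counterfactual-researcher rate, and the horizon below are placeholder assumptions, not anyone's actual estimates:

```python
# Crude comparison: direct work supplies one researcher-equivalent from year 0;
# meta work generates r counterfactual researcher-equivalents per year, each of whom
# contributes from the year they are recruited. Everything is discounted at annual rate d.
d = 0.10       # assumed discount rate on "1 additional AI Safety researcher" per year
r = 0.5        # assumed counterfactual researcher-equivalents generated per year of meta work
horizon = 15   # assumed number of years over which contributions are counted

# Direct path: you do the object-level job yourself, starting immediately.
direct_value = sum((1 - d) ** t for t in range(horizon))

# Meta path: in each year t you generate r new researchers, who then contribute
# (discounted) for the remaining years of the horizon.
meta_value = sum(r * sum((1 - d) ** s for s in range(t, horizon)) for t in range(horizon))

print(f"direct: {direct_value:.1f} vs meta: {meta_value:.1f} discounted researcher-years")
```

With these particular placeholders the meta path comes out well ahead, which matches the "meta seems better" result above; the interesting question is which of the inputs (the rate r, the discount rate, the horizon) your gut actually disagrees with.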
The onboarding delay is relevant because in the 80k case it happens twice: the 80k person has an onboarding delay, and then the people they cause to get hired have onboarding delays too.
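Continuing the same kind of back-of-the-envelope model, a small variation shows how paying an assumed onboarding lag twice eats into the meta path (the one-year lag is a placeholder):

```python
# Same crude comparison as above, but with an onboarding lag L (years) before anyone is
# net-productive. The direct path pays the lag once; the meta path pays it twice:
# once for the meta worker, and again for each person they bring in.
d, r, horizon, L = 0.10, 0.5, 15, 1   # all placeholder values

direct_value = sum((1 - d) ** t for t in range(L, horizon))
meta_value = sum(
    r * sum((1 - d) ** s for s in range(t + L, horizon))  # recruit found in year t, productive from t + L
    for t in range(L, horizon)                            # the meta worker is only productive from year L
)
print(f"direct: {direct_value:.1f} vs meta: {meta_value:.1f} discounted researcher-years")
```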
> I suppose the complement to the naive thing I said before is: "80k needs a compelling reason to recruit people to EA, and needs EA to be compelling to the people it recruits; by doing an excellent job at some object-level work, you can grow the value of 80k recruiting, both by making it easier to do and by making the outcome more valuable. Perhaps this might be even better for recruiting than doing recruiting."
I think there are a bunch of meta effects from working in an object-level job:
The object-level work makes people more likely to enter the field, as you note. (Though this doesn't just route through 80k; it goes through a bunch of mechanisms.)
You'll probably have some conversations with people considering entering the field from a slightly more credible position, at least if the object-level stuff goes well.
Part of the work will likely involve fleshing stuff out so people with less context can more easily join/contribute. (True for most / many jobs.)
Sometimes I feel bone-headedly stuck on even apparently-simple things like "if nonprofit growth is easier than for-profit growth, does that mean that nonprofits should spend more effort on growth, or less?"
Is your question how we should think about meta vs object level work, excluding considerations of personal fit? Because, at least in this example, I would expect fit considerations to dominate.
Basically, it seems to me that for any given worker, these career options would have pretty different levels of expected productivity, influenced by things like aptitude and excitement/motivation. And my prior is that in most cases, these productivity differences should swamp the sort of structural considerations you bring up here.
Your AI timelines would likely be an important factor here.
Agree. If you think career switches take 18 months but timelines are 72 months, then direct work is more important?
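A quick back-of-the-envelope version of that break-even point, ignoring discounting and treating the 18 and 72 months as given:

```python
# If timelines are 72 months and a meta-driven career switch takes 18 months to land,
# each switched person contributes for only the remaining 54 months, versus 72 months
# of direct work starting now. The break-even number of counterfactual switches is then:
timeline_months = 72        # assumed time remaining
switch_delay_months = 18    # assumed time for a meta-driven switch to land
break_even = timeline_months / (timeline_months - switch_delay_months)
print(f"meta needs > {break_even:.2f} fully counterfactual person-equivalents to break even")
# ~1.33 here; discounting, or an onboarding lag on top of the switch, raises the bar further.
```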