It feels like when I'm comparing the person who does object-level work to the person who does meta-level work that leads to 2 people (say) doing object-level work, the latter really does seem better, all else equal; but the intuition that calls this model naive is driven by a sense that it's going to turn out not to 'actually' be 2 additional people: that additionality is going to be lower than you think, that the costs of getting that result are higher than you think, etc.
But this intuition is not as clear as I'd like on what the extra costs / reduced benefits are, and how big a deal they are. Here are the first ones I can think of:
- Perhaps the people that you recruit instead aren't as good at the job as you would have been.
- If your org's hiring bottleneck is not finding great people, but rather having the management capacity to onboard them or the funding capacity to pay for them, then doing management or fundraising (or work that supports the case for fundraising) might matter more.
  - But 80k surely also needs good managers, at least as a general matter.
- I think when an org hires you, there's an initial period of onboarding where you consume more staff time than you produce, especially if you weight by seniority. Different roles differ a lot in where their break-even point is: I've worked somewhere that thought their number was something like 6-18 months (I forget what they said exactly, but in that range), and I can imagine cases where it's more like… day 2 of employment. Either way, if you cause object-level work to happen by doing meta-level work, you're introducing another onboarding delay before stuff actually happens. If the area you're hoping to impact is time-sensitive, this could be a big deal? But usually I'm a little skeptical of time-sensitivity arguments, since people seem to make them at all times.
- It's easy to inadvertently take credit for a person going to a role that they would actually have gone to anyway, or not to notice when you guide someone into a role that's worse (or not better, or not so much better) than what they would have done otherwise. (80k are clearly aware of this and try to measure it in various ways, but it's not something you can do perfectly.)
I think that this:
> but the intuition that calls this model naive is driven by a sense that it's going to turn out not to 'actually' be 2 additional people: that additionality is going to be lower than you think, that the costs of getting that result are higher than you think, etc.
is most of the answer. Getting a fully counterfactual career shift (that person's expected career value without your intervention is ~0, but instead they're now going to work at [job you would otherwise have taken, for at least as long as you would have]) is a really high bar to meet. If you did expect to get 2 of those, at equal skill levels to you, then I think the argument for 'going meta' basically goes through.
In practice, though:
- People who fill [valuable role] after your intervention probably had a significant chance of finding out about it anyway.
- They also probably had a significant chance of ending up in a different high-value role had they not taken the one you intervened on.
How much of a discount you want to apply for these things is going to depend a lot on how efficiently you expect the [AI safety] job market to allocate talent. In general, I find it easier to arrive at reasonable-seeming estimates for the value of career/trajectory changes by modelling them as moving the change earlier in time, rather than as causing it to happen at all. How valuable you expect that acceleration to be depends on your guesses about time-discounting, which is another can of worms, but I think it's plausibly significant, even with no pure rate of time preference.
(This is basically your final bullet, just expanded a bit.)
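To make the acceleration framing concrete, here's a minimal back-of-envelope sketch in Python; the discount rate, career length, and the 3-year acceleration are all made-up numbers, not estimates I'd defend:

```python
# Value of a trajectory change modelled as acceleration: the person ends up doing
# the high-value work either way; the intervention just moves the start date earlier.
# All numbers below are illustrative assumptions.

def discounted_stream(start_year: int, length: int, rate: float) -> float:
    """Discounted value of 1 unit/year of work, starting after start_year years."""
    return sum(1 / (1 + rate) ** (start_year + t) for t in range(length))

career_length = 30    # years of high-value work (assumption)
discount_rate = 0.10  # annual discount on a person-year of direct work (assumption)
acceleration = 3      # years earlier the work starts because of the intervention (assumption)

with_intervention = discounted_stream(0, career_length, discount_rate)
without_intervention = discounted_stream(acceleration, career_length, discount_rate)

print(f"naive 'full counterfactual' value:     {with_intervention:.2f}")
print(f"value of starting {acceleration} years earlier:       {with_intervention - without_intervention:.2f}")
```

The point is just that the counterfactual value is the gap between the two streams, not the whole career.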
I feel like the time-sensitivity argument is a pretty big deal for me. I expect that even if the meta role does cause >1 additional person-equivalent doing direct work, that might take at least a few years to happen. I think you should have a nontrivial discount rate on when the additional people start doing direct work in AI safety.
I'm not sure the onboarding delay is relevant here, since it happens in either case?
One crude way to model this is to estimate:
- the discount rate for "1 additional AI safety researcher" over time
- the rate of generating counterfactual AI safety researchers per year by doing meta work
If I actually try to plug in numbers here, the meta role seems better, although this doesn't match my overall gut feeling.
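For concreteness, here's roughly the kind of back-of-envelope calculation I mean, in Python; every parameter is a guess, and the conclusion is sensitive to them:

```python
# Crude comparison: do direct work yourself vs. do meta work that generates
# counterfactual direct workers over time. Every parameter below is a guess.

def discounted_person_years(start_delay: int, years: int, rate: float) -> float:
    """Discounted person-years of direct work beginning after start_delay years."""
    return sum(1 / (1 + rate) ** (start_delay + t) for t in range(years))

discount_rate = 0.15        # nontrivial discount on "1 additional AI safety researcher"
horizon = 10                # years of direct work counted per person
researchers_per_year = 0.5  # counterfactual researchers generated per year of meta work
meta_years = 5              # how long you stay in the meta role

# Option 1: you do the direct work yourself, starting now.
direct = discounted_person_years(0, horizon, discount_rate)

# Option 2: each year of meta work produces researchers_per_year counterfactual people,
# who then start their own streams of direct work in that year.
meta = sum(
    researchers_per_year * discounted_person_years(year, horizon, discount_rate)
    for year in range(meta_years)
)

print(f"direct work yourself: {direct:.1f} discounted person-years")
print(f"meta work:            {meta:.1f} discounted person-years")
```

With these (made-up) numbers the meta option comes out well ahead, but it's quite sensitive to the counterfactual-researchers-per-year guess: halving it to 0.25 already makes the two options roughly break even.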
The onboarding delay is relevant because in the 80k case it happens twice: the 80k person has an onboarding delay, and then the people they cause to get hired have onboarding delays too.
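To put rough numbers on how much that extra delay costs, here's a tiny sketch in the same back-of-envelope style; the one-year onboarding periods and the 15% discount rate are purely illustrative:

```python
# How much does the second onboarding delay cost? You onboard at the meta org first,
# and then each person you cause to get hired onboards at the object-level org.
# Onboarding lengths and the discount rate are purely illustrative.

def discounted_stream(start_delay: float, years: int, rate: float) -> float:
    """Discounted value of 1 unit/year of direct work, starting after start_delay years."""
    return sum(1 / (1 + rate) ** (start_delay + t) for t in range(years))

discount_rate = 0.15
horizon = 10             # years of productive direct work per person
their_onboarding = 1.0   # delay before a new hire is net-positive
your_onboarding = 1.0    # extra delay before your meta work starts producing hires

one_delay = discounted_stream(their_onboarding, horizon, discount_rate)
two_delays = discounted_stream(your_onboarding + their_onboarding, horizon, discount_rate)

print(f"discounted output, direct hire (one delay): {one_delay:.2f}")
print(f"discounted output via meta (two delays):    {two_delays:.2f}")
print(f"cost of the extra onboarding delay:         {1 - two_delays / one_delay:.0%}")
```

Each extra year of delay shaves off roughly the discount rate's worth of value per person, so it's not decisive on its own, but it stacks with the additionality discounts discussed above.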