I agree with many of the points – especially that personal fit is a big deal, that doing a PhD is also in part useful research (rather than pure career capital), and that what matters is time until the x-risk rather than any particular definition of AGI – but I’m worried this bit understates the reasons for urgency quite a bit:
you might then conclude that delaying your career by 6 years would cause it to have 41⁄91 = 45% of the value. If that’s the case, if the delay increased the impact you could have by a bit more than a factor of 2, the delay would be worth it.
This is based on a model in which work becomes moot after a transition point, but it assumes that work before the transition is equally valuable no matter the year.
However, the AI safety community is probably growing at 40%+ per year, and (if timelines are short) it’ll probably still be growing at 10-20%+ when the potential existential risk arrives. This roughly means that moving a year of labour invested in AI safety community building one year earlier makes it 10-20% more valuable. This would mean an extra year of labour now is worth 3-10x as much as one in 10 years, all else equal.
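To make the compounding effect concrete, here’s a minimal sketch in Python (my own toy model with assumed constant growth rates, not figures from the thread):

```python
# Toy model (assumed numbers, not from the thread): if the field grows at a
# constant rate, the same year of labour is a smaller share of the total the
# later it happens, so moving it one year earlier is worth roughly
# (1 + growth_rate) times as much.

def relative_value(growth_rate: float, years_earlier: int) -> float:
    """Value of a year of labour now relative to the same labour done
    `years_earlier` years later, under constant field growth."""
    return (1 + growth_rate) ** years_earlier

for growth in (0.10, 0.20):
    print(f"{growth:.0%} growth: now vs 10 years later -> {relative_value(growth, 10):.1f}x")
# 10% growth: now vs 10 years later -> 2.6x
# 20% growth: now vs 10 years later -> 6.2x
```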
Or to turn to direct work, there are serial dependencies, i.e. 100 people working for 1 year won’t achieve anywhere near as much as 10 people working for 10 years. This again could make extra labour on alignment now many times more valuable than work in 10 years.
Another argument is that since the community can have more impact in worlds with short timelines, people should act as if timelines are shorter than they are.
This could mean, for instance, that if your best guess is 33% on timelines under 10 years, 33% on medium timelines and 33% on longer timelines, it might be optimal for people to allocate effort something like 70% / 15% / 15% across those scenarios. Yes, in this world some people would still focus on long-term career capital, but fewer than normal.
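As a rough illustration of where a split like 70% / 15% / 15% could come from, here’s a hedged sketch that weights each scenario by probability times an assumed relative leverage of marginal work in that world (the leverage numbers are mine, purely for illustration):

```python
# Hypothetical illustration: allocate effort in proportion to
# probability * assumed leverage of marginal work in that world.
probabilities = {"<10yr": 1 / 3, "medium": 1 / 3, "long": 1 / 3}
leverage = {"<10yr": 5.0, "medium": 1.0, "long": 1.0}  # assumed, not from the thread

weights = {k: probabilities[k] * leverage[k] for k in probabilities}
total = sum(weights.values())
allocation = {k: round(w / total, 2) for k, w in weights.items()}
print(allocation)  # {'<10yr': 0.71, 'medium': 0.14, 'long': 0.14}
```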
Estimating the size of these effects is hard – my main point is that they can be very large, especially as timelines get short. (Many of these effects feel like they go up non-linearly as timelines shorten.)
So while I agree that if someone’s median timeline estimate changes from, say, 25 years to 20 years, that’s not going to have much effect on the conclusion, I do think how much to focus on career capital could be pretty sensitive to, say, your probability of <10 year timelines.
This is a useful consideration to point out, thanks. I push back a bit below on some specifics, but this effect is definitely one I’d want to include if I do end up carving out time to add a bunch more factors to the model.
I don’t think having skipped the neglectedness considerations you mention is enough to call the specific example you quote misleading though, as it’s very far from the only thing I skipped, and many of the other things point the other way. Some other things that were skipped:
Work after AGI likely isn’t worth 0, especially under e.g. the Metaculus definitions of AGI.
While in the community building examples you’re talking about, shifting work later doesn’t change the quality of that work, this is not true wrt PhDs (doing a PhD looks more like truncating the most junior n years of work than shifting all years of work n years later – see the toy sketch at the end of this comment).
Work that happens just before AGI can be done with a much better picture of what AGI will look like, which pushes against the neglectedness effect.
Work from research leads may actually increase in effectiveness as the field grows, if the growth is mostly coming from junior people who need direction and/or mentorship, as has historically been the case.
And then there’s something about changing your mind, but it’s unclear to me which direction this shifts things:
It’s easier to drop out of a PhD than it is to drop into one, if e.g. your timelines suddenly shorten.
If your timelines shorten because AGI arrives, though, it’s too late to switch, while big updates towards timelines being longer are things you can act on, pushing towards acting as if timelines are short.
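To illustrate the “truncating vs shifting” point from the list above, here’s a toy sketch (the productivity curve, deadline and PhD length are all assumptions of mine, not anything from the thread):

```python
# Toy model (assumed numbers): productivity rises with years of experience,
# and only work done before a deadline T counts. An n-year PhD is modelled
# two ways:
#   "shift":    everything happens n years later, so only T - n years of
#               work fit in, at experience levels 0 .. T-n-1
#   "truncate": the PhD replaces the most junior n years, so you work at
#               experience levels n .. T-1

def productivity(experience: int) -> float:
    return 1 + 0.3 * experience  # assumed: value rises with experience

def career_value(experience_levels) -> float:
    return sum(productivity(e) for e in experience_levels)

T, n = 10, 3  # assumed deadline and PhD length

no_phd = career_value(range(T))        # ~23.5
shifted = career_value(range(T - n))   # ~13.3
truncated = career_value(range(n, T))  # ~19.6

print(no_phd, shifted, truncated)
# Truncation loses the cheap junior years; shifting loses the more valuable
# senior years at the end of the window, so it costs much more.
```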
Good point that there are reasons why work could get more valuable the closer you are to the transition – I should have mentioned that.
Also interesting points about option value.