My model for policy careers is that delaying by one year costs 10-20% of your lifetime impact (assuming roughly 20-year AI timelines, over which time you get ~4 promotions that ~3x your impact). Maybe AISTR careers are less back-loaded, in which case the number would be lower.
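For concreteness, here is one hypothetical way numbers in that range could arise (my own reconstruction, not necessarily the exact model behind the quote: a promotion every 4 years, each multiplying impact by 3, with all work after year 20 moot; the promotion spacing is an assumption):

```python
# Hypothetical reconstruction of the 10-20% figure (assumed parameters).
TIMELINE = 20          # years until AI, after which efforts are moot
PROMOTION_EVERY = 4    # assumed years between promotions (~4 promotions total)
MULTIPLIER = 3         # impact multiplier per promotion

def impact(start_year):
    """Total impact of a career starting in `start_year`, counted up to TIMELINE."""
    total = 0
    for year in range(start_year, TIMELINE):
        promotions_so_far = (year - start_year) // PROMOTION_EVERY
        total += MULTIPLIER ** promotions_so_far
    return total

loss = 1 - impact(1) / impact(0)
print(f"cost of a one-year delay: {loss:.0%}")
```

Under these assumptions the delay costs ~17%, because the year lost is the final, highest-productivity one.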
This model only works if you put 100% probability on AI arriving within 20 years, but for most people that's their median estimate, so there's still a 50% chance it happens in the second half of your career. That scenario adds a lot of EV, because your productivity will be so high by then, so it's probably not wise to ignore it.
My guess is it’s better to model it as a discount rate, where there’s an x% chance of AI happening each year. To get a 50% chance of AI in 20 years (and all future efforts being moot), that’s something like a 3.5% annual chance. Or to get a 50% chance in 10 years, that’s something like a 6.5% annual chance.
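The implied annual hazard rate can be checked directly (a quick sketch, assuming an identical, independent chance each year; the exact values come out at ~3.4% and ~6.7%, consistent with the rough figures above):

```python
def annual_chance(median_years):
    """Annual probability of AI that gives a 50% chance within `median_years`."""
    return 1 - 0.5 ** (1 / median_years)

print(f"20-year median: {annual_chance(20):.1%} per year")
print(f"10-year median: {annual_chance(10):.1%} per year")
```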
When I try to model this with a 5% annual chance of AI, and with your productivity increasing 30% per year up to a peak of 100x, I get the cost of a one-year delay to be 6.6% of lifetime impact.
So that’s high but significantly lower than 10-20%.
It's worth taking a one-year delay if you can increase your career capital (i.e. future productivity) by ~7%, or explore and find a path that's 7% better – which might be pretty achievable.
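That model can be sketched as follows (my reconstruction, assuming a 40-year career and weighting each year's work by the probability AI hasn't happened yet; with these assumptions the loss comes out at ~6.7%, essentially the 6.6% quoted – the exact value depends on details like career length):

```python
def lifetime_impact(delay=0, career_end=40, growth=1.30, peak=100.0, p_ai=0.05):
    """Expected lifetime impact: productivity grows 30%/yr up to a 100x cap,
    weighted by the probability AI hasn't yet happened in each year."""
    total = 0.0
    for year in range(delay, career_end + 1):
        productivity = min(growth ** (year - delay), peak)
        survival = (1 - p_ai) ** year  # chance AI hasn't happened by `year`
        total += productivity * survival
    return total

loss = 1 - lifetime_impact(delay=1) / lifetime_impact(delay=0)
print(f"cost of a one-year delay: {loss:.1%}")
```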
Thanks for this – the flaw in using a point estimate of 20-year timelines (and of the frequency and value of promotions) in this way had occurred to me, and I tried to model it in Guesstimate, but I got values that made no sense and gave up. Awesome to see this detailed model and to get your numbers!
That said, I think the flat 5% annual chance is oversimplified in a way that could lead to wrong decisions at the margin for the trade-off I have in mind, which is "do AI-related community-building for a year vs. start a policy career now." If you think the risk is lower for the next decade or so before rising in the 2030s, which I think is the conventional wisdom, then the uniform 5% rate over-discounts work done between now and the 2030s. This makes AI community-building now, which basically produces AI technical research starting in a few years, look like a worse deal than it is, and biases towards starting the policy career.
My sheet is set up so you can change the discount rate for the next 10 years. If I switch it to 2% for the next 10 years, and 7% after that, the loss increases to 8% rather than 7%.
I should maybe also flag that you can easily end up with different discount rates for movement building and direct work, and I normally suggest treating them separately. The value of moving a year of movement-building labour forward depends on your model of future movement growth – if EA will hit a plateau at some point, the value comes from reaching the plateau sooner; whereas if EA will still be growing exponentially when an x-risk happens, then the value is (very approximately) the EA growth rate in the year our efforts become moot.