For a personal decision, I’d like to know if a person’s expected impact is roughly proportional to their hours worked (keeping output per hour fixed). Suppose the decision would make you work x% fewer hours on useful things but keep your performance in the remaining hours the same (you won’t be more rested). The x% just goes into work that’s not helpful for career capital or impact. In other words, you’re x% less productive. Does that mean you have roughly x% less expected impact?
Discussion
One reason your expected impact may decrease by >x% is that personal impact is (supposedly) heavy-tail distributed across people. To be in the heavy tail you’d need to be roughly at your highest productivity. So being x% less productive could reduce your expected impact by well over x%.
More than x% impact loss seems intuitive when you consider large x. Say you reduce your work time by 70%, and keep your productivity in the remaining 30% fixed. This seems to almost completely kill your chances of becoming a heavy-tail top performer in your field as you won’t be able to invest in yourself enough to stay competitive.[1][2]
On the other hand, your impact depends on factors other than the quantity of your work: talent and luck. In fact, talent and luck may be the main reason why impact seems heavy-tailed. This view suggests that, if you work 20% less, your chances of being in the heavy tail don’t change much, and your expected impact decreases only by ca. 20%.
Edit: The answer seems to depend on the career path. In this case it’s academic research or startup founder.
The law of logarithmic utility has also been applied to research funding[74]: a simple rule of thumb is that a dollar is worth 1/X times as much if you are X times richer, so doubling someone’s income is worth the same amount no matter where they start.[75] Past the point of increasing returns to scale, the next dollar donated at, say, the $500k funding mark might have 10x as much impact as the dollar donated after the $5m mark.
Maybe a useful first approximation is that it’s similar with hours worked: past the point of increasing returns to scale, the next hour worked at the 10h/week mark might have 10x as much impact as the hour worked after the 100h/week mark (an hour might be worth 1/X times as much if you work X times more). More realistically, if you work 40h/week vs. 80h/week, the hours leading up to 80h/week are only ~half as valuable (but I definitely think the 1st hour of the day is often 10x more valuable than the 10th).
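As a rough numerical sketch of that rule of thumb (assuming, purely for illustration, that the value of work is logarithmic in weekly hours past the point of increasing returns):

```python
import numpy as np

# Illustrative assumption: past the point of increasing returns, the value of
# work is logarithmic in weekly hours, so the marginal value of an hour at
# h hours/week is proportional to 1/h (the "1/X" rule of thumb above).
def marginal_value(hours_per_week):
    return 1.0 / hours_per_week

def extra_value(from_hours, to_hours):
    # Integrating 1/h gives a log curve: the value gained by going from
    # `from_hours` to `to_hours` per week is log(to_hours / from_hours).
    return np.log(to_hours / from_hours)

print(marginal_value(10) / marginal_value(100))  # 10.0: an hour at 10h/week is worth 10x an hour at 100h/week
print(marginal_value(40) / marginal_value(80))   # 2.0: hours approaching 80h/week are worth ~half as much
print(extra_value(10, 40), extra_value(40, 80))  # going 40h -> 80h adds far less value than going 10h -> 40h
```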
CS professor Cal Newport says that if you can do Deep Work™ for 4h/day, you’re hitting the mental speed limit, the amount of concentration your brain is actually able to give. Poincaré could only work 4 hours a day.
This suggests that it’d be better to work 5h/day for 7 days/week rather than 7h/day for 5 days, and, all else equal, to hire more researchers at lower pay rather than fewer at higher pay.
Ideally, you’d do admin / research management in the afternoons. But then I sometimes feel like long days are also useful in research, because it takes some time to ‘upload’ the current research project into your mind in the morning and you need to reboot it the next day. I remember someone very productive saying (and I can confirm from personal experience) that you can ‘reset’ the buildup of adenosine a little bit with 1.5h naps (1 full sleep cycle) after working the morning, and then continue working ‘another morning’ in the afternoon.
It’s important to keep in mind that you always want to prevent burnout by keeping work efficiency high (= total work time / time in office). The section ‘Work All the Time You Work’ in Eat That Frog says that you don’t want to be spending your intended-work-time not-working such that you have to spend your intended-leisure-time working.
But yes, this is all different in winner-takes-all markets.
and:
Thanks Hauke, that’s helpful. Yes, the above would be mainly because you run out of steam at 100h/week. I want to clarify that I assume this effect doesn’t exist. I’m not talking about working 20% less and then relaxing. The 20% of time lost would also go into work, but that work has no benefit for career capital or impact.
Yes—I think running out of steam does some of the work here, but assuming that you prioritize the most productive tasks first, my sense is this should still hold.
It seems to depend on your job. E.g. in academia there’s a practically endless stream of high-priority research to do, since each field is way too big for one person to solve. Doing more work generates more ideas, which generate more work.
Another framing on this: As an academic, if I magically worked more productive hours this month, I could just do the high-priority research I otherwise would’ve done next week/month/year, so I wouldn’t do lower-priority work.
Startup founder success is sometimes winner-take-all (Facebook valued at hundreds of billions of dollars, Myspace at ~$0).
If that’s true in your market, then the question reduces to how likely that additional 20% is to make you better than your competitor. My guess is that you will be competing against people who are ~equally talented and working at 100%, so the final 20% of your work effort is relatively likely to push you into being more productive than them (meaning that ~100% of the value is lost by you cutting your work hours 20%).
I assume this is less true in academia.
I’d guess that quite often you’d either win anyway or lose anyway, and that the 20% doesn’t make the difference. There are so many factors that matter for startup founder success (talent, hard-workingness, network, credentials, luck) that it would be surprising if the competition was often so close that a 20% reduction in working time changes things.
Another way to put this: it seems likely that Facebook would still be worth hundreds of billions of dollars, and Myspace ~$0, had the Facebook founders worked 20% less.
I don’t have a good object-level answer, but maybe thinking through this model can be helpful.
Big picture description: We think that a person’s impact is heavy-tailed. Suppose that the distribution of a person’s impact is determined by some concave function of hours worked. We want working more hours to increase the mean of the impact distribution, and probably also the variance, given that this distribution is heavy-tailed. But we plausibly want additional hours to affect the distribution less and less, if we’re prioritising perfectly (as Lukas suggests): that’s what concavity gives us. If talent and luck play important roles in determining impact, then this function will be (close to) flat, so that additional hours don’t change the distribution much. If talent is important, then the distributions for different people might be quite different, and signals about how talented a person is are informative about what their distribution looks like.
This defines a person’s expected impact in terms of hours worked. We can then see whether this function is linear or concave or convex etc., which will answer your question.
More concretely: suppose that a person’s impact is lognormally distributed with parameters μ and σ, that μ is an increasing, concave function of hours worked, h, and that σ is fixed. I chose this formulation because it’s simple but still enlightening, and has some important features: expected impact, e^(μ(h) + σ²/2), is increasing in hours worked and the variance is also increasing in hours worked. I’m leaving σ fixed for simplicity. Suppose also that μ(h) = log(h), which then implies that expected impact is h·e^(σ²/2), i.e. expected impact is linear in hours worked.
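A minimal simulation of this setup, just to sanity-check the linearity claim (σ = 1 and the sample size are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # sigma held fixed, as in the model above

def expected_impact(hours, n=1_000_000):
    # Impact ~ Lognormal(mu(h), sigma) with mu(h) = log(h).
    return rng.lognormal(mean=np.log(hours), sigma=sigma, size=n).mean()

for h in [10, 20, 40, 80]:
    analytic = h * np.exp(sigma**2 / 2)  # E[impact] = e^(mu(h) + sigma^2/2) = h * e^(sigma^2/2)
    print(h, round(expected_impact(h), 1), round(analytic, 1))
# The estimates track h * e^(sigma^2/2), i.e. expected impact is linear in hours
# worked, so a 20% cut in hours costs ~20% of expected impact under this model.
```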
Obviously, this probably doesn’t describe reality very well, but we can ask what changes if we change the underlying assumptions. For example, it seems pretty plausible that impact is heavier-tailed than lognormally distributed, which suggests, holding everything else equal, that expected impact is convex in hours worked, so you lose more than 20% impact by working 20% less.
Getting a good sense of what the function of hours worked (here μ(h)) should look like is super hard in the abstract, but seems more doable in concrete cases like the one described above. Here, the median impact is e^(μ(h)) = h, if μ(h) = log(h), so the median impact is linear in hours worked. This doesn’t seem super plausible to me. I’d guess that the median impact is concave in hours worked, which would require μ to be more concave than log, which suggests, holding everything else equal, that expected impact is concave in hours worked. I’m not sure how this changes if you consider other distributions though; it’s a peculiarity of the lognormal distribution that the mean is linear in the median, if σ is held fixed, so things could look quite different with other distributions (or if we tried to determine μ and σ from h jointly).
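To make that step concrete, here is one worked case under the same lognormal setup; the square-root median is purely an illustrative assumption:

```latex
% Illustrative assumption: median impact grows like the square root of hours worked.
\operatorname{median}(h) = e^{\mu(h)} = \sqrt{h}
\;\Longrightarrow\; \mu(h) = \tfrac{1}{2}\log h
\;\Longrightarrow\; \mathbb{E}[\text{impact}] = e^{\mu(h) + \sigma^2/2} = \sqrt{h}\, e^{\sigma^2/2}
% So if the median is concave in hours, the mean is concave in hours too (for a
% lognormal with fixed sigma), and working 20% less costs less than 20% of expected impact.
```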
Median impact being linear in hours worked seems unlikely globally. Like, if I halved my hours, I think I’d more than halve my median impact; if I doubled them, I don’t think I would double my median impact (setting burnout concerns aside). But it seems more plausible that median impact could be close to linear over the margins you’re talking about. So maybe this suggests that the model isn’t too bad for median impact, and that if impact is heavier-tailed than lognormal, then expected impact is indeed convex in hours worked.
This doesn’t directly answer your question very well but I think you could get a pretty good intuition for things by playing around with a few models like this.
After a little more thought, I think it might be helpful to think about/look into the relationship between the mean and median of heavy-tailed distributions and in particular, whether the mean is ever exponential in the median.
I think we probably have a better sense of the relationship between hours worked and the median than between hours worked and the mean because the median describes “typical” outcomes and means are super unintuitive and hard to reason about for very heavy tailed distributions. In particular, arguments like those given by Hauke seem more applicable to the median than the mean. This suggests that the median is roughly logarithmic in hours worked. It would then require the mean to be exponential in the median for the mean to be linear in hours worked, in which case, working 20% less would lose exactly 20% of the expected impact (more if the mean is more convex than exponential in the median, less if it’s less than exponential).
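Spelling out that last step (a, b, C and k are just constants introduced for the sketch):

```latex
% If the median is logarithmic in hours and the mean is exponential in the
% median, the mean comes out as a power of hours worked:
\operatorname{median}(h) = a \log h + b,
\qquad
\mathbb{E}[\text{impact}] = C\, e^{k \cdot \operatorname{median}(h)}
\;\Longrightarrow\;
\mathbb{E}[\text{impact}] = C\, e^{kb}\, h^{ka}
% Linear in h when k = 1/a (a 20% cut costs exactly 20%), convex when k > 1/a
% (more than 20% lost), concave when k < 1/a (less than 20% lost).
```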
In the simple example above, the mean is linear in the median, so the mean is logarithmic in hours worked if the median is. But the lognormal distribution might not be heavy-tailed enough, so I wouldn’t put too much weight on this.
Looking at the Pareto distribution, it seems to be the case that the mean is sometimes more than exponential in the median; it’s less convex for small values and more convex for high values. You’d have to do a bit of work to figure out the scale and whether it’s more than exponential over the relevant range, but it could turn out that expected impact is convex in hours worked in this model, which would suggest working 20% less would lose more than 20% of the value. I’m not sure how well the Pareto distribution describes the median though (it seems good for heavy tails but bad for the whole distribution of things), so it might be better to look at something like a lognormal body with a Pareto tail. But maybe that’s getting too complicated to be worth it. This seems like an interesting and important question though, so I might spend more time thinking about it!
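One rough way to eyeball this numerically, using the standard formulas for the Pareto mean and median (the scale x_m = 1 and the range of shape parameters are arbitrary choices for the sketch):

```python
import numpy as np

# Trace mean vs. median for a Pareto distribution with scale x_m = 1 as the
# shape parameter alpha varies (alpha > 1 so that the mean exists).
x_m = 1.0
alphas = np.linspace(1.05, 5.0, 80)
means = alphas * x_m / (alphas - 1)   # E[X] = alpha * x_m / (alpha - 1)
medians = x_m * 2 ** (1 / alphas)     # median = x_m * 2^(1 / alpha)

# If the mean were exactly exponential in the median, log(mean) would be a
# straight line in the median; a changing slope means more or less convex than that.
slopes = np.diff(np.log(means)) / np.diff(medians)
print(slopes[:3])    # heavy-tailed end (alpha near 1, large median): very steep slope
print(slopes[-3:])   # lighter-tailed end (large alpha, small median): much flatter slope
```

Under this parametrisation the slope of log(mean) against the median grows as the median gets larger, which matches the ‘less convex for small values, more convex for high values’ pattern described above.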
Thanks Aidan, I’ll consider this model when doing any more thinking on this.
This is a bit of a summary of what other people have said, and a bit of my own conceptualisation:
A) If the work is not competitive (not a winner-takes-all market), then:
For some jobs, marginal returns on quality-adjusted time invested will decrease, and you lose less than 20% of impact. This is true for jobs where some activities are clearly more valuable than others, so that you cut the less valuable ones.
For some jobs, marginal returns on quality-adjusted time invested will increase, and you lose more than 20% of impact. This could be e.g. because you have some maintenance activities that are fixed costs (like reading papers to stay up to date), or have increasing returns because you benefit from deep immersion.
B) If the work is competitive (a winner-takes-all market), either:
you are going to win anyway, in which case the same as above applies, or
you are going to lose anyway, in which case whether or not you spend 20% of your time on something else doesn’t matter, or
working less is causing you to lose the competition, in which case you lose 100% of value.
Of course, this is nearly always gradual because the market is not literally winner-takes-all, just winner-takes-a-lot-more-than-second. For example, if you’re working towards an academic faculty position, then maybe a position at a tier 1 uni is twice as impactful as one at a tier 2 uni, which is twice as impactful as one at a tier 3 uni, and so on (the tiers would have to be pretty small for the difference to be only 2x, though).
On average, the more “competitive” a job, and the smaller the distance between you and the competition, the more value you lose from working 20% less.
Nearly every job has some degree of “competitiveness”/“winner-takes-all market” going on, but for some jobs this degree is very small (e.g. employee at an EA org), and for others it’s large (academia before you get a tenure-track position, for-profit startup founder).
For academic research, I’d guess that from looking at A) alone, you’d get roughly linear marginal returns, and how much B) matters depends on your career stage. It matters a lot before you get a tenure-track position (because the market is “winner-takes-much-more-than-second” and competition is likely close, since so many people compete for these positions). After you get a tenure-track position, it depends on what you want to do. E.g., if you try to become the world leader in a popular field, then competition is high. If you want to research some niche EA topic well, then competition is low.
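As a toy illustration of the “competitiveness” point, here’s a sketch in which expected impact is P(beating a single competitor) times a winner-takes-all prize, your mean output is proportional to hours worked, and the noise term stands in for talent and luck. All the numbers are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of a competitive ("winner-takes-a-lot") job: expected impact is
# P(win) times a fixed prize. Your output and one rival's output are noisy,
# with mean output proportional to hours worked.
def win_probability(your_hours, rival_hours=100, noise=0.3, n=200_000):
    you = your_hours * (1 + noise * rng.standard_normal(n))
    rival = rival_hours * (1 + noise * rng.standard_normal(n))
    return (you > rival).mean()

for noise in [0.05, 0.3, 1.0]:        # small noise = talent/luck matter little, so competition is very close
    retained = win_probability(80, noise=noise) / win_probability(100, noise=noise)
    print(noise, round(retained, 2))   # share of expected impact kept after cutting hours by 20%
# With very close competition (small noise) the 20% cut costs nearly all of the
# expected impact; when talent and luck dominate (large noise) it costs less than 20%.
```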