I think the difference comes down to a lighter-touch, ongoing intervention versus a one-time, immersive experience. My coaching focuses on implementing changes to your mindsets, strategies, and habits in your daily life. I view this as a structured approach to making gradual changes that last. My understanding is that CFAR, on the other hand, aims to immerse participants in an unusual context, with specific tools and frameworks intended to rapidly open you up to new ways of thinking and acting. I don’t think they are mutually exclusive, since you’ll take away different things from each.
lynettebye
Want to be more productive?
People generally benefit most from working with me if they have a clear area (or areas) that they know could be improved in order to accomplish their goals more effectively, but haven’t yet successfully fixed. I generally think the returns are good if improving could save you a couple of hours a week that are then used more impactfully.
When discussing outcomes, I encourage my clients to estimate what the concrete impact has been, so I can get a sense of what each person means rather than vague descriptions such as “much more productive”. So most of the figures are estimates based on clients’ personal judgments.
Impact Report for Effective Altruism Coaching
1. The NPS is 39. However, I’m not sure exactly how to interpret it. Broadly speaking, scores above 0 are considered good, but benchmarks vary a lot by industry, and I don’t have benchmarks within the coaching industry for comparison. It would be really interesting to see how this compares with other EA orgs, e.g. EAG.
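For readers unfamiliar with the metric: an NPS is computed from 0–10 “how likely are you to recommend?” ratings as the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch, using made-up ratings rather than the actual survey responses:

```python
def nps(ratings):
    """Net Promoter Score from a list of 0-10 ratings."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Illustrative ratings only, not the real survey data:
sample = [10, 9, 9, 8, 8, 7, 10, 6, 9, 8]
print(nps(sample))  # 40
```

Note that scores of 7–8 (“passives”) count toward the denominator but neither add to nor subtract from the score, which is why NPS can look low even when most respondents are fairly positive.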
2. The number of hours added is an effect size – standardized effect sizes are usually used when the mean difference is hard to interpret. Since I only have the estimated change (and not the baseline value), I can’t calculate a Cohen’s d right now. For the fun of it, I made up baseline values for how many hours people work a month to see what it would be. If I assume each person worked a randomly chosen value between 100 and 200 hours per month before coaching (using RANDBETWEEN in Excel), d = .5. If I assume each person worked a randomly chosen value between 140 and 180 hours per month, d = .9.
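A minimal sketch of that kind of simulation (the baseline ranges and the per-month gain below are illustrative assumptions, not the report’s actual data). Baselines are drawn uniformly, mirroring Excel’s RANDBETWEEN, and the “after” scores are each baseline plus a fixed gain:

```python
import random

random.seed(0)  # reproducible made-up baselines

def cohens_d(mean_gain, low, high, n=100):
    """Cohen's d for a fixed average gain against simulated baselines."""
    baseline = [random.randint(low, high) for _ in range(n)]
    mean_b = sum(baseline) / n
    # Because 'after' is just baseline shifted by a constant, both groups
    # share the same SD, so d reduces to gain / SD(baseline).
    var_b = sum((x - mean_b) ** 2 for x in baseline) / (n - 1)
    return mean_gain / var_b ** 0.5

print(cohens_d(10, 100, 200))  # wider baseline spread -> smaller d
print(cohens_d(10, 140, 180))  # narrower baseline spread -> larger d
```

This makes the dependence explicit: the same absolute gain in hours yields a larger standardized effect when the assumed baseline spread is narrower, which is exactly why the two made-up ranges give d = .5 vs d = .9.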
3. Sadly, I don’t have data from the 7% of clients who didn’t complete four calls. A few dropped out for physical or mental health reasons. A few more said productivity coaching wasn’t what they needed at the time after all. I’m assuming the rest didn’t think it was worth continuing for one reason or another.
4. I’m working on it! I recently did a four-week writing challenge to kick start that process – you can view the posts here.
I average about 13 calls a week (which works out to about $80,000 a year in revenue), and about 40% of that goes to business expenses (which leaves a salary of under $50,000).
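The arithmetic checks out as stated; a quick back-of-the-envelope using only the figures given above:

```python
# All figures from the text; nothing here is additional data.
revenue = 80_000          # approx. annual revenue
expense_share = 0.40      # share going to business expenses

salary = revenue * (1 - expense_share)
print(salary)  # 48000.0, i.e. under $50,000
```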
This question is too broad for me to fully answer, but checking out the productivity tips on my fb page and reading Deep Work are probably decent places to start.
I wrote up some advice for people interested in becoming coaches a while ago, you can check it out here.
The biggest expenses are costs typically paid by the employer separately from salary (e.g. self-employment taxes and health insurance together are about $16,000). The next largest is outsourcing some work to help me scale coaching.
Will MacAskill on his ‘Eat That Elephant’ routine, learning from successful people, and the diminishing marginal returns of time spent working [blog cross-post]
Niel Bowerman on goal-setting, hacking motivation, and 80,000 Hours’ bout system
Interview with Owen Cotton-Barratt on Productivity
Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team
Do you know what the landscape is of people working on this now, and whether any of them are doing it in an EA-ish way?
So, that example looks like an example of time pressure, rather than just being aware of time.
My understanding is that the literature on time pressure is considerably more nuanced and interesting. At its simplest, increased pressure (e.g. tight deadlines or the expectation of evaluation) seems to improve performance on tasks where it’s clear exactly what needs to be done. On tasks that require creativity or novel problem solving, high pressure seems to reduce performance compared to low or moderate time pressure. E.g. Ted Talk and study. I haven’t actually looked at this since college, so I can send you the dozen or so other papers I read then if you want to look at them with fresh eyes.
From that, I would expect your concern to be accurate only some of the time, albeit for some important work.
On the other hand, I have several anecdotal data points suggesting that regular time tracking is valuable for improving prioritization, though I expect the returns are more varied than for short stints. For short time spans (about two weeks), I expect time tracking to be extremely valuable as a sanity check and for improving knowledge of where time is spent.
Additionally, I expect people to be pretty bad at estimating productive time without tracking it, hence the concern that prompted my original comment. The data means less if people are highly inaccurate when estimating time.
Last year, I looked at some studies to try to understand how correlated self-reported and objective measures are. There was wide variance, with generally low to moderate correlations. When I looked just at the couple of data points that are easily and/or frequently measured, the correlation was much higher, above r = 0.7. Things that aren’t frequently measured have average correlations closer to r = 0.3. Here’s that data if you want to reexamine it:
For numbers that were not frequently measured, the correlation between self-reported and directly measured values was moderate: for one meta-analysis on physical activity, mean r = 0.37 (range −0.71 to 0.96); for various measures of ability, mean r = 0.29 (range −0.60 to 0.80); for sedentary time, r < 0.31; for physical activity, r = 0.11.
A few more studies reported ranges of r coefficients, but not a mean r: for another measure of sedentary time, coefficients ranged from 0.02 to 0.36; for another study on physical activity, coefficients ranged from 0.46 to 0.53 (the p value did not meet the .05 threshold); for various other measures of sedentary time, coefficients ranged from 0.50 to 0.65. If these are included in the above graph, the mean r rises to about .33.
For numbers that are frequently measured, the correlation between self-reported and directly measured values was noticeably higher: for course grades, median r = .76 (range .70 to .84); for height and weight, median r = .94 (range .90 and above). This mildly sketchy unpublished review of hundreds of comparisons found an average 85% perfect match between self-reports and objective records. The examples they give (e.g. self-reports of hospitalizations or number of ambulatory physician visits compared with medical records) range from 89% to 100% exact match, and are mostly more frequently/easily measured.
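One caveat when pooling correlations like this: a raw mean of r values is slightly biased, and a common alternative is to average via Fisher’s z-transform. A minimal sketch (the input values below are illustrative, not the coefficients from the studies above, and the post’s “mean r” figures may well be simple averages):

```python
import math

def mean_r_fisher(rs):
    """Average correlation coefficients via Fisher's z-transform."""
    zs = [math.atanh(r) for r in rs]       # r -> z
    return math.tanh(sum(zs) / len(zs))    # mean z -> back to r

# Illustrative coefficients only:
rs = [0.37, 0.29, 0.31, 0.11]
print(round(mean_r_fisher(rs), 2))  # 0.27
```

For moderate correlations like these the Fisher-averaged value is close to the raw mean, so the distinction mostly matters when some coefficients are large (near ±1).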
Sadly, nope.
Hey, sorry for the late replies. Didn’t realize there weren’t notifications for comments.
Good question. You could get a lot of the benefit of working with me from another good productivity coach. I think there is some benefit to working with someone within the EA community, which has somewhat different goals and norms than the general population. I expect my coaching to be particularly helpful when you’re trying to make life decisions. Given my personal goals and the EA grant, my coaching is also more accessible to members of the EA community/people contributing toward impactful causes than that of other coaches.