As a second data point, my thought process was pretty similar to Claire’s—I didn’t really consider medication until reading Rob’s post because I didn’t think I was capital D depressed, and I’m really glad now that I changed my mind about trying it for mild depression. I personally haven’t had any negative side effects from Wellbutrin, although some of my friends have.
I’m guessing it’s mostly because I put less emphasis on them filling it out. When I started coaching, I got more information from new data than I do now, so I put more effort into getting as many people as possible to fill it out. Additionally, I got feedback that it seemed strange paying clients were spending so much time giving me feedback. So now, e.g., I haven’t been following up as much if people don’t fill it out, and the ask is probably easier to ignore.
Larger groups, coaching busier clients on average, and only asking at the end (instead of also after the first four calls) might also contribute.
Unfortunately, I don’t have an easy control group to do such a trial. I do my best to take on every client who I think is a great fit for me to help, so there isn’t a non-coached group who is otherwise comparable. Additionally, as a for-profit business, there’s an understandable limit to how much my clients are willing to humor my desire for unending data.
I just checked, and 43% of clients who started coaching in 2020 filled in the survey, compared to 81% of clients who started coaching in 2018.
A couple tips that seemed to help me:
If you notice yourself endlessly scrolling job boards, making lists of possible jobs, and never actually applying to them, then try out a rule that you have to apply as soon as you have three jobs you’re excited about. That way, more of your effort goes toward actually getting applications sent.
I found job hunting really aversive because it felt like I was trying to sell something. A few conversations helped me view it more as a mutual exploration where we’re working toward the same goal: finding out whether this is a good fit for both sides. I don’t think it will help in all cases, but it worked wonders for me.
I made a folder of shortcuts to the relevant job boards/company sites, and then checked the whole list at some interval that I don’t now remember. Having the list reduced the worry that I was forgetting someplace.
I agree that option value is important, but I think there’s a trap where preserving option value means never actually testing any one path. I lean toward rapidly and cheaply testing multiple paths while preserving option value.
Thanks
Thanks for the comment, Meerpirat. This is the latter, but it felt closely enough related to use the same terminology. I’d started writing the “Getting Excited about Efficiency” post and realized that the idea didn’t resonate with some people because they didn’t viscerally grok why getting more stuff done was valuable. So I wrote this post about why people should care about the ideas in Half-Assing It, or my later Noticing and Getting Excited posts.
I find it useful to stagger asking for advice, roughly from easy to hard to access. E.g. if I can casually chat with a housemate about a decision when I just need a sounding board, I’ll start there. Once I have more developed ideas, I’ll reach out to the harder to access people, e.g. experts on the topic or more senior people who I don’t want to bother with lots of questions.
Sadly, nope.
So that looks like an example of time pressure, rather than just being aware of time.
My understanding is that the literature on time pressure is considerably more nuanced and interesting. At its simplest, increased pressure (e.g. tight deadlines or the expectation of evaluation) seems to improve performance on tasks where it’s clear exactly what needs to be done. On tasks that require creativity or novel problem solving, high pressure seems to reduce performance compared to low-to-moderate time pressure. E.g. TED Talk and study. I haven’t actually looked at this since college, so I can send you the dozen or so other papers I read then if you want to look at it with fresh eyes.
From that, I would expect your concern to be accurate only some of the time, albeit for some important work.
On the other hand, I have several anecdotal data points that regular time tracking is valuable for improving prioritization, though I expect the return there is more varied than for short stints. I expect time tracking over short spans (about two weeks) to be extremely valuable as a sanity check and for learning where your time is actually spent.
Additionally, I expect people to be pretty bad at estimating productive time without tracking their time, hence the concern that prompted my original comment. The data means less if people are highly inaccurate when estimating time.
Last year, I looked at some studies to try to understand how correlated self-reported and objective measures are. There was wide variance, with generally low to moderate correlations. When I looked just at the couple of data points that are easily and/or frequently measured, the correlation was much higher, above r = 0.7. Things that aren’t frequently measured have average correlations closer to r = 0.3. Here’s that data if you want to reexamine it:
For numbers that were not frequently measured, the correlation between self-reported and directly measured values was moderate: for one meta-analysis on physical activity, mean r = 0.37 (range −0.71 to 0.96); for various measures of ability, mean r = 0.29 (range −0.60 to 0.80); for sedentary time, r < 0.31; for physical activity, r = 0.11.
A few more studies reported ranges of r coefficients, but not a mean r: for another measure of sedentary time, the coefficients ranged from 0.02 to 0.36; for another study on physical activity, the coefficients ranged from 0.46 to 0.53 (the p value did not meet the .05 threshold); for various other measures of sedentary time, the coefficients ranged from 0.50 to 0.65. If these are included in the above graph, the mean r goes up to closer to 0.33.
For numbers that are frequently measured, the correlation between self-reported and directly measured values was noticeably higher: for course grades, median r = .76 (range .70 to .84); for height and weight, median r = .94 (range .90 and above). This mildly sketchy unpublished review of hundreds of comparisons found an average 85% perfect match between self-reports and objective records. The examples they give (e.g. self-reports of hospitalizations or the number of ambulatory physician visits compared with medical records) range from 89% to 100% exact match, and are mostly more frequently/easily measured.
Do you know what the landscape is of people working on this now, and whether any of them are doing it in an EA-ish way?
The biggest expenses are costs typically paid by the employer separately from salary (e.g. self-employment taxes and health insurance together are about $16,000). The next largest is outsourcing some work to help me scale coaching.
This question is too broad for me to fully answer, but checking out the productivity tips on my Facebook page and reading Deep Work are probably decent places to start.
I wrote up some advice for people interested in becoming coaches a while ago; you can check it out here.
I average about 13 calls a week (which works out to about $80,000 a year), and about 40% of total revenue goes to business expenses (which leaves a salary of <$50,000).
1. The NPS (Net Promoter Score) is 39. However, I’m not sure exactly how to interpret it. Broadly speaking, scores above 0 are considered good, but it depends a lot on the industry, and I don’t have benchmarks within the coaching industry for comparison. It would be really interesting to see how this compares with other EA orgs, e.g. EAG.
2. The number of hours added is an effect size – standardized effect sizes are usually used when the mean difference is hard to interpret. Since I only have the estimated change (and not the baseline value), I can’t calculate a Cohen’s d right now. For the fun of it, I made up baseline values for how many hours people work a month to see what it would be (see the sketch after this list). If I assume each person worked a randomly chosen value between 100 and 200 hours per month before coaching (using RANDBETWEEN in Excel), d = .5. If I assume each person worked a randomly chosen value between 140 and 180 hours per month, d = .9.
3. Sadly, I don’t have data from the 7% of clients who didn’t complete four calls. A few dropped out because of physical or mental health reasons. A few more said productivity coaching wasn’t what they needed at the time after all. I’m assuming the rest didn’t think it was worth continuing for one reason or another.
4. I’m working on it! I recently did a four-week writing challenge to kick start that process – you can view the posts here.
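(For anyone who wants to replicate that back-of-the-envelope check from point 2, here’s a rough Python sketch of the same idea. The reported_change values below are made-up placeholders rather than the actual survey responses, and it assumes Cohen’s d is computed with a pooled standard deviation; I actually did this with RANDBETWEEN in a spreadsheet, not code.)

```python
import numpy as np

# Made-up reported changes in hours worked per month (placeholders, not the
# actual survey responses).
reported_change = np.array([5, 10, 15, 20, 8, 25, 12, 18, 30, 6], dtype=float)

rng = np.random.default_rng(42)

def simulated_cohens_d(change, baseline_low, baseline_high, rng):
    """Draw baselines uniformly in [baseline_low, baseline_high] (the
    RANDBETWEEN step), then compute Cohen's d for before vs. after using
    the pooled standard deviation."""
    before = rng.uniform(baseline_low, baseline_high, size=len(change))
    after = before + change
    pooled_sd = np.sqrt((before.var(ddof=1) + after.var(ddof=1)) / 2)
    return (after.mean() - before.mean()) / pooled_sd

# A wider assumed baseline spread gives a smaller d; a narrower spread gives a larger one.
print(simulated_cohens_d(reported_change, 100, 200, rng))
print(simulated_cohens_d(reported_change, 140, 180, rng))
```

With the real change estimates and a reasonable guess at the baseline spread, the same few lines give a quick sanity check on where d lands.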
People generally profit most from working with me if they have one or more clear areas that they know could be improved in order to more effectively accomplish their goals, but haven’t yet successfully fixed. I generally think the returns are good if improving could save you a couple of hours a week that are then used more impactfully.
When discussing outcomes, I encourage my clients to try to estimate what the concrete impact has been, so I can get a sense of what each person means rather than vague ideas such as “much more productive”. So most of the numbers are estimates based on their personal judgments.
I think the difference is along the lines of a lighter-touch, ongoing intervention vs. a one-time, immersive experience. My coaching is focused on implementing changes to your mindsets, strategies, and habits in your daily life. I view this as a structured approach to making gradual changes that last. My understanding is that CFAR, on the other hand, aims to immerse participants in an unusual context with specific tools and ways of thinking, intended to rapidly open you up to new ways of thinking and acting. I don’t think they are mutually exclusive, since you’ll take away different things from each.
Hey, sorry for the late replies. Didn’t realize there weren’t notifications for comments.
Good question. You could get a lot of the benefit of working with me from another good productivity coach. I think there is some benefit to working with someone within the EA community, which has somewhat different goals and norms than the general population. I expect my coaching may be particularly helpful when you’re trying to make life decisions. Given my personal goals and the EA grant, my coaching is also more accessible to members of the EA community/people contributing toward impactful causes than that of other coaches.
Do you (or did you) ever have doubts about whether you were “good enough” to pursue your career?
(Sorry for posting after the deadline—I haven’t been on screens recently due to a migraine and just saw it.)