Founder of Overcome, an EA-aligned mental health charity
John Salter
We’re sadly no longer accepting sign-ups for our founder’s programme. We’ve had an influx of demand and we’re now fully at capacity for the foreseeable future. Overcome’s funding situation is precarious, and I’ve sadly got to focus on that now. Results are nuts, but mental health funders are focussed on LMICs and meta funders don’t like mental health interventions, so it’s a challenging category to even survive in.
For now, I’ve got to focus on doing a good job for our existing clients. I’m sorry!
Hi there, Angel here, John’s co-founder who ran the study.
Intervention
The four-week, one-to-one coaching programme combined motivational interviewing and cognitive behavioural therapy techniques, delivered via weekly 60-minute video calls by trained volunteer coaches following a manualised guide.
Session 1 focused on identifying procrastination triggers and building personalised action plans. Sessions 2-3 reviewed progress and addressed unhelpful thoughts and emotions through cognitive restructuring, behavioural activation, and self-monitoring. Session 4 consolidated learning and built a long-term maintenance plan.
Coaches
Outside of this RCT, Overcome runs a three-month internship programme for aspiring mental health professionals to get training and hands-on experience delivering coaching. The first month of this internship is a full-time training programme on techniques from MI, CBT, and acceptance and commitment therapy, including workshops, readings, role-plays, and assessments. In the final two months of the internship, the interns coach adults across the globe to build healthy habits or manage low mood and worries.
For this RCT, five volunteer coaches from Overcome were trained and delivered the procrastination coaching programme. The selected coaches performed strongly in their initial training and had promising client outcomes.
To prepare the selected coaches, they received two hours of study-specific training, including a workshop on the intervention protocol and a role-play session. They also attended weekly one-hour group supervision with a counselling psychologist to discuss and troubleshoot client cases.
1) Question on “20 multiply imputed datasets”
a) What does this mean? What are you imputing and how are you imputing it?
Excluding 3 participants who later withdrew, we had 114 participants in the trial. However, only 94 participants had complete data (pre & post for control; pre, post & follow-up for intervention). This is around 82% of complete cases. One way of handling missing data is multiple imputation, which makes plausible guesses about missing values using the information you do have. I used multiple imputation (via programming in R) to create 20 complete datasets of all 114 participants, with guesses based on each participant’s demographics (age, gender, group) and outcome scores (pre, post, and follow-up scores; follow-up for the intervention group only).
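For the curious, the pooling step of multiple imputation is usually done with Rubin’s rules: fit the model on each imputed dataset, then combine the estimates. A minimal sketch, with made-up estimates and variances rather than study data:

```python
from statistics import mean

# Rubin's rules for pooling an estimate across m multiply imputed datasets.
# The inputs below are hypothetical numbers for illustration, not study data.
def pool(estimates, variances):
    m = len(estimates)
    q_bar = mean(estimates)  # pooled point estimate
    w = mean(variances)      # within-imputation variance
    # between-imputation variance: spread of estimates across datasets
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)
    total_var = w + (1 + 1 / m) * b
    return q_bar, total_var

# e.g. five imputed datasets, each giving a treatment-effect estimate
est, var = pool([1.5, 1.6, 1.4, 1.55, 1.45],
                [0.04, 0.05, 0.04, 0.045, 0.05])
```

The between-imputation term is what makes multiple imputation honest: uncertainty about the missing values widens the final confidence interval.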
b) What are the results if you don’t do any imputation?
See the attached graphs for the results of the complete-case analysis of 94 participants. The results are fairly similar:
Procrastination: The four-week intervention led to a statistically significant reduction in procrastination compared to the waitlist (p < .001, Cohen’s d = 1.56). Effect size is comparable to 10-week internet CBT guided by professional therapists (Rozental et al., 2017).
Life satisfaction: The four-week intervention led to a statistically significant increase in life satisfaction compared to the waitlist (p < .001, Cohen’s d = 0.95).
2) Question on 1-month follow-up effects
You raise a fair point. The 1-month follow-up was only collected for the intervention group. The waitlist control group began the intervention immediately after the waitlist period ended, and time constraints meant we couldn’t collect follow-up data from them.
Because of this, we didn’t compare intervention and control groups at follow-up. Instead, the claim that effects were maintained is based on within-group comparisons for the intervention group only (post-intervention vs. 1-month follow-up), using Hedges’ g rather than Cohen’s d to reflect this. We should have been clearer about that in the post.
You’re right that without a control group follow-up, we can’t rule out that scores would have continued to improve naturally over time. We’re running a larger RCT with a longer follow-up period in the future and plan to collect follow-up data from the control group too, which will let us make stronger claims about longer-term effects.
Quick answers
Control group was a four-week waitlist. 1-month follow-up was only collected for intervention group as the waitlist group started the intervention as soon as waitlist ended, plus time constraints for the project.
When analysing the results with 20 multiply imputed datasets:
1. Procrastination: Statistically significant pre-post reduction in intervention compared to waitlist (p < .001, Cohen’s d = 1.52). Within intervention group, further small reduction at one-month follow-up compared to post-intervention (n = 47, p < .001, Hedges’ g = 0.36).
2. Life satisfaction: Statistically significant pre-post increase in intervention compared to waitlist (p < .001, Cohen’s d = 0.93). Within intervention group, further small increase at one-month follow-up compared to post-intervention (n = 47, p < .001, Hedges’ g = 0.42).
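For readers unfamiliar with the effect sizes above: Cohen’s d is the mean difference between two groups divided by their pooled standard deviation. A minimal sketch with made-up scores, not study data:

```python
import math

# Cohen's d for two independent groups, using the pooled standard deviation.
# The scores below are invented for illustration only.
def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    # sample variances (n - 1 denominator)
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

By the usual rule of thumb, d ≈ 0.2 is small, 0.5 medium, and 0.8 large, which is why values like 1.52 stand out.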
How Fran goes out of her way to acknowledge the good, even after a genuinely awful experience, is a testament to her truth-seeking.
- She calls the CEO’s final apology genuine and says she appreciated it.
- She’s enthusiastic about the new HR hire.
- She praises her line manager, who otherwise might have faced a ton of undue scrutiny.

Perhaps largely due to this, the comment section has remained unusually civil and constructive for something so scandalous. As bad as it would be to punish CEA for allowing this post to happen, it’d be even worse if, despite it, nothing changes. I really hope this works out for both parties!
As both a manager and someone who trains coaches to help people through burnout, I think your last three suggestions are killer!
Figure out what you’re missing and ask for it. What do you need from your work to feel nourished? Autonomy? Positive feedback? A sense of completion? More social connection? Everyone is different here, and you’ll learn over time what you need. But you have to actually ask for it and make it happen, rather than assuming the mission will eventually reward you.
I’ve never met anyone who has acted on this too soon. If you’re slaying, you might be shocked how much your organisation will be willing to adjust to keep you. That said, every day you perform without adjustments strengthens the argument that you don’t need adjustments, and every day you don’t perform is evidence they’d be better off hiring someone else. It never gets easier. Schedule a meeting with someone ASAP!
Seek outside help. Find a coach or therapist you can talk to about your relationship to work, to impact, to this whole EA thing. For what it’s worth, I had a therapeutic relationship during my burnout period. It didn’t prevent me from burning out, but it still helped a bit, and if I’d been in less of a cage, maybe it could have prevented the worst.
If you’re looking for something specific, consider doing an “energy audit”. Essentially, track how you feel before and after different work activities. You’ll likely identify that ~20% of your job (sometimes it’s one particular colleague!) is responsible for ~50% of the energy being sucked from your life. In my case, one hour of admin drains me about as much as 6 of any other task. I hired someone to do that hour for me, and now I can work many more hours without tiring. If you’d like to do this with a coach, consider Overcome—the charity I run. It’s free.
Other great options exist, especially if you’ve got cash. The best predictor of how well it’s likely to go is how much you like and respect the coach or therapist after a session or two. It should feel as though it’s really aligned with your goals and preferences. If you’re dreading showing up, move on!
Speak openly about what you’re experiencing. Tell the people around you that you’re struggling. This is harder than it sounds in a community that valorizes sacrifice and grit, but it’s important.
We’ve coached 30+ EA founders. The advice that’s produced the most impact per second is talking openly to their peers about their struggles. Besides being a necessary step to getting support from your friends and peers (the people best placed to support you), it’s a massive public service. Others are in the same place you are now, hiding it because of shame.
If you don’t have peers that you think would be open, you could consider Rethink Wellbeing—they gather a bunch of EAs who want to improve their mental health into groups that support each other. Last I spoke to them, they put a ton of effort into picking who goes into each group so everyone gets their needs met.
For Managers / Founders
One of my biggest ever mistakes as a founder was underestimating how seriously you need to take agreeable staff members dropping hints that they are unhappy. Schedule a meeting with every new hire about what they need to be happy. The framing I’ve found that works best is: “Everyone has selfish reasons to choose a job over others. For me personally, [3-5 unvirtuous ways my job satisfies me]. What about you?”
There’s something about going first that unlocks more candid responses.
What a turnaround. Well done!!!
A note on the clickbait:
As affected people are unusually impulsive, I suspect that the people who’d most benefit would have been least likely to click with a boring title / post. There’ll be a more formal post soon about the RCT results, and another less cryptic sign-up post, all coming in the next week or so!
To respond briefly:
1. “First of all, the AI 2027 people disagree about the numbers”.
That’s irrelevant to your claim that you’d put “60% odds on the kind of growth depicted in AI 2027”.
“you’ve predicted a 95-trillion-fold increase in AI research capacity under a ‘conservative scenario.’” is false. I was just giving that as an example of the rapid exponential growth.
Here’s what you wrote:
“This might sound outrageous, but remember: the number of AI models we can run is going up 25x per year! Once we reach human level, if those trends continue (and they show no signs of stopping) it will be as if the number of human researchers is going up 25x per year. 25x yearly increases is a 95-trillion-fold increase in a decade.”
You then go on to outline reasons why it would actually be faster than that. If you aren’t predicting this 95-trillion-fold increase, then either:
1. The trends do indeed show signs of stopping
2. The number of AI models you can run isn’t really going up 25x YOY
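The arithmetic behind the quoted figure is just compounding:

```python
# 25x yearly growth, compounded over a decade
growth = 25 ** 10  # 95,367,431,640,625, i.e. roughly 95 trillion
```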
We can talk all day, but words are cheap. I’d much rather bet. Bets force you to get specific about what you actually believe. They make false predictions costly and true ones profitable. They signal what you actually believe, not what you think will get you the most status, clicks, views, shares, etc.
What’s the minimum percentage chance of greater than 10% GDP growth in 2029 that you think is plausible given the trends you’re writing about and how much are you willing to bet at those odds? I’d rather bet on an earlier year, but I’d accept 2029 if that’s all you’ve got in you.
To be explicit, I’m trying to work out what you actually believe and what is just sensationalised.
This is a response more befitting Jim Cramer’s Chihuahua than Jeremy Bentham’s Bulldog.
I’d put about 60% odds on the kind of growth depicted variously in AI 2027...
According to AI 2027, before the end of 2027, OpenAI has:
a “country of geniuses in a datacenter,” each:
75x more capable than the best human at AI research
“wildly superhuman” coding, hacking and politics
330K Superhuman AI Researcher copies thinking at 57x human speed
In their slowest projection, by April 2028, OpenAI has achieved generalised superintelligence.

But you’re only willing to bet US GDP grows just 10%, in just one year, across the next 15? The US did 7.4% in 1984. Within 10 years—five years before your proposed bet resolves—you’ve predicted a 95-trillion-fold increase in AI research capacity under a ‘conservative scenario.’ According to your eighth section, this won’t cause major bottlenecks elsewhere that would seriously stifle growth.
If this is really the best bet you’re willing to offer, one of three things is true:
You’re wildly risk averse
You don’t believe what you’re writing
You’re misleadingly leaving out the fine print (e.g. “I’d put about 60% odds on the kind of growth depicted variously in AI 2027, but not any time close to when they actually predict it will happen”)
Which is it?
Are you willing to stake cash on any of your near-term predictions, say before 1st Jan 2028? If so, which and at what odds?
I think this misses the forest for the trees. Yes, the pure donation message tested slightly better in conversion rate—but that’s only half the equation.
Donations = Reach × Conversion Rate
The controversial framing got massive media coverage that a standard “please donate” pitch never would. Even if conversion is marginally lower, if reach is 10-100x higher, the math still favors the provocative approach.

The key question is whether the increased donations are worth the non-monetary costs.
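To make the trade-off concrete, here’s a quick sketch with hypothetical numbers (the reach, conversion, and donation figures are illustrative, not data from the post):

```python
# Donations = Reach x Conversion Rate (x average gift).
# All numbers below are invented to illustrate the trade-off.
def donations(reach, conversion_rate, avg_donation):
    return reach * conversion_rate * avg_donation

plain = donations(10_000, 0.020, 50)          # standard pitch
provocative = donations(100_000, 0.015, 50)   # 10x reach, lower conversion
```

Under these assumptions the provocative framing raises several times more despite converting worse, which is why conversion rate alone can’t settle the question.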
We’ll be posting about it later this month!
This was a really informative read!
One thing I found confusing was how China would have a huge geographical advantage over Taiwan / USA. It strikes me that Taiwan has a 100 mile moat, and a ton of mountains / coastal cliffs. It’s essentially a scaled up version of a castle. It’s hard to imagine an easier geography to defend, tactically at least.
I presume it’s the location that’s the issue. While the US would have a harder time resupplying Taiwan, they presumably know this and can build up stockpiles ahead of time; the cost of the freight shipping involved would be a rounding error. My understanding is that China is surrounded by enemies and US military bases, so the prima facie difference between the US mainland’s and the Chinese mainland’s proximity to Taiwan is moot.
I haven’t studied this conflict much so I’m pretty sure I’m wrong. What am I missing?
The potential for AI to reshape mental health globally is really underexplored. It’s great to see EAs paying attention to it! Here are some hastily scribbled thoughts:
1. Be wary of confidently drawing conclusions about the subset of people who currently use LLM therapy from studies of people who were paid to try it out.
I’m doubtful those two client groups would be sufficiently comparable. Given that variance between clients explains ~5x more of the variance in treatment outcomes than the treatment itself does, it’s really important to keep the client group constant.
2. Consider the challenges of distribution and funding.
We already have many evidence-based, cost-effective therapies that aren’t widely implemented—not because they don’t work, but because it’s hard to secure funding and uptake at scale. One potential shortcut here is working directly with the model providers: they already have distribution, resourcing, and reputational incentives to avoid harm. Helping them mitigate specific risks may be a more tractable way to have immediate impact, while also building credibility for future projects—before vs afters are a really compelling way to demonstrate your impact that anyone could understand in seconds. The other proposed work would be far harder to communicate, and thus harder to get people excited about.
3. Take the unguided digital intervention literature with a grain of salt.
There have been decades of promising RCTs that unfortunately didn’t translate into widespread real-world impact. A few possible reasons:
- Low-cost trials make it cheaper to try again until you get a strong result (by chance)
- Many of these studies are sponsored by companies/authors with strong incentives to report positive findings.
- Participants are often paid to engage, something that doesn’t generalize well to real-world settings where adherence is a big challenge, especially for unguided interventions.
4. Doing too many things
Each of your four projects could easily be an entire organization’s focus. Consider testing each idea briefly, then doubling down on the one where you see the most traction. If you’d rather work separately on different things, you might want to consider branding yourselves as separate projects; otherwise it’s harder for any given project to be taken seriously.
I agree with Huw’s assessment re: books vs digital vs digital + guide. Here are a few less-discussed reasons why, hastily scribbled:
Recruitment and retention costs: The cost of delivering a very cost-effective therapy is often lower than the cost of convincing someone to seriously give it a go. People don’t really want to just read a book or just use an app; they overwhelmingly want to talk to a real person. It can therefore be cheaper to recruit and retain people when a person is involved.
Misinterpretation of non-significance: Psychologists often present their findings as though statistically non-significant differences should be ignored. Sometimes this means treating effect sizes of 0.3 and 0.6 as if they’re identical, leading to conclusions like “we found no significant differences between guided and unguided…”. Nobody has time to read the whole literature, so people skim and can come away thinking there’s no real difference, when in practice it may be more like a 30–100% difference in effectiveness.
Greater publication bias in unguided RCTs: It’s insanely cheap to do RCTs on unguided interventions because the cost of delivery is near zero and logistics are simple. Since it’s usually the researcher or funder who developed the treatment, they’re unlikely to publish the mediocre results. What gets published instead are lots and lots of positive findings, creating a skewed picture where unguided looks consistently effective.
Retention IRL: Despite most mental health apps showing >50% completion in RCTs, they retain only ~1–3% of real users that long. Guided self-help interventions retain an order of magnitude more. You thus need to recruit an order of magnitude more users to treat the same number of people. This not only undermines their cost-effectiveness, but also drives up recruitment costs for everyone else. Plus, a lot of people try something that doesn’t work for them, have their time and effort wasted, become more jaded, and are harder to convince to try again later.
All that being said, I think we focus far too much on differences between treatments and far too little on differences between clients. The latter explains roughly 4× more variance than the former, yet accounts for <1% of the research published.
Great write up! I especially like how candidly you wrote about the errors.
I’m happy to match up to $200 worth of donations from EA Forum Readers (personally, not on behalf of my organisation):
1. The theory of change is solid—the inefficiencies they’re addressing seem tractable to fix and likely to have a big, lasting impact.
2. Increased social support is a really efficient way to improve mental health
3. I like that it’s run by people in the local area who know the women personally and who were self-funding it for so long up to now.
4. I’d like to encourage more people from LMICs to engage on the forum, especially on matters of global health where they have the context so many of us lack.
5. EA has so rarely funded small local organisations, and so rarely funded charities that can generate their own income thereafter. I think this is a cheap, efficient test of these types of grants.
If you’d like to take advantage of my matching, please message me on the forum.
We don’t have a public page for it; people sign up via word of mouth and invites via incubators. We handpick and train mental health coaches for EA founders from the people who got the best results for regular EAs. The thesis was that people founding or scaling an EA charity face a ton of mental health challenges that can be resolved quickly, helping both them and their charity succeed.
I figured getting the results would be the hard part, or convincing founders you could, but no. Within ~2 years, over half of AIM incubated charities have had one or more founders successfully resolve one mental health problem with us. ~90% of people who do the first session complete the programme and ~50% decide to keep going after it ends to work on their next most pressing problem. This is waaaaaaaay better than our stats for regular EAs and regular people—Founders underinvest in themselves so hard, and are so focussed on making their organisation succeed, that tons of low hanging fruit remain.
The problem is getting someone to fund it long-term:
- Early stage founders are broke, irrationally self-sacrificial, and time-poor
- Mental health funders, for good reason, care mostly just for LMICs
- Meta funders, for good reason, don’t want to choose for others what service would work best for them / their incubatees.
So, while finding seed funding to demonstrate proof of concept was really easy, getting something durable isn’t. Donors think incubators should fund it; incubators think donors should. After all, it’s an ecosystem-wide service.
It only costs ~$80k a year to run, and I’ll figure out a way to do it; the question is whether I can do that in time to avoid losing talent I can’t replace. I have one coach with a ~90% success rate, who costs only ~$33k a year, considering quitting because she doesn’t believe the job will exist in two years. The founders she supports collectively have budgets in the tens of millions, and several are widely cited as among EA’s most successful ever charities. We can’t replace her: she’s dramatically better than anyone else we employ, miles better than me, and neither she nor I understand how she does it.
I don’t really want a grant. I want some mechanism whereby we can be paid by results or just compete in an open-market that isn’t so distorted by the expectation that donors will cover everything.