On the meta-analyses: that seems fair. I think my initial thought was just that the Cuijpers seemed very low relative to my priors, and the Tong seemed more in line with them. But maybe my priors are wrong! I take your point that the Tong may be too high because of how widely it casts the “unguided” net, though it still does find some meaningful difference. But on the main point I think we’re in agreement: guided > unguided, and the case for unguidedness, if there is one, will depend on its relative cost-effectiveness.
On apps v. books: I think there are so many potentially countervailing effects here it’s hard to trust my intuitive judgments. I see the consideration you cite, but on the other hand (I would guess) someone on a phone is more likely to defect away from self-help and to use their phone for all the other things that phones can be used for. It would be great to have more studies here. There are a few RCTs comparing print with courses delivered via the internet on a desktop/laptop, which seem to find little difference either way, but these studies are very sparse, and they’re at some remove from the question of comparing self-help delivery via a printed book with self-help delivery via WhatsApp.
I take the point about cost-effectiveness. Certainly the tendency in the for-profit space has been digitization. But here too there’s a countervailing consideration. Digitized self-help is a natural fit for the for-profit space, since a product that can be monetized in various ways (subscriptions, advertising) and produced at zero marginal cost offers an attractive business model. But books do not fit that model. So perhaps one role for NGOs in this space may be supporting interventions which are known to be effective but whose financials are less promising, and perhaps self-help books are a case of this.
I suspect the synthesis here is that unguided self-help is very effective when adhered to, but the main challenge is adherence itself. The reasons to believe this are that psychotherapy studies usually show a strong dosage effect, and that the Furukawa study I posted in the first comment found that the only value humans provided was for adherence, not effect size.
Unfortunately, this would then cause big problems, because there is likely a trial bias affecting adherence, potentially inflating estimates by 4× against real-world data. I’m surprised that this isn’t covered in the literature, and my surprise is probably good evidence that I have something wrong here. This is one of the reasons I’m keen to study our intervention’s real-world data in a comparative RCT.
You make a strong point about the for-profit space and relative incentives, which is partly why, when I had to make a decision between founding a for-profit unguided app and joining Kaya Guides, I chose the guided option. As you note, the way the incentives seem to work is that large for-profits can serve LMICs only when profit margins are competitive with expanding further in HICs. This is the case for unguided apps, because translation and adaptation are a cheap fixed cost. But as soon as you have marginal costs, like hiring humans (or buying books, or possibly, paying for AI compute), it stops making sense. This is why BetterHelp has only now begun to expand beyond the U.S. to other rich countries.
But I think you implicitly raise a question: if one intervention has zero marginal cost, then surely it’s going to be more cost-effective and therefore more attractive to funders? One model I’ve wondered about for an unguided for-profit is essentially licensing its core technology and brand to a non-profit at cost, which would then receive donations, do translations, and distribute in other markets.