Thanks very much Kris, I’m very pleased that you’re interested in this enough to write these comments.
And as you point out, I didn’t respond to your earlier point about discussing the evidence base for an entire approach, as opposed to (e.g.) an approach applied to a specific diagnosis.
The claim that the “evidence base for CBT” is stronger than the “evidence base for Rogerian therapy” came from psychologists and psychiatrists using a bit of shorthand. I think what they really mean is something like: “if we compare the evidence base for CBT as applied to X with the evidence base for Rogerian therapy as applied to X, across lots of values of X, the latter is more likely to have gaps, and, where the evidence isn’t missing entirely, more likely to be of poorer quality.”
It’s worth noting that while the current assessment mechanism is the question described in Appendix 1f, this is, as alluded to, not the only question that could be asked; the bot could also incorporate other standard assessment instruments (PHQ-9, GAD-7, or similar) and adapt accordingly.
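To make the "incorporate a standard instrument and adapt" idea concrete, here is a minimal sketch of scoring a PHQ-9 questionnaire and mapping the total to its conventional severity band. The band cut-offs (0–4 minimal, 5–9 mild, 10–14 moderate, 15–19 moderately severe, 20–27 severe) are the standard ones; how a bot would actually adapt to the band is not specified in this discussion, so only the scoring step is shown.

```python
def phq9_total(item_scores):
    """PHQ-9 has nine items, each scored 0-3, so the total ranges 0-27."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("expected nine item scores, each in the range 0-3")
    return sum(item_scores)

def phq9_severity(total):
    """Map a PHQ-9 total score to its conventional severity band."""
    if total <= 4:
        return "minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    if total <= 19:
        return "moderately severe"
    return "severe"

# Example: these nine item scores sum to 9, which falls in the "mild" band.
print(phq9_severity(phq9_total([1, 2, 1, 0, 2, 1, 1, 0, 1])))  # -> mild
```

A GAD-7 version would look the same with seven items and its own cut-offs; the point is only that a scripted instrument is easy to embed and score.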
Having said that, this on its own doesn’t feel revolutionary to me. What really does seem revolutionary is that, with the right scale, I might be able to ask: “This client said XYZ to me; if I had responded with ABC or with DEF, which of those would have produced the better outcome?”, and to test something that granular with a non-trivial sample size.
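The granular comparison described above amounts to a randomised A/B test on responses: each time a matched client utterance ("XYZ") comes up, randomise between the two candidate responses ("ABC" vs "DEF"), record an outcome score, and test whether the mean outcomes differ. The sketch below is purely illustrative (the data are simulated, and nothing here comes from the actual system); a permutation test keeps it dependency-free.

```python
import random
from statistics import mean

def permutation_test(outcomes_a, outcomes_b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference in mean outcomes.

    Returns the fraction of random relabellings whose absolute mean
    difference is at least as large as the observed one (the p-value).
    """
    rng = random.Random(seed)
    observed = abs(mean(outcomes_a) - mean(outcomes_b))
    pooled = list(outcomes_a) + list(outcomes_b)
    n_a = len(outcomes_a)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

# Hypothetical outcome ratings (higher = better) for the two candidate
# responses, simulated with a modest true difference between them.
rng = random.Random(42)
abc = [rng.gauss(3.0, 1.0) for _ in range(200)]
dfr = [rng.gauss(3.4, 1.0) for _ in range(200)]
print(f"p = {permutation_test(abc, dfr):.4f}")
```

The interesting part is the sample size: a within-utterance comparison like this needs hundreds of near-identical situations per arm, which is exactly what "the right scale" would provide and what an individual therapist never sees.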