Thank you for writing this. I think the concept has potential. I don’t think the content you’ve written here is sufficient to make the case (not that there should be an expectation that something is fully thought through before it appears on the Forum).
Context in case it’s useful: I have volunteered with Samaritans for c. 20 years, and I founded the Talk It Over chatbot (forum post from a couple of years ago here)
I’m less excited by your work on how to find/identify suicidal people.
In our work at TIO, we’ve found that it’s very easy to reach people with Google ads
This is consistent with indications/evidence that people are much more willing to be open and honest in their Google searches than they are in other contexts. So they might have high willingness to google “I hate myself”, “I want to die”, etc (a sketch below illustrates this)
Not to be too negative: the research you allude to may also be helpful, but the strength of this proposal doesn’t live or die on your ability to reach suicidal people
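To make the targeting idea concrete, here is a minimal sketch in Python of the kind of search-phrase matching such a campaign is built around. The phrases, negative terms, and matching logic are entirely hypothetical illustrations, not TIO’s actual configuration and not the Google Ads API:

```python
# Hypothetical sketch of search-phrase targeting for a support-service ad
# campaign. Phrases, negatives, and logic are illustrative assumptions only.

TARGET_PHRASES = [
    "i hate myself",
    "i want to die",
    "why do i feel so alone",
]

# Exclude queries that signal research or news intent rather than distress.
NEGATIVE_TERMS = ["statistics", "lyrics", "definition", "news"]


def query_matches(query: str) -> bool:
    """Return True if a search query should trigger the ad."""
    q = query.lower()
    if any(term in q for term in NEGATIVE_TERMS):
        return False
    return any(phrase in q for phrase in TARGET_PHRASES)


if __name__ == "__main__":
    print(query_matches("I want to die"))             # True
    print(query_matches("i want to die statistics"))  # False
```

The point of the negative terms is that honesty in search only helps if you can separate distressed queries from research-style ones; in practice an ad platform does this matching for you.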
I’m more interested in interventions which effectively reduce suicide; I’m not convinced we have any?
I wasn’t familiar with Pil et al., thank you for sharing that study; it’s interesting that it found a suicide helpline to be effective. I haven’t read the study; was it a good-quality study? Was it based on experimental evidence of the effectiveness of the intervention, or did it take effectiveness for granted? If experimental, how good was the study design?
I haven’t done a careful review of suicide interventions, but my impression, based on casual conversations with academic suicide experts speaking at relevant conferences, is that:
… suicide helplines don’t seem to have a good evidence base supporting them
… systemic policy interventions that restrict access to lethal means (putting a limit on the number of packets of paracetamol you can buy, catalytic converters, changes to fertilisers) do have a much better evidence base
… I don’t know about CBT, but I’m sympathetic to the possibility that it might be effective (at least some of the time)
TIO’s impact model doesn’t mention suicide prevention, precisely because I didn’t have confidence that suicide helplines achieve this, and hence didn’t have confidence that TIO achieves it (although I hope it does)
A full impact analysis needs more on how good it is to prevent a suicide
Some might argue that taking actions to prevent suicide fails to consider the subject’s wishes: from their own perspective, their life is net negative and they would be better off dead. I’m not saying that this perspective is correct, rather that there is a burden of proof on us to show that it doesn’t necessarily hold
Even if it doesn’t hold, a weaker version could be argued for: maybe those who are prevented from dying by suicide have a higher propensity to suffer from depression, and hence the in-expectation DALYs averted are lower than for someone else of the same age (a toy calculation below illustrates the point)
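To make the arithmetic concrete, here is a toy calculation in Python. Every number is an invented assumption for illustration (the remaining life expectancy and both health weights), not an estimate from any source; a real analysis would also need to handle discounting, age weighting, and re-attempt rates:

```python
# Toy DALY calculation with invented numbers, purely to illustrate the
# argument above; none of these figures are real estimates.

remaining_years = 50.0     # assumed life expectancy remaining at the averted death

# Assumed average health weight over those years (1.0 = full health).
baseline_weight = 0.9      # a typical person of the same age
survivor_weight = 0.7      # lower if survivors have a higher propensity
                           # to suffer from depression

dalys_averted_typical = remaining_years * baseline_weight    # 45.0
dalys_averted_survivor = remaining_years * survivor_weight   # 35.0

print(dalys_averted_typical, dalys_averted_survivor)
```

On these made-up numbers, preventing the suicide still averts a large number of DALYs; the weaker objection lowers the estimate rather than flipping its sign.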
Liability risks may make suicide prevention apps neglected
It seems, contrary to my impression when reading your title, that you didn’t envisage creating an app; I’ll comment briefly on this nonetheless
As far as I’m aware, several mental health apps and non-tech-based service providers are very nervous about suicide, and are keen to direct users to a suicide helpline as soon as suicide is mentioned (see the sketch below for the typical pattern)
There is some risk here: if a user’s messages were stored by an app or service, and that user later died, then the provider could be accused of being liable for their death
It seems that the method for managing this risk is surprisingly simple: at Samaritans we simply explain to service users that we can’t trace their call, and that if they need help they should seek it themselves
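For illustration, here is a minimal Python sketch of the kind of keyword-triggered referral logic described above. The keywords and message wording are my own hypothetical choices, not Samaritans’ or any provider’s actual rules (though 116 123 is Samaritans’ real UK number):

```python
# Illustrative sketch of the keyword-triggered helpline referral pattern
# many mental-health apps use. Keywords and wording are hypothetical.

RISK_KEYWORDS = ["suicide", "kill myself", "end my life", "want to die"]

HELPLINE_MESSAGE = (
    "It sounds like you might be having thoughts of suicide. "
    "We're not able to help with this directly; please contact a "
    "suicide helpline such as Samaritans (116 123 in the UK)."
)


def respond(user_message: str) -> str | None:
    """Return a helpline referral if the message mentions suicide."""
    text = user_message.lower()
    if any(keyword in text for keyword in RISK_KEYWORDS):
        return HELPLINE_MESSAGE
    return None  # otherwise, continue the normal conversation flow


if __name__ == "__main__":
    print(respond("I want to die"))   # referral message
    print(respond("I had a bad day")) # None
```

The simplicity is the point: providers escalate on a bare mention rather than attempting risk assessment, precisely because of the liability concern above.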
Thanks for your comment. I feel very grateful to have received such a thorough reply (especially from someone with so much experience in the area).
To be honest, I haven’t looked carefully at most of the papers I mentioned here concerning intervention effectiveness, including Pil et al. As I mentioned in the post, I still plan to do a more extensive literature review. It’s interesting to hear your perception of how academic experts feel about intervention effectiveness; I tried a bit to find a recent, thorough review article on this, but didn’t have much luck.
Regarding the question of whether suicide prevention is net-positive in the first place: as I mentioned in another reply below, I felt pretty convinced of this after casually reading this blog post (whose main argument is that most suicides are the result of impulsive decisions or of treatable conditions / temporary circumstances), but I think it would definitely be worthwhile to go through the argument more critically.
I hadn’t considered liability risks. Though I guess what I was describing is more like a bot than an app, it’s possible they would still be relevant, so thanks for drawing my attention to that.