Automated online suicide prevention: potentially high impact?

Content warning: extensive discussion of suicide.

Introduction

Worldwide, about 1 million people commit suicide every year, more than die of malaria. As far as I can tell from a cursory search, there haven’t been many efforts by effective altruists directed at this issue[1], probably because, at least in first-world countries, it already receives a lot of attention, making it less likely that there are unknown promising interventions. However, I recently realized that very inexpensive methods of automated online suicide prevention may be available.

The purpose of this post is to present what such a system might look like and to solicit criticism, as there’s a good chance I’m missing some legal / technical / ethical reason why it’s infeasible or a bad idea. If the idea is promising, I hope to get it out there among EAs so that someone with more technical chops and organizational experience can make it happen, although if no one is available (and the idea is sound) I plan to keep working on it myself.

How an automated suicide prevention system might work

An automated suicide prevention system would need to do three things: (1) find online content expressing suicidal intent, (2) acquire enough identifying information to make an intervention possible, and (3) carry out an intervention. I’ll discuss each of these steps in turn.
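As a very rough sketch of how these three stages might fit together in code (everything below is hypothetical: the names, data shapes, and list of platforms are mine, not any existing system’s):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlaggedPost:
    """A post the classifier considers concerning, plus whatever user info could be recovered."""
    platform: str
    post_text: str
    username: str
    risk_score: float                  # classifier output in [0, 1]
    city: Optional[str] = None         # e.g. inferred from the user's profile
    contacts: list = field(default_factory=list)

def scan_recent_posts(platform):       # step 1: fetch recent posts and classify them
    ...

def enrich_with_user_info(flagged):    # step 2: look up location, contacts, phone number, ...
    ...

def run_intervention(flagged):         # step 3: message the user, their contacts, or responders
    ...

def run_once(platforms=("twitter", "reddit")):
    for platform in platforms:
        for flagged in scan_recent_posts(platform) or []:
            enrich_with_user_info(flagged)
            run_intervention(flagged)
```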

To find posts expressing suicidal thoughts, one fairly simple method would be to access recent posts on a fixed list of social media sites and use an ML model to try to identify posts indicating risk for suicide. Some sites, like Twitter, have an API available for bots, while others do not. To use most existing ML methods, we would need a training set of posts labeled for whether or not they express suicidal intent. This is definitely obtainable: part of the paper “Natural Language Processing of Social Media as Screening for Suicide Risk” describes the creation of a data set for a similar purpose, built by finding people who had self-reported the dates of past suicide attempts in the mental health database OurDataHelps.org and scraping their social media posts prior to the attempt.[2]
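For concreteness, here is a minimal sketch of what the classification step could look like once a labeled data set exists. I’m using a simple bag-of-words model purely for illustration; the example posts, labels, and threshold are made up, and a real system would need a far more careful model and evaluation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data: 1 = expresses suicidal intent / elevated risk, 0 = does not.
posts  = ["example post text ...", "another example post ..."]
labels = [1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(class_weight="balanced", max_iter=1000),
)
model.fit(posts, labels)

# Score freshly scraped posts; anything above a tuned threshold is passed to the next stage.
new_posts = ["a freshly scraped post"]
risk_scores = model.predict_proba(new_posts)[:, 1]
flagged = [p for p, s in zip(new_posts, risk_scores) if s > 0.9]
```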

Next, what information can be acquired about a user varies a lot depending on the site. Some sites encourage users to use their real names. Others, like Twitter, have location-sharing features that some users opt into; for instance, Twitter allows users to include GPS data with each tweet (moreover, this paper concludes that automated processing of Twitter users’ profiles yields a city of residence in 17% of cases, and the inferred cities agree well with GPS data when the latter are present). An automated search through a user’s past posts may turn up additional information. It’s also possible that even if some sites do not make certain pieces of user information publicly available, they would be willing to supply them to a service like this.
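As a toy sketch of this information-gathering step, assuming the platform exposes a profile with free-text name and location fields (the dictionary layout below is invented, not any real API’s schema):

```python
KNOWN_CITIES = {"london", "chicago", "mumbai"}   # in practice, a large gazetteer of place names

def extract_user_info(profile: dict) -> dict:
    """Pull out whatever identifying information the (hypothetical) profile happens to contain."""
    info = {"name": profile.get("name")}
    location_text = (profile.get("location") or "").lower()
    info["city"] = next((c for c in KNOWN_CITIES if c in location_text), None)
    # GPS coordinates attached to individual posts (where the user has opted in)
    # would override this coarser, profile-based guess.
    if profile.get("latest_post_coords"):
        info["coords"] = profile["latest_post_coords"]
    return info

extract_user_info({"name": "A. User", "location": "Chicago, IL"})
# -> {'name': 'A. User', 'city': 'chicago'}
```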

Finally, we come to the intervention. There are many possible interventions; they vary in the user information they require, their efficacy, and their potential for negative side effects (the side effects are discussed below in the “Possible Pitfalls” section). A rough sketch of how the choice among them might be automated follows the list:

  • Sending a message to the individual who made the post: a message including a hotline number could be sent to the person. Some sites already do something like this: for instance, Quora includes a hotline number before answers to questions deemed related to suicide.

  • Sending messages to contacts of the person: Access to people’s contacts is available on many platforms. A message could be sent to several contacts (perhaps prioritizing those with the same last name to reach family members) alerting them to the concerning message and urging them to reach out to the person or contact authorities if they deem it serious.

  • (somewhat less feasible) Alerting emergency responders: This requires knowledge of the person’s address or at least city of residence, and some automated means of contacting authorities.

  • (somewhat less feasible) Alerting mental health providers to contact the person: This intervention only makes sense if we’re searching for content that indicates potential for future suicidality rather than imminent risk. It would presumably require coordination with large telehealth providers.

  • (less feasible) Asking a hotline to call the person: This would require coordination with hotlines and access to the user’s phone number. Many people don’t answer calls from unknown numbers, so this would probably need to be paired with a message informing them that a hotline will call them. This might seem intrusive or creepy to people, and as far as I know there is no precedent for hotlines doing this.

  • (less feasible) Interfacing with an existing system: Facebook has an existing system for sending emergency responders to help suicidal individuals, and perhaps they would be willing to extend it to cases where someone posts on another platform but there’s enough information (such as their name, profile picture, or phone number) to link them to a Facebook account. However, it seems unlikely that they would want to handle the new influx of reports, since in their system a human vets each one.
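As promised above, here is a rough sketch of how the choice among these interventions might be automated, reusing the hypothetical FlaggedPost object from the earlier sketch. The thresholds and the ordering are placeholders, and whether a given branch is possible at all depends on platform cooperation:

```python
def choose_intervention(flagged: FlaggedPost) -> str:
    """Pick an intervention based on how concerning the post is and what we know about the user."""
    if flagged.risk_score > 0.99 and flagged.city is not None:
        return "alert_emergency_responders"        # needs a location and a channel to authorities
    if flagged.contacts:
        return "message_contacts"                  # e.g. prioritizing contacts with the same last name
    return "message_user_with_hotline_number"      # fallback: possible whenever we can reply at all
```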

Impact

To assess the impact of such a system, I think there are two main considerations: how well someone’s likelihood of attempting suicide can be determined from their online content by an automated system, and how effective the possible interventions are. (Note: there’s a ton of existing literature on these questions, and what’s below is only the start of a review; I’m mostly just trying to dump some relevant data here rather than make a tight argument. I plan to look into the literature more and make a more thorough post on this if doing so appears useful.)

One paper related to the first question, bearing more on gauging long-term risk than on predicting an imminent crisis, is the above-mentioned “Natural Language Processing of Social Media as Screening for Suicide Risk.” The authors identified 547 individuals who had attempted suicide in the past, some from the mental-health database OurDataHelps.org (now defunct) and others who made reference to past attempts on Twitter, matched them with demographically similar controls, and trained a classifier to distinguish the two groups based on their posts in the 6 months leading up to the attempt[3]. They conclude that one version of their model, if given the same amount of data for other users and asked to flag those likely to attempt suicide, would successfully flag 24% of the users who would eventually attempt suicide while producing only 67% as many false positives as true positives (the model can be tweaked to flag more at-risk users at the expense of a higher false-alarm rate).
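To unpack those figures: the quoted 24% is the model’s recall, and “67% as many false positives as true positives” pins down its precision within the study’s evaluation setup (which used matched controls, so this shouldn’t be read as a real-world base rate). A quick check:

```python
# Implied precision, taking the post's description of the model at face value.
recall = 0.24                               # fraction of eventual attempters who get flagged
true_positives, false_positives = 100, 67   # false positives are 67% as numerous as true positives
precision = true_positives / (true_positives + false_positives)
print(round(precision, 2))                  # -> 0.6: roughly 3 in 5 flagged users were genuine
```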

There are numerous other academic papers that are relevant to the above questions, a few of which I’ll summarize here briefly:

  • Belfort et al. surveyed 1350 adolescents admitted to a hospital for suicidality, finding that 36 had communicated their intent to commit suicide electronically (though not necessarily publicly).

  • Braithwaite et al. compared suicide risk surveys of Mechanical Turk participants with a risk rating derived from ML analysis of their tweets: their algorithm correctly classified 53% of the suicidal individuals and 97% of the non-suicidal ones.

  • O’Dea et al. used the Twitter API to search for tweets containing phrases potentially linked to suicide, hand-classified a sample of them by seriousness, and then trained an ML model on that data: they concluded that the ML model was able to “replicate the accuracy of the human coders,” and they estimated that (as of 2015) there were about 32 tweets per day expressing a ‘strongly concerning’ level of suicidality (which is much less than I was expecting; I might be misunderstanding this).

Now, I’ll briefly summarize some papers I found on how effective various interventions are at preventing suicide:

  • Pil et al. estimated that a suicide helpline in Belgium prevented about 36% of callers from attempting suicide and, on average, gained male callers 0.063 QALYs and female callers 0.019 QALYs.

  • Brown et al. concluded that CBT can reduce suicide risk by 50% when compared to standard care. If we assume that standard care is not actively harmful, this implies that at least some forms of intervention by medical practitioners are highly effective at preventing suicide.

  • Neimeyer and Pfeiffer reviewed the literature in 1994 and concluded that most existing studies detect no impact of crisis centers on suicide rates except among white females aged 24 or younger, for whom they were highly effective.

  • Player et al. presented a qualitative analysis of interviews with 35 men deemed at risk for suicide, as well as with 47 families of men who had recently committed suicide. In general, despite some complaints, those interviewed affirmed the value of the medical system in helping them stay alive, and also mentioned the usefulness of having trusted contacts to talk to.

Possible Pitfalls

There are several potential problems with this proposal, and I’ll list the ones I’ve thought of here:

Potential for public criticism

This system would be likely to receive criticism for violating privacy. The most famous past analogue of the sort of system discussed here was Samaritans Radar, a service from the Samaritans organization that users could add to Twitter; it scanned the text of all followed individuals’ tweets and alerted the user if any seemed to express suicidal intent. It was shut down after only nine days due to the volume of criticism directed at it (stemming from privacy concerns, worry about possible exploitation of the system by bad actors to identify vulnerable people, and the possibility that it violated Britain’s Data Protection Act).

Additionally, due to the Copenhagen interpretation of ethics, any automated online suicide prevention system would likely be blamed for the deaths it fails to prevent, regardless of its positive effects.

Causing harm to suicidal people / their contacts

If the system involves sending messages to contacts, it has the potential to place a lot of stress on those contacts or, in the worst case, make them feel partially responsible for the user’s death. In some (or perhaps many) cases, having one’s friends / family become aware of one’s mental health issues / suicidality could itself be psychologically damaging. And if such an automated suicide prevention system became widely known to exist, people might avoid talking about their plans to commit suicide online and so fail to receive support they would otherwise have gotten.

Overburdening existing systems

Depending on how selective a system is at flagging concerning messages, the possible intervention mentioned above of asking hotlines to call people might be infeasible because hotlines wouldn’t be able to handle the volume of requests the system would generate (it seems hotlines are already short on personnel). If (as seems likely) hotlines would be unable to expand to meet the new demand, and if making a post deemed concerning by the automated system is less predictive of suicidal intent than calling a hotline, such a system would just dilute the positive impact of hotlines.
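To make the dilution worry concrete, here is a toy calculation with entirely made-up numbers: if hotline capacity is fixed, calls triggered by low-precision automated flags displace spontaneous callers, who are (by assumption here) more likely to be genuinely at risk.

```python
# Toy model of the dilution worry; every number below is invented for illustration only.
capacity = 1_000          # calls the hotline can handle per day
p_spontaneous = 0.30      # fraction of spontaneous callers genuinely at risk
p_flagged = 0.10          # fraction of system-triggered calls genuinely at risk

def at_risk_people_reached(flagged_calls: int) -> float:
    spontaneous_calls = capacity - flagged_calls   # flagged calls displace spontaneous ones
    return spontaneous_calls * p_spontaneous + flagged_calls * p_flagged

print(at_risk_people_reached(0))     # 300.0  (baseline)
print(at_risk_people_reached(200))   # 260.0  (net fewer at-risk people reached)
```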

Additionally, if an automated suicide prevention system became well-known, troublemakers might intentionally provoke a response from it via insincere posts, thereby diverting resources.

Conclusion

Again, I welcome criticism. If it turns out the idea is not critically flawed, I’d also like to know if anyone else (especially anyone with ML experience) has an interest in working on this.

  1. ^

    The Center for Pesticide Suicide Prevention is one notable and very successful exception.

  2. ^

    It’s unclear whether it would be most effective to target interventions at users expressing short-term suicidal intent or at users whose post history indicates they are at risk of attempting suicide in the future. The authors of “Natural Language Processing of Social Media as Screening for Suicide Risk” argued in favor of the latter, and designed their model for identifying concerning posts accordingly. I’ll discuss both possibilities here.

  3. ^

    The researchers also tested excluding the data within the three months preceding the attempts and found that this did not substantially affect the capability of their classifier, suggesting that it wasn’t relying on posts indicating imminent suicide attempts. I guess this also suggests either that such posts are uncommon, that they are usually preceded by enough other indications that they aren’t uniquely informative to a classifier, or that they were present but the model was not sophisticated enough to take them into account.