Data Science in Mental Health

I came across two articles recently that I thought spoke to each other in an interesting way. The first was a New York Times piece about the failings of data science firms that try to identify school shootings before they happen by monitoring social media posts. The second was a Vox article about how a crisis counseling hotline successfully used data science to flag callers at higher risk of suicide or self-harm.

I have some specific thoughts on both, but I think the comparison between the two articles shows why a data-driven approach is helpful in one case and not the other.

A Failure: Predicting School Shootings

“Could Monitoring Students on Social Media Stop the Next School Shooting?” by Aaron Leibowitz.

This article reviews the services that several companies provide to school districts by monitoring students’ public posts on social media. These companies usually scrape data from all posts in a geographic region around the school: “Rather than asking schools for a list of students and social media handles, the companies typically employ a method called ‘geofencing’ to sweep up posts within a given geographic area and use keywords to narrow the pool.”
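The mechanics are straightforward to sketch. Below is a minimal, hypothetical illustration of a geofence-plus-keyword filter; the example posts, coordinates, radius, and keyword list are assumptions of mine, not anything these firms have disclosed.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical posts: (text, latitude, longitude). All values are made up.
posts = [
    ("heading to the range to practice shooting this weekend", 39.99, -82.99),
    ("big sale at the liquor store by the school", 40.00, -83.00),
    ("can't wait for summer break", 44.50, -89.50),
]

SCHOOL_LAT, SCHOOL_LON = 40.00, -83.00   # assumed school location
RADIUS_KM = 8.0                          # assumed geofence radius
KEYWORDS = {"shooting", "gun", "bomb"}   # assumed keyword list

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Keep only posts inside the geofence that contain a keyword.
flagged = [
    text
    for text, lat, lon in posts
    if haversine_km(lat, lon, SCHOOL_LAT, SCHOOL_LON) <= RADIUS_KM
    and any(word in text.lower() for word in KEYWORDS)
]
print(flagged)  # only the (harmless) target-range post survives: near the school and contains a keyword
```

Notice that nothing in a filter like this estimates how likely a flagged poster is to actually be violent at school; it only narrows the stream of posts, which is the limitation I come back to below.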

However, as you can imagine, such a wide net ends up flagging posts from many people unaffiliated with the school. One school in Ohio was warned about someone posting “There’s three seasons: summer, construction season and school shooting season.” Further investigation discovered that the poster was from Wisconsin, not Ohio. Another school that hired one of these firms was close to a liquor store, and the firm couldn’t separate tweets about the store from tweets about the school. In general, the problem seems to be that in the available data, any signals, if they exist, are swamped by the noise.

This monitoring also raised philosophical questions about which out-of-school actions should have in-school consequences. In one case, a school that hired an outside firm to review students’ social media posts expelled 14 students. Some of the allegations are described below:

One student had been accused of “holding too much money” in photographs, an investigation by the Southern Poverty Law Center found, and one was suspended for an Instagram post in which she wore a sweatshirt with an airbrushed image of her father, a murder victim. School officials said the sweatshirt’s colors and the student’s hand symbol were evidence of gang ties, according to the investigation.

I can understand an administrator’s desire to punish students for some out-of-school actions, but these cases seem to have gone too far. School officials who suddenly have access to all of their students’ public activity need to think harder about what kinds of actions they want to police. Especially if they want to continue monitoring students’ activities for more serious transgressions, officials need to tolerate activities they may disapprove of in order to keep that information channel open. I’m sure that far fewer students made any public posts after these expulsions.

A Success: Identifying High-Risk Callers to a Crisis Hotline

“How data scientists are using AI for suicide prevention” by Brian Resnick.

A more heartening case is this article about the data science team at Crisis Text Line. CTL provides crisis counseling via text message to anyone who requests it. Certain events cause dramatic spikes in demand for its services; Robin Williams’s suicide and the 2015 terrorist attacks in Paris are two examples. The volunteers working at the time cannot handle everyone at once, so CTL used machine learning to prioritize incoming requests based on the text of the message rather than the order in which the requests came in. The words most predictive of an active rescue (when 911 is called) were the names of household drugs like Advil or ibuprofen; even the crying face emoji was more predictive than the word “suicide” and other words, like “cut” and “kill,” that the company had previously thought would be good predictors.

Something I wondered throughout this piece was what the “machine learning” being used actually was, and whether it really rose to the level of artificial intelligence. It sounds like the analysis could have been done with a simple logistic regression of active rescue on indicators for which words were used in the first message. This may sound pedantic, but I think the overuse of buzzwords like machine learning and AI discourages people who have valuable insights, or who could produce data analysis of similar rigor and results, from looking into these problems. For a longer read on how data is being used (and not used) in the counseling profession, check out this Atlantic piece that was linked in the Vox article.
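To be concrete about the logistic regression idea, here is a minimal sketch using scikit-learn. The messages, labels, and model choice are all my own assumptions for illustration; CTL has not, to my knowledge, published its actual model. It fits a logistic regression of an active-rescue indicator on word-presence indicators from the first message, then uses the predicted probabilities to order an incoming queue by risk rather than by arrival time.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data, entirely made up for illustration.
first_messages = [
    "i took a whole bottle of advil",
    "rough day, just need someone to talk to",
    "i have a plan and a bottle of ibuprofen",
    "fighting with my parents again",
]
active_rescue = np.array([1, 0, 1, 0])  # 1 = counselor ended up calling 911

# Binary indicators for which words appear in each first message.
vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(first_messages)

model = LogisticRegression().fit(X, active_rescue)

# Triage: score new texters and serve the highest-risk ones first,
# instead of answering in the order the messages arrived.
incoming = ["bad day but i'm okay", "i just swallowed some advil"]
risk = model.predict_proba(vectorizer.transform(incoming))[:, 1]
for i in np.argsort(-risk):
    print(f"{risk[i]:.2f}  {incoming[i]}")
```

In a model like this, the “most predictive words” the article describes would simply be the terms with the largest fitted coefficients.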

What was the difference?

Both CTL and the school shooting programs were trying to predict individual behavior from limited textual data. I think the difference is that the event CTL wanted to detect was far more frequent and observable in the population it studied.

My intuition is that the proportion of school shooters among everyone living near a school is dramatically smaller than the proportion of callers to a crisis hotline who need an active rescue. The most credible estimates are that in the 2015-2016 school year there were 11 to 29 school shootings across the country. Data on social media posts by the shooters has to be even scarcer. Statistical methods that identify which features are predictive of a given event need data from when the event does and does not occur. The school security firms, lacking much data on what shooters post before they bring a gun to school, ended up simply referring “violent-sounding” messages to school officials, without being able to say how likely the poster was to actually be violent at school. That determination was left to school officials who, given discretion, appear to have made some poor choices about what kinds of messages merited a response. I would not be surprised if part of the justification administrators had in mind was that the data science firm flagged the message because it thought the person would be violent at school, when in reality the firm isn’t doing any data-based prediction of which messages correlate with actual violence.
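A quick back-of-the-envelope calculation shows how punishing that rarity is. The numbers below are pure assumptions chosen for illustration, not estimates, but the point survives any reasonable choice: when the event is this rare, even a very accurate classifier produces flags that are almost entirely false positives.

```python
# All numbers are hypothetical, chosen only to illustrate the base-rate problem.
population   = 10_000_000  # monitored accounts near schools (assumed)
base_rate    = 1e-6        # fraction who will actually commit school violence (assumed)
sensitivity  = 0.99        # classifier flags 99% of true future attackers (assumed)
specificity  = 0.999       # classifier flags only 0.1% of everyone else (assumed)

true_positives  = population * base_rate * sensitivity
false_positives = population * (1 - base_rate) * (1 - specificity)
precision = true_positives / (true_positives + false_positives)

print(f"total flags:            {true_positives + false_positives:,.0f}")
print(f"flags that are genuine: {true_positives:,.1f}")
print(f"precision:              {precision:.4f}")
# Roughly 10,000 flags, of which about 10 reflect genuine risk (precision ~ 0.001).
```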

CTL, however, had a clear outcome variable and data on callers who did and did not need an active rescue. It was able to build a statistical model and identify predictive features of the incoming messages to allocate its resources effectively. CTL had the appropriate labeled data; the schools had only a limited selection of messages from students who had not yet been (and might never be) violent at school.

Graham Tierney
Statistical Science Ph.D. Student