WHEN SOMEONE TAKES their own life, they leave behind an inheritance of unanswered questions. “Why did they do it?” “Why didn’t we see this coming?” “Why didn’t I help them sooner?” If suicide were easy to diagnose from the outside, it wouldn’t be the public health curse it is today. In 2014, suicide rates in the US surged to a 30-year high, making suicide the second leading cause of death among young adults. But what if you could get inside someone’s head to see when dark thoughts might turn to action?
That’s what scientists are now attempting to do with the help of brain scans and artificial intelligence. In a study published today in Nature Human Behaviour, researchers at Carnegie Mellon and the University of Pittsburgh analyzed how suicidal individuals think and feel differently about life and death, by looking at patterns of how their brains light up in an fMRI machine. Then they trained a machine learning algorithm to isolate those signals—a frontal lobe flare at the mention of the word “death,” for example. The computational classifier was able to pick out the suicidal ideators with more than 90 percent accuracy. Furthermore, it was able to distinguish people who had actually attempted self-harm from those who had only thought about it.
Thing is, fMRI studies like this suffer from some well-known shortcomings. This one had a small sample size—34 subjects—so while the algorithm might excel at spotting particular blobs in this set of brains, it’s not obvious it would work as well in a broader population. Another problem that bedevils fMRI studies: just because two things occur at the same time doesn’t prove one causes the other. And then there’s the whole taint of tautology to worry about: scientists decide certain parts of the brain do certain things, then when they observe a hand-picked set of triggers lighting those parts up, boom, confirmation.
In today’s study, the researchers started with 17 young adults between the ages of 18 and 30 who had recently reported suicidal ideation to their therapists. Then they recruited 17 neurotypical control participants and put them each inside an fMRI scanner. While inside the tube, subjects saw a random series of 30 words: 10 were generally positive, 10 were generally negative, and 10 were specifically associated with death and suicide. Researchers asked the subjects to think about each word for three seconds as it appeared on a screen in front of them. “What does ‘trouble’ mean for you?” “What about ‘carefree,’ what’s the key concept there?” For each word, the researchers recorded the subjects’ cerebral blood flow to find out which parts of their brains seemed to be at work.
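The stimulus schedule described above is simple enough to sketch in a few lines of Python. This is a toy illustration, not the authors' actual presentation software; only a handful of the study's words are reported (e.g. "carefree," "good," "praise," "trouble," "cruelty," "death"), so the rest of the lists below are hypothetical placeholders, and real fMRI protocols insert rest intervals between stimuli that this sketch omits.

```python
import random

random.seed(1)  # fixed seed so the shuffle is reproducible

# Three categories of 10 words each; most entries are invented placeholders,
# since the paper only reports a few of the actual stimulus words.
positive = ["carefree", "good", "praise"] + [f"positive_{i}" for i in range(7)]
negative = ["trouble", "cruelty"] + [f"negative_{i}" for i in range(8)]
death_related = ["death"] + [f"death_{i}" for i in range(9)]

words = positive + negative + death_related
random.shuffle(words)  # the series was presented in random order

SECONDS_PER_WORD = 3  # each word was contemplated for three seconds
# (onset_time_in_seconds, word) pairs, back to back with no rest gaps
schedule = [(i * SECONDS_PER_WORD, word) for i, word in enumerate(words)]

total_seconds = schedule[-1][0] + SECONDS_PER_WORD
print(f"{len(words)} words, {total_seconds} seconds of stimuli")
```

Shuffling the combined list rather than presenting category blocks matters here: it keeps the scanner's slow blood-flow signal from confounding word category with time in the session.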
Then they took those brain scans and fed them to a machine learning classifier. They told the algorithm which scans belonged to the suicidal ideators and which belonged to the control group, each time leaving one person out of the training set. Once the classifier got good at telling the two groups apart, they tested it on the left-out person. They repeated this for every subject in turn, using the responses to all 30 words. In the end, the classifier could look at a scan and say whether that person had thought about killing themselves with 91 percent accuracy. To see how well it generalized, they then turned it on 21 additional suicidal ideators, who had been excluded from the main analyses because their brain scans had been too noisy. Using the six most discriminating concepts—death, cruelty, trouble, carefree, good, and praise—the classifier spotted the ones who’d thought about suicide 87 percent of the time.
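The hold-one-out routine described above is known as leave-one-subject-out cross-validation, and the skeleton of it fits in a short Python sketch. Everything here is a stand-in: the feature vectors are synthetic numbers, not fMRI signatures, and the nearest-centroid rule is a generic classifier chosen for brevity, not the one the researchers used.

```python
import random

random.seed(0)

# Synthetic stand-ins for per-subject activation patterns: each subject is a
# 6-number feature vector (one per concept) plus a label, 1 for a suicidal
# ideator and 0 for a control. Real inputs would be voxel-level fMRI data.
def make_subject(label):
    base = 1.0 if label == 1 else 0.0
    return ([base + random.gauss(0, 0.3) for _ in range(6)], label)

subjects = [make_subject(1) for _ in range(17)] + [make_subject(0) for _ in range(17)]

def centroid(vectors):
    """Per-dimension mean of a list of equal-length vectors."""
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(len(vectors[0]))]

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Leave-one-subject-out cross-validation: hold out each subject once, fit the
# classifier on the remaining 33, then classify the held-out scan.
correct = 0
for held_out in range(len(subjects)):
    train = [s for i, s in enumerate(subjects) if i != held_out]
    ideator_mean = centroid([v for v, y in train if y == 1])
    control_mean = centroid([v for v, y in train if y == 0])
    vec, label = subjects[held_out]
    pred = 1 if dist2(vec, ideator_mean) < dist2(vec, control_mean) else 0
    correct += (pred == label)

accuracy = correct / len(subjects)
print(f"leave-one-subject-out accuracy: {accuracy:.2f}")
```

The point of holding each subject out in turn is that the classifier is always scored on a brain it has never seen, which is what makes the reported 91 percent figure a claim about new people rather than memorized training data.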