Controversial Brain Imaging Uses AI to Take Aim at Suicide Prevention

Researchers are training algorithms to spot tell-tale signs of self-harm in brain scans. But there’s probably better data to use.

WHEN SOMEONE TAKES their own life, they leave behind an inheritance of unanswered questions. "Why did they do it?" "Why didn't we see this coming?" "Why didn't I help them sooner?" If suicide were easy to diagnose from the outside, it wouldn't be the public health curse it is today. In 2014, suicide rates surged to a 30-year high in the US, and suicide is now the second leading cause of death among young adults. But what if you could get inside someone's head, to see when dark thoughts might turn to action?

That's what scientists are now attempting to do with the help of brain scans and artificial intelligence. In a study published today in Nature Human Behaviour, researchers at Carnegie Mellon and the University of Pittsburgh analyzed how suicidal individuals think and feel differently about life and death, by looking at patterns of how their brains light up in an fMRI machine. Then they trained a machine learning algorithm to isolate those signals: a frontal lobe flare at the mention of the word "death," for example. The computational classifier was able to pick out the suicidal ideators with more than 90 percent accuracy. Furthermore, it was able to distinguish people who had actually attempted self-harm from those who had only thought about it.

Thing is, fMRI studies like this suffer from some well-known shortcomings. The study had a small sample size—34 subjects—so while the algorithm might excel at spotting particular blobs in this set of brains, it’s not obvious it would work as well in a broader population. Another dilemma that bedevils fMRI studies: Just because two things occur at the same time doesn’t prove one causes the other. And then there’s the whole taint of tautology to worry about; scientists decide certain parts of the brain do certain things, then when they observe a hand-picked set of triggers lighting them up, boom, confirmation.

In today's study, the researchers started with 17 young adults between the ages of 18 and 30 who had recently reported suicidal ideation to their therapists. Then they recruited 17 neurotypical control participants and put them each inside an fMRI scanner. While inside the tube, subjects saw a random series of 30 words: 10 generally positive, 10 generally negative, and 10 specifically associated with death and suicide. Researchers asked the subjects to think about each word for the three seconds it appeared on a screen in front of them. "What does 'trouble' mean for you?" "What about 'carefree,' what's the key concept there?" For each word, the researchers recorded the subjects' cerebral blood flow to find out which parts of their brains seemed to be at work.

fMRI scans of two groups of people thinking about the word "death." On the left are those who have previously attempted suicide. On the right is a neurotypical control group. (Image: Carnegie Mellon University)

Then they took those brain scans and fed them to a machine learning classifier. They told the algorithm which scans belonged to the suicidal ideators and which belonged to the control group, each time leaving one person out of the training set. Once it got good at telling the two apart, they tested it on the left-out person. They repeated this with every subject held out in turn, a standard technique known as leave-one-out cross-validation. At the end, the classifier could look at a scan and correctly say whether or not that person had thought about killing themselves 91 percent of the time. To see how well it generalized, they then turned it on 21 additional suicidal ideators, who had been excluded from the main analysis because their brain scans had been too noisy. Using the six most discriminating concepts (death, cruelty, trouble, carefree, good, and praise), the classifier spotted the ones who'd thought about suicide 87 percent of the time.
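For readers who want to see the mechanics, here is a minimal sketch of that hold-one-out routine in Python with scikit-learn. The data is simulated and the feature count is invented; Gaussian Naive Bayes stands in for the study's classifier, which the article doesn't name, so treat every choice below as an assumption rather than the team's actual pipeline.

```python
# Leave-one-out cross-validation: train on all subjects but one, test on the
# held-out subject, and repeat so everyone gets a turn as the test case.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_subjects, n_features = 34, 50                  # 17 ideators + 17 controls; feature count is made up
X = rng.normal(size=(n_subjects, n_features))    # stand-in for per-subject fMRI activation features
y = np.array([1] * 17 + [0] * 17)                # 1 = suicidal ideator, 0 = control

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = GaussianNB().fit(X[train_idx], y[train_idx])             # train without the held-out subject
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])  # score the held-out subject

print(f"Leave-one-out accuracy: {correct / n_subjects:.0%}")
```

With real activation patterns in place of the random matrix, the fraction printed at the end is the kind of number behind the study's 91 percent figure.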

“The fact that it still performed well with noisier data tells us that the model is more broadly generalizable,” says Marcel Just, a psychologist at Carnegie Mellon and lead author on the paper. But he says the approach needs more testing to determine if it could successfully monitor or predict future suicide attempts. Comparing groups of individuals with and without suicide risk isn’t the same thing as holding up a brain scan and assigning its owner a likelihood of going through with it.

But that’s where this is all headed. Right now, the only way doctors can know if a patient is thinking of harming themselves is if they report it to a therapist, and many don’t. In a study of people who committed suicide either in the hospital or immediately following discharge, nearly 80 percent denied thinking about it to the last mental healthcare professional they saw. So there is a real need for better predictive tools. And a real opportunity for AI to fill that void. But probably not with fMRI data.

It's just not practical. The scans can cost a few thousand dollars, and insurers only cover them if there is a valid clinical reason to do so. That is, if a doctor thinks the only way to diagnose what's wrong with you is to stick you in a giant magnet. While plenty of neuroscience papers make use of fMRI, in the clinic the imaging procedure is reserved for very rare cases; most hospitals aren't equipped with the machinery, for that very reason. Which is why Just is planning to replicate the study, but with patients wearing electronic sensors on their heads while they're in the tube. Electroencephalography equipment, which produces EEG readouts, costs about one-hundredth as much as an fMRI machine. The idea is to tie predictive brain scan signals to corresponding EEG readouts, so that doctors can use the much cheaper test to identify high-risk patients.
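To make that two-step idea concrete, here is an illustrative sketch with entirely simulated data: learn a mapping from EEG features to the fMRI-derived feature space on patients who received both recordings, then classify new patients from EEG alone. None of this comes from Just's actual plan; the dimensions, the ridge-regression mapping, and the classifier are all assumptions for illustration.

```python
# Hypothetical pipeline: EEG features -> predicted fMRI-like features -> risk label.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n, eeg_dim, fmri_dim = 34, 64, 50                    # all dimensions are invented
eeg = rng.normal(size=(n, eeg_dim))                  # paired EEG recordings (simulated)
fmri = eeg @ rng.normal(size=(eeg_dim, fmri_dim))    # paired fMRI features (simulated)
fmri += 0.1 * rng.normal(size=(n, fmri_dim))         # plus measurement noise
labels = np.array([1] * 17 + [0] * 17)               # 1 = high risk, 0 = control

mapper = Ridge(alpha=1.0).fit(eeg, fmri)             # step 1: EEG -> fMRI-like representation
clf = GaussianNB().fit(mapper.predict(eeg), labels)  # step 2: classify in that representation

new_eeg = rng.normal(size=(1, eeg_dim))              # a new patient, no scanner required
print(clf.predict(mapper.predict(new_eeg)))          # predicted risk label
```

The appeal of a design like this is that the expensive scanner is only needed once, to build the mapping; day-to-day screening would run on the cheap sensors.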

Other scientists are already mining more accessible kinds of data to find telltale signatures of impending suicide. Researchers at Florida State and Vanderbilt recently trained a machine learning algorithm on 3,250 electronic medical records for people who had attempted suicide sometime in the last 20 years. It identifies people not by their brain activity patterns, but by things like age, sex, prescriptions, and medical history. And it correctly predicts future suicide attempts about 85 percent of the time.
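As a rough illustration of that record-based approach (not the Florida State and Vanderbilt team's actual pipeline), a model over routine record fields might look like the sketch below. The features are simulated stand-ins for age, sex, prescriptions, and medical history, and the random-forest choice is an assumption here.

```python
# A classifier over ordinary medical-record fields; no brain imaging involved.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 3250
X = np.column_stack([
    rng.integers(18, 90, n),    # age
    rng.integers(0, 2, n),      # sex, coded 0/1
    rng.poisson(1.5, n),        # psychiatric prescriptions on file
    rng.poisson(0.8, n),        # prior emergency visits
])
y = rng.integers(0, 2, n)       # 1 = later suicide attempt (simulated labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"Held-out accuracy: {model.score(X_te, y_te):.0%}")  # ~50% here; real records drive the 85% figure
```

On real records, a model like this gets its value from interactions among features, exactly the combinations Walsh describes below that no single field reveals on its own.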

"As a practicing doctor, none of those things on their own might pop out to me, but the computer can spot which combinations of features are predictive of suicide risk," says Colin Walsh, an internist and clinical informatician at Vanderbilt who's working to turn the algorithm he helped develop into a monitoring tool that doctors and other healthcare professionals in Nashville can use to keep tabs on patients. "To actually get used, it's got to revolve around data that's already routinely collected. No new tests. No new imaging studies. We're looking at medical records because that's where so much medical care is already delivered."

And others are mining data even further upstream. Public health researchers are poring over Google searches for evidence of upticks in suicidal ideation. Facebook is scanning users’ wall posts and live videos for combinations of words that suggest a risk of self-harm. The VA is currently piloting an app that passively picks up vocal cues that can signal depression and mood swings. Verily is looking for similar biomarkers in smart watches and blood draws. The goal for all these efforts is to reach people where they are—on the internet and social media—instead of waiting for them to walk through a hospital door or hop in an fMRI tube.

Original article from WIRED