“This patient is at high risk of suicide”… AI can warn medical staff and prevent it

AI flags patients at high risk of a suicide attempt within 30 days and alerts medical staff

An artificial intelligence (AI) system has been developed that effectively identifies patients at high risk of suicide and alerts medical staff. (Photo = Getty Images Bank)

An artificial intelligence (AI) system has been developed that effectively identifies patients at high risk of suicide and alerts medical staff, the medical news outlet Medical Xpress reported, citing a paper by Vanderbilt University researchers recently published in JAMA Network Open.

A research team led by Professor Colin Walsh (biomedical informatics and psychiatry) at Vanderbilt University Medical Center developed an AI system called the Vanderbilt Suicide Attempt and Ideation Likelihood model (VSAIL). The team then tested whether it could effectively predict suicide risk in patients visiting three neurology clinics at Vanderbilt University Medical Center. The researchers focused on neurology clinics because certain neurological conditions are associated with an increased risk of suicide.

The researchers compared two approaches: an interruptive pop-up alert that disrupts the physician’s workflow, and a passive system that simply displays the risk information in the patient’s electronic chart. Interruptive alerts proved far more effective: physicians performed a suicide risk assessment after 42% of interruptive screening alerts, versus only 4% of passive alerts.
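To make the two designs concrete, here is a minimal sketch of the dispatch logic, assuming a simple electronic-chart object: an interruptive alert blocks the workflow until the physician responds, while a passive one only annotates the chart. All class, function, and field names below are invented for illustration and are not from the VSAIL system.

```python
# Hypothetical sketch of the two alert styles compared in the trial.
# An interruptive alert blocks the EHR workflow until acknowledged;
# a passive alert only annotates the chart. All names are invented.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Chart:
    patient_id: str
    risk_notes: list[str] = field(default_factory=list)

def deliver_alert(chart: Chart, interruptive: bool,
                  prompt: Callable[[str], bool]) -> bool:
    """Deliver a high-risk alert; return True if screening was performed."""
    message = f"Patient {chart.patient_id}: high 30-day suicide-attempt risk"
    if interruptive:
        # Pop-up: the physician must respond before continuing in the EHR.
        return prompt(message)
    # Passive: the note sits in the chart and may never be noticed.
    chart.risk_notes.append(message)
    return False

# Stand-in for a physician's pop-up response; in the trial, screening
# followed 42% of interruptive alerts versus 4% of passive ones.
chart = Chart("A-1027")
screened = deliver_alert(chart, interruptive=True, prompt=lambda msg: True)
print(screened, chart.risk_notes)
```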

Professor Walsh pointed out, “Most people who die by suicide have visited a medical institution in the year before their death, often for reasons unrelated to mental health. But universal screening is not practical in every setting. We developed VSAIL to identify high-risk patients and prompt focused screening conversations.”

Suicide in the United States has been on the rise for the past 30 years. An estimated 14.2 out of every 100,000 Americans die by suicide each year, making it the 11th leading cause of death in the country. According to the study, 77% of people who die by suicide had contact with a primary health care provider in the year before their death.

To improve suicide risk screening, researchers are exploring ways to identify the patients most in need of evaluation. The VSAIL model developed by Professor Walsh’s team analyzes routine information in electronic health records to calculate a patient’s risk of attempting suicide within 30 days.
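The article does not describe VSAIL’s internals, but the workflow it summarizes — score each visit from routine record fields, then flag those above a risk cutoff — can be sketched roughly as follows. The features, weights, and threshold below are illustrative assumptions only, not the actual model.

```python
# Illustrative sketch only: VSAIL's real features, weights, and threshold
# are not given in this article. This mimics the general workflow of
# scoring a visit from routine EHR fields and flagging high 30-day risk.
import math

# Hypothetical weights for a logistic-regression-style risk score.
WEIGHTS = {
    "prior_suicide_attempt": 2.1,
    "depression_diagnosis": 1.3,
    "recent_ed_visit": 0.8,
    "chronic_pain": 0.4,
}
BIAS = -5.0            # assumed intercept
FLAG_THRESHOLD = 0.08  # assumed cutoff for a "high-risk" flag

def risk_30_day(record: dict) -> float:
    """Return an illustrative probability of a suicide attempt in 30 days."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if record.get(name))
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

def should_flag(record: dict) -> bool:
    """Flag the visit for focused screening if risk exceeds the cutoff."""
    return risk_30_day(record) >= FLAG_THRESHOLD

visit = {"prior_suicide_attempt": True, "depression_diagnosis": True}
print(f"risk={risk_30_day(visit):.3f} flag={should_flag(visit)}")
```

In a deployment like the one described in the study, only the flag — not the raw score — would determine which visits trigger the screening alerts.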

In earlier prospective testing, in which the system only flagged at-risk patients in the health record without issuing alerts, the model proved effective at identifying high-risk patients: one in 23 patients it flagged went on to report suicidal thoughts.

In the new study, when patients whom VSAIL predicted to be at high risk visited the neurology clinics at Vanderbilt University Medical Center, their doctors were randomly assigned to receive either an interruptive or a non-interruptive (passive) alert. The researchers suggest that similar systems could be tested in other health care settings. “Only about 8% of all patient visits were flagged as requiring this screening,” Professor Walsh said. “This selective approach makes suicide prevention efforts more feasible.”

In this study, a total of 596 screening alerts were triggered across 7,732 patient visits over six months. A review of VUMC health records during the 30-day follow-up period found that none of the patients in either randomized alert group experienced suicidal ideation or attempts. Interruptive alerts were more effective at prompting screening, but they can lead to “alert fatigue” if doctors are bombarded with frequent automated pop-ups.

The researchers noted that this issue should be examined in future studies. “There is a need to balance the effectiveness of interruptive alerts against their potential downsides,” Professor Walsh said. “But these findings suggest that automated risk detection combined with well-designed alerts can help identify more patients in need of suicide prevention services.”

Source: kormedi.com