The Minority Report: Is Big-Data Suicide Risk Prediction Something We Should Want?
Thursday, October 12, 2023
4:00 PM – 5:15 PM ET
Location: Falkland (Fourth Floor)
Suicide is hard to predict, and few clinically useful suicide prediction algorithms exist. Currently, suicide prediction relies on a just-in-time approach that depends on individual providers’ assessments, typically applied to persons already receiving care. But many people die by suicide without ever seeing a health care provider.
Machine-learning approaches may yield advanced suicide prediction algorithms (ASPAs) that combine data from health care records, genetic sequences, social media, and smart devices, among other sources, to predict suicide more accurately. ASPAs may also be adaptable for persons who have not yet sought care. ASPAs could, eventually, accurately determine that a person is at increased near-term risk of suicide even before they begin to experience significant suicidal ideation.
Is this something we should want? We argue, controversially, that the answer may be “no.” We consider how variations in the performance characteristics of ASPAs could interact with different clinical approaches to mitigating suicide risk, such as involuntary hospitalization or increased monitoring. We argue that even perfect suicide prediction could be worse than the status quo if clinical approaches to reducing risk remain too aversive, invasive, ineffectual, or harmful; these drawbacks only grow as ASPA performance worsens. Opt-in approaches to risk prediction, wherein patients in a health care system consent to monitoring for suicide risk, could attenuate some, but not all, of these harms. Although preventing suicide is important, ASPAs should be implemented with caution and only in conjunction with efforts to improve the resources offered to vulnerable persons with suicidal ideation.