Labeling or Lifesaving? Cautioning Artificial Intelligence Predictions in Psychiatry
Friday, October 13, 2023
5:00 PM – 6:15 PM ET
Location: Iron (Fourth Floor)
As artificial intelligence (AI) utilization for clinical prediction grows, it will likely incorporate data from many sources beyond the electronic medical record (EMR), including cell phone and social media usage. In psychiatry, interpreting personal data may be useful given the social components of mental health. Use of AI clinical decision support systems (AI-CDSS) in psychiatry could help physicians make more accurate diagnoses, detect earlier signs of mental illness, and prescribe individualized therapies.
Predictions from AI are bound to shape how clinicians view their patients, contributing to a labeling effect. For example, if an AI-CDSS predicts that a patient has a higher risk of developing major depressive disorder (MDD), the prediction may affect the physician’s actions toward the patient, the patient’s self-image upon learning of this risk, and the treatment the patient receives. This could prevent the patient from developing MDD, or it could become a self-fulfilling prophecy. If AI-CDSS predictions are inaccurate, their use may cause significant harm through labeling.
Psychiatric predictions are complex, and even the best-accepted diagnostic criteria fail to fully articulate the factors comprising mental illness. Although AI may surpass humans at recognizing complex patterns, it is highly debatable which of those patterns are meaningful for predictions about an individual patient. Contrary to expectations, an AI-CDSS will not necessarily make more objective psychiatric predictions than clinicians. Only with careful regulation of data use and clinician education can AI-CDSS serve as an effective tool in psychiatry while reducing the risk of harm to patients.
Charles Binkley – Bioethics – Hackensack Meridian Health