A Scoping Review of Health AI Controversies in the Grey Literature from 2013-2022
Friday, October 13, 2023
9:30 AM – 10:45 AM ET
Location: Essex AB (Fourth Floor)
Objectives Artificial intelligence (AI) is an increasingly prominent tool for improving health services and care delivery. However, physicians and patients remain skeptical of health AI for both ethical and practical reasons. To date, the evidence behind this skepticism has not been systematically evaluated or cataloged. Thus, it is unclear whether skepticism around health AI stems from real controversies in the development and deployment of health AI or from theoretical and perceived controversies explored through thought experiments.
To that end, this study seeks to answer the following question: to what extent is skepticism around health AI justified by actual controversies in AI development and deployment?
Methods We searched Google News using 27 combinations of search terms related to health-related AI controversies. Articles were limited to those published between 2013 and 2022, and each search was limited to the first 300 results.
Results Data collection is still in progress. A total of 8,100 articles will be evaluated. Initial results suggest that only 5-10% of these articles describe genuine health-related AI controversies. The majority of articles analyzed to date were published between 2019 and 2022. Initial analysis suggests that health AI controversies fall into nine categories of concern: privacy, poor evidence, poor accuracy, poor oversight, conflict of interest, bias, illegal behavior, exploitation, and poor outcomes. Initial analysis also suggests that a small number of corporate and government actors account for the majority of press coverage of health AI controversies.
Conclusions This review may help substantiate or debunk claims that health AI is dangerous, unethical, or untrustworthy.
Arturo Balaguer – Student – Northwestern Medicine; Chad Teven – Department of Plastic Surgery – Northwestern Medicine