Transparency for Clinicians: Information Needs for Healthcare Artificial Intelligence
Saturday, October 14, 2023
9:00 AM – 10:15 AM ET
Location: Waterview CD (Lobby Level)
As artificial intelligence (AI) is increasingly translated from research settings into patient care, its highly technical and opaque nature raises numerous ethical concerns. Despite the considerable potential utility of AI tools, significant gaps remain in how to implement them responsibly with respect to clinician needs and preferences. We conducted eight focus groups with 18 clinicians from two academic health systems to identify the key information needs for transparency and trusted use of AI in patient care. We found that clinicians have specific information requirements for incorporating AI into patient care practice, including concerns about accuracy, safety, limitations, and endorsements. Additionally, clinicians articulated specific processes and factors that mediate their trust in AI, such as knowing which stakeholders and data sets were involved in the development of a given tool (e.g., diverse populations). Clinicians also discussed what information needs might exist for their patient populations and to what extent there may be obligations to be transparent about these tools to patients. Furthermore, clinicians expressed how they would like information about AI to be presented to them, adding to ongoing conversations about how best to make details about these technologies transparent to end users. Our analysis situates prevalent discussions in AI ethics regarding transparency and explainability from the viewpoint of clinicians. We call into question what obligations healthcare AI developers, healthcare systems, and regulatory agencies have to clinicians as they are tasked with adopting and implementing AI tools in patient care.