Session: Towards Equitable and Just Applications of AI/ML in Healthcare
Building Equitable and Effective Systems of Maintenance and Repair for Health AI
Friday, October 13, 2023
9:30 AM – 10:45 AM ET
Location: Iron (Fourth Floor)
As artificial intelligence/machine learning (AI/ML) prediction models become more common in healthcare settings, many groups have documented concerns about bias, fairness, equity, justice, and privacy. These problems are not hypothetical: many models encode and reinscribe long-standing health inequities. Some creators of AI/ML models recognize that overlooking the performance of these tools across a variety of subgroups over time has led to significant harm, and are seeking to build fairness and equity into the design of new models. Yet this assessment is often centered solely on the moment of initial deployment, emphasizing innovation over long-term sustainability. While this work is critical to preventing harm, it is unrealistic to expect that all potential harms of AI/ML models can be anticipated in the design phase, even with robust ethical and technical review. Healthcare is a setting where change is expected: model performance and impact will shift over time and across new contexts. Because shifts in model performance may generate new inequities and harms, ongoing social and technical review of these models is essential, alongside mechanisms to redress harms caused by AI/ML models.

This research uses in-depth interviews to characterize and analyze how key stakeholders, such as data scientists, clinicians, and regulators, understand the potential harms of AI/ML tools that could arise after implementation, as well as the practical barriers to identifying, repairing, or removing harmful tools. Drawing on both empirical and normative analysis, I then provide an ethical framework for the continued assessment of fairness, equity, and justice for AI/ML models over their entire lifespan.