FDA Review Needed to Address Bias in Health Algorithms
[Thursday, November 7, 2019] Health algorithms used to guide health decisions can produce erroneous outcomes if the data used to build or train them are biased. This is a particular concern because most health algorithms do not require formal FDA approval, especially if they are labeled as Medical Device Data Systems (MDDS) products. A recent report demonstrated how a widely used health algorithm introduced racial bias into health decisions because of limitations in the data used to develop it. This is the first such independent analysis of the datasets behind a health algorithm, but the authors feared it may point to a more common problem, since most health algorithms are built on limited datasets with no independent validation or formal regulatory review. By rigorously analyzing the datasets, the authors identified the root cause of the bias and suggested ways to address it so that the algorithm provides a fair assessment.

Whether such independent analysis of the data labels used to select training datasets would improve health algorithms more broadly is an open question. Most developers would be hard-pressed to allow an outside, independent assessment of their datasets, since most of those datasets are proprietary. The authors of the study offered to work with developers pro bono, but confidentiality concerns may still prevent most developers from sharing their data. Most developers are, however, comfortable sharing proprietary information with the FDA, so one solution may be for the FDA to step in and verify the validation data. Given the FDA's current policy of not regulating MDDS products and most health algorithms, that may be a tall order; but the alternative is self-policing by developers, which has not worked well in the past.
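To make concrete what verifying validation data might involve, here is a minimal sketch of one kind of bias audit: given a validation set containing the algorithm's risk score, a group label, and a direct measure of health need, check whether patients assigned the same score show comparable need across groups. All column names and data below are hypothetical and synthetic, chosen only for illustration; the referenced study's actual methodology may differ.

```python
# Minimal sketch of an independent bias audit on a validation dataset.
# Synthetic data stands in for a developer's proprietary dataset.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical validation set: group label and algorithm risk score.
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),
    "risk_score": rng.uniform(0, 100, size=n),
})

# Simulate a direct measure of need (e.g., number of chronic conditions) that,
# at the same risk score, runs systematically higher for group B -- the kind of
# gap a skewed proxy label in the training data can produce.
bias = np.where(df["group"] == "B", 1.0, 0.0)
df["chronic_conditions"] = rng.poisson(lam=1.0 + df["risk_score"] / 40 + bias)

# Audit: within each risk-score decile, compare mean need across groups.
df["score_decile"] = pd.qcut(df["risk_score"], 10, labels=False)
audit = (
    df.groupby(["score_decile", "group"])["chronic_conditions"]
    .mean()
    .unstack("group")
)
audit["gap_B_minus_A"] = audit["B"] - audit["A"]

# A persistent nonzero gap at equal scores signals that the score is not
# measuring need equally across groups.
print(audit.round(2))
```

In a check like this, a consistent gap in a direct health measure between groups assigned the same score is the kind of evidence an independent reviewer, whether the study's authors or the FDA, could use to flag biased labels or training data before the algorithm is deployed.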