Researchers have called for the U.S. Food and Drug Administration to create a dedicated regulatory process for artificial intelligence software in healthcare to reduce racial and ethnic disparities.
Writing in the Journal of Science Policy and Governance, the researchers note that the FDA is approving a growing number of AI-driven software-as-a-medical-device (SaMD) products: from 2015 to 2019, approvals rose by 750%.
However, such devices have the potential "to amplify healthcare bias and further exacerbate racial and ethnic health disparities."
"Some AI-driven SaMDs have displayed substandard performance among racial and ethnic minorities. Auditing these tools for biased output can help produce more equitable outcomes across populations," the researchers write.
If appropriately designed, AI could help to reduce health disparities. Therefore, the researchers propose the FDA develop a dedicated regulatory process for AI-driven tools.
They call on the agency to convene a panel that includes experts in algorithmic justice and healthcare equity, who would develop benchmarks and requirements for investigating bias at every stage of development. The panel would also review each AI-driven medical device's functions to ensure a low risk of exacerbating existing health disparities.
The researchers note that such a regulatory pathway would require significant funding from both the FDA and device developers.
Currently, the FDA reviews medical devices through premarket pathways, such as 510(k) premarket clearance, De Novo classification, or premarket approval (PMA), none of which was designed for artificial intelligence technologies.
Racial and ethnic health disparities persist in the U.S. Black, Native American, and Alaska Native people have shorter lifespans and are more likely to die from treatable conditions. Minorities are also at higher risk of developing chronic health conditions, such as diabetes or hypertension.