Nearly Half of AI Medical Devices Lack Clinical Validation

After analyzing hundreds of FDA-approved AI medical devices, scientists discovered that nearly half had not been validated using real patient data. The research team says this lack of clinical validation could pose a risk to patients.

Scientists have used artificial intelligence (AI) to design antibacterial molecules potentially effective against antibiotic-resistant bacteria and to develop devices that detect skin cancer.

Moreover, doctors can use AI to diagnose health conditions and create personalized treatment plans. This rapidly emerging technology also plays a crucial role in robotic surgery.


Despite its integration into healthcare, concerns about AI's accuracy and the ethical questions sparked by its use have raised doubts about whether the technology could cause more harm than good.

Now, new research published on August 26 in the journal Nature has revealed another concerning aspect of AI. According to the study's authors, many AI and machine-learning medical devices approved by the U.S. Food and Drug Administration (FDA) were not developed using data from actual human patients.

What does FDA approval mean for AI-based medical devices?

To take a closer look at the safety and effectiveness of AI health and medical devices authorized by the FDA, the scientists analyzed applications for artificial intelligence and machine learning-enabled medical devices submitted to the agency for approval.

After combing through the FDA's database, the team found 521 device authorizations. Of those, 144 were "retrospectively validated," 148 were "prospectively validated," and 22 were authorized using randomized controlled trials.

The researchers say that approximately 43% of the 521 FDA-approved medical devices did not have published clinical validation data to confirm their effectiveness. They note that instead of using images from human patients, some devices relied on computer-generated images, which do not technically meet clinical validation requirements.

"Although AI device manufacturers boast of the credibility of their technology with FDA authorization, clearance does not mean that the devices have been properly evaluated for clinical effectiveness using real patient data," said corresponding author Sammy Chouffani El Fassi, an M.D. candidate at the UNC School of Medicine and research scholar at Duke Heart Center, in a news release.

[Image: Doctor using virtual touchscreen. Credit: Panchenko Vladimir via Shutterstock]

Why validation methods matter

The study authors explained that medical devices with retrospective validation utilize previous image data, such as human X-rays, to develop the AI model. In contrast, prospective validation is based on real-time patient data, which results in more robust scientific evidence that the AI device is accurate and effective.

However, randomized controlled trials are the best way to establish the accuracy and effectiveness of machine learning-based medical devices because they involve control groups and account for variables that can affect results.

During their investigation, the research team also discovered that the FDA's most recent draft guidance fails to clarify the different types of clinical validation in its recommendations to manufacturers.

"We shared our findings with directors at the FDA who oversee medical device regulation, and we expect our work will inform their regulatory decision making," Chouffani El Fassi said.

The team hopes the study results will inspire researchers to conduct clinical studies on medical AI devices to improve the safety and effectiveness of these technologies.

Chouffani El Fassi added, "With these findings, we hope to encourage the FDA and industry to boost the credibility of device authorization by conducting clinical validation studies on these technologies and making the results of such studies publicly available."

