
Why the FDA Regulating Medical AI Products Would Be Good for Everyone

Medical AI Technology Could Skyrocket with a Clear Approval Process

Although the FDA does not maintain a public database of approved AI products, two studies have identified between 160 and 225 approved products.

Of the 161 products STAT found the FDA had approved, only 73 disclosed in public documents the amount of patient data used for validation, with:

  • The number of patients observed varying from 0 to 15,000
  • Only 7 products providing the racial makeup of their datasets
  • Only 13 products providing the gender makeup of their datasets

This inconsistency in data standards undermines trust in medical AI products and endangers their future growth. Some companies, such as AI breast-imaging provider iCAD, claim to have corresponded with the FDA about using diverse datasets. But those discussions rarely made it into their public FDA documents.

This approach differs from the FDA’s long-running, stringent reviews of new pharmaceuticals. Why is this important? Bringing new life-saving AI products to market is critical. Doing so uniformly protects patients, builds trust in these new tools, and opens the market to safer and wider use of AI.


What are the key benefits of the FDA providing a consistent, clear approval process for these products?

New Products

In my work on medical AI, I have seen the great lengths researchers go to in creating superb models. Yet an undefined regulatory framework limits the number who apply deep learning to medical problems, and fewer still commercialize those technologies.

We’re seeing an explosion in the uses of deep learning. Models can save lives, increase accessibility, reduce costs, and improve patient care. A clear pathway to market encourages innovation in medical AI.

Photo by Ramón Salinero on Unsplash

Better Models

The medical literature is filled with examples of how diverse datasets result in better models. This was true of classical medical methods before machine learning, and AI has the potential to either reduce or amplify existing medical biases.

I have written about how diverse datasets can improve predictions across race, income, and education level. This is also true across other demographics. For example, a recent Stanford study showed that the data behind image-based AI models came primarily from California, Massachusetts, and New York. Patients in those states don’t represent everyone in the United States. Ensuring engineers train models across locations would create models that work for every patient.
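
To make this concrete, here is a minimal sketch of how an engineer might check whether a trained model performs consistently across sites; the file, column names, and the 0.05 gap threshold are hypothetical, not from any specific product or study:

```python
# A minimal sketch of per-subgroup model evaluation; the CSV file,
# column names, and gap threshold are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auc(df: pd.DataFrame, group_col: str) -> pd.Series:
    """AUC of model scores vs. true labels, computed separately per subgroup."""
    return df.groupby(group_col).apply(
        lambda g: roc_auc_score(g["label"], g["model_score"])
    )

# One row per patient: true label, model score, and the site/state it came from.
df = pd.read_csv("validation_predictions.csv")  # hypothetical file
per_site = subgroup_auc(df, "site_state")       # e.g. CA, MA, NY, TX, ...
print(per_site)

# A model trained only on a few states may score well there and much
# worse everywhere else; a large gap is a signal worth investigating.
if per_site.max() - per_site.min() > 0.05:
    print("Warning: performance varies substantially across sites.")
```

The same check can be repeated for any demographic column, such as race or gender, to surface groups the model underserves.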

A set of best practices based on input from top researchers would also improve models. For example, if engineers use the test data during any step of a model’s creation, the model is less likely to generalize beyond its initial dataset.
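
As a sketch of how to avoid that failure mode: fitting any preprocessing step (here, feature scaling) on the full dataset before splitting leaks test-set statistics into training. The dataset and model below are placeholders chosen for a runnable example, not from any medical product:

```python
# A minimal sketch of avoiding test-set leakage; the dataset and model
# are placeholders, not drawn from any specific medical AI product.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Split FIRST, so the test set never influences any fitting step.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The pipeline fits the scaler on training data only and merely applies
# those statistics at test time. Fitting the scaler on all of X before
# splitting would be exactly the leakage described above.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```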

A set of clear guidelines would ensure that all AI products coming to market rest on models that don’t just work on paper; they actually work.

Widespread Use

Without regulation providing a framework for trust in efficacy and safety, the use of medical AI products will remain limited.

Photo by Owen Beard on Unsplash

Why did you take antibiotics the last time you were sick? Or fill the last prescription your doctor wrote?

The aim of regulatory agencies is to ensure that products work and are safe. Doing so means more people will feel comfortable engaging with AI, whether a cancer-detection algorithm or a surgical assistant for doctors.

Performance Monitoring

Right now, there is no standard for monitoring an AI product after approval. Post-market safety monitoring exists for non-AI medical devices, but the FDA cannot apply the same standard to AI products, which change far more quickly. Still, a standard is needed.

This will help answer questions like:

  • What if your AI product is not effective in real-world settings?
  • Is your model still effective as you continue updating and training it?
  • Under what conditions do you need to reapply for approval (e.g., what if your product switches from highlighting X-rays for radiologists to identifying conditions)?

To be clear, deep learning models adapt and develop more quickly than many drugs and traditional technologies. But we need to understand what the change process looks like after approval, and we need a process to monitor performance and safety.
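
As one hedged sketch of what such monitoring could look like in code, the snippet below tracks model performance over a rolling window of real-world cases; the AUC metric, window size, and alert threshold are all assumptions, not an FDA standard:

```python
# A minimal sketch of post-deployment performance monitoring; the rolling
# window, baseline AUC, and alert threshold are assumptions, not any
# FDA-mandated standard.
from collections import deque
from sklearn.metrics import roc_auc_score

class PerformanceMonitor:
    """Tracks model AUC over a rolling window of labeled real-world outcomes."""

    def __init__(self, baseline_auc: float, window: int = 500,
                 max_drop: float = 0.05):
        self.baseline_auc = baseline_auc
        self.max_drop = max_drop
        self.labels = deque(maxlen=window)
        self.scores = deque(maxlen=window)

    def record(self, label: int, score: float) -> None:
        """Add one case once its true outcome becomes known."""
        self.labels.append(label)
        self.scores.append(score)

    def check(self) -> bool:
        """Return True if performance has degraded past the allowed drop."""
        if len(set(self.labels)) < 2:   # AUC needs both classes present
            return False
        auc = roc_auc_score(list(self.labels), list(self.scores))
        return (self.baseline_auc - auc) > self.max_drop

# Usage: feed in cases as outcomes arrive; an alert might trigger review
# or, under some regimes, reapplication for approval.
monitor = PerformanceMonitor(baseline_auc=0.92)
monitor.record(label=1, score=0.87)
monitor.record(label=0, score=0.15)
if monitor.check():
    print("Alert: real-world performance below validated baseline.")
```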

Patient Privacy

Many medical providers use AI systems to assist decision-making without informing patients. A common rationale is that these tools are used for operations, not research. In traditional settings, however, unproven drugs require patients’ consent to take part in approved, monitored studies.

Right now, the line between operations and research is subjective. Concerns over collecting patient data also appear to be growing, with cases like Ascension’s alleged sale of identifiable patient data to Google. Defining protocols for what counts as research and how patient data may be collected would reduce risk for both patients and companies.
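
As a small illustration of what such a protocol might require in practice, here is a sketch of stripping direct identifiers from a record before it is used for model development; the field names loosely follow common HIPAA identifier categories, but the schema and hashing scheme are assumptions, not a legal standard:

```python
# A minimal sketch of de-identifying records before model development;
# the schema and salted-hash scheme are assumptions, not a legal standard.
import hashlib

DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn"}

def deidentify(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the patient ID with a salted hash."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    raw_id = str(record["patient_id"]).encode()
    clean["patient_id"] = hashlib.sha256(salt.encode() + raw_id).hexdigest()[:16]
    return clean

record = {"patient_id": 1042, "name": "Jane Doe", "age": 58,
          "phone": "555-0100", "diagnosis_code": "C50.9"}
print(deidentify(record, salt="per-project-secret"))
```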

Reduced Cost

Right now, companies have minimal incentive to gather and diversify their data, safeguard their models, and take other actions to improve their products. Why? There is no framework. If the FDA issues a framework that all companies must follow, companies will take on the expense of compliance and reduce their costs in the long run.

Uncertainty around approval adds cost. Applying through an official process, with clear requirements and timelines, can be cheaper than informally communicating with teams at the FDA. This reduces costs for companies, for the FDA, and ultimately for patients.

Stability

In its last days, the Trump administration filed a proposal to permanently exempt many categories of medical AI from FDA review. A few weeks later, the Biden administration put the proposal on hold. The fate of medical AI should not change whenever national politics does.

While adaptability is key in this new domain, we should have an established, stable standard for AI medical products, just as we do for non-AI medical products.

Photo by Alex Knight on Unsplash

Conclusion

While we do not want to bog down a new, fast-changing field like medical AI, providing a basic regulatory framework for approval and post-market performance will:

  • Increase public trust in AI
  • Improve the medical AI on the market
  • Expand the use of AI
  • Protect patients

These benefits will speed the adoption of AI in applications that improve patient care, medical outcomes, and accessibility. They will also ensure that products coming to market actually work. The result is a future where medical AI makes medicine better and benefits us all.

Note: In January 2021, the FDA published its proposed Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan.
