Is AI Changing the Face of Modern Medicine?

Do medical AI products have what it takes to replace healthcare professionals and disrupt the healthcare industry?

Jonathan Davis
Towards Data Science

--

Photo by National Cancer Institute on Unsplash

Between 2012 and 2017, healthcare AI companies raised $1.8B through equity deals. This exceeded AI investment in every other industry!

AI and healthcare have been drawn to each other for decades. The healthcare industry is looking for ways to reduce costs and increase efficiency, so that high-quality, accessible healthcare can be provided to a larger proportion of the global population.

On the other side, AI researchers are looking for well-defined use-cases that can demonstrate the value of AI. Where better to do this than healthcare, where decisions can save lives and, with the exception of some complex cases, the cause of a condition is usually known and well defined?

In this article, we will explore the applications and limitations of AI within healthcare to see whether it will change the face of the industry in the not-too-distant future.

Medical Imaging

Medical imaging has been one of the fastest areas of medicine to embrace AI. This is no surprise considering that computer vision, the area of computer science that studies how computers understand images, is one of the most well-known and mature areas of machine learning.

A systematic review of 82 different studies, published in The Lancet Digital Health, found that the image diagnostic performance of deep learning models was equivalent to that of healthcare professionals.

Deep learning is an area of machine learning which uses models with an artificial neural network architecture, inspired by the neural structure of biological systems (considering this, it’s somewhat satisfying to be applying them to medical use-cases). These models are trained by providing them with huge collections of images and labels, so the computer can learn to classify certain diseases and conditions.
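As a toy illustration of that training process, here is a minimal neural network built from scratch with NumPy and trained on synthetic 8x8 "scans". Everything below (the data generator, the architecture, the hyperparameters) is invented for the sketch and is far simpler than any real medical imaging model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a labelled imaging dataset: 8x8 "scans" where
# class 1 images contain a bright central region and class 0 do not.
def make_scan(has_finding):
    img = rng.normal(0.0, 0.3, (8, 8))
    if has_finding:
        img[2:6, 2:6] += 1.0
    return img.ravel()

X = np.array([make_scan(i % 2 == 1) for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)

# One hidden layer, trained by gradient descent on binary cross-entropy.
W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16);       b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    h = np.maximum(X @ W1 + b1, 0.0)       # ReLU hidden layer
    p = sigmoid(h @ W2 + b2)               # predicted probability of class 1
    grad_logit = (p - y) / len(y)          # d(loss)/d(output logit)
    W2 -= 0.5 * h.T @ grad_logit
    b2 -= 0.5 * grad_logit.sum()
    grad_h = np.outer(grad_logit, W2) * (h > 0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

accuracy = ((p > 0.5) == (y == 1)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Real models are vastly deeper and are trained on millions of expertly labelled images, which is exactly where the curation and compute costs mentioned below come from.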

Depending on their complexity, there can be a large cost associated with training neural networks. This comes both from the time needed to accurately curate and label the data and from the large amount of computational power needed during training.

Photo by Jonathan Borba on Unsplash

However, like most applications of machine learning, once they are trained these models are faster and cheaper than their human counterparts, a clear benefit of AI adoption.

Soon after the above-mentioned systematic review, the UK government pledged £250m for AI research in the NHS. And this is just one example of the large funding provided for research into AI in medicine.

So why don’t we see these models as commonplace in hospitals?

Support For AI

There are in fact a handful of FDA-approved AI solutions on the market for medical imaging. One of these is ContaCT, which analyses CT angiograms for signs of an impending stroke. In one study, it was shown to have a sensitivity greater than 90% when identifying middle cerebral artery large vessel occlusion (a blockage in the middle cerebral artery).
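Sensitivity is simply the fraction of true cases a tool catches. A quick sketch, with made-up counts purely for illustration (the study's actual confusion matrix is not reproduced here):

```python
# Sensitivity (true positive rate): of the patients who really have the
# condition, what fraction does the tool flag? Specificity is the
# complementary measure for patients without the condition.
def sensitivity(true_positives, false_negatives):
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    return true_negatives / (true_negatives + false_positives)

# Hypothetical counts: 93 of 100 real occlusions flagged.
print(sensitivity(93, 7))    # 0.93
print(specificity(85, 15))   # 0.85
```

High sensitivity matters here because a missed occlusion means a missed treatment window; specificity tells the complementary story about false alarms.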

As well as this, it decreased the onset-to-treatment time five-fold! This is particularly important in the treatment of strokes, where the amount of permanent damage increases with time-to-treatment.

Photo by camilo jimenez on Unsplash

However, the FDA classified ContaCT as

“a computer-aided triage software that uses an artificial intelligence algorithm to analyze images for indicators associated with a stroke. Artificial intelligence algorithms are a type of clinical decision support software that can assist providers in identifying the most appropriate treatment plan for a patient’s disease or condition.”

Note the words “computer-aided”, “support” and “assist”. ContaCT and other similar applications of AI for medical imaging are not designed to work independently. They are a tool used to support medical professionals, improving the speed and accuracy of diagnosis.

For ContaCT, if it detects a vascular anomaly, it sends a text message to the neurovascular specialist who then begins treatment. There is no treatment before the intervention of a human.

Ethical Concerns

It is quite easy to understand why AI has been sidelined to a supporting role once the medical intervention begins. Although virtual AI applications are fairly mature in areas such as medical imaging, this is not the case for physical applications.

Physical applications could include techniques such as the use of autonomous robots in surgeries, and intelligent prosthesis for the handicapped. Not only is research into these techniques less mature, but they have far more immediate and permanent consequences.

Photo by Franck V. on Unsplash

If a robot were to make a mistake during surgery, it could be fatal, whereas AI models used to diagnose medical images pass their findings onto a specialist before any action is taken.

However, even the use of AI for diagnosis, a virtual application, is somewhat contentious. We will briefly touch on three of the key concerns.

Firstly, there is the issue of trust. AI models are notorious for being ‘black boxes’ that make decisions based on complex combinations of parameters that are difficult to understand. This is particularly true for neural networks, where models learn complex combinations of weights used in different mathematical functions to provide a classification.

Although there are now many techniques for interpreting neural networks, their outputs can still be hard to follow, which makes it difficult for both doctors and patients to trust the models to provide effective diagnostics.
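One family of such interpretation techniques is occlusion analysis: mask out each region of an image in turn and measure how much the model's score drops; large drops mark the regions the model relied on. The sketch below uses a toy scoring function in place of a trained network, so it illustrates the idea rather than any particular product:

```python
import numpy as np

# Toy "model": scores an 8x8 image by the mean brightness of its centre,
# standing in for a trained classifier's output probability.
def toy_model(img):
    return img[2:6, 2:6].mean()

def occlusion_map(model, img, patch=2):
    """Heat map of score drop when each patch is zeroed out."""
    base = model(img)
    heat = np.zeros_like(img)
    for r in range(0, img.shape[0], patch):
        for c in range(0, img.shape[1], patch):
            masked = img.copy()
            masked[r:r+patch, c:c+patch] = 0.0
            heat[r:r+patch, c:c+patch] = base - model(masked)
    return heat

img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0   # bright central "finding"
heat = occlusion_map(toy_model, img)
# Central patches should dominate the heat map; corners contribute nothing.
print(heat[3, 3] > heat[0, 0])
```

Even with maps like this in hand, translating "the model looked here" into a clinically convincing explanation is still an open problem, which is why trust remains a concern.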

Secondly, there is the question of responsibility. It is often unclear who is responsible when an AI model makes a mistake and causes harm. This question has often been asked in the case of fatal accidents caused by autonomous vehicles, but the question is just as applicable (if not more so) to healthcare.

Should it be the programmer or data scientist who developed the model, or the healthcare professional using it? Until it is clear who takes responsibility for diagnostic mistakes caused by AI models it is unlikely that they will be relied upon without significant human input and supervision.

Finally, AI can breed human complacency. If practitioners know that the diagnostic tool they are using is as effective as they are, they may not complete their own diagnostic analysis as thoroughly. This means healthcare professionals could avoid checking images, knowing that the AI model will do it for them. The problem is that just because the model and the human perform equivalently on average doesn't mean they will on a case-by-case basis.

Just as the model may identify cases missed by the human, so too the human may identify cases missed by the model. If healthcare professionals become complacent and do not properly check images themselves, these cases could be missed.
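The distinction between average and case-by-case performance is easy to make concrete. In the invented example below, the human and the model have identical accuracy yet miss different cases, so neither subsumes the other:

```python
# Ten hypothetical cases; the human and the model each miss one,
# but not the same one. All numbers here are invented for illustration.
cases = set(range(10))
human_misses = {0}   # case missed by the human reader
model_misses = {7}   # case missed by the model

human_acc = 1 - len(human_misses) / len(cases)
model_acc = 1 - len(model_misses) / len(cases)
missed_by_both = human_misses & model_misses

print(human_acc, model_acc)   # identical on average: 0.9 and 0.9
print(missed_by_both)         # empty: together they catch every case
```

This disjointness is exactly why the combination can outperform either alone, and why complacency throws that benefit away.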

Conclusion

It is clear that developments in image diagnostics by AI have resulted in computers that are just as effective as humans in particular diagnostic tasks.

However, the ethical concerns raised mean that computers are unlikely to replace humans in the near future. Instead, they will act as an extra layer alongside healthcare professionals to try and improve the precision and efficiency of diagnostics.

Photo by Arseny Togulev on Unsplash

In an industry where every decision can be the difference between life and death, more than just research will be needed to change this. Gradual, visible adoption of AI in hospitals will build trust with patients, and training will help healthcare professionals use AI products to improve patient treatment.

I wouldn’t expect autonomous robots to be performing your surgery any time soon, but, implemented with proper procedures, I can only see benefits in enriching the current diagnostic process with AI.

If you enjoyed this, you might like another article I wrote, “Is Fine Art the Next Frontier of AI?”.
