Can AI enable a 10 Minute MRI?

Hugh Harvey
Towards Data Science
8 min read · Jan 13, 2018


An MRI machine (Magnetic Resonance Imaging) is a lumbering beast. Standing over 7 feet tall, as wide as a family car, weighing well over a tonne, and constantly clunking and heaving away with an unnerving ticking ‘pssht pssht’ sound, it is not exactly a patient-friendly piece of medical equipment.

It can also be deadly. Any ferromagnetic object within near-reach will be sucked towards the powerful magnetic field at accelerating speed, obliterating anything in its path. Add to this the claustrophobia-inducing tunnel you must lie in (about 60–70cm wide, just enough clearance for your nose as you lie flat), the staggering chaotic banging it makes once it’s fired up, plus the need to stay absolutely motionless for the whole procedure (up to an hour long), and you can start to understand why patients are somewhat nervous about their MRI scans.

One patient described it to me as ‘a living coffin… plenty of time for the mind to wander and reflect on life and the hereafter, while the grim reaper bangs away loudly from the other side’.

So why do we doctors put our patients through this experience? Not to be cruel, but to be kind. The MRI is a modern marvel of medical technology, a proudly British invention, that allows clinicians to peek inside the human body in exquisite detail without any of the side effects or risks of radiation exposure that come with X-rays or CT scans. The human body is largely water-based (hydrogen and oxygen atoms), and magnetic resonance capitalises on that biochemical make-up by using hugely powerful magnetic fields to control the orientation of the body’s hydrogen atoms, lining them up in one uniform direction before releasing them and letting them spin away at frequencies only detectable by fine-tuned receivers. The physics behind it is complex yet astounding, and the images that result are quite simply breathtaking.

The small price patients must pay for such excellence is a long time spent in motionless confinement surrounded by clattering noise.

For now…

At the 2017 annual Society for Imaging Informatics in Medicine (SIIM) conference in Pittsburgh, academics and physicians met to discuss the latest research in radiology. Hundreds of scientific posters and presentations were on display, and experts from all sub-fields of medical imaging pored over the latest theories and breakthroughs. A panel of judges awarded first place to a research poster from the University of Pennsylvania entitled ‘Diagnostic Quality of Machine Learning Algorithm for Optimization of Low-Dose Computed Tomography Data’ (Cross et al, U Penn).

This poster, simple and straightforward, described what could be a huge breakthrough in CT imaging, the X-ray-based imaging modality. The researchers used a patented, FDA-approved algorithm to enhance ultra-low-dose CT images, and had radiologists assess their diagnostic quality. In essence, CT scans were performed at an ultra-low radiation dose, almost equivalent to that of a simple chest X-ray, and compared to normal high-dose CT images. They found that 91% of AI-enhanced low-dose images were assessed by radiologists to be diagnostic, whereas only 28% of unenhanced low-dose images were diagnostic.
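To make the pipeline concrete, here is a toy sketch of the idea: simulate a ‘scan’ whose noise grows as dose falls, then apply a denoising step and compare errors. Everything here is an illustrative assumption — the phantom, the noise model, and especially the simple mean filter, which stands in for the trained deep network the poster actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "anatomy": a smooth phantom standing in for one high-dose CT slice.
x = np.linspace(-1, 1, 128)
phantom = np.exp(-(x[:, None]**2 + x[None, :]**2) * 4.0)

# Assumed noise model: quantum noise scales inversely with the square
# root of dose (fewer photons at lower dose -> noisier image).
def simulate_scan(dose_mgy, ref_dose=12.4, noise_at_ref=0.02):
    sigma = noise_at_ref * np.sqrt(ref_dose / dose_mgy)
    return phantom + rng.normal(0.0, sigma, phantom.shape)

high_dose = simulate_scan(12.4)  # standard dose from the poster
low_dose = simulate_scan(1.3)    # ultra-low dose from the poster

# Stand-in "enhancement": a plain 5x5 mean filter. (The real product is
# a trained deep network; this only illustrates the shape of the pipeline.)
def box_filter(img, k=5):
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

enhanced = box_filter(low_dose)

def rmse(img):
    """Error versus the noise-free ground truth."""
    return np.sqrt(np.mean((img - phantom) ** 2))

print(f"high-dose RMSE:         {rmse(high_dose):.4f}")
print(f"low-dose RMSE:          {rmse(low_dose):.4f}")
print(f"enhanced low-dose RMSE: {rmse(enhanced):.4f}")
```

Even this crude filter recovers much of the error gap between low and high dose; the point of a learned denoiser is to do so without smearing away clinically important detail, which a mean filter inevitably does.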

This is akin to those scenes in detective dramas where the characters are looking at a screen of grainy CCTV footage or photos and someone shouts ‘Enhance!’ at the IT guy, who magically taps away at the keyboard and instantly makes the images perfectly clear. Except this is real, and works on CT images. Even better, it works on CT images taken at radiation doses far lower than standard techniques, from any vendor.

The developers of the algorithm (PixelShine™, from a Sunnyvale-based pre-Series B start-up called Algomedica) claim that the original noise power spectrum (noise texture at all frequencies) is entirely maintained while improving diagnostic image quality by reducing noise magnitude, something that typical dose-reduction techniques, such as iterative reconstruction, fail to do effectively. (In fact, the more reconstructive iterations you use, the worse image quality gets.) Not so with their algorithm — have a look at the following example…

Left: Standard high dose CT at 12.4mGy. Middle: Ultra-low dose CT at 1.3mGy. Right: AI-enhanced ultra-low dose CT at 1.3mGy. Diagnostic image quality between the left and right images was rated as comparable by independent radiologists, despite a significant dose reduction of 11.1mGy. The middle image is noisy and non-diagnostic. Images courtesy of Algomedica.
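The noise power spectrum claim above can itself be illustrated numerically. The sketch below constructs a correlated (‘textured’) noise field and a magnitude-reduced copy, then radially bins their 2-D power spectra: the ratio between the two spectra is flat across frequencies, which is what ‘same texture, lower magnitude’ means. The noise field and binning scheme are my own toy assumptions, not Algomedica’s method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated noise field (mixing neighbours gives it a "texture"),
# plus a copy with the SAME texture but lower magnitude.
noise = rng.normal(size=(256, 256))
textured = noise + 0.5 * np.roll(noise, 1, axis=0) + 0.5 * np.roll(noise, 1, axis=1)
reduced = 0.3 * textured  # lower magnitude, identical correlation structure

def noise_power_spectrum(n):
    """Radially-binned power spectrum of a 2-D noise field."""
    f = np.fft.fftshift(np.fft.fft2(n))
    power = np.abs(f) ** 2 / n.size
    cy, cx = np.array(n.shape) // 2
    y, x = np.indices(n.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)  # mean power per radial bin

nps_a = noise_power_spectrum(textured)
nps_b = noise_power_spectrum(reduced)

# Flat ratio across frequencies = texture preserved; its level = magnitude.
ratio = nps_b[1:100] / nps_a[1:100]
print(f"magnitude ratio ~ {ratio.mean():.3f} (0.3 squared = 0.09)")
print(f"ratio spread across frequencies: {ratio.std():.2e}")
```

A conventional smoothing filter would instead suppress high frequencies preferentially, tilting this ratio downwards and changing the noise texture radiologists are used to reading through.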

The implications for this kind of technology are potentially game-changing. If proven to work robustly without losing clinically important detail, there is scope to reduce standard CT radiation exposure by roughly an order of magnitude. Current radiation doses, depending on the body part being scanned and other factors, are roughly equivalent to spending an hour at ground zero at Chernobyl (see graphic below). This is not enough to pose any significant risk from a single scan, but multiple scans increase an individual’s risk over time of harms such as tissue damage and new cancers. In fact, vendors of CT scanners are under a sort of perverse incentive not to reduce dose too much, in order to maintain the image quality that radiologists expect (although they do offer some dose-reduction techniques).

However, by performing ultra-low-dose CT and then post-processing the images using artificial intelligence, these risks can potentially be vastly mitigated, especially in children, bringing the radiation dose close to negligible (the same amount of radiation as taking a long-haul flight across America, for instance). In addition, this could theoretically bring down the cost of CT scanners by negating the need for expensive high-powered components, again something the big vendors may not like. Therein lies the true disruption.

There are guiding principles in radiation safety known as ALARP & ALARA (As Low As Reasonably Practicable/Achievable), whereby the radiation exposure for a medical procedure must be kept as low as possible while still maintaining diagnostic utility. The new potential for deep learning algorithms to enhance ultra-low-dose CT may well redefine what we consider practical and achievable. The cherry on the cake is that these algorithms work on images produced by any CT vendor, at any dose, meaning the barriers to scalability and adoption are low. I look forward to the results of clinical trials set up to establish just how effective these algorithms can be.

Image courtesy of xkcd.com

So, how does this translate to a 10 minute MRI?

Algomedica is also performing research using deep learning networks to enhance undersampled MRI data. Where CT has the downside of radiation, MRI has the downside of prolonged acquisition time. If algorithms can be developed to enhance the noisy, grainy undersampled MRI images produced in shortened time frames, then there is the potential to reduce time spent in the MRI scanner by up to two-thirds. To put that in perspective: a standard MRI study of the lumbar spine takes about 30 minutes. Deep learning algorithms could reduce that to a mere 10 minutes. That’s far less time spent listening to the grim reaper banging away at your temporary coffin!
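Why does undersampling save time, and what does it cost? MRI acquires data in k-space (the Fourier domain), one line at a time, so scan time is roughly proportional to the number of lines collected. The toy 1-D sketch below keeps only every third k-space sample — about a third of the ‘scan time’ — and shows the degraded zero-filled reconstruction that a deep network would then be asked to clean up. The signal, sampling pattern and rescaling are all illustrative assumptions.

```python
import numpy as np

# Toy 1-D "slice" profile standing in for one image row.
n = 256
image = np.zeros(n)
image[96:160] = 1.0                          # a simple "anatomy" block
image += 0.2 * np.sin(np.arange(n) * 0.2)    # some finer structure

# Fully sampled k-space: the Fourier transform of the image.
kspace = np.fft.fft(image)

# Keep only every 3rd sample -> roughly 1/3 of the acquisition,
# zero-filling the lines we skipped.
mask = np.zeros(n, dtype=bool)
mask[::3] = True
undersampled = np.where(mask, kspace, 0)

recon_full = np.fft.ifft(kspace).real
# Rescale for the dropped samples; aliasing artefacts remain regardless.
recon_fast = np.fft.ifft(undersampled).real * (n / mask.sum())

err_full = np.abs(recon_full - image).mean()
err_fast = np.abs(recon_fast - image).mean()
print(f"samples kept: {mask.sum()}/{n}")
print(f"full recon error: {err_full:.2e}")
print(f"fast recon error: {err_fast:.2e}  <- what the network must fix")
```

Classical compressed sensing tackles this same gap with sparsity priors; the promise of deep learning is to learn a far stronger prior from thousands of real scans, recovering diagnostic quality from even more aggressive undersampling.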

Research into deep learning applications in this field is relatively young, with some activity at both Newcastle and King’s College, but sparse published results elsewhere (this paper from 2012 from a Chinese group is one of the earliest in the field, but doesn’t use deep learning). Reducing MR scan times using sparse image data acquisition is achievable and available today, but doesn’t achieve the results that deep learning promises. Just recently, researchers at Stanford reported results on PET (Positron Emission Tomography) with radio-tracer dose reduction of up to 99%, for instance. This has particular relevance to the UK, which is potentially leaving Euratom as part of Brexit, and therefore won’t have as plentiful access to the radioisotopes required for PET imaging.

No doubt the big vendors of PET, MR and CT scanners will also be aware of this potential niche — they have, after all, been introducing brand-specific dose-reduction techniques (such as iterative reconstruction for CT) — and the race will soon be on to find out who can provide the best quality MRI images in the shortest time frames, and PET and CT at the lowest radiation doses. This work of course needs to be supported by high-performance computing infrastructure, as MRI data is much larger than that of other imaging modalities. It will also need heavy input from radiologists to advise on acceptable levels of image quality to ensure diagnostic and clinical safety, as well as from medical physicists to advise on adjusting the scanners’ acquisition parameters, but these technical and collaborative challenges are worth overcoming if the ultimate goal is to be achieved.

A 3X improvement in MRI throughput

Assuming the technology is proven to work, an increase in MRI patient throughput of up to threefold would certainly have a significant positive impact on waiting times for imaging studies, especially for musculoskeletal problems and cancers, which are the bread and butter of MR imaging. Similarly, reduced radiation exposure would enable increased and more liberal use of CT scanning. In hospitals we may even see a decline in ‘plain film’ standard X-rays (which aren’t very accurate anyway), as both MR and CT become the go-to modalities. I can certainly envisage scenarios in which, for example, suspected wrist fractures in A&E go straight to a fast MRI scan, and patients with shortness of breath avoid a chest X-ray and go straight to a low-dose CT chest. PET-CT would likely become hospitals’ first-line investigation of choice for cancer diagnosis, rather than the adjunct it is now. This tech may also subsequently increase the need for small dedicated scanners for certain body parts — an ankle/knee MR machine doesn’t need to be anywhere near as large as its whole-body cousin, and a brain CT scanner can now fit inside an ambulance, providing immediate access to acute imaging for life-threatening conditions such as stroke at a fraction of current radiation doses.

Of course, this development and implementation must be supported by increased resources for the surrounding infrastructure. Money spent here will enable the expected savings down the line. Increased staffing would be required both for operating the equipment at higher throughput (radiographers) and for reporting the increased volume of images (radiologists — unless AI image-perception algorithms take over, but that’s a separate argument!). Consideration would also need to be given to improved scheduling of appointments to accommodate the increased throughput, and we may even need larger patient waiting areas and better facilities for getting patients changed into hospital gowns and in and out of scanners!

None of this is beyond the realms of possibility, in my opinion, but it will take hard work to get there. While the algorithms in question are still in the clinical validation phase, the research certainly has legs. I would like to see more hospitals engaging at the early stages with this type of work, and more trials conducted.

Until then, it will take time before we can save time, save lives and improve the standard of care.

If you are as excited as I am about the future of radiology artificial intelligence, and want to discuss these ideas, please do get in touch. I’m on Twitter @drhughharvey

If you enjoyed this article, it would really help if you hit recommend and shared it.

About the author:

Dr Harvey is a board certified radiologist and clinical academic, trained in the NHS and Europe’s leading cancer research institute, the ICR, where he was twice awarded Science Writer of the Year. He has worked at Babylon Health, heading up the regulatory affairs team, gaining world-first CE marking for an AI-supported triage service, and is now a consultant radiologist, Royal College of Radiologists informatics committee member, and advisor to AI start-up companies, including Algomedica and Kheiron Medical.
