Algorithms are the New Drugs

Hugh Harvey
Towards Data Science
9 min read · Dec 15, 2017


Once upon a time, apothecaries and healers sold their medicinal lotions and potions in backstreets and bazaars, promising fortitude and vigor to all who would buy their wares. Snake oil sat alongside miracle cures, most of which did nothing except act as a placebo. Eventually, evidence-based practice was born and modern medicine arrived. Only those treatments with proven effectiveness remain, and slowly but surely the formulary of established medications grows. Doctors now prescribe, pharmacists check and dispense, and patients, by and large, get better. Today, we have a heavily regulated, fiercely competitive and frankly ridiculously profitable global pharmaceutical industry. Last year, the top 10 Big Pharma companies’ combined total revenue reached over $400 billion. Now, we are on the cusp of an entirely new billion-dollar industry: that of ‘medical algorithmia’, poised to raise medicine to even greater heights…

Algorithms are the new drugs, and doctors the new technology prescribers.

To understand where the future lies for the Artificial Intelligence (AI) and algorithm industry (aka “Big Tech”, “Big Algorithmia”, “Digital Health”), one needs only to look at how Pharma got quite so Big in the first place, including how new drugs are developed and reach the market, how medical practice is structured around the safe delivery and monitoring of drugs, and how doctors learn to understand drug mechanisms of action and side effects.

Therefore, I predict several ancillary industries erupting around algorithms, based on lessons learned from existing practice in Big Pharma.

Advertising and marketing

In the UK, the Association of the British Pharmaceutical Industry (ABPI), and in the US the Pharmaceutical Research and Manufacturers of America (PhRMA), have codes of practice that all Pharma companies have to adhere to when promoting their products and interacting with health professionals. Call it an ethical framework, if you will, backed up by penalty fines. These codes detail how companies should behave when promoting their drugs to both clinicians and patients. They include things like rules on how much a company can spend on entertaining clients, free pens, CME offers and best practice on ‘communicating trial results’. The aim is to prevent misinformation, exaggeration and statistical sleight-of-hand, an aim at which they are largely successful.

However, critics still exist. Dr Ben Goldacre, a staunch critic of the morally bankrupt world he calls Bad Pharma, wrote that pharma companies, despite the ethics codes, are still getting away with “…poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques that are flawed by design, in such a way that they exaggerate the benefits…”. Sound familiar? Other valid criticisms include a lack of pressure on pharma companies to publish negative trial results.

Clearly, the same can be said for algorithm developers. I have yet to come across a single paper on deep learning that openly said “our algorithms didn’t work”, and some of those with ‘positive’ results have some rather dubious statistical smoke and mirrors going on. Certainly, there is still an air of hype and inflated expectations surrounding AI in general. I suppose this comes in part from venture-backed companies not wanting to admit when something didn’t work, for fear of losing future investment interest. It also certainly doesn’t help that every academic paper on AI (which are rarely peer-reviewed, by the way) is swiftly followed by an overblown media press release stating that “Algorithm X outperforms doctors at Y!!”.

It’s not just the press that is to blame. One only had to wander the stalls of RSNA this year to get an idea of how exaggerated some companies’ claims are. We need to remember that the landscape and framework for robust clinical investigation and subsequent marketing of algorithms is still in its infancy, and it is this same landscape that will see massive accelerated growth as both regulators and developers find their way. Until then, we will just have to wade through the hype to find the practical truth.

All pharma companies have a dedicated team dealing with external communication of clinical matters, known as Medical Affairs. It comprises specialist doctors trained in medical affairs, medical writing and science liaison, whose job is to sign off on any external academic publications, branding and marketing, handle scientific communication with opinion leaders and clients, and ensure that all communication is clinically accurate. There are heavy fines in place for any pharma company found to be falsely advertising, after all. Therefore, I predict that AI developers will also need to employ a ‘medical affairs’ team to handle the equivalent communication tasks. It will not be acceptable for non-clinically trained staff to act as the clinical communicators for AI companies, particularly when dealing directly with healthcare.

I also predict we will see an overseeing body for algorithmic marketing, and a code of ethical practice introduced, separate from the validation and regulatory framework that the FDA and its equivalents have. This body may well borrow heavily from pharmaceutical codes of practice regarding marketing, sales and advertising in order to ensure that hospitals aren’t sold digital snake oil instead of holy grail technology. It may well copy rules on ‘entertaining clients’, set guidelines on what business development managers can and can’t say, and maybe even set out guidelines on how to formulate a press release regarding algorithmic effectiveness. I wouldn’t be surprised if someone somewhere is already setting up such a body; it should turn out to be quite lucrative…

But none of this can be built without solid foundations of clinical research and investigational trials. I’ve previously covered statistical analysis reporting and regulatory frameworks, but not yet discussed the sequelae of algorithmic implementation, which brings me on to the next piece in the Big Pharma life-cycle: ongoing monitoring of algorithms once they are released into the wild inside clinical workflows.

Algorithm Safety & Technovigilance

In every hospital in the world there is a pharmacist whose job it is to oversee medicines safety. Whether they focus on prescribing errors, discharge communication, aseptic preparations, or dispensing mistakes, their task is to ensure that the harms and side effects of pharmaceuticals are minimised in clinical practice. It’s a hugely important part of overall patient safety, often undertaken out of the spotlight of day-to-day clinical care. (The same goes for physical medical devices in a hospital; I guarantee there is someone employed to ensure electrical compliance of kit, quality assurance and maintenance.)

Algorithmic safety will require the same level of oversight. Not only do algorithms have the potential to do unintended harm (no algorithm can ever be perfect), but they require rigorous post-market surveillance as per regulatory requirements. The international standards for medical devices (including ISO 13485) explicitly state that developers should have in place a robust system for monitoring real-world device performance. This includes regular audit of algorithmic outputs, and a feedback mechanism to ensure that errors are acted upon.
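To make that concrete, here is a minimal sketch in Python of what such an audit-and-feedback loop might look like. Everything in it is an illustrative assumption on my part: the class name, the 0.85 sensitivity floor and the escalation logic are not taken from ISO 13485 or any real product.

```python
# A minimal sketch of a post-market audit loop. All names and the
# sensitivity threshold are illustrative assumptions, not taken from
# any standard, regulator or real product.
from dataclasses import dataclass, field

@dataclass
class PerformanceAudit:
    """Rolling audit of live algorithm outputs against later
    ground-truth labels, flagging when sensitivity drifts too low."""
    sensitivity_floor: float = 0.85           # assumed acceptance threshold
    true_positives: int = 0
    false_negatives: int = 0
    missed_cases: list = field(default_factory=list)

    def record(self, case_id: str, predicted: bool, truth: bool) -> None:
        if truth and predicted:
            self.true_positives += 1
        elif truth and not predicted:
            self.false_negatives += 1
            self.missed_cases.append(case_id)  # error to feed back to the developer

    def sensitivity(self) -> float:
        total = self.true_positives + self.false_negatives
        return self.true_positives / total if total else 1.0

    def needs_escalation(self) -> bool:
        return self.sensitivity() < self.sensitivity_floor

audit = PerformanceAudit()
audit.record("case-001", predicted=True, truth=True)
audit.record("case-002", predicted=False, truth=True)   # a missed finding
if audit.needs_escalation():
    print(f"Sensitivity {audit.sensitivity():.2f} is below the floor; "
          f"review cases {audit.missed_cases}")
```

The point is not the code itself but the loop it embodies: live outputs are continuously compared against eventual ground truth, and any drift below an agreed threshold triggers a human review.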

Technovigilance is a new term, inspired by the equivalent in Big Pharma known as ‘pharmacovigilance’. Rather like the Yellow Card scheme in the UK, and the Center for Drug Evaluation and Research system in the US for mandatory reporting of medicine safety, technovigilance is designed to ensure that companies and end-users report all new or unexpected harms to a central governing body. For example, if an algorithm that detects the onset of atrial fibrillation (AF) fails to trigger and a patient comes to harm, that event must be reported both to the developer and to the relevant overseeing safety body.
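For illustration only, here is roughly what a structured technovigilance report might look like, loosely modelled on pharmacovigilance adverse-event forms. The field names, the example values and the notion of sending one payload to both the developer and the safety body are my assumptions, not a real schema.

```python
# Illustrative only: a hypothetical technovigilance report, not a real
# regulatory schema. The two recipients here are simple stand-ins.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class TechnovigilanceReport:
    algorithm_id: str     # which algorithm/version was involved
    event: str            # what happened
    patient_harm: bool    # did the failure reach the patient?
    detected_by: str      # clinician, routine audit, or automated check
    occurred_at: str      # ISO 8601 timestamp

def file_report(report: TechnovigilanceReport) -> None:
    """Serialise the report once, then notify both required parties."""
    payload = json.dumps(asdict(report), indent=2)
    print("-> developer:", payload)      # stand-in for the vendor's intake
    print("-> safety body:", payload)    # stand-in for the regulator's portal

file_report(TechnovigilanceReport(
    algorithm_id="af-detector v2.1",
    event="AF onset present on telemetry but no alert was triggered",
    patient_harm=True,
    detected_by="ward clinician",
    occurred_at=datetime.now(timezone.utc).isoformat(),
))
```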

I would like to say that I predict an industry arising around the concept of safety and technovigilance, but in fact it is already here! Third-party regulatory companies already offer technovigilance consultancy and help in setting up regulatory compliance processes for medical devices. This industry will extend to offer services for the monitoring of algorithmic safety. Not only will it ensure ongoing safety monitoring, but it will also link in with the vitally important Phase IV clinical studies, in which algorithmic performance is evaluated in a live clinical setting to assess real-world safety.

Medical Education & Allied Health Specialists

Medical education will also have to adapt to the new digital future. Just as medical students today have to learn about pharmacological mechanisms of action, half-lives, bio-effectiveness and chemical cascades, students of the future will need to understand statistical bias, artificial neural network function, data structures and the interpretation of algorithmic outputs. In addition to learning classes of drugs and their side effects, doctors will need to know about different classes of algorithms, their indications and limitations, and how to interpret their outputs in context. Just as for drug safety, doctors need to be aware of algorithmic safety. For instance, a drug has a known efficacy, an intended target population, a recommended dosage and monitoring requirements. A doctor will know most of these facts about a drug before prescribing it (and if they don’t, they shouldn’t be prescribing it!). Algorithms are similar, in that they have a known accuracy, an intended target population, a recommended usage, and require monitoring. Surely it stands to reason that a doctor should be as educated about the limitations of an algorithm as they are about the side effects of a drug?

For these reasons, I predict that medical school training will have to adapt and start including basic data science teaching, and a stronger focus on statistical understanding. We need to go beyond the basic chi-squared and t-test studies that students these days are briefly introduced to. We must ensure that the next generation of doctors is capable of dealing with more complex statistical methodologies, including (but not limited to) ROC curves, AUC, probabilistic modelling, inference and odds ratios. Only then will we have a clinical workforce prepared to steer the digital evolution of medicine.
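As a taste of what that teaching might cover, here is a toy Python example, using made-up numbers and standard scikit-learn and SciPy calls, that computes an AUC from a classifier’s scores and an odds ratio from a 2×2 table:

```python
# Toy numbers throughout; the point is the mechanics, not the data.
from sklearn.metrics import roc_auc_score, roc_curve
from scipy.stats import fisher_exact

# Hypothetical algorithm scores for 8 patients (1 = disease present).
y_true  = [0, 0, 0, 0, 1, 1, 1, 1]
y_score = [0.1, 0.3, 0.35, 0.6, 0.4, 0.7, 0.8, 0.9]

# AUC: 1.0 means every diseased patient is ranked above every healthy
# one; 0.5 is no better than chance.
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
fpr, tpr, _ = roc_curve(y_true, y_score)
print("Operating points (FPR, TPR):", list(zip(fpr.round(2), tpr.round(2))))

# Odds ratio from a 2x2 table: rows = exposed/unexposed,
# columns = disease present/absent.
table = [[20, 80],    # exposed:   20 with disease, 80 without
         [10, 90]]    # unexposed: 10 with disease, 90 without
odds_ratio, p_value = fisher_exact(table)
print(f"Odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

A clinician does not need to implement any of this from scratch, but they do need to read such outputs fluently: an AUC of 0.94 on a curated test set, for instance, says nothing by itself about performance in their own patient population.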

So, I predict a new industry arising based on the delivery of data science education to doctors. This has already started to happen, with online courses and MOOCs opening up to anyone who fancies themselves as a data scientist. I think we will begin to see courses specifically aimed at clinicians, with a focus on algorithms in medical contexts and interpretation of outputs in a probabilistic setting.

It’s not just doctors, of course, who will be the end users of algorithms in clinical practice. The corridors of medicine are full of non-physician specialties, almost one for each aspect of medicine. We have sonographers for ultrasound, theatre nurses for surgery, ward clerks for administration, porters for transport and pharmacists for drugs. Therefore, I predict an entire new allied health specialty will arise, focused purely on algorithmia. Let’s call them ‘algorithmists’.

As specialists in clinical algorithmic functionality, algorithmists will check that hospitals are using the correct algorithms in the correct situations, help with procurement processes, advise on which technologies suit which use cases, oversee algorithmic safety within hospitals, and manage ‘technovigilance’ reporting. I imagine the skill set needed to become an algorithmist will be very niche, requiring both a basic clinical grounding and a background in data science.

You might wonder why a separate specialty will be needed. Good question! In my view, even if we educate doctors effectively, they will not have the time or know-how to become true full-time algorithmists. In fact, it would be a waste of clinical training to take doctors away from front-line medicine. Yes, some clinical academics are well equipped with the necessary skills and may lead departments of algorithmists, but in the daily clinical ebb and flow of a hospital there will be huge demand for such expertise, and much of the work will require specially trained and dedicated allied health staff. It’s also more cost-effective to have a specialist workforce than to spend money on doctors taking time away from patients.

So, what have we learned?

Drugs don’t deliver safe and effective healthcare, people and systems do. The same goes for AI and algorithms.

By looking at Big Pharma, AI developers can start to predict the trends that will take place in and around their industry. Being prepared to adapt to these movements, and to adopt what they bring, will be crucial for sustained growth.

From medical affairs and marketing guidelines, to safety and technovigilance, to education and training a specialist workforce, AI developers can take part in and benefit from a whole ecosystem of supporting industries. If they engage with these then, ultimately, it is our patients who will benefit the most from the promise of Big Algorithmia.

If you are as excited as I am about the future of AI in medicine, and want to discuss these ideas, please do get in touch. I’m on Twitter: @drhughharvey.

If you enjoyed this article, it would really help if you hit recommend and shared it.

About the author:

Dr Harvey is a board-certified radiologist and clinical academic, trained in the NHS and at Europe’s leading cancer research institute, the ICR, where he was twice awarded Science Writer of the Year. He has worked at Babylon Health, heading up the regulatory affairs team and gaining a world-first CE marking for an AI-supported triage service, and is now a consultant radiologist, a Royal College of Radiologists informatics committee member, and an advisor to AI start-ups, including Kheiron Medical.


