Fairness and Bias

In 2019, I was entrenched in volunteer work at a small free clinic in North Hollywood, California. At the time I was pursuing my California phlebotomy (blood drawing) credential; my goal was to keep working in clinical settings and to finish my degree on the way to becoming a doctor. I wanted, like many of us, to help patients – to make a positive difference in people's lives.
Not long after, I discovered computer science, Artificial Intelligence, and biotechnology. Hooked on the idea and disillusioned with a traditional academic path, I jumped at the opportunity to join a startup-like college in San Francisco, accelerate my degree path, and get into tech. It was all so shiny and promising. Papers upon papers were being published with titles like "Artificial Intelligence Helping Biotech Get Real", "AI Breakthrough Could Speed Up Lung Cancer Diagnosis …", and many, many more. Months into my new, shiny degree, I was researching the field aggressively and had even begun working in it.
Pause. The Covid-19 pandemic seemed to accelerate the AI diagnostic space, with poor results. Venture capitalists, large investment firms, academic organizations, and big pharma continued to pour millions into so-called "AI-focused biotech" startups. While large conference platforms always seem to make room for a single keynote on fairness or ethics in AI (mostly focused on disparate outcomes between demographic groups – a crucial part of the conversation, but only a part), my LinkedIn feed, industry contacts, and tech-news outlets served crickets on the subject of ethical standards for the artificial intelligence boom. The biotechnology companies themselves continued to post job openings for subject-matter-expert "Data Scientists", leaving no space in the room for anyone without a laser-focused Ph.D. dissertation, anyone who might have less of a stake in innovation for innovation's sake, anyone willing to stand in the corner and ask the crucial question:
"We can, but should we?"
Building Systems Designed for Failure
Physicians swear an oath to "first do no harm", but professionals falling under the ubiquitous umbrella of bioinformaticians, data scientists, and machine learning engineers swear no such oath.
Instead, when we’re hired, we market ourselves based on metrics. Biotechnology companies may have noble missions, but they are still profit-driven businesses focused on the value you bring them in hard numbers. As data practitioners, we do not sell our talents based on how warm and fuzzy we made a patient or their family feel through the promise of a technology we developed; we sell ourselves based on, say, the percentage of test-data oncology biopsy slides our algorithm correctly identified as pre-cancerous. Not to mention: once a technology we create makes it into the "wild", that patient and their family have no idea how it works. It’s up to us to be honest about how much it can really be trusted – and that kind of honesty doesn’t often get us far in our careers, or get us published or recognized.
If you’re clamoring for a better, real-world example of a potentially devastating and dramatic failure, take IBM’s "Watson for Oncology" project. After a $62 million investment, studies found that the recommendations made by the so-called "supercomputer" were often potentially deadly. The project was ultimately rolled back, a loss.
Or read the article I linked earlier on the hundreds of Covid diagnostic applications that were shown to do more harm than good.
"AI for healthcare" is a term with so much potential and almost palpable kinetic energy that many of us practitioners feel galvanized to become a part of it. The truth of the matter seems to be that in many cases, AI technologies are a cry for help – the world of R&D is increasing its pace, and large, slow-moving scientific companies need an accelerator to keep up. The promise and potential of automation, accelerated discovery, fast triaging, and lowering costs for radiological, pathology, or diagnostic services are just too good to pass up. The sadder part of this truth, though, is that most of our technology just isn’t ready for clinical use. How honest we are about this seems to be contingent on how dramatically we need to increase profit margins, impress investors, or keep up with the multi-billion dollar "Joneses" to maintain a name in industry.
Taking it Back to Move Forward: Building Better Systems
What good scientists know is that good science takes time. The mavericks of medical innovation have always been held in equilibrium by traditional academics who understood that you can sometimes try something new, but you shouldn’t try just for the sake of trying: not when there’s a patient on the table.
Biotech company CEOs, investors, machine learning engineers, data scientists, bioinformaticians, data analysts, data engineers, you get the point: we don’t see those patients on the table. Abstraction can be a tricky thing. It allows us to take bold steps forward, but it also lets us be bolder than we would be if we had to look into the eyes of the human being who may be put on a ventilator unnecessarily because of poorly designed Covid-19 triage software, given drugs that increase bleeding because of a poor algorithmic recommendation, or left to suffer severe adverse reactions while testing a new drug, and so on.
The emergence of the so-called "biotech" industry as we know it, roughly four decades ago, made us – the tech people – practitioners in a sacred space, right alongside the clinicians who see, in the flesh, the impact of their work, and now of ours. We need oaths – we need to develop systems with ethics in mind, from the very beginning.
In every brainstorming meeting you have with your team, you need someone in the corner who isn’t wrapped up in the rat race of their own career or focused on improving the next quarter’s profit margins. Biotech, in a nutshell, cannot remain what it is: money-hungry, innovating for the sake of innovation and publication, full of hubris, and over-trusting of the programs we develop and of the buzzwords "deep learning" and "AI".
We need industry-wide ethical standard operating procedures. We need someone in the corner who will ask us:
"We can do this, but should we?" without fear of reproach.
In the long run, the ethical conversation is the best one you can have for your company, even if it sacrifices short-term press-release glory or temporary product hype. More importantly, it is the best conversation you can have for the patients who may one day be impacted by your product or innovation.