
Healthcare’s AI Future - In Conversation with Andrew Ng and Fei-Fei Li

Four key takeaways from the fireside chat with two leading AI experts at the intersection of data science and healthcare

DeepLearning.AI and Stanford HAI recently organized a virtual fireside chat with two of the world’s most eminent computer scientists – Andrew Ng and Fei-Fei Li.

Driven by a strong belief in healthcare’s social mission and importance to humanity, they have focused their efforts and expertise on the healthcare industry in recent years.

This write-up looks at four key takeaways from the perspectives and ideas shared on the intersection of AI and healthcare.


About the Speakers

Fei-Fei Li is the inaugural Sequoia Professor in the Computer Science Department at Stanford University and Co-Director of Stanford’s Human-Centered AI Institute. She served as the Director of Stanford’s AI Lab from 2013 to 2018.

Andrew Ng is the Founder of DeepLearning.AI, Co-Founder of Coursera, and currently an Adjunct Professor at Stanford University. He was also Chief Scientist at Baidu Inc. and Founder of the Google Brain Project.


Four Key Take-Home Messages

(1) Biggest barriers to AI adoption in healthcare

Despite all the hype around AI in healthcare, the real-world implementation of such AI solutions is much slower than expected, with only a handful of isolated successes.

Andrew believes that change management in both technical and non-technical aspects remains a critical challenge. From the technical perspective, teams need to learn how to better manage the entire lifecycle of machine learning projects.

Fei-Fei highlights that AI in healthcare still lacks a ‘human win’ to create a watershed moment in the industry. She notes that there is still a shortage of proven stories and products demonstrating how AI solutions can make a fundamental difference to patients or healthcare providers.


(2) Most significant unsolved healthcare problems

The goal of data science is to leverage data and advanced techniques to tackle business problems. Therefore, it is worth hearing what the panelists perceive to be the significant challenges most worth tackling.

Fei-Fei grabbed the crowd’s attention by sharing a staggering statistic: an estimated 250,000 people in the U.S. die from medical errors every year.

While medical innovation has paved the way for new drugs and procedures to treat patients more effectively, human error, fatigue, and poor system resourcing remain significant contributors to injuries and deaths in the healthcare system.

Andrew built on this point by mentioning that while AI in healthcare tends to be commonly associated with diagnostics and treatments, there is relatively less focus and research on the operational side.

He believes that major operational optimization problems remain overlooked, such as scheduling patients for limited resources like MRI machines and staffing healthcare workers.


(3) Importance of empathy and collaboration

Data science is a team sport that requires close collaboration and aligned objectives to succeed.

Fei-Fei believes that both data scientists (technical members) and healthcare practitioners (non-technical members) must develop an empathetic willingness to understand each other’s language.

For example, Fei-Fei would ask her students to shadow Stanford physicians in the hospital for several days to understand the humanity of the healthcare space by witnessing the vulnerability, empathy, and dignity of patients and healthcare providers.

Andrew shared his experience building a palliative care recommendation system. He was intrigued by how the system motivated more providers to start conversations about recommending certain patients for a palliative consult in situations where it might not have been discussed otherwise.

This Machine Learning application opened up a previously hierarchical structure and paved the way for an approach in the hospital that was more multi-stakeholder, interactive, and empathetic.


(4) Avoiding the perpetuation of bias

The issue of biases making their way into AI systems and causing harmful results has come under the spotlight in recent years.

Andrew suggests multi-disciplinary teams should come together to brainstorm all the things that could go wrong in a machine learning application and design metrics around them for transparent monitoring.

Furthermore, the data should be sliced accordingly to analyze the effects of a machine learning model on different groups as part of the system audit.
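The slicing audit described above can be sketched in a few lines of code: compute an error metric separately for each group and compare the results. This is a minimal illustration, not anything from the talk; the function, group labels, and data below are all hypothetical.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Compute the per-group error rate.

    records: iterable of (group, y_true, y_pred) tuples,
    where group is any hashable slice label (e.g. a demographic group).
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        if y_true != y_pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy predictions for two hypothetical patient slices
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
rates = error_rate_by_group(records)
# A large gap between slices (here 0.25 vs 0.50) flags a potential
# bias issue that the multi-disciplinary team should investigate.
```

In practice, a system audit would track several such sliced metrics (false-positive rate, calibration, and so on) as part of the transparent monitoring Andrew suggests.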

Fei-Fei highlights that ethics should be a core consideration in designing an AI solution before writing any code. In the teams she runs, she involves ethicists, legal professionals, patients, and healthcare providers to discuss these potential biases as part of the solution design process.

The bias incorporated in a machine learning system is ultimately a human responsibility. It is essential to recognize that we humans are accountable for any form of system bias, and we need to set up guardrails to mitigate it as much as possible.


Full Conversation

You can watch the video of the entire conversation here:

Before you go

I welcome you to join me on a data science learning journey! Follow this Medium page and check out my GitHub to stay in the loop of more exciting data science content. Meanwhile, have fun exploring the intersection of healthcare and data science!

