Interpretability in Machine Learning

Why we need to understand how our models make predictions

Conor O'Sullivan
Towards Data Science
9 min read · Oct 21, 2020

Should we always trust a model that performs well? A model could reject your application for a mortgage or diagnose you with cancer. The consequences of these decisions are serious and, even if they are correct, we would expect an explanation. A human would be able to tell you that your income is too low for a mortgage or that a specific cluster of cells is likely malignant. A model that provided similar explanations would be more useful than one that just provided…
