ARTIFICIAL INTELLIGENCE | EXPLAINABILITY | DATA SCIENCE

How to Interpret Any Machine Learning Prediction

Transforming black-box models into glass boxes

David Farrugia
Towards Data Science
8 min read · Apr 22, 2022


Photo by Wilhelm Gunkel on Unsplash

Local Interpretable Model-agnostic Explanations (LIME) is a Python project developed by Ribeiro et al. [1] to interpret the predictions of any supervised Machine Learning (ML) model.
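As a quick preview before the worked example later in the article, here is a minimal sketch of how the lime package is typically called on a tabular classifier. The dataset (iris) and model (a random forest) are illustrative assumptions only, not necessarily what we use in the example below:

```python
# Minimal LIME sketch: explain a single prediction of a tabular classifier.
# The iris dataset and random forest here are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the chosen sample, queries the model, and fits a simple
# local surrogate whose weights serve as the explanation.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # [(feature condition, weight), ...]
```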

Most ML algorithms are black boxes: we cannot properly understand how they arrive at a specific prediction. This is a major drawback of ML, and as Artificial Intelligence (AI) becomes more and more widespread, the importance of understanding the ‘why’ behind a prediction keeps growing.

In this post, we will discuss how and why the LIME project works. We will also go through an example using a real-life dataset to further understand the results of LIME.

Understanding the Basics of Machine Learning

Before we can truly appreciate the awesomeness of LIME, we must first understand the basic intuition behind ML.

Any supervised problem can be summarised by two main components: 𝒙 (our features) and 𝑦 (our target objective). We want to build a model ƒ(𝒙) that generates a prediction 𝑦’ whenever we provide some unseen sample 𝒙’.
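To make this concrete, here is a minimal sketch of that setup in scikit-learn; the dataset and model below are arbitrary placeholders standing in for 𝒙, 𝑦, and ƒ(𝒙):

```python
# Minimal supervised-learning sketch: features x, target y, a model f(x),
# and predictions y' on new samples x'. Dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)       # x (features), y (target)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

f = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # the model f(x)
y_pred = f.predict(X_test)                        # y' for unseen samples x'
print(y_pred[:5])
```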


