How to Interpret Any Machine Learning Prediction
Transforming black-box models into glass boxes
Local Interpretable Model-agnostic Explanations (LIME) is a Python project developed by Ribeiro et al. [1] to interpret the predictions of any supervised Machine Learning (ML) model.
Most ML algorithms are black boxes: we cannot properly understand how they arrive at a specific prediction. This is a major drawback of ML, and as Artificial Intelligence (AI) becomes more widespread, the importance of understanding ‘the why’ behind a prediction is ever-increasing.
In this post, we will discuss how and why the LIME project works. We will also go through an example using a real-life dataset to further understand the results of LIME.
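As a quick preview of what this looks like in practice, here is a minimal sketch of explaining a single prediction with LIME’s tabular interface. The dataset (Iris) and the black-box model (a scikit-learn random forest) are placeholder choices for illustration only; we will unpack each step later in the post.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# A stand-in black-box model: a random forest fitted to the Iris data
iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

# The explainer learns simple statistics of the training features
explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    mode="classification",
)

# Explain a single prediction made by the black-box model
explanation = explainer.explain_instance(
    iris.data[0],          # the sample whose prediction we want explained
    model.predict_proba,   # the model's prediction function
    num_features=4,        # how many feature contributions to report
)
print(explanation.as_list())
```

The output of `explanation.as_list()` pairs each feature (or feature range) with its estimated contribution to the prediction for this particular sample.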
Understanding the Basics of Machine Learning
Before we can truly appreciate the awesomeness of LIME, we must first understand the basic intuition behind ML.
Any supervised problem can be summarised by two main ingredients: 𝒙 (our features) and 𝑦 (our target). We want to build a model ƒ(𝒙) that generates a prediction 𝑦’ whenever we provide it with some new sample 𝒙’.
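To make this concrete, here is a minimal sketch of that workflow; the Iris data and a random forest are again used purely as stand-ins for 𝒙, 𝑦 and ƒ(𝒙).

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# x: the features, y: the target
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# f(x): the model we fit to the training data
f = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# y': the prediction f returns for a new sample x'
x_new = X_test[:1]
y_pred = f.predict(x_new)
print(y_pred)
```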