This article is a continuation of my series on Model Interpretability and Explainable Artificial Intelligence. If you haven’t already, I highly recommend checking out the first article of this series – ‘Introduction to Machine Learning Model Interpretation’ – which covers the basics of model interpretability, ranging from what model interpretability is and why we need it to the underlying distinctions of model interpretation.
In this article, we will pick up where we left off by diving deeper into the ins and outs of global model interpretation. First we will quickly recap what global model interpretation is and why it is important. Then we will dive into the theory behind two of its most popular methods – feature importance and partial dependence plots – and apply them to get information about the features of the heart disease data-set.
What is global model interpretation?
Global model interpretation is a set of techniques that help us answer questions like: How does the model behave in general? Which features drive its predictions, and which features are useless for your cause? Using this knowledge, you can make decisions about the data collection process, create dashboards to explain your model, or use your domain knowledge to fix obvious bugs.
Most global interpretation methods work by investigating the conditional interactions between the dependent variable and the independent variables (features) on the complete data-set. They also create and use extensive visualizations which are mostly easy to understand but contain a huge amount of useful information for analyzing your model.
Feature Importance
The importance of a feature is the increase in the prediction error of the model after we permuted the feature’s values, which breaks the relationship between the feature and the true outcome. – Interpretable Machine Learning, A Guide for Making Black Box Models Explainable

Concept and Theory
The concept of feature importance is really straightforward: we measure the importance of a feature by calculating the increase in the error of a given model after permuting/shuffling that feature’s values.
A feature is "important" if permuting its values increases the model error, because in that case the model relied on the feature to make the right predictions. Conversely, a feature is "unimportant" if permuting its values barely affects the error or doesn’t change it at all.
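To make the procedure concrete, here is a minimal sketch of permutation importance for a scikit-learn style classifier; the model, X and y names and the choice of accuracy as the error metric are assumptions for illustration, not code from the original kernel:

import numpy as np
from sklearn.metrics import accuracy_score

def permutation_feature_importance(model, X, y, n_repeats=5, random_state=0):
    # Importance = drop in accuracy after shuffling a single feature column
    rng = np.random.RandomState(random_state)
    baseline = accuracy_score(y, model.predict(X))
    importances = {}
    for col in X.columns:  # X is assumed to be a pandas DataFrame
        scores = []
        for _ in range(n_repeats):
            X_permuted = X.copy()
            # shuffle one column, breaking its relationship with the target
            X_permuted[col] = rng.permutation(X_permuted[col].values)
            scores.append(accuracy_score(y, model.predict(X_permuted)))
        importances[col] = baseline - np.mean(scores)
    return importances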
Fisher, Rudin, and Dominici suggest in their 2018 paper "All Models are Wrong but many are Useful …" that instead of randomly shuffling the feature, you should split the feature’s values in half and swap the two halves.
Advantages
Feature importance is one of the most popular techniques for getting a feel for how important a feature is. It is a simple technique that gives you highly compressed, global insights into the importance of each feature. Also, it does not require retraining the model, which is always an advantage because it saves computing time.
Disadvantages
Even though feature importance is one of the go-to interpretation techniques and can be used almost all the time, it still has some disadvantages. For instance, it isn’t clear whether you should use the training or the test set for calculating the feature importance. Furthermore, because of the permutation process, results can vary heavily when the calculation is repeated.
Another problem is that correlation between features can bias feature importance by producing unrealistic instances or by splitting the importance between the two correlated features.
For more information, I highly recommend checking out Christoph Molnar’s e-book "Interpretable Machine Learning", which is an excellent read for learning more about interpreting models.
Example and Interpretation
Which features does a model think are important for determining whether a patient has heart disease or not?
This question can be answered using feature importance.
As I already mentioned at the start of the article, we will work on the Heart Disease data-set. You can find all the code used in this tutorial on my GitHub or as a Kaggle kernel.
Most libraries, like Scikit-Learn and XGBoost, as well as other machine learning libraries, already have their own feature importance methods, but if you want consistent results when working with models from multiple libraries, it is advantageous to use the same method to calculate the feature importance for every model.
To ensure this, we will use the ELI5 library. ELI5 allows users to visualize and debug various machine learning models. It offers way more than just feature importance, including library-specific features as well as a text explainer.
To calculate the feature importance we can use ELI5’s PermutationImportance class. After calculating the feature importance of a given model, we can visualize it using the show_weights method.
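A minimal sketch of that workflow, assuming a fitted scikit-learn estimator named model and held-out data X_test and y_test (these variable names are placeholders, not code from the original kernel):

import eli5
from eli5.sklearn import PermutationImportance

# Fit the permutation importance wrapper on held-out data
perm = PermutationImportance(model, random_state=1).fit(X_test, y_test)

# Render the importance table (displays best inside a Jupyter notebook)
eli5.show_weights(perm, feature_names=X_test.columns.tolist())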
Using the method above, we can get the feature importances of different models and compare them with each other.


You can see that the two models assign quite different importance scores to the same features, which can reduce the amount of trust you can place in the results.
Nonetheless, we can see that features like ca, sex, and thal are quite useful for getting to the right predictions, whilst age and cp aren’t.
Partial Dependence Plots
The partial dependence plot (short PDP or PD plot) shows the marginal effect one or two features have on the predicted outcome of a machine learning model – J. H. Friedman

Concept and Theory
A partial dependence plot gives you information about how a feature affects the model’s predictions. This can help us understand which feature values tend to give us higher or lower outputs.
The partial dependence can be calculated easily for categorical features: we get an estimate for each category by forcing all data instances to have the same category. For example, if we are interested in how gender affects the chance of having a heart disease, we can first replace all values of the gender column with the value male and average the predictions, and then do the same using female as the value.
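As a rough sketch of this idea for a binary feature such as sex (the model and X names are placeholders, and the model is assumed to expose predict_proba):

def partial_dependence_binary(model, X, feature, values=(0, 1)):
    """Average predicted probability when every instance is forced into each category."""
    averages = {}
    for value in values:
        X_forced = X.copy()
        X_forced[feature] = value  # force all patients into the same category
        # mean predicted probability of the positive class (heart disease)
        averages[value] = model.predict_proba(X_forced)[:, 1].mean()
    return averages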
Calculating partial dependence for numerical features is a bit more involved, but Christoph Molnar explains it nicely in his e-book Interpretable Machine Learning. So if you are interested in going deeper into model interpretation, be sure to check it out.
Example and Interpretation
For creating partial dependence plots we will use the PDPbox library. PDPbox provides us with a few different well-designed plots including partial dependence plots for a single feature as well as partial dependence plots for multiple features.


To install PDPbox we can type:
pip install git+https://github.com/SauceCat/PDPbox.git
Now we can create a partial dependence plot to analyze the effect of the different genders on the probability of having a heart disease using the pdp_isolate and pdp_plot methods.
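A sketch of how those calls might look with PDPbox’s classic pdp module (a fitted model and a DataFrame X_test are assumed, and exact signatures can differ between PDPbox versions):

from pdpbox import pdp
import matplotlib.pyplot as plt

# Compute the partial dependence of the prediction on the sex feature
pdp_sex = pdp.pdp_isolate(
    model=model,
    dataset=X_test,
    model_features=X_test.columns.tolist(),
    feature='sex'
)

# Plot the isolated partial dependence
fig, axes = pdp.pdp_plot(pdp_sex, 'sex')
plt.show()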

The yellow and black line gives us the average effect on the predictions when changing the gender from sex_0 to sex_1. Looking at this line alone, we can see that patients with gender sex_0 are more likely to have a heart disease than patients with gender sex_1.
To create a partial dependence plot that shows us the interaction effect of two features on the target, we can use the pdp_interact and pdp_interact_plot methods.
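Under the same assumptions as above (a fitted model and a DataFrame X_test), the interaction workflow could look roughly like this:

from pdpbox import pdp
import matplotlib.pyplot as plt

features_to_plot = ['age', 'sex']

# Compute the joint partial dependence of the prediction on age and sex
interaction = pdp.pdp_interact(
    model=model,
    dataset=X_test,
    model_features=X_test.columns.tolist(),
    features=features_to_plot
)

# Plot the two-feature partial dependence as a grid
fig, axes = pdp.pdp_interact_plot(
    pdp_interact_out=interaction,
    feature_names=features_to_plot,
    plot_type='grid'
)
plt.show()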

This can help us find interactions between two features or even between individual feature values. For example, we can see that no matter the value of the gender column, patients with an age between 55 and 63 have the lowest probability of having a heart disease.
Conclusion
Global model interpretation is a set of techniques that help us answer questions like: How does a model behave in general? Which features drive its predictions, and which features are completely useless for your cause?
The two most used global model interpretation techniques are feature importance and partial dependence plots.
We can use feature importance to get an understanding of how important a model thinks a feature is for making predictions.
Partial dependence plots help us understand how a specific feature value affects predictions. This is extremely useful because it allows you to gain interesting insights about specific feature values, which can then be further analyzed or shared.
What’s next?
In part 3 of this series, we will take a closer look at understanding individual predictions by diving into what local model interpretation is and how two local model interpretation techniques, LIME and Shapley values, work.
That’s all from this article. If you have any questions or just want to chat with me feel free to leave a comment below or contact me on social media. If you want to get continuous updates about my blog make sure to follow me on Medium and join my newsletter.