
Do not show up to your next presentation unprepared
Understanding what your model is doing is essential. The more information you have to evaluate your model, the better you can tune it. And even if you have a deep understanding of the inner workings of the algorithms, your business partners do not. You need to be able to present your findings in a way that is attractive and engaging.
Sometimes your business partners have subject matter expertise that can provide context for your features. If they actually understand what you convey, they can help you tune the model even further.
One of the most common questions I hear is, "What data goes into the model?", which translates to "Which features are the most important?". You need to be prepared to answer that question in a way that they understand. Shapash provides some interesting outputs that might help you inform your audience.
Why try shapash?
Always on the lookout for interesting packages to use in my day-to-day work, I came across shapash. And if you know me, you know I do not like hassles. A package must be easy to use, or it doesn’t stand a chance for a quick proof of concept. Just a few lines of code add both interactive and report-like explainability to your model script.
I think it is fully worth your time to check out the package and its offerings. Setup is simple (remember, I’m not too fond of hassles). I have detailed the steps below.
Installation
As always, it is recommended that you create a new virtual environment. I have included the link to the installation process in the References section below. For this example, I am using Jupyter, so I just needed to install ipywidgets (and enable) and shapash.
Add this straightforward code block.
After you train your model (in this example, ‘regressor’), add a simple code block to create and compile a SmartExplainer. Full example code is linked further below in this article.
from shapash.explainer.smart_explainer import SmartExplainer

# shapash Step 1: Declare SmartExplainer Object
xpl = SmartExplainer()

# shapash Step 2: Compile Model, Dataset, Encoders
xpl.compile(
    x=Xtest,
    model=regressor,
    preprocessing=encoder,  # optional
    y_pred=y_pred,
)

# shapash Step 3: Display interactive output
app = xpl.run_app()
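The snippet above assumes `regressor`, `Xtest`, `encoder`, and `y_pred` already exist in your script. As a point of reference, here is a minimal, hypothetical scikit-learn setup those names could come from (the toy data and variable names are purely illustrative, and the optional encoder is omitted):

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative toy data; in practice this is your engineered feature set
X = pd.DataFrame({"income": [20, 35, 50, 65, 80, 95],
                  "age":    [22, 30, 38, 45, 52, 60]})
y = pd.Series([15, 40, 55, 60, 70, 85], name="spending_score")

Xtrain, Xtest, ytrain, ytest = train_test_split(
    X, y, test_size=0.33, random_state=42)

regressor = RandomForestRegressor(n_estimators=50, random_state=42)
regressor.fit(Xtrain, ytrain)

# shapash expects predictions as a pandas object sharing Xtest's index
y_pred = pd.Series(regressor.predict(Xtest), index=Xtest.index, name="pred")
```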
Run the code
Run your pipeline end to end: data ingestion, feature engineering, and model training through model scoring. Then, when run_app() executes, a link to the app will be displayed.

Simply click that link to open a browser window with your output. You will be able to navigate through the various visualizations.

BONUS – a code snippet to generate an HTML report
When you want to share your findings with colleagues, you can generate an HTML report.
# Step 4: Generate the Shapash Report
xpl.generate_report(
    output_file='medium_spending_scores_report2.html',
    project_info_file='shapash_project_info.yml',
    x_train=Xtrain,
    y_train=ytrain,
    y_test=ytest,
    title_story="Spending Scores Report",
    title_description="""This is just an easy sample.
It was generated using the Shapash library.""",
    metrics=[
        {
            'path': 'sklearn.metrics.mean_absolute_error',
            'name': 'Mean absolute error',
        }
    ],
)
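The generate_report call above points at a shapash_project_info.yml file, whose contents are rendered on the report's project information page. The section and key names below are purely illustrative, not the actual file from this project; check the shapash documentation for the exact schema your version expects:

```yaml
# Hypothetical project info file -- section and key names are illustrative
General information:
  Project name: Spending Scores
  Author: Your name
Dataset:
  Source: Sample customer data
  Target: spending_score
```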

Full Example Code
Jupyter Notebook and files:
GitHub – dmoyer22/simple_shapash_example: Simple example of interactive model explainability using…
.py version:
References
GitHub – MAIF/shapash: 🔅 Shapash makes Machine Learning models transparent and understandable by…
Conclusion
I think shapash has a spot in the model explainability toolbox. If you aren’t able to explain your work to non-technical coworkers, your results may get overlooked. Nobody wants that to happen.
The folks I see advancing their Data Science careers in the workplace are those whose presentations shine and speak directly to their specific audience. So, shine on!