MONTHLY EDITION

May Edition: Questions on Explainable AI

As machine learning models reach into almost every area of knowledge, genuinely understanding what ML systems do becomes increasingly difficult.

TDS Editors
Towards Data Science
3 min read · May 2, 2021


Modern Dancer (Marta Reguera) by Carlos Ojeda, local artist and friend of Carlos Mougan. Posted with permission.

Many papers, blogs, and software tools present explainability and interpretability in a quasi-mathematical way, but… is there a canonical definition of what interpretability and explainability mean? Or even of how we evaluate explanations?

Machine learning algorithms can't simply be left to run unsupervised in the wild. The question is: how can we, as human beings, understand algorithms that surpass human performance?

So, for this Monthly Edition, we decided to highlight some of the best blogs and podcasts that TDS authors want you to know about. Whether you're a data scientist in industry, a researcher in academia, a student, or just a curious person, we'd recommend taking this opportunity to reflect on the effects of machine learning on societies and the potential need for explainability.

Enjoy the read!

Carlos Mougan, Editorial Associate at Towards Data Science & Marie Skłodowska-Curie Research Fellow.

Responsible AI at Facebook

Podcast | YouTube

Facebook routinely deploys recommendation systems and predictive models that affect the lives of billions of people every day. That kind of reach comes with great responsibility — among other things, the responsibility to develop AI tools that are ethical, fair, and well characterized.

Does AI have to be understandable to be ethical?

Podcast | YouTube

As AI systems have become more ubiquitous, people have begun to pay more attention to their ethical implications.

Bring Explainable AI to the Next Level by Finding the “Few Vital Causes”

Effective explainable AI should aim to discover the "vital few" causes rather than the "trivial many" events. Here is how to do it, in Python.

By Samuele Mazzanti — 8 min

Need for Explainability in AI and Robotics

Explainable AI: the gateway to a new future.

By Pier Paolo Ippolito — 5 min

You Are Underutilizing SHAP Values — Feature Groups and Correlations

Your model is a lens into your data, and SHAP its telescope.

By Estevão Uyrá Pardillos Vieira — 7 min

Opening Black Boxes: How to leverage Explainable Machine Learning

Using PDP, LIME, and SHAP to produce interpretable decisions that create value for your stakeholders.

By Maarten Grootendorst — 9 min

The Explainable Boosting Machine

As accurate as gradient boosting, as interpretable as linear regression.

By Dr. Robert Kübler — 11 min

Advanced Permutation Importance to Explain Predictions

Take explainability to the next level while preserving simplicity.

By Marco Cerliani — 6 min
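As a baseline for the advanced variants the article explores, here is plain permutation importance via scikit-learn; the dataset and model are illustrative assumptions, not the author's code.

```python
# Permutation importance: shuffle each feature column in turn and measure
# how much the model's score degrades. Features whose shuffling hurts the
# score most are the ones the model relies on.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=300, n_features=4, n_informative=2,
                       random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```

Note that importances computed on the training set can overstate reliance on features the model has overfit to; scoring on a held-out set is the more honest choice.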

Understand the machine learning Blackbox with ML-interpreter

A web app for auto-interpreting the decisions of algorithms like XGBoost.

By Hannah Yan Han — 8 min

Explaining “Blackbox” ML Models — Practical Application of SHAP

Train a “blackbox” GBM model on a real dataset and make it explainable with SHAP.

By Norm Niemer — 5 min

5 Significant Reasons Why Explainable AI Is an Existential Need for Humanity

What is explainable artificial intelligence (XAI), and why do we seek explainability and interpretability in AI systems?

By Orhan G. Yalçın — 8 min


Building a vibrant data science and machine learning community. Share your insights and projects with our global audience: bit.ly/write-for-tds