MONTHLY EDITION
May Edition: Questions on Explainable AI
As machine learning models spread into almost every area of knowledge, genuinely understanding what these systems do becomes increasingly difficult.
Many papers, blogs, and software tools present explainability and interpretability in a quasi-mathematical way, but… is there a canonical definition of what interpretability and explainability mean? Or even of how we evaluate explanations?
Machine learning algorithms can’t simply be left to run unsupervised in the wild. The question is: how can we, as human beings, understand algorithms that surpass human performance?
So, for this Monthly Edition, we decided to highlight some of the best blogs and podcasts that TDS authors want you to know about. Whether you’re a data scientist in industry, a researcher in academia, a student, or just a curious person, I’d recommend taking this opportunity to reflect on the effects of machine learning on societies and the potential need for explainability.
Enjoy the read!
Carlos Mougan, Editorial Associate at Towards Data Science & Marie Skłodowska-Curie Research Fellow.
Responsible AI at Facebook
Facebook routinely deploys recommendation systems and predictive models that affect the lives of billions of people every day. That kind of reach comes with great responsibility — among other things, the responsibility to develop AI tools that are ethical, fair, and well characterized.
Does AI have to be understandable to be ethical?
As AI systems have become more ubiquitous, people have begun to pay more attention to their ethical implications.
Bring Explainable AI to the Next Level by Finding the “Few Vital Causes”
Effective Explainable AI should aim to discover the “vital few” causes rather than the “trivial many” events. Here is how to do it, in Python.
By Samuele Mazzanti — 8 min
Need for Explainability in AI and Robotics
Explainable AI: the gateway to a new future.
By Pier Paolo Ippolito — 5 min
You Are Underutilizing SHAP Values — Feature Groups and Correlations
Your model is a lens into your data, and SHAP its telescope.
By Estevão Uyrá Pardillos Vieira — 7 min
Opening Black Boxes: How to leverage Explainable Machine Learning
Using PDP, LIME, and SHAP to create interpretable decisions that create value for your stakeholders.
By Maarten Grootendorst — 9 min
The Explainable Boosting Machine
As accurate as gradient boosting, as interpretable as linear regression.
By Dr. Robert Kübler — 11 min
Advanced Permutation Importance to Explain Predictions
Bring explainability to the next level while preserving simplicity.
By Marco Cerliani — 6 min
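For readers new to the idea: permutation importance measures how much a model’s score drops when one feature’s values are randomly shuffled, breaking its relationship with the target. Here is a minimal from-scratch sketch, not the approach from the article above; the toy model and names are purely illustrative:

```python
import random

# Tiny synthetic dataset: the label depends on feature 0 only;
# feature 1 is pure noise.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

# A trivial stand-in for a "trained model": threshold on feature 0.
def predict(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(X, y):
    return sum(predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, n_repeats=10):
    """Average drop in accuracy after shuffling one feature column."""
    base = accuracy(X, y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        random.shuffle(col)
        Xp = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
        drops.append(base - accuracy(Xp, y))
    return sum(drops) / n_repeats

print(permutation_importance(X, y, 0))  # feature 0: large drop, it matters
print(permutation_importance(X, y, 1))  # feature 1: no drop, it is noise
```

Shuffling the informative feature destroys the model’s accuracy, while shuffling the noise feature changes nothing; that gap is exactly what permutation importance reports.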
Understand the machine learning Blackbox with ML-interpreter
A web app for auto-interpreting the decisions of algorithms like XGBoost
By Hannah Yan Han — 8 min
Explaining “Blackbox” ML Models — Practical Application of SHAP
Train a “blackbox” GBM model on a real dataset and make it explainable with SHAP.
By Norm Niemer — 5 min
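The attributions behind SHAP come from game-theoretic Shapley values: each feature’s credit is its average marginal contribution across all coalitions of the other features. As a minimal from-scratch illustration of that definition (the toy model and baseline here are our own, not from the article, and real SHAP libraries use far more efficient approximations):

```python
import itertools
from math import factorial

# Toy "model": any function of a fixed-length feature vector.
def model(x):
    return 3 * x[0] + 2 * x[1] * x[2]

def shapley_values(model, x, baseline):
    """Exact Shapley values for one prediction.

    Features absent from a coalition are replaced by their
    baseline value (a simple stand-in for 'feature missing').
    """
    n = len(x)

    def f(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return model(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in itertools.combinations(others, size):
                S = set(subset)
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (f(S | {i}) - f(S))
    return phi

x = [1.0, 2.0, 0.5]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# Efficiency property: attributions sum to f(x) - f(baseline)
print(phi, sum(phi), model(x) - model(baseline))
```

The additive feature gets its full coefficient as credit, while the two interacting features split their joint contribution evenly, and the attributions always sum to the gap between the prediction and the baseline output.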
5 Significant Reasons Why Explainable AI Is an Existential Need for Humanity
What is explainable artificial intelligence (XAI), and why do we seek explainability and interpretability in AI systems?
By Orhan G. Yalçın — 8 min
New podcasts
- Josh Fairfield — AI advances, but can the law keep up?
- Melanie Mitchell — Existential risk from AI: A skeptical perspective
- Ryan Carey — What does your AI want?
- Yan Li — The Surprising Challenges of Global AI Philanthropy
We also thank all the great new writers who joined us recently: Nesrine Sfar, Avishay Balter, Lindsay M Pettingill, Steve Attila Kopias, Nina Hristozova, Kishore Gopalan, Prateek Singh, jean-baptiste charraud, Dustin Stewart, Sunayana Ghosh, Preeyonuj Boruah, Johannes Beetz, Isabelle Augenstein, Kabir Nagrecha, Fabio Oliveira, Mathias Gruber, Mohammadreza Salehi, Theo, Gabrielgilling, Travis Cooper, Luca Carniato, Ines Lee, Robert Dzudzar, David Hall, Michael Azimov, Nikita Kiselov, Kirill Tsyganov, Henry Greeley, Benjamin Lowe, Katie Huang, Chuck Utterback, Denisa Blackwood, Aya Spencer, Sixing Huang, Oscar Goodloe, Federico Bianchi, Christine Winter, Gabriel Hermsen, Tino Álvarez, Peter Gao, and many others. We invite you to take a look at their profiles and check out their work.