Neural Basis Models for Interpretability
Unpacking the new interpretable model proposed by Meta AI
6 min read · Oct 11, 2023
The widespread application of Machine Learning and Artificial Intelligence across domains raises serious risks and ethical concerns. As case studies like the criminal recidivism model reported on by ProPublica show, machine learning algorithms can be deeply biased; as a result, robust explainability mechanisms are needed to ensure trust and safety when these models are deployed in high-stakes areas.