Model Interpretability
Are We Thinking about eXplAInability Backwards?
Three questions you should be able to answer before building an AI solution
6 min read · Sep 27, 2021
One widespread concern about AI is its black-box nature, but it is possible to design for eXplainability. Not every use case requires an explainable solution, but many do. When we develop XAI, we often ask, "What can we explain?" In this post, I challenge us to think first about the end user. I highlight three questions to consider before building your…