EVENT TALKS
About the speakers:
— Sheldon Fernandez, CEO of DarwinAI, is a seasoned executive and respected thought leader in the technical and enterprise communities. Throughout his career, he has applied emerging technologies such as Artificial Intelligence to practical scenarios for enterprise clients. An accomplished author and speaker, Sheldon has presented at numerous conferences in a variety of contexts, including Singularity University, the prestigious Bay Area think tank, and has written technical books and articles on many topics, including Artificial Intelligence and Computational Creativity.
— Michael St. Jules is a Senior Research Developer at DarwinAI and has been with the company since early 2018. He received BMath and M.Sc. degrees in mathematics from Carleton University and the University of Ottawa in 2014 and 2016, respectively, focusing on mathematical analysis, logic, and computer science, with a master’s thesis and publication in quantum cryptography. He then pivoted to machine learning and deep learning through an MMath in computational mathematics at the University of Waterloo, graduating in 2017, and worked as a research assistant for Dr. Alexander Wong until joining DarwinAI.
About the talk:
Rapid progress in AI has created in its wake a heightened interest in Explainable Artificial Intelligence (XAI), which aims to make the decisions of machine learning algorithms interpretable. Of particular interest is the interpretation of how deep neural networks make decisions, given the complexity and ‘black box’ nature of such networks.
Given the infancy of the field, there has been limited exploration into assessing the performance of explainability methods themselves, with most evaluations centered on subjective, visual interpretations of current approaches. In this talk, the speakers introduce two quantitative performance metrics for explainability methods on deep neural networks, based on a novel decision-making impact analysis:
- Impact Score, which assesses the percentage of critical factors with either strong confidence-reduction impact or decision-changing impact; and
- Impact Coverage, which assesses the percentage coverage of adversarially impacted factors in the input.

Using this approach, the speakers further present a comprehensive analysis of numerous state-of-the-art explainability methods. A toy sketch of how such metrics might be computed appears below.
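To make the two metrics concrete, here is a minimal Python sketch of one way they might be computed. It is illustrative only, not the speakers' implementation: the functions model and explain, the masking scheme, and the 50% confidence-drop threshold are all assumptions for the sake of the example.

```python
import numpy as np

def impact_score(model, inputs, explain, conf_drop=0.5):
    """Fraction of inputs whose identified critical factors, when removed,
    either flip the model's decision or strongly reduce its confidence.

    Assumptions (illustrative, not the speakers' implementation):
      model(batch) -> class-probability array of shape (batch, classes)
      explain(model, x) -> binary mask over x, with 1 marking critical factors
      conf_drop: confidence below this fraction of the original counts as
                 "strong confidence reduction"
    """
    impacted = 0
    for x in inputs:
        probs = model(x[None])[0]
        label, conf = int(probs.argmax()), float(probs.max())

        mask = explain(model, x)
        x_masked = x * (1 - mask)  # remove the critical factors

        probs_masked = model(x_masked[None])[0]
        decision_changed = int(probs_masked.argmax()) != label
        confidence_dropped = probs_masked[label] < conf_drop * conf
        if decision_changed or confidence_dropped:
            impacted += 1
    return impacted / len(inputs)

def impact_coverage(adv_mask, expl_mask):
    """Fraction of the adversarially perturbed region (adv_mask) covered by
    the explanation's critical factors (expl_mask); both are binary arrays
    of the same shape."""
    adv = np.asarray(adv_mask, dtype=bool)
    expl = np.asarray(expl_mask, dtype=bool)
    return np.logical_and(adv, expl).sum() / adv.sum()
```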
