Trusting Machine Learning Models with LIME

Data Skeptic - A podcast by Kyle Polich - Monday


Machine learning models are often criticized for being black boxes. If a human cannot determine why a model arrives at the decision it made, there's good cause for skepticism. Classic inspection approaches to model interpretability are only useful for simple models, which are likely to cover only simple problems. The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity: for a given example, a separate, simple model trained on neighbors of that example is likely to reveal which features in the local input space drive the black-box model's conclusion. In this episode, Marco Tulio Ribeiro joins us to discuss how LIME (Local Interpretable Model-agnostic Explanations) can help users trust machine learning models. The accompanying paper is titled "Why Should I Trust You?": Explaining the Predictions of Any Classifier.
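
As a rough illustration of the idea, the sketch below uses the open-source lime Python package to explain a single prediction from a scikit-learn classifier. The dataset, model, and parameter choices here are illustrative assumptions, not details from the episode.

# A minimal sketch of explaining one prediction with LIME, assuming the
# open-source `lime` package and scikit-learn are installed. The dataset
# and classifier are placeholder choices for illustration only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME perturbs the chosen example, queries the black-box model on those
# perturbed neighbors, and fits a simple weighted linear model that is
# faithful in the neighborhood of the example (local fidelity).
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # feature contributions for this one prediction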
