Trusting Machine Learning Models with LIME

Data Skeptic · 35:16

Machine learning models are often criticized for being black boxes. If a human cannot determine why a model arrived at its decision, there's good cause for skepticism. Classic inspection approaches to interpretability are only useful for simple models, which tend to be suited only to simple problems.

The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity: for a given example, a separate, interpretable model trained on neighbors of that example is likely to reveal which features in the local input space drive the original model's conclusion.
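
The idea can be illustrated with a short, hypothetical sketch: sample perturbed neighbors of an example, query the black box on them, weight the neighbors by proximity, and fit a simple weighted linear model whose coefficients approximate local feature importance. The names here (`black_box`, `explain_locally`, the kernel width) are illustrative assumptions, not the LIME library's actual API.

```python
# A minimal from-scratch sketch of the local-surrogate idea behind LIME
# (not the lime library itself). Assumes a tabular black-box classifier
# `black_box` with a predict_proba method, a reference dataset X, and a
# single row x to be explained; all names are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(black_box, X, x, num_samples=5000, kernel_width=0.75):
    # 1. Perturb the example: sample neighbors around x using the
    #    per-feature spread of the reference data.
    scale = X.std(axis=0)
    neighbors = x + np.random.normal(0.0, scale, size=(num_samples, x.shape[0]))

    # 2. Query the black box for its prediction on each neighbor
    #    (probability of the class being explained).
    target = black_box.predict_proba(neighbors)[:, 1]

    # 3. Weight neighbors by proximity to x, so the surrogate is
    #    faithful locally rather than globally.
    distances = np.linalg.norm((neighbors - x) / scale, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))

    # 4. Fit a simple, interpretable model on the weighted neighborhood.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(neighbors, target, sample_weight=weights)

    # The coefficients approximate each feature's local influence on
    # the black box's prediction for x.
    return dict(enumerate(surrogate.coef_))
```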

In this episode, Marco Tulio Ribeiro joins us to discuss how LIME (Local Interpretable Model-agnostic Explanations) can help users trust machine learning models. The accompanying paper is titled "Why Should I Trust You?": Explaining the Predictions of Any Classifier.
