#22 Explaining Explainable AI (for healthcare) with Dr Annabelle Painter (RSM Digital Health Section Podcast)

58:40

Dev and Doc are joined by guest Dr Annabelle Painter: doctor, CMO, and host of the Royal Society of Medicine Digital Health Podcast. We take a deep dive into explainability and interpretability in AI, with concrete healthcare examples.

Check out Dr. Painter's podcast here; she has some amazing guests and great insights into AI in healthcare! - https://spotify.link/pzSgxmpD5yb

πŸ‘‹ Hey! If you are enjoying our conversations, reach out and share your thoughts and journey with us. Don't forget to subscribe whilst you're here :)

πŸ‘¨πŸ»β€βš•οΈ Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/

πŸ€– Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

πŸ”— Find us on: LinkedIn Newsletter • YouTube Channel • Spotify • Apple Podcasts • Substack

For enquiries - πŸ“§ Devanddoc@gmail.com

🎞️ Editor - Dragan KraljeviΔ‡ - https://www.instagram.com/dragan_kraljevic/

🎨 Brand design and art direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

Timestamps:

  • 00:00 - Start + highlights
  • 03:47 - Intro
  • 08:16 - Does all AI in healthcare need to be explainable?
  • 15:56 - History and explanation of Explainable/Interpretable AI
  • 20:43 - Gradient-based saliency and heat maps
  • 24:14 - LIME - Local Interpretable Model-agnostic Explanations
  • 30:09 - Nonsensical correlations - When explainability goes wrong
  • 33:57 - Modern explainability - Anthropic
  • 37:15 - Comparing LLMs with the human brain
  • 40:02 - Clinician-AI interaction
  • 47:11 - Where is this all going? Aligning models to ground truth and teaching them to say "I don't know"

