
Explainable AI for Rational Antibiotic Discovery

15:31
 
Content provided by thescientistspeaks. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by thescientistspeaks or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Researchers now employ artificial intelligence (AI) models based on deep learning to make functional predictions about big datasets. While the concepts behind these networks are well established, their inner workings are often invisible to the user. The emerging area of explainable AI (xAI) provides model interpretation techniques that empower life science researchers to uncover the underlying basis on which AI models make such predictions.

In this month’s episode, Deanna MacNeil from The Scientist spoke with Jim Collins from the Massachusetts Institute of Technology to learn how researchers are using explainable AI and artificial neural networks to gain mechanistic insights for large-scale antibiotic discovery.

More on this topic

Artificial Neural Networks: Learning by Doing


The Scientist Speaks is a podcast produced by The Scientist’s Creative Services Team. Our podcast is by scientists and for scientists. Once a month, we bring you the stories behind newsworthy molecular biology research.

This month's episode is sponsored by LabVantage, serving disease researchers with AI-driven scientific data management solutions that accelerate discovery and speed time to market. Learn more at LabVantage.com/analytics.
