
Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning

1:20:33
 
Content provided by Kanjun Qiu. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Kanjun Qiu or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Jonathan Frankle (Google Scholar) (Website) is finishing his PhD at MIT, advised by Michael Carbin. His main research interest is using experimental methods to understand the behavior of neural networks. His current work focuses on finding sparse, trainable neural networks.
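For context on the episode's topic, here is a minimal NumPy sketch of one-shot magnitude pruning, the basic operation behind lottery-ticket-style experiments (train a network, remove the smallest-magnitude weights, rewind the survivors to their initial values, and retrain). The `magnitude_prune` helper below is purely illustrative and is not code from the episode or from Frankle's work.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Returns the pruned weights and the binary mask. Hypothetical
    helper for illustration only.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

# Example: prune 80% of a random weight matrix.
w = np.random.randn(256, 256)
w_pruned, mask = magnitude_prune(w, sparsity=0.8)
print(f"remaining weights: {mask.mean():.0%}")  # roughly 20%
```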

**Highlights from our conversation:**

🕸 "Why is sparsity everywhere? This isn't an accident."

🤖 "If I gave you 500 GPUs, could you actually keep those GPUs busy?"

📊 "In general, I think we have a crisis of science in ML."


