
Pushing the Boundaries of AI, Cheaply and Efficiently: Murat Onen Explains

31:39
Content provided by Richard Jacobs. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Richard Jacobs or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Large-scale AI models that enable next-generation applications like natural language processing and autonomous systems require intensive training and immense amounts of power, and their monetary and environmental costs are becoming prohibitive.

This is where analog deep learning comes in. The idea is to develop a new type of hardware that accelerates the training of neural networks, offering a cheaper, more efficient, and more sustainable way forward for AI applications.
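
For readers who want a concrete picture, here is a minimal sketch (in Python with NumPy; the array shapes and values are illustrative, not from the episode) of the operation analog hardware is designed to accelerate: the matrix-vector multiply that dominates neural-network training, carried out physically by a crossbar of programmable resistors rather than by digital logic.

```python
import numpy as np

# Illustrative sketch (names and values are my own, not from the episode):
# in an analog crossbar, weights are stored as device conductances G[i, j]
# and inputs are applied as voltages V[j]. Ohm's law performs each
# multiplication and Kirchhoff's current law sums the currents along each
# row, so the whole product I = G @ V emerges in a single physical step.

rng = np.random.default_rng(seed=0)

G = rng.uniform(0.0, 1.0, size=(4, 8))   # conductances encoding a 4x8 weight matrix
V = rng.uniform(-0.5, 0.5, size=8)       # voltages encoding 8 input activations

I = G @ V  # output currents: one multiply-accumulate per device, all in parallel

print(I)   # 4 outputs, computed here digitally but "for free" in analog hardware
```

Because the multiply-accumulate happens in the physics of the array rather than as a stream of digital instructions, the energy and time per operation can drop dramatically; the engineering challenge the episode touches on is making such programmable devices fast, precise, and durable enough to support training.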

Murat Onen, a postdoctoral researcher in the Department of Electrical Engineering and Computer Science at MIT, explains.

Tune in to explore:

  • Conventional vs. novel methods of training neural networks
  • The difference between GPUs and CPUs and why it matters
  • Analog vs. digital machine operations
  • How long it will take for small- and full-scale analog systems to outperform conventional AI hardware

Press play for the full conversation.

Episode also available on Apple Podcasts: http://apple.co/30PvU9C


