Advancing Deep Learning with Custom-Built Accelerators- Intel® Chip Chat episode 677

Duration: 11:12
Deep learning workloads have evolved considerably over the last few years. Today’s models are larger, deeper, and more complex than neural networks from even a few years ago, with an explosion in the number of parameters per model. The Intel Nervana Neural Network Processor for Training (NNP-T) is a purpose-built deep learning accelerator designed to speed up the training and deployment of distributed learning algorithms. Carey Kloss is the VP and General Manager of the AI Training Products Group at Intel. In this interview, Kloss outlines the architecture and potential of the Intel Nervana NNP-T. He digs into major issues like memory and how the architecture was designed to avoid becoming memory-bound, how the accelerator supports existing software frameworks like PaddlePaddle and TensorFlow, and what the NNP-T means for customers who want to keep an eye on power usage and lower TCO.

To learn more about the Intel Nervana Neural Network Processor for Training, go to: https://www.intel.ai/nervana-nnp/

Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. © Intel Corporation
