The Evolution of Inference with Custom-Built Accelerators - Intel® Chip Chat episode 678

Duration: 16:29
Academic research and theoretical work have long promised an exciting future for deep learning, and real-world technology is now catching up with some of AI's most exciting potential. Inference is a particularly fascinating part of AI: it is the stage at which a trained neural network makes predictions about new data, such as what an image shows or what a sound means. The Intel Nervana Neural Network Processor for Inference (NNP-I) is purpose-built to accelerate these intensive inference workloads, a crucial part of artificial intelligence.

Gadi Singer is Vice President and General Manager of the Artificial Intelligence Products Group at Intel and a 29-year veteran of the company. In this interview, Gadi discusses both the high-level design philosophy behind the NNP-I and the finer details of its design, including power efficiency, optimized data movement, and software support. He also talks about the industries and application areas that better inference could transform, such as image analysis, automated recommendation systems, and natural language processing.

To learn more about the Intel Nervana Neural Network Processor for Inference, go to: https://www.intel.ai/nervana-nnp/

Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. © Intel Corporation