ONNX and Intel nGraph API Deliver AI Framework Flexibility – Intel Chip Chat – Episode 611



In this Intel Chip Chat audio podcast with Allyson Klein: Prasanth Pulavarthi, Principal Program Manager for AI Infrastructure at Microsoft, and Padma Apparao, Principal Engineer and Lead Technical Architect for AI at Intel, discuss a collaboration that enables developers to move a deep learning model from one operating environment to another regardless of software stack or hardware configuration.

ONNX is an open format that decouples developers from specific machine learning frameworks so they can easily move between software stacks. It also reduces ramp-up time by sparing them from learning new tools. Many hardware and software companies have joined the ONNX community over the last year and added ONNX support to their products. Microsoft has enabled ONNX in Windows and Azure and has released the ONNX Runtime, which provides a full implementation of the ONNX-ML spec.

With the nGraph API, developed by Intel, developers can optimize their deep learning software without having to learn the specific intricacies of the underlying hardware. It enables portability across Intel Xeon Scalable processors, Intel FPGAs, and Intel Nervana Neural Network Processors (Intel Nervana NNPs). Intel is integrating the nGraph API into the ONNX Runtime to provide developers with accelerated performance on a variety of hardware.

For information about ONNX as well as tutorials and ways to get involved in the ONNX community, visit:

To learn more about ONNX Runtime visit:

To learn more about the Intel nGraph API, visit:

