Accelerating AI Inference with Microsoft Azure Machine Learning – Intel Chip Chat – Episode 626


In this Intel Chip Chat audio podcast with Allyson Klein, Dr. Henry Jerez, Principal Group Product and Program Manager for Azure Machine Learning Inferencing and Infrastructure at Microsoft, joins the show to discuss accelerating AI inference in Microsoft Azure. Dr. Jerez leads the team responsible for creating assets that help data scientists manage their AI models and deployments, both in the cloud and at the edge, and works closely with Intel to deliver the fastest possible inference performance for Microsoft's customers. At Ignite 2018, Microsoft demoed an Azure Machine Learning model running atop the OpenVINO toolkit and Intel architecture for highly performant inference at the edge; this capability will soon be incorporated into Azure Machine Learning. Microsoft also announced at Ignite a refreshed public preview of Azure Machine Learning that now provides a unified platform and SDK for data scientists, IT professionals, and developers.

For more on Microsoft Azure Machine Learning, please visit:
aka.ms/azureml-docs