Accelerating FPGA Adoption for AI Inference with the Inspur TF2 – Intel on AI – Episode 13


In this Intel on AI podcast episode: FPGA (field-programmable gate array) technology can offer a high degree of flexibility and performance with low latency. Yet FPGA solutions can also be challenging to implement, due to steep software development barriers, limited performance optimization, and difficult power management. Bob Anderson, General Manager of Sales for Strategic Accounts at Inspur, joins Intel on AI to talk about the Inspur TensorFlow-supported FPGA Compute Acceleration Engine (TF2). Bob explains how TF2 helps customers deploy FPGA solutions more easily and take advantage of the customization and performance of FPGAs for AI inference applications. He also describes why TF2 is especially well suited to image-based AI applications with demanding real-time requirements.

To learn more, visit:
inspursystems.com

Visit Intel AI Builders at:
builders.intel.com/ai