Moving AI from the Data Center to the Edge – Intel Chip Chat – Episode 663


In this Intel Conversations in the Cloud audio podcast: As AI expands into the mainstream, how will it succeed at the edge? Matt Jacobs, Senior VP of Commercial Systems at Penguin Computing, Inc., talks about the growth of AI, its move out of the data center, and easing customers into the edge space.

An established leader in data center and HPC solutions, Penguin takes a system-level view of the move to AI, a trend they’ve seen developing over the past two years. In the next 18 months, symbiotic technologies will be converging to create “real growth opportunity,” in Jacobs’ words.

Deploying AI at the edge comes with its own needs, and they differ from those of traditional data centers. Easily maintained low-power environments well-tuned for workloads are key, and the spread of processing power from the data center to the near-edge to the far-edge calls for software that can ensure workload portability.

New compute platforms, like 2nd Generation Intel Xeon Scalable processors with built-in AI acceleration, suit the unique requirements of workloads at the edge. Upcoming Intel oneAPI software, which aims to simplify programming across diverse compute engines, will support targeting workloads to hardware at various levels of capability.

For more about Penguin Computing, Inc., visit:
penguincomputing.com

Information about Intel technologies is available at:
intel.com
intel.com/ai

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at:
intel.com
