Content provided by Intel Corporation. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Intel Corporation or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Driving Data Center Performance Through Intel Memory Technology – Intel® Chip Chat episode 624

10:43
 
Dr. Ziya Ma, vice president of the Intel® Software and Services Group and director of Data Analytics Technologies, gives Chip Chat listeners a look at data center optimization, along with a preview of advancements well underway. In their work across the industry, Dr. Ma and her team have found that taming the data deluge calls for IT data center managers to unify their big data analytics and AI workflows. As they’ve helped customers overcome the memory constraints involved in data caching, Apache Spark*, which supports the convergence of AI and big data, has proven to be a highly effective platform. Dr. Ma and her team have already provided the community with a steady stream of source code contributions and optimizations for Spark, and in this interview she reveals that more, and even more exciting, work is underway.

Spark depends on memory to perform and scale, so optimizing Spark for the new Intel® Optane™ DC persistent memory offers performance improvements for the data center. In one example, Dr. Ma describes benchmark testing in which Spark SQL runs eight times faster at a 2.6TB data scale on a system with Intel Optane DC persistent memory than on a comparable system using DRAM DIMMs. With Intel Optane DC persistent memory announced and broadly available in 2019, data centers have the chance to achieve workflow unification along with performance gains and system resilience starting now.

For more information about Intel’s work in this space, go to software.intel.com/ai. For more about how Intel is driving advances in the ecosystem, visit intel.com/analytics.

Performance results are based on Intel internal testing: 8X faster insights (8/2/2018), based on Apache Spark* SQL I/O-intensive analytics queries vs. DRAM+HDD at a 2.6TB data scale; 9X read transactions and 11X users per system (5/29/2018), based on an Apache* Cassandra 4.0 workload doing 100% reads vs. a comparable server system with DRAM and NAND NVMe drives; and 12.5X faster restart times (5/30/2018), based on running SAP HANA 2.0 SPS 03. Results may not reflect all publicly available security updates. No product can be absolutely secure.

Configurations: Results have been estimated based on tests conducted on pre-production systems and are provided for informational purposes. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to www.intel.com/benchmarks.

Intel, the Intel logo, and Optane are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation.
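The episode does not specify the Spark configuration used in the benchmark, but the memory pressure it describes is usually managed through Spark's standard memory settings. As a hedged sketch (the property names below are stock Apache Spark options, not Intel's actual benchmark configuration), a `spark-defaults.conf` enabling off-heap caching, the mechanism that lets Spark place cached data outside the JVM heap, for example in a large persistent-memory-backed region, might look like:

```properties
# Executor JVM heap (on-heap) memory
spark.executor.memory          32g
# Enable Spark's off-heap memory region, used for
# caching and execution outside the JVM heap
spark.memory.offHeap.enabled   true
# Size of the off-heap region; on a persistent-memory
# system this can be far larger than DRAM alone allows
spark.memory.offHeap.size      256g
```

The sizes here are illustrative placeholders; appropriate values depend on the hardware and workload.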

172 episodes

