Driving Data Center Performance Through Intel Memory Technology – Intel Chip Chat – Episode 624

All Connected Social Media content belongs to its original creators; copyright is held by the authors, not Player FM. Audio is streamed directly from their servers.

In this Intel Chip Chat audio podcast with Allyson Klein, Dr. Ziya Ma, vice president of the Intel Software and Services Group and director of Data Analytics Technologies, gives Chip Chat listeners a look at data center optimization along with a preview of advancements well underway.

In their work across the industry, Dr. Ma and her team have found that taming the data deluge calls for IT data center managers to unify their big data analytics and AI workflows. As they've helped customers overcome the memory constraints involved in data caching, Apache Spark, which supports the convergence of AI and big data, has proven to be a highly effective platform.

Dr. Ma and her team have already provided the community with a steady stream of source code contributions and optimizations for Spark. In this interview she reveals that more, and even more exciting, work is underway.

Spark depends on memory to perform and scale. That means optimizing Spark for the new Intel Optane DC persistent memory offers significant performance improvements for the data center.

In one example, Dr. Ma describes benchmark testing in which Spark SQL runs eight times faster at a 2.6TB data scale on a system with Intel Optane DC persistent memory than on a comparable system using DRAM DIMMs.

With Intel Optane DC persistent memory announced and broadly available in 2019, data centers can begin pursuing workflow unification, along with performance gains and system resilience, starting now.

For more information about Intel’s work in this space, go to:
software.intel.com/ai

For more about how Intel is driving advances in the ecosystem, visit:
intel.com/analytics
