
LiveData

Support application data needs with powerful, flexible big memory…

The Big Memory Computing Challenge

Data has become one of the greatest drivers of our economy, our businesses, and our IT infrastructures. As data-intensive workloads scale, it is critical to implement data-driven, software-defined architectures that meet the demands of large data sets. These architectures are organized into tiers based on characteristics such as performance, capacity, connectivity, and cost, and the tiers are combined in a variety of data-tiering strategies to optimize a complete data pipeline for the unique requirements of an organization's workloads. A typical data-intensive pipeline benefits from a fast, in-memory tier for latency-sensitive workloads, a fast, scalable flash tier for I/O-intensive workloads, and a capacity-optimized tier for long-term storage.

According to IDC, real-time data was less than 5% of all data in 2015 but is projected to comprise almost 30% of all data by 2024. IDC also projects that by 2021, 60-70% of the Global 2000 will have at least one mission-critical real-time workload.

This Big Bang of real-time data is driving real-time analytics and AI/ML applications into the mainstream. It is also pushing real-time applications past the capacity, performance, and availability limits of in-memory infrastructure, which can lead to congestion, I/O bottlenecks, storage outages, and cost overruns for data-intensive HPC and AI/ML workloads.

These bottlenecks, in turn, can lead to a condition known as Data Greater than Memory (DGM), in which the working data set no longer fits in memory. When this happens, data traditionally overflows to SSDs or hard drives, and performance drops dramatically: accesses can become 1,000 to 5,000 times slower. At that point it is no longer a bottleneck, it is a roadblock.
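As a rough, back-of-the-envelope illustration of that gap (the latencies below are typical order-of-magnitude figures assumed for the sketch, not measurements of any particular system):

```python
# Back-of-the-envelope comparison of memory vs. storage access latency.
# Figures are rough, order-of-magnitude assumptions, not benchmarks.
dram_ns = 100            # ~100 ns per DRAM access
ssd_fast_ns = 100_000    # ~100 us for a fast NVMe SSD read
ssd_slow_ns = 500_000    # ~500 us for a slower flash/storage path

print(f"Fast SSD vs DRAM: ~{ssd_fast_ns // dram_ns:,}x slower")   # ~1,000x
print(f"Slow SSD vs DRAM: ~{ssd_slow_ns // dram_ns:,}x slower")   # ~5,000x
```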

Leveraging a powerful, software-defined architecture to aggregate the performance and capacity of DRAM and persistent memory eliminates these roadblocks and enables innovative workflows. The right technical partner can provide an optimized platform with the architecture, integration, support, and managed services to ensure your success.

The Penguin Computing™ LiveData Solution

Penguin Computing LiveData™ is built upon large-memory server building blocks and memory-centric, software-defined architectures to provide a Big Memory solution that leverages DRAM, Persistent Memory (PMEM), and high-performance, low-latency networking to drive real-time workloads.

Benefits

  • Innovate and speed time-to-market.
  • Scale memory capacity and improve system performance.
  • Maintain availability – recover in seconds, not hours.
  • Improve agility with efficient clone deployment and fast application rollbacks.
  • Avoid application disruption or rewrite.
  • Reduce latency.

Penguin Computing LiveData with Memory Machine

Penguin Computing has partnered with MemVerge™ to create Penguin Computing LiveData with MemVerge Memory Machine™. LiveData with MemVerge Memory Machine addresses the DGM roadblock by providing a memory virtualization software layer that delivers software-defined memory services to applications without requiring any application changes. This allows thousands of applications running in the data center today to take advantage of higher memory capacity at lower cost.
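To illustrate the underlying mechanism only (this is not MemVerge's interface, which requires no application changes): on Linux, persistent memory is commonly exposed through a DAX-mounted filesystem, where a memory-mapped file is backed directly by PMEM and accessed like ordinary memory. The path and size in the sketch below are hypothetical.

```python
import mmap, os

# Illustrative sketch only: a file on a DAX-mounted persistent-memory
# filesystem can be memory-mapped and then read/written with ordinary
# load/store semantics. Memory Machine handles placement like this
# transparently, beneath the application.
path = "/mnt/pmem/example.dat"           # hypothetical DAX mount point
size = 64 * 1024 * 1024                  # 64 MiB region

fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, size)
buf = mmap.mmap(fd, size)                # looks like ordinary memory to the app

buf[0:5] = b"hello"                      # regular byte writes land in PMEM
print(bytes(buf[0:5]))
buf.close()
os.close(fd)
```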

MemVerge Memory Machine allows you to massively scale out DRAM and persistent memory. The result is big memory pools where all applications and data can live. To support all application data needs, MemVerge has built rich big-memory data services such as snapshot, replication, and tiering that, for the first time, enable lightning-fast recovery from in-memory application crashes. Existing tier-1 applications can run safely and transparently on big memory without application rewrites.
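Conceptually, an in-memory snapshot lets an application capture its working state and roll back to it in memory, rather than rebuilding that state from storage after a crash. The toy sketch below is plain Python for illustration only; it is not the Memory Machine API, which operates transparently beneath the application.

```python
import copy

# Toy illustration of the snapshot/restore idea: capture in-memory state,
# then roll back to it after a failure instead of reloading from storage.
state = {"orders": list(range(1_000_000)), "cursor": 0}

snapshot = copy.deepcopy(state)               # point-in-time copy held in memory

state["orders"].clear()                       # simulate a crash / corruption
state["cursor"] = -1

state = copy.deepcopy(snapshot)               # recover from the in-memory snapshot
print(len(state["orders"]), state["cursor"])  # 1000000 0 -- no storage I/O involved
```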

LiveData with MemVerge Memory Machine can be integrated into your existing bare-metal, containerized, virtual, or cloud environments. It can be implemented alone as a solution or in combination with other Penguin Computing solutions for HPC, AI/ML, and Cloud to provide a complete, end-to-end compute platform.

Features:

  • Tier Persistent Memory and DRAM for optimum performance
  • Low-latency memory replication
  • Virtualize memory to form a platform for enterprise-class data services
  • In-memory storage compatible with existing applications
  • Recover hundreds of GB in seconds with ZeroIO™ memory snapshots
  • Clone databases in seconds

Contact us to find out how you can support your application data needs with powerful, flexible big memory.


This solution includes technologies from MemVerge™.