
Enable Lower-Cost Memory Capacity Scaling With CXL

Expand memory for compute resources within data centers, cloud services, and high-performance computing (HPC) easily and cost-effectively, even as your data-intensive workloads grow in complexity and scale.

Scaling the Memory Wall

Memory Pooling & Expansion Considerations

Increasing memory capacity and reducing latency without needing to add processing power is a game-changer. Use memory scaling to streamline real-time processing of large datasets and accelerate execution of complex algorithms and advanced analytics.

As modern applications such as artificial intelligence (AI), machine learning (ML), image processing, in-memory databases, and real-time analytics consume more memory than ever before, the demand for low-latency, high-bandwidth memory has only intensified.

For decades, computer systems have used registered dual inline memory modules (RDIMMs) directly attached to the host motherboard and central processing unit (CPU) via a parallel bus. Expanding the number of modules required adding memory controllers and pins to the CPU.

Addressing this system limitation, Compute Express Link® (CXL) is a high-bandwidth, low-latency, CPU-to-device interconnect standard that builds on existing PCI Express® (PCIe) infrastructure to expand and pool memory.

By leveraging PCIe physical and electrical interfaces while layering on additional transaction protocols (CXL.io, CXL.cache, and CXL.mem), CXL expands capacity, improves energy efficiency, and generates significant cost savings.

Memory Success Takes Expertise

Memory Pooling &
Expansion Expertise

In industries where milliseconds matter, the demand for high-capacity, high-performance expandable memory solutions has never been greater.

As a more economical way to expand memory within current system hardware constraints, CXL is the industry-standard, open protocol for high-speed, low-latency communication between hosts and accelerators, which are increasingly used in HPC, AI, and ML applications.

CXL has emerged as the game-changing solution enabling affordable memory pooling and expansion, flexible scaling, improved performance, and the disaggregation of memory resources from their processors. CXL technology eliminates traditional memory constraints and enables real-time processing of massive datasets with unprecedented efficiency.

CXL Memory Solutions
CXL Memory Expansion Servers

CXL Memory Expansion

A more flexible and scalable memory architecture allows memory modules to be added or removed as needed without having to replace or upgrade the entire system.

CXL Memory Pooling

Memory pooling enables more efficient memory allocation. Partition a device as a Multi-Logical Device (MLD), allowing up to 16 hosts to access different portions of its memory simultaneously.
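The partitioning idea can be sketched in a few lines. This is an illustrative model only, not a real CXL API: the class and method names below are hypothetical, and only the 16-host limit comes from the text above.

```python
MAX_LOGICAL_DEVICES = 16  # an MLD presents at most 16 logical devices to hosts


class MultiLogicalDevice:
    """Toy model of one physical CXL memory device carved up among hosts."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocations = {}  # host_id -> GB assigned to that host

    def assign(self, host_id, size_gb):
        """Carve out a logical device of size_gb for a host."""
        if host_id not in self.allocations and len(self.allocations) >= MAX_LOGICAL_DEVICES:
            raise ValueError("MLD supports at most 16 logical devices")
        if self.allocated() + size_gb > self.capacity_gb:
            raise ValueError("insufficient unassigned capacity")
        self.allocations[host_id] = self.allocations.get(host_id, 0) + size_gb

    def allocated(self):
        return sum(self.allocations.values())


# Example: a 1TB pooled device shared evenly by four hosts
mld = MultiLogicalDevice(capacity_gb=1024)
for host in ["host0", "host1", "host2", "host3"]:
    mld.assign(host, 256)
print(mld.allocated())  # 1024
```

In a real deployment the carve-up is done by a fabric manager over the CXL control path; the point of the sketch is simply that one device's capacity is split into per-host partitions rather than duplicated per server.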

Lower Total Cost of Ownership

Achieve a remarkable 25% cost reduction while matching the capacity of eight 128GB DDR5 RDIMMs: use eight 64GB DDR5 RDIMMs on the motherboard plus an 8-DIMM CXL add-in card (AIC) carrying an additional eight 64GB DDR5 RDIMMs.
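The capacity side of that claim is easy to verify; the arithmetic below uses only the module counts and sizes stated above (prices are omitted).

```python
# Back-of-the-envelope check of the capacity math.
baseline_gb = 8 * 128            # eight 128GB DDR5 RDIMMs, all direct-attached
cxl_config_gb = 8 * 64 + 8 * 64  # eight 64GB RDIMMs on board + eight on a CXL AIC

print(baseline_gb, cxl_config_gb)  # 1024 1024 -> both configurations total 1TB
```

The two configurations deliver identical capacity; the savings come from sixteen lower-density 64GB modules typically costing less per gigabyte than eight 128GB modules.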

Big Memory Servers

Applications that Benefit from
Large Memory Compute Power

Providing low-latency, high-speed memory access, CXL improves response times for memory-intensive AI, ML, and HPC workloads and enables higher memory capacity at a lower cost.

Ingest and process vast amounts of data in real time to accelerate computational analysis workloads, boost simulation processing, and reduce turnaround times.

Use cases that benefit from CXL expansion and pooling capabilities include:

  • In-memory databases: Keep entire datasets in memory for faster processing.
  • Big data analytics, AI, and deep learning: Enable rapid query and model training using very large datasets.
  • Financial modeling: Conduct complex risk analysis of market data in real time for high-frequency trading.
  • Accelerated computing: Ideal for climate modeling, genomics research, fluid dynamics, particle physics, and more.
Team With a Technology Partner

Solving Complexity.
Accelerating Results.

Penguin Solutions applies its more than 25 years of HPC experience to the design, build, deployment, and management of the data center infrastructure required to operationalize AI. We apply best practices and leverage strong and long-term relationships with our technology partners to build massive, highly efficient AI systems.

25+

Years Experience

85,000+

GPUs Deployed & Managed

2+ Billion

Hours of GPU Runtime

Request a Callback

Talk to the Experts at Penguin Solutions

Reach out today and learn how we can help you maximize processing power and lower system memory costs via integrated memory pooling and expansion with the latest CXL technology.

Let's Talk