Leap Ahead of the Competition
with GPU-Accelerated Computing

Get faster time-to-results without the traditional equipment headaches

Get Ready for the Future with More Powerful, More Efficient Computing

Penguin Computing™ delivers targeted, modular, and complementary AI & Analytics architectures for AI/ML and high-performance data analytics pipelines. Our solutions shorten time to insight and discovery by removing the complexities involved in designing, deploying, and supporting customers' AI & Analytics infrastructure.

Our GPU-accelerated compute delivers best-of-breed solutions that power our Technology Practices, especially HPC as well as AI & Analytics. Our infrastructure offering includes both 19″ EIA and 21″ Tundra (Open Compute) platforms, enabling higher density and alternative (non-air) cooling for more compute per rack.

The Penguin Computing team, NVIDIA's 2019 HPC Preferred OEM Partner of the Year, is experienced in building both CPU- and GPU-based systems, as well as the storage subsystems required for this level of data analytics. The outcome of moving to a GPU-accelerated strategy is superior performance by every measure: faster compute times and reduced hardware requirements.

GPU-Accelerated Servers

19″ EIA Servers

Server | Processor | PCIe Slots | GPU(s) Supported / Max Memory
1U | Intel® Xeon® Scalable Processors | 4x PCIe Gen3 x16 (GPU), 2x PCIe Gen3 x16 (LP-MD2) | NVIDIA A100 PCIe, NVIDIA V100/V100S PCIe, NVIDIA T4, NVIDIA RTX
2U | Intel® Xeon® Scalable Processors | 2x PCIe Gen3 x16 (GPU), 2x PCIe Gen3 x8 (LP), 2x OCP Mezz | NVIDIA A100 PCIe, NVIDIA V100/V100S PCIe, NVIDIA T4, NVIDIA RTX
2U | Intel® Xeon® Scalable Processors | 4x PCIe Gen3 x16 (GPU), 1x PCIe Gen3 x16 (LP), 1x PCIe Gen3 x8 (LP) | NVIDIA A100 PCIe, NVIDIA V100/V100S PCIe, NVIDIA T4, NVIDIA RTX
2U | Intel® Xeon® Scalable Processors | 8x PCIe Gen3 x16 (GPU), 2x PCIe Gen3 x16 (LP) | NVIDIA A100 PCIe, NVIDIA V100/V100S PCIe, NVIDIA T4, NVIDIA RTX
2U | AMD EPYC™ 7002/7003 Series | 4x PCIe Gen4 x16 (FHFL), 2x PCIe Gen4 x16 (HHHL) | NVIDIA A100 PCIe, NVIDIA V100/V100S PCIe, NVIDIA T4, NVIDIA RTX
2U | AMD EPYC™ 9004 Series | 8x PCIe Gen5 x16 (FHFL), 2x PCIe Gen5 x8 (LP) | Up to 6TB (24 DIMMs)
2U | 4th Gen Intel® Xeon® Scalable Processors | 8x PCIe Gen5 x16 (FHFL), 2x PCIe Gen5 x8 (LP) | Up to 6TB DDR5-4800MHz (24 DIMMs)
4U | Intel® Xeon® Scalable Processors | 8x PCIe Gen3 x16 (GPU), 2x PCIe Gen3 x16 (LP) | NVIDIA A100 PCIe, NVIDIA V100/V100S PCIe, NVIDIA T4, NVIDIA RTX
4U | Intel® Xeon® Scalable Processors | 8x NVIDIA SXM2 (GPU), 2x PCIe Gen3 x16 (LP) | NVIDIA V100 SXM2
4U | AMD EPYC™ 9004 Series | 8x PCIe Gen5 x16 (FHFL), 2x PCIe Gen5 x8 (LP) | Up to 6TB (24 DIMMs)
4U | 4th Gen Intel® Xeon® Scalable Processors | 8x PCIe Gen5 x16 (FHFL), 10x PCIe Gen5 x16 (LP) | Up to 12TB DDR5-4800MHz (48 DIMMs)

21″ OCP Servers

Server | Processor | PCIe Slots | GPU(s) Supported
1OU | AMD EPYC™ 7002/7003 Series Processors | 4x PCIe Gen4 x16 (FHFL), 2x PCIe Gen4 x16 (LP) | NVIDIA A100 PCIe
1OU | Intel® Xeon® Scalable Processors | 4x PCIe Gen3 x16 (GPU), 2x PCIe Gen3 x16 (LP) | NVIDIA V100 PCIe
1OU | Intel® Xeon® Scalable Processors | 4x NVIDIA SXM2 (GPU), 2x PCIe Gen3 x16 (LP) | NVIDIA V100 SXM2 16GB/32GB
1OU | AMD EPYC™ 7000 Series | 4x PCIe Gen3 x16 (FHFL), 2x PCIe Gen3 x16 (LP) | NVIDIA V100 PCIe
3OU | AMD EPYC™ 7002/7003 Processors | 10x PCIe Gen4 (HHHL) | NVIDIA HGX A100 SXM4 x8

Selected applications supported by NVIDIA-based Penguin Computing GPU servers:

  • Amber
  • ANSYS Fluent
  • Gaussian
  • Gromacs
  • LS-DYNA
  • NAMD
  • OpenFOAM
  • Simulia Abaqus
  • VASP
  • WRF

Selected deep learning frameworks supported by NVIDIA-based Penguin Computing GPU servers:

  • Caffe2
  • Microsoft Cognitive Toolkit
  • MXNet
  • PyTorch
  • TensorFlow
  • Theano

Benefits of GPU-Accelerated Computing

  • Computing Power/Speed: A single GPU can offer the performance of hundreds of CPUs for certain workloads. In fact, NVIDIA, a leading GPU developer, predicts that GPUs will help provide a 1000X acceleration in compute performance by 2025.
  • Efficiency/Cost: Adding a single GPU-accelerated server costs much less in upfront capital expense and, because less equipment is required, reduces footprint and operational costs. Using GPU-accelerated libraries also lets organizations adopt GPU acceleration without in-depth knowledge of GPU programming, reducing the time investment required to achieve results.
  • Flexibility: The inherently flexible nature of GPU programmability allows new algorithms to be developed and deployed quickly across a variety of industries. According to Intersect360 Research, 70% of the most popular HPC applications, including 10 of the top 10, have built-in support for GPUs.
  • Long-Term Benefits: Adding GPU-accelerated computing now prepares you for the artificial intelligence (AI) revolution, which also relies on GPU-accelerated computing. This growing reliance on GPUs means that early adopters will enjoy not only greater computing power over time, but also a widening performance margin over competitors who do not migrate to GPU-accelerated computing.
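As an illustration of the library point above, here is a minimal sketch (not Penguin-specific) of drop-in GPU acceleration using CuPy's NumPy-compatible API. The code falls back to NumPy when CuPy or a CUDA GPU is unavailable; `pairwise_sq_distances` is a hypothetical helper written for this example:

```python
# Drop-in GPU acceleration via a NumPy-compatible library (a sketch).
# If CuPy and a CUDA GPU are available we use them; otherwise we fall
# back to NumPy. The application code is identical either way -- no
# CUDA kernels are written by hand.
try:
    import cupy as xp   # GPU-accelerated, NumPy-compatible array library
except ImportError:
    import numpy as xp  # CPU fallback with the same API

def pairwise_sq_distances(points):
    """Squared Euclidean distances between all rows of `points`."""
    sq = (points ** 2).sum(axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * (points @ points.T)

pts = xp.asarray([[0.0, 0.0], [3.0, 4.0]])
d = pairwise_sq_distances(pts)
print(float(d[0, 1]))  # 25.0 on CPU or GPU: (3-0)^2 + (4-0)^2
```

Because the same source runs on either backend, teams can prototype on CPUs and move to GPU-accelerated servers without rewriting their analytics code.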

Learn More About GPU Accelerators