Leap Ahead of the Competition
with GPU-Accelerated Computing

Get faster time-to-results without the traditional equipment headaches

Get Ready for the Future with More Powerful, More Efficient Computing

Penguin Computing™ delivers targeted, modular, and complementary AI & Analytics architectures for AI/ML and high-performance data analytics pipelines. Our solutions shorten time to insight and discovery by removing the complexities involved in designing, deploying, and supporting customers’ AI & Analytics infrastructure.

Our GPU-accelerated compute delivers best-of-breed solutions that power our Technology Practices, particularly HPC and AI & Analytics. Our infrastructure offering includes both 19″ EIA and 21″ Tundra (Open Compute) platforms, enabling higher density and alternative non-air cooling for more compute per rack.

The Penguin Computing team (2019 NVIDIA® HPC Preferred OEM Partner of the Year) is experienced in building both CPU- and GPU-based systems, as well as the storage subsystems required for this level of data analytics. The outcome of moving to a GPU-accelerated strategy is superior performance by every measure: faster compute times and reduced hardware requirements.

GPU-Accelerated Servers

19″ EIA Servers

| Server | Processor | PCIe Slots | GPU(s) Supported |
|--------|-----------|------------|------------------|
| 1U / 2U | AMD EPYC™ 7002/7003 Series | 4x PCIe Gen4 x16 (FHFL), 2x PCIe Gen4 x16 (HHHL) | NVIDIA A100 PCIe, NVIDIA V100/V100S PCIe, NVIDIA T4, NVIDIA RTX |
| | AMD EPYC™ 9004 Series | 8x PCIe Gen5 x16 (FHFL), 2x PCIe Gen5 x8 (LP) | NVIDIA H100, L40, A100 |
| | 4th Gen Intel® Xeon® Scalable Processors | 8x PCIe Gen5 x16 (FHFL), 2x PCIe Gen5 x8 (LP) | NVIDIA H100, L40, A100 |
| 3U | AMD EPYC™ 9004 Series | 4x PCIe Gen5 x16 (FHHL), 2x PCIe Gen5 x16 (LP) | NVIDIA HGX H100 SXM5 x4 |
| | 4th Gen Intel® Xeon® Scalable Processors | 6x PCIe Gen5 x16 (LP) | NVIDIA HGX H100 SXM5 x4 |
| 4U | AMD EPYC™ 9004 Series | 8x PCIe Gen5 x16 (FHFL), 2x PCIe Gen5 x8 (LP) | NVIDIA H100, L40, A100 |
| | 4th Gen Intel® Xeon® Scalable Processors | 8x PCIe Gen5 x16 (FHFL), 10x PCIe Gen5 x16 (LP) | NVIDIA H100, L40, A100 |
| 5U | AMD EPYC™ 9004 Series | 12x PCIe Gen5 x16 | NVIDIA HGX H100 SXM5 x8 |
| | 4th Gen Intel® Xeon® Scalable Processors | 12x PCIe Gen5 x16 (LP), 1x PCIe Gen4 x16 (LP) | NVIDIA HGX H100 SXM5 x8 |

21″ OCP Servers

| Server | Processor | PCIe Slots | GPU(s) Supported |
|--------|-----------|------------|------------------|
| 1OU | AMD EPYC™ 7002/7003 Series Processors | 4x PCIe Gen4 x16 (FHFL), 2x PCIe Gen4 x16 (LP) | NVIDIA A100 PCIe |
| 3OU | AMD EPYC™ 7002/7003 Series Processors | 10x PCIe Gen4 (HHHL) | NVIDIA HGX A100 SXM4 x8 |

Selected applications supported by NVIDIA-based Penguin Computing GPU servers:

  • Amber
  • ANSYS Fluent
  • Gaussian
  • Gromacs
  • LS-DYNA
  • NAMD
  • OpenFOAM
  • Simulia Abaqus
  • VASP
  • WRF

Selected deep learning frameworks supported by NVIDIA-based Penguin Computing GPU servers:

  • Caffe2
  • Microsoft Cognitive Toolkit
  • MXNet
  • PyTorch
  • TensorFlow
  • Theano
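
As a small illustration of how these frameworks put the GPUs listed above to work, the sketch below runs one PyTorch training step on a CUDA device when one is available and falls back to the CPU otherwise. It is a minimal sketch, not Penguin Computing software: it assumes a CUDA-enabled PyTorch build, and the model, batch size, and learning rate are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# Select the first available NVIDIA GPU; fall back to CPU if none is present.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Placeholder model and batch; a real workload substitutes its own.
model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
inputs = torch.randn(64, 1024, device=device)
targets = torch.randint(0, 10, (64,), device=device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step; the forward, backward, and update all execute on the GPU when available.
optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"device={device}, loss={loss.item():.4f}")
```

The same pattern scales from a single T4 to a multi-GPU HGX H100 node; only the device selection and (for multi-GPU runs) the data-parallel wrapper change.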

Benefits of GPU-Accelerated Computing

  • Computing Power/Speed: A single GPU can offer the performance of hundreds of CPUs for certain workloads. In fact, NVIDIA, a leading GPU developer, predicts that GPUs will help provide a 1000x acceleration in compute performance by 2025.
  • Efficiency/Cost: Adding a single GPU-accelerated server costs much less in upfront capital expense and, because less equipment is required, reduces footprint and operational costs. Using GPU-enabled libraries also allows organizations to apply GPU acceleration without in-depth knowledge of GPU programming, reducing the time required to achieve results (see the sketch after this list).
  • Flexibility: The inherently flexible nature of GPU programmability allows new algorithms to be developed and deployed quickly across a variety of industries. According to Intersect360 Research, 70% of the most popular HPC applications, including 10 of the top 10, have built-in support for GPUs.
  • Long-Term Benefits: Adding GPU-accelerated computing now prepares you for the artificial intelligence (AI) revolution, which also relies on GPU-accelerated computing. This inevitable increase in reliance on GPUs means that early adopters will not only enjoy greater computing power over time but also widen their lead over competitors who do not migrate to GPU-accelerated computing.
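
To illustrate the Efficiency/Cost point about libraries, the sketch below uses CuPy, whose array API mirrors NumPy, so existing NumPy-style code can run on a GPU without writing CUDA kernels. This is a hedged example rather than a recommended stack: it assumes a CUDA-capable GPU and a CuPy build matching the installed CUDA toolkit, and the matrix sizes are arbitrary.

```python
import numpy as np
import cupy as cp

# Build the problem on the host with NumPy.
a_host = np.random.rand(4096, 4096).astype(np.float32)
b_host = np.random.rand(4096, 4096).astype(np.float32)

# Copy the arrays to GPU memory; subsequent NumPy-like calls run on the GPU.
a_gpu = cp.asarray(a_host)
b_gpu = cp.asarray(b_host)

# The matrix multiply executes on the GPU (backed by cuBLAS).
c_gpu = a_gpu @ b_gpu

# Copy the result back to host memory only when it is needed on the CPU.
c_host = cp.asnumpy(c_gpu)
print(c_host.shape, c_host.dtype)
```

The only GPU-specific steps are the transfers in and out of device memory; the numerical code itself is unchanged NumPy-style Python.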

Learn More About GPU Accelerators