InfiniPath Scales Better and Delivers 50 to 200 Percent Higher Performance than Competitive Interconnect Products
International Supercomputer Conference - Heidelberg, Germany - 23 June, 2005 - PathScale released new benchmark results this week showing that its new InfiniPath™ interconnect for InfiniBand™ dramatically outperforms competitive interconnect solutions by providing the lowest latency across a broad spectrum of cluster-specific benchmarks. The results were announced at the International Supercomputer Conference 2005 in Heidelberg, Germany.
PathScale InfiniPath achieved an MPI latency of 1.32 microseconds (as measured by the standard MPI "ping-pong" benchmark), an n1/2 message size of 385 bytes and a TCP/IP latency of 6.7 microseconds. This represents performance 50 to 200 percent better than the newly announced Mellanox and Myricom interconnect products. InfiniPath also produced industry-leading results on more comprehensive metrics that predict how real applications will perform.
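The ping-pong benchmark cited above times a message bounced between two processes and reports half the average round trip as the one-way latency. A minimal sketch of that methodology over a loopback socket (illustrative only; the actual MPI benchmark exchanges messages with MPI_Send/MPI_Recv between two ranks, and the numbers here reflect the host OS, not any interconnect):

```python
import socket
import threading
import time

def pingpong_latency(iters=1000, msg=b"x"):
    """Estimate one-way latency as half the average round-trip time,
    mirroring the MPI ping-pong methodology on a loopback socket pair."""
    a, b = socket.socketpair()

    def echo():
        # Peer side: bounce every message straight back.
        for _ in range(iters):
            b.sendall(b.recv(len(msg)))

    t = threading.Thread(target=echo)
    t.start()
    start = time.perf_counter()
    for _ in range(iters):
        a.sendall(msg)   # ping
        a.recv(len(msg)) # pong
    elapsed = time.perf_counter() - start
    t.join()
    a.close()
    b.close()
    # One-way latency = round-trip time / 2, averaged over all iterations.
    return elapsed / iters / 2

print(f"loopback one-way latency: {pingpong_latency() * 1e6:.1f} us")
```

The same loop structure, run over an interconnect with hardware-level message passing, is what produces the microsecond-scale figures quoted in the release.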
More data is available at:
The InfiniPath HTX™ Adapter is a low-latency cluster interconnect for InfiniBand™ that plugs into standard HyperTransport technology-based HTX slots on AMD Opteron servers. Optimized for communications-sensitive applications, InfiniPath is the industry's lowest-latency Linux cluster interconnect for message passing (MPI) and TCP/IP applications.
"When evaluating interconnect performance for HPC applications, it is essential to go beyond the simplistic zero-byte latency and peak streaming bandwidth benchmarks," said Art Goldberg, COO of PathScale. "InfiniPath delivers the industry's best performance on simple MPI benchmarks and provides dramatically better results on more meaningful interconnect metrics such as n1/2 message size (or half-power point), latency across a spectrum of message sizes, and latency across multiprocessor nodes. These are important benchmarks that give better indications of real world application performance. We challenge users to benchmark their own applications on an InfiniPath cluster and see what the impact of this breakthrough performance means to them."
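The half-power point mentioned in the quote has a simple closed form under the standard Hockney latency-bandwidth model (an assumption here, not PathScale's published methodology): if transfer time is T(n) = t0 + n/B for latency t0 and peak bandwidth B, effective bandwidth n/T(n) reaches half of peak exactly at n1/2 = t0 x B. A small sketch, with illustrative numbers (the bandwidth value is assumed, and a measured n1/2 need not match this idealized model):

```python
def n_half(latency_s, peak_bw_bytes_per_s):
    """Half-power message size under the Hockney model T(n) = t0 + n/B.
    Effective bandwidth n/T(n) equals B/2 exactly when n = t0 * B."""
    return latency_s * peak_bw_bytes_per_s

def effective_bandwidth(n_bytes, latency_s, peak_bw_bytes_per_s):
    """Achieved bandwidth for an n-byte message under the same model."""
    return n_bytes / (latency_s + n_bytes / peak_bw_bytes_per_s)

# Hypothetical inputs for illustration: the 1.32 us latency from the
# release, paired with an assumed peak bandwidth of 950 MB/s.
t0 = 1.32e-6
B = 950e6
n12 = n_half(t0, B)
# At n = n1/2 the effective bandwidth is exactly half of peak.
assert abs(effective_bandwidth(n12, t0, B) - B / 2) < 1e-6 * B
```

A smaller n1/2 means the interconnect reaches useful bandwidth on smaller messages, which is why the metric is a better predictor for communications-intensive codes than peak streaming bandwidth alone.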
PathScale InfiniPath uniquely exploits multi-processor nodes and dual-core processors to deliver greater effective bandwidth as additional CPUs are added. Existing serial offload HCA designs cause messages to stack up when multiple processors try to access the adapter. By contrast, the messaging parallelization capabilities of InfiniPath enable multiple processors or cores to send messages simultaneously, maintaining constant latency, dramatically improving small-message capacity, further reducing the n1/2 message size, and substantially increasing effective bandwidth.
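The contrast described above can be sketched abstractly (this is an illustration of the two injection models, not PathScale's implementation): in a serial-offload design every core funnels sends through one shared queue, while per-core send contexts let each core inject messages independently with nothing to contend on.

```python
import threading
from queue import Queue

def send_shared(shared_queue, core_id, n):
    # Serial-offload model: all cores contend on one adapter queue,
    # so sends from different cores serialize against each other.
    for i in range(n):
        shared_queue.put((core_id, i))

def send_per_core(contexts, core_id, n):
    # Parallel-injection model: each core owns its own send context,
    # so there is no shared state to contend on.
    for i in range(n):
        contexts[core_id].append((core_id, i))

cores, n = 4, 1000
shared = Queue()
contexts = [[] for _ in range(cores)]

threads = [threading.Thread(target=send_shared, args=(shared, c, n))
           for c in range(cores)]
threads += [threading.Thread(target=send_per_core, args=(contexts, c, n))
            for c in range(cores)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both models deliver every message; the difference is contention, which
# on real hardware appears as rising latency under multi-core load.
assert shared.qsize() == cores * n
assert sum(len(ctx) for ctx in contexts) == cores * n
```

In the per-core model, adding CPUs adds injection capacity, which is the behavior the release attributes to InfiniPath's constant latency under multiprocessor load.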
"We compared the performance of PathScale's InfiniPath interconnect on a 16-node/32-CPU test run with VASP, a quantum mechanics application used frequently in our facility, and found that VASP running on InfiniPath was about 50 percent faster than on Myrinet," said Martin Cuma, Scientific Applications Programmer for the Center for High-Performance Computing at the University of Utah. "Standard benchmarks do not give an accurate picture of how well an interconnect will perform in a real-world environment. Performance improvement will vary with different applications due to their parallelization strategies, but InfiniPath almost always delivers better performance than other interconnects when you scale it to larger systems and run communications-intensive scientific codes. InfiniPath has proven to be faster and to scale better for our parallel applications than other cluster interconnect solutions that we tested."
PathScale InfiniPath Performance Results
PathScale has published a white paper that includes a technical analysis of several application benchmarks that compare the new InfiniPath interconnect with competitive interconnects. This PathScale white paper can be downloaded from: www.pathscale.com/whitepapers.html
PathScale Customer Benchmark Center
PathScale has established a fully-integrated InfiniPath cluster at its Customer Benchmark Center in Mountain View, California. Potential customers and ISVs are invited to remotely test their own MPI and TCP/IP applications and personally experience the clear performance advantages of the InfiniPath low-latency interconnect.
Based in Mountain View, California, PathScale develops innovative software and hardware technologies that substantially increase the performance and efficiency of Linux clusters, the next significant wave in high-end computing. Applications that benefit from PathScale's technologies include seismic processing, complex physical modeling, EDA simulation, molecular modeling, biosciences, econometric modeling, computational chemistry, computational fluid dynamics, finite element analysis, weather modeling, resource optimization, decision support and data mining. PathScale's investors include Adams Street Partners, Charles River Ventures, Enterprise Partners Venture Capital, CMEA Ventures, ChevronTexaco Technology Ventures and the Dow Employees Pension Plan. For more details, visit http://www.pathscale.com , send email to firstname.lastname@example.org or telephone 1-650-934-8100.