Woven Systems and Chelsio Comm Deliver Scalable HPC Clusters at Sandia National Labs
Posted by Rebecca, Tuesday August 07 2007 @ 11:21AM EDT

Santa Clara and Sunnyvale, CA – August 7, 2007 – Woven Systems™, Inc., the leading innovator of 10 Gigabit Ethernet (GE) fabric solutions, and Chelsio™ Communications, the leading provider of 10 Gigabit Ethernet Unified Wire solutions, today announced performance test results for the world’s largest 10 GE remote direct memory access (RDMA) high performance computing (HPC) cluster built to date. Results from the Sandia National Laboratories 128-node cluster demonstrated twice the average throughput of single data rate (SDR) InfiniBand™ as processors were added, with only a fraction of the performance variation.

GRAPH: http://www.newscom.com/cgi-bin/prnh/20070807/LATU093

The benchmark tests were conducted at Sandia using Chelsio’s R310E iWARP host bus adapter (HBA) and Woven’s EFX 1000 Ethernet Fabric Switch. A test developed by Sandia measured the fabric’s ability to respond dynamically to congestion in the network while delivering the highest levels of throughput and consistent performance.

The combined 10 GE fabric and RDMA solution demonstrated a peak unidirectional bandwidth of 1204 megabytes per second (MB/s), compared to 960 MB/s for SDR InfiniBand. Moreover, the test results showed that as processors were added, the average throughput for InfiniBand fell to 585 MB/s, or 61% of its maximum, whereas the 10 GE solution maintained 99% of its maximum throughput.
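
Peak unidirectional bandwidth figures of this kind are what a simple point-to-point bandwidth microbenchmark produces. As a purely illustrative sketch (not part of the Cbench suite; the message size and iteration count are assumptions), a minimal two-rank MPI measurement looks like this:

    /* Minimal two-rank unidirectional bandwidth probe (illustrative only).
     * Rank 0 streams large messages to rank 1 and reports MB/s. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MSG_BYTES (4 * 1024 * 1024)   /* 4 MB per message (assumed) */
    #define ITERS     100                 /* iteration count (assumed) */

    int main(int argc, char **argv)
    {
        int rank;
        char *buf = malloc(MSG_BYTES);

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        for (int i = 0; i < ITERS; i++) {
            if (rank == 0)
                MPI_Send(buf, MSG_BYTES, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            else if (rank == 1)
                MPI_Recv(buf, MSG_BYTES, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        }

        double elapsed = MPI_Wtime() - t0;
        if (rank == 0)
            printf("unidirectional bandwidth: %.1f MB/s\n",
                   (double)MSG_BYTES * ITERS / elapsed / 1e6);

        free(buf);
        MPI_Finalize();
        return 0;
    }

Run with two ranks placed on different nodes (for example, mpirun -np 2 with one rank per host) so the measurement actually crosses the interconnect rather than shared memory.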

“We are pleased to see an industry-standard 10 Gigabit Ethernet interconnect that meets the needs of HPC applications,” said John Naegle, senior engineer with Sandia. “The ability of the Woven 10 GE fabric to dynamically respond to congestion significantly reduced the gap between maximum and average throughput compared to InfiniBand, making this solution very attractive for applications with varying I/O profiles. Additionally, we were very impressed with the sustained performance demonstrated with Chelsio’s 10 GE RDMA implementation.”

“InfiniBand has been the unrivaled solution for building HPC clusters,” noted Joe Skorupa, research vice president, Enterprise Communications Applications and Infrastructure at Gartner, Inc. “As 10 GE vendors deliver on the promise of high throughput, low latency NICs and fabrics, 10 GE will become a viable option for many workloads.”

Woven’s Active Congestion Management feature continuously monitored traffic across Sandia’s 10 GE cluster interconnect. When congestion was detected, the system re-routed traffic to a less congested path without dropping data. As a result, Sandia observed more consistent performance for 10 GE as the cluster scaled. Coupled with Chelsio’s iWARP HBAs, which support OS bypass and can handle out-of-order packets in the fast path, the solution delivered consistently low latency and high throughput without manual re-tuning. HPC users can therefore deploy new applications more quickly and utilize expensive compute resources more effectively.
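
Woven has not published the internals of Active Congestion Management, so the following is only a hedged sketch of the behavior described above: per-path load is monitored, and when a flow’s current path crosses a congestion threshold the flow is re-homed onto the least loaded alternative. The structures, names, and the 80% threshold are all assumptions made for illustration.

    /* Illustrative sketch only: models "detect congestion, then re-route
     * onto a less congested path" with assumed data structures. It is not
     * Woven's implementation. */
    #include <stdio.h>

    #define NPATHS 4
    #define CONGESTION_THRESHOLD 0.80   /* assumed trigger point */

    struct path {
        int    id;
        double utilization;   /* 0.0 .. 1.0, from fabric telemetry */
    };

    /* Keep the current path unless it is congested; otherwise move the
     * flow to the least utilized alternative (no packets are dropped,
     * only the path choice changes). */
    static int reroute_if_congested(const struct path paths[NPATHS], int current)
    {
        if (paths[current].utilization < CONGESTION_THRESHOLD)
            return current;

        int best = current;
        for (int i = 0; i < NPATHS; i++)
            if (paths[i].utilization < paths[best].utilization)
                best = i;
        return best;
    }

    int main(void)
    {
        struct path paths[NPATHS] = { {0, 0.92}, {1, 0.35}, {2, 0.60}, {3, 0.88} };
        int flow_path = reroute_if_congested(paths, 0);
        printf("flow now routed via path %d\n", flow_path);   /* prints path 1 */
        return 0;
    }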

“Sandia’s tests prove that 10 GE can deliver the combination of the performance needed by the HPC community with the ease-of-use of Ethernet,” added Kianoosh Naghshineh, president and CEO of Chelsio Communications. “Chelsio’s unique low latency data-flow processor is particularly suited for iWARP applications, and in conjunction with a low latency 10 GE switch, can deliver all the benefits of today’s high performance cluster fabrics.”

“It is critical that the interconnect fabric can scale and manage the congestion that inevitably occurs as HPC clusters scale,” concluded Harry Quackenboss, president and CEO of Woven Systems. “Active Congestion Management allows HPC users to confidently share a cluster without the need to re-engineer the network for each application, saving time and resources.”

Sandia’s Benchmark Tests

Sandia’s Cbench HPC benchmark suite is a collection of tests designed to characterize and stress the capabilities of HPC interconnects. In the test designed to highlight congestion effects, high-bandwidth traffic is transmitted between pairs of sources and destinations. The test rotates through a series of such pairs to present the fabric with a varying communication pattern. For additional information about the Cbench test suite, refer to http://cbench-sf.sourceforge.net/.
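
The actual congestion test lives in the Cbench suite at the URL above; the fragment below is only a simplified illustration of the rotating-pairs idea, where round r pairs rank i with rank (i + r) mod P so the fabric sees a constantly changing communication pattern. The payload size and the use of MPI_Sendrecv are assumptions made for a compact example.

    /* Simplified rotating pairwise traffic pattern (illustration only,
     * not the Cbench congestion test). Each round pairs every rank with
     * a different partner so the traffic pattern keeps shifting. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MSG_BYTES (1024 * 1024)   /* 1 MB payload (assumed) */

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        char *sendbuf = malloc(MSG_BYTES);
        char *recvbuf = malloc(MSG_BYTES);

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        double t0 = MPI_Wtime();
        for (int round = 1; round < nprocs; round++) {
            int dst = (rank + round) % nprocs;            /* partner this round */
            int src = (rank - round + nprocs) % nprocs;
            MPI_Sendrecv(sendbuf, MSG_BYTES, MPI_BYTE, dst, 0,
                         recvbuf, MSG_BYTES, MPI_BYTE, src, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Barrier(MPI_COMM_WORLD);                  /* align rounds */
        }
        double elapsed = MPI_Wtime() - t0;

        if (rank == 0)
            printf("%d rounds in %.3f s, per-rank send bandwidth %.1f MB/s\n",
                   nprocs - 1, elapsed,
                   (double)MSG_BYTES * (nprocs - 1) / elapsed / 1e6);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }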

The tests compared the performance of SDR InfiniBand and 10 GE HPC cluster interconnects. The 128-node test cluster was configured using Chelsio’s R310E iWARP HBA supporting the OpenFabrics 1.2 RDMA stack and Woven’s low latency 144-port EFX 1000 Ethernet Fabric Switch.

The test measured I/O bandwidth as the number of nodes increased and traffic patterns varied. As node count increased, the average bandwidth for InfiniBand decreased to 62% of the maximum rate due to congestion in the network. The 10 GE solution maintained average bandwidth at 99% of the maximum rate. This result was due to its ability to respond to congestion by dynamically re-balancing traffic onto alternate paths. While double data rate InfiniBand provides a higher maximum data rate, test performance is expected to exhibit similar scaling characteristics due to congestion as node count increases.

Woven Systems’ EFX 1000 Ethernet Fabric Switch is a modular 10U switching platform with configurable line cards that together support up to 144 non-blocking 10 GE ports per chassis. Woven's switch architecture incorporates patented vSCALE™ packet processing technology to efficiently manage the distribution of traffic through the switch fabric. The Active Congestion Management feature dynamically monitors traffic to detect congestion across a large fabric and automatically redirects traffic onto less congested paths. Utilizing cut-through switching, the EFX 1000 achieves port-to-port latency of 1.5 microseconds through a single switch and 4 microseconds across a 4000-node fabric.

The Chelsio R310E HBA includes on-board hardware that offloads iWARP RDMA processing from its host system, freeing up host CPU cycles for application processing. RDMA, which enables high throughput and low latency, is inherent in the InfiniBand protocol, and the OFED iWARP stack now makes RDMA technology available for 10 GE interconnects. By implementing all exception paths in hardware, with a built-in fast error recovery mechanism and a single proprietary data-flow processor running at 10 Gbit/s, Chelsio’s iWARP adapters contribute significantly to the high level of performance demonstrated at scale.
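
Because the OFED stack exposes InfiniBand and iWARP devices through the same verbs API, RDMA code written against libibverbs can target either fabric. As a small, hedged example (assuming a libibverbs version recent enough to report a device transport type), a program can enumerate the local RDMA adapters and distinguish the two transports like this:

    /* Enumerate RDMA devices via libibverbs and report whether each uses
     * the iWARP (RDMA over Ethernet) or InfiniBand transport. Compile
     * with -libverbs; illustrative only. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            const char *transport =
                (devs[i]->transport_type == IBV_TRANSPORT_IWARP) ? "iWARP" :
                (devs[i]->transport_type == IBV_TRANSPORT_IB)    ? "InfiniBand" :
                                                                   "unknown";
            printf("%-16s %s\n", ibv_get_device_name(devs[i]), transport);
        }

        ibv_free_device_list(devs);
        return 0;
    }

On a node with a Chelsio adapter configured for iWARP, the device would be reported with the iWARP transport; the same binary runs unchanged on an InfiniBand node.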

About Woven Systems

Woven Systems is an innovative network infrastructure provider that offers the industry’s first massively scalable Ethernet fabric switching solutions for data centers. Fully compliant with IEEE Ethernet standards, Woven redefines network performance and efficiency with Active Congestion Management for balancing traffic, operational simplicity, and a significantly lower cost of switching. The Woven solutions deliver the performance and scalability of InfiniBand™, the reliability of Fibre Channel and the ease-of-use of Ethernet. For more information, contact Woven Systems at the company’s web site http://www.wovensystems.com

About Chelsio Communications

Chelsio Communications is leading the convergence of networking, storage and clustering interconnects with its robust, high-performance and proven unified wire technology. Featuring a highly scalable and programmable architecture, Chelsio is shipping 10-Gigabit Ethernet and multi-port Gigabit Ethernet adapter cards, delivering the low latency and superior throughput required for high-performance computing applications. For more information, visit the company online at http://www.chelsio.com

About Sandia National Laboratories

Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin company, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness. Learn more at http://www.sandia.gov

For additional information about Woven Systems:
Curtis Chan
CHAN & ASSOCIATES, INC.
1.714.447.4993, Ext. 100
cj_chan@chanandassoc.com

Rebecca Engel
CHAN & ASSOCIATES, INC.
1.714.447.4993, Ext. 106
rebecca@chanandassoc.com

Derek Granath
Woven Systems Inc.
1.408.654.8900, Ext. 140
dgranath@wovensystems.com

For additional information about Chelsio Communications:
Bruck Girmay
Chelsio Communications
1.408.962.3632
bruck@chelsio.com

