SpyderByte.com : Technical Portals
      
 News & Information Related to Linux High Performance Computing, Linux Clustering and Cloud Computing
Home About News Archives Contribute News, Articles, Press Releases Mobile Edition Contact Advertising/Sponsorship Search Privacy
Latest News

285.2 TeraFLOPS Linpack at 736 MegaFLOPS/Watt and PUE of 1.1
Posted by Ken Farmer, Tuesday November 16 2010 @ 06:37AM EST

LOEWE-CSC high-performance cluster at Goethe University Frankfurt ranks among the Top500 fastest and Green500 most efficient systems, with best TCO

NEW ORLEANS, Nov. 15, 2010 /PRNewswire/ -- SC10 -- The Frankfurt Institute for Advanced Studies (FIAS) at the Goethe University Frankfurt today announced a 285.2 TeraFLOPS Linpack score at 736 MegaFLOPS per watt and a PUE of only 1.1, a key milestone for the "Landes-Offensive zur Entwicklung Wissenschaftlich-ökonomischer Exzellenz" (LOEWE, the Hessian initiative for the development of scientific and economic excellence).

Professor Lindenstruth, who holds a Helmholtz International Center (HIC) LOEWE professorship for the international large-scale Facility for Antiproton and Ion Research (FAIR), reports that the LOEWE high-performance computing cluster (LOEWE-CSC) achieved a 285.2 TeraFLOPS Linpack score in its first tuning efforts. This includes a single-node Linpack performance of 563.2 GigaFLOPS, a multi-node performance of 522 GigaFLOPS and a DGEMM performance of 621 GigaFLOPS. These results are confirmed by the system's high rankings in the latest Top500 and Green500 lists at www.top500.org and www.green500.org, respectively.
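As a rough sanity check, the reported figures can be related as simple ratios (a sketch; the release does not state it explicitly, so reading the 522 GigaFLOPS multi-node figure as a per-node rate in the full run is an assumption here):

```python
# Figures quoted in the press release (GigaFLOPS)
single_node_linpack = 563.2
multi_node_linpack = 522.0   # assumed to be the per-node rate in the multi-node run
dgemm_kernel = 621.0

# Linpack spends most of its time in DGEMM, so the kernel rate bounds it from above
print(f"Linpack reaches {single_node_linpack / dgemm_kernel:.1%} of DGEMM throughput")
# Efficiency retained when scaling from one node to the full fabric
print(f"Multi-node run retains {multi_node_linpack / single_node_linpack:.1%} per node")
```

Both ratios land above 90%, which is consistent with the release's claim that these are strong early-tuning results.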

The LOEWE cluster development is based on real-life application performance, green IT and best-TCO directives, supporting the increasingly important role played by large-scale numerical simulations in science and engineering, as well as business, financial markets and medicine. One of the key scientific projects is the HIC for FAIR center, led by Frankfurt.

Performance for Real-Life Applications

The cluster combines 18,432 AMD Opteron™ 6172 processor cores operating at 2.1 GHz in Supermicro 2U Twin servers (SuperServer 2022TG-GIBQRF) with Mellanox ConnectX-2 40Gb/s QDR InfiniBand and 768 ATI Radeon™ HD 5870 GPUs. Implementing key BLAS subroutines such as DGEMM in assembly language delivers the greatest performance increase, exceeding the baseline performance by more than 360%.
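The release credits hand-tuned DGEMM kernels for much of the gain. A minimal sketch of how DGEMM throughput is typically measured, using NumPy's BLAS-backed matrix multiply as a stand-in for the cluster's assembly kernels (the matrix size is an arbitrary choice for illustration):

```python
import time
import numpy as np

n = 1024                       # arbitrary matrix size for illustration
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b                      # dispatches to the underlying BLAS DGEMM routine
elapsed = time.perf_counter() - t0

flops = 2.0 * n ** 3           # a dense n*n by n*n multiply performs ~2n^3 floating-point ops
print(f"{flops / elapsed / 1e9:.1f} GFLOPS")
```

Pointing the same harness at a reference BLAS versus a tuned one is how an "exceeds baseline by more than 360%" style comparison is made.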

"Supermicro is very pleased that this FIAS high-performance computing cluster (HPCC) has achieved outstanding rankings on the prestigious Top500 and Green500 lists," said Hermann von Drateln, Business Development Director for Data Center and High Performance Computing at Super Micro Computer, Inc. "Achieving these rankings using our 2U Twin servers provides further proof that our high-efficiency systems deliver superior performance-per-watt, performance-per-square-foot and performance-per-dollar."

"AMD applauds this outstanding achievement in performance and power efficiency by the new FIAS cluster using AMD Opteron™ 6172 processors and ATI Radeon™ HD 5870 GPUs," said John Fruehe, Director of Server Product Marketing, AMD (NYSE: AMD). "AMD has a significant history of HPC leadership, largely based on AMD Opteron™ processor technology, and is now helping drive a new era of heterogeneous computing in HPC based on world-class CPU and GPU technology."

The target of much less than 500 kW for 285 TeraFLOPS, for the entire infrastructure including cooling, is an industry first: it reduces the cooling power requirement to 10% of the compute power (PUE = 1.1). Specially designed datacenter elements allow top-efficiency cooling of the Supermicro 2U Twin GPU servers, which is paramount to reducing the cooling required for the solution.
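The headline numbers are mutually consistent, as a quick back-of-the-envelope check shows (values taken from the release; this assumes the 736 MFLOPS/W efficiency figure refers to compute power only, as in Green500 reporting):

```python
linpack_tflops = 285.2           # Linpack score from the release
mflops_per_watt = 736.0          # Green500 efficiency figure
pue = 1.1                        # power usage effectiveness

# Compute (IT) power implied by the efficiency figure: 285.2e6 MFLOPS / 736 MFLOPS/W
it_power_kw = linpack_tflops * 1e6 / mflops_per_watt / 1e3   # 387.5 kW
# PUE scales compute power by the facility overhead (cooling, power distribution)
facility_kw = it_power_kw * pue                              # ~426 kW, under the 500 kW target
print(f"compute: {it_power_kw:.1f} kW, facility: ~{facility_kw:.0f} kW")
```

At PUE 1.1 the whole installation, cooling included, stays comfortably below the stated 500 kW target.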

FIAS has teamed with HHLR-GU and HIC for FAIR to implement the LOEWE-CSC cluster, a new HPCC platform, and is pleased to deliver a breakthrough in the economics of high-performance computing for real-life applications, green IT and best-TCO directives.

This new platform will support the increasingly important role played by large-scale numerical simulations in science and engineering, as well as business, financial markets and medicine.

About the HHLR-GU

The Goethe University's Hessian high performance computer organization (HHLR-GU) coordinates all IT and HPC activities at the Goethe University. Part of HHLR-GU is the Center for Scientific Computing (CSC), which was founded as a joint initiative of research groups from the university departments of Physics, Chemistry, Biochemistry and Pharmacy, Geosciences, Computer Science and Mathematics, and the Frankfurt Institute for Advanced Studies (FIAS). The mission of the CSC is HPC support for the scientific community and education in Computational Science. The CSC organizes the interdisciplinary master's program in Computational Science and regular seminars. It will also operate the new flagship computer, LOEWE-CSC.

About the Helmholtz International Center for FAIR

The Helmholtz International Center for FAIR (HIC for FAIR) constitutes a unique think tank for forefront interdisciplinary theoretical and experimental research associated with the international large-scale Facility for Antiproton and Ion Research. FAIR is the new planned accelerator facility at the GSI Helmholtzzentrum für Schwerionenforschung GmbH in Darmstadt. HIC for FAIR is a joint research center of the Hessian universities of Frankfurt (lead), Darmstadt and Gießen, the Frankfurt Institute for Advanced Studies (FIAS), the GSI Helmholtz Centre for Heavy Ion Research GmbH and the Helmholtz Association. It was established in July 2008 within the framework of the Hessian initiative for scientific and economic excellence (LOEWE).

http://www.hessen.de/irj/



 

     Copyright © 2001-2011 LinuxHPC.org
Linux is a trademark of Linus Torvalds
All other trademarks are those of their owners.
    