SpyderByte.com: Technical Portals
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
Latest News

Open-source software powers top US academic supercomputer
Posted by Ken Farmer, Thursday June 29 2006 @ 09:46AM EDT

BLOOMINGTON, Ind. -- The newest edition of the list of the 500 fastest supercomputers in the world, released today, ranks Indiana University's supercomputer cluster, Big Red, as the fastest supercomputer at any US academic institution and 23rd overall in the world--and it runs on open-source software.

Big Red is built from IBM's latest technology, an e1350 BladeCenter Cluster that uses new chip technology and high-speed internal networks to perform calculations at very high rates. Running the SLES 9 operating system, Big Red is, as of today, the largest IBM e1350 system in the world, with a theoretical peak capability of 20.4 trillion mathematical operations per second. It contains a total of 1024 dual-core IBM PowerPC 970MP chips running at a clock rate of 2.5GHz; each core has two floating-point units and one vector unit. Big Red comprises 512 JS21 blade servers, each containing two dual-core PowerPC 970MP processors and 8GB of RAM, with both Ethernet and Myrinet2000 interconnects.
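The component counts above are enough to sanity-check the quoted peak of 20.4 trillion operations per second. The back-of-the-envelope sketch below assumes each 970MP core's two floating-point units can each complete one fused multiply-add (counted as two operations) per cycle; that issue-rate detail is an assumption about the microarchitecture, not a figure taken from the article:

```python
# Back-of-the-envelope check of Big Red's quoted peak throughput.
chips = 1024             # dual-core PowerPC 970MP chips (512 blades x 2)
cores_per_chip = 2
fpus_per_core = 2        # assumed: two FPUs per core
flops_per_fpu_cycle = 2  # assumed: one fused multiply-add = 2 operations/cycle
clock_hz = 2.5e9         # 2.5GHz clock rate

peak_flops = (chips * cores_per_chip * fpus_per_core
              * flops_per_fpu_cycle * clock_hz)
print(f"{peak_flops / 1e12:.2f} TFLOPS")  # 20.48 TFLOPS
```

Under those assumptions the arithmetic gives 20.48 TFLOPS, matching the 20.4 trillion operations per second quoted for the full machine.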

Indiana University will be relying on a suite of open-source software to operate Big Red. Open-source software offers the best opportunity to achieve high levels of performance and to get this very new and innovative system up and running quickly so that it is producing new scientific breakthroughs as rapidly as possible.

One of the great challenges in achieving high performance from applications running on Big Red and other large supercomputers is managing the system's many processors and multiple layers of complexity. In Big Red, each PowerPC 970MP core has two floating-point units and a vector unit; two dual-core 970MP processors share 8GB of RAM on each JS21 blade server; and each of the 512 blade servers has two different communication paths to the rest of the system: Myrinet2000 and Ethernet.

To achieve the greatest parallel-processing efficiency, IU will use Open MPI, an implementation of the Message Passing Interface (MPI) specification created by an international consortium of major research labs, including the Open Systems Lab, part of Pervasive Technology Labs at Indiana University. Open MPI provides especially advanced tools for effectively exploiting a complex supercomputer cluster such as Big Red.

In addition, Indiana University will use the performance-analysis tool Vampir NG, produced by the Technische Universität Dresden, to study and improve the performance of applications running on the system. Vampir NG uses the open-source Open Trace Format to store data about application performance. As supercomputers get faster and more complex, open-source software provides the capabilities and nimbleness required to extract the best possible application performance--and thus the most important scientific breakthroughs--from these massive new machines.

Big Red will also play a major role in the TeraGrid, the National Science Foundation's flagship effort to create an advanced national cyberinfrastructure. Cyberinfrastructure refers to supercomputers, massive data storage systems, advanced instruments, and people all connected by high speed networks, enabling new possibilities in scientific research. The National Science Foundation's goal for the TeraGrid is to make US scientific research more productive and to enhance the international competitiveness of US scientists. Big Red will be connected to the TeraGrid this summer, and will at that time be the fastest supercomputer connected to this innovative national grid computing system.

     Copyright © 2001-2007 LinuxHPC.org
Linux is a trademark of Linus Torvalds
All other trademarks are those of their owners.