SpyderByte.com Technical Portals
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
Home | About | News Archives | Contribute News, Articles, Press Releases | Mobile Edition | Contact | Advertising/Sponsorship | Search | Privacy
Research and Services
Cluster Quoter (HPC Cluster RFQ)
Hardware Vendors Directory
Software Vendors Directory
HPC Consultants Directory
Training Vendors Directory
News
Latest News
Newsletter
News Archives
Search Archives
Reference
Featured Articles
Cluster Builder
Beginners
Whitepapers
Documentation
Software
Lists/Newsgroups
Books
User Groups
Golden Eggs (Configuration Diagrams)
Linux HPC News Update
Stay current on Linux-related HPC news, events, and information.
LinuxHPC Newsletter

Other Mailing Lists:
Linux High Availability
Beowulf Mailing List
Gelato.org (Linux Itanium)

Linux HPC Links
Favorites:
Cluster Monkey (Doug Eadline, et al)
HPCWire (Tabor Communications)
Scalability.org (Dr. Joe Landman)

Beowulf.org
Beowulf Users Group
High Performance Computing Clusters
Thinking Parallel
insideHPC.com
Gelato.org
The Aggregate
Top500
Cluster Computing Info Centre
Coyote Gulch
Robert Brown's Beowulf Page
FM.net: Scientific/Engineering
SuperComputingOnline
HPC User Forum
GridsWatch
LinuxHPC.org
Home
About
Contact
Mobile Edition
Sponsorship

Linux Cluster RFQ Form
Reach multiple vendors with one Linux cluster RFQ form. Save time and effort and let LinuxHPC.org do all the legwork for you, free of charge. Request a Quote...

Latest News

Perils and pitfalls of HPC leads off LCI Conference
Posted by Barbara Jewett, Wednesday May 16 2007 @ 11:04AM EDT

By Gary Montry

The focus of this year's LCI Conference is big clusters: not raw performance per se, but every other factor required to acquire, host, provision, maintain, and achieve scalable performance from these systems as a whole.

The first two keynotes set the tone by describing the perils and pitfalls of installing huge systems and getting them to perform. Even after a few years, all of the pieces don’t necessarily play together well enough to meet the original design objectives. Horst Simon began the first day with an excellent philosophical discussion about the current state of high performance computing (HPC), hardware architecture, and the political atmosphere surrounding the drive to assemble the world’s first petaflop machines. He noted that even though we have started construction of a petaflop computer, there are presently only two general-purpose machines in the world capable of 100+ teraflops on the Linpack benchmark.

This was a perfect segue from the opening keynote Monday evening by Robert Ballance of Sandia National Laboratories about the difficulties of assembling Red Storm and getting it to perform. Even though Sandia has years of experience building and maintaining some of the largest supercomputers in the world, Red Storm turned out to be a unique experience for them. Why? Because it was much bigger than anything they had previously built. So the old saw in computing, “if it’s 10x bigger, it is something entirely new,” still holds, and we should not expect a petaflop machine to come together quietly at this moment in HPC time.

One interesting observation which Horst made in his talk is that programming a 100,000+ core machine using MPI is akin to programming each transistor individually by hand on the old Motorola 68000 processor, which of course had only 68,000 transistors. That wasn’t so long ago to most of us, and his point is that we can’t grow too much more in complexity unless we have some new software methodology for dealing with large systems.
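
To make that comparison concrete for readers who don't write MPI every day, here is a minimal sketch (my own illustration, not something from the talk) of the hand-managed, per-rank message passing Horst was describing: a one-dimensional halo exchange in C, where the programmer explicitly spells out every neighbor, buffer size, and matching send/receive pair. The domain size N is arbitrary, and none of this bookkeeping gets any easier as the rank count climbs past 100,000.

/* Illustrative only: a 1-D halo exchange written the way most MPI codes
 * still are -- every transfer managed by hand, per rank.
 * Compile with: mpicc halo.c -o halo */
#include <mpi.h>
#include <stdio.h>

#define N 1024  /* local domain size per rank (arbitrary for this sketch) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local[N + 2];                 /* interior cells plus two ghost cells */
    for (int i = 0; i < N + 2; i++)
        local[i] = (double)rank;

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* The programmer explicitly pairs every send with its matching receive;
     * on 100,000+ ranks this bookkeeping is still all done by hand. */
    MPI_Sendrecv(&local[1], 1, MPI_DOUBLE, left,  0,
                 &local[N + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&local[N], 1, MPI_DOUBLE, right, 1,
                 &local[0], 1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("halo exchange complete on %d ranks\n", size);

    MPI_Finalize();
    return 0;
}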

The discussions generated by his comments never explicitly addressed the fact that we are going to need new compiler technology sooner rather than later to handle this complexity. Neither MPI nor OpenMP is the answer by itself.
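
As a rough illustration of why neither model alone suffices (again my sketch, not anything presented at LCI), the common workaround today is a hand-assembled hybrid: MPI between nodes, OpenMP threads within a node, with the programmer responsible for keeping the two layers coordinated. The C skeleton below shows the shape of such a code.

/* Illustrative hybrid MPI + OpenMP skeleton: MPI across nodes, OpenMP
 * threads within a node. The programmer glues the two models together
 * by hand. Compile with something like: mpicc -fopenmp hybrid.c -o hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    /* Ask for an MPI library that tolerates threads at all. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local_sum = 0.0;

    /* Intra-node parallelism: OpenMP threads share the rank's memory. */
    #pragma omp parallel for reduction(+ : local_sum)
    for (int i = 0; i < 1000000; i++)
        local_sum += 1.0 / (i + 1.0);

    /* Inter-node parallelism: MPI combines the per-rank results. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f (threads per rank: %d)\n",
               global_sum, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}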

The rest of the talks on day one had a heavy emphasis on parallel I/O systems, and the difficulties of getting them to scale on large cluster systems. The problem here is that some of the tests can take so long (Laros, SNLA) that the production system would be unavailable for unacceptable periods of time. So I/O system administrators are forced to do simulations of the I/O systems on smaller development configurations. Presently, it seems that scalable I/O systems are limited to about one KiloClient (my term) for single-process/single-file I/O scenarios. Forget about it if you’re talking about shared-file I/O. I think this is still pretty darn good progress, but the performance variability of these I/O systems is large, and it appears that their performance is very sensitive to a huge number of environmental parameters. Repeatability seems to be somewhere over the HPC horizon.
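
For readers unfamiliar with the distinction drawn above, the two I/O patterns look roughly like the following MPI-IO sketch (my own illustration in C, with arbitrary file names and sizes; the conference talks dealt with real parallel file systems, not this toy). In the file-per-process case each rank opens its own file, while in the shared-file case all ranks write into a single file at computed offsets, which is the pattern that currently scales worst.

/* Illustrative MPI-IO sketch of the two patterns discussed above:
 * file-per-process vs. all ranks sharing a single file. */
#include <mpi.h>
#include <stdio.h>

#define COUNT 1024   /* doubles written per rank (arbitrary) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[COUNT];
    for (int i = 0; i < COUNT; i++)
        buf[i] = rank + i * 1e-6;

    MPI_File fh;

    /* Pattern 1: file-per-process -- each rank owns its own file. */
    char fname[64];
    snprintf(fname, sizeof fname, "out.%06d", rank);
    MPI_File_open(MPI_COMM_SELF, fname,
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write(fh, buf, COUNT, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    /* Pattern 2: shared file -- every rank writes to one file at its own
     * offset; this is the case that scales worst today. */
    MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(double);
    MPI_File_open(MPI_COMM_WORLD, "out.shared",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at(fh, offset, buf, COUNT, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}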

On day two we will have several presentations on software, new systems, and Roadrunner (LANL’s one-petaflop system). It should be a lot of fun.



 

Golden Eggs
(HP Visual Diagram and Config Guides)
Clusters:
CP4000 32x DL145G2 GigE Opteron, Dual Core
CP4000 64x DL145 GigE Opteron
CP4000 102x DL145 GigE Opteron
CP4000 32x DL145 Myri Opteron
Rocks Cluster 16-22 DL145 Opteron
Rocks Cluster 30-46 DL145 Opteron
Rocks Cluster 64-84 DL145 Opteron
LC3000 GigaE 24-36 DL145 Opteron
LC3000 Myri 16-32x DL145 Opteron
LC3000 GigaE 16-22x DL145 Opteron
LC2000 GigaE 16-22x DL360G3 Xeon
ProLiant:
DL365 System 2600MHz 2P 1U Opteron Dual Core
DL360 G5 System 3000MHz 2P 1U EM64T Dual/Quad Core
DL385 G2 2600MHz 2P Opteron Dual Core
DL380 G5 3000MHz 2P EM64T Dual/Quad Core
DL140 3060MHz 2P IA32
DL140 G2 3600MHz 2P EM64T
DL145 2600MHz 2P Opteron
DL145 G2 2600MHz 2P Opteron Dual Core
DL360 G4 3400MHz 2P EM64T
DL360 G4p 3800MHz 2P EM64T
DL380 G4 3800MHz 2P EM64T
DL385 2800MHz 2P Opteron Dual Core
DL560 3000MHz 4P IA32
DL580 G3 3330MHz 4P EM64T
DL585 2800MHz 4P Opteron Dual Core
Integrity:
Montecito 2P-16P, rx2660-rx8640 (multi-system diagram)
rx2660 1600MHz 2P 2U Montecito Systems and Cluster
rx6600 1600MHz 4P 7U Single & Cluster
rx3600 1600MHz 2P 4U Single & Cluster
rx2620 1600MHz 2P 2U Single & Cluster
Superdome 64P base configuration
Integrity Family Portrait (rx1620 thru rx8620), IA64
rx1620 1600MHz 2P MSA1000 Cluster IA64
rx2620 1600MHz 2P MSA1000 Cluster IA64
rx4640 1600MHz 4P MSA1000 Cluster IA64
rx7620 1600MHz 8P 10U Systems and MSA1000 Cluster
rx8620 1600MHz 16P 17U Systems and MSA1000 Cluster
Storage:
MSA30-MI Dual SCSI Cluster, rx3600, rx6600 and rx2660
MSA30-MI Dual SCSI Cluster, rx1620...rx4640
MSA500 G2, SCSI
MSA1510i IP SAN 48TB, SCSI and SATA
MSA1500 48TB, SCSI and SATA
Misc:
Dual Core AMD64 and EM64T systems with MSA1500






Appro: Enterprise and High Performance Computing Whitepapers
Is Your HPC Cluster Ready for Multi-core Processors?:
Multi-core processors bring new challenges and opportunities for the HPC cluster. Get a first look at utilizing these processors and strategies for better performance.

Accelerating Results through Innovation:
Achieve maximum compute power and efficiency with Appro Cluster Solutions. Our highly scalable clusters are designed to integrate seamlessly with existing high-performance, scientific, technical, and commercial computing environments.
Keeping Your Cool in the Data Center:
Rethinking IT architecture and infrastructure is not a simple job. This whitepaper helps IT managers overcome challenges with thermal, power, and system management.

Unlocking the Value of IT with Appro HyperBlade:
A fully integrated cluster combining the advantages of blade and rack-mount servers in a flexible, modular, scalable architecture designed for enterprise and HPC applications.
AMD Opteron-based products | Intel Xeon-based products


Hewlett-Packard: Linux High Performance Computing Whitepapers
Unified Cluster Portfolio:
A comprehensive, modular package of tested and pre-configured hardware, software and services for scalable computation, data management and visualization.

Your Fast Track to Cluster Deployment:
Designed to enable faster ordering and configuration, shorter delivery times and increased savings. Customers can select from a menu of popular cluster components, which are then factory assembled into pre-defined configurations with optional software installation.
Message Passing Interface library (HP-MPI):
A high performance and production quality implementation of the Message-Passing Interface (MPI) standard for HP servers and workstations.

Cluster Platform Express:
Cluster Platform Express comes straight to you, factory assembled and available with pre-installed software for cluster management, and ready for deployment.
AMD Opteron-based ProLiant nodes | Intel Xeon-based ProLiant nodes



     Copyright © 2001-2007 LinuxHPC.org
Linux is a trademark of Linus Torvalds
All other trademarks are those of their owners.
    