SpyderByte.com Technical Portals
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
Home | About | News Archives | Contribute News, Articles, Press Releases | Mobile Edition | Contact | Advertising/Sponsorship | Search | Privacy
Research and Services
Cluster Quoter (HPC Cluster RFQ)
Hardware Vendors Directory
Software Vendors Directory
HPC Consultants Directory
Training Vendors Directory
News
Latest News
Newsletter
News Archives
Search Archives
Reference
Featured Articles
Cluster Builder
Beginners
Whitepapers
Documentation
Software
Lists/Newsgroups
Books
User Groups
Golden Eggs (Configuration Diagrams)
Linux HPC News Update
Stay current on Linux-related HPC news, events, and information.
LinuxHPC Newsletter

Other Mailing Lists:
Linux High Availability
Beowulf Mailing List
Gelato.org (Linux Itanium)

Linux HPC Links
Favorites:
Cluster Monkey (Doug Eadline, et al)
HPCWire (Tabor Communications)
Scalability.org (Dr. Joe Landman)

Beowulf.org
Beowulf Users Group
High Performance Computing Clusters
Thinking Parallel
insideHPC.com
Gelato.org
The Aggregate
Top500
Cluster Computing Info Centre
Coyote Gulch
Robert Brown's Beowulf Page
FM.net: Scientific/Engineering
SuperComputingOnline
HPC User Forum
GridsWatch
LinuxHPC.org
Home
About
Contact
Mobile Edition
Sponsorship

Linux Cluster RFQ Form
Reach Multiple Vendors With One Linux Cluster RFQ Form. Save time and effort; let LinuxHPC.org do all the legwork for you, free of charge. Request A Quote...

Latest News

LCI Conference continues with hardware and software
Posted by Barbara Jewett, Thursday May 17 2007 @ 03:35PM EDT

by Gary Montry

One more issue pertaining to large I/O systems: “operability” is not a synonym for “capability.”

An interesting talk by Andrew Uselton and Brian Behlendorf of Lawrence Livermore National Laboratory discussed the difficulties they had with the I/O system delivered with BlueGene/L. They “sweated bullets” (their term, not mine) for six months trying to get the I/O system to perform up to design specs; internally, they referred to it as “the death march.” The system, as delivered, “worked.” However, the severely oversubscribed network design left them with an initial performance deficit of 50 percent of the 30+ GB/sec target. This is akin to spending two hundred grand on a Ferrari and discovering that, without considerable tuning, it won’t get you to the market faster than your neighbor’s Buick. Not that I’m blaming IBM: this talk could have addressed systems from any other manufacturer. There was no sensible way to build the I/O system without oversubscription at that time. It just points out that these complex systems, which push the state of the art, do not come out of the box ready for prime time.
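
To put the shortfall in perspective, here is a back-of-the-envelope sketch. The 30 GB/sec target comes from the talk; the 2:1 oversubscription ratio is my illustrative assumption, chosen because it reproduces the 50 percent deficit they described.

```python
# Back-of-the-envelope arithmetic for an oversubscribed I/O network.
# The 30 GB/s target is from the talk; the 2:1 oversubscription ratio
# is an illustrative assumption, not LLNL's actual figure.

target_gbs = 30.0        # design target from the talk, GB/s
oversubscription = 2.0   # assumed ratio of injection to fabric bandwidth

# If the fabric carries only 1/oversubscription of the aggregate
# injection bandwidth, delivered throughput is capped at:
delivered_gbs = target_gbs / oversubscription
deficit = 1.0 - delivered_gbs / target_gbs

print(f"delivered ~{delivered_gbs:.0f} GB/s, "
      f"a {deficit:.0%} shortfall against the {target_gbs:.0f} GB/s target")
```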

The second day was a sandwich of hardware and software sessions. The morning keynote by Norman Miller (UC Berkeley) discussed the use of cluster-enabled climate modeling software to predict the impact of global warming on the snowpack of California’s Sierra Nevada. It’s not a pretty picture. This work has thrust him into state government politics. The message here is the success of the open-source WRF (Weather Research & Forecasting) project. Norman and his colleague Jin have added unique capabilities to the WRF code in order to do these simulations and will contribute the improvements back to the WRF project for use by other climate researchers.

A short session on DARPA’s HPCS program featured presentations from IBM on their PERCS project and from Cray on the Cascade offering. Both presentations were light on technical details, as might be expected. The important fact to take away from this program was highlighted by the IBM speaker (Govindaraju): the last factor of 10x in performance took IBM five years, but the PERCS project targets a 100x performance gain over the next five years.
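
To see how aggressive that target is, assume smooth compounding over each five-year span (my arithmetic, not IBM's):

```python
# Implied year-over-year performance growth for the two five-year spans
# mentioned by the IBM speaker, assuming simple geometric compounding.

past_factor = 10.0     # 10x achieved over the previous five years
target_factor = 100.0  # PERCS target: 100x over the next five years
years = 5

past_rate = past_factor ** (1 / years)      # ~1.58x per year
target_rate = target_factor ** (1 / years)  # ~2.51x per year

print(f"past:   {past_rate:.2f}x per year")
print(f"target: {target_rate:.2f}x per year")
```

Roughly 1.6x per year historically versus 2.5x per year required: the annual improvement rate itself has to jump by more than half.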

The evening session was the HPC body-building session, where descriptions of several new big machines were paraded before us and muscles were flexed. The parade included Roadrunner (LANL), Abe (NCSA), Ranger (TACC), Jaguar (ORNL), and the Red Storm upgrade (SNLA). The prize for price goes to Ranger, a Sun-built system designed to deliver 529 teraflops at an acquisition cost of $30 million. That works out to slightly less than six cents per megaflop! This is more than a factor of two below the typical price range for large clusters.
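
The cents-per-megaflop figure follows directly from the two numbers quoted above:

```python
# Ranger price/performance check, using the figures quoted in the article.

cost_dollars = 30e6   # $30 million acquisition cost
peak_flops = 529e12   # 529 teraflops

megaflops = peak_flops / 1e6
print(f"${cost_dollars / megaflops:.3f} per megaflop")  # ~$0.057
```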

Finally, Brent Gorda (LLNL) announced the “Cluster Challenge” for Supercomputing ’07 in November. The idea is for undergraduates to build a cluster that can run on a single 30-amp circuit and to run some applications on it, getting a feel for the difficulty of provisioning clusters. Brent came up with the idea after realizing that, outside of the laboratories and HPC-centric universities, there is not much knowledge of or experience in obtaining and provisioning clusters. Application deadlines are approaching, so if you are interested in fielding a team for the challenge, contact him at bgorda@llnl.gov.
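
For a rough sense of the power envelope the teams face, here is a sketch of the budget math. The 30-amp figure is from the announcement; the 120 V supply, the 80 percent continuous-load derating, and the per-node draw are all my assumptions.

```python
# Rough power budget for a cluster on a single 30 A circuit.
# The 120 V supply, the 80% continuous-load derating, and the 250 W
# per-node draw are assumptions for illustration only.

amps = 30
volts = 120
derating = 0.80                     # NEC-style continuous-load margin

budget_w = amps * volts * derating  # ~2880 W usable
node_w = 250                        # assumed draw of one dual-socket node

print(f"budget: {budget_w:.0f} W -> about {int(budget_w // node_w)} nodes "
      f"at {node_w} W each")
```

Under those assumptions the whole machine is on the order of a dozen nodes, which is exactly the provisioning exercise the challenge is meant to teach.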

Gary Montry is an independent software consultant specializing in parallel applications development and optimization and in attached processor software. Gary can be reached at gary@spsoft.com



 

Affiliates
Cluster Monkey
Golden Eggs
(HP Visual Diagram and Config Guides)
Clusters:
CP4000 32x DL145G2 GigE Opteron, Dual Core
CP4000 64x DL145 GigE Opteron
CP4000 102x DL145 GigE Opteron
CP4000 32x DL145 Myri Opteron
Rocks Cluster 16-22 DL145 Opteron
Rocks Cluster 30-46 DL145 Opteron
Rocks Cluster 64-84 DL145 Opteron
LC3000 GigE 24-36 DL145 Opteron
LC3000 Myri 16-32x DL145 Opteron
LC3000 GigE 16-22x DL145 Opteron
LC2000 GigE 16-22x DL360G3 Xeon
ProLiant:
DL365 System 2600MHz 2P 1U Opteron Dual Core
DL360 G5 System 3000MHz 2P 1U EM64T Dual/Quad Core
DL385 G2 2600Mhz 2P Opteron Dual Core
DL380 G5 3000Mhz 2P EM64T Dual/Quad Core
DL140 3060MHz 2P IA32
DL140 G2 3600MHz 2P EM64T
DL145 2600MHz 2P Opteron
DL145 G2 2600MHz 2P Opteron Dual Core
DL360 G4 3400MHz 2P EM64T
DL360 G4p 3800MHz 2P EM64T
DL380 G4 3800MHz 2P EM64T
DL385 2800MHz 2P Opteron Dual Core
DL560 3000MHz 4P IA32
DL580 G3 3330MHz 4P EM64T
DL585 2800MHz 4P Opteron Dual Core
Integrity:
Montecito 2P-16P, rx2660-rx8640 (multi-system diagram)
rx2660 1600MHz 2P 2U Montecito Systems and Cluster
rx6600 1600MHz 4P 7U Single & Cluster
rx3600 1600MHz 2P 4U Single & Cluster
rx2620 1600MHz 2P 2U Single & Cluster
Superdome 64P base configuration
Integrity Family Portrait (rx1620 thru rx8620), IA64
rx1620 1600MHz 2P MSA1000 Cluster IA64
rx2620 1600MHz 2P MSA1000 Cluster IA64
rx4640 1600MHz 4P MSA1000 Cluster IA64
rx7620 1600MHz 8P 10U Systems and MSA1000 Cluster
rx8620 1600MHz 16P 17U Systems and MSA1000 Cluster
Storage:
MSA30-MI Dual SCSI Cluster, rx3600, rx6600 and rx2660
MSA30-MI Dual SCSI Cluster, rx1620...rx4640
MSA500 G2, SCSI
MSA1510i IP SAN 48TB, SCSI and SATA
MSA1500 48TB, SCSI and SATA
Misc:
Dual Core AMD64 and EM64T systems with MSA1500






Appro: Enterprise and High Performance Computing Whitepapers
Is Your HPC Cluster Ready for Multi-core Processors?:
Multi-core processors bring new challenges and opportunities for the HPC cluster. Get a first look at utilizing these processors and strategies for better performance.

Accelerating Results through Innovation:
Achieve maximum compute power and efficiency with Appro Cluster Solutions. Our highly scalable clusters are designed to seamlessly integrate with existing high performance, scientific, technical, and commercial computing environments.
Keeping Your Cool in the Data Center:
Rethinking IT architecture and infrastructure is not a simple job. This whitepaper helps IT managers overcome challenges with thermal, power, and system management.

Unlocking the Value of IT with Appro HyperBlade:
A fully integrated cluster combining advantages of blade and rack-mount servers for a flexible, modular, scalable architecture designed for Enterprise and HPC applications.
AMD Opteron-based products | Intel Xeon-based products


Hewlett-Packard: Linux High Performance Computing Whitepapers
Unified Cluster Portfolio:
A comprehensive, modular package of tested and pre-configured hardware, software and services for scalable computation, data management and visualization.

Your Fast Track to Cluster Deployment:
Designed to enable faster ordering and configuration, shorter delivery times and increased savings. Customers can select from a menu of popular cluster components, which are then factory assembled into pre-defined configurations with optional software installation.
Message Passing Interface library (HP-MPI):
A high performance and production quality implementation of the Message-Passing Interface (MPI) standard for HP servers and workstations.

Cluster Platform Express:
Cluster Platform Express comes straight to you, factory assembled and available with pre-installed software for cluster management, and ready for deployment.
AMD Opteron-based ProLiant nodes | Intel Xeon-based ProLiant nodes



Copyright © 2001-2007 LinuxHPC.org
Linux is a trademark of Linus Torvalds.
All other trademarks are those of their owners.