SpyderByte.com Technical Portals

 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
Home | About | News Archives | Contribute News, Articles, Press Releases | Mobile Edition | Contact | Sponsorship | Search | Privacy
Research and Services
Cluster Quoter
Windows HPC News
Cluster Builder
Hardware Vendors
Software Vendors
Service & Support (Consulting)
Training Vendors
Golden Eggs
News
Latest News
Newsletter
News Archives
Search Archives
Reference
Featured Articles
Beginners
Whitepapers
Documentation
Software
Lists/Newsgroups
Books
User Groups
Forums
Employment/Jobs
Beowulf
Applications
Interconnects
High Availability
AMD
Intel

Linux HPC News Update
Stay current on Linux-related HPC news, events and information.
LinuxHPC Newsletter

Other Mailing Lists:
Linux High Availability
Beowulf Mailing List
Gelato.org (Linux Itanium)

Linux HPC Links
Favorites:
Cluster Monkey (Doug Eadline, et al)
HPCWire (Tabor Communications)
Scalability.org (Dr. Joe Landman)

Beowulf.org
Beowulf Users Group
Blade.org
High Performance Computing Clusters
Gelato.org
The Aggregate
Top500
Cluster Benchmarks
Cluster Computing Info Centre
Coyote Gulch
Linux Clustering Info Ctr.
Robert Brown's Beowulf Page
Sourceforge Cluster Foundry
HPC DevChannel
OpenSSI
Grid-Scape.org
SuperComputingOnline
HPC User Forum
Gridtech
GridsWatch
News Feed
LinuxHPC.org has an RSS/RDF feed if you wish to include it on your website.
LinuxHPC.org
Home
About
Contact
Mobile Edition
Sponsorship

Linux Cluster RFQ Form
Reach Multiple Vendors With One Linux Cluster RFQ Form. Save time and effort; let LinuxHPC.org do all the legwork for you, free of charge. Request A Quote...

New sub-menu item: Service & Support Vendors
If you offer service and support consulting, add your company.

Latest News

Cluster Resources Wins Largest Cluster and Grid Management Contract in History
Posted by Cluster Resources, Wednesday August 09 2006 @ 10:51AM EDT

Cluster Resources, Inc., a leading provider of cluster, grid and utility computing software, announced today that the Department of Energy's National Nuclear Security Administration's Advanced Simulation and Computing Program has selected Cluster Resources' Moab workload and resource management software as a standard for use across NNSA's high-performance computing systems.

The Advanced Simulation and Computing Program (ASC) unites the high-performance computing expertise and capabilities of the national laboratories responsible for ensuring the safety, security and reliability of the nation's stockpile of nuclear weapons without testing. ASC, also known as Tri-Labs, consists of Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL) and Sandia National Laboratories. ASC currently operates the systems ranked 1, 3, 6 and 9 on the TOP500 Supercomputing list, as well as dozens of other systems that together account for approximately 25% of the TOP500's total CPU count (www.top500.org).

"Cluster Resources is honored to be selected by ASC," said David Jackson, CEO of Cluster Resources, Inc. "There is no organization in the world which matches the technical expertise and scope of compute systems found at ASC in terms of scalability and architectural complexity."

This agreement brings two industry leaders together. ASC is widely acknowledged for its leadership in successfully deploying next-generation massive architectures, networks and storage solutions, as well as for its research and expertise in scalable middleware. Cluster Resources provides industry leadership in intelligent workload and resource management that orchestrates compute, network, and storage resources to maximize utilization, availability and responsiveness. The ASC/Cluster Resources partnership will push innovation boundaries for the supercomputing/high-performance computing (HPC) industry on both current and future leadership-class systems.

Assessing Resource and Workload Management Solutions

ASC initiated the search for a common resource and workload management solution to improve the usability and manageability of its diverse resources and to attain an improved return on its significant computing investment. The program also sought enhanced reporting for managed resources and optimized resource utilization, while maintaining the flexibility required to meet the individual needs of each site and project. ASC has a highly heterogeneous environment, with systems ranging from large-scale Intel- and AMD Opteron-based systems provided by IBM, HP, Dell and others to more exotic and powerful systems such as Cray's XT3 and IBM's Blue Gene. Going into the assessment, ASC also had a high degree of knowledge in the resource management space, owing to its development of advanced resource management and scheduling tools such as BProc, SLURM (http://www.llnl.gov/linux/slurm/), and LCRM.
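
For readers unfamiliar with these tools, the sketch below shows one way a batch job can be handed to SLURM programmatically. It is a generic illustration, not ASC's setup: the job name, node count, time limit, and application path (./my_mpi_app) are invented for the example, and a working SLURM installation with sbatch on the PATH is assumed.

    # Minimal sketch: submitting a batch job to SLURM from Python.
    # Assumes `sbatch` is on the PATH; sizing and binary are hypothetical.
    import subprocess

    # The job script itself; directives and the app path are invented.
    job_script = "\n".join([
        "#!/bin/bash",
        "#SBATCH --job-name=demo",     # illustrative job name
        "#SBATCH --nodes=4",           # request 4 nodes
        "#SBATCH --time=00:30:00",     # 30-minute wall-clock limit
        "srun ./my_mpi_app",           # hypothetical MPI binary
    ]) + "\n"

    # sbatch accepts the job script on stdin and prints the new job ID.
    result = subprocess.run(
        ["sbatch"], input=job_script, text=True,
        capture_output=True, check=True,
    )
    print(result.stdout.strip())       # e.g. "Submitted batch job 12345"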

"ASC's expertise, from their own extensive research and development work and from managing the world's largest array of leadership-class systems, makes this review and selection a great honor," Jackson said. "What makes this selection so meaningful is that this organization knows supercomputing, knows the real world and is able to see through the marketing fluff that can be so prevalent. Not only does this speak well of Cluster Resources' Moab product line and our service capabilities, but it also provides significant value to us as we collaborate with these thought leaders to develop capabilities for the next generation of systems and enhance our ability to meet their current and future needs."

The Selected Solution

The awarded contract grants ASC use of Moab software, which provides workload management, system accounting, capacity planning, automated failure recovery, virtualization and a host of other capabilities in cluster, grid, and utility computing environments. The contract also includes collaborative research and development, consulting, 24/7 support and other professional services.

The Moab solution adds significant manageability and optimization to HPC resources, while providing deployment methods that effectively minimize the risk and cost of adoption. Unique Moab capabilities allow it to be transparently deployed with little or no impact on the end-user; these capabilities include system workload, resource, and policy simulation, batch language translation, capacity planning diagnostics, non-intrusive test facilities, and infrastructure stress testing.
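
To make "batch language translation" concrete, here is a minimal toy sketch that rewrites a few common PBS directives into their SLURM equivalents. It illustrates the idea only; it is not Moab's actual translation layer, and it handles just a handful of directives.

    # Toy sketch of batch-language translation (PBS -> SLURM directives).
    # An illustrative mapping, not Moab's translator; only a few common
    # directives are covered.
    import re

    RULES = [
        (r"#PBS -N (\S+)",          r"#SBATCH --job-name=\1"),
        (r"#PBS -q (\S+)",          r"#SBATCH --partition=\1"),
        (r"#PBS -l walltime=(\S+)", r"#SBATCH --time=\1"),
        (r"#PBS -l nodes=(\d+):ppn=(\d+)",
         r"#SBATCH --nodes=\1 --ntasks-per-node=\2"),
    ]

    def translate_pbs_to_slurm(script: str) -> str:
        out = []
        for line in script.splitlines():
            for pattern, repl in RULES:
                line = re.sub(pattern, repl, line)
            out.append(line)
        return "\n".join(out)

    pbs = "#PBS -N demo\n#PBS -l nodes=4:ppn=2\n#PBS -l walltime=01:00:00"
    print(translate_pbs_to_slurm(pbs))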

At the core of this solution are Moab Cluster Suite® and Moab Grid Suite® -- professional cluster management solutions that include Moab Workload Manager, a policy-based workload management and scheduling tool, as well as a graphical cluster administration interface and a web-based end-user job submission and management portal.

Moab simplifies and unifies management across heterogeneous environments to increase the ROI of HPC investments and act as a flexible policy engine that guarantees service levels and speeds job processing.
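
As an illustration of what a policy engine does on each scheduling pass, the minimal sketch below ranks queued jobs by a weighted sum of wait time, service level, and fairshare standing. The weights and job attributes are invented for the example and do not reflect Moab's internal policy model.

    # Minimal sketch of policy-based job prioritization. The weights and
    # attributes are illustrative inventions, not Moab's policy model.
    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        minutes_queued: int   # how long the job has waited
        qos_weight: int       # service-level weight (higher = more urgent)
        fairshare: float      # 0..1; low if the user is over their share

    # A "policy" here is just a weighted sum over job attributes.
    WAIT_W, QOS_W, FS_W = 1.0, 100.0, 50.0

    def priority(job: Job) -> float:
        return (WAIT_W * job.minutes_queued
                + QOS_W * job.qos_weight
                + FS_W * job.fairshare)

    queue = [
        Job("chem-sim", minutes_queued=240, qos_weight=1, fairshare=0.2),
        Job("urgent-run", minutes_queued=5, qos_weight=5, fairshare=0.9),
    ]

    # Each scheduling pass dispatches the highest-priority jobs first.
    for job in sorted(queue, key=priority, reverse=True):
        print(f"{job.name}: priority={priority(job):.0f}")

Raising a job's qos_weight is how a policy like this guarantees service levels: the urgent job outranks the long-waiting one despite its short queue time.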

Collaborative Relationship and Direction

A second key aspect of the delivered solution is service and personnel engagement. Cluster Resources will actively collaborate with ASC on training, consulting, migration, and the creation of development roadmaps to ensure the highest degree of capability and scalability. This relationship includes direct access to development resources and executive-level engagement. Cluster Resources will actively work with hardware vendors to ensure Moab deploys cleanly on selected current and newly purchased systems. Cluster Resources will also fully support ASC throughout the usage lifecycle, providing on-site and online training, best-practices consulting and other enabling services.

"Partnerships such as this one are a key element of the ASC Program's success in pushing the frontiers of high performance scientific computing," said Brian Carnes, Service and Development Division leader at LLNL. "Only by working with leading innovators in HPC can we develop and maintain the large scale systems and increasingly complex simulation environments vital to our national security missions."

Industry Impact

The relationship between ASC and Cluster Resources will not only directly impact the three government laboratories that make up ASC/Tri-Labs, but will also help shape the future of large and small HPC sites.

"In many regards, what ASC is doing now reflects the future state of the data center and HPC industry," Jackson said. "However, the fundamental needs of ASC are not all that different from today's centers. They need total optimization of compute, network and storage resources, automated failure detection and recovery, more flexible policies, true visualization of cluster activity, detailed accounting, and reduced costs. It's just that when you are dealing with over 100,000 processors, the approaches used to deliver this must become more efficient and manageable. We are fortunate that our collaboration with industry visionaries over the years has prepared us to address these needs in a way that works extremely well both at 100 and 100,000 processors. In our partnership with ASC, we hope to extend these capabilities further in environments that push the edges of scalability and capability."

http://www.clusterresources.com



Sponsors

HP
HPC Market Leader
Three Years Running

Affiliates

Cluster Monkey

Golden Eggs
(HP Visual Diagram and Config Guides)
Clusters:
CP3000 32x DL140G2 & DL360G4p GigE EM64T
CP4000 32x DL145G2 GigE Opteron, Dual Core
CP4000 64x DL145 GigE Opteron
CP4000 102x DL145 GigE Opteron
CP4000 32x DL145 Myri Opteron
Rocks Cluster 16-22 DL145 Opteron
Rocks Cluster 30-46 DL145 Opteron
Rocks Cluster 64-84 DL145 Opteron
LC3000 GigaE 24-36 DL145 Opteron
LC3000 Myri 16-32x DL145 Opteron
LC3000 GigaE 16-22x DL145 Opteron
LC2000 GigaE 16-22x DL360G3 Xeon
ProLiant:
DL140 3060MHz 2P IA32
DL140 G2 3600MHz 2P EM64T
DL145 2600MHz 2P Opteron
DL145 G2 2600MHz 2P Opteron Dual Core
DL360 G4 3400MHz 2P EM64T
DL360 G4p 3800MHz 2P EM64T
DL380 G4 3800MHz 2P EM64T
DL385 2800MHz 2P Opteron Dual Core
DL560 3000MHz 4P IA32
DL580 G3 3330MHz 4P EM64T
DL585 2800MHz 4P Opteron Dual Core
Integrity:
Superdome 64P base configuration
Integrity Family Portrait (rx1620 thru rx8620), IA64
rx1620 1600MHz 2P MSA1000 Cluster IA64
rx2620 1600MHz 2P MSA1000 Cluster IA64
rx4640 1600MHz 4P MSA1000 Cluster IA64
rx7620 1600MHz 8P 10U Systems and MSA1000 Cluster
rx8620 1600MHz 16P 17U Systems and MSA1000 Cluster
Storage:
MSA30-MI Dual SCSI Cluster, rx1620...rx4640
MSA500 G2, SCSI
MSA1510i IP SAN 48TB, SCSI and SATA
MSA1500 48TB, SCSI and SATA
Misc:
Dual Core AMD64 and EM64T systems with MSA1500

Hewlett-Packard: Linux High Performance Computing Whitepapers
Unified Cluster Portfolio:
A comprehensive, modular package of tested and pre-configured hardware, software and services for scalable computation, data management and visualization.

Your Fast Track to Cluster Deployment:
Designed to enable faster ordering and configuration, shorter delivery times and increased savings. Customers can select from a menu of popular cluster components, which are then factory assembled into pre-defined configurations with optional software installation.
Message Passing Interface library (HP-MPI):
A high-performance, production-quality implementation of the Message-Passing Interface (MPI) standard for HP servers and workstations; a minimal MPI code sketch appears at the end of this section.

Cluster Platform Express:
Cluster Platform Express comes straight to you: factory assembled, available with pre-installed cluster management software, and ready for deployment.
AMD Opteron-based ProLiant nodes | Intel Xeon-based ProLiant nodes
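
For readers who have not seen MPI code, the sketch below performs a classic reduction across ranks. It uses the mpi4py Python bindings purely for illustration (an assumption; HP-MPI itself is a C/Fortran library, and any conforming MPI implementation can back an equivalent program).

    # Minimal MPI sketch using mpi4py (illustrative; any conforming MPI
    # implementation, HP-MPI included, can run an equivalent program).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()    # this process's ID within the communicator
    size = comm.Get_size()    # total number of processes

    # Sum each rank's ID across all processes (a classic reduction).
    total = comm.allreduce(rank, op=MPI.SUM)
    if rank == 0:
        print(f"{size} ranks, sum of ranks = {total}")

Launched with something like "mpirun -np 4 python reduce_demo.py", rank 0 prints "4 ranks, sum of ranks = 6".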



     Copyright © 2001-2006 LinuxHPC.org
Linux is a trademark of Linus Torvalds
All other trademarks are those of their owners.