SpyderByte.com Technical Portals
      
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
Home About News Archives Contribute News, Articles, Press Releases Mobile Edition Contact Advertising/Sponsorship Search Privacy
Research and Services
Cluster Quoter (HPC Cluster RFQ)
Hardware Vendors
Software Vendors
HPC Consultants
Training Vendors
News
Latest News
Newsletter
News Archives
Search Archives
Reference
Featured Articles
Cluster Builder
Beginners
Whitepapers
Documentation
Software
Lists/Newsgroups
Books
User Groups & Organizations
Golden Eggs (Configuration Diagrams)
Linux HPC Links
Favorites:
Cluster Monkey (Doug Eadline, et al)
HPCWire (Tabor Communications)
insideHPC.com (John West)
Scalability.org (Dr. Joe Landman)

Beowulf.org
Beowulf Users Group
High Performance Computing Clusters
Thinking Parallel
Gelato.org
The Aggregate
Top500
Cluster Computing Info Centre
Coyote Gulch
Robert Brown's Beowulf Page
FM.net: Scientific/Engineering
SuperComputingOnline
HPC User Forum
GridsWatch
Linux HPC News Update
Stay current on Linux-related HPC news, events and information.
LinuxHPC Newsletter

Other Mailing Lists:
Linux High Availability
Beowulf Mailing List
Gelato.org (Linux Itanium)

LinuxHPC.org
Home
About
Contact
Mobile Edition
Sponsorship

Linux Cluster RFQ Form
Reach Multiple Vendors With One Linux Cluster RFQ Form. Save time and effort; let LinuxHPC.org do all the legwork for you, free of charge. Request A Quote...

Latest News

SGI to Install Leading-Edge HPC Environment for Data-Intensive Computing at Dresden Tech. Univ.
Posted by Kenneth Farmer, Friday August 26 2005 @ 12:01PM EDT

German University Invests Over $18 Million in Innovative Scientific Computing Infrastructure -- Prime Contractor SGI to Deliver Altix System with 6TB of Shared Memory and More Than 1,500 Itanium Processor Cores

MOUNTAIN VIEW, Calif., Aug. 25 /PRNewswire-FirstCall/ -- Dresden University of Technology (TUD) has signed a contract with Silicon Graphics (NYSE: SGI) to provide a high-performance computing environment representing an investment of over $18 million, establishing TUD as a Center for Scientific Computing. In two project phases to be completed within twelve months, a state-of-the-art, flexibly usable infrastructure delivering more than a dozen teraflops of computational power will be implemented. This will enable investigators in fields such as physics, materials science, engineering, bioinformatics and nanotechnology to tackle new classes of challenging problems.

As the central component, SGI will install a large SGI(R) Altix(R) shared-memory system containing 6,000 gigabytes (6TB) of contiguously usable main memory and more than 1,500 processor cores based on the most recent Intel(R) Itanium(R) 2 dual-core technology. This HPC platform will open up a new category of capability computing, serving as a concentrated resource for selected projects, acting as a knowledge accelerator and allowing researchers to work on challenging problems beyond the scope of traditional number crunching.

Beyond providing high computational performance, the procurement -- running under the designation "HPC/Storage Complex for Data Intensive Computing" -- will be specifically built to achieve very high data bandwidths by drawing on an intelligently architected, multi-level storage system. This tiered storage system will enable very high speed storing, moving and archiving of extremely large datasets.

To this end, SGI plans to install a Storage Area Network (SAN) complex containing 60 terabytes (TB) of online disk capacity, which provides 8 GB/s (gigabytes per second) of bandwidth to the Altix system and is capable of feeding a petabyte-sized (PB) archive tape robot at a high data rate. A second 50TB SAN will be provided and connected to the throughput system, with the option of efficient access to the first SAN and hence to the archiving system.
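
For a rough sense of scale, the quoted figures can be combined in a short back-of-the-envelope sketch (Python, assuming decimal units where 1TB = 1,000GB and an idealized, fully sustained 8 GB/s):

    # Hypothetical estimate: time to stream the full 60TB online disk pool
    # at the quoted 8 GB/s (decimal units; a sustained peak rate is an idealization).
    capacity_gb = 60 * 1000   # 60TB of online disk capacity
    bandwidth_gb_s = 8        # quoted bandwidth to the Altix system

    hours = capacity_gb / bandwidth_gb_s / 3600
    print(f"full pool streamed once in ~{hours:.1f} hours")  # roughly 2.1 hours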

Both SAN systems are based on the SGI(R) InfiniteStorage SAN solution, using Fibre Channel disk array systems from DataDirect Networks; a PB-sized tape library system from Storage Technology Corp. will serve as the archive robot. Hierarchical storage management, including life-cycle management and data storage and retrieval, is provided by the SGI(R) InfiniteStorage DMF (Data Migration Facility) software. Shared file system functionality on the HPC system is implemented through SGI(R) InfiniteStorage CXFS(TM), while the throughput system will use a Lustre file system, as commonly deployed in many of the large US laboratories. Both platforms will run under Novell's SUSE(R) LINUX Enterprise Server operating environment.

Complementary to this system, SGI will integrate a PC farm from Linux Networx with roughly 700 single system boards; acting as a platform for capacity computing, the PC farm will serve the throughput requirements of many hundreds of users throughout the Dresden campus.

The procurement is one of the largest HPC contracts to be tendered in Europe in 2005. According to Prof. Hermann Kokenge, Rector of TUD, "The system will effectively strengthen the innovative capabilities of the university, the Dresden area, and the surrounding region. It will provide a critical mass of additional computing power and novel working facilities to enable groundbreaking discoveries."

Accumulated HPC Resources for Bold Questions

How can one discover highly robust organic materials that may replace metallic alloys in osteal (bone) surgery? How is it possible to grow novel types of crystals? What methods can be used to suppress background noise within a vehicle? How can cellular growth processes be tracked and understood through automated cell microscopy? How can one analyze and influence the genetic causes of illnesses? These are only a few of the questions and application areas that will be tackled by researchers using the new TUD computing environment.

No matter which area of research a scientist is concerned with -- be it the analysis of bio-molecular reactions, methods for protein docking or quantum chemistry, the folding of three-dimensional structures, the analysis of films, or the study of turbulent flows in electro-fluid materials under the influence of external magnetic fields using methods of computational fluid dynamics -- the Altix platform provides new perspectives for many computation-based scientific methods.

Selected projects will have the opportunity to utilize up to two-thirds of the whole system for a period of time if required. Hundreds of processors working in parallel can then use the memory as a single, contiguously addressable entity, load enormously large data sets in one piece, and efficiently perform calculations on them or search them for patterns and similarities.

"We intend to enable bold and complex projects on the SGI Altix. Our focus is on providing a novel type of HPC tool to the scientific computing community," said Prof. Wolfgang E. Nagel, Director of ZIH (Center for Information Services & HPC). "Our efforts do not center on the usual simulation scenarios; we are more concerned with providing a platform that gives our users the opportunity to extract new and concise knowledge from huge amounts of structured or unstructured data containing a great deal of hidden information."

In-memory computing is just one of the innovations ZIH will offer scientists via the SGI HPC platform; for the first time it will be possible to load several complete scientific databases into the memory subsystem simultaneously and to search them for correlations at unprecedented speed. The problem that has beset and hindered these kinds of investigations up to now -- the need for time-intensive I/O processing and disk accesses -- is eliminated by in-memory computing.

To make capability computing feasible for alternating projects, it must be possible to load the HPC platform rapidly for a single run and then to unload it rapidly, freeing the resources for the next user. The SGI solution can load 4TB of data into memory within 10 minutes and, at the end of a project run, can save 25TB of computing results to the archive system in 4 hours. Nagel: "This is outstanding and allows scientists to use the machine as a real theory accelerator."
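
For context, the average transfer rates implied by these load and unload figures can be worked out with a similar short sketch (again assuming decimal units and simple averages over the stated durations):

    # Hypothetical estimate of the average rates implied by the quoted figures
    # (decimal units; real sustained rates will vary with workload).
    def avg_rate_gb_s(terabytes, seconds):
        """Average transfer rate in GB/s for a given volume and duration."""
        return terabytes * 1000 / seconds

    load_rate = avg_rate_gb_s(4, 10 * 60)          # 4TB into memory in 10 minutes
    archive_rate = avg_rate_gb_s(25, 4 * 60 * 60)  # 25TB to the archive in 4 hours

    print(f"memory load:   ~{load_rate:.1f} GB/s")     # about 6.7 GB/s
    print(f"archive write: ~{archive_rate:.1f} GB/s")  # about 1.7 GB/s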

"We are pleased to implement a project of this size and ambition in Germany, which will be considered a significant achievement by the global HPC community," added Robert Ubelmesser, Director of Strategic HPC Projects, Europe, SGI. "The idea of data-intensive scientific computing, with all its challenges and opportunities, has been pursued by ZIH in a visionary manner. We take pride in providing the enabling technology for this future-oriented concept." According to Hannes Schwaderer, Executive Director of Intel GmbH: "Intel's Itanium 2 architecture is the fastest-growing CPU architecture for HPC deployments. We are pleased by its success at universities, and we are proud to now provide Dresden with a very powerful system based on the Itanium processor architecture, after having gained the Leibniz Computing Center in Munich as a customer that takes advantage of thousands of our processor cores. The combination of dual-core Itanium 2 CPUs with SGI's innovative shared-memory technology in the Altix systems will give Dresden the capability to answer very complex questions."

Two-Phase Delivery -- Starting in Autumn 2005

A third of the total capacity -- memory and processing power -- is planned for installation in autumn 2005. It will primarily serve ZIH as a preparation environment, allowing users to optimize algorithms and prepare for the new possibilities. An SGI(R) Altix(R) 3000 BX2 system will be installed in this first phase. The installation is to be completed in the second phase of the project, scheduled for summer 2006. When the system is fully installed, a next-generation Altix system will have taken over the HPC workload.

Award of Tender After Tough Competition

"This is the third time in a row that Dresden has selected SGI as its preferred HPC partner -- we currently run our HPC shared-memory jobs on a 128-processor SGI(R) Origin(R) 3800 system," explains ZIH Director Nagel. "However, SGI had to prevail in a tough, very challenging competition. We decided in favor of SGI because the company is capable of delivering a system with such a uniquely large shared memory. This is a distinguishing factor, enabling us to provide our clients with a unique quality of service for their novel and challenging investigations."

Nagel concluded: "We will get an extremely balanced and versatile computing and storage complex -- with excellent components and a consistently high level of bandwidth -- that allows us to offer a powerful overall resource for challenging new scientific computing problems in homogeneous as well as heterogeneous requirement regimes."

http://www.sgi.com


< Upcoming Geant4 International Users Conference 2005 | Panasas Rocks Stanford >

 


Sponsors





Intel Cluster Ready
Find out how you can buy a certified, interoperable cluster that just works when delivered.













Affiliates



Cluster Monkey




Golden Eggs
(HP Visual Diagram and Config Guides)
Clusters:
CP4000 32x DL145G2 GigE Opteron, Dual Core
CP4000 64x DL145 GigE Opteron
CP4000 102x DL145 GigE Opteron
CP4000 32x DL145 Myri Opteron
Rocks Cluster 16-22 DL145 Opteron
Rocks Cluster 30-46 DL145 Opteron
Rocks Cluster 64-84 DL145 Opteron
LC3000 GigaE 24-36 DL145 Opteron
LC3000 Myri 16-32x DL145 Opteron
LC3000 GigaE 16-22x DL145 Opteron
LC2000 GigaE 16-22x DL360G3 Xeon
ProLiant:
DL365 System 2600MHz 2P 1U Opteron Dual Core
DL360 G5 System 3000MHz 2P 1U EM64T Dual/Quad Core
DL385 G2 2600Mhz 2P Opteron Dual Core
DL380 G5 3000Mhz 2P EM64T Dual/Quad Core
DL140 3060MHz 2P IA32
DL140 G2 3600MHz 2P EM64T
DL145 2600MHz 2P Opteron
DL145 G2 2600MHz 2P Opteron Dual Core
DL360 G4 3400MHz 2P EM64T
DL360 G4p 3800MHz 2P EM64T
DL380 G4 3800MHz 2P EM64T
DL385 2800MHz 2P Opteron Dual Core
DL560 3000MHz 4P IA32
DL580 G3 3330MHz 4P EM64T
DL585 2800MHz 4P Opteron Dual Core
Integrity:
Montecito 2P-16P, rx2660-rx8640 (multi-system diagram)
rx2660 1600MHz 2P 2U Montecito Systems and Cluster
rx6600 1600MHz 4P 7U Single & Cluster
rx3600 1600MHz 2P 4U Single & Cluster
rx2620 1600MHz 2P 2U Single & Cluster
Superdome 64P base configuration
Integrity Family Portrait (rx1620 thru rx8620), IA64
rx1620 1600MHz 2P MSA1000 Cluster IA64
rx2620 1600MHz 2P MSA1000 Cluster IA64
rx4640 1600MHz 4P MSA1000 Cluster IA64
rx7620 1600MHz 8P 10U Systems and MSA1000 Cluster
rx8620 1600MHz 16P 17U Systems and MSA1000 Cluster
Storage:
MSA30-MI Dual SCSI Cluster, rx3600, rx6600 and rx2660
MSA30-MI Dual SCSI Cluster, rx1620...rx4640
MSA500 G2, SCSI
MSA1510i IP SAN 48TB, SCSI and SATA
MSA1500 48TB, SCSI and SATA
Misc:
Dual Core AMD64 and EM64T systems with MSA1500






Appro: Enterprise and High Performance Computing Whitepapers
Is Your HPC Cluster Ready for Multi-core Processors?:
Multi-core processors bring new challenges and opportunities for the HPC cluster. Get a first look at utilizing these processors and strategies for better performance.

Accelerating Results through Innovation:
Achieve maximum compute power and efficiency with Appro Cluster Solutions. Our highly scalable clusters are designed to seamlessly integrate with existing high performance, scientific, technical, and commercial computing environments.
Keeping Your Cool in the Data Center:
Rethinking IT architecture and infrastructure is not a simple job. This whitepaper helps IT managers overcome challenges with thermal, power, and system management.

Unlocking the Value of IT with Appro HyperBlade:
A fully integrated cluster combining advantages of blade and rack-mount servers for a flexible, modular, scalable architecture designed for Enterprise and HPC applications.
AMD Opteron-based products | Intel Xeon-based products


Hewlett-Packard: Linux High Performance Computing Whitepapers
Unified Cluster Portfolio:
A comprehensive, modular package of tested and pre-configured hardware, software and services for scalable computation, data management and visualization.

Your Fast Track to Cluster Deployment:
Designed to enable faster ordering and configuration, shorter delivery times and increased savings. Customers can select from a menu of popular cluster components, which are then factory assembled into pre-defined configurations with optional software installation.
Message Passing Interface library (HP-MPI):
A high performance and production quality implementation of the Message-Passing Interface (MPI) standard for HP servers and workstations.

Cluster Platform Express:
Cluster Platform Express comes straight to you: factory assembled, available with pre-installed cluster management software, and ready for deployment.
AMD Opteron-based ProLiant nodes | Intel Xeon-based ProLiant nodes



     Copyright © 2001-2007 LinuxHPC.org
Linux is a trademark of Linus Torvalds
All other trademarks are those of their owners.
    