    Technical University of Karlsruhe Advances R&D Work with High Performance Linux Cluster from HP
    Thursday, June 24, 2004 @ 12:19PM EDT

    HP cluster to provide up to 11 TeraFLOPS of Computing Power for University’s Scientific Supercomputing Center

    GENEVA/KARLSRUHE, JUNE 21, 2004 – HP and the Technical University of Karlsruhe (TH) have signed a contract to build an Itanium 2-based high performance cluster of HP Integrity servers running Linux that will significantly advance research and development work at universities and research institutes in the state of Baden-Wuerttemberg, Germany. The cluster will be unveiled today at the University of Karlsruhe at the inauguration of the HPC (High Performance Computing) competence center, a newly founded organisation driven by the Ministry of Science of the State of Baden-Wuerttemberg to combine the expertise of the supercomputing centers of Stuttgart (HLRS) and Karlsruhe (SSCK) and to advance scientific and industrial computing. The ultra-high performance supercomputing system is installed at the Scientific Supercomputing Center Karlsruhe (SSCK). In two years, the final configuration, with a total of 1,200 CPU cores, is expected to achieve a total peak computing power of about 11 teraFLOPS and to provide more than seven terabytes of main memory.
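
    As a rough sanity check on the quoted figure (a sketch only; the per-core rate and clock speeds below are assumptions, not details from the press release), peak FLOPS for a cluster is simply cores × FLOPs per cycle × clock:

        # Back-of-envelope peak-performance check (illustrative assumptions).
        # Itanium 2 can retire up to 4 double-precision FLOPs per cycle
        # (two fused multiply-add units), so per-core peak = 4 * clock.
        cores = 1200               # final configuration quoted above
        flops_per_cycle = 4        # Itanium 2: 2 FMA units -> 4 FLOPs/cycle
        clock_hz = 1.6e9           # the test-phase rx2600 CPUs run at 1.6 GHz

        peak_tflops = cores * flops_per_cycle * clock_hz / 1e12
        print(f"peak at 1.6 GHz: {peak_tflops:.1f} TFLOPS")   # ~7.7 TFLOPS

        # Reaching ~11 TFLOPS with 1,200 cores would imply faster
        # next-generation parts:
        print(f"implied clock: {11e12 / (cores * flops_per_cycle) / 1e9:.1f} GHz")  # ~2.3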

    The high-availability system will be supplemented by a Lustre™-based solution serving as a central 40-terabyte high performance parallel file system. The HP solution runs the newly developed HP XC Cluster Management Software and has already started trial operations. By providing the SSCK with an ultra-high performance supercomputing system, HP, Intel and the University of Karlsruhe demonstrate their joint commitment to high performance computing. This technology can be used to advance classical engineering sciences as well as life sciences, energy and environmental research, and technical grid computing.

    Successive expansion to 340 nodes

    The test phase of the cluster began in April 2004 with 16 HP Integrity rx2600 servers, each featuring two Itanium 2 CPUs. By early 2006, the overall system will be upgraded in two phases to a total of 334 nodes, which will use next-generation Itanium CPUs with two or four processor cores each. In addition, six nodes featuring HP Integrity rx8640 servers with 16 next-generation Itanium 2 CPUs each will be integrated into the cluster by the end of the year, bringing the total to 340 nodes. Each implementation phase offers the opportunity to integrate the latest market-ready technologies. The centrally managed nodes communicate via an ultra-high speed Quadrics interconnect with low latency and a concurrent bidirectional data rate of up to two gigabytes per second.
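
    The practical effect of latency and bandwidth on message passing can be estimated with the standard latency/bandwidth ("alpha-beta") model. The latency value below is an assumption for illustration, since the release quotes only "low latency" and the two-gigabyte-per-second bidirectional rate:

        # Simple alpha-beta model of point-to-point transfer time.
        LATENCY_S = 3e-6           # assumed one-way latency; Quadrics-class
                                   # interconnects of that era were in the
                                   # low-microsecond range
        BANDWIDTH_BPS = 1e9        # ~1 GB/s per direction (2 GB/s bidirectional)

        def transfer_time(message_bytes):
            """Startup latency plus serialization time for one message."""
            return LATENCY_S + message_bytes / BANDWIDTH_BPS

        for size in (1_024, 1_048_576, 104_857_600):   # 1 KiB, 1 MiB, 100 MiB
            print(f"{size:>11} bytes: {transfer_time(size) * 1e3:8.3f} ms")

    For small messages the startup latency dominates; for bulk transfers the time is essentially size divided by bandwidth, which is why low latency matters as much as raw data rate for tightly coupled parallel codes.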

    HP StorageWorks Scalable File Share – central 40-terabyte file system

    The Lustre™-based HP StorageWorks Scalable File Share solution will supplement the cluster with a central parallel file system, which will provide 40 terabytes of storage in its final configuration. This shared file system is optimised for use with large Linux clusters and ensures the highest levels of I/O performance. It is ultra-scalable, based on open standards, and is easy and efficient to manage. Redundant and separate systems ensure high availability.
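
    The scaling argument behind such a parallel file system can be sketched as follows (all figures are illustrative assumptions, not specifications of the Karlsruhe installation): a file is striped across many storage targets, so aggregate bandwidth grows with the number of targets rather than being capped by a single server:

        # Why striping scales: aggregate I/O is the sum over storage targets.
        ost_count = 32             # assumed number of object storage targets
        ost_mb_per_s = 100         # assumed streaming rate per target (MB/s)
        clients = 334              # compute nodes in the final configuration

        aggregate_mb_per_s = ost_count * ost_mb_per_s
        print(f"aggregate bandwidth: {aggregate_mb_per_s} MB/s")      # 3200 MB/s

        # If every node does I/O at once, each sees roughly a fair share:
        print(f"per-client share: {aggregate_mb_per_s / clients:.1f} MB/s")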

    To ensure the security and high availability of the cluster, certain functional areas are partitioned. In the unlikely event of a subsystem failure, any unaffected areas will remain fully functional. In addition, all hardware and software components have an optimised redundant design. Additional high availability features will be implemented in the context of HP's cooperation with the SSCK. These will include nodes with specific critical functions such as resource management, special leader nodes, the HP StorageWorks Scalable File Share server, and the nodes that provide external network functionality.

    HP, University of Karlsruhe and Intel set up Competence Center

    The Technical University of Karlsruhe, HP and Intel are jointly establishing a competence center for High Performance Technical Computing (HPTC³ – High Performance Computing Competence Center). The center will handle the integration of the cluster into the operating environment. This includes the implementation of functions that are not yet included in the XC cluster software, the monitoring of the cluster, and safeguarding the high availability of critical functions. At the center, the Technical University of Karlsruhe, HP and Intel will provide training and education as well as the porting and optimisation of applications from Independent Software Vendors (ISVs). In addition, they will collaborate in the field of HPC applications on new, innovative research areas such as life sciences, environmental research, and technical grid computing.

    About HP

    HP is a technology solutions provider to consumers, businesses and institutions globally. The company's offerings span IT infrastructure, personal computing and access devices, global services and imaging and printing. For the four fiscal quarters ended April 30, 2004, HP revenue totaled $76.8 billion. More information about HP (NYSE, Nasdaq: HPQ) is available at www.hp.com.

    06/2004


         Copyright © 2001-2006 LinuxHPC.org
    Linux is a trademark of Linus Torvalds
    All other trademarks are those of their owners.
        