LinuxHPC.org
The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering


    Latest News

    Breakthrough HP Technology Yields up to 100 Times More Bandwidth for Linux Clusters
    Wednesday June 23 2004 @ 02:08PM EDT

    HP Delivers Distributed File System Based on Lustre Open Source Protocol; Second Product Based on HP StorageWorks Grid Architecture

    HEIDELBERG, Germany, Jun 23, 2004 (BUSINESS WIRE) -- HP (NYSE:HPQ) (Nasdaq:HPQ) today announced a breakthrough file sharing product that uses new Linux clustering technology to deliver up to 100 times more bandwidth than typical clusters.(1) The new product, HP StorageWorks Scalable File Share (HP SFS), is a self-contained file server that enables bandwidth to be shared by distributing files in parallel across clusters of industry-standard server and storage components.
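
    The mechanism behind that claim is file striping: a file is cut into fixed-size pieces spread across many servers, so clients can move the pieces in parallel and aggregate bandwidth grows with the server count. The Python sketch below illustrates only the placement idea, with an invented stripe size and invented server names; it is not HP SFS or Lustre code.

        # Illustrative sketch of round-robin file striping (not Lustre source).
        STRIPE_SIZE = 1 << 20                           # 1 MiB stripes (hypothetical)
        SERVERS = ["oss01", "oss02", "oss03", "oss04"]  # hypothetical object servers

        def stripe_layout(file_size: int) -> list[tuple[int, str]]:
            """Return (stripe_index, server) pairs covering the whole file."""
            stripes = (file_size + STRIPE_SIZE - 1) // STRIPE_SIZE
            return [(i, SERVERS[i % len(SERVERS)]) for i in range(stripes)]

        # A 4 MiB file lands on all four servers, so its four stripes can be
        # transferred at once instead of queuing behind one file server:
        print(stripe_layout(4 * (1 << 20)))
        # -> [(0, 'oss01'), (1, 'oss02'), (2, 'oss03'), (3, 'oss04')]

    Since each stripe can travel over a different server's network link, N servers deliver roughly N times the bandwidth of one; that scaling is the effect the bandwidth comparison above refers to.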

    The product is the second based on HP's "storage grid" architecture and the first commercial product to use a new Linux clustering technology, called Lustre, developed through a collaboration among HP, the U.S. Department of Energy (DoE) and Cluster File Systems, Inc.(2)

    Targeted initially for high-performance computing (HPC), HP SFS allows applications to see a single file system image regardless of the number of servers or storage devices connected to it. Built using industry-standard HP ProLiant servers and HP StorageWorks disk arrays, HP SFS provides protection from hardware failures through resilient, redundant hardware and built-in fail-over and recovery.
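
    The fail-over behavior can be pictured as a client retrying a failed request against a configured standby server. The sketch below is a conceptual illustration with invented names; in the real product, fail-over is built into the file system rather than into application code.

        # Conceptual fail-over sketch (hypothetical servers, not HP SFS config).
        FAILOVER = {"oss01": "oss01-standby"}   # primary -> standby
        DOWN = {"oss01"}                        # simulate a failed primary

        def read_stripe(server: str, stripe: int) -> bytes:
            if server in DOWN:
                raise ConnectionError(f"{server} unreachable")
            return b"\x00" * 1024               # stand-in for real stripe data

        def read_with_failover(server: str, stripe: int) -> bytes:
            try:
                return read_stripe(server, stripe)
            except ConnectionError:
                standby = FAILOVER.get(server)
                if standby is None:
                    raise                       # no standby configured
                return read_stripe(standby, stripe)

        data = read_with_failover("oss01", 0)   # transparently served by the standby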

    Tuned for ease of use and manageability, the system can span dozens to thousands of clustered Linux servers -- making it dramatically easier to run distributed applications for challenging science and engineering needs.

    The Lustre protocol used in HP SFS is already running in some of the world's most demanding HPC environments, such as the one found at the DoE Pacific Northwest National Laboratory (PNNL). It helps to eliminate input/output (I/O) bandwidth bottlenecks and saves users hours of time copying files across hundreds or thousands of individual, distributed file systems.

    The DoE selected HP to provide program management, development, test engineering, hardware and services to support the Lustre project. HP is the only major vendor to offer a supported and case-hardened Lustre-based file share product.

    "HP's Lustre implementation on our supercomputer allows us to achieve faster, more accurate analysis," said Scott Studham, associate director for Advanced Computing, PNNL. "This translates into faster time-to-solution and better science for our researchers, who are addressing complex problems in energy, national security, the environment and life sciences."

    Lustre technology has been in use at PNNL for more than a year on one of the 10 largest Linux clusters in the world.(3) PNNL's HP Linux super cluster, with more than 1,800 Intel(R) Itanium(R) 2 processors, is rated at more than 11 teraflops (one teraflop equals one trillion floating point operations per second) and sustains more than 3.2 gigabytes per second of bandwidth running production loads on a single 53-terabyte Lustre-based file share. Individual Linux clients are able to write data to the parallel Lustre servers at more than 650 megabytes per second. The system is designed to make the enormous PNNL cluster centralized, easy to use and manage, and simple to expand.
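
    Those figures are arithmetically consistent. Five clients writing at the stated per-client rate already account for the aggregate figure, and if one assumes 1.5 GHz Itanium 2 processors executing four floating-point operations per cycle (a clock speed the release does not state), the processor count matches the teraflop rating:

        5 clients x 650 MB/s ~= 3.25 GB/s                   (quoted aggregate: 3.2 GB/s)
        1,800 CPUs x 1.5 GHz x 4 FLOP/cycle = 10.8 TFLOPS   (quoted: more than 11 TFLOPS,
                                                             from "more than 1,800" CPUs)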

    Studham also noted that Lustre scales the high-bandwidth I/O needed to match the large data files produced and consumed by the laboratory's scalable simulations. HP has worked with PNNL to help ensure Lustre is reliable, stable and cost-effective. "We are confident in the Lustre file system's ability to prevent loss of data," said Studham.

    "HP SFS demonstrates HP's commitment to using industry-standard and open technologies to meet the requirements of our most demanding customers and to ensure maximum long-term customer value, simplicity and agility," said Winston Prather, vice president and general manager, High Performance Technical Computing, HP. "HP SFS combines accessible, open source technology with a well-engineered HP product that solves the distributed file system I/O challenge our high-performance customers face. It also simplifies the use and administration of Linux clusters, provides faster processing and a higher return on investment."

    HP StorageWorks grid enables HP SFS

    HP SFS follows the introduction this May of the HP StorageWorks Reference Information Storage System (RISS), an all-in-one archive and retrieval solution for storing, indexing and rapidly retrieving reference information, also based on the HP StorageWorks grid architecture.

    This standards-based architecture allows storage services, such as HP SFS, to be delivered across a massively scalable, centrally managed system. It divides storage, indexing, search and retrieval tasks across a distinct set of computing nodes or storage "smart cells" that cooperate to form a single shared file system.

    Each smart cell is composed of interconnected, self-contained, low-cost, high-density computing and storage devices. HP SFS smart cells running the Lustre protocol work in parallel with other smart cells on a shared storage grid to deliver extensive scalability and provide unprecedented levels of computing bandwidth.
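
    The bandwidth multiplication is easy to demonstrate in miniature: if each smart cell can serve a stripe independently, a client fetching from four cells concurrently finishes in roughly the time of one transfer rather than four. The cell names and timings in this Python sketch are invented for illustration.

        # Toy demonstration that overlapping transfers multiply throughput.
        import time
        from concurrent.futures import ThreadPoolExecutor

        CELLS = ["cell-a", "cell-b", "cell-c", "cell-d"]  # hypothetical smart cells

        def fetch_stripe(cell: str) -> int:
            time.sleep(0.1)           # stand-in for a 0.1 s stripe transfer
            return 1 << 20            # pretend the cell returned 1 MiB

        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=len(CELLS)) as pool:
            total = sum(pool.map(fetch_stripe, CELLS))
        elapsed = time.perf_counter() - start

        # The four 0.1 s transfers overlap: ~4 MiB in ~0.1 s, not ~0.4 s.
        print(f"{total} bytes in {elapsed:.2f} s")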

    The initial HP SFS offering includes two classes of smart cell configurations, one with highly resilient StorageWorks Enterprise Virtual Array storage and another with lower priced StorageWorks Modular Smart Array storage. Additional classes of storage are planned to be added to HP SFS as the HP StorageWorks grid strategy expands.

    These smart cells can be connected to each other and to the Lustre clients (compute clusters) with standard 10/100 or Gigabit Ethernet. Additionally, customers can use higher-speed message-passing interconnects, including InfiniBand, Myrinet and Quadrics ELAN4.

    The HP SFS servers are factory assembled, pre-configured, pre-cabled, pre-tested in clustered I/O racks, and ready to run the Lustre software with the HP SFS added-value installation, maintenance, monitoring and administration tools.

    More information is available at http://www.hp.com/go/technicalstorage.

    -------------------------------------

    Need a quote from HP? Email kfarmer (at) linuxhpc (dot)org



         Copyright © 2001-2006 LinuxHPC.org
    Linux is a trademark of Linus Torvalds
    All other trademarks are the property of their respective owners.
        