SpyderByte: WinHPC.org EnterpriseLinux.org BigBlueLinux.org
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering


    NERSC Launches Linux Networx Supercomputer into Production
    Tuesday August 16 2005 @ 08:45AM EDT

    722-Processor Computing System Maintains 99% Uptime in Production Environment

    Salt Lake City, Utah, and Berkeley, Calif. (Aug. 16, 2005) – Linux Networx and the U.S. Department of Energy’s (DOE) Office of Science announced today that DOE's National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has accepted a 722-processor Linux Networx Evolocity® cluster system for full production use by researchers across the nation.

    Named “Jacquard,” the Linux Networx system will provide computational resources to scientists from DOE national laboratories, universities and other research institutions to support a wide range of scientific disciplines including climate modeling, fusion energy, nanotechnology, combustion, astrophysics and life sciences. Established in 1974, NERSC is DOE’s flagship facility for unclassified supercomputing.

    The acceptance test included a 14-day availability trial during which a select group of NERSC users was given full access to the Jacquard cluster to exercise the entire system under production conditions. Jacquard maintained 99 percent availability during the testing while users and scientists ran a variety of codes and jobs on the system. This thorough acceptance testing ensures Jacquard is ready to serve thousands of scientists and researchers across the nation in a production environment.
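    As a rough illustration of what 99 percent availability over a 14-day window implies (an assumed calculation; the announcement does not report actual downtime hours):

```python
# Downtime budget implied by an availability figure over a test window.
# Illustrative only: the release states 99% availability over 14 days,
# not the actual hours the system was unavailable.

def allowed_downtime_hours(availability: float, window_days: float) -> float:
    """Hours of downtime consistent with `availability` over the window."""
    total_hours = window_days * 24
    return total_hours * (1.0 - availability)

print(allowed_downtime_hours(0.99, 14))  # about 3.4 hours over the 14-day test
```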

    “NERSC is the leading provider of computing resources for DOE’s Office of Science and this new system will provide valuable computational science support for a wide range of users, allowing them to run more detailed simulations with faster turnaround, thereby helping advance scientific discovery,” said NERSC General Manager Bill Kramer.

    The Jacquard system is one of the largest production InfiniBand-based Linux cluster systems and has met rigorous acceptance criteria for performance, reliability and functionality. Jacquard also takes advantage of Mellanox 12X InfiniBand uplinks in its fat-tree interconnect, reducing network hot spots and improving reliability by dramatically reducing the number of cables required.

    The system has 722 AMD Opteron™ Model 248 processors in dual-processor nodes, with 640 processors devoted to computation and the rest used for I/O, interactive work, testing and interconnect management. Jacquard has a peak performance of 3.1 trillion floating point operations per second (teraflop/s). Storage from DataDirect Networks provides 30 terabytes of globally available formatted storage.
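    The quoted peak is consistent with a back-of-envelope check across all 722 processors, assuming (these figures are not in the release) that each Opteron Model 248 runs at 2.2 GHz and retires two double-precision floating-point operations per clock:

```python
# Sanity check of the quoted ~3.1 teraflop/s peak performance.
# Assumptions not stated in the release: Opteron Model 248 clock of 2.2 GHz
# and 2 double-precision flops per cycle per processor.

processors = 722
clock_hz = 2.2e9
flops_per_cycle = 2

peak_flops = processors * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.2f} teraflop/s")  # ~3.18, close to the quoted 3.1
```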

    “By delivering this system to NERSC, we’ve provided a highly productive computing system to over 2,500 users nationwide,” said Robert (Bo) H. Ewald, CEO of Linux Networx. “We are committed to providing NERSC with the most advanced high-performance computing system available and are thrilled that this system will be a key part of major research initiatives taking place throughout the country.”

    Following the tradition at NERSC, the system was named for someone who has had an impact on science and/or computing. In 1801, Joseph-Marie Jacquard invented the Jacquard loom, often cited as the first programmable machine. The Jacquard loom used punched cards and a control unit that allowed a skilled user to program detailed patterns on the loom.

    About Linux Networx

    Linux Networx provides proven high-end computing systems that deliver maximum sustained performance and high return on investment to customers. The company’s computing systems are used for simulation, analysis and modeling. Through its innovative Evolocity® hardware, cluster management tools and professional service and support, Linux Networx provides end-to-end clustering solutions. To date, the company has built some of the fastest computing systems in the world, and boasts numerous Fortune 500 customers. For more information about Linux Networx, visit http://www.linuxnetworx.com

    About NERSC

    Established in 1974, the NERSC Center has long been a leader in providing systems, services and expertise to advance computational science throughout the DOE research community. NERSC is managed by Lawrence Berkeley National Laboratory for DOE. For more information about the NERSC Center, go to http://www.nersc.gov

    Copyright © 2001-2006 LinuxHPC.org
    Linux is a trademark of Linus Torvalds
    All other trademarks are those of their owners.