SpyderByte: WinHPC.org EnterpriseLinux.org BigBlueLinux.org
      
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering


    Latest News

    Emerald Cluster Shatters Benchmarks on HPC Challenge
    Posted by Kenneth Farmer, Tuesday November 15 2005 @ 12:00PM EST

    PathScale InfiniPath Powers One of the World's Largest Dual-Core AMD Opteron™ Processor-Based InfiniBand Clusters

    "Emerald" Cluster at the AMD Developer Center Shatters Benchmarks on HPC Challenge

    MOUNTAIN VIEW, Calif., November 15, 2005 -- PathScale, the developer of innovative software and hardware solutions to accelerate high performance computing, today announced that its InfiniPath™ HTX™ InfiniBand® Adapters have been deployed by AMD in its Developer Center to maximize application scaling on the newly installed Dual-Core AMD Opteron™ processor-based "Emerald" cluster. The combination of Dual-Core AMD Opteron processors and the InfiniPath interconnect is already demonstrating unprecedented performance, enabling Emerald to outperform traditional supercomputers in several critical benchmarks in the latest High Performance Computing (HPC) Challenge.

    The HPC Challenge, sponsored by DARPA, the National Science Foundation (NSF), and the U.S. Department of Energy (DOE), consists of seven benchmarks that evaluate how HPC systems handle real-world applications. Based on the most recent results, the AMD Emerald cluster with the InfiniPath InfiniBand interconnect outperformed much larger supercomputer systems. For instance, the 512-core AMD Opteron processor-based Emerald cluster configuration outperformed the highest-end systems from the three leading supercomputing suppliers in the RandomAccess (GUPS), Random Ring Latency, and Natural Ring Latency benchmarks. These benchmarks are highly sensitive to memory-update performance and the speed of network communications, and showcase the clear performance advantages of the InfiniPath interconnect and Dual-Core AMD Opteron processors with Direct Connect Architecture.
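
    Of these, the RandomAccess (GUPS) test is essentially a storm of random read-modify-write operations on a large in-memory table, which makes it punishing for both the memory system and the interconnect. As a rough illustration only (not the official HPCC code, which distributes the table across MPI ranks and uses a specific random-number generator), the core update loop looks something like this in C:

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Sketch of a GUPS-style update loop on a single node. Assumes a
           power-of-two table size so the index can be taken with a mask. */
        int main(void) {
            size_t table_size = (size_t)1 << 20;   /* 1M entries, illustration only */
            uint64_t *table = malloc(table_size * sizeof *table);
            if (!table) return 1;
            for (size_t i = 0; i < table_size; i++)
                table[i] = i;

            uint64_t ran = 1;
            for (size_t i = 0; i < 4 * table_size; i++) {
                /* A simple 64-bit LCG stands in for HPCC's polynomial generator. */
                ran = ran * 6364136223846793005ULL + 1442695040888963407ULL;
                table[ran & (table_size - 1)] ^= ran;  /* random read-modify-write */
            }

            printf("table[0] = %llu\n", (unsigned long long)table[0]);
            free(table);
            return 0;
        }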

    The AMD Emerald cluster, supplied by Rackable Systems, comprises 144 nodes, each with two 2.2 GHz Dual-Core AMD Opteron processors, for a total of 576 processing cores. Each node is equipped with a single PathScale InfiniPath HTX InfiniBand Adapter connected to a SilverStorm 9120 144-port InfiniBand switch. The AMD Emerald system, which achieved 2.1 TFLOPs on the Linpack benchmark, has been submitted to the Top500® supercomputing list, which ranks the world's 500 most powerful supercomputers. It is one of the largest publicly accessible Dual-Core AMD Opteron processor-based InfiniBand clusters in the world.
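
    As a rough sanity check on those numbers (assuming this generation of Opteron core peaks at two double-precision floating-point operations per clock, the figure usually quoted for K8): 576 cores × 2.2 GHz × 2 FLOPs/cycle ≈ 2.53 TFLOPs theoretical peak, so the reported 2.1 TFLOPs Linpack result would correspond to roughly 83% of peak, a respectable efficiency for a commodity cluster of this size.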

    "When matching up the low-latency PathScale InfiniPath interconnect with the low-latency Direct Connect Architecture of AMD Opteron processors, the Emerald cluster produces phenomenal results," said Pat Patla, director, server/workstation marketing, Microprocessor Solutions Sector, AMD (NYSE: AMD). "We have worked with PathScale to showcase this incredible performance through Emerald, the most powerful cluster ever implemented at the AMD Developer Center."

    Located at the AMD Developer Center in Sunnyvale, Calif., Emerald is designed to provide AMD's development collaborators and customers with a way to benchmark and test performance-sensitive computing applications using the company's Dual-Core AMD Opteron processor technology. PathScale's InfiniPath is a cluster interconnect that plugs directly into the HyperTransport interface on AMD Opteron processor-based servers and is designed to dramatically improve communications within the cluster.
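
    The benefit of a low-latency interconnect like this is typically quantified with a simple MPI ping-pong microbenchmark. The following is a generic, minimal sketch of that technique (not PathScale's own test code): two ranks bounce a zero-byte message back and forth, and the average one-way time is reported.

        #include <mpi.h>
        #include <stdio.h>

        /* Minimal MPI ping-pong latency sketch: ranks 0 and 1 exchange a
           zero-byte message repeatedly; run with exactly two ranks. */
        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);

            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            const int iters = 10000;
            char buf = 0;
            MPI_Barrier(MPI_COMM_WORLD);
            double t0 = MPI_Wtime();

            for (int i = 0; i < iters; i++) {
                if (rank == 0) {
                    MPI_Send(&buf, 0, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(&buf, 0, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                } else if (rank == 1) {
                    MPI_Recv(&buf, 0, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                    MPI_Send(&buf, 0, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }

            double elapsed = MPI_Wtime() - t0;
            if (rank == 0)   /* half the round-trip time is the one-way latency */
                printf("avg one-way latency: %.2f us\n",
                       elapsed / (2.0 * iters) * 1e6);

            MPI_Finalize();
            return 0;
        }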

    "The HPC Challenge benchmark results prove that InfiniPath can scale AMD Opteron processor-based clusters to performance levels that exceed systems from some of today's supercomputing giants," said Scott Metcalf, CEO of PathScale. "These benchmarks further validate the performance advantages of InfiniBand, and should demonstrate to the scientific and engineering communities that they no longer have to rely on proprietary technologies from the traditional, high-priced supercomputing suppliers. They can now gain an advantage by leveraging AMD Opteron processor-based Linux clusters and the InfiniPath interconnect to build cost-effective systems for their most demanding applications."

    The PathScale InfiniPath interconnect helps deliver on the promise of Linux cluster computing by significantly lowering communications latency, which in turn improves the performance of complex applications. The technology enables scientists, engineers and researchers to solve a whole new class of computational challenges more effectively, from weather modeling and aerospace design to drug discovery and oil and gas research. Today, the InfiniPath interconnect is used by leading scientific and engineering organizations in both the private and government sectors.

    The AMD Developer Center has helped hundreds of innovators test and optimize their products, enterprise configurations and HPC clusters for AMD64 technology. Located in Sunnyvale, Calif., the AMD Developer Center provides on-site technical support and global virtual access to the AMD64 environment - enabling secure, scheduled sessions onsite or remotely at http://developer.amd.com.

    ABOUT PATHSCALE

    PathScale Inc. develops technologies that enable breakthroughs in high performance computing, science and engineering. The PathScale InfiniPath™ HTX™ Adapter and EKOPath Compiler Suite drive Linux® clusters to benchmark results that exceed those of the world's most powerful supercomputers. Today, PathScale technologies are the choice of leading scientific and engineering organizations to more effectively solve complex computational challenges, from weather modeling and aerospace design to drug discovery. PathScale is headquartered in Mountain View, Calif. For more information, visit http://www.pathscale.com

    Linux is a registered trademark of Linus Torvalds. HyperTransport and HTX are licensed trademarks of the HyperTransport Technology Consortium. AMD, AMD Opteron and combinations thereof are trademarks of Advanced Micro Devices, Inc. InfiniBand is a registered trademark of the InfiniBand Trade Association. PathScale, the PathScale logo and InfiniPath are trademarks of PathScale, Inc. All other product names mentioned are trademarks of their respective owners.



    Golden Eggs
    (HP Visual Diagram and Config Guides)
    Integrity:
    Integrity Family Portrait (rx1620 thru rx8620), IA64
    rx1620 1600MHz 2P MSA1000 Cluster IA64
    rx2620 1600MHz 2P MSA1000 Cluster IA64
    rx4640 1600MHz 4P MSA1000 Cluster IA64
    rx7620 1600MHz 8P 10U Systems and MSA1000 Cluster
    rx8620 1600MHz 16P 17U Systems and MSA1000 Cluster
    ProLiant:
    DL140 3060MHz 2P IA32
    DL140 G2 3600MHz 2P EM64T
    DL145 2600MHz 2P Opteron
    DL145 G2 2600MHz 2P Opteron Dual Core
    DL360 G4 3400MHz 2P EM64T
    DL360 G4p 3800MHz 2P EM64T
    DL380 G4 3800MHz 2P EM64T
    DL385 2800MHz 2P Opteron Dual Core
    DL560 3000MHz 4P IA32
    DL580 G3 3330MHz 4P EM64T
    DL585 2800MHz 4P Opteron Dual Core
    Clusters:
    CP3000 32x DL140G2 & DL360G4p GigE EM64T
    CP4000 32x DL145G2 GigE Opteron, Dual Core
    CP4000 64x DL145 GigE Opteron
    CP4000 102x DL145 GigE Opteron
    CP4000 32x DL145 Myri Opteron
    Rocks Cluster 16-22 DL145 Opteron
    Rocks Cluster 30-46 DL145 Opteron
    Rocks Cluster 64-84 DL145 Opteron
    LC3000 GigaE 24-36 DL145 Opteron
    LC3000 Myri 16-32x DL145 Opteron
    LC3000 GigaE 16-22x DL145 Opteron
    LC2000 GigaE 16-22x DL360G3 Xeon
    Storage:
    MSA30-MI Dual SCSI Cluster, rx1620...rx4640
    MSA500 G2, SCSI
    MSA1510i IP SAN 48TB, SCSI and SATA
    MSA1500 48TB, SCSI and SATA
    Misc:
    Dual Core AMD64 and EM64T systems with MSA1500






    Linux High Performance Computing Whitepapers
    Unified Cluster Portfolio:
    A comprehensive, modular package of tested and pre-configured hardware, software and services for scalable computation, data management and visualization.

    Your Fast Track to Cluster Deployment:
    Designed to enable faster ordering and configuration, shorter delivery times and increased savings. Customers can select from a menu of popular cluster components, which are then factory assembled into pre-defined configurations with optional software installation.
    Message Passing Interface library (HP-MPI):
    A high-performance, production-quality implementation of the Message-Passing Interface (MPI) standard for HP servers and workstations; see the minimal MPI example after this list.

    Cluster Platform Express:
    Cluster Platform Express comes straight to you, factory assembled and available with pre-installed software for cluster management, and ready for deployment.
    AMD Opteron-based ProLiant nodes | Intel Xeon-based ProLiant nodes
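
    As a minimal illustration of the MPI programming model that HP-MPI, like any standards-conforming MPI library, supports, the sketch below uses only standard MPI calls. The mpicc and mpirun commands in the comment are the conventional MPI wrapper tools; treat the exact build and launch invocations as assumptions, since details vary by installation.

        #include <mpi.h>
        #include <stdio.h>

        /* Minimal MPI program: each rank reports its rank and the world size.
           Typically built with `mpicc hello.c -o hello` and launched with
           `mpirun -np 4 ./hello` (exact commands depend on the MPI install). */
        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            printf("hello from rank %d of %d\n", rank, size);

            MPI_Finalize();
            return 0;
        }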



         Copyright © 2001-2006 LinuxHPC.org
    Linux is a trademark of Linus Torvalds
    All other trademarks are those of their owners.
        
SpyderByte.com Technical Portals