 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering

    Fastest HPC Interconnect with Industry's Lowest Latency, PathScale InfiniPath HTX for InfiniBand
    Posted by Ken Farmer, Thursday June 23 2005 @ 09:02AM EDT

    InfiniPath Scales Better and Delivers Performance Advantages that are 50 to 200 Percent Better than Competitive Interconnect Products

    International Supercomputer Conference - Heidelberg, Germany - 23 June, 2005 - PathScale released new benchmark results this week proving that its new InfiniPath™ interconnect for InfiniBand™ dramatically outperforms competitive interconnect solutions by providing the lowest latency across a broad spectrum of cluster-specific benchmarks. The results were announced at the International Supercomputer Conference 2005 in Heidelberg, Germany.

    PathScale InfiniPath achieved an MPI latency of 1.32 microseconds (as measured by the standard MPI "ping-pong" benchmark), n1/2 message size of 385 bytes and TCP/IP latency of 6.7 microseconds. This represents performance advantages that are 50 percent to 200 percent better than the newly announced Mellanox and Myricom interconnect products. InfiniPath also produced industry-leading benchmarks on more comprehensive metrics that predict how real applications will perform.

    More data is available at:
    http://www.pathscale.com/infinipath-perf.html
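
    The 1.32-microsecond result above is produced by the standard MPI "ping-pong" test mentioned in the announcement. As context for readers unfamiliar with it, a minimal sketch of such a test follows; the iteration count, warm-up length and zero-byte payload are arbitrary choices, and this is an illustration, not PathScale's benchmark code.

    /* Minimal MPI ping-pong latency sketch (illustrative, not PathScale's code).
       Rank 0 sends a message to rank 1, which echoes it back; half the average
       round-trip time is reported as the one-way latency. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, iters = 10000, warmup = 1000, nbytes = 0;   /* zero-byte payload */
        char buf[1];
        double t0 = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < iters + warmup; i++) {
            if (i == warmup) t0 = MPI_Wtime();                /* start timing after warm-up */
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0)
            printf("one-way latency: %.2f microseconds\n",
                   (MPI_Wtime() - t0) / iters / 2.0 * 1e6);

        MPI_Finalize();
        return 0;
    }

    Built with an MPI wrapper compiler (for example, mpicc) and run with one rank on each of two nodes, the printed figure corresponds to the latency number quoted above.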

    The InfiniPath HTX™ Adapter is a low-latency cluster interconnect for InfiniBand™ that plugs into standard HyperTransport technology-based HTX slots on AMD Opteron servers. Optimized for communications-sensitive applications, InfiniPath is the industry's lowest-latency Linux cluster interconnect for message passing (MPI) and TCP/IP applications.

    "When evaluating interconnect performance for HPC applications, it is essential to go beyond the simplistic zero-byte latency and peak streaming bandwidth benchmarks," said Art Goldberg, COO of PathScale. "InfiniPath delivers the industry's best performance on simple MPI benchmarks and provides dramatically better results on more meaningful interconnect metrics such as n1/2 message size (or half-power point), latency across a spectrum of message sizes, and latency across multiprocessor nodes. These are important benchmarks that give better indications of real world application performance. We challenge users to benchmark their own applications on an InfiniPath cluster and see what the impact of this breakthrough performance means to them."

    PathScale InfiniPath uniquely exploits multi-processor nodes and dual-core processors to deliver greater effective bandwidth as additional CPUs are added. Existing serial offload HCA designs cause messages to stack up when multiple processors try to access the adapter. By contrast, InfiniPath's messaging parallelization enables multiple processors or cores to send messages simultaneously, maintaining constant latency while dramatically improving small-message capacity, further reducing the n1/2 message size, and substantially increasing effective bandwidth.
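
    One way to exercise the behavior described above is a multi-pair small-message test in which several ranks per node send at the same time. The sketch below is an illustration of that idea, not PathScale's benchmark: it assumes an even number of ranks, pairs each even rank with the next odd rank, runs all ping-pongs concurrently, and sums the per-rank round-trip rates.

    /* Multi-pair small-message test (illustrative): adjacent ranks ping-pong
       concurrently and the per-rank round-trip rates are summed on rank 0.
       Assumes an even number of ranks; run several ranks per node to exercise
       simultaneous sends from multiple CPUs or cores. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nranks, iters = 100000;
        char buf[8] = { 0 };

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        int peer = (rank % 2 == 0) ? rank + 1 : rank - 1;     /* pair adjacent ranks */

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank % 2 == 0) {
                MPI_Send(buf, 8, MPI_BYTE, peer, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, 8, MPI_BYTE, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, 8, MPI_BYTE, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, 8, MPI_BYTE, peer, 0, MPI_COMM_WORLD);
            }
        }
        double rate = iters / (MPI_Wtime() - t0);   /* round trips/s for this rank's pair */

        double total = 0.0;
        MPI_Reduce(&rate, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("summed per-rank round-trip rate across %d ranks: %.0f/s\n", nranks, total);

        MPI_Finalize();
        return 0;
    }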

    "We compared the performance of PathScale's InfiniPath interconnect on a 16-node/32-CPU test run with VASP, a quantum mechanics application used frequently in our facility, and found that VASP running on InfiniPath was about 50 percent faster than on Myrinet," said Martin Cuma, Scientific Applications Programmer for the Center for High-Performance Computing at the University of Utah. "Standard benchmarks do not give an accurate picture of how well an interconnect will perform in a real-world environment. Performance improvement will vary with different applications due to their parallelization strategies, but InfiniPath almost always delivers better performance than other interconnects when you scale it to larger systems and run communications-intensive scientific codes. InfiniPath has proven to be faster and to scale better for our parallel applications than other cluster interconnect solutions that we tested."

    PathScale InfiniPath Performance Results
    PathScale has published a white paper that includes a technical analysis of several application benchmarks that compare the new InfiniPath interconnect with competitive interconnects. This PathScale white paper can be downloaded from: www.pathscale.com/whitepapers.html

    PathScale Customer Benchmark Center
    PathScale has established a fully-integrated InfiniPath cluster at its Customer Benchmark Center in Mountain View, California. Potential customers and ISVs are invited to remotely test their own MPI and TCP/IP applications and personally experience the clear performance advantages of the InfiniPath low-latency interconnect.

    About PathScale

    Based in Mountain View, California, PathScale develops innovative software and hardware technologies that substantially increase the performance and efficiency of Linux clusters, the next significant wave in high-end computing. Applications that benefit from PathScale's technologies include seismic processing, complex physical modeling, EDA simulation, molecular modeling, biosciences, econometric modeling, computational chemistry, computational fluid dynamics, finite element analysis, weather modeling, resource optimization, decision support and data mining. PathScale's investors include Adams Street Partners, Charles River Ventures, Enterprise Partners Venture Capital, CMEA Ventures, ChevronTexaco Technology Ventures and the Dow Employees Pension Plan. For more details, visit http://www.pathscale.com , send email to sales@pathscale.com or telephone 1-650-934-8100.

         Copyright © 2001-2006 LinuxHPC.org
    Linux is a trademark of Linus Torvalds
    All other trademarks are those of their owners.
        