SpyderByte: WinHPC.org EnterpriseLinux.org BigBlueLinux.org
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering

    Industry's Highest Performance Linux Cluster Interconnect Now Shipping in Volume
    Wednesday August 10 2005 @ 10:19AM EDT

    LinuxWorld 2005: World's First AMD Dual-Core Linux Cluster Using PathScale InfiniPath Ultra Low Latency Interconnect Deployed at University of California


San Francisco, CA - August 10, 2005 - PathScale, developer of innovative software and hardware solutions that accelerate the performance and efficiency of Linux® clusters, is now shipping its InfiniPath™ HTX™ InfiniBand™ Adapter, the industry's lowest-latency Linux cluster interconnect for message-passing (MPI) and TCP/IP applications. The University of California at Davis is an early InfiniPath customer and operates the very first AMD dual-core Opteron cluster deployed with PathScale InfiniPath. The announcement was made today at LinuxWorld in San Francisco.

    More details on InfiniPath are available at:

The ultra-low latency and unprecedented messaging rate of the PathScale InfiniPath HTX Adapter greatly improve MPI application performance and Linux cluster utilization, enabling scientists, mathematicians and engineers to solve new problems with higher degrees of resolution than ever before. The highly pipelined, cut-through design of InfiniPath is optimized for applications sensitive to communication latency, the most difficult problem to overcome when migrating from large SMP systems to clusters. InfiniPath delivers superior interconnect performance at commodity price levels by implementing a high-performance software stack and connecting directly to the AMD Opteron™ processor via a standard HyperTransport HTX slot. When combined with low-latency InfiniBand switching from Cisco (TopSpin), SilverStorm (formerly InfiniCon) or Voltaire, InfiniPath enables applications to scale reliably to hundreds or thousands of nodes.

PathScale has published new performance results, including the Pallas Benchmark Suite and the HPC Challenge Benchmarks, that validate the performance advantages of PathScale InfiniPath as the highest-performance cluster interconnect for Linux-based HPC applications. They can be viewed at http://www.pathscale.com/infinipath-perf.html

    Among the first customers to adopt the PathScale InfiniPath interconnect is the Center for Computational Science and Engineering (CSE) at the University of California, Davis. CSE is implementing a 144-CPU AMD Opteron processor-based Linux cluster that leverages InfiniPath to run computational models and simulations related to physics, mathematics, engineering, biomedical diagnostics, and other processor-intensive HPC applications. This deployment consists of 36 server nodes from TeamHPC, a division of M&A Technology. Each server is equipped with two dual-core AMD Opteron processors and an InfiniPath HTX InfiniBand Adapter. They are interconnected with a Cisco TopSpin 270 InfiniBand switch.
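As a sanity check, the quoted CPU count follows directly from the per-node configuration described above (all numbers taken from that paragraph):

```python
# Cluster sizing for the UC Davis CSE deployment described above:
# 36 TeamHPC server nodes, each with two dual-core AMD Opteron processors.
nodes = 36
sockets_per_node = 2   # two Opteron packages per server
cores_per_socket = 2   # dual-core parts

total_cpus = nodes * sockets_per_node * cores_per_socket
print(total_cpus)  # -> 144, matching the "144-CPU" figure quoted above
```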

    "We support scientists and academic researchers working to analyze and visualize highly complex physical and biological processes," said Bill Broadley, an Information Architect at UC Davis. "We require our compute resources to facilitate the best possible performance for our many communications-intensive applications. The PathScale InfiniPath Adapter is performing exceptionally thus far."

PathScale InfiniPath outperforms competing interconnect solutions by achieving the lowest latency across a broad spectrum of tests that indicate how real applications will actually perform in HPC environments. InfiniPath has achieved an MPI latency of 1.32 microseconds (as measured by the standard MPI "ping-pong" benchmark), an n½ (half-bandwidth) message size of 385 bytes, and a TCP/IP latency of 6.7 microseconds. Using the Random Ring Latency test from the HPC Challenge Benchmarks on 32-processor systems, PathScale InfiniPath produced results ranging from 3X to 10X faster than alternative high-speed interconnects.
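The n1/2 figure quoted above comes from the standard linear (Hockney) cost model for point-to-point messaging, t(n) = t0 + n/B: n1/2 = t0·B is the message size at which half of the asymptotic bandwidth B is reached. A minimal sketch of that relationship; the bandwidth value below is a back-of-the-envelope assumption for illustration, not a PathScale-published number:

```python
def n_half(latency_s: float, bandwidth_bytes_per_s: float) -> float:
    """Message size (bytes) at which half the asymptotic bandwidth is
    reached under the linear model t(n) = latency + n / bandwidth."""
    # Half bandwidth means t(n_half) = 2 * n_half / bandwidth,
    # which solves to n_half = latency * bandwidth.
    return latency_s * bandwidth_bytes_per_s

# With the 1.32 us MPI latency quoted above, the published n_half of
# 385 bytes implies an effective bandwidth near 292 MB/s (assumed here):
print(round(n_half(1.32e-6, 2.917e8)))  # -> 385
```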

    "PathScale's mission is to enable users to reliably and efficiently solve their most challenging computational problems. The performance results achieved on real world HPC applications and with key application benchmarks run on installations such as the new Linux cluster at UC Davis prove that PathScale InfiniPath is, without question, the world's highest performance cluster interconnect," said Scott Metcalf, CEO of PathScale. "PathScale's innovative approach to high-speed InfiniBand interconnect reduces the workload required to process messages, thereby increasing the effective message rate to unprecedented levels. PathScale's InfiniPath hardware and software constitute the industry's first commercial grade InfiniBand solution, and establishes new standards for InfiniBand performance."

The new cluster at UC Davis CSE was designed and integrated by TeamHPC, a division of M&A Technology, Inc. based in Eudora, Kansas, near Kansas City, and a charter member of the PathScale FastPath reseller program described at www.pathscale.com/fastpath_partners.html. "TeamHPC and PathScale have worked closely to test and implement a highly efficient, cost-effective, high-performance research platform at UC Davis that enables scientists, academics and graduate students to overcome the performance bottlenecks of computing systems of the past," said Bret Stouder, Vice President of TeamHPC. "The combined performance of AMD Opteron processors and the low-latency PathScale InfiniPath interconnect, along with complete testing and integration solutions from TeamHPC, opens a new chapter in high performance computing, where an economically priced system does not mean compromised performance."

    About UC Davis CSE

The Center for Computational Science and Engineering (CSE) at the University of California, Davis is concerned with the development of computational models and simulations as a means of understanding complex physical and biological processes, and of modeling and visualizing entirely abstract processes encountered in physics, mathematics, engineering and computer science. Read more at http://www.cse.ucdavis.edu

    About TeamHPC

TeamHPC, a division of M&A Technology, specializes in High Performance Computing and assembles and integrates all of its products in an ISO 9000:2000-certified manufacturing plant. TeamHPC gives researchers access to its clusters for benchmark and application testing before products are shipped. Carving new paths in the HPC market, TeamHPC also provides a 24-hour data center environment that allows researchers to host their computational machines at M&A Technology's headquarters in Dallas, TX. More information about TeamHPC is available at http://www.teamhpc.com

    About PathScale

    Based in Mountain View, California, PathScale develops innovative software and hardware technologies that substantially increase the performance and efficiency of Linux clusters, the next significant wave in high-end computing. Applications that benefit from PathScale's technologies include seismic processing, complex physical modeling, EDA simulation, molecular modeling, biosciences, econometric modeling, computational chemistry, computational fluid dynamics, finite element analysis, weather modeling, resource optimization, decision support and data mining. PathScale's investors include Adams Street Partners, Charles River Ventures, Enterprise Partners Venture Capital, CMEA Ventures, GF Private Equity LLC, ChevronTexaco Technology Ventures and the Dow Employees Pension Plan. For more details, visit http://www.pathscale.com , email sales@pathscale.com or telephone 1-650-934-8100.




         Copyright © 2001-2006 LinuxHPC.org
    Linux is a trademark of Linus Torvalds
    All other trademarks are those of their owners.