The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
    ASPEED Software selects Critical Software’s Fault Tolerant Message Passing Technology
    Monday February 14 2005 @ 02:34PM EST


    Coimbra, Portugal and New York, NY – February 14, 2005 – ASPEED Software plans to include Critical Software’s high-performance, fault-tolerant message passing technology in the next release of ACCELLERANT, expected later this month. “We were looking for a robust messaging technology that would deliver optimum and reliable communication performance for moving data among our ACCELLERANT On-Demand Application servers,” said Kurt Ziegler, Executive Vice President of Development at ASPEED. “We needed middleware that would fit seamlessly into our application software and would be totally transparent to the underlying infrastructure; only Critical Software could deliver.”

    Critical Software is well known in the high performance computing (HPC) sector, which makes extensive use of its WMPI II software product. WMPI II is a full implementation of version 2 of the MPI (Message Passing Interface) standard.

    To address ASPEED’s needs, Critical Software developed a new message passing layer called WMX, which provides high-performance messaging combined with the level of fault tolerance and abstraction necessary to run the business-critical ASPEED ACCELLERATED applications. WMX uses a master-slave paradigm, both to ensure that failed and stalled nodes can be restarted by ACCELLERANT’s fault detection services and to support adaptive, dynamic load balancing across widely distributed Linux- and Windows-based clusters such as those found in Grid environments.
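The master-slave restart pattern described above can be sketched generically. The following is a minimal, hypothetical illustration in Python — it is not the WMX or ACCELLERANT API; the function names, the single worker, and the timeout-based fault detection are assumptions made purely for the example. A master process dispatches tasks to a slave worker and, if the worker dies or stalls, restarts it and re-submits the unfinished work.

```python
import multiprocessing as mp
import queue

# "fork" is POSIX-only; it keeps this single-file sketch simple.
ctx = mp.get_context("fork")

def worker(task_q, result_q):
    # Slave loop: pull tasks from the master until the None sentinel.
    for task in iter(task_q.get, None):
        result_q.put((task, task * task))  # stand-in computation

def run_master(tasks, timeout=10.0):
    # Master: dispatch tasks, collect results, and restart the slave
    # if it fails or stalls (crude fault detection via timeout).
    task_q, result_q = ctx.Queue(), ctx.Queue()
    results = {}
    proc = ctx.Process(target=worker, args=(task_q, result_q))
    proc.start()
    for t in tasks:
        task_q.put(t)
    while len(results) < len(tasks):
        try:
            task, value = result_q.get(timeout=timeout)
            results[task] = value
        except queue.Empty:
            # Worker presumed dead or stalled: restart it and
            # re-submit every task that has not yet completed.
            proc.terminate()
            proc.join()
            proc = ctx.Process(target=worker, args=(task_q, result_q))
            proc.start()
            for t in tasks:
                if t not in results:
                    task_q.put(t)
    task_q.put(None)  # sentinel: tell the slave to exit cleanly
    proc.join()
    return results

if __name__ == "__main__":
    print(run_master([1, 2, 3]))  # {1: 1, 2: 4, 3: 9}
```

A production system would track many workers with per-node heartbeats rather than one worker with a timeout; the sketch only shows the restart-and-resubmit idea that distinguishes this paradigm from checkpoint-based recovery.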

    “ASPEED’s requirements for a high-performance, robust message passing technology could not be met by a standard MPI implementation,” said Peter Tyndale, Product Manager of Critical Software’s HPC division. “The most common paradigm for fault tolerance in MPIs is check-pointing, but this is expensive in terms of CPU cycles and is inflexible. Our approach with WMX maintains the low latencies and high throughput required while allowing jobs to be dynamically managed, ensuring the integrity of business-critical applications,” he added.
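The cost argument against check-pointing can be made concrete with a rough back-of-envelope model. The functions and numbers below are illustrative assumptions, not figures from Critical Software: periodic checkpointing pays a state-saving cost at every interval even in failure-free runs, while re-dispatching failed tasks pays only for the work that actually failed.

```python
def total_with_checkpointing(n_steps, step_cost, ckpt_cost, interval):
    # A checkpoint is written every `interval` steps, paying ckpt_cost
    # each time, whether or not a failure ever occurs.
    return n_steps * step_cost + (n_steps // interval) * ckpt_cost

def total_with_redispatch(n_tasks, task_cost, n_failures):
    # Failed tasks are simply handed to another node, so only the
    # failed work is repeated.
    return (n_tasks + n_failures) * task_cost

if __name__ == "__main__":
    # 100 units of work at 1s each; a checkpoint every 10 steps costs 5s.
    print(total_with_checkpointing(100, 1.0, 5.0, 10))  # 150.0
    print(total_with_redispatch(100, 1.0, 2))           # 102.0
```

Under these assumed numbers the checkpointed run pays a ~50% overhead up front, while re-dispatching pays ~2% and only when failures actually happen — the trade-off the quote is pointing at.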

    Critical Software’s HPC team can assist ISVs with integrating their applications with WMPI II, as well as providing customised message passing middleware. A free evaluation version of WMPI II is available for download from the web.

    ** About Critical Software **

    Critical Software, of San Jose, California and Coimbra, Portugal, is a leading provider of middleware for High-Performance Computing. The company develops dependable solutions and technologies for mission- and business-critical systems where performance plays a key role. The company has grown from 3 to 95 engineers between 1998 and 2005, and is dedicated to solutions engineering and product development in several sectors and markets. Critical Software’s HPC business started in 2000 with the release of the entry-level WMPI product, providing an MPI-1.2 implementation for Windows clusters. Critical Software’s MPI middleware is used worldwide to power hundreds of compute-intensive applications in industry and academia. Continuous investment in R&D has enabled Critical to stay ahead of the competition, with WMPI II being the only commercially available MPI-2 implementation for Windows clusters. Linux support has been added to provide a comprehensive solution for parallel processing across COTS clusters.

    ** About ASPEED **

    ASPEED Software Corporation is a privately held, venture funded software company based in the heart of New York City with development centres in New York and London.

    ASPEED’s value proposition is to enable clients with computationally intensive applications to significantly improve response times and reduce run times. ASPEED’s ACCELLERANT software enables users to quickly adapt their existing applications to less expensive and newer hardware technologies in multi-processor, cluster and grid configurations. ACCELLERANT’s algorithm-aware API enables the distribution of applications thought by many to be “undistributable,” improving competitiveness, productivity and accuracy of analysis, and allowing those applications to participate in company cluster and grid activity.

    For more information:
    Critical Software, SA
    Mr. Peter Tyndale
    Tel. +351 239 989 100


         Copyright © 2001-2005
    Linux is a trademark of Linus Torvalds
    All other trademarks are those of their owners.