SpyderByte.com Technical Portals
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
    Latest News

    $105 million goes to computing center
    Posted by Kenneth Farmer, Wednesday April 25 2007 @ 10:51AM EDT

    Stanford News: The U.S. Army has awarded a $105 million, five-year grant to a multi-institution consortium led by Stanford University to build a new home for the Army High-Performance Computing Research Center. The facility will enable advanced simulations to develop new materials for military vehicles and equipment, improve wireless battlefield communication, advance detection of biological or chemical attacks, and stimulate innovations in supercomputing itself. The research may spawn civilian innovations as well.

    "Modeling and simulation today play an equal role to theory and physical experimentation in discovery-driven engineering research," says Charbel Farhat, a professor of mechanical engineering and expert on supercomputer simulation who is also a member of the Stanford School of Engineering's Institute for Computational and Mathematical Engineering. "Using the most advanced high-performance computing resources, a research center of this magnitude has great potential for innovating technology and reducing design-cycle time."


    Press Release follows:

    HPTi Wins $215 Million Army Cooperative Agreement for High Performance Computing Research

    RESTON, Va.--(BUSINESS WIRE)--High Performance Technologies, Inc. (HPTi) announced today the company, as part of a consortium, was awarded a cooperative agreement valued at $215 million from the Army Research Laboratory to manage the Army’s High Performance Computing Research Center (AHPCRC). The consortium includes HPTi, Stanford University, NASA Ames Research Center, New Mexico State University, Morgan State University, and the University of Texas at El Paso.

    The AHPCRC conducts fundamental research in computational science and high performance scientific computing. With its consortium partners, HPTi will establish and implement an advanced computing research program in support of the Army’s mission. The program, which includes a five-year base period with an additional five-year option, will focus on enhancing the Army’s operational readiness by providing HPC-based modeling and simulation techniques and advanced visualization. HPTi will lead the computational science support to end users and the program management elements of the cooperative agreement. HPTi will also acquire, install, operate, and manage the program’s HPC system resources.

    “This is a premier program in the HPC industry, and we are very proud to have been selected by the Army to provide these key services to lead the transformation of future land warfare combat systems,” said HPTi president Tim Keenan. “Our roots are in high performance computing, and the chance to use our expertise to work on critical programs for the Army is especially meaningful to our mission of making America a safer place to live.”

    The research program is led by Stanford University and consortium partners New Mexico State University, Morgan State University, the University of Texas at El Paso and the NASA Ames Research Center. HPTi will provide HPC operations and computational sciences support at multiple Army research sites including the Army Research Laboratory at Aberdeen, MD and at the NASA Ames Research Center, Moffett Field, CA.

    The program focuses on four key research areas: lightweight combat systems survivability; computational nanotechnologies and bio-sciences; computational battlefield network and information sciences; and HPC enabling technologies and advanced algorithmic development.

    “We are excited about the impact that our unique and innovative team will have on the Army’s future mission,” said HPTi Group Vice President Scott F. Miller. “The AHPCRC program will further enhance our ability to provide solutions for the DoD’s most advanced weapons platforms and science programs.”

    The Army’s continuing investments in HPC have resulted in increased use of computer-based modeling and simulation by the Army scientific and engineering community. The establishment of the Center and the continuation of the AHPCRC Program embody the Army’s support of the broader national effort to maintain U.S. leadership in computing technology and its application to issues critical to U.S. national security.

    About HPTi

    Headquartered in Reston, Virginia, High Performance Technologies, Inc. (HPTi) delivers critical results to the federal government by integrating high-end systems engineering, experience in performance-based architectures, and sound IT investment and management processes. Going beyond advanced systems design and development, HPTi’s offerings include advanced concept systems engineering, HPC, architecture design and implementation, technical assessments, technology standardization, and infrastructure design and protection.



    Copyright © 2001-2007 LinuxHPC.org
    Linux is a trademark of Linus Torvalds.
    All other trademarks are those of their owners.