SpyderByte: WinHPC.org EnterpriseLinux.org BigBlueLinux.org
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering

    LBNL Forms Institute To Improve HPC Performance Analysis
    Monday December 06 2004 @ 01:01PM EST

    In the field of high performance computing, "peak performance" has been wryly defined as the speed the manufacturer guarantees you cannot compute faster than. Peak performance figures make for good marketing, but they provide little insight into actual application performance.

    To help rectify this, for the past eight years Lawrence Berkeley National Laboratory has been developing new tools and techniques for more accurately assessing the performance of high performance computers, especially when it comes to running real-world scientific applications.
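The gap between peak and delivered performance can be made concrete as a simple sustained-to-peak ratio. This is an illustrative sketch; the machine numbers below are hypothetical, not measurements from Berkeley Lab.

```python
# Sketch: why peak FLOP/s is a poor predictor of delivered performance.
# The numbers below are hypothetical, chosen only to illustrate the ratio.

def efficiency(sustained_gflops, peak_gflops):
    """Fraction of theoretical peak actually delivered by an application."""
    return sustained_gflops / peak_gflops

# A system with an impressive peak can still deliver a small fraction of it
# on a real scientific code; memory-bound kernels often see 5-15% of peak.
peak = 6.0        # GFLOP/s per processor (hypothetical)
sustained = 0.6   # GFLOP/s measured on a real application (hypothetical)
print(f"{efficiency(sustained, peak):.0%} of peak")  # -> 10% of peak
```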

    In November, many of these projects in performance characterization, modeling and benchmarking for supercomputers were brought together to form the Berkeley Institute for Performance Studies in Berkeley Lab's Computational Research Division (CRD). Known as BIPS, this umbrella organization will be led by UC Berkeley Professor Kathy Yelick and encompasses several research activities at LBNL and UC Berkeley:

    The Performance Evaluation Research Center (PERC), directed by CRD Chief Technologist David Bailey, is one of seven SciDAC Integrated Software Infrastructure Centers (ISICs). PERC involves approximately 25 researchers at eight centers (four labs and four universities). The goal of PERC is to develop a science for understanding performance of scientific applications on high-end computer systems, and develop engineering strategies for improving performance on these systems. The project is integrating several active efforts in the high performance computing community and is forging alliances with application scientists working on DOE Office of Science missions to ensure that the resulting techniques and tools are truly useful to end users. For detailed information about PERC, go to http://perc.nersc.gov/main.htm .

    The Berkeley Benchmarking and Optimization Group (BeBOP) is led by Kathy Yelick and James Demmel of UC Berkeley, with substantial participation by Berkeley graduate and undergraduate students. Their research areas include:

    * the interaction between application software, compilers, and hardware
    * managing trade-offs among the various measures of performance, such as speed, accuracy, power, and storage
    * automating the performance tuning process, starting with the computational kernels which dominate application performance in scientific computing and information retrieval
    * performance modeling and evaluation of future computer architectures.

    The BeBOP Web site can be found at http://bebop.cs.berkeley.edu/ .
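The automated-tuning idea above can be sketched as an empirical autotuner: generate candidate implementations of a kernel, time each on representative data, and keep the fastest. This is a minimal illustration of the general approach, not BeBOP's actual tuning machinery; the kernel variants and problem size are assumptions.

```python
import timeit

def dot_loop(x, y):
    """Naive elementwise loop variant."""
    total = 0.0
    for a, b in zip(x, y):
        total += a * b
    return total

def dot_builtin(x, y):
    """Variant that pushes the loop into built-ins."""
    return sum(map(lambda ab: ab[0] * ab[1], zip(x, y)))

def autotune(variants, x, y, repeats=5):
    """Return the variant with the best measured runtime on this input."""
    best, best_time = None, float("inf")
    for fn in variants:
        t = min(timeit.repeat(lambda: fn(x, y), number=10, repeat=repeats))
        if t < best_time:
            best, best_time = fn, t
    return best

x = [float(i) for i in range(1000)]
y = [float(i) for i in range(1000)]
winner = autotune([dot_loop, dot_builtin], x, y)
# All variants must agree on the answer; only their speed differs.
assert abs(dot_loop(x, y) - dot_builtin(x, y)) < 1e-6
```

Real autotuners such as those from the BeBOP group search far larger spaces (register blocking, loop unrolling) and tune per matrix and per machine, but the select-by-measurement loop is the same.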

    BeBOP works closely with the UCB LAPACK/ScaLAPACK project, which focuses on new algorithms for numerical linear algebra and new, more efficient implementations of linear algebra software.

    Berkeley Lab's architecture evaluation research project, led by Leonid Oliker and Yelick, is conducted by staff from LBNL's CRD and the NERSC Center Division, as well as collaborators from other institutions. They evaluate emerging architectures, such as processor-in-memory and stream processing, and develop adaptable "probes" to isolate performance-limiting features of architectures. They conducted the first in-depth analysis of state-of-the-art parallel vector architectures, running benchmark studies on the Japanese Earth Simulator System (ESS) and comparison runs on Cray's X1 system. Results on the ESS demonstrated 23 times faster performance than the IBM Power3 in a node-to-node comparison. (See the September issue of CRD Report at http://crd.lbl.gov/html/news/CRDreport0904.pdf for more information on this work.)
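A classic example of such a probe is a STREAM-style triad, whose runtime is dominated by memory traffic rather than arithmetic, so it isolates effective memory bandwidth. The sketch below is an assumed illustration of the probe idea, not the instrument used at LBNL; a real probe would be written in C to get closer to the hardware.

```python
import time

def triad_bandwidth(n=500_000, scalar=3.0):
    """STREAM-like triad a[i] = b[i] + scalar*c[i]; returns (result, GB/s)."""
    b = [1.0] * n
    c = [2.0] * n
    start = time.perf_counter()
    a = [bi + scalar * ci for bi, ci in zip(b, c)]
    elapsed = time.perf_counter() - start
    # Count three streams of 8-byte floats: read b, read c, write a.
    # (Rough accounting: Python object overhead means a C probe would be
    # far closer to the hardware's true bandwidth.)
    bytes_moved = 3 * 8 * n
    return a, bytes_moved / elapsed / 1e9

a, gbps = triad_bandwidth()
assert a[0] == 7.0  # correctness check: 1.0 + 3.0 * 2.0
```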

    NERSC's benchmarking and performance optimization project is carried out by NERSC staff with expertise in performance analysis. They developed the Effective System Performance (ESP) benchmark to measure system-level efficiency and the Sustained System Performance (SSP) benchmark to measure overall system application throughput. SSP resulted in a 30 percent increase in the Seaborg system's capability and is now used in several non-DOE procurements. This team also accelerated several SciDAC application programs running on Seaborg. Read more about ESP at http://www.nersc.gov/projects/esp.php .
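The flavor of a sustained-system-performance metric can be sketched as combining measured per-processor rates on a suite of applications into one whole-machine throughput figure. The geometric-mean formulation and the rates below are illustrative assumptions, not NERSC's published SSP definition.

```python
from math import prod

def ssp(per_proc_gflops, n_procs):
    """Geometric mean of per-processor application rates, scaled to the system."""
    gm = prod(per_proc_gflops) ** (1.0 / len(per_proc_gflops))
    return gm * n_procs

# Hypothetical sustained rates (GFLOP/s per processor) on three applications,
# on a hypothetical 6080-processor system:
rates = [0.4, 0.9, 0.6]
print(f"SSP ~ {ssp(rates, 6080):.0f} GFLOP/s")  # -> SSP ~ 3648 GFLOP/s
```

Because the geometric mean penalizes a machine that is fast on some codes but very slow on others, a metric like this rewards balanced, sustained throughput rather than a single headline benchmark.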

    Yelick, a professor of computer science at UC Berkeley with a joint appointment in LBNL's Computational Research Division, has been named to lead the newly established BIPS. She will also be leading CRD's Future Technologies Group (FTG). Yelick's appointment, which includes a leave of absence from her teaching position, officially takes effect Jan. 1, 2005.

    The main goal of Yelick's research is to develop techniques for obtaining high performance on a wide range of computational platforms and to ease the programming effort required to obtain improved performance. She is perhaps best known for her efforts in global address space languages, which attempt to present the programmer with a shared memory model for parallel programming. These efforts have led to the design of Unified Parallel C (UPC), which merged some of the ideas from three shared address space dialects of C: Split-C, AC and PCP. In recent years, UPC has gained recognition as an alternative to message passing programming for large-scale machines. Compaq, Sun, Cray, HP, and SGI are implementing UPC, and Yelick is currently leading a large effort at LBNL to implement UPC on Linux clusters and IBM machines and to develop new optimizations. The language provides a uniform programming model for both shared and distributed memory hardware. Read more at http://upc.lbl.gov/. She has also worked on other global address space languages such as Titanium, which is based on Java.
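The owner-computes pattern behind UPC's `upc_forall` can be loosely mimicked in Python threads: every worker indexes one shared array directly, but each updates only the elements whose affinity matches its rank. This is only a rough analogy under stated assumptions; real UPC is a C dialect (`shared` arrays, `upc_forall`) that runs across distributed memory, which this sketch cannot show.

```python
from concurrent.futures import ThreadPoolExecutor

THREADS = 4
N = 16
shared = [0] * N  # stands in for a UPC `shared int shared[N]`

def owner_computes(rank):
    # Each rank touches only indices i with i % THREADS == rank,
    # mirroring upc_forall's affinity expression: no two ranks ever
    # write the same element, so no locking is needed.
    for i in range(rank, N, THREADS):
        shared[i] = i * i

with ThreadPoolExecutor(max_workers=THREADS) as pool:
    list(pool.map(owner_computes, range(THREADS)))

assert shared == [i * i for i in range(N)]
```

The appeal of the global address space model is visible even here: workers read and write one logical array by index, instead of packing and exchanging messages as in MPI-style programming.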

    Yelick has also done notable work on single-processor optimizations, including techniques for automatically tuning sparse matrix algorithms for memory hierarchies. These efforts are part of the NSF-funded BeBOP (Berkeley Benchmarking and Optimization) project, which develops methods to exploit special structure, such as symmetry, and kernels such as triangular solves.
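Exploiting symmetry is easy to illustrate: store only one triangle of a symmetric sparse matrix and apply each off-diagonal entry to both (i, j) and (j, i), halving storage and memory traffic. The sketch below is illustrative, not the BeBOP implementation.

```python
def symm_spmv(entries, x):
    """y = A @ x for symmetric A given as lower-triangle entries (i, j, v), i >= j."""
    y = [0.0] * len(x)
    for i, j, v in entries:
        y[i] += v * x[j]
        if i != j:               # each off-diagonal entry acts on both sides
            y[j] += v * x[i]
    return y

# A = [[2, 1, 0],
#      [1, 3, 4],
#      [0, 4, 5]] stored as its lower triangle only:
lower = [(0, 0, 2.0), (1, 0, 1.0), (1, 1, 3.0), (2, 1, 4.0), (2, 2, 5.0)]
print(symm_spmv(lower, [1.0, 1.0, 1.0]))  # -> [3.0, 8.0, 9.0]
```

Since sparse matrix-vector multiply is memory-bound, halving the entries that stream through the memory hierarchy can translate directly into speedup, which is why structure like this matters for the memory-hierarchy tuning described above.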

    Yelick's research on architectures for memory-intensive applications has also produced notable results, in particular on mixing logic and DRAM on a single chip, which avoids off-chip accesses to DRAM, gaining bandwidth while lowering latency and energy consumption. In the IRAM project, a joint effort with David Patterson, she developed an architecture to take advantage of this technology. The IRAM processor is a single-chip system designed for low power and high performance on multimedia applications, achieving an estimated 6.4 GOP/s in a two-watt design. The IRAM architecture is based on vector instructions, historically reserved for expensive vector supercomputers designed for large-scale scientific and engineering applications.

    Yelick earned her bachelor's (1985), master's (1985), and Ph.D. (1991) degrees in electrical engineering and computer science from the Massachusetts Institute of Technology. Her research interests include parallel computing, memory hierarchy optimizations, programming languages and compilers. You can read her UC Berkeley Web page at http://www.cs.berkeley.edu/~yelick/ .

    Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California. Learn more at http://www.lbl.gov .




         Copyright © 2001-2006 LinuxHPC.org
    Linux is a trademark of Linus Torvalds
    All other trademarks are those of their owners.