SpyderByte: WindowsHPC.org EnterpriseLinux.org BigBlueLinux.org
      
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
Home About News Archives Contribute News, Articles, Press Releases Mobile Edition Contact Sponsorship Search Privacy
    Latest News

    New Supercomputer Due At The U: 1,000-Computer 'Metacluster' To Tackle Tough Biomedical Problems
    Wednesday August 20 2003 @ 02:43PM EDT

    August 13, 2003 -- Construction of a $2 million supercomputer composed of 1,000 smaller computers will begin in September at the University of Utah, where researchers will use the powerful machine to tackle complex problems in biomedical research.

    “This will be by far the largest computer in the state of Utah for scientific research,” says physicist Julio Facelli, director of the university’s Center for High Performance Computing.

    When the so-called “metacluster” supercomputer is assembled by the end of 2003 and tests are performed that show where it ranks in computing power, Facelli says he expects “it will be among the 20 to 30 most powerful computers in the world,” excluding classified military and government computers that are not ranked.

    The Center for High Performance Computing received a $1,531,008 grant last year from the National Center for Research Resources at the National Institutes of Health. It will combine that money with $500,000 in University of Utah funds to pay for the $2 million supercomputer, which will be named Arches after Utah’s famed Arches National Park.

    It is called a metacluster because it will be built from five clusters, each of which in turn contains many individual computers. Facelli says it is “like 1,000 desktop computers, all connected together.”

    “This is a significant computing resource of national caliber that will allow our researchers to tackle some of the most challenging biomedical problems,” says Facelli, an adjunct professor of physics, chemistry and medical informatics. “We are very interested in using this system to perform more in-depth analysis of the vast amount of biomedical data at the University of Utah, and couple that data analysis with advanced simulations that will allow us to more precisely understand biological processes.”

    After a bidding process, the center recently chose Angstrom Microsystems, Inc. of Boston to provide the components and assemble the “metacluster” supercomputer, which will include 1,000 individual Opteron processors made by Advanced Micro Devices, Inc. (AMD) of Sunnyvale, Calif. Each AMD Opteron is a 1.4-gigahertz processor with at least one gigabyte (one billion bytes) of memory. Together, the Opteron processors in the supercomputer will have more than 1,000 gigabytes – or one terabyte – of memory.
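The aggregate-memory figure is straightforward arithmetic; a minimal sketch using only the numbers quoted in the article (1,000 processors, at least one gigabyte each):

```python
# Aggregate memory of Arches, computed from the figures in the article.
processors = 1000            # AMD Opteron processors, 1.4 GHz each
memory_per_processor_gb = 1  # at least one gigabyte (one billion bytes) each

total_gb = processors * memory_per_processor_gb
total_tb = total_gb / 1000   # one terabyte = 1,000 gigabytes, as counted here

print(f"{total_gb} GB total, i.e. {total_tb:.0f} TB")
```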

    “What is interesting is that the Opteron is brand new,” says computer scientist Guy Adams, assistant director for systems at the Center for High Performance Computing. “Very few computer centers in the world are using these processors.”

    Facelli and several other University of Utah faculty members are listed as investigators on the federal grant that is paying for most of the supercomputer, so they will have priority for using it. But Facelli says free computing time on Arches also will be available to other faculty members, with research funded by the National Institutes of Health getting higher priority than research funded by other sources.

    By working with professors, “both graduate and undergraduate students are going to have access to this first-class resource, and they are going to generate new ideas that will better allow us to understand biomedical systems and come up with new ways to address health problems,” Facelli says.

    The main investigators on the supercomputer and research they will use it for are:

    -- Lisa Cannon-Albright, a professor of medical informatics, is a genetic epidemiologist who uses Utah’s extensive family genealogies to identify genes responsible for inherited cancers and other diseases. Existing university computers are inadequate to allow her to analyze all members of a single family at once when looking for disease-causing genes. The new metacluster will allow simultaneous analysis of more family members, and also help her identify the causes of diseases attributed to multiple genes.

    -- Greg Voth, a professor of chemistry, uses high-performance computers to simulate the behavior of molecules involved in biological processes, such as the behavior of membranes in living organisms.

    -- Jeffrey Weiss, an associate professor of bioengineering, will use the metacluster for studies aimed at improving detection of changes of shape, surface area and size of body tissues. In one study, he will use the supercomputer to compare magnetic resonance images (MRI) of normal mouse brain development with changes caused by Niemann-Pick disease type C, a defect in cholesterol metabolism that kills thousands of children worldwide each year. In another study, he will create computer simulations of a common knee ligament injury with the eventual goal of simulating injuries to other ligaments and entire joints.

    -- Facelli, David Grant, a distinguished professor of chemistry, and Ron Pugmire, a professor of chemical and fuels engineering, use nuclear magnetic resonance (NMR) to understand the structure of biologically important molecules. But it takes massive computing power to convert NMR measurements into information about the structure of molecules.

    -- Robert Weiss, an associate professor of human genetics, studies and compares the genomes, or genetic blueprints, of humans and other animals, and also is involved in the search for genes that contribute to high blood pressure, addiction and neuromuscular diseases. Weiss now produces more genetic data than can be analyzed efficiently. The new supercomputer will give him the added computing power he needs.

    -- Tom Cheatham, an assistant professor of medicinal chemistry, must crunch large amounts of data to gain a detailed picture of the structure and behavior of large, biological molecules to understand how biological processes work. Much of Cheatham’s work focuses on the genetic materials DNA and RNA. Cheatham and Facelli also plan to use the new supercomputer to develop an “expert system” that would seek to improve drug treatment of various ailments by accurately predicting how drugs are absorbed, distributed in the body, metabolized and excreted.

    The Arches metacluster tentatively will be located in the university’s Student Services Building, where the Center for High Performance Computing already maintains other cluster computers, the largest of which is Icebox, which includes 450 smaller computers.

    The new supercomputer metacluster will include five clusters of Opteron processors. Adams says each cluster has a specific function, which means the supercomputer is much less expensive than if all of its computers had to have the same capabilities:
    * The parallel computing cluster will contain 512 individual Opteron computers or processors. It will be used for complex calculations that can be run in parallel – divided among numerous individual processors – and that require the processors to communicate with each other rapidly using special networking equipment.
    * The “cycle farm” cluster will include 328 processors. It is for calculations that need much computer time, cannot be divided among as many computers and do not require the processors to communicate rapidly.
    * The data-mining cluster, with 96 individual computers, will be used to look for patterns and relationships in large sets of data, such as genetic information from many members of a single extended family.
    * The visualization cluster will have 18 processors to deal with data that must be shown graphically.

    The last 12 processors of the 1,000 in Arches will be used to control the supercomputer.
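The parallel cluster's mode of working can be illustrated with a small decomposition sketch. This is not software from Arches, just a Python illustration of the idea: one large calculation is split into chunks, each "processor" produces a partial result, and the partials are combined. (Here the workers are simulated sequentially to keep the example self-contained.)

```python
# Illustrative sketch of parallel decomposition, not Arches code:
# split one big calculation into per-processor chunks, compute a
# partial result for each, then combine the partials.
def partial_sum(lo, hi):
    """The piece of work assigned to one simulated processor."""
    return sum(range(lo, hi))

n = 1_000_000
workers = 8
step = n // workers
# Each worker gets a half-open range; the last worker absorbs any remainder.
bounds = [(i * step, (i + 1) * step if i < workers - 1 else n)
          for i in range(workers)]

partials = [partial_sum(lo, hi) for lo, hi in bounds]
combined = sum(partials)

# Dividing the work does not change the answer.
assert combined == sum(range(n))
print(combined)
```

In a real parallel job each chunk would run on its own processor, and the fast interconnect the article mentions is what lets the partial results be exchanged and combined quickly.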

    As part of the supercomputer acquisition, the university is buying 30 terabytes of data-storage equipment from Sun Microsystems, Inc. of Santa Clara, Calif., Facelli says.

    Some cluster computers are made of individual personal computers, with relatively bulky boxes arrayed in racks that take a lot of space. The Opteron processors will be installed in smaller units known as “blades” that measure 1.7 inches by 23.5 inches by 30 inches. Sixteen blades fit in a “nest,” and nests are then stacked, so the new supercomputer will occupy much less space than a typical cluster supercomputer made of PCs.

    Adams says the Center for High Performance Computing’s other cluster supercomputers eventually may be connected to Arches and become part of the metacluster.



    Copyright © 2001-2005 LinuxHPC.org
    Linux is a trademark of Linus Torvalds
    All other trademarks are those of their owners.