SpyderByte.com: Technical Portals
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
    PART 2: Low Fees, No Fluster, with Today's Linux Cluster
    Posted by Terry Shannon, Tuesday May 13 2003 @ 06:52AM EDT

    The Good, The Bad, and the Ugly of Linux Clustering

    In my last Linux article, entitled "Low Fees, No Fluster, With Today's Linux Clusters," I provided a brief overview of Linux's expanding role in the HPTC arena as well as in other market segments craving high performance at popular prices. As promised, here's a follow-up piece that discusses the birth of Linux clustering as well as some of the good and bad points of this new approach to computing on the cheap.

    Cheap Computers Commingle With Linux and Beowulf

    VAXclusters appeared on the scene in about 1983, but it wasn't until 1994 that a group of NASA engineers developed the first Linux cluster, which they promptly awarded the "Beowulf" sobriquet in honor of the hero of the epic poem. The birth of Beowulf was an exercise in equal parts scavenging and savvy: the engineers resurrected 16 Intel 486-based PCs that had been consigned to the trash heap, lashed the systems together with 10Mbps Ethernet, and shoehorned Linux onto the aggregation as a distributed operating system. The result was a parallel compute engine composed of technically obsolete hardware, a free operating system, and a lot of hard work. The economy-class cluster achieved roughly 70 MFLOPS at a cost of around $40K, roughly ten percent of the cost of a commercial computer that could achieve 70 MFLOPS in 1994. Since then, "Beowulf" has been used to describe a class of Linux clusters that leverage a similar economy-class architecture to deliver high performance at bargain-basement prices.
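    The programming pattern these early clusters popularized, with a master splitting a job into chunks, farming them out to the nodes, and gathering partial results, can be sketched in miniature. This is an illustrative single-machine sketch only: worker threads stand in for the networked PCs purely to show the scatter/gather structure, whereas a real Beowulf cluster runs one process per node and exchanges results via message passing (PVM or MPI) over the Ethernet interconnect.

```python
# Single-machine sketch of the Beowulf scatter/gather pattern. Worker
# threads stand in for networked PCs; the chunking and gathering logic
# is the same idea a master node applies across a real cluster.
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    """The work one 'node' performs on its slice of the problem."""
    lo, hi = chunk
    return sum(i * i for i in range(lo, hi))

def scatter(n, nodes):
    """Split the range [0, n) into one contiguous chunk per node."""
    step = n // nodes
    bounds = [i * step for i in range(nodes)] + [n]
    return list(zip(bounds[:-1], bounds[1:]))

N, NODES = 100_000, 16          # 16 nodes, like the 1994 machine
with ThreadPoolExecutor(max_workers=NODES) as pool:
    partials = list(pool.map(partial_sum_of_squares, scatter(N, NODES)))
total = sum(partials)           # the master's gather step
print(total)
```

    The payoff on real hardware is that each chunk runs on a separate cheap machine at the same time, which is exactly how obsolete 486 boxes added up to a usable parallel compute engine.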

    Faster, Cheaper, But What About Security?

    It didn't take early adopters of Linux clusters long to conclude that clustering boosted processing speed, increased transaction throughput, and improved reliability. But along with high performance and low cost came a new issue: security. Because Linux is Open Source software, no single entity controls its growth or mandates security requirements. To the paranoid, this situation bordered on OS anarchy. Ironically, the same paranoid individuals will cheerfully eat Microsoft dog food without a care in the world, despite the myriad patches, mandatory updates, service packs, and whatnot that Microsoft must distribute on an all-too-frequent basis to address fundamental flaws and security abysses in its products. Granted, far more people use Microsoft products than their Linux counterparts, so it's difficult to quantify the relative security of the two OSes. Time will tell, but SKHPC still thinks the smart money is on Linux! With most Linux clusters invisible to the public Internet and hidden behind firewalls, these systems are inherently less vulnerable to hacking than are high-profile Windows-powered sites. And, as we mentioned in our last article, the U.S. National Security Agency is busy armor-plating Linux. Word has it that Microsoft's security credentials were issued not by security organizations, but by acts of the U.S. Congress.

    Lots of Bang for the Buck

    In rapidly emerging life science enterprises, applications such as drug discovery, protein folding, human genome research, and defensive measures against potential biowarfare weapons are generating enormous amounts of data and emphasizing the need for radically new and highly cost-efficient approaches to computing. In these realms, the Linux network-of-nodes approach to computing, wherein each PC is a node, is a great fit. Programs like SETI@Home and United Devices take somewhat similar approaches by scavenging spare cycles from millions of interconnected computers. In the Linux space, however, bang for the buck is what renders the OS attractive. In general, Linux clustering delivers a minimum fivefold improvement in price-performance over HPTC offerings from traditional IT vendors. And customers are catching on: Linux is asserting a growing presence among the Top 500 computing sites. And why not? For a fraction of the cost of a top-of-the-line Sun or IBM server, you can buy a slew of CPUs, lash them together with low-cost cluster interconnects, throw on Linux and Beowulf software, and go to town in the HPTC space. IBM touts the fact that its mainframes run Linux; HP's Superdome can do the same thing. But while both platforms can run Linux, so can far more economical alternatives. Suffice it to say that most customers will not be gulled by a sales pitch wherein a smiling salesman says "But this million dollar box will run Linux!" After all, equivalent performance can be had with a cluster of aging IA-32 Linux boxes!

    Wanted: More Linux Expertise

    None of these wonderful things happen auto-magically with Linux clusters, hence users need specialized understanding of the OS. Linux consultancies, training courses, workshops, and even vendor certification efforts are being developed. In the meantime, Linux is generally a familiar environment for "propellerheads" such as laboratory scientists and bioinformatics researchers. Most of these experts became familiar with open-source software from their college computer studies, where Linux enjoys widespread use in price-sensitive academic environments.

    Scalability and Performance Soar

    One California-based genomics information firm, which would prefer to remain anonymous, claims that it slashed its computing costs by ~95 percent when it migrated to Linux clusters about three years ago. Given the decline in the cost of proprietary systems, the savings today would be less staggering, but it doesn't take a math major to figure out that a 128-node Linux cluster that sells for perhaps $100K USD can do the same job as a $1M Sun UE10K Starfire. What's more, the Linux cluster isn't subject to the onerous licensing and maintenance costs that accompany big iron.
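    The arithmetic behind those figures is easy to check. A quick sketch, using only the article's quoted numbers rather than independent benchmarks:

```python
# Back-of-the-envelope check of the article's quoted figures.
cluster_cost  = 100_000        # 128-node Linux cluster (article's estimate)
starfire_cost = 1_000_000      # Sun UE10K Starfire doing the same job

price_advantage = starfire_cost / cluster_cost
hardware_saving = 1 - cluster_cost / starfire_cost

print(f"price advantage: {price_advantage:.0f}x")   # same work, 1/10 the price
print(f"hardware saving: {hardware_saving:.0%}")    # 90% on hardware alone

# The genomics firm's ~95 percent figure plausibly folds in the licensing
# and maintenance costs that the cluster avoids on top of the hardware.
```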

    Big iron is far from obsolete, as it generally houses the databases and data warehouses that contain info on gene structure, sequence, and function. This data is used by pharmaceutical and biotech companies for drug development and scientific discovery. That said, about half the firm's 4.5K processors from Compaq, Sun, Intel, and SGI run Linux. The remainder of the systems handle tasks that Linux isn't ready to take on yet, such as apps that demand low latency and extremely high bandwidth. Still, it's estimated that Linux can accommodate ~80 percent of common HPTC apps.

    The Management Mare's Nest

    As in the past, the 80-20 Rule holds true with Linux clusters. The biggest challenge facing Linux today is developing and maintaining industrial-strength Linux cluster management tools. Proprietary Linux management apps are all well and good, but they render it nearly impossible to move apps from one computing resource to another. Hence, unused processing power remains unused rather than being reallocated. In many early Linux cluster implementations, system administrators often wrote scripts for adding users, configuring an application, or cross-mounting a new network file system partition. These added administration costs cut into the initial savings provided by Linux clustering. The firm in question opted to purchase Platform Computing Inc.'s LSF management platform to handle these tasks and others.
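    The ad-hoc scripting described above typically amounted to replaying the same command on every node. A hypothetical dry-run sketch of the idea (node names, the username, and mount points are invented for illustration; a real admin script would execute the commands over rsh/ssh rather than print them):

```python
# Dry-run sketch of per-node admin scripting on an early Linux cluster.
# All host names, accounts, and paths below are hypothetical examples.
def add_user_cmds(username, nodes):
    """Commands to create the same account on every node."""
    return [f"ssh {node} useradd -m {username}" for node in nodes]

def mount_nfs_cmds(server, export, mountpoint, nodes):
    """Commands to cross-mount an NFS export on every node."""
    return [f"ssh {node} mount -t nfs {server}:{export} {mountpoint}"
            for node in nodes]

nodes = [f"node{i:02d}" for i in range(1, 5)]   # hypothetical hosts
for cmd in add_user_cmds("alice", nodes):
    print(cmd)
for cmd in mount_nfs_cmds("fs01", "/export/home", "/home", nodes):
    print(cmd)
```

    Multiplying every routine change by the node count is exactly where the hidden administration cost came from, and it is the gap that products like LSF were bought to close.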

    New And Improved Tools

    The firm also had the in-house Linux expertise to build its own management tools, but was eager to avoid the effort if possible. The company chose Linux NetworX' ICE Box, which provides serial switching, remote power control, and system monitoring capabilities. The product helps the firm focus on finding new genes rather than on server operation and maintenance. The time saved by finding an off-the-shelf Linux management tool has shortened the time to market for products, say officials at several firms, eliminating their need to build proprietary tools. Scarcely any Linux cluster management tools were available in Y2K. Today, Linux cluster suppliers are developing both open-source and proprietary cluster management products. Some commercial suppliers are building from scratch; others, such as Red Hat Inc., are picking, choosing, and using various pieces of open-source software to shorten their development cycles. Other vendors, including HP, SteelEye Technology Inc., and Veritas Software Corp., are taking proprietary Unix cluster technology and modifying it to run on Linux. And Platform Computing recently announced its Platform Clusterware for Linux, the first hardware-independent support solution for cluster management.

    Today's Linux clusters are rivaling the throughput capabilities of legacy mainframes and current enterprise server offerings from the likes of HP and IBM. As these Linux clusters play a larger role in the HPTC realm, the HPs and IBMs of the world will be forced to come out with bigger, faster, and cheaper enterprise servers that accommodate Linux as well as proprietary OSes. As usual, the customer will be the Big Winner in this race.

    PART 1: Low Fees, No Fluster, with Today's Linux Cluster...


    © 2003 by Terry C. Shannon, Consultant and Publisher, SKHPC

    Terry C. Shannon, consultant and publisher of "Shannon Knows HPC," has more than 25 years' experience in the IT industry as a system manager/administrator, programmer, analyst, journalist and consultant. Mr. Shannon's opinions are his own and do not necessarily reflect the opinion of this website. He can be reached at terry@shannonknowshpc.com or via his website at http://www.shannonknowshpc.com. He welcomes your feedback and suggestions.


         Copyright © 2001-2007 LinuxHPC.org
    Linux is a trademark of Linus Torvalds
    All other trademarks are those of their owners.
        