Monkey Talk: Who is Responsible?
Posted by Kenneth Farmer, Tuesday September 20 2005 @ 04:23PM EDT

Monkey Talk: Cluster Opinions and Insights from Cluster Monkey.

By Douglas Eadline

Who is Responsible?

The phrase "if I only had one throat to choke" often comes to mind when thinking about support for clusters. On the one hand, the history and the nature of clusters promote a kind of "do your own thing" methodology. On the other hand, how is the market ever going to grow beyond the pioneers if we do not have shrink-wrapped, turn-key systems with a single point of support?

In the past, I talked with a cluster user who kept paper files for each of the ten vendors he had to manage. From compilers to interconnects, a cluster is a collection of technology with no single all-knowing vendor. How nice would it be to have one number, one voice, and, dare I say, one throat to choke when there are support issues? Traditional big-iron supercomputers do have a single point of contact; however, their all-in-one price is a little too steep for most cluster users. I suspect that support is one area where clusters may increase rather than decrease the cost of HPC (High Performance Computing). Of course, many of the pioneering clusters were built without regard to the added support cost, and in many cases they thrive on a multi-vendor, collaborative support model. Having built a few clusters in my day, I understand this attitude and have probably muttered on more than one occasion, "Support? I don't need no stinking support. I built the cluster, patched the software, integrated the middleware, and made it work. If it breaks, I will fix it."

Is single-source support possible for component systems? In reality, are we building custom cars and hoping the local dealership will provide warranty service? Can we expect a single vendor to support the integration of everything we could possibly use to build a cluster? The price-to-performance curve is too compelling to ignore clusters, and maybe a new support model will emerge that better suits the needs of production systems.

I believe the answers to these questions will begin to emerge as we move forward in the market. Finally, I would also like to think that, with a strong community and open standards, we do not have to choke throats to support clusters. We can continue to work together to develop best practices and have open discussions about how best to solve problems. After all, we really just want to crunch numbers faster than the next guy, right?


Douglas Eadline can be found swinging around the binary trees at Cluster Monkey.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License.

