
Commercial Grid Demonstrated By IBM And T-Systems
Tuesday September 16 2003 @ 07:50AM EDT

IBM Research's lab in Böblingen, Germany, and T-Systems have signed a partnership to jointly develop basic technologies for e-business on demand and to make IT and telecommunication infrastructures and their applications more flexible. At T-Systems' high-security computer center in Frankfurt-Heddernheim, Germany, the partners demonstrated first results: an IBM eServer BladeCenter running Grid middleware that automatically adds and replaces blades, reacting immediately to a disaster and recovering from it. The operating system was Linux; the management blades ran a combination of Tivoli and Globus.

IBM Research and T-Systems approach the innovation partnership from complementary directions: IBM from on-demand computing, T-Systems from the virtual computer center and managed business flexibility. The virtual center improves flexibility and reduces cost and complexity. It contains logical objects such as data, communication and archive, with services layered on top. In the middle sits the resource manager, and everything runs under a Data Center Operating System -- an operating system on top of the individual machines' operating systems.
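
The article gives no implementation details for this layering, but the idea -- logical services on top of a resource manager, which in turn drives blades that each run their own operating system -- can be sketched roughly as follows. This is a minimal illustration in Python; all class and method names are hypothetical, not IBM's or T-Systems' actual API.

    # Hypothetical sketch of the layering described above.
    class Blade:
        """A physical node running its own operating system."""
        def __init__(self, name):
            self.name = name
            self.service = None          # logical service currently hosted

    class ResourceManager:
        """Mediates between logical services and physical blades."""
        def __init__(self, blades):
            self.blades = blades

        def allocate(self, service):
            for blade in self.blades:
                if blade.service is None:
                    blade.service = service
                    return blade
            raise RuntimeError("no free blade for " + service)

    class DataCenterOS:
        """The 'operating system on top of the operating systems':
        exposes logical objects (data, communication, archive) as services."""
        def __init__(self, manager):
            self.manager = manager

        def start_service(self, service):
            return self.manager.allocate(service)

    dcos = DataCenterOS(ResourceManager([Blade("blade1"), Blade("blade2")]))
    print(dcos.start_service("archive").name)    # -> blade1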

The partnership started with a workshop on using Grid technologies in legacy environments. The showcase demonstrates the basic principles of the Grid vision; the next steps are directed toward commercial applicability. The first result of the cooperation is a feasibility study, which showed how computers at different locations can work together independently -- like a virtual computer center -- and thus realize Grid computing in a commercial environment.

In contrast to scientific Grid computing, raw computing power is not the main issue. In the commercial world, topics like these take precedence:

- efficient use of available resources
- optimal, automatic management of the computer center
- management of service level agreements (SLAs) and quality of service (QoS)
- data recovery in case of disaster
- improved business continuity.

This new flexibility in computing power, together with the improved availability, enables new offerings for customers. With the demonstrator, IBM and T-Systems showed how the concepts of Grid computing can change the IT world.

The demonstration took place at T-Systems in Frankfurt-Heddernheim. The base was an IBM eServer BladeCenter with two management blades running a combination of Tivoli Automation and the Globus Grid software. The partners extended this software layer -- the e-Utility or Grid-Layer. The two management blades also served as a file server. Two compute blades waited for tasks, another blade was inserted and ready but not yet in operation, and a fourth blade lay on the table.

The partners started three applications that demonstrated the range of applicability. A clock served as a stateless application; it was not allowed to run on the same blade as the Web shop. They then started the transactional application, the Web shop, which automatically chose the second blade. Finally, a compute-intensive stateful application ran on both blades, computing and displaying fractals.
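
The rule that the clock must not share a blade with the Web shop is an anti-affinity placement constraint. Here is a minimal Python sketch of how such a constraint can drive blade selection; the names are invented, and this is not the actual Grid-Layer logic.

    # Unordered pairs of applications that must not share a blade.
    ANTI_AFFINITY = {frozenset({"clock", "webshop"})}

    def conflicts(app, other):
        return frozenset({app, other}) in ANTI_AFFINITY

    def place(app, blades):
        """Pick the first blade whose current apps don't conflict with app."""
        for blade, apps in blades.items():
            if not any(conflicts(app, other) for other in apps):
                apps.add(app)
                return blade
        raise RuntimeError("no feasible blade for " + app)

    blades = {"blade1": set(), "blade2": set()}
    print(place("clock", blades))      # -> blade1
    print(place("webshop", blades))    # -> blade2 (anti-affinity rule)
    print(place("fractal", blades))    # -> blade1 (no rule, may co-locate)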

The Grid software developed by IBM and T-Systems automatically manages the computing resources, allows them to be added and removed, supervises the dynamic distribution of the compute load, enforces the SLAs and QoS, and offers new methods for disaster recovery.

When the fractal computation was scaled up, the higher load reduced the Web shop's share of the system, violating the defined SLAs and QoS -- the predefined rules. The Grid-Layer immediately recognized the situation and added the third blade, on demand, as an additional resource. Because that blade was not yet operational, it was configured and the operating system and application were installed automatically; the capacity was added within three minutes. The Web shop's requirements -- the rules -- were then fulfilled again.
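
In essence this is a rule-driven control loop: measure the load, compare it against the SLA, and provision spare capacity on a violation. A minimal Python sketch under those assumptions; the threshold and all names are invented, and the real provisioning step took about three minutes rather than one function call.

    SLA_MIN_THROUGHPUT = 100.0    # hypothetical requests/s promised for the Web shop

    def provision(blade):
        """Stand-in for the automatic configure + OS + application install."""
        print("installing OS and Web shop on " + blade + " ...")

    def enforce_sla(throughput, spares):
        """One pass of the Grid-Layer's rule check: add capacity on violation."""
        if throughput < SLA_MIN_THROUGHPUT and spares:
            blade = spares.pop()      # the inserted-but-idle spare blade
            provision(blade)
            print(blade + " added on demand; Web shop SLA satisfied again")

    # Rising fractal load has squeezed the Web shop below its SLA:
    enforce_sla(throughput=42.0, spares=["blade3"])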

The partners then removed the active blade running the Web shop, simulating a severe crash. The Grid-Layer immediately restarted the affected applications on the other two blades. Although system performance was reduced, the applications kept running: Web shop customers did not notice the crash, and all transferred data were preserved. This opens new possibilities for disaster recovery. The software also automatically recognizes the insertion of a blade and brings it into service without an interruption.
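
The crash handling is a failover: detect the lost blade, restart its applications on the survivors, and recover their state from replicated data. A rough Python sketch with hypothetical names; how the Grid-Layer actually replicates application state is not described in the article.

    def fail_over(dead, placement):
        """Restart every app from the dead blade on the surviving blades."""
        orphans = placement.pop(dead, [])
        survivors = list(placement)
        if not survivors:
            raise RuntimeError("no surviving blades")
        for i, app in enumerate(orphans):
            target = survivors[i % len(survivors)]    # round-robin spread
            placement[target].append(app)
            print(app + ": restarted on " + target + ", state recovered")

    placement = {"blade1": ["clock", "fractal"],
                 "blade3": ["fractal"],
                 "blade2": ["webshop"]}
    fail_over("blade2", placement)    # pulling the Web-shop blade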

The partners now plan to commercialize the Grid-Layer. The next steps are directed at managing heterogeneous computer systems, so that legacy systems can be integrated as well. Other open issues are accounting for the computing resources and security.

