LinuxHPC.org - The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
    OT: Hack-proof and crash resistant - have you discovered the OS world's best-kept secret?
    Tuesday July 13 2004 @ 08:28AM EDT

    OpenVMS offers unmatched robustness for business-critical apps

    OpenVMS (originally known as VMS) is probably the best-designed and most robust general-purpose operating system in existence. It is also one of the least known and least appreciated, simply because it works quietly in the background without drama, unlike its noisier and more fussy siblings and offspring.

    You will typically find OpenVMS in any environment that is serious about high availability, disaster tolerance, security, performance and scalability, especially when running real-time applications. Users include banks, stock exchanges, healthcare, manufacturing, aerospace, online billing, lotteries, chip manufacturing, oil and gas production, power stations, railways, government and secure public sector applications. In short, anything that really has to work.

    Uptime measured in years

    OpenVMS system uptimes are often measured in years - it being a point of honour to avoid rebooting and causing disruption unless utterly essential.

    There are clusters out there with uninterrupted service uptimes in excess of 15 years, even if individual machines have been occasionally rebooted, upgraded or replaced. That is a far cry from today's "reboot and restart" culture, where users seem willing to tolerate disruption to service - indeed, they have come to expect it. If only they were aware there is a better way. OpenVMS is one of the industry's best-kept secrets - those in the know would not consider using anything else for business-critical systems.

    OpenVMS runs on three hardware platforms: VAX (32-bit CISC), Alpha (64-bit RISC) and Itanium (64-bit EPIC). A system disc from any Alpha will boot and run on any other Alpha. The same goes for VAXes, including software-emulated VAXes. Likewise for the latest HP Integrity servers: OpenVMS will boot and run on anything from an rx2600 to a Superdome. This scalability and interoperability derives from the excellent internal architectural structure of OpenVMS.

    The bigger machines (Superdome, GS1280, etc.) can be hard-partitioned to make a group of hardware resources inaccessible from other partitions. OpenVMS also supports soft partitions, using a mechanism known as Galaxy. This allows CPU resources to be dynamically reallocated between soft partitions to meet changing workloads.

    Partitioned systems are often used for server consolidation. Extending that by dynamic reallocation of hardware resources leads us to adaptive computing.

    Pioneer of clustering

    OpenVMS pioneered clustering in the mid-1980s and is still the standard to which all others aspire. It provides a "shared everything" model with minimal cluster state transition latency if a cluster member fails.

    This model allows all the resources in a cluster to be used concurrently, not in a failover or standby mode. There are many disaster-tolerant, split-site clusters in operation that continue to provide uninterrupted service without loss of data, even when whole sites fail. The largest supported OpenVMS cluster is 96 nodes - where each node can be a large multiprocessor system.
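    The "shared everything" model relies on the OpenVMS distributed lock manager to coordinate concurrent access to resources from every node. As a rough illustration, the C sketch below takes and releases a cluster-wide exclusive lock with the SYS$ENQW and SYS$DEQ system services; the resource name is hypothetical and the argument details should be checked against the System Services Reference Manual.

        /* Minimal sketch: serialise access to a shared resource through the
         * OpenVMS distributed lock manager, which is what lets every cluster
         * member use the resource concurrently rather than in failover mode.
         * The resource name "DEMO_RESOURCE" is hypothetical; verify argument
         * details against the System Services Reference Manual. */
        #include <starlet.h>   /* sys$enqw, sys$deq */
        #include <lckdef.h>    /* LCK$K_EXMODE */
        #include <descrip.h>   /* $DESCRIPTOR */
        #include <stdio.h>

        /* Lock status block: completion status, lock ID, 16-byte value block. */
        struct lksb {
            unsigned short status;
            unsigned short reserved;
            unsigned int   lock_id;
            unsigned char  value_block[16];
        };

        int main(void)
        {
            struct lksb lksb = {0};
            $DESCRIPTOR(resnam, "DEMO_RESOURCE");  /* cluster-visible name */
            unsigned int status;

            /* Queue, and wait for, an exclusive-mode lock on the resource. */
            status = sys$enqw(0, LCK$K_EXMODE, (void *)&lksb, 0, &resnam,
                              0, 0, 0, 0, 0, 0, 0);
            if (!(status & 1) || !(lksb.status & 1)) {
                printf("lock request failed, status %u\n", status);
                return status;
            }

            /* ... update the shared resource; the lock is visible cluster-wide ... */

            /* Release the lock so other cluster members can take it. */
            status = sys$deq(lksb.lock_id, 0, 0, 0);
            return status;
        }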

    Cluster interconnects can be anything from the original CI hardware to Gigabit Ethernet, or even Galaxy shared memory in a soft-partitioned system.

    Many operations staff find using better-known operating systems frustrating in comparison to OpenVMS. The issues are primarily poor availability and reliability, combined with the difficulty of obtaining performance analysis and fault log data for capacity planning and fault analysis purposes. OpenVMS is generally seen as the gold standard for such things.

    For instance, OpenVMS comes with essential tools and facilities (most prominently, image back-up and restore) built in, rather than having to be added on. In most cases, you simply install it, configure it for your workload, add your applications and system-management utilities (typically DCL command files), then run it as a black box operational environment.

    As an operating system with a real-time pre-emptive scheduling mechanism, OpenVMS has always been capable of handling complex real-time events. The interrupt-driven I/O subsystem design aims for minimal latency, so OpenVMS is capable of exceedingly high, sustained I/O throughput, especially with V7.3-2 on Alpha EV7 (Marvel) systems. It will be interesting to see how V8.2 on Alpha and the Integrity server range compare when it is released.
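    To give a flavour of that interrupt-driven style, the illustrative C sketch below queues a write with SYS$QIO and carries on working while the transfer completes asynchronously, then collects the result; the device and event-flag choices are arbitrary and the argument usage should be checked against the OpenVMS I/O documentation.

        /* Illustrative sketch of asynchronous OpenVMS I/O: SYS$QIO queues the
         * request and returns immediately; completion is signalled later via
         * an event flag (an AST completion routine could be used instead).
         * Device name and event flag number are arbitrary choices; verify
         * argument usage against the OpenVMS I/O User's Reference Manual. */
        #include <starlet.h>   /* sys$assign, sys$qio, sys$synch */
        #include <iodef.h>     /* IO$_WRITEVBLK */
        #include <descrip.h>   /* $DESCRIPTOR */
        #include <stdio.h>

        /* I/O status block, filled in when the request completes. */
        struct iosb {
            unsigned short status;
            unsigned short byte_count;
            unsigned int   dev_info;
        };

        int main(void)
        {
            struct iosb iosb = {0};
            unsigned short chan;
            unsigned int status;
            static char buf[] = "queued without waiting\n";
            $DESCRIPTOR(devnam, "SYS$OUTPUT");

            /* Assign a channel to the device. */
            status = sys$assign(&devnam, &chan, 0, 0);
            if (!(status & 1)) return status;

            /* Queue the write on event flag 1; control returns at once. */
            status = sys$qio(1, chan, IO$_WRITEVBLK, (void *)&iosb, 0, 0,
                             buf, sizeof buf - 1, 0, 0, 0, 0);
            if (!(status & 1)) return status;

            /* ... do other useful work while the I/O is in flight ... */

            /* Wait for completion and check that the IOSB has been filled in. */
            status = sys$synch(1, (void *)&iosb);
            printf("write finished: status %u, %u bytes\n",
                   iosb.status, iosb.byte_count);
            return status;
        }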

    As a software development environment, OpenVMS provides a rich set of features and programming languages, debug facilities and operating system services.

    A key aspect of the OpenVMS design is the "calling standard", which allows code modules to be written in any language and lets code in one language call routines written in another. This is a great aid to application portability and, of course, to debugging code.
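    As a sketch of what the calling standard buys you, the hypothetical C routine below takes its scalar arguments by reference and its string by descriptor, which is how an OpenVMS Fortran or COBOL caller would pass them by default, so the same routine can be called from any of those languages without glue code.

        /* Illustrative only: a C routine written so that callers in other
         * OpenVMS languages can invoke it directly under the common calling
         * standard - scalars by reference, the string by descriptor. From
         * Fortran this might be CALL LOG_TOTAL(COUNT, AMOUNT, LABEL).
         * Routine and variable names are hypothetical. */
        #include <descrip.h>   /* string descriptors */
        #include <stdio.h>

        void log_total(int *count, float *amount, struct dsc$descriptor_s *label)
        {
            printf("%.*s: %d items, total %.2f\n",
                   label->dsc$w_length, label->dsc$a_pointer, *count, *amount);
        }

        /* A C caller uses exactly the same routine. */
        int main(void)
        {
            int count = 3;
            float amount = 42.5f;
            $DESCRIPTOR(label, "invoices");

            log_total(&count, &amount, &label);
            return 0;
        }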

    The same architectural structures make it easy to optimise memory use with shared image libraries, and to deliver software compatibility between versions of the operating system without the need to recompile and relink applications.

    Although off-the-shelf package-based products may be in fashion, designing and implementing your own is the only way to utilise the capabilities of the underlying platform.

    This is especially true for high-availability environments where the features have to be built into the application and need to be reflected throughout the system architecture. Time spent investigating, testing, customising and deploying a package can often be better spent developing your own product layered on top of a system designed around the minimum components that fit the overall application architecture.

    OpenVMS also has excellent security. A hacking contest was held at the DefCon 9 conference in July 2001, where the winner was not NT, XP, Solaris, Linux or BSD. It was VMS, which was rated "cool and unhackable".

    Neither legacy nor unfashionable

    OpenVMS generally appeals to those who take pride in using computer systems to do a job effectively and reliably, rather than those who want to live at the bleeding edge with the newest (and often immature) technology.

    Probably the biggest challenge for OpenVMS to overcome is its lack of public visibility. This has led to the perception of it being old, or legacy, or simply unfashionable, whereas in fact it is still under major development. This includes secure and stable implementations of commonly-used software such as Apache, Java, Mozilla, Perl, Python and XML.

    End-users, system managers and software developers want and need to see OpenVMS' strengths and capabilities advertised widely enough that decision-makers at board level realise that, in many cases, it is a better and more cost-effective way of delivering secure, ultra-reliable and scalable business-critical systems than the more fashionable and better-promoted alternatives.

    Now that the new HP has begun to settle down, and with the port to the Integrity server range almost complete, the expectation is that the many benefits of OpenVMS-based systems will be actively promoted. OpenVMS has a long life ahead of it, once the current and future generations of decision-makers realise what it can do for their businesses.

    Colin Butcher is technical director of XDelta and board member of the HP User Group

    Read more about OpenVMS at http://www.OpenVMS.org

         Copyright © 2001-2005 LinuxHPC.org
    Linux is a trademark of Linus Torvalds
    All other trademarks are those of their owners.
        