SpyderByte.com ;Technical Portals 
      
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
    TORQUE Reaches 50,000 Milestone
    Posted by Cluster Resources, Inc., Wednesday August 30 2006 @ 04:13PM EDT

    Cluster Resources, Inc., a leading provider of cluster, grid, and utility computing software, announced today that TORQUE* Resource Manager passed a new milestone in its continued success, reaching 50,000 downloads since August 2005.

    Terascale Open-Source Resource and QUEue Manager, more commonly known as TORQUE, is a resource manager derived from the original Open PBS project that provides control over batch jobs and distributed compute nodes. With more than 2,500 patches and enhancements since its release in 2004, TORQUE has incorporated significant advances in the areas of scalability, flexibility, and feature extensions.
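To illustrate the batch-job control TORQUE inherits from the OpenPBS lineage, a minimal job script looks like the following; the job name, resource requests, and queue defaults here are illustrative, not taken from any particular site:

```shell
#!/bin/bash
#PBS -N example_job          # job name (illustrative)
#PBS -l nodes=2:ppn=2        # request 2 nodes, 2 processors per node
#PBS -l walltime=00:10:00    # 10-minute wall-clock limit
#PBS -j oe                   # merge stdout and stderr into one file

# qsub starts jobs in the home directory; move to the submit directory.
cd "$PBS_O_WORKDIR"

echo "Allocated nodes:"
cat "$PBS_NODEFILE"          # list of compute nodes assigned to the job
```

The script is submitted with `qsub job.sh`, monitored with `qstat`, and cancelled with `qdel <jobid>`; these are the same client commands OpenPBS users already know.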

    “We are pleased to be part of the TORQUE community project that continues to provide a leading resource management solution for Top 500 systems and thousands of other clusters worldwide,” said David Jackson, CTO of Cluster Resources. “Combining professional development, testing, support, and documentation with the extensive support and development contributions of the TORQUE community has proven to be highly successful.”

    Over the past six months, TORQUE downloads have grown almost exponentially. Cluster Resources recorded approximately 33,000 total downloads from February through July, nearly double the previous six months' total. TORQUE is also included for download in most of the major cluster building kits, including ROCKS, OSCAR, xCAT, and others. When including these kits, downloads over the past year are estimated at more than 100,000.

    Cluster Resources – providers of the Moab family of workload management products – professionally maintains and develops TORQUE, incorporating hundreds of feature extension patches from NCSA, OSC, USC, the U.S. Department of Energy, Sandia, PNNL, U of Buffalo, TeraGrid, and many other leading HPC institutions and individuals in the user community. Cluster Resources also supports and maintains the TORQUE documentation and user lists, and provides current versions and patches at http://www.clusterresources.com/torque.

    Through TORQUE’s user lists and documentation wiki, community members can submit or view patches, suggestions, and questions in the archive, and contribute new information to the user manual, providing an active forum for the development of new ideas.

    Garrick Staples, a lead developer of TORQUE, attributes the continued development of TORQUE to the collaborative efforts of the user community and Cluster Resources.

    “Since many TORQUE users are the administrators of their own clusters, their needs often drive the competitive edge of our development focus,” Staples said. “TORQUE also has the strong backing of Cluster Resources, through whose leadership the collective wisdom and requirements of thousands of sites worldwide are being plugged directly into TORQUE.”

    To facilitate community involvement in TORQUE development, Cluster Resources recently adopted Subversion, an open-source version control system for source configuration management. By offering anonymous checkout through Subversion, users can more easily access the TORQUE source code to test and implement their own improvements.
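As a sketch of what the anonymous-checkout workflow looks like, the commands below use Subversion's standard client; the repository URL shown is illustrative (consult http://www.clusterresources.com/torque for the actual location), and a raw checkout of an autotools project may need an extra bootstrap step:

```shell
# Anonymous read-only checkout of the TORQUE source tree.
# NOTE: the repository URL is a placeholder, not the real address.
svn checkout svn://svn.example.com/torque/trunk torque-trunk
cd torque-trunk

# Standard autotools build flow; a raw checkout (as opposed to a
# release tarball) may first require generating configure scripts,
# e.g. via an autogen.sh or autoreconf step.
./configure --prefix=/usr/local
make
make install
```

Anonymous checkout gives read-only access; contributors would still submit their changes back as patches through the user lists described above.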

    “We take suggestions for improvements seriously,” said Josh Butikofer, a product manager for Cluster Resources. “Whenever users make suggestions and improvements, or request that TORQUE be able to perform an additional task, the community works together to try to find a way to make it happen.”

    Before merging any changes into the original source code, Cluster Resources tests each submitted enhancement to ensure the continued reliability and functionality of TORQUE.

    Cluster Resources focuses on enabling long-term core enhancements to TORQUE in the areas of scalability, security, reliability, and usability. In recent versions of TORQUE, Cluster Resources has implemented a number of significant feature enhancements, including tight PAM integration, improved SGI CPUSet support, initial job array support, and dynamic resource definitions. The newest version of TORQUE, 2.1.2, offers X11 forwarding and also supports client commands on Windows using Cygwin.
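From the user's side, two of the features mentioned above surface as `qsub` options. The following is a sketch based on the TORQUE 2.1-era client commands; exact flag behavior should be confirmed against the documentation for the installed version:

```shell
# Job arrays (initial support in the TORQUE 2.1 series):
# submit ten instances of one script; each instance can read its
# index from the PBS_ARRAYID environment variable.
qsub -t 0-9 array_task.sh

# X11 forwarding (TORQUE 2.1.2): request an interactive session (-I)
# with the X display forwarded back to the submitting host (-X).
qsub -I -X -l nodes=1
```

Inside `array_task.sh`, a line such as `echo "task $PBS_ARRAYID"` would let each array member select its own slice of the work.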

    TORQUE is currently in use at hundreds of leading government, academic, and commercial sites throughout the world and is used on many of the world's largest clusters and grids. TORQUE scales from single SMP machines and clusters to sites with tens of thousands of jobs and nearly 10,000 processors.

    “We look forward to continuing the same level of development excellence that has led to TORQUE's success,” David Jackson said. “We welcome cluster users everywhere to try out TORQUE, to get involved and to help cultivate the type of ideas that have produced one of the best community resource management solutions in the HPC industry.”

    About Cluster Resources:

    Cluster Resources, Inc.™ is a leading provider of workload and resource management software and services for cluster, grid and utility-based computing environments. At the core of this solution is Moab Cluster Suite® and Moab Grid Suite® — professional cluster management solutions that include Moab Workload Manager, a policy-based workload management and scheduling tool, as well as a graphical cluster administration interface and a Web-based, end-user job submission and management portal. Cluster Resources also supports and offers a number of popular open-source solutions including TORQUE Resource Manager, Maui Cluster Scheduler, and Gold Allocation Manager. With over a decade of industry experience, Cluster Resources' products and services enable organizations to understand, control, and fully optimize their compute resources and related processes.

    For additional press or product information, call (801) 717-3700 or visit http://www.clusterresources.com

    Press Contact: Nick Ihli
    Phone: +1 (801) 717-3700
    Email: nick.ihli@clusterresources.com

    ###

    Moab Utility/Hosting Suite®, Moab Cluster Suite®, Moab Workload Manager®, Moab Cluster Manager®, and Moab Access Portal® are trademarks or registered trademarks of Cluster Resources Inc.™ All third-party trademarks are the property of their respective owners. Statements concerning Cluster Resources' future development plans and schedules are made for planning purposes only, and are subject to change or withdrawal without notice.

    * TORQUE Resource Manager includes software developed by NASA Ames Research Center, Lawrence Livermore National Laboratory, and Veridian Information Solutions, Inc. Visit www.OpenPBS.org for OpenPBS software support, products, and information. TORQUE is neither endorsed by nor affiliated with Altair Grid Solutions, Inc.


         Copyright © 2001-2006 LinuxHPC.org
    Linux is a trademark of Linus Torvalds
    All other trademarks are those of their owners.
        