 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering

    Latest News

    New Technology Enables More Efficient High-Performance Computing Applications
    Thursday January 08 2004 @ 11:54AM EST

    An optical network provisioning protocol to enable more efficient computing applications has been successfully demonstrated by scientists at MCNC Research & Development Institute and N.C. State University.

    The demonstration of the Just-in-Time (JIT) protocol for provisioning and managing light path connections in the all-optical Advanced Technology Demonstration Network (ATDnet) in Washington, D.C., confirmed the viability of user-initiated, ultra-fast provisioning of all-optical network connections and marked the transition of the JIT protocol from the laboratory to an operational network. The light paths linked host systems at the U.S. Department of Defense’s Laboratory for Telecommunications Sciences, the Naval Research Laboratory’s Center for Computational Science and the Defense Intelligence Agency.

    An overview of the JIT protocol was presented in December at the Globecom 2003 conference’s Optical Networking and Systems Symposium in San Francisco by Dan Stevenson, vice president of MCNC-RDI’s Advanced Network Research Division.

    JIT will provide much-needed support to U.S. military and civilian researchers solving real-world problems. In particular, the Naval Research Laboratory is interested in the protocol’s ability to quickly set up and release tens, possibly hundreds, of gigabits of bandwidth for demanding, high-performance computing applications such as immersive real-time visualization of satellite imagery, computational fluid dynamics, ocean and weather modeling, and space physics.

    “JIT addresses some very challenging problems in high-performance computing,” said Dr. Hank Dardy, chief scientist for advanced computing at the Naval Research Laboratory’s Center for Computational Science. “It can take weeks to establish an optical connection through a carrier network, and minutes to do so with generalized multi-protocol label switching, the current industry standard. With JIT, we can provision optical connections between sites in a few milliseconds through our microelectromechanical switches, and in a few microseconds when we deploy faster photonic switches.”
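
    For a rough sense of scale, the setup times quoted above span roughly nine orders of magnitude. A small illustrative Python calculation, using assumed round numbers (weeks for manual carrier provisioning, a minute for GMPLS, milliseconds for JIT over MEMS switches, microseconds for JIT over photonic switches):

        # Illustrative only: assumed round numbers for the setup times quoted above.
        latencies_s = {
            "manual carrier provisioning": 2 * 7 * 24 * 3600,  # ~2 weeks (assumed)
            "GMPLS":                       60,                  # ~1 minute
            "JIT over MEMS switches":      3e-3,                # a few milliseconds
            "JIT over photonic switches":  3e-6,                # a few microseconds
        }

        gmpls = latencies_s["GMPLS"]
        for name, t in latencies_s.items():
            print(f"{name:30s} {t:>12.3g} s   ({gmpls / t:.2g}x vs. GMPLS)")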

    The JIT architecture and protocols used in the ATDnet were jointly developed by researchers at MCNC and Professors Paul Franzon, Harry Perros and George Rouskas of North Carolina State University. The research was partially funded by NASA and supported by the Advanced Research and Development Activity, a Department of Defense research and development community focused on information technologies that current networks, including today’s Internet, do not or cannot support.

    “This is the first deployment of its kind in an operational network at greater than gigabit speeds,” Stevenson said. “JIT is especially attractive to government customers because it doesn’t necessarily care about data rate or data format, not even whether the signal is digital or analog. Also, it works with commercial, off-the-shelf equipment from multiple vendors and multiple optical switching technologies.”

    Fast, real-time resource provisioning will enable the military, particle physics, and research communities to approach problems in new ways. Stevenson said that JIT overcomes many limitations and problems inherent in the current Internet. Applications can request, use and release bandwidth as needed, without tying up an optical circuit for days.
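
    A minimal sketch of that request/use/release pattern from an application's point of view, in Python. The jit_request_lightpath and jit_release names are hypothetical placeholders invented for illustration; the article does not describe the actual JIT signaling interface.

        import contextlib

        def jit_request_lightpath(src, dst, gbps):
            # Hypothetical placeholder for the real JIT signaling call;
            # here it only simulates the ~millisecond setup step.
            print(f"provisioned {gbps} Gb/s lightpath {src} -> {dst}")
            return (src, dst, gbps)

        def jit_release(handle):
            # Hypothetical placeholder: hand the wavelength back to the shared pool.
            src, dst, _gbps = handle
            print(f"released lightpath {src} -> {dst}")

        @contextlib.contextmanager
        def lightpath(src, dst, gbps):
            handle = jit_request_lightpath(src, dst, gbps)
            try:
                yield handle
            finally:
                jit_release(handle)  # release as soon as the transfer is done

        # Hold the optical circuit only while data actually moves, not for days.
        with lightpath("site-a", "site-b", gbps=10) as path:
            print(f"transferring data over {path} ...")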

    “High-speed, on-demand, application-initiated provisioning of bandwidth is also what the grid computing community is demanding,” Stevenson said. Grids connect heterogeneous computing platforms so that they operate, and appear to the user, as a single computing system. This means a computational problem can be directed to the system within the grid that will process it in the quickest and most cost-effective manner. Grid computing provides users with unprecedented computing power, services and information, combining heterogeneous computing resources no matter where they are located.
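
    A toy illustration of that dispatch idea (not any particular grid middleware): estimate when each site could finish a job, then send the job to the earliest one. All site figures are made up.

        # Toy example: pick the grid site expected to finish a job soonest.
        sites = [
            # (name, queued work in core-hours, relative speed)
            ("site-a", 120.0, 1.0),
            ("site-b",  40.0, 0.8),
            ("site-c",  10.0, 0.5),
        ]

        def estimated_hours(site, job_core_hours):
            _name, queued, speed = site
            return (queued + job_core_hours) / speed  # crude completion estimate

        job = 25.0  # core-hours of work to place
        best = min(sites, key=lambda s: estimated_hours(s, job))
        print(f"dispatch to {best[0]} (~{estimated_hours(best, job):.0f} h to completion)")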

    “Grid resource requirements of big science applications, such as particle physics, are very dynamic,” said Stevenson. “The goal for sparse networks like ATDnet, and the recently announced National Lambda Rail, is to share grid bandwidth the same way you share computing cycles and storage in the grid. You also want to use those resources efficiently. These applications often involve computational steering and cannot afford the latency associated with electronic routers. The applications may require 300 megabits per second, but that’s only a small percentage of a 10-gigabit optical channel. JIT lets you share the remaining 97 percent of that bandwidth with others on the grid without the reduced performance inherent in electronic routing.”
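
    The arithmetic behind those percentages, in Python:

        # A 300 Mb/s application flow on a 10 Gb/s optical channel.
        channel_mbps = 10_000.0
        app_mbps = 300.0

        used = app_mbps / channel_mbps
        print(f"application uses {used:.0%} of the channel")       # 3%
        print(f"leaving {1 - used:.0%} to share across the grid")  # 97%
        print(f"roughly {int(channel_mbps // app_mbps)} such flows fit per channel")  # 33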

    MCNC and the University of North Carolina 16-campus system are jointly developing a statewide grid computing network for North Carolina’s higher education community using the existing statewide North Carolina Research and Education Network, operated by MCNC. The statewide research and education grid will link computing and data resources from multiple institutions in multiple locations with the potential to vastly increase the resources available to individual institutions. When complete, North Carolina will be one of the first states in the nation to deploy a statewide grid infrastructure.

    “We intend to move JIT into grid networks,” said Stevenson. “We see the grid as a widely distributed computing system, and optical networks as the backplane for that system. We’re working on several supporting technologies to make that happen, such as protocols for QoS-aware routing, network management, transport, security and authentication, and making JIT OGSI/OGSA (Open Grid Services Infrastructure/Architecture) compliant. We’re also developing JIT-aware network adaptors so that high performance grid servers and hosts can take full advantage of JIT.”

    Stevenson also sees other applications. “We believe that JIT will scale to finer timescales, and will support application-initiated provisioning of bandwidth for optical burst switching where a connection is provisioned in nanoseconds and may be released after only a few milliseconds,” he said. Optical burst switching is a high-performance networking technology that transports digital and analog data an order of magnitude faster than today’s digital electronic packet-switched technologies.
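
    At those timescales the signaling cost becomes a negligible fraction of each connection's lifetime. A back-of-the-envelope example with assumed round numbers:

        # Illustrative only: assumed figures for optical burst switching.
        setup_s = 100e-9   # connection provisioned in ~100 ns (assumed)
        hold_s  = 2e-3     # burst released after ~2 ms (assumed)

        overhead = setup_s / (setup_s + hold_s)
        print(f"setup overhead per burst: {overhead:.4%}")     # ~0.0050%

        bits_per_burst = 10e9 * hold_s                         # at 10 Gb/s
        print(f"data per burst at 10 Gb/s: {bits_per_burst / 8 / 1e6:.1f} MB")  # 2.5 MB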


    Copyright © 2001-2006 LinuxHPC.org
    Linux is a trademark of Linus Torvalds.
    All other trademarks are those of their owners.