SpyderByte: WinHPC.org | EnterpriseLinux.org | BigBlueLinux.org
The #1 site for news and information related to Linux high-performance technical computing, Linux high availability, and Linux parallel clustering
    Latest News

    Yale Boosts Beowulf Supercomputing Clusters with Turbolinux EnFuzion
    Thursday April 04 2002 @ 07:18AM EST

    There's no shortage of computing power in the computer science department at Yale University, in New Haven, Conn. Along with assorted high-powered workstations and other computers, the department has two Linux-based Beowulf clusters. But taking advantage of the computing power in those clusters hasn't always been easy...

    Until recently, students and professors who wanted to harness the processing power of the Beowulf clusters -- one of which uses 20 Dell servers, the other 20 IBM Netfinity systems -- had to modify the programs they were using to run in parallel. That, says Prof. Martin Schultz, of Yale's computer science department, requires having access to the program's source code, and using a low-level tool such as Message Passing Interface (MPI) to modify it -- not always a simple task. "MPI has over three hundred commands," Schultz says. "It's not easy to master."

    Thanks to Turbolinux's EnFuzion, which Yale is now running on both Beowulf clusters, that's no longer the case. EnFuzion, says Schultz, is easy to use because it takes care of distributing programs across the various nodes in the cluster, so they run without having to be modified. Researchers and students at Yale can now take advantage of the supercomputing-class power provided by the clusters without having to do any special programming.
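    The model described here, running an unmodified program many times over different inputs and fanning those runs out across nodes, can be illustrated with a short sketch. This is not EnFuzion's actual interface; the job function and parameter sweep below are hypothetical, and a thread pool stands in for the cluster's nodes:

```python
# Illustrative sketch of parametric job farming: launch an unmodified job
# once per parameter value and collect the results. A thread pool stands in
# for the cluster's nodes; run_job stands in for executing an unchanged
# binary with one parameter (e.g. via subprocess.run in a real setup).
from concurrent.futures import ThreadPoolExecutor

def run_job(param):
    # Placeholder for running the unmodified program with this parameter.
    return param * param

def sweep(params, workers=4):
    """Fan the parameter list out across workers; gather results in order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_job, params))
```

    The point of the sketch is that the program itself (run_job) needs no parallel-programming changes; only the harness that distributes the runs knows about concurrency.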

    The importance of that "can not be overestimated," says Schultz. "If scientists have to spend time making changes to their software program, it can take a long time to get their research up and running. When you're competing with other researchers, you don't want a system that takes a long time to use, you want to crank out results."

    Now that it's possible to run programs on the Beowulf clusters without spending lots of time to modify them for parallel computing, Schultz plans to assign projects involving cluster computing to his classes. Several dozen other professors and graduate students, both in and out of the computer science department, will be using Enfuzion to run programs as well.

    Eventually, Schultz expects lots of departments on campus to use the clusters. For example, the Yale medical school and the university's biology departments do a lot of computationally intensive research. "I expect Enfuzion will play a big role there," Schultz says. Since EnFuzion can easily manage clusters consisting of thousands of computers, Schultz anticipates no problems expanding the clusters to handle the load.

    But the clusters may not have to grow at all. Yale is taking advantage of EnFuzion's ability to harness idle CPU cycles on computers outside the cluster. The university will use EnFuzion to distribute programs to 20 other computers, which sit on the desks of students and professors in the department. EnFuzion's load-balancing facilities will automatically run programs on these systems when they're not otherwise in use. "If you don't use them, these CPU cycles disappear," says Schultz. "It's not a resource you can store up. But with Enfuzion, these machines will be in use nights and weekends, or whenever someone is not actually sitting at the keyboard using them. It lets us make maximum use of our computing resources."
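    The cycle-harvesting idea, dispatching work to a desktop only when nobody is using it, comes down to a simple gate on each machine's recent activity. The idle threshold and the dispatch policy below are assumptions for illustration, not EnFuzion's actual algorithm:

```python
# Illustrative gate for harvesting idle desktop cycles: hand a queued job
# to a host only when its recent 1-minute load average is below a threshold.
# The 0.25 threshold is an arbitrary choice for this sketch.
def is_idle(load_1min, threshold=0.25):
    """Treat a desktop as idle when its load average is under the threshold."""
    return load_1min < threshold

def dispatch(jobs, hosts_load):
    """Assign queued jobs, in order, to whichever hosts currently look idle.

    hosts_load maps host name -> current 1-minute load average.
    Returns (assignments, jobs still waiting in the queue).
    """
    assignments = {}
    queue = list(jobs)
    for host, load in hosts_load.items():
        if queue and is_idle(load):
            assignments[host] = queue.pop(0)
    return assignments, queue
```

    In a real deployment the load signal would come from each desktop itself (and jobs would be suspended or migrated when the owner returns); the sketch only shows the admission decision.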

    About Turbolinux EnFuzion

    EnFuzion clusters all available computing resources on a corporate network to create a powerful "virtual supercomputer" and, as a result, allows companies to reduce the time and costs associated with computationally demanding data-processing jobs. Traditionally, these jobs, such as complex financial calculations, have been handled by expensive high-end servers. With the growing need to process increasing volumes of complex jobs in a shorter time, the cost of traditional solutions becomes prohibitive. To learn more about EnFuzion, please visit www.turbolinux.com.

    (Linda Fulinmane of Turbolinux, Inc.)



    Copyright © 2001-2006 LinuxHPC.org
    Linux is a trademark of Linus Torvalds.
    All other trademarks are those of their owners.
    SpyderByte.com Technical Portals