SpyderByte.com: Technical Portals

 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
Home | About | News Archives | Contribute News, Articles, Press Releases | Mobile Edition | Contact | Advertising/Sponsorship | Search | Privacy
Research and Services
Cluster Quoter (HPC Cluster RFQ)
Hardware Vendors
Software Vendors
HPC Consultants
Training Vendors
News
Latest News
Newsletter
News Archives
Search Archives
Reference
Featured Articles
Cluster Builder
Beginners
Whitepapers
Documentation
Software
Lists/Newsgroups
Books
User Groups
Golden Eggs (Configuration Diagrams)
Linux HPC News Update
Stay current on Linux-related HPC news, events and information.
LinuxHPC Newsletter

Other Mailing Lists:
Linux High Availability
Beowulf Mailing List
Gelato.org (Linux Itanium)

Linux HPC Links
Favorites:
Cluster Monkey (Doug Eadline, et al)
HPCWire (Tabor Communications)
Scalability.org (Dr. Joe Landman)

Beowulf.org
Beowulf Users Group
High Performance Computing Clusters
Thinking Parallel
insideHPC.com
Gelato.org
The Aggregate
Top500
Cluster Computing Info Centre
Coyote Gulch
Robert Brown's Beowulf Page
FM.net: Scientific/Engineering
SuperComputingOnline
HPC User Forum
GridsWatch

Linux Cluster RFQ Form
Reach Multiple Vendors With One Linux Cluster RFQ Form. Save time and effort: let LinuxHPC.org do all the legwork for you, free of charge. Request a Quote...

Latest News

Where mountain lions roam: Star-P helps decipher threatened wildlife migration
Posted by Jill, Tuesday April 17 2007 @ 01:10PM EDT

WALTHAM, Mass., Apr. 17, 2007 – Researchers at the University of California, Santa Barbara (UCSB) are harnessing supercomputers and electronic circuit theory to help save wildlife from ever-shrinking habitats in an emerging scientific field called “computational ecology.” The project is run by the University’s National Center for Ecological Analysis and Synthesis (NCEAS).

NCEAS scientists are applying electronic circuit theory to model wildlife migration and gene flow across fragmented landscapes. The research could be instrumental in smart conservation planning, helping organizations decide which lands to protect or restore – and where to best invest their tight conservation budgets – in order to preserve habitat and connectivity for wildlife populations.

Due to the massive volume of landscape data and the novel application of algorithms from circuit theory, NCEAS is working to speed up their code with Interactive Supercomputing Inc.’s (ISC) Star-P™, using state-of-the-art sparse linear solvers, graph computations, vectorization and parallelization. The result has been a dramatic reduction in computing time, from days to minutes, on their 8-core server.

“It turns out that circuit theory shares a surprising number of properties with ecological theory describing animal movements and connectivity,” said Brad McRae, the NCEAS project leader. “We can now represent landscapes as conductive surfaces – with features like forests and highways having different resistance to movement – and analyze connectivity across them using powerful circuit algorithms. Unlike standard conservation planning tools, these algorithms simultaneously incorporate all possible pathways when predicting how corridors, barriers, and other features affect movement and gene flow over large areas.”
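
A minimal sketch of the circuit-theory idea McRae describes, not NCEAS's actual code and not Star-P: each raster cell becomes a graph node, neighboring cells are joined by resistors whose conductance reflects habitat quality, and the effective resistance between two habitat patches falls out of a sparse graph-Laplacian solve. The grid size, conductances and patch locations below are invented for illustration, and SciPy's serial sparse solver stands in for the parallel solvers mentioned in the article.

# Sketch only: SciPy stands in for the parallel sparse solvers used by NCEAS.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 50                                    # 50 x 50 raster of landscape cells
cond = np.random.rand(n, n) + 0.1         # per-cell conductance (ease of movement)
idx = lambda i, j: i * n + j              # map (row, col) to a node number

rows, cols, vals = [], [], []
for i in range(n):
    for j in range(n):
        for di, dj in ((0, 1), (1, 0)):   # link each cell to its right and lower neighbor
            ii, jj = i + di, j + dj
            if ii < n and jj < n:
                g = 2.0 / (1.0 / cond[i, j] + 1.0 / cond[ii, jj])  # two resistors in series
                rows += [idx(i, j), idx(ii, jj)]
                cols += [idx(ii, jj), idx(i, j)]
                vals += [g, g]

A = sp.csr_matrix((vals, (rows, cols)), shape=(n * n, n * n))      # weighted adjacency
L = (sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A).tocsr()      # graph Laplacian

src, dst = idx(0, 0), idx(n - 1, n - 1)   # two habitat patches at opposite corners
b = np.zeros(n * n)
b[src] = 1.0                              # inject one unit of current at src...
keep = np.arange(n * n) != dst            # ...and ground dst so the singular system is solvable

v = np.zeros(n * n)
v[keep] = spsolve(L[keep][:, keep].tocsc(), b[keep])
print("effective resistance between the two patches:", v[src] - v[dst])

The potential difference between the two injection points is the effective resistance, which is what lets all possible pathways contribute at once rather than only a single least-cost route.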

Corridors are areas that connect important habitats in human-altered landscapes. They provide natural avenues along which animals can travel, plants can propagate, genetic interchange can occur, species can move in response to environmental changes and natural disasters, and threatened populations can be replenished from other areas. A good example is “Y2Y,” or the Yellowstone to Yukon corridor, where U.S. and Canadian conservation organizations are trying to identify which habitats to conserve to protect species from harmful decline or extinction.

In applying their software to these problems, NCEAS scientists have modeled mountain lion movements in Southern California to identify important connective habitats and corridors. In Central America they modeled how habitat connectivity affects gene flow among threatened populations of mahogany throughout the species’ range. They are also analyzing connectivity among populations of wolverines, kit foxes and jaguars. For each species, researchers analyze geographic datasets representing habitat suitability over vast areas – in some cases spanning entire continents.

The challenge was balancing how large an area the maps could cover against how finely they could be scaled, explained McRae. “Even a relatively small region like the three-county area of Southern California can contain millions of raster cells, but our computing resources limited how finely we could grid those locations. While a mountain lion might perceive its habitat at a scale of about 100 meters, we originally had to increase the cell sizes to around a kilometer to keep our data requirements manageable,” he said. “And even at these lower resolutions, running the models on a single-processor computer without optimized code took three days to complete.”
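
To put the resolution tradeoff in rough numbers (the 10,000 km² study-area size below is an assumed round figure, not one from the article): shrinking the cell edge from a kilometer to 100 meters multiplies the number of raster cells, and therefore graph nodes, by 100, and sparse solves grow worse than linearly in node count.

# Hypothetical illustration of how raster resolution drives problem size;
# the 10,000 km^2 study area is an assumed round number, not from the article.
area_km2 = 10_000
for cell_m in (1000, 500, 250, 100):
    cells = area_km2 * 1_000_000 // (cell_m * cell_m)   # total area / area per cell
    print(f"{cell_m:>4} m cells -> {cells:,} graph nodes")
# 1 km cells give 10,000 nodes; 100 m cells give 1,000,000 -- a 100x larger graph.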

A key step of the NCEAS simulations is a computation on a large graph (or network) that represents the connectivity of the landscape. UCSB Computer Scientist Viral Shah worked with the NCEAS researchers to integrate their code with GAPDT, a Star-P toolbox for graph computation developed by Shah and John Gilbert of UCSB’s Combinatorial Scientific Computing Laboratory together with ISC Vice President of Advanced Research Steve Reinhardt. Said Shah, “The graph toolbox allows researchers who are not experts in the field of combinatorial scientific computing to leverage its methods in their own research.”
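
GAPDT's own interface is not shown in the release, so the following stand-in sketch uses SciPy's serial graph routines to illustrate the kind of bulk operation such a toolbox supplies: treating the landscape as a sparse graph and asking which cells remain mutually reachable once impassable cells are removed. All sizes and densities are invented.

# Illustrative only -- this is not GAPDT or Star-P. SciPy's serial graph routines
# stand in for the kind of graph operation a toolbox like GAPDT provides.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
n_cells = 2000                                   # invented landscape size
adj = sp.random(n_cells, n_cells, density=0.002, format="csr", random_state=0)
adj = adj + adj.T                                # undirected movement graph

passable = rng.random(n_cells) > 0.3             # drop ~30% of cells as barriers
sub = adj[passable][:, passable]                 # keep only passable cells

n_patches, labels = connected_components(sub, directed=False)
print(f"{n_patches} disconnected habitat patches among {passable.sum()} passable cells")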

“The combination of vectorization with Star-P’s graph toolbox and efficient sparse linear solvers has allowed scientists to take full advantage of their 8-processor server (with 32 gigabytes of memory) to run their models,” says Reinhardt. “The result: scientists can now model larger maps with much finer grids, while cutting computing time from three days to about 15 minutes for typical problems.”

Star-P is an interactive parallel computing platform that lets scientists use their preferred desktop tools – MATLAB®, Python, R and others – to model landscape connectivity, but run the models interactively while gaining the benefits of scalable HPC solutions. It eliminates the need to re-program the models in C or Fortran with MPI to run on a parallel computer, dramatically improving the researchers’ productivity.

“Habitat reduction and fragmentation are accelerating the decline of many native wildlife species,” said Ilya Mirman, vice president of marketing at ISC. “NCEAS’ novel approach of applying circuit theory to solve this problem blends well with Star-P’s novel way of making parallel computing available to anyone.”

About the National Center for Ecological Analysis and Synthesis
The National Center for Ecological Analysis and Synthesis (NCEAS) provides the intellectual atmosphere, facilities, equipment, and staff support to promote the analysis and synthesis of ecological information. Since 1995, NCEAS has hosted 3,500 individuals and supported 400 projects that have yielded more than 1,000 scientific articles. The projects have produced a wide array of outcomes, from specific results to general knowledge about ecology and its application to conservation and the management of resources. The Center has engaged hundreds of graduate students and grade school children, and has developed information access tools that are becoming the standard for the discipline.

About Interactive Supercomputing
Interactive Supercomputing (ISC) launched in 2004 to commercialize Star-P, an interactive parallel computing platform. With automatic parallelization and interactive execution of existing desktop simulation applications, Star-P merges two previously distinct environments – desktop computers and high performance servers – into one. Based in Waltham, Mass., the privately held company markets Star-P for a range of biomedical, financial, and government laboratory research applications. Additional information is available at http://www.interactivesupercomputing.com


Golden Eggs
(HP Visual Diagram and Config Guides)
Clusters:
CP4000 32x DL145G2 GigE Opteron, Dual Core
CP4000 64x DL145 GigE Opteron
CP4000 102x DL145 GigE Opteron
CP4000 32x DL145 Myri Opteron
Rocks Cluster 16-22 DL145 Opteron
Rocks Cluster 30-46 DL145 Opteron
Rocks Cluster 64-84 DL145 Opteron
LC3000 GigaE 24-36 DL145 Opteron
LC3000 Myri 16-32x DL145 Opteron
LC3000 GigaE 16-22x DL145 Opteron
LC2000 GigaE 16-22x DL360G3 Xeon
ProLiant:
DL365 System 2600MHz 2P 1U Opteron Dual Core
DL360 G5 System 3000MHz 2P 1U EM64T Dual/Quad Core
DL385 G2 2600MHz 2P Opteron Dual Core
DL380 G5 3000MHz 2P EM64T Dual/Quad Core
DL140 3060MHz 2P IA32
DL140 G2 3600MHz 2P EM64T
DL145 2600MHz 2P Opteron
DL145 G2 2600MHz 2P Opteron Dual Core
DL360 G4 3400MHz 2P EM64T
DL360 G4p 3800MHz 2P EM64T
DL380 G4 3800MHz 2P EM64T
DL385 2800MHz 2P Opteron Dual Core
DL560 3000MHz 4P IA32
DL580 G3 3330MHz 4P EM64T
DL585 2800MHz 4P Opteron Dual Core
Integrity:
Montecito 2P-16P, rx2660-rx8640 (multi-system diagram)
rx2660 1600MHz 2P 2U Montecito Systems and Cluster
rx6600 1600MHz 4P 7U Single & Cluster
rx3600 1600MHz 2P 4U Single & Cluster
rx2620 1600MHz 2P 2U Single & Cluster
Superdome 64P base configuration
Integrity Family Portrait (rx1620 thru rx8620), IA64
rx1620 1600MHz 2P MSA1000 Cluster IA64
rx2620 1600MHz 2P MSA1000 Cluster IA64
rx4640 1600MHz 4P MSA1000 Cluster IA64
rx7620 1600MHz 8P 10U Systems and MSA1000 Cluster
rx8620 1600MHz 16P 17U Systems and MSA1000 Cluster
Storage:
MSA30-MI Dual SCSI Cluster, rx3600, rx6600 and rx2660
MSA30-MI Dual SCSI Cluster, rx1620...rx4640
MSA500 G2, SCSI
MSA1510i IP SAN 48TB, SCSI and SATA
MSA1500 48TB, SCSI and SATA
Misc:
Dual Core AMD64 and EM64T systems with MSA1500






Appro: Enterprise and High Performance Computing Whitepapers
Is Your HPC Cluster Ready for Multi-core Processors?:
Multi-core processors bring new challenges and opportunities for HPC clusters. Get a first look at using these processors and at strategies for better performance.

Accelerating Results through Innovation:
Achieve maximum compute power and efficiency with Appro Cluster Solutions. Our highly scalable clusters are designed to seamlessly integrate with existing high performance, scientific, technical, and commercial computing environments.
Keeping Your Cool in the Data Center:
Rethinking IT architecture and infrastructure is not a simple job. This whitepaper helps IT managers overcome challenges with thermal, power, and system management.

Unlocking the Value of IT with Appro HyperBlade:
A fully integrated cluster combining advantages of blade and rack-mount servers for a flexible, modular, scalable architecture designed for Enterprise and HPC applications.
AMD Opteron-based products | Intel Xeon-based products


Hewlett-Packard: Linux High Performance Computing Whitepapers
Unified Cluster Portfolio:
A comprehensive, modular package of tested and pre-configured hardware, software and services for scalable computation, data management and visualization.

Your Fast Track to Cluster Deployment:
Designed to enable faster ordering and configuration, shorter delivery times and increased savings. Customers can select from a menu of popular cluster components, which are then factory assembled into pre-defined configurations with optional software installation.
Message Passing Interface library (HP-MPI):
A high performance and production quality implementation of the Message-Passing Interface (MPI) standard for HP servers and workstations.

Cluster Platform Express:
Cluster Platform Express comes straight to you, factory assembled and available with pre-installed software for cluster management, and ready for deployment.
AMD Opteron-based ProLiant nodes | Intel Xeon-based ProLiant nodes



     Copyright © 2001-2007 LinuxHPC.org
Linux is a trademark of Linus Torvalds
All other trademarks are those of their owners.
    