SpyderByte.com : Technical Portals
      
 News & Information Related to Linux High Performance Computing, Linux Clustering and Cloud Computing
Home About News Archives Contribute News, Articles, Press Releases Mobile Edition Contact Advertising/Sponsorship Search Privacy
Latest News

New Dell cluster activated at Purdue
Posted by Greg Kline, Tuesday November 08 2011 @ 11:17AM EST

Purdue activates fourth campus-wide research cluster in four years

“Contagion” is just a hit movie, but its premise – a deadly disease spread around the world by airline passengers – is no fiction, as recent experience with SARS and the H1N1 flu virus illustrates.

That’s one reason Purdue University mechanical engineering Professor Qingyan Chen and his students are looking at aircraft environmental control systems with an eye to making them less apt to circulate airborne contaminants, as well as making the systems less of a drain on fuel.

A new research supercomputing cluster at Purdue will help advance Chen’s work and that of 19 other campus research groups and researchers nationally. The cluster, called Hansen, features 108 Dell compute nodes, each with four 12-core AMD Opteron 6176 processors for 48 cores per node. The nodes include 96, 192 or, in a large-memory option, 512 gigabytes of memory. In addition, the new cluster uses 10 gigabit Ethernet interconnects and has a high-performance Lustre scratch storage system. Hansen nodes run Red Hat Enterprise Linux 5.5 and use Portable Batch System Professional 11.1.0 for resource and job management.
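The article names PBS Professional as Hansen's scheduler but doesn't show Purdue's submission conventions. As a rough sketch only: PBS Pro jobs are shell scripts with `#PBS` directives, and a whole-node request on 48-core, 96 GB nodes might look like the following. The queue name, walltime, and solver binary are invented for illustration and are not documented Hansen settings.

```shell
#!/bin/bash
#PBS -N airflow_model                  # job name (example)
#PBS -l select=2:ncpus=48:mem=96gb    # two whole 48-core nodes, 96 GB each
#PBS -l walltime=04:00:00             # assumed 4-hour limit
#PBS -q workq                         # hypothetical queue name

cd "$PBS_O_WORKDIR"                   # run from the submission directory
mpirun -np 96 ./cfd_solver input.cfg  # hypothetical MPI solver binary
```

Submitted with `qsub jobscript.sh`; `select=2:ncpus=48` asks the scheduler for two full nodes rather than 96 scattered cores.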

Hansen is expected to be on the latest TOP500 list of the world’s supercomputers, to be released at SC11, the largest international supercomputing conference, in Seattle Nov. 15.

Chen, principal investigator for the Air Transportation Center of Excellence for Airline Cabin Environment at Purdue, combines physical experiments and intensive computer modeling in his lab’s research. His models not only consider the three-dimensional nature of air circulation in spaces like airline cabins but often a fourth dimension: changes over time.

“Our work involves the solution of very complex equations and then we do iterations, which is why it takes so much computing power,” Chen says.
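Chen's actual models aren't published here, but the reason iteration dominates the cost is generic to this kind of solver. A minimal stand-in (an assumption, not his code): Jacobi relaxation of the 2D Laplace equation, where every iteration sweeps the entire grid and many sweeps are needed before the solution settles. Real 3D or 4D airflow models multiply this work enormously.

```python
import numpy as np

def jacobi_laplace(n=50, iters=500):
    """Relax a steady-state heat/diffusion problem on an n x n grid.

    One fixed 'hot' edge at 100.0, all other boundaries held at 0.0.
    Each iteration replaces every interior point with the average of
    its four neighbors; repeating this drives the grid toward the
    solution of the Laplace equation.
    """
    grid = np.zeros((n, n))
    grid[0, :] = 100.0  # boundary condition: hot top edge
    for _ in range(iters):
        # NumPy evaluates the right-hand side from the old grid before
        # assigning, so this is a true Jacobi sweep over all interior points.
        grid[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                                   grid[1:-1, :-2] + grid[1:-1, 2:])
    return grid
```

Each sweep touches every grid point, and hundreds or thousands of sweeps are typical, which is why problems like these consume so many compute hours.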

Besides Chen, researchers in fields including earth and atmospheric sciences, chemistry, physics, computer science, aeronautics and astronautics, electrical and computer engineering, materials engineering, and more are using the new cluster.

“The research Dr. Chen and his team at Purdue are conducting is no doubt going to have a significant impact on the science community, and more importantly the safety and health of global citizens,” says John Mullen, Dell vice president and general manager of major public accounts, education, state and local government. “This work is exactly the reason why we continue to challenge ourselves at Dell by building the most powerful research clusters in the world. Together, we’re making important advancements in science and technology.”

This is the fourth research cluster Purdue has built in as many years for campus researchers, as well as for use on DiaGrid, a Purdue-based multi-campus distributed computing system, the National Science Foundation’s TeraGrid and XSEDE networks, and the Open Science Grid. The three previous clusters have delivered more than 300 million research computing hours to researchers and their students. The new cluster should push that to a half billion hours by year’s end. Together the four clusters deliver more than 331 teraflops at peak.

“We build these clusters because of the still-growing demand for high-performance computing in engineering and science, whether it is examining how protein molecules work in the human body or examining the workings of the universe,” says John Campbell, the associate vice president who heads Purdue’s central research computing program. “The clusters provide faculty with a worry-free environment where they can focus on tackling new and larger computational challenges that lead to discovery and benefit society, among other things through technological and medical advances.”

Purdue builds clusters under its Community Cluster Program, which won a Campus Technology Innovators Award in 2010. The program is a partnership among Purdue faculty and Information Technology at Purdue (ITaP), the university’s central IT organization, who fund the systems jointly. ITaP’s Rosen Center for Advanced Computing assembles, operates and manages the clusters while the faculty partners make their purchased capacity available to their peers when it’s idle, maximizing the use of the systems.

Like the other clusters, the new cluster is named for a prominent figure in Purdue research computing history. The Hansen cluster recognizes the late Arthur G. Hansen, Purdue’s eighth president, who was a strong supporter of high-performance computing at Purdue. Hansen died in 2010.

Writer: Greg Kline, science and technology writer, Information Technology at Purdue (ITaP), 765-494-8167, gkline@purdue.edu


 


     Copyright © 2001-2012 LinuxHPC.org
Linux is a trademark of Linus Torvalds
All other trademarks are those of their owners.
    