SpyderByte.com Technical Portals
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
Home About News Archives Contribute News, Articles, Press Releases Mobile Edition Contact Advertising/Sponsorship Search Privacy
Research and Services
Cluster Quoter (HPC Cluster RFQ)
Hardware Vendors
Software Vendors
HPC Consultants
Training Vendors
News
Latest News
Newsletter
News Archives
Search Archives
Reference
Featured Articles
Cluster Builder
Beginners
Whitepapers
Documentation
Software
Lists/Newsgroups
Books
User Groups
Golden Eggs (Configuration Diagrams)
Linux HPC Links
Favorites:
Cluster Monkey (Doug Eadline, et al)
HPCWire (Tabor Communications)
insideHPC.com (John West)
Scalability.org (Dr. Joe Landman)

Beowulf.org
Beowulf Users Group
High Performance Computing Clusters
Thinking Parallel
Gelato.org
The Aggregate
Top500
Cluster Computing Info Centre
Coyote Gulch
Robert Brown's Beowulf Page
FM.net: Scientific/Engineering
SuperComputingOnline
HPC User Forum
GridsWatch
Linux HPC News Update
Stay current on Linux related HPC news, events and information.
LinuxHPC Newsletter

Other Mailing Lists:
Linux High Availability
Beowulf Mailing List
Gelato.org (Linux Itanium)

LinuxHPC.org
Home
About
Contact
Mobile Edition
Sponsorship

Linux Cluster RFQ Form
Reach multiple vendors with one Linux Cluster RFQ form. Save time and effort; let LinuxHPC.org do all the legwork for you, free of charge. Request A Quote...

Latest News

Obsidian Longbow Delivers InfiniBand Storage Across University of Florida’s Campus and WAN Links
Posted by Ken Farmer, Monday November 13 2006 @ 09:02PM EST

Obsidian Research Corporation, the leader in InfiniBand range extension, has teamed with Rackable Systems and Cisco Systems to deliver computing, storage, and networking equipment to the University of Florida (UF). This equipment enables high-performance storage across Wide-Area Network (WAN) links at rates approaching wire speed. UF’s InfiniBand-based (SDP) cluster file system built using Rackable Systems’ clustered file system technology currently supports sustained transfer rates of 1.4 GB/sec to the underlying Rackable Systems OmniStor™ Fibre Channel (FC) RAID arrays.

Obsidian Research Corp.’s Longbow InfiniBand range extension technology exposes this high-performance, parallel file system to other remote clusters on campus. “The Obsidian Longbow products allow UF to distribute InfiniBand-connected storage over a campus-area or wide-area link at full data rates,” said Dr. David Southwell, CEO of Obsidian Research Corporation. “This capability eases management and sharing of data in UF’s emerging grid infrastructure.” UF will showcase its advanced network technology in real time at Supercomputing ’06 (Booths #2051 and #252).

UF Provides Ideal Case Study for InfiniBand and Range Extension

In October 2005, UF deployed a high-performance compute cluster from Rackable Systems consisting of 200 IB-connected, dual-processor nodes with AMD Opteron 275 processors (800 cores). UF wanted an I/O subsystem that would complement rather than negate the cluster’s computational capacity, that would utilize the IB interconnect, and that would scale in performance and capacity as storage needs increased. “We found our solution in Rackable Systems’ clustered file system technology utilizing IB/SDP for iSCSI transport,” said Dr. Craig Prescott from the UF HPC Center. “This I/O solution is capable of sustaining in excess of 1.4 GB/s of aggregate throughput for random write access patterns and will soon be expanded to support over 2.4 GB/s.”
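The cluster figures above can be sanity-checked with simple arithmetic. A minimal sketch: the node count and the dual-socket, dual-core layout are from the article; the per-node share of the aggregate I/O rate is our own division, assuming all nodes write at once.

```python
# Back-of-the-envelope check of the cluster figures quoted above.
nodes = 200            # IB-connected, dual-processor nodes (from the article)
sockets_per_node = 2   # dual-processor
cores_per_socket = 2   # AMD Opteron 275 is a dual-core part

total_cores = nodes * sockets_per_node * cores_per_socket
print(total_cores)  # 800, matching the article's core count

aggregate_gb_per_s = 1.4  # sustained random-write throughput (from the article)
per_node_mb_per_s = aggregate_gb_per_s * 1024 / nodes
print(round(per_node_mb_per_s, 1))  # ~7.2 MB/s per node if all nodes write simultaneously
```

The per-node figure is modest, which is the point of a shared parallel file system: aggregate bandwidth scales with the storage back end rather than with any single node's link.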

With the local storage issue resolved, UF needed a solution that allowed remote clusters to access the high-performance storage as simply as possible. “The Obsidian Longbow products were the only solution that allowed us to transparently extend the reach and performance of our InfiniBand storage network,” said Dr. Charles Taylor from the UF HPC Center. “The Longbow Campus product allows us to extend the reach of InfiniBand across our campus, while the Longbow XR would enable us to connect all major Florida universities via the 10 Gb/s Florida Lambda Rail (FLR) WAN.”

See InfiniBand Storage and Range Extension in Action at SC06

With the help of InfiniBand infrastructure switches and host channel adapters (HCAs) from Cisco Systems, network connectivity from Florida Lambda Rail, and Longbow InfiniBand range extension products from Obsidian, UF will demonstrate remote InfiniBand storage not only across campus, but across 1,100 km of the state of Florida.

In this demonstration, Rackable Systems servers located in the UF/FLR booth (#2051) access large data sets located in arrays of Rackable Systems storage appliances (Booth #252) using InfiniBand. Obsidian Longbow Campus units transport the iSCSI protocol payload via IB/SDP through a 10 km, dark-fiber spool, preserving local performance levels across a simulated campus network.

Identical application software drives I/O traffic to the UF/FLR booth (#2051) from additional storage appliances located in the High Performance Computing facility on the Gainesville campus using Obsidian Longbow XRs, a networking platform that preserves the performance advantages of InfiniBand across 10GE, ATM, and OC-192 WANs. UF’s demo shows that islands of InfiniBand storage can be aggregated campus-wide while maintaining performance superior to that of other storage and storage-transport technologies, and while easing configuration, access control, capacity balancing, and management.
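Why dedicated range-extension hardware matters at these distances comes down to the bandwidth-delay product: InfiniBand uses credit-based, link-level flow control with buffers sized for machine-room cable runs, so a long link stalls unless something supplies deep buffering. A rough sketch using the article's 1,100 km and 10 Gb/s figures (the ~2×10⁸ m/s propagation speed in fiber, about 2/3 c, is a standard approximation, not from the article):

```python
# Bandwidth-delay product over the ~1,100 km Florida Lambda Rail path.
distance_km = 1100              # Gainesville to the SC06 demo path (from the article)
prop_speed_m_s = 2.0e8          # light in fiber, roughly 2/3 c (standard approximation)

one_way_s = distance_km * 1000 / prop_speed_m_s
rtt_s = 2 * one_way_s
print(round(rtt_s * 1000, 1))   # ~11.0 ms round trip

link_bits_per_s = 10e9          # 10 Gb/s FLR WAN (from the article)
bdp_bytes = link_bits_per_s * rtt_s / 8
print(round(bdp_bytes / 1e6, 2))  # ~13.75 MB must be in flight to keep the pipe full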

http://www.obsidianresearch.com


< Microway and DRC Partner for Reconfigurable Computing Products | DataDirect Networks Debuts S2A Petascale Storage Solution at SC06 >

 


Supercomputing '07
Nov 10-16, Reno, NV


Register now...

Sponsors








Affiliates



Cluster Monkey




Golden Eggs
(HP Visual Diagram and Config Guides)
Clusters:
CP4000 32x DL145G2 GigE Opteron, Dual Core
CP4000 64x DL145 GigE Opteron
CP4000 102x DL145 GigE Opteron
CP4000 32x DL145 Myri Opteron
Rocks Cluster 16-22 DL145 Opteron
Rocks Cluster 30-46 DL145 Opteron
Rocks Cluster 64-84 DL145 Opteron
LC3000 GigaE 24-36 DL145 Opteron
LC3000 Myri 16-32x DL145 Opteron
LC3000 GigaE 16-22x DL145 Opteron
LC2000 GigaE 16-22x DL360G3 Xeon
ProLiant:
> DL365 System 2600Mhz 2P 1U Opteron Dual Core
DL360 G5 System 3000Mhz 2P 1U EM64T Dual/Quad Core
DL385 G2 2600Mhz 2P Opteron Dual Core
DL380 G5 3000Mhz 2P EM64T Dual/Quad Core
DL140 3060MHz 2P IA32
DL140 G2 3600MHz 2P EM64T
DL145 2600MHz 2P Opteron
DL145 G2 2600MHz 2P Opteron Dual Core
DL360 G4 3400MHz 2P EM64T
DL360 G4p 3800MHz 2P EM64T
DL380 G4 3800MHz 2P EM64T
DL385 2800MHz 2P Opteron Dual Core
DL560 3000MHz 4P IA32
DL580 G3 3330MHz 4P EM64T
DL585 2800MHz 4P Opteron Dual Core
Integrity:
Montecito 2P-16P, rx2660-rx8640 (multi-system diagram)
rx2660 1600MHz 2P 2U Montecito Systems and Cluster
rx6600 1600MHz 4P 7U Single & Cluster
rx3600 1600MHz 2P 4U Single & Cluster
rx2620 1600MHz 2P 2U Single & Cluster
Superdome 64P base configuration
Integrity Family Portrait (rx1620 thru rx8620), IA64
rx1620 1600MHz 2P MSA1000 Cluster IA64
rx2620 1600MHz 2P MSA1000 Cluster IA64
rx4640 1600MHz 4P MSA1000 Cluster IA64
rx7620 1600MHz 8P 10U Systems and MSA1000 Cluster
rx8620 1600MHz 16P 17U Systems and MSA1000 Cluster
Storage:
MSA30-MI Dual SCSI Cluster, rx3600, rx6600 and rx2660
MSA30-MI Dual SCSI Cluster, rx1620...rx4640
MSA500 G2, SCSI
MSA1510i IP SAN 48TB, SCSI and SATA
MSA1500 48TB, SCSI and SATA
Misc:
Dual Core AMD64 and EM64T systems with MSA1500






Appro: Enterprise and High Performance Computing Whitepapers
Is Your HPC Cluster Ready for Multi-core Processors?:
Multi-core processors bring new challenges and opportunities for the HPC cluster. Get a first look at utilizing these processors and strategies for better performance.

Accelerating Results through Innovation:
Achieve maximum compute power and efficiency with Appro Cluster Solutions. Our highly scalable clusters are designed to seamlessly integrate with existing high performance, scientific, technical, and commercial computing environments.
Keeping Your Cool in the Data Center:
Rethinking IT architecture and infrastructure is not a simple job. This whitepaper helps IT managers overcome challenges with thermal, power, and system management.

Unlocking the Value of IT with Appro HyperBlade:
A fully integrated cluster combining advantages of blade and rack-mount servers for a flexible, modular, scalable architecture designed for Enterprise and HPC applications.
AMD Opteron-based products | Intel Xeon-based products


Hewlett-Packard: Linux High Performance Computing Whitepapers
Unified Cluster Portfolio:
A comprehensive, modular package of tested and pre-configured hardware, software and services for scalable computation, data management and visualization.

Your Fast Track to Cluster Deployment:
Designed to enable faster ordering and configuration, shorter delivery times and increased savings. Customers can select from a menu of popular cluster components, which are then factory assembled into pre-defined configurations with optional software installation.
Message Passing Interface library (HP-MPI):
A high performance and production quality implementation of the Message-Passing Interface (MPI) standard for HP servers and workstations.

Cluster Platform Express:
Cluster Platform Express comes straight to you, factory assembled and available with pre-installed software for cluster management, and ready for deployment.
AMD Opteron-based ProLiant nodes | Intel Xeon-based ProLiant nodes



Home About News Archives Contribute News, Articles, Press Releases Mobile Edition Contact Advertising/Sponsorship Search Privacy
     Copyright © 2001-2007 LinuxHPC.org
Linux is a trademark of Linus Torvalds
All other trademarks are those of their owners.
    
  SpyderByte.com ;Technical Portals