SpyderByte.com: Technical Portals

 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
Home | About | News Archives | Contribute News, Articles, Press Releases | Mobile Edition | Contact | Sponsorship | Search | Privacy
Research and Services
Cluster Quoter
Cluster Builder
Hardware Vendors
Software Vendors
Service & Support (Consulting)
Training Vendors
Golden Eggs
News
Latest News
Newsletter
News Archives
Search Archives
Reference
Featured Articles
Beginners
Whitepapers
Documentation
Software
Lists/Newsgroups
Books
User Groups
Forums
Employment/Jobs
Beowulf
Applications
Interconnects

Linux HPC News Update
Stay current on Linux related HPC news, events and information.
LinuxHPC Newsletter

Other Mailing Lists:
Linux High Availability
Beowulf Mailing List
Gelato.org (Linux Itanium)

Linux HPC Links
Favorites:
Cluster Monkey (Doug Eadline, et al)
HPCWire (Tabor Communications)
Scalability.org (Dr. Joe Landman)

Beowulf.org
Beowulf Users Group
Blade.org
High Performance Computing Clusters
Gelato.org
The Aggregate
Top500
Cluster Benchmarks
Cluster Computing Info Centre
Coyote Gulch
Linux Clustering Info Ctr.
Robert Brown's Beowulf Page
Sourceforge Cluster Foundry
HPC DevChannel
OpenSSI
Grid-Scape.org
SuperComputingOnline
HPC User Forum
Gridtech
GridsWatch
News Feed
LinuxHPC.org has an RSS/RDF feed if you wish to include it on your website.
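Including such a feed on another site usually means fetching the RSS document and pulling out the item titles and links. A minimal sketch using only the Python standard library is below; the feed content is a made-up sample (the actual LinuxHPC.org feed URL is not shown on this page), so substitute the real feed document or URL when embedding.

```python
# Minimal sketch: extracting headlines from an RSS 2.0 feed so they can be
# embedded in another website. SAMPLE_RSS is a hypothetical stand-in for the
# real feed document.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>LinuxHPC.org</title>
    <item>
      <title>Example headline one</title>
      <link>http://example.org/1</link>
    </item>
    <item>
      <title>Example headline two</title>
      <link>http://example.org/2</link>
    </item>
  </channel>
</rss>"""

def headlines(rss_text):
    """Return (title, link) pairs for each <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

# Render the items as simple HTML links for inclusion in a sidebar.
for title, link in headlines(SAMPLE_RSS):
    print(f"<a href='{link}'>{title}</a>")
```

In practice the feed would be fetched over HTTP on a schedule and the rendered links cached, rather than parsed on every page view.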
Scalability.org
- Slightly OT
- Generating an “optimal” circuit from a language construct
- For a market that some claim does not exist, this is attracting lots of attention and product …
- To abstract or not to abstract: that is the question …
- The market for accelerators and APUs
- APU programming made easy?
- Breaking mirror symmetry in HPC
- Is there a need for supercomputing?
- APUs in the news
- A teraflop here, a teraflop there, and pretty soon you are talking about real computing power
hpcanswers.com
- What is Geneseo?
- What is stream programming?
- What are grand challenge problems?
- What is Roadrunner?
- What is software pipelining?
- How do we improve locality of reference?
- Is RDMA really that bad?
- What are some open-source tools for technical computing?
- What is Cray Hood?
- What is OpenFabrics?
LinuxHPC.org
Home
About
Contact
Mobile Edition
Sponsorship

Linux Cluster RFQ Form
Reach multiple vendors with one Linux Cluster RFQ Form. Save time and effort: let LinuxHPC.org do all the legwork for you, free of charge. Request A Quote...

Supercomputing 2006 Registration Open
SC06, the premier international conference on high performance computing, networking and storage, convenes November 11-17, 2006, in Tampa, Florida. Register Now... See you there!

Latest News

Cluster File Systems Attains World Leadership Position in High-Performance File Systems
Posted by Ken Farmer, Tuesday July 18 2006 @ 08:55AM EDT

Cluster File Systems(TM), Inc. (CFS) announced that its Lustre(R) File System has established a world leadership position in High Performance Computing (HPC) in the area of parallel, scalable cluster file systems. The most recent release of the TOP500 Supercomputer Sites list confirms that the highest-ranked supercomputers in North America, Europe and Asia rely on Lustre technology to meet their requirements for scalability and high performance. In fact, 10 of the world's top 30 supercomputers use Lustre software, including the number-one-ranked supercomputer in the world.

Highlights from the June 28th TOP500 list include:

-- In North America, the world's fastest supercomputer, IBM's BlueGene/L at Lawrence Livermore National Laboratory in Livermore, California, performs over 280 trillion floating-point operations per second and uses the Lustre File System.

-- In France, the French Atomic Energy Authority (CEA) uses the Lustre File System on Europe's fastest supercomputer, the 8,700-processor TERA10 system provided by Bull SA.

-- Sun and NEC have deployed Asia's fastest supercomputer, the TSUBAME computer located at the Tokyo Institute of Technology in Japan. TSUBAME has over one petabyte of storage and uses the Lustre File System.

-- More than 70 of the TOP500 Supercomputers have been deployed with Lustre technology, with systems ranked from number one to number 491.

"The broad, global reach of the Lustre File System is a testament to its scalability, flexibility and stability," said Dr. Peter Braam, President and CEO of Cluster File Systems, Inc. "CFS, along with our partners in the open source community, will continue to build upon Lustre software capabilities, and further refine the user experience in order to meet the demands of clustered computing users today and in the future."

About Cluster File Systems, Inc.

Cluster File Systems, Inc. (CFS) has established itself as the recognized leader in high-performance, scalable cluster file system technology. Extensive experience, innovative insights, and proven engineering have enabled CFS to dramatically surpass the scalability limits of modern computing. The company's premier Lustre(R) cluster file system currently powers clusters with tens of thousands of nodes and petabytes of data, delivering groundbreaking parallel I/O and metadata throughput on some of the world's largest supercomputers. CFS provides Lustre technical support, training, and engineering services, and is actively working with storage and cluster vendors to develop the next generation of intelligent storage devices. The Lustre File System for Linux is Open Source software developed and maintained by CFS. For more information, see http://www.clusterfs.com .

About the TOP500 Supercomputer Sites List

The TOP500 list is released twice yearly and is compiled by: Hans Meuer of the University of Mannheim, Germany; Erich Strohmaier and Horst Simon of NERSC/Lawrence Berkeley National Laboratory; and Jack Dongarra of the University of Tennessee, Knoxville. More information on the TOP500 listings can be found at http://www.top500.org .

Lustre, the Lustre logo, Cluster File Systems, and CFS are trademarks of Cluster File Systems, Inc. in the United States. All other names are property of their respective owners.



Sponsors

HP
HPC Market Leader
Three Years Running

Affiliates

Cluster Monkey

Golden Eggs
(HP Visual Diagram and Config Guides)
Clusters:
CP3000 32x DL140G2 & DL360G4p GigE EM64T
CP4000 32x DL145G2 GigE Opteron, Dual Core
CP4000 64x DL145 GigE Opteron
CP4000 102x DL145 GigE Opteron
CP4000 32x DL145 Myri Opteron
Rocks Cluster 16-22 DL145 Opteron
Rocks Cluster 30-46 DL145 Opteron
Rocks Cluster 64-84 DL145 Opteron
LC3000 GigaE 24-36 DL145 Opteron
LC3000 Myri 16-32x DL145 Opteron
LC3000 GigaE 16-22x DL145 Opteron
LC2000 GigaE 16-22x DL360G3 Xeon
ProLiant:
DL140 3060MHz 2P IA32
DL140 G2 3600MHz 2P EM64T
DL145 2600MHz 2P Opteron
DL145 G2 2600MHz 2P Opteron Dual Core
DL360 G4 3400MHz 2P EM64T
DL360 G4p 3800MHz 2P EM64T
DL380 G4 3800MHz 2P EM64T
DL385 2800MHz 2P Opteron Dual Core
DL560 3000MHz 4P IA32
DL580 G3 3330MHz 4P EM64T
DL585 2800MHz 4P Opteron Dual Core
Integrity:
Superdome 64P base configuration
Integrity Family Portrait (rx1620 thru rx8620), IA64
rx1620 1600MHz 2P MSA1000 Cluster IA64
rx2620 1600MHz 2P MSA1000 Cluster IA64
rx4640 1600MHz 4P MSA1000 Cluster IA64
rx7620 1600MHz 8P 10U Systems and MSA1000 Cluster
rx8620 1600MHz 16P 17U Systems and MSA1000 Cluster
Storage:
MSA30-MI Dual SCSI Cluster, rx1620...rx4640
MSA500 G2, SCSI
MSA1510i IP SAN 48TB, SCSI and SATA
MSA1500 48TB, SCSI and SATA
Misc:
Dual Core AMD64 and EM64T systems with MSA1500

Hewlett-Packard: Linux High Performance Computing Whitepapers
Unified Cluster Portfolio:
A comprehensive, modular package of tested and pre-configured hardware, software and services for scalable computation, data management and visualization.

Your Fast Track to Cluster Deployment:
Designed to enable faster ordering and configuration, shorter delivery times and increased savings. Customers can select from a menu of popular cluster components, which are then factory assembled into pre-defined configurations with optional software installation.
Message Passing Interface library (HP-MPI):
A high performance and production quality implementation of the Message-Passing Interface (MPI) standard for HP servers and workstations.

Cluster Platform Express:
Cluster Platform Express ships factory assembled, available with pre-installed cluster management software, and ready for deployment.
AMD Opteron-based ProLiant nodes | Intel Xeon-based ProLiant nodes



     Copyright © 2001-2006 LinuxHPC.org
Linux is a trademark of Linus Torvalds
All other trademarks are those of their owners.