SpyderByte.com Technical Portals
      
 The #1 Site for News & Information Related to Linux High Performance Technical Computing, Linux High Availability and Linux Parallel Clustering
Home | About | News Archives | Contribute News, Articles, Press Releases | Mobile Edition | Contact | Sponsorship | Search | Privacy
Research and Services
Cluster Quoter
Cluster Builder
Hardware Vendors
Software Vendors
Service & Support (Consulting)
Training Vendors
Golden Eggs
News
Latest News
Newsletter
News Archives
Search Archives
Reference
Featured Articles
Beginners
Whitepapers
Documentation
Software
Lists/Newsgroups
Books
User Groups
Forums
Employment/Jobs
Beowulf
Applications
Interconnects

Linux HPC News Update
Stay current on Linux related HPC news, events and information.
LinuxHPC Newsletter

Other Mailing Lists:
Linux High Availability
Beowulf Mailing List
Gelato.org (Linux Itanium)

Linux HPC Links
Favorites:
Cluster Monkey (Doug Eadline, et al)
HPCWire (Tabor Communications)
Scalability.org (Dr. Joe Landman)

Beowulf.org
Beowulf Users Group
Blade.org
High Performance Computing Clusters
Gelato.org
The Aggregate
Top500
Cluster Benchmarks
Cluster Computing Info Centre
Coyote Gulch
Linux Clustering Info Ctr.
Robert Brown's Beowulf Page
Sourceforge Cluster Foundry
HPC DevChannel
OpenSSI
Grid-Scape.org
SuperComputingOnline
HPC User Forum
Gridtech
GridsWatch
News Feed
LinuxHPC.org has an RSS/RDF feed if you wish to include it on your website.
Scalability.org
- Slightly OT
- Generating an “optimal” circuit from a language construct
- For a market that some claim does not exist, this is attracting lots of attention and product …
- To abstract or not to abstract: that is the question …
- The market for accelerators and APUs
- APU programming made easy?
- Breaking mirror symmetry in HPC
- Is there a need for supercomputing?
- APUs in the news
- A teraflop here, a teraflop there, and pretty soon you are talking about real computing power
hpcanswers.com
- What is Geneseo?
- What is stream programming?
- What are grand challenge problems?
- What is Roadrunner?
- What is software pipelining?
- How do we improve locality of reference?
- Is RDMA really that bad?
- What are some open-source tools for technical computing?
- What is Cray Hood?
- What is OpenFabrics?
LinuxHPC.org
Home
About
Contact
Mobile Edition
Sponsorship

Linux Cluster RFQ Form
Reach Multiple Vendors With One Linux Cluster RFQ Form. Save time and effort: let LinuxHPC.org do all the legwork for you, free of charge. Request A Quote...

Supercomputing 2006 Registration Open
SC06, the premier international conference on high performance computing, networking and storage, will convene November 11-17, 2006, in Tampa, Florida. Register Now... See you there!

Latest News

ClearSpeed Breaks GigaFLOP per Watt Performance Barrier for Supercomputing
Posted by Kenneth Farmer, Tuesday September 05 2006 @ 09:32AM EDT

ClearSpeed Technology (LSE:CSD), the leader in double precision coprocessor acceleration technology, today announced Linpack benchmark results that set new standards for energy-efficient computation on high performance computing (HPC) clusters.

ClearSpeed Advance accelerator boards, rated at only 25 Watts of power consumption per board, added 28.5 GigaFLOPS (GFLOPS) each to a cluster of Hewlett-Packard ProLiant DL380 G5 servers running the high performance Linpack benchmark. With two Advance accelerator boards in each of the four servers, the cluster performance was increased to over 364 GFLOPS while adding only 200 Watts to the overall power levels. Without ClearSpeed acceleration, the four-node cluster delivered 136 GFLOPS from its eight Intel® Xeon® 5160 (Woodcrest) dual-core processors while consuming 1,940 Watts of power. A similarly configured single node delivered 90 GFLOPS with acceleration, compared with 34 GFLOPS for the non-accelerated system.

The ClearSpeed-accelerated cluster completed the Linpack benchmark run in just 18.4 minutes while using only 40% of the energy required by the non-accelerated cluster, which took 48.4 minutes to finish.
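
For readers who want to check the arithmetic, the following back-of-the-envelope sketch uses only figures quoted in this release (28.5 GFLOPS and 25 Watts per board, 136 GFLOPS and 1,940 Watts for the unaccelerated cluster, 200 Watts added by the eight boards, and the two runtimes). It is illustrative arithmetic only, not a measurement.

/* Back-of-the-envelope check of the figures quoted above.
 * All inputs are taken from this press release; nothing is measured here. */
#include <stdio.h>

int main(void)
{
    /* Accelerated four-node cluster */
    double accel_gflops  = 364.0;            /* total Linpack performance        */
    double accel_watts   = 1940.0 + 200.0;   /* base cluster plus eight boards   */
    double accel_minutes = 18.4;

    /* Non-accelerated four-node cluster */
    double base_gflops   = 136.0;
    double base_watts    = 1940.0;
    double base_minutes  = 48.4;

    /* Per-board figure: 28.5 GFLOPS added per 25 W board */
    printf("Added GFLOPS per Watt per board: %.2f\n", 28.5 / 25.0);

    /* Energy for the full benchmark run, in watt-hours */
    double accel_wh = accel_watts * accel_minutes / 60.0;
    double base_wh  = base_watts  * base_minutes  / 60.0;
    printf("Accelerated run energy:     %.0f Wh\n", accel_wh);
    printf("Non-accelerated run energy: %.0f Wh\n", base_wh);
    printf("Energy ratio: %.0f%%\n", 100.0 * accel_wh / base_wh);

    /* Speedup quoted as "more than two and a half times" */
    printf("Performance ratio: %.2f x\n", accel_gflops / base_gflops);
    return 0;
}

Run as written, this reproduces roughly 1.14 GFLOPS per added Watt, an energy ratio of about 40%, and a performance ratio of about 2.7x, consistent with the claims above.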

"Consuming no more power than it takes to turn on the lights in a normal living room, we have increased the performance of the cluster more than two and a half times," said John Gustafson, ClearSpeed chief technical officer for HPC. "With the additional Linpack performance exceeding one GFLOP per Watt and almost perfect scaling, we have demonstrated that ClearSpeed accelerator technology can combine unmatched performance with the economic benefits of reduced energy consumption for HPC clusters."

To put these results in context, the performance delivered by the ClearSpeed-accelerated four-node HP cluster (a total of 16 CPU cores) is equivalent to that of the number-one Top500 (http://www.top500.org) installation from November 1996: a massive 2,048-CPU Hitachi system at the Center for Computational Science at the University of Tsukuba in Japan that delivered 368.2 GFLOPS. Even more impressively, the test cluster is contained in a half-populated 14U rack and can operate in a standard office environment.

Benchmark Results: http://www.clearspeed.com/pressreleases/Linpack%20GFLOP%20per%20Watt%2083106.pdf

Note: Previously published Linpack results for similar single node systems were 34.9 GFLOPS for the standard node and 93 GFLOPS for an accelerated node with two ClearSpeed Advance boards. The variations are a result of small differences between system configurations and problem sizes used during the benchmark runs.

Top500 Results from November 1996: http://www.clearspeed.com/pressreleases/Linpack%20GFLOP%20per%20Watt%2083106.pdf

Specifications of the benchmark system, supplied by Hewlett-Packard and tested by ClearSpeed Technology: four HP ProLiant DL380 G5 servers, each with:

- Two 3.0 GHz dual-core Intel Xeon 5160 processors
- 16 GB fully buffered DIMM memory
- Embedded NC373i Multifunction Gigabit Network Adapter
- 1000 Watt hot-plug power supply
- Two ClearSpeed Advance accelerator boards

The four servers were connected with an HP ProCurve 2824 switch.

The Linpack Benchmark and the Top500

The Linpack Benchmark was introduced by Jack Dongarra. A detailed description, as well as a list of performance results on a wide variety of machines, is available in PostScript form from Netlib. A parallel implementation of the Linpack benchmark (HPL) and instructions on how to run it can be found at http://www.netlib.org/benchmark/hpl/. The benchmark problem is to solve a dense system of linear equations. For the Top500, a version of the benchmark is used that allows the user to scale the size of the problem and to optimize the software in order to achieve the best performance for a given machine. This performance does not reflect the overall performance of a given system, as no single number ever can. It does, however, reflect the performance of a dedicated system for solving a dense system of linear equations. Since the problem is very regular, the performance achieved is quite high, and the performance numbers give a good indication of peak performance.
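
To make the relationship between problem size, runtime and the reported FLOP rate concrete, the small sketch below applies the standard operation count for the dense solve, 2/3*N^3 + 2*N^2, which is how HPL converts measured wall-clock time into a FLOP rate. The problem size and runtime used here are hypothetical placeholders, not figures taken from this article.

/* Illustration of how a Linpack (HPL) result is converted from a measured
 * runtime into a FLOP rate. The problem size N and the runtime below are
 * hypothetical placeholders, not figures from this article. */
#include <stdio.h>

/* Standard HPL operation count for solving a dense N x N system:
 * LU factorization (2/3 N^3) plus the triangular solves (2 N^2). */
static double hpl_flops(double n)
{
    return (2.0 / 3.0) * n * n * n + 2.0 * n * n;
}

int main(void)
{
    double n       = 100000.0;  /* hypothetical problem size Nmax */
    double seconds = 3600.0;    /* hypothetical runtime           */

    double gflops = hpl_flops(n) / seconds / 1e9;
    printf("N = %.0f, time = %.0f s -> %.1f GFLOPS\n", n, seconds, gflops);
    return 0;
}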

Top500 Description

The Top500 table shows the 500 most powerful commercially available computer systems known. To keep the list as compact as possible, only a subset of the data collected for each system is shown on the website, including the following fields (an illustrative sketch follows this description):

* Nworld - Position within the Top500 ranking
* Manufacturer - Manufacturer or vendor
* Computer - Type indicated by manufacturer or vendor
* Installation Site - Customer
* Location - Location and country
* Year - Year of installation/last major update
* Field of Application
* #Proc. - Number of processors
* Rmax - Maximal Linpack performance achieved
* Rpeak - Theoretical peak performance
* Nmax - Problem size for achieving Rmax

Information about the Top500 can be found at http://www.top500.org/
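
To give a concrete picture of what one row of the table carries, the sketch below mirrors the fields listed above as a plain C struct. The field names are illustrative, not the Top500 project's own schema; the sample entry reuses the November 1996 Hitachi figures quoted earlier in this article and leaves fields the article does not give as placeholders.

/* Illustrative only: one row of the Top500 table, mirroring the fields
 * listed above. The layout is made up for this example; the populated
 * values come from the 1996 Hitachi system mentioned earlier, with
 * anything not stated in this article left as a placeholder. */
#include <stdio.h>

struct top500_entry {
    int         nworld;        /* position within the Top500 ranking   */
    const char *manufacturer;  /* manufacturer or vendor               */
    const char *computer;      /* type indicated by manufacturer       */
    const char *site;          /* installation site (customer)         */
    const char *location;      /* location and country                 */
    int         year;          /* year of installation / last update   */
    const char *field;         /* field of application                 */
    int         nproc;         /* number of processors                 */
    double      rmax_gflops;   /* maximal Linpack performance achieved */
    double      rpeak_gflops;  /* theoretical peak performance         */
    long        nmax;          /* problem size for achieving Rmax      */
};

int main(void)
{
    struct top500_entry e = {
        1, "Hitachi", "2048-CPU system (model not named in this article)",
        "Center for Computational Science, University of Tsukuba",
        "Tsukuba, Japan", 1996, "Research (placeholder)", 2048,
        368.2, 0.0 /* Rpeak not quoted here */, 0 /* Nmax not quoted here */
    };

    printf("#%d %s %s: %.1f GFLOPS on %d processors\n",
           e.nworld, e.manufacturer, e.computer, e.rmax_gflops, e.nproc);
    return 0;
}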

About ClearSpeed

ClearSpeed Technology is a specialist semiconductor company focused on delivering double precision high performance coprocessors and boards to be used alongside general purpose processors in the world’s most compute-intensive applications. ClearSpeed’s advanced multi-threaded array processing technology provides the ability to significantly accelerate data-intensive applications at extremely low power. Products include chips, boards, software tools, applications and support. ClearSpeed has offices in San Jose, California and Bristol, UK and has over 86 patents granted and pending. For more information on ClearSpeed, visit http://www.clearspeed.com


< Rackable Systems Announces Agreement to Acquire Terrascale Technologies | Smaller Server Vendors Tout New Designs >

Sponsors

HP
HPC Market Leader
Three Years Running

Affiliates

Cluster Monkey

Golden Eggs
(HP Visual Diagram and Config Guides)
Clusters:
CP3000 32x DL140G2 & DL360G4p GigE EM64T
CP4000 32x DL145G2 GigE Opteron, Dual Core
CP4000 64x DL145 GigE Opteron
CP4000 102x DL145 GigE Opteron
CP4000 32x DL145 Myri Opteron
Rocks Cluster 16-22 DL145 Opteron
Rocks Cluster 30-46 DL145 Opteron
Rocks Cluster 64-84 DL145 Opteron
LC3000 GigaE 24-36 DL145 Opteron
LC3000 Myri 16-32x DL145 Opteron
LC3000 GigaE 16-22x DL145 Opteron
LC2000 GigaE 16-22x DL360G3 Xeon
ProLiant:
DL140 3060MHz 2P IA32
DL140 G2 3600MHz 2P EM64T
DL145 2600MHz 2P Opteron
DL145 G2 2600MHz 2P Opteron Dual Core
DL360 G4 3400MHz 2P EM64T
DL360 G4p 3800MHz 2P EM64T
DL380 G4 3800MHz 2P EM64T
DL385 2800MHz 2P Opteron Dual Core
DL560 3000MHz 4P IA32
DL580 G3 3330MHz 4P EM64T
DL585 2800MHz 4P Opteron Dual Core
Integrity:
Superdome 64P base configuration
Integrity Family Portrait (rx1620 thru rx8620), IA64
rx1620 1600MHz 2P MSA1000 Cluster IA64
rx2620 1600MHz 2P MSA1000 Cluster IA64
rx4640 1600MHz 4P MSA1000 Cluster IA64
rx7620 1600MHz 8P 10U Systems and MSA1000 Cluster
rx8620 1600MHz 16P 17U Systems and MSA1000 Cluster
Storage:
MSA30-MI Dual SCSI Cluster, rx1620...rx4640
MSA500 G2, SCSI
MSA1510i IP SAN 48TB, SCSI and SATA
MSA1500 48TB, SCSI and SATA
Misc:
Dual Core AMD64 and EM64T systems with MSA1500

Hewlett-Packard: Linux High Performance Computing Whitepapers
Unified Cluster Portfolio:
A comprehensive, modular package of tested and pre-configured hardware, software and services for scalable computation, data management and visualization.

Your Fast Track to Cluster Deployment:
Designed to enable faster ordering and configuration, shorter delivery times and increased savings. Customers can select from a menu of popular cluster components, which are then factory assembled into pre-defined configurations with optional software installation.
Message Passing Interface library (HP-MPI):
A high-performance, production-quality implementation of the Message Passing Interface (MPI) standard for HP servers and workstations (a minimal, generic MPI example appears at the end of this list).

Cluster Platform Express:
Cluster Platform Express comes straight to you, factory assembled and available with pre-installed software for cluster management, and ready for deployment.
AMD Opteron-based ProLiant nodes | Intel Xeon-based ProLiant nodes
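
Since the HP-MPI entry above refers to the Message Passing Interface standard, here is a minimal, generic MPI program for readers who have not seen one. It uses only calls defined by the MPI standard and is not specific to HP-MPI; the compiler wrapper and launcher named in the comment (mpicc, mpirun) are typical examples and vary by MPI implementation.

/* Minimal, generic MPI program: uses only calls from the MPI standard
 * and should build with any conforming implementation.
 * Typical build/run (wrapper and launcher names vary by implementation):
 *   mpicc -o hello_mpi hello_mpi.c
 *   mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                   /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of ranks      */
    MPI_Get_processor_name(name, &name_len);  /* host this rank runs on     */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();                           /* shut down cleanly          */
    return 0;
}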



     Copyright © 2001-2006 LinuxHPC.org
Linux is a trademark of Linus Torvalds
All other trademarks are those of their owners.
    