SpyderByte.com Technical Portals
      
 News & Information Related to Linux High Performance Computing, Linux Clustering and Cloud Computing
Home About News Archives Contribute News, Articles, Press Releases Mobile Edition Contact Advertising/Sponsorship Search Privacy
Documentation

FAQs, How-tos, Guides

Tutorial: Building a Beowulf System
by: Jan Lindheim, Caltech, (May 2005)
With the power and low prices of today's off-the-shelf PCs and the availability of 100 Mb/s Ethernet interconnects, it makes sense to combine them to build a high-performance parallel computing environment. This is the concept behind the Beowulf parallel computing system we will describe. With free versions of Unix and public-domain software packages, no commercially available parallel computing system can compete with the price of a Beowulf system. The drawback, of course, is that there is no support center to call when a problem arises; however, a wealth of good information is available through FTP sites, web sites, and newsgroups.

Engineering a Beowulf-style Compute Cluster
by: Robert G. Brown, Duke University
This is a local copy of my online book on beowulf style cluster engineering. It is (and will likely always be) a work in progress, so check the revision number and dates from time to time to see if new material has been added.

Redhat's Linux Cluster Project Pages
GFS | CLVM | CCS | CMAN | DLM | GULM | Fence | GNBD | CSNAP | LVM2 | Device-Mapper

Diskless, NFS, OpenMosix HOWTO
A simple, step-by-step, distribution-independent HOWTO for setting up an OpenMosix cluster using Wake-on-LAN, NFS, TFTP, and diskless clients. It includes a few unique extras: a reboot script that reboots a node at the press of the space bar, so a PC can serve as a regular workstation by day and a cluster node by night (provided the BIOS supports network boot on Wake-on-LAN and falls back to hard-drive boot otherwise, as newer Dells do), and an MP3-ripping script that counts the nodes currently up and starts the same number of rips.
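The Wake-on-LAN trick this HOWTO relies on boils down to sending a "magic packet": six 0xFF bytes followed by the target NIC's MAC address repeated sixteen times, broadcast as a UDP datagram. A minimal sketch in Python (the MAC address below is a placeholder, not from the HOWTO):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF followed by
    the 6-byte MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast (ports 7 and 9 are customary)."""
    pkt = make_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(pkt, (broadcast, port))

# wake("00:11:22:33:44:55")  # placeholder MAC; use a real node's NIC address
```

The NIC listens for this pattern even while the machine is powered down, which is what lets a script bring compute nodes up on demand at night.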

Cluster Quick Start (DRAFT)
There are many ways to configure a cluster; this document describes one of them. The guide is not complete: it is really a series of steps and issues that need to be addressed before a cluster is usable. Indeed, it is very difficult to write a single "step by step" set of instructions, because every cluster seems to be different.
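One step nearly every such guide shares is maintaining a machine file, a plain list of node hostnames that ssh loops and MPI launchers can consume. A small sketch of a parser for a hypothetical machine file (the one-host-per-line format with `#` comments is an assumption, not something this draft specifies):

```python
def parse_machinefile(text: str) -> list[str]:
    """Parse a simple machine file: one hostname per line,
    blank lines and '#' comments ignored."""
    hosts = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            hosts.append(line)
    return hosts

example = """
# cluster head node
node0
node1  # compute
node2
"""
print(parse_machinefile(example))  # ['node0', 'node1', 'node2']
```

Keeping one canonical host list avoids the classic cluster failure mode where each tool has its own slightly stale copy of the node roster.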

Beowulf Installation and Administration HOWTO (0.0.6)
Jacek Radajewski started writing the Beowulf HOWTO in November 1997 and was soon joined by Douglas Eadline. Over a few months the Beowulf HOWTO grew into a large document, and in August 1998 it was split into three documents: Beowulf HOWTO, Beowulf Architecture Design HOWTO, and the Beowulf Installation and Administration HOWTO. Version 1.0.0 of the Beowulf Installation and Administration HOWTO will be released to the Linux Documentation Project soon.

Beowulf HOWTO (1.1.1)
Jacek Radajewski started work on this document in November 1997 and was soon joined by Douglas Eadline. Over a few months the Beowulf HOWTO grew into a large document, and in August 1998 it was split into three documents: Beowulf HOWTO, Beowulf Architecture Design HOWTO, and the Beowulf Installation and Administration HOWTO. Version 1.0.0 of the Beowulf HOWTO was released to the Linux Documentation Project on 11 November 1998. We hope that this is only the beginning of what will become a complete Beowulf Documentation Project.

COCOA Beowulf Cluster FAQ (1.1)
COCOA stands for COst effective COmputing Array. It is a Beowulf-class supercomputer. Beowulf is a multi-computer architecture for parallel computation: a system that usually consists of one server node and one or more client nodes connected via Ethernet or some other fast network. It is built from commodity hardware components, such as ordinary office desktop PCs with standard Ethernet adapters and switches; it contains no custom hardware and is trivially reproducible.
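The server-node/client-node split described here can be illustrated with plain TCP sockets: a head node hands a work item to a worker and collects the result. This is a sketch of the topology only, not COCOA's actual software stack (real Beowulf clusters use MPI or PVM for this):

```python
import socket
import threading

def head_node(srv: socket.socket, results: list) -> None:
    """Head node: accept one worker, send it a task, collect the result."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(b"7")                  # the "task": square this number
        results.append(int(conn.recv(64)))  # the worker's answer

def worker_node(port: int) -> None:
    """Worker node: connect to the head node, compute, reply."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        task = int(conn.recv(64))
        conn.sendall(str(task * task).encode())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # ephemeral loopback port stands in for the cluster LAN
srv.listen(1)
port = srv.getsockname()[1]

results: list = []
t = threading.Thread(target=head_node, args=(srv, results))
t.start()
worker_node(port)
t.join()
srv.close()
print(results)  # [49]
```

In a real cluster the worker runs on a separate client node and the head node dispatches many tasks over the Ethernet fabric; the one-server, many-clients shape is the same.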

FNN: Flat Neighborhood Networks
Welcome to the home of FNN documents and software! Since the first press release on our cluster KLAT2, which has a Flat Neighborhood Network, many of you have been asking us for more information, access to the GA (genetic search algorithm) we developed to design FNNs, etc. This is where everything will be posted....
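The defining property of a flat neighborhood network is that every pair of PCs shares at least one switch, so any two nodes communicate in a single switch hop without inter-switch links. A small sketch of checking that property for a candidate NIC-to-switch assignment (the four-node layout below is a made-up example, not KLAT2's actual wiring):

```python
from itertools import combinations

def is_flat_neighborhood(assignment: dict[str, set[str]]) -> bool:
    """True if every pair of nodes shares at least one switch,
    i.e. any two nodes are one switch hop apart."""
    return all(assignment[a] & assignment[b]
               for a, b in combinations(assignment, 2))

# Hypothetical 4-node layout, two NICs per node, three switches.
layout = {
    "n0": {"s0", "s1"},
    "n1": {"s0", "s2"},
    "n2": {"s1", "s2"},
    "n3": {"s0", "s1"},
}
print(is_flat_neighborhood(layout))  # True: every pair shares a switch
```

Finding such assignments under real port-count constraints is the hard combinatorial part, which is why the KLAT2 team used a genetic search algorithm to design theirs.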

Linux Cluster HOWTO - How to set up high-performance Linux computing clusters.

YABIH - Yet Another Beowulf Installation Howto
This is a very short and somewhat old Howto. It describes a few steps in setting up a cluster.

Clusters Docs
The aim of this page is to collect documentation about how to set up, maintain, program, and use a Linux cluster for high-performance computing. There are in fact many web pages on this topic: tutorials, HOWTOs, forums, and so on. They can help answer frequently asked questions and solve common problems when working with Linux clusters; until now, however, no attempt had been made to collect all this documentation into a single source.

A Survey of Cluster Technologies (High Availability)
This paper surveys the cluster technologies for the operating systems available from many vendors, including IBM AIX, HP HP-UX, Linux, HP NonStop Kernel, HP OpenVMS, PolyServe Matrix Server, Sun Microsystems Solaris, HP Tru64 UNIX and Microsoft Windows 2000/2003.

IBM: Linux HPC Cluster Installation
This redbook will guide system architects and systems engineers toward a basic understanding of cluster technology, terminology, and the installation of a Linux High-Performance Computing (HPC) cluster (a Beowulf type of cluster) into an IBM eServer xSeries cluster.

Other Documentation...

AMD x86-64 Architecture Tech Docs
Survey of Freely Available Linear Algebra Software
Scientific Supercomputing: Architecture and Use of Shared and Distributed Memory Parallel Computers, 1998, W. Schoenauer
Special Issue on Parallel Computing on Networks of Computers, R. Buyya and M. Paprzycki (Eds.), 1999
Parallel Tools from Genias
Cluster Management Software
Cluster Computing Review (a paper on Cluster Management Software)
Compiling for Cluster Computing
Linux Parallel Processing using Clusters
MPI Implementation on Active Messages
Cornell Active Messages
Berkeley Active Messages
MPI Collective Communication (Cornell University)
MPI-Fast Messages
Cthreads (a Parallel Programming Library)
NAS Parallel Benchmarks
Message Passing Interface
MPICH - a portable MPI Implementation
Myrinet Software Documentation
MPI Persistent Communication
PVM3 Documentation
Load Sharing Facility LSF
SCI Interconnection Network
Virtual Interface Architecture







     Copyright © 2001-2012 LinuxHPC.org
Linux is a trademark of Linus Torvalds
All other trademarks are those of their owners.
    