The Future of Computing lies with "SuperComputers"

It seems everybody is looking for more from their personal computers. For some, a 3.0 GHz Pentium 4 already sounds futuristic, but for many others even AMD's latest release, the Athlon 64, a 64-bit processor for the desktop, is insufficient. For the majority of the wizards in the computer industry, "the future of computing lies with supercomputers".

In this article, as in the previous one, I will try to explain the underlying concepts in as simple a manner as possible.

Some High-End Computing Architectures
Researchers nowadays are evaluating the feasibility of constructing a computing system capable of a sustained rate of 10^15 floating-point operations per second (one petaflop). Moreover, they suggest a radical shift from the current approach. The HTMT (Hybrid Technology Multi-Threaded) architecture would blend modified semiconductor technology with leading-edge hybrid technologies, including superconducting technology, optical interconnects, high-speed very large scale integration (VLSI) semiconductors, and magnetic storage technology, configured to satisfy the architecture's requirements.

1. Quantum Computing
Quantum computing is based on different physics than digital computing. Instead of each element holding one of two definite states (off or on), as in a digital computer, the elements of a quantum computer can exist in a superposition of both states at the same time.

For example, an 8-bit digital computer can exist in only one of 256 states at a time, while an 8-qubit quantum register can exist in a superposition of all 256 states at once and, theoretically, work on 256 calculations in parallel (quantum parallelism). In a uniform superposition, each of the 256 states has an equal probability of being measured, so such a register can even serve as a random number generator. The register represents all of these values at once, but only a single value is obtained when it is measured. Where a classical digital computer would have to operate on each number from 0 to 255 in turn, a quantum computer requires only one pass through the processor, radically reducing calculation time. Of course, the larger the register, the larger the number of simultaneous states: even a simple 10-qubit register spans 1,024 states at once, and on the right problems it could scream past a supercomputer. Moreover, where the digital computer uses binary digits (bits), the quantum computer uses qubits; qubits are difficult to generate and are still the subject of preliminary research.
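
To make the 8-bit example concrete, the short Python sketch below simulates a quantum register classically as a vector of 2^n amplitudes (using NumPy). The variable names are illustrative only; a real quantum computer never stores this vector explicitly, which is precisely its advantage.

    import numpy as np

    n_qubits = 8                 # an "8-qubit" quantum register
    dim = 2 ** n_qubits          # 256 basis states

    # Uniform superposition: each of the 256 states has amplitude 1/sqrt(256),
    # so each is measured with equal probability 1/256.
    state = np.full(dim, 1.0 / np.sqrt(dim))

    # Measurement yields a single value; probabilities are |amplitude|^2.
    probabilities = np.abs(state) ** 2
    outcome = np.random.choice(dim, p=probabilities)
    print("Measured value:", outcome)    # one number between 0 and 255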

2. High-Performance Computing Clusters (HPCC)
In 1994, Thomas Sterling and Don Becker, working at the Center of Excellence in Space Data and Information Sciences, built a cluster computer called "Beowulf", which consisted of 16 DX4 processors connected by channel-bonded 10 Mbps Ethernet.

An HPC cluster uses a multiple-computer architecture: a parallel computing system consisting of one or more master nodes and one or more compute nodes interconnected by a private network. All the nodes in the cluster are PCs, workstations, or servers running an operating system such as Linux. The master node acts as a server for the Network File System (NFS) and as a gateway to the outside world. To keep the master node highly available to users, high-availability clustering may be employed.

The sole task of the compute nodes is to execute parallel jobs. In most cases, therefore, the compute nodes have no keyboard, monitor, or mouse connected. All access to and control of the compute nodes is provided through remote connections, such as the network and/or serial ports, via the master node. Since compute nodes do not need to access machines outside the cluster, nor do outside machines need to access the compute nodes directly, compute nodes commonly use private IP addresses.
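
As a rough illustration of the master/compute-node split, the sketch below uses message passing via the mpi4py package (an assumption; any MPI binding would serve). Rank 0 plays the role of the master, handing out work and collecting results, while the remaining ranks act as compute nodes.

    # Run with e.g.: mpirun -np 4 python hello_cluster.py
    # (assumes an MPI implementation and mpi4py are installed)
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()    # this node's id within the cluster job
    size = comm.Get_size()    # total number of participating nodes

    if rank == 0:
        # Master node: prepare one chunk of work per rank (the master keeps one too).
        chunks = [list(range(i * 10, (i + 1) * 10)) for i in range(size)]
    else:
        chunks = None

    # Scatter work from the master, compute locally, gather partial results back.
    work = comm.scatter(chunks, root=0)
    partial = sum(x * x for x in work)
    results = comm.gather(partial, root=0)

    if rank == 0:
        print("Total:", sum(results))
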
Software Components
There are two types of HPC software architectures: loosely coupled clusters and tightly coupled clusters. The chosen architecture determines which software components are used.

In general, HPC software components include: operating system, hardware drivers, middleware, compiler, parallel program development environment, debugger, performance analyzer, hardware level node monitoring/management tool, OS level node monitoring/management tool, cluster monitor/management tool and parallel applications.
Hardware Components
High-density rack-mounted servers are the most popular configuration for today's HPC cluster environments. Besides the compute nodes, each rack may be equipped with network switches, a UPS, a PDU (power distribution unit), and so on. For some types of applications where communication bandwidth between nodes is critical, low-latency, high-bandwidth interconnects such as Gigabit Ethernet and Myrinet are common choices for connecting the compute nodes.

3. Terascale Computing
Comprehensive hierarchical design of both data structures and algorithms is the essential challenge in achieving efficient use of computing resources at the terascale (trillions of operations per second). Interoperability alone is not sufficient, as tools with this property may nevertheless have computational complexities and memory footprints that render them useless at the terascale. A single failure in this regard anywhere in the tool chain will incapacitate the overall system, so hierarchical design must be applied throughout the entire software system in use. Examples include CAD systems that organize data in different layers of resolution, adaptive meshing, multi-level graph partitioning, multigrid solvers, and visualization systems that nimbly present data at many levels of resolution. It is believed, however, that although these tools are nearly optimal for their particular applications, their union is not optimized for the overall problem of terascale simulation.

In addition, a critical step in parallel computation is the assignment of application data to processors. The ideal distribution assigns data so that per-processor workloads are the same across all processors (eliminating idle time) and inter-processor communication is minimized. For both structured and unstructured grids, the distribution is often done through a serial pre-processing step, using a static partitioning tool such as Chaco or METIS.

For adaptive terascale computing, serial static partitioning is insufficient; dynamic partitioning techniques must be employed from the start of the simulation. The specific load-balancing algorithms used often differ for structured and unstructured grids, and either graph-based or geometric methods may be appropriate depending on the problem.
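
To make the geometric approach concrete, here is a minimal Python sketch of recursive coordinate bisection, one simple geometric partitioning scheme. It is not how Chaco or METIS work internally; it is only meant to illustrate assigning data points to processors so that per-processor workloads stay roughly equal.

    import numpy as np

    def rcb_partition(points, n_parts):
        """Recursive coordinate bisection: split the points along the longest
        axis at the median until the requested number of parts is reached.
        (Balanced for power-of-two part counts; a weighted split would be
        used otherwise.)"""
        if n_parts == 1:
            return [points]
        axis = np.argmax(points.max(axis=0) - points.min(axis=0))  # longest extent
        order = np.argsort(points[:, axis])
        half = len(points) // 2
        left, right = points[order[:half]], points[order[half:]]
        return (rcb_partition(left, n_parts // 2) +
                rcb_partition(right, n_parts - n_parts // 2))

    # Example: distribute 1000 mesh points across 4 processors.
    pts = np.random.rand(1000, 2)
    parts = rcb_partition(pts, 4)
    print([len(p) for p in parts])    # roughly equal per-processor workloads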

4. Grid Computing
The basic idea of grid computing is to create an infrastructure that harnesses, through the Internet, the power of remote high-end computers, databases, and other computing resources owned by various people across the globe. Just as electric power is drawn from the electric grid, computing power would be drawn from the computing grid. In the grid-computing world, people submit computing jobs to the grid, and the grid system allots the required computing resources and processes the job.
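
As a purely illustrative sketch of that idea, the toy Python class below accepts submitted jobs and allots each one to whichever remote resource currently has the most free capacity. The class, resource names, and scheduling rule are invented for this example and do not correspond to any real grid middleware.

    import heapq

    class ToyGrid:
        """Toy scheduler: each job goes to the resource with the most free CPUs."""
        def __init__(self, resources):
            # Heap of (negative free CPUs, resource name), so the largest pops first.
            self.pool = [(-cpus, name) for name, cpus in resources.items()]
            heapq.heapify(self.pool)

        def submit(self, job_name, cpus_needed):
            free, name = heapq.heappop(self.pool)
            free = -free
            if cpus_needed > free:
                heapq.heappush(self.pool, (-free, name))
                return f"{job_name}: queued (no resource has {cpus_needed} free CPUs)"
            heapq.heappush(self.pool, (-(free - cpus_needed), name))
            return f"{job_name}: running on {name}"

    grid = ToyGrid({"lab-cluster": 64, "campus-server": 16, "partner-site": 128})
    print(grid.submit("protein-folding", 100))   # allotted to partner-site
    print(grid.submit("render-job", 48))         # allotted to lab-cluster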

A successful implementation of the grid-computing infrastructure will certainly have far-reaching implications for business, scientific, and individual computing users. For example, an organization can pool all its computing resources spread across different locations and create a supercomputing environment. A researcher who needs a high-speed machine with massive storage for a rigorous compute-intensive project does not need to procure costly equipment; he or she can simply rent the grid provider's computing resources.

5. Autonomic Computing
From time immemorial, the human body and biological systems have fascinated researchers from all walks of life. Researchers are now trying to build computing systems that behave much like our bodies do. Such a system is expected to perform the following functions (a small illustrative sketch follows the list):

Self-configuring
The seamless integration of new hardware resources and the cooperative yielding of resources by the operating system are important elements of self-configuring systems. Hardware subsystems and resources can configure and reconfigure themselves automatically, both at boot time and during runtime. This may be initiated by the need to adjust the allocation of resources based on the current optimization criteria, or in response to hardware or firmware faults. Self-configuration also includes the ability to concurrently add or remove hardware resources in response to commands from administrators, service personnel, or hardware resource management tools.

Self-healing
With self-healing capabilities, platforms can detect hardware and firmware faults instantly and then contain the effects of those faults within defined boundaries. This allows the platform to recover with minimal or no impact on the execution of the operating system and user-level workloads.

Self-optimizing
Self-optimizing capabilities allow computing systems to autonomously measure the performance or usage of resources and then tune the configuration of hardware resources to deliver improved performance.

Self-protecting
Self-protecting capabilities allow computing systems to defend against internal and external threats to the integrity and privacy of applications and data.
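
As a deliberately simple sketch of the self-healing and self-optimizing ideas above, the following toy Python supervisor loop (all names, probabilities, and thresholds are invented for illustration) restarts a simulated failed worker and retunes a resource knob based on measured load. Real autonomic platforms implement these behaviours in hardware, firmware, and the operating system rather than in a script like this.

    import random, time

    def autonomic_loop(cycles=5):
        workers = {"worker-1": "running", "worker-2": "running"}
        threads_per_worker = 4                      # tunable resource knob

        for _ in range(cycles):
            # Self-healing: detect a (simulated) fault and restart the worker.
            for name in workers:
                if random.random() < 0.2:
                    workers[name] = "failed"
                if workers[name] == "failed":
                    print(f"{name} failed -> restarting")
                    workers[name] = "running"

            # Self-optimizing: measure (simulated) load and retune the knob.
            load = random.uniform(0.0, 1.0)
            if load > 0.8 and threads_per_worker < 16:
                threads_per_worker *= 2
            elif load < 0.3 and threads_per_worker > 1:
                threads_per_worker //= 2
            print(f"load={load:.2f} threads_per_worker={threads_per_worker}")
            time.sleep(0.1)

    autonomic_loop()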

All of the above-mentioned ideas and technologies may still seem like realities of a distant future, but work is being done to bring the power of "supercomputers" sooner to the enthusiasts who are craving that extra punch of computing power.
