NVIDIA: The Era of Personal Supercomputing

Until now, a supercomputer was anything but personal. Because many complex, data-intensive applications required the power of a clustered server environment, users would wait hours or days for the back room to deliver their results. With the introduction of a new range of high-performance computing solutions for workstations, users suddenly have massive computing power quite literally at their fingertips. The workstation has become a "personal supercomputer."

Parallel Processing Power

Many applications that process large data sets, such as arrays or volumes, can use a data-parallel programming model on a GPU to speed up their computations. These applications include traditional graphics and visualization uses, such as medical imaging, as well as purely computational problems with no relation to graphics, such as option risk calculations in finance. In some applications, the GPU can produce results in about five minutes instead of the five hours required on the CPU.

The NVIDIA compute engine is available across all our product lines: to physics engines in entertainment applications and games on the NVIDIA GeForce® line; to visualization, digital content creation (DCC), and computer-aided design (CAD) applications on NVIDIA Quadro; and to purely computational applications on the new NVIDIA Tesla line. Tesla offers traditional add-in cards for PCs, desk-side units for adding computing power to workstations, and 1U server units for clusters, which can be used to tackle larger problems.

New Generation of GPU Computing Software

The NVIDIA® CUDA™ GPU programming architecture is based on the C programming language, making it accessible to the widest possible base of developers. CUDA provides simple directives to specify which functions and memory objects should be assigned to the GPU, along with libraries of mathematical functions that run on the GPU. Once a CUDA kernel is written and compiled with the NVIDIA C compiler (NVCC), it automatically adapts to run on different GPUs with more or fewer processors. A CUDA program can also be compiled with a standard C compiler and run entirely on the CPU, so developers can use familiar debugging and optimization tools. (A short illustrative example appears at the end of this article.)

Impact of Personal Supercomputing

GPU computing is already making itself felt in a variety of fields, from molecular biology to oil and gas exploration. Like the PC revolution of decades ago, the personal supercomputer is changing workflows. High-performance computing is now being done on lab benches and in engineering workspaces, not just in server rooms, and turnaround times for these applications are shrinking. Simulations are being used more nimbly and iteratively: researchers can run variants of a simulation with different parameters, exploring possible "what if" scenarios. GPUs are also being added to back-room clusters, providing even more computing power and the ability to solve even larger problems. The Tesla 1U servers are especially attractive in this regard because they can be added to existing clusters without fundamental changes to the infrastructure.

But what is potentially most revolutionary is the "personal" in personal supercomputing. The low cost, relative ease of implementation, and small footprint are democratizing high-performance computing. Once in the hands of large numbers of people, computing power can be put to uses previously unimagined.
As with the PC, the true impact of GPU personal supercomputing will only be seen in a few years, in applications that we cannot yet envision.
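
To ground the CUDA programming model described above, here is a minimal sketch of a data-parallel CUDA program: a kernel marked with the __global__ directive runs on the GPU, one thread per array element, while the host code allocates GPU memory objects and launches the kernel. The file name, kernel name, array size, and launch configuration are illustrative assumptions, not NVIDIA sample code.

// vector_add.cu -- minimal illustrative CUDA sketch (hypothetical example)
// Compile for the GPU with: nvcc vector_add.cu -o vector_add
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// __global__ marks a function that runs on the GPU and is launched from the CPU.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;                           // 1M elements (illustrative size)
    size_t bytes = n * sizeof(float);

    // Host (CPU) arrays
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device (GPU) memory objects
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough thread blocks to cover all n elements; the runtime schedules
    // the blocks across however many processors the particular GPU has.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);
    cudaDeviceSynchronize();

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);                   // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

The same element-wise logic could instead be written as an ordinary C loop and compiled with a standard C compiler to run entirely on the CPU, which is the debugging path the CUDA tool flow described above allows.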