November 16, 2006

Nvidia Introduces CUDA Architecture for Computing on GPUs

Santa Clara, Calif. - Nvidia Corporation has unveiled Nvidia CUDA technology, a new architecture for computing on Nvidia graphics processing units (GPUs), and the industry's first C-compiler development environment for the GPU.
GPU computing with CUDA is a new approach to computing in which hundreds of on-chip processor cores simultaneously communicate and cooperate to solve complex computing problems, up to 100 times faster than traditional approaches.
This architecture is complemented by another first: the Nvidia C compiler for the GPU. This development environment gives developers the tools they need to solve new problems in computation-intensive applications such as product design, data analysis, technical computing, and game physics.
Available today on the GeForce 8800 graphics card and future Nvidia Quadro Professional Graphics solutions, CUDA enables GPU processor cores to communicate, synchronize, and share data.
CUDA-enabled GPUs offer dedicated features for computing, including the Parallel Data Cache, which allows the 128 processor cores in newest-generation Nvidia GPUs, each clocked at 1.35GHz, to cooperate with one another while performing intricate computations. Developers access these new features through a separate computing driver that communicates with DirectX and OpenGL, and through the new Nvidia C compiler for the GPU, which eliminates the need for streaming languages in GPU computing.
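The kind of cooperation the Parallel Data Cache enables can be illustrated with a minimal sketch in CUDA's C dialect (the kernel name and block size here are illustrative, not taken from the release): the threads of a block stage data in on-chip shared memory, synchronize, and combine partial results.

```cuda
// Hypothetical sketch: threads in a block cooperate through on-chip
// shared memory (the Parallel Data Cache), synchronizing with
// __syncthreads() between steps, to sum 128 elements per block.
__global__ void blockSum(const float *in, float *out)
{
    __shared__ float cache[128];          // on-chip Parallel Data Cache
    int tid = threadIdx.x;

    cache[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();                      // wait until every thread has loaded

    // Tree reduction: threads cooperate, halving the active count each step.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride)
            cache[tid] += cache[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = cache[0];       // one partial sum per block
}
```

Because the cache is on-chip, each thread's intermediate results are visible to its neighbors without a round trip to graphics memory, which is what makes this style of cooperation practical.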
A CUDA-enabled GPU operates either as a flexible thread processor, in which thousands of computing programs called threads work together to solve complex problems, or as a streaming processor in specific applications, such as imaging, where threads do not communicate. CUDA-enabled applications use the GPU for fine-grained, data-intensive processing and multi-core CPUs for complicated coarse-grained tasks such as control and data management.
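This division of labor can be sketched with a hypothetical host program (the function names below are illustrative): the CPU handles the coarse-grained work of allocating memory, moving data, and launching the kernel, while the GPU runs one lightweight thread per data element.

```cuda
// Hypothetical sketch: CPU does control and data management;
// GPU does fine-grained, data-parallel computation (y = a*x + y).
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;                       // CPU manages device memory...
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // ...and the GPU performs the per-element arithmetic in parallel.
    saxpy<<<(n + 127) / 128, 128>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);
    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```

The kernel contains no loop over the data: each thread touches a single element, which is the fine-grained style the architecture is built for.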
The CUDA Software Development Kit (SDK) is currently available to developers and researchers through the Nvidia registered developer program.