CUDA
From Wikipedia, the free encyclopedia

Developer(s): NVIDIA Corporation
Stable release: 3.2 / September 17, 2010
Operating system: Windows 7, Windows Vista, Windows XP, Windows Server 2008, Windows Server 2003, Linux, Mac OS X
Type: GPGPU
License: Proprietary, Freeware
Website: Nvidia's CUDA zone (http://www.nvidia.com/object/cuda_home.html)

CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA. CUDA is the computing engine in NVIDIA graphics processing units (GPUs) that is accessible to software developers through variants of industry-standard programming languages. Programmers use 'C for CUDA' (C with NVIDIA extensions and certain restrictions), compiled through a PathScale Open64 C compiler,[1] to code algorithms for execution on the GPU. The CUDA architecture shares a range of computational interfaces with two competitors: the Khronos Group's Open Computing Language[2] and Microsoft's DirectCompute.[3] Third-party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, and MATLAB.

CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs. Using CUDA, the latest NVIDIA GPUs become accessible for computation like CPUs. Unlike CPUs, however, GPUs have a parallel throughput architecture that emphasizes executing many concurrent threads slowly, rather than executing a single thread very fast. This approach to solving general-purpose problems on GPUs is known as GPGPU.

In the computer game industry, GPUs are used not only for graphics rendering but also for game physics calculations (physical effects such as debris, smoke, fire, and fluids); examples include PhysX and Bullet. CUDA has also been used to accelerate non-graphical applications in computational biology, cryptography and other fields by an order of magnitude or more.[4][5][6][7] An example of this is the BOINC distributed computing client.[8]

CUDA provides both a low-level API and a higher-level API. The initial CUDA SDK was made public on 15 February 2007, for Microsoft Windows and Linux. Mac OS X support was added later in version 2.0,[9] which supersedes the beta released February 14, 2008.[10] CUDA works with all NVIDIA GPUs from the G8X series onwards, including the GeForce, Quadro and Tesla lines. NVIDIA states that programs developed for the GeForce 8 series will also work without modification on all future NVIDIA video cards, due to binary compatibility.
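To illustrate the 'C for CUDA' extensions and the many-threads execution model described above, here is a minimal, hypothetical kernel sketch; the vector-addition task and the name vecAdd are illustrative assumptions, not taken from the article. The __global__ qualifier marks a function that runs on the GPU, and each of the many concurrent threads computes one output element from its built-in block and thread indices.

// A minimal 'C for CUDA' kernel sketch (hypothetical example).
// __global__ marks a function compiled for and executed on the GPU.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    // Each concurrent thread derives its own global index and
    // handles exactly one element of the output vector.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

A kernel like this is launched from host code with the triple-angle-bracket syntax, e.g. vecAdd<<<blocks, threadsPerBlock>>>(a, b, c, n), which tells the GPU how many thread blocks and threads per block to run.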
Contents
1 Advantages
2 Limitations
3 Supported GPUs
4 Example
5 Language bindings
6 Future CUDA architectures
7 Current and future usages of CUDA architecture
8 See also
9 References
10 External links

Example of CUDA processing flow:
1. Copy data from main memory to GPU memory
2. The CPU instructs the GPU to start processing
3. The GPU executes the code in parallel on each core
4. Copy the result from GPU memory back to main memory
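The four steps above can be sketched in host code roughly as follows. This is a hypothetical, minimal example using the CUDA runtime API; the array size, the vecAdd kernel, and the launch configuration are assumptions for illustration, not taken from the article.

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Same hypothetical vecAdd kernel as sketched earlier, repeated here so
// this example is self-contained.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;                       // one million elements (assumed)
    const size_t bytes = n * sizeof(float);

    // Host (main memory) buffers
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device (GPU memory) buffers
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);

    // Step 1: copy data from main memory to GPU memory
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Steps 2 and 3: the CPU launches the kernel, which the GPU then
    // executes in parallel across its cores
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    // Step 4: copy the result from GPU memory back to main memory
    // (this cudaMemcpy blocks until the kernel has finished)
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", h_c[0]);               // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

A source file like this would typically be compiled with NVIDIA's nvcc compiler driver and run on any CUDA-capable GPU.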