VPU TECHNOLOGY & GPGPU COMPUTING
Arka Ghosh (9007900477a@gmail.com), B.Tech Computer Science & Engineering
Delivered at Seacom Engineering College, CSE Dept, 7th April 2011
What Is a VPU? VPU stands for Visual Processing Unit; it is more generally known as a Graphics Processing Unit, or GPU. The GPU is a MASSIVELY PARALLEL & MASSIVELY MULTITHREADED microprocessor. Hybrid (multi-GPU) solutions: NVIDIA SLI, ATI Radeon CrossFireX.
Why GPU? The GPU is used for high-performance computing. Originally the GPU's job was to offload and accelerate graphics rendering for the CPU, but nowadays the scene has changed: the GPU can work like a CPU, and in some complex computational cases it beats the CPU.
GPU Solutions:- We can get a GPU in two forms.
1. Integrated GPU: integrated into the motherboard chipset. It has low memory bandwidth, and its latency is much higher than that of dedicated GPUs. E.g., the NVIDIA 730a chipset provides an 8200GT GPU with a 540 MHz core.
2. Discrete or Dedicated GPU: the most powerful form of GPU. It is generally installed in a PCIe or AGP slot on the motherboard and has its own memory module. E.g., the ATI Radeon HD 5970 X2 has a compute power of 4.64 TeraFLOPS with 3200 stream processors and a 1 GHz core.
© Arka Ghosh 2011
What is a PPU? PPU stands for Physics Processing Unit: a processor specialized for calculating rigid body dynamics, soft body dynamics, collision detection, fluid dynamics, hair and clothing simulation, finite element analysis, and fracturing of objects. The main leader in PPUs is AGEIA PhysX. It consists of a general-purpose RISC core controlling an array of custom SIMD floating-point VLIW processors working in local banked memories, with a switch fabric to manage transfers between them. There is no cache hierarchy as in a CPU or GPU.
GPUs vs PPUs:- The drive toward GPGPU is making GPUs more and more suitable for the job of a PPU.
ULTIMATE FATE OF THE GPU:- 1. Intel's LARRABEE 2. AMD's FUSION
© Arka Ghosh 2011
-:INTO THE ARCHITECTURE:-
Use of SPM:- SPM, or SCRATCHPAD MEMORY, is a high-speed internal memory used for temporary storage of calculations, data, and other work in progress. In reference to a microprocessor ("CPU"), scratchpad refers to a special high-speed memory circuit used to hold small items of data for rapid retrieval.
EXAMPLE:- NVIDIA's 8800 GPU running under CUDA provides 16 KiB of scratchpad per thread block when used for GPGPU tasks.
STREAM PROCESSING: The stream processing paradigm simplifies parallel software and hardware by restricting the parallel computation that can be performed. 1. Uniform stream.
Applications:- Compute intensity, data parallelism, data locality.
Conventional, sequential paradigm:
for (int i = 0; i < 100 * 4; i++)
    result[i] = source0[i] + source1[i];
Parallel SIMD paradigm, packed registers (SWAR):
for (int el = 0; el < 100; el++)   // for each vector
    vector_sum(result[el], source0[el], source1[el]);
© Arka Ghosh 2011
Graphics Pipeline  The graphics pipeline typically accepts some representation of a three-dimensional scene as input and produces a 2D raster image as output. OpenGL and Direct3D are two notable graphics pipeline models accepted as widespread industry standards.
Stages of the graphics pipeline:-> 1. Transformation 2. Per-vertex lighting 3. Viewing transformation 4. Primitives generation 5. Projection transformation 6. Clipping 7. Viewport transformation 8. Scan conversion or rasterization 9. Texturing, fragment shading 10. Display
Shaders  Shaders are used to program the GPU's programmable rendering pipeline, which has mostly superseded the fixed-function pipeline that allowed only common geometry-transformation and pixel-shading functions; with shaders, customized effects can be used.
<<<Types Of Shader>>> Vertex shaders, pixel shaders, geometry shaders.
USEFULNESS OF SHADERS:- 1. Simplified graphics processing unit pipeline 2. Parallel processing
Programming shaders: we can program shaders using OpenGL (GLSL), Cg, and Microsoft HLSL.
© Arka Ghosh 2011
GPU CLUSTER  What is a cluster? In a GPU cluster, each node of the cluster contains a GPU. Clusters may be 1. homogeneous or 2. heterogeneous.
Components
Hardware (other):- interconnect.
Software:- 1. Operating system 2. A GPU driver for each type of GPU present in each cluster node 3. A clustering API (such as the Message Passing Interface, MPI) 4. Algorithm mapping
GPU SWITCHING  means switching from one cluster node to another: Windows switching, Linux switching.
© Arka Ghosh 2011
What Is GPGPU? GPGPU stands for general-purpose graphics processing unit computing. Using the GPU as a CPU is GPGPU computing.
NVIDIA CUDA:- A GPGPU computing architecture. It provides a heterogeneous computing environment.
Why GPU Computing? To achieve high-performance computing. Minimize errors. Low power consumption: GO GREEN.
NVIDIA FLEXES TESLA MUSCLE
CUDA Kernels and Threads
Parallel portions of an application are executed on the device as kernels. One kernel is executed at a time; many threads execute each kernel.
Differences between CUDA and CPU threads: CUDA threads are extremely lightweight; CUDA uses thousands of threads to achieve efficiency, whereas multi-core CPUs can use only a few.
Definitions: Device = GPU; Host = CPU; Kernel = function that runs on the device.
Data Movement Example
#include <assert.h>
#include <stdlib.h>
#include <cuda_runtime.h>
int main(void)
{
    float *a_h, *b_h;   // host data
    float *a_d, *b_d;   // device data
    int N = 14, nBytes, i;
    nBytes = N * sizeof(float);
    a_h = (float *)malloc(nBytes);
    b_h = (float *)malloc(nBytes);
    cudaMalloc((void **)&a_d, nBytes);
    cudaMalloc((void **)&b_d, nBytes);
    for (i = 0; i < N; i++) a_h[i] = 100.f + i;
    cudaMemcpy(a_d, a_h, nBytes, cudaMemcpyHostToDevice);
    cudaMemcpy(b_d, a_d, nBytes, cudaMemcpyDeviceToDevice);
    cudaMemcpy(b_h, b_d, nBytes, cudaMemcpyDeviceToHost);
    for (i = 0; i < N; i++) assert(a_h[i] == b_h[i]);
    free(a_h); free(b_h); cudaFree(a_d); cudaFree(b_d);
    return 0;
}
© Arka Ghosh 2011
10-Series Architecture
240 thread processors execute kernel threads. 30 multiprocessors, each containing: 8 thread processors, one double-precision unit, and shared memory (enables thread cooperation).
© Arka Ghosh 2011
Execution Model (Software -> Hardware)
Thread -> Thread processor: threads are executed by thread processors.
Thread block -> Multiprocessor: thread blocks are executed on multiprocessors; thread blocks do not migrate; several concurrent thread blocks can reside on one multiprocessor, limited by multiprocessor resources (shared memory and register file).
Grid -> Device: a kernel is launched as a grid of thread blocks; only one kernel can execute on a device at one time.
© Arka Ghosh 2011
Tesla Architecture  © Arka Ghosh 2011
GigaThread Hardware Thread Scheduler: concurrent kernel execution + faster context switch. [Timeline figure: under serial kernel execution, Kernels 1-5 run one after another; under parallel kernel execution, they overlap in time.]
© Arka Ghosh 2011
EXAMPLE:-> MATLAB code for a simple FFT, host (CPU) mode vs. device mode (nVidia Quadro FX 5200 ×2):
CPU:
clear all;
t1 = cputime;
x = rand(2^20, 1);
f = fft(x);
t2 = cputime;
t3 = t2 - t1;   % here t3 = 0.4056
GPU:
clear all;
t1 = cputime;
x = rand(2^20, 1);
gx = gpuArray(x);
f = fft(gx);
t2 = cputime;
t3 = t2 - t1;   % here t3 = 0.006056
MATLAB code for a simple ANN:
clear all;
t1 = cputime;
x = rand(50); y = rand(50); z = rand(50);
a = 10; b = 20; c = 30; d = 40;
f = a*(x^2) + b*(x*y) + c*(y^3) + d*(z^4);
net = feedforwardnet(800);
net = trainlm(net, x, f);
t2 = cputime;
t3 = t2 - t1;
For the CPU, t3 = 250.2154; for the GPU, t3 = 122.25. So the GPU run is about twice as fast as the CPU run (250.2154 / 122.25 ≈ 2.04).
© Arka Ghosh 2011
CONCLUSION  C for the GPU; multi-GPU computing; massively multithreaded computing architecture; compatible with industry-standard architectures.
WHERE IS GPGPU USED?
MIT (for educational & scientific research purposes)
Stanford University (for educational & scientific research purposes)
NCSA (National Center for Supercomputing Applications)
NASA
Machine learning & AI
Machine vision (mainly robot vision)
Tablets
© Arka Ghosh 2011
Acknowledgement
Mriganka Chakraborty (Prof., Seacom Engineering College)
Saibal Chakraborty
Dr. Nicolas Pinto, Prof. at MIT, Advanced Supercomputing Dept
T. Halfhill, NVIDIA Corp Developer Guide
GOOGLE
THANK YOU
