
Are GPUs faster processors than CPUs?



They are more powerful at some very specific functions and operations, and they lack the capability for others.


For example:

Most GPUs can execute SIMD instructions for many of the simplest integer and single-precision floating-point operations across many kernels in parallel (think of it as processing multiple data elements, with multiple arguments and multiple instructions at once, the way multiple CPU cores would). The clock of their own (integrated) memory is also synchronized with the GPU clock, so that memory is not a bottleneck for instruction execution and behaves, as a whole, much like an L2 or L3 cache.
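To make that concrete, here is a minimal sketch in CUDA (assuming an NVIDIA GPU and the CUDA toolkit; the kernel name and sizes are purely illustrative): every thread applies the same single-precision operation to its own element of the array, which is exactly the "one instruction, many data elements" pattern described above.

#include <cuda_runtime.h>
#include <cstdio>

// Each thread scales one element: the same instruction applied to many data elements.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] *= factor;
    }
}

int main() {
    const int n = 1 << 20;                      // about one million single-precision values
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // Launch enough threads so that every element gets its own lane.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    printf("done\n");
    return 0;
}

Each thread does only a trivial multiplication; the speed comes entirely from how many of them execute that same instruction at once.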


As a result, in an ideal situation (and you should know that the world is never ideal) they can achieve anywhere from 1 to 4000 GFLOPS, depending on how many cores the GPU has; for example, an AMD HD 7970 GHz with 256 cores could do about 4000 GFLOPS, while a standard general-purpose CPU like the Intel i7-5930K with 12 cores does about 63 GFLOPS. If you do the math (4000 ÷ 256 ≈ 15.6 GFLOPS per GPU core against 63 ÷ 12 ≈ 5.25 GFLOPS per CPU core), you will see that even per core the GPU has about 3 times higher performance than a standard CPU for those math instructions. So GPUs are faster, but only within a very limited set of conditions: generally single-precision math operations with a huge number of arguments per instruction. Generic CPUs are faster in a broader sense and over a broader set of conditions; they can be (and are) much faster than a GPU at math even where you would expect the GPU to win, for example when you do math with fewer arguments or at a higher level of precision.



GPUs are specialized to be extremely fast when they do massive calculations with vertices and matrices of vertices (a vertex here being a three-component vector), using practically only 3–4 kinds of math operations, in single precision, with an extreme number of arguments per instruction calculated in parallel. This is understandable: those are the typical calculations you have to perform when you draw 3D figures, lighting and textures. They cannot execute complex algorithms, with multiple complicated states and random behaviour, as easily or as fast as a CPU can. However, if your task or algorithm can be expressed effectively as single-precision math over such structures, with simple pipelined algorithms, and does not require a lot of memory (the memory available per core of a GPU is in most cases very limited, and the round-trip delay of feeding the GPU from the CPU and its memory is sometimes longer than calculating the result on the CPU itself), then you can accelerate it a great deal with a GPU. If you cannot express it with such math, the performance you get from a GPU may be very, very disappointing.
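As a rough sketch of that round-trip cost (again assuming CUDA; the workload size is hypothetical and deliberately tiny), note how the data has to travel CPU → GPU → CPU: for this little work, the two copies alone can cost more than just looping over the array on the CPU.

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Trivial kernel: add one to each element.
__global__ void add_one(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
}

int main() {
    const int n = 1024;                         // deliberately small workload
    std::vector<float> host(n, 0.0f);

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));

    // Host -> device copy, kernel launch, device -> host copy:
    // each step adds latency that a plain CPU loop over 1024 floats would never pay.
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    add_one<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(dev);
    printf("first element: %f\n", host[0]);
    return 0;
}

The rule of thumb the paragraph above points to: the work you hand to the GPU has to be large and regular enough to amortise those transfers, otherwise the CPU wins by default.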


The good thing is that a lot of generic math (and I mean math where you need to perform massive amounts of basic calculations) can be expressed in a way that GPUs can process faster, and as a result we are improving the performance of generic applications more and more every day. To allow that, we have open compute standards such as OpenCL (alongside the compute features of OpenGL), proprietary APIs like CUDA, and even exposure of those capabilities to JavaScript in browsers (WebGL 1.0 for 3D drawing, and WebGL 2.0 which can also be used for calculations).


Back to the original question: the answer is no. GPUs are not faster than generic CPUs. They are faster only under very specific conditions, with a very limited set of operations. It is up to you as a programmer to decide whether that suits your algorithm well enough to accelerate its operations.




