Tom provides a brief explanation of what Teraflops are and what they really mean about the processing power of your computer.
Featuring Tom Merritt.
A special thanks to all our supporters; without you, none of this would be possible.
Thanks to Kevin MacLeod of Incompetech.com for the theme music.
Thanks to Garrett Weinzierl for the logo!
Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit.
Send us email at [email protected]
You just heard somebody use teraflop in a sentence
It was probably related to a GPU, or maybe a supercomputer
But what the heck is it? Is it fast? Is it powerful?
Are you confused?
Let’s help you Know a Little more about FLOPS
You’ll likely hear teraflops used a lot. Like a terabyte, a teraflops is just a large number of FLOPS: 10 to the 12th, to be exact.
And yes I said A Teraflops. BECAUSE
FLOPS stands for Floating Point Operations per Second. See the plural there in operations? That can also be a singular operation. That last S is for second. So one FLOPS is possible. So are two FLOPS. I know. Also, you may sometimes see it written F-L-O-P slash s.
FLOPS is a measure specifically for computations that use floating-point calculations. So it’s not as broad as instructions per second, but it’s more accurate for machines that rely on floating-point operations.
So let’s clear up THAT confusion.
None of this is gigahertz, the term usually used to market processors. That’s the clock speed. You can think of clock speed as a hardware measurement, since it measures the frequency at which a processor generates the pulses that keep its components in sync. Faster clock speeds in general lead to faster processing. But if you want to know how fast the processor is actually computing, you can look at instructions per second.
Instructions per second seems pretty simple. It’s the number of instructions a processor can execute. BUT different instructions can take different amounts of time to execute. It can also be affected by how memory is used. Because so many different things can affect it, the computer world has settled on benchmarks as a way to provide standard comparisons for computer performance.
So why FLOPS?
Well, benchmarks are better than instructions per second, but they’re not perfect and occasionally someone figures out how to game them.
But IF you’re doing floating-point math, FLOPS is a really good measure of that.
So what is floating-point math, and why don’t all computers use it?
Floating-point arithmetic is math. We’re not going to go deep into the fascinating world of floating point math but we’ll dabble.
The short version is floating-point lets you manage really huge numbers and really small numbers by approximating them. So 100 billion or 0.0000000000000000078.
Yes I did say FLOPS is more precise as a measurement, but floating-point math itself involves some approximations. It’s a tradeoff for being able to handle numbers that would otherwise just be too big.
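You can see that approximation in action with any Python 3 interpreter. This little sketch shows the tradeoff: decimal values like 0.1 get slightly fudged, but in exchange the same format handles enormous and tiny numbers without breaking a sweat:

```python
# Floating-point values are binary approximations of decimal numbers.
# 0.1 has no exact binary representation, so a tiny error creeps in.
a = 0.1 + 0.2
print(a)             # not exactly 0.3
print(a == 0.3)      # False

# The tradeoff: the same format happily holds huge and tiny values.
print(1e100 * 1e100)  # 1e+200, no overflow for a long while
print(7.8e-18)        # tiny values work too
```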
If you know scientific notation you get the idea. You have a main number called the significand and a base number raised to an exponent. The floating point refers to the fact that the decimal point in the main number, the significand, can move around or “float” because you have an exponent.
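Python’s standard library can show that significand-and-exponent split directly. `math.frexp` pulls a float apart into the two pieces, though in base 2 rather than base 10 (more on that in a moment):

```python
import math

# frexp splits x into significand * 2**exponent,
# with the significand in the range [0.5, 1).
m, e = math.frexp(4567783212.0)
print(m, e)       # the significand and the base-2 exponent
print(m * 2**e)   # reassembles the original number exactly
```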
Here’s an example that will help some of you wrap your head around it and also cause mathematicians to shake their head at me. This isn’t exactly floating-point math but I think it will help you get the idea.
Let’s say you want to work with the number 4,567,783,212 But man. That’s a lot of number for you with just pen and paper right?
How about you just call it 4,568 x 10 to the 6th? Or you could even call it 4.568 x 10 to the 9th. Or 45.68 x 10 to the 8th.
See the decimal point is floating around in there?
And you want the ability to float the decimal point because it’s a whole lot easier to deal with numbers that have the same exponents sometimes right?
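That pen-and-paper shuffle checks out in code, too. Python’s “e” notation is just shorthand for “times 10 to the power,” so all three placements of the decimal point name the same number:

```python
# The same value with the decimal point "floating" to different spots:
a = 4.568e9   # 4.568 x 10^9
b = 45.68e8   # 45.68 x 10^8
c = 4568e6    # 4568  x 10^6
print(a == b == c)  # True: one number, three placements
```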
Anyway, I’m done annoying the mathematicians.
That overly simplistic example is done properly, and on a whole other level, by computers. Hardware can be built to handle floating-point math well, and it usually works in base 2 rather than base 10, since base 2 maps naturally onto binary. But that gets us into a whole other ball of wax, like IBM historically using base 16.
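On most hardware today that means the IEEE 754 double format: one sign bit, an 11-bit exponent, and a 52-bit significand packed into 64 bits. A sketch using Python’s `struct` module can expose the raw layout (assuming a standard 64-bit float):

```python
import struct

def double_bits(x: float) -> str:
    # Pack the float into its 8 raw bytes, then render them as bits.
    (raw,) = struct.unpack(">Q", struct.pack(">d", x))
    bits = f"{raw:064b}"
    # sign | exponent (11 bits) | significand (52 bits)
    return f"{bits[0]} | {bits[1:12]} | {bits[12:]}"

print(double_bits(1.0))
print(double_bits(-2.0))
```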
Suffice to say that processors used for calculations that need to handle big and small numbers use floating-point math and you can compare them to each other by measuring how many floating-point operations per second they can execute.
So FLOPS is really good at measuring floating-point operations. But keep in mind FLOPS won’t tell you how “good” a machine is. It depends on what you want to use it for. Remember floating-point numbers are approximations. In our example earlier we just chopped off the end of the number because it was so big it was too hard to handle, and the shortened version was a reasonable approximation for certain things.
But not all things! Don’t try to compute the tangent of pi over 2 with floating-point operations, friends.
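That joke holds up in code. The true tangent of pi over 2 is undefined (it blows up toward infinity), but floating point can only store an approximation of pi over 2, so the math library cheerfully returns a huge finite number instead of complaining:

```python
import math

t = math.tan(math.pi / 2)
print(t)              # a huge finite number, not infinity, not an error
print(math.isinf(t))  # False: pi/2 itself was already an approximation
```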
Basically you’re dealing with rounding errors so floating-point can be used by lots of programs, but not for all things.
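Those rounding errors also accumulate: add enough approximated values and the tiny fudges pile up. Python’s `math.fsum` exists precisely to compensate for that, as this small sketch shows:

```python
import math

values = [0.1] * 10
print(sum(values))        # plain summation drifts slightly below 1.0
print(math.fsum(values))  # error-compensated summation gives 1.0 exactly
```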
Like we said, it’s historically good at things that deal with very large or very small numbers where the rounding errors will cancel out or otherwise not cause too much of a problem. Cosmology, Climatology, Fluid dynamics, lots of science. It’s also good at real-time processing so you’re hearing it more recently in graphics cards and AI-specific hardware because floating-point is useful in AI, video editing and gaming as well.
To sum up, some computationally intense stuff, like scientific calculations, real-time rendering and such, uses numbers with lots of decimal places. Floating-point math is used to handle those insanely big or small numbers. And FLOPS is a measure of the number of floating-point operations per second a piece of hardware can execute.
And a lot of floating-point operations per second can be expressed in teraflops.
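You can even make a very crude FLOPS estimate yourself by timing a pile of floating-point operations. This Python sketch wildly understates what the hardware can actually do, since interpreter overhead dominates, but it shows exactly what the unit is counting:

```python
import time

def rough_flops(n: int = 1_000_000) -> float:
    # Time n multiply-adds, then divide the operation count by the
    # elapsed seconds. Interpreter overhead dominates, so this badly
    # understates what the hardware itself could sustain.
    x = 1.0
    start = time.perf_counter()
    for _ in range(n):
        x = x * 1.0000001 + 0.0000001  # two floating-point operations
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed

print(f"{rough_flops():,.0f} FLOPS (give or take a lot)")
```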
In other words I hope now you know a little more about TeraFLOPS