# How to avoid numerical calculation precision errors in JavaScript

- 2020-03-30 02:12:19
- OfStack

If I ask you what 0.1 plus 0.2 is, you might give me a dirty look. 0.1 + 0.2 = 0.3, of course; even a kindergartner can answer that. But did you know that the same question in a programming language may not be as simple as you think?

Don't believe it? Let's start with a little JavaScript.

```javascript
var numA = 0.1;
var numB = 0.2;
alert((numA + numB) === 0.3);
```

The result is false. Yes, when I first saw this code I took it for granted that it would be true, but the execution surprised me. Had I done something wrong? No. Let's run the following code to see why the result is false.

```javascript
var numA = 0.1;
var numB = 0.2;
alert(numA + numB);
```

So 0.1 + 0.2 actually equals 0.30000000000000004. Isn't that weird? In fact, almost all programming languages have similar accuracy problems with floating-point arithmetic. Languages like C++, C#, and Java ship well-encapsulated methods for avoiding the precision problem, but JavaScript is a weakly typed language that was never designed with a strict floating-point data type, so the precision error is especially prominent. Here's why this accuracy error occurs and how to fix it.
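The error is not unique to 0.1 + 0.2; other innocuous-looking expressions misbehave the same way, as this quick check (runnable in any IEEE 754 compliant JS engine) shows:

```javascript
// The same rounding error appears across all four basic operations:
console.log(0.1 + 0.2);  // 0.30000000000000004
console.log(0.3 - 0.1);  // 0.19999999999999998
console.log(0.1 * 3);    // 0.30000000000000004
console.log(0.3 / 0.1);  // 2.9999999999999996
```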

First, let's think about the seemingly trivial problem of 0.1 + 0.2 from the perspective of a computer. We know that computers can read binary, not decimal, so let's first convert 0.1 and 0.2 to binary and see:

0.1 => 0.0001 1001 1001 1001... (infinitely repeating)

0.2 => 0.0011 0011 0011 0011... (infinitely repeating)
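We can even ask JavaScript to show us the bits it actually stores: `Number.prototype.toString` with radix 2 prints the binary form of the already-rounded double, so the repeating pattern visibly stops after the significand runs out of bits.

```javascript
// The stored values: the 1001/0011 pattern is cut off (and rounded)
// once the 52-bit significand of an IEEE 754 double is exhausted.
console.log((0.1).toString(2)); // 0.000110011001100110011...1101
console.log((0.2).toString(2)); // 0.00110011001100110011...01
```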

Both values are infinitely repeating in binary, but the double-precision (IEEE 754) format JavaScript uses stores only 52 significand bits, so both operands are rounded before the addition even happens, and those rounding errors surface in the result. I see. So how do we solve this problem? The result I want is 0.1 + 0.2 === 0.3!

One of the simplest solutions is to state the precision requirement explicitly and let the computer round the returned value, for example:

```javascript
var numA = 0.1;
var numB = 0.2;
alert(parseFloat((numA + numB).toFixed(2)) == 0.3);
```
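A note on why the `parseFloat` wrapper is there: `toFixed` returns a string, not a number, so comparing its result directly against 0.3 would be comparing a string to a number.

```javascript
var sum = 0.1 + 0.2;
console.log(sum.toFixed(2));             // "0.30"  (a string)
console.log(parseFloat(sum.toFixed(2))); // 0.3     (a number)
console.log(parseFloat(sum.toFixed(2)) === 0.3); // true
```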

But obviously this is not a once-and-for-all solution; it would be better to have a general method for dealing with floating-point precision. Let's try this:

```javascript
Math.formatFloat = function (f, digit) {
    var m = Math.pow(10, digit);
    return parseInt(f * m, 10) / m;
};
```

```javascript
var numA = 0.1;
var numB = 0.2;
alert(Math.formatFloat(numA + numB, 1) == 0.3);
```

What does this method do? To avoid the precision difference, we multiply the numbers we need to calculate by 10 to the n-th power, converting them to integers the computer can represent exactly, and then divide the result by 10 to the n-th power. This is how most programming languages deal with precision differences, so let's use it to deal with floating-point precision errors in JS.
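One caveat: `parseInt` truncates rather than rounds, so a sum that lands just below the true value (for example 0.7 + 0.1, which evaluates to 0.7999999999999999) would be cut down to 0.7. A minimal sketch of the same scale-to-integer idea using `Math.round` instead, with a hypothetical helper name `safeAdd` (not from the original article):

```javascript
// Scale both operands to integers, add, then scale back.
// Math.round (not parseInt) absorbs results just below the true value.
function safeAdd(a, b) {
    // Count decimal digits; assumes plain decimal notation,
    // not exponent notation like 1e-7.
    var decimals = function (n) {
        var frac = String(n).split(".")[1];
        return frac ? frac.length : 0;
    };
    var m = Math.pow(10, Math.max(decimals(a), decimals(b)));
    return (Math.round(a * m) + Math.round(b * m)) / m;
}

console.log(safeAdd(0.1, 0.2)); // 0.3
console.log(safeAdd(0.7, 0.1)); // 0.8
```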

Next time someone asks you what 0.1 + 0.2 is, be careful what you say!!