
0.1+0.2

Is 0.1 + 0.2 == 0.3?

The reason 0.1 + 0.2 != 0.3 in Python (and in many other programming languages) is how floating-point numbers are represented in computers.

Floating-point numbers are stored in binary, and a decimal fraction only has a finite binary representation if its denominator is a power of two. Values such as 0.1 (1/10) and 0.2 (1/5), whose denominators contain a factor of 5, have infinitely repeating binary expansions, much like 1/3 does in decimal. Storing them in a fixed number of bits therefore loses some precision.
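You can see the rounding from a Python prompt by asking for more digits than repr() normally shows (the trailing digits below assume the standard 64-bit double that CPython uses for float):

print(format(0.1, '.20f'))   # 0.10000000000000000555
print(format(0.2, '.20f'))   # 0.20000000000000001110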

In particular, the decimal values 0.1 and 0.2 cannot be represented exactly in binary, so they are approximated by the closest binary numbers that can be represented. When these approximations are added together, the result is not exactly equal to the closest binary approximation of 0.3 either (0.3 itself is also stored inexactly).
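Printing both sides with extra digits makes the mismatch visible (again assuming standard 64-bit doubles):

print(format(0.1 + 0.2, '.20f'))   # 0.30000000000000004441
print(format(0.3, '.20f'))         # 0.29999999999999998890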

So when we write 0.1 + 0.2 == 0.3 in Python, the result is False: because of the limitations of floating-point arithmetic, the sum of 0.1 and 0.2 is not exactly equal to 0.3.
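A minimal check in the interpreter:

print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004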

To perform exact decimal arithmetic in Python, you can use the decimal module, which provides arbitrary-precision decimal arithmetic.
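For example, constructing Decimals from strings (so the inputs are not already rounded floats):

from decimal import Decimal

total = Decimal('0.1') + Decimal('0.2')
print(total)                    # 0.3
print(total == Decimal('0.3'))  # True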