2

It could be a silly question, yet I'm not able to understand it. On modern computers we have integers millions of digits long. Even on my ordinary laptop with 2 GB of RAM, I can calculate with numbers thousands of digits long, but the precision of a floating-point calculation is often limited to about 15 digits or worse; for example, 0.1 + 0.2 is not equal to 0.3 (in Python, it's 0.30000000000000004).

Why do we have such a loss of precision in floating-point calculations? Is it due to a limitation of the binary system in expressing floating-point numbers?
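
For concreteness, here is roughly what I mean, in Python (a small illustrative sketch; the particular numbers are just examples):

```python
# Integer arithmetic is exact at any size; float arithmetic is limited
# to about 15-17 significant digits.
print(len(str(2**10_000)))   # a 3011-digit integer, computed exactly
print(0.1 + 0.2)             # 0.30000000000000004
print(0.1 + 0.2 == 0.3)      # False
```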

J Arun Mani

2 Answers

3

Many people can afford to buy a computer with a powerful processor and, say, 128 GB of RAM. It might cost about as much as a small car.

That's enough to store one number with 300 billion digits, or 300,000 numbers of a million digits each. But it is also enough to store 16 billion double-precision floating-point numbers. There are problems where 300,000 numbers are nothing.
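
A back-of-the-envelope sketch of that arithmetic (the 128 GB figure is the assumption; the rest follows from it):

```python
# Rough storage capacity of 128 GB of RAM (illustrative only).
import math

ram_bytes = 128 * 10**9
ram_bits = ram_bytes * 8

decimal_digits = ram_bits * math.log10(2)       # ~3e11 digits: one ~300-billion-digit number
million_digit_numbers = decimal_digits / 10**6  # ~300,000 numbers of a million digits each
doubles = ram_bytes // 8                        # 16 billion 8-byte doubles

print(f"{decimal_digits:.0f} decimal digits of storage")
print(f"{million_digit_numbers:.0f} million-digit numbers")
print(f"{doubles:,} double-precision numbers")
```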

The number of operations is another matter. That computer can probably do 50 billion floating-point operations per second. It cannot do anywhere near that many operations on million-digit numbers. Going down from 50 billion operations per second to 10,000 operations per second on million-digit numbers hurts.
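
A rough way to see that gap for yourself (a sketch only; the absolute timings depend entirely on the machine, and the specific operands are just examples):

```python
# Compare one double multiply with one multiply of two ~million-digit
# integers. The absolute numbers vary by machine; the ratio is the point.
import timeit

a, b = 1.1, 2.2                  # ordinary double-precision floats
big_a = 10**1_000_000 + 7        # integers with about a million digits
big_b = 10**1_000_000 + 11

float_time = timeit.timeit(lambda: a * b, number=1_000_000)
big_time = timeit.timeit(lambda: big_a * big_b, number=10)

print(f"double multiply:        {float_time / 1_000_000:.1e} s each")
print(f"million-digit multiply: {big_time / 10:.1e} s each")
```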

Higher precision is almost never needed, and if it is not needed, it is an absolute waste of speed and memory. And remember that even with a million digits of precision, 0.1 + 0.2 still wouldn't equal 0.3; it would equal something like 0.3 followed by a million zeroes and then a 4.

gnasher729
3

There are a couple of different things happening in that question.

Is it due to a limitation of the binary system in expressing floating-point numbers?

Loss of precision isn't due to the use of binary; it is due to keeping the storage size constant. It also happens if you work with, say, 8-digit decimal numbers, even if you do it with pen and paper. Eventually you may need to round. But sometimes you don't, and indeed, even with floating-point numbers on a computer, some computations are exact.
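
A quick way to see this in Python is the standard decimal module as a stand-in for 8-digit pen-and-paper arithmetic (the module is standard; the particular examples are just a sketch):

```python
# Fixed 8-digit decimal arithmetic: some results must be rounded, others are exact.
from decimal import Decimal, getcontext

getcontext().prec = 8                      # keep 8 significant decimal digits

print(Decimal(1) / Decimal(3))             # 0.33333333  -- had to round
print(Decimal(1) / Decimal(3) * 3)         # 0.99999999  -- the rounding shows up
print(Decimal('0.1') + Decimal('0.2'))     # 0.3         -- exact, no rounding needed
```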

we have integers millions of digits long

We do, and we can use two of them to hold exact rational numbers. Then loss of precision does not occur as long as your calculation stays within ℚ (which is already an annoying restriction), but as the numerator and denominator get larger, computations slow down, and eventually space may run out.
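
In Python that is the standard fractions module; a small sketch of both the exactness and the growth (the loop and its constants are just an illustration):

```python
# Exact rational arithmetic: no rounding, but the representation can grow.
from fractions import Fraction

print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True -- exact

# Repeated squaring makes numerator and denominator roughly double in size
# each step, even after reduction to lowest terms.
y = Fraction(1, 3)
for _ in range(10):
    y = y * y + Fraction(1, 7)
print(len(str(y.numerator)), "digits in the numerator")
print(len(str(y.denominator)), "digits in the denominator")
```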

Floating-point numbers stay the same size as they are manipulated, computations stay fast, and there is no risk of running out of space, but the price for that is intermediate rounding.

0.1 + 0.2 is not equal to 0.3

Actually it's worse than that: 0.1 is already not 0.1. In binary, the number 1/10 suffers the same problem that 1/3 does in decimal: representing it exactly would take infinitely many digits.

The literal 0.1, converted to a double-precision number, is already rounded, to a double that encodes the value 0.1000000000000000055511151231257827021181583404541015625. Close, but not exact. Since floating-point numbers are usually not printed exactly, this is usually hidden, but it's happening.

0.2 rounds to 0.200000000000000011102230246251565404236316680908203125, and summing them gives 0.3000000000000000444089209850062616169452667236328125. On the other hand, 0.3 rounds down to 0.299999999999999988897769753748434595763683319091796875.
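
If you want to see those exact values yourself, one way in Python is to convert the floats to Decimal, which displays a double's value without further rounding (a sketch using the standard decimal module):

```python
# Converting a float to Decimal reveals the exact binary value of the double.
from decimal import Decimal

print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))        # 0.200000000000000011102230246251565404236316680908203125
print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125
print(Decimal(0.3))        # 0.299999999999999988897769753748434595763683319091796875
```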

user555045