Sometimes, computers seem to suck at maths :-
>>> 1.2 - 1.0
0.19999999999999996
To be fair, that’s the raw Python interpreter; an application intended for use as a calculator works a bit better :-
» qalc
> 1.2 - 1.0
1.2 − 1 = 0.2
The problem is related to low-level numeric types. Computers store numbers in a variety of different formats (called types by developers). Whole numbers (integers) are easy – just allocate a certain number of 8-bit bytes (more bytes means bigger numbers can be stored, at the cost of more memory per number) and you have something that stores whole numbers with perfect accuracy, at least within its range.
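You can watch that trade-off from Python itself – the standard struct module packs an integer into a fixed number of bytes, and the byte count sets the range (a quick sketch; "<h" and "<q" are the little-endian 2-byte and 8-byte signed integer formats) :-
>>> import struct
>>> struct.pack("<h", 32767)   # 2 bytes: -32768 to 32767
b'\xff\x7f'
>>> struct.pack("<q", 2**62)   # 8 bytes: roughly ±9.2 quintillion
b'\x00\x00\x00\x00\x00\x00\x00@'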
Floating point numbers (i.e. numbers with a ‘decimal’ point), on the other hand, are much more of a compromise between size and accuracy. Floating point in effect uses scientific notation for numbers – 1.23E23. So a number is split into two parts – the mantissa (effectively the bit before the “E”) and the exponent. Packing both parts into 32 bits (single precision) limits the precision with which numbers can be stored, but it is usually sufficient and allows a far larger range of numbers :-
>>> print("{:10.4f}".format(1.2 - 1.0))
    0.2000
In other words, if you are using a low-level interface as a calculator, you can produce sensible output merely by writing your code properly. Or use a proper calculator program like qalc(ulator).
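You can also peek at both halves of the floating point compromise from Python. Its own floats are actually 64-bit doubles, but math.frexp will show the mantissa/exponent split, and the struct module can round-trip a value through 32-bit single precision to show what the squeeze costs (a small sketch; "<f" is the IEEE single precision format code) :-
>>> import struct, math
>>> math.frexp(1.2)   # mantissa and exponent: 1.2 == 0.6 * 2**1
(0.6, 1)
>>> struct.unpack("<f", struct.pack("<f", 1.2))[0]   # squeeze through 32 bits
1.2000000476837158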
This is of course an oversimplification and the Wikipedia article on single precision floating point goes into far more detail than I want to understand. Amongst the things I’ve glossed over is the problem of performing calculations in base 2 (binary) rather than base 10 (decimal).
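You can see the base 2 problem directly though – constructing a Decimal from a float (rather than from a string) exposes the exact binary value the machine actually stores for “1.2” :-
>>> from decimal import Decimal
>>> Decimal(1.2)   # the nearest base-2 value to 1.2 that a 64-bit float can hold
Decimal('1.1999999999999999555910790149937383830547332763671875')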
Plus there are a whole bunch of other numeric types such as larger floating point types, decimal floating point, bignums (which use whatever memory is necessary to store a number), fixed point, etc.
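Python happens to ship several of these in its standard library – decimal, fractions, and plain int (which is already a bignum) :-
>>> from decimal import Decimal
>>> Decimal("1.2") - Decimal("1.0")   # decimal floating point: exact in base 10
Decimal('0.2')
>>> from fractions import Fraction
>>> Fraction(6, 5) - Fraction(1)   # exact rational arithmetic
Fraction(1, 5)
>>> 2 ** 100   # ints grow to whatever size is needed
1267650600228229401496703205376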
Computers aren’t bad at maths; it is just that you can trick them into making themselves look bad.