I could be missing something fundamental, but consider this interpreter session[1]:
>>> -0.0 is 0.0
False
>>> 0.0 is 0.0
True
>>> -0.0 # The sign is even retained in the output. Why?
-0.0
>>>
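One way to see that the two literals really do carry different bit patterns is float.hex(), which prints the exact stored value, sign included (a quick sketch; float.hex() has been available since Python 2.6):
>>> (0.0).hex()
'0x0.0p+0'
>>> (-0.0).hex()  # the sign bit is part of the stored value
'-0x0.0p+0'
>>>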
You would think that the Python interpreter would realize that -0.0 and 0.0 are the same number. In fact, it compares them as being equal:
>>> -0.0 == 0.0
True
>>>
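The equality here follows IEEE 754, which defines positive and negative zero to compare equal even though they are distinct values. The sign is still observable, though; for instance, math.copysign copies the sign bit of its second argument (a small sketch; math.copysign also dates back to Python 2.6):
>>> import math
>>> math.copysign(1.0, 0.0)
1.0
>>> math.copysign(1.0, -0.0)  # the sign survives even though -0.0 == 0.0
-1.0
>>>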
So why does Python differentiate between the two and generate a whole new object for -0.0? It doesn't do this with integers:
>>> -0 is 0
True
>>> -0 # Sign is not retained
0
>>>
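For integers the question of a signed zero never arises: Python ints are arbitrary-precision and have a single representation of zero, so -0 simply evaluates to 0. On top of that, CPython caches small integers (roughly -5 through 256), which is why identity checks on small ints succeed; outside that range they generally do not (a sketch of this implementation detail; exact results may vary by version and context):
>>> a = 256
>>> b = 256
>>> a is b  # cached small int: the same object in CPython
True
>>> a = 100000
>>> b = 100000
>>> a is b  # larger ints are not cached; typically a fresh object each time
False
>>>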
Now, I realize that floating-point numbers are a notorious source of problems with computers, but in my experience those problems always come down to accuracy. For example:
>>> 1.3 + 0.1
1.4000000000000001
>>>
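For instance, the literal 0.1 is already not stored exactly; decimal.Decimal can display the exact binary value hiding behind a float (a quick sketch):
>>> from decimal import Decimal
>>> Decimal(0.1)  # the exact binary64 value behind the literal 0.1
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>>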
But this isn't an accuracy problem, is it? I mean, we are talking about the sign of the number here, not its decimal places.
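And the sign isn't merely cosmetic, either: some operations are specified to honor the sign of zero, so the two values can produce different results (a sketch using math.atan2, which distinguishes the two zeros on IEEE 754 platforms):
>>> import math
>>> math.atan2(0.0, -1.0)
3.141592653589793
>>> math.atan2(-0.0, -1.0)  # same inputs except the sign of zero
-3.141592653589793
>>>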
[1] I can reproduce this behavior in both Python 2.7 and Python 3.4, so this is not a version-specific question.