I was writing an instructive example for a colleague to show him why testing floats for equality is often a bad idea. The example I went with was adding 0.1 ten times and comparing against 1.0 (the one I was shown in my introductory numerical methods class). I was surprised to find that the two results were equal (code + output).
float @float = 0.0f;
for(int @int = 0; @int < 10; @int += 1)
{
@float += 0.1f;
}
Console.WriteLine(@float == 1.0f);
Some investigation showed that this result could not be relied upon (much like float equality itself). The thing I found most surprising was that appending code after the calculation could change its result (code + output). Note that this example has exactly the same code and IL, with one more line of C# appended.
float @float = 0.0f;
for(int @int = 0; @int < 10; @int += 1)
{
@float += 0.1f;
}
Console.WriteLine(@float == 1.0f);
Console.WriteLine(@float.ToString("G9"));
I know I'm not supposed to use equality on floats, and thus shouldn't care too much about this, but I found it quite surprising, as has just about everyone I've shown it to. Doing stuff after you've performed a calculation changes the value of the preceding calculation? I don't think that's the model of computation people usually have in their minds.
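For the record, what I'd normally write (and what I'll end up showing my colleague) is a tolerance-based comparison rather than an exact one. A minimal sketch; the helper name and the 1e-6f tolerance are arbitrary choices on my part:
static bool NearlyEqual(float a, float b, float tolerance = 1e-6f)
{
    // Treat the values as equal if they differ by less than the tolerance.
    return System.Math.Abs(a - b) < tolerance;
}

Console.WriteLine(NearlyEqual(@float, 1.0f)); // True for both results above (1.0f and ~1.0000001f)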
I'm not totally stumped; it seems safe to assume that some kind of optimization is occurring in the "equal" case that changes the result of the calculation (building in debug mode prevents the "equal" case). Apparently, the optimization is abandoned when the CLR finds that it will later need to box the float.
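If that's right, my guess is that the optimized loop keeps the running sum in a register with more than 32 bits of precision, so the per-iteration rounding that would normally accumulate to roughly 1.0000001f never happens, and boxing forces the value back into a genuine 32-bit float. If I understand the C# spec correctly, an explicit cast to float is required to truncate to 32-bit precision, so a variant like the following should print False regardless of build settings or trailing code; consider it an untested sketch of that idea rather than a confirmed fix:
float @float = 0.0f;
for(int @int = 0; @int < 10; @int += 1)
{
    // The explicit cast should force the intermediate sum back to 32-bit
    // precision on every iteration, instead of letting it stay in a wider register.
    @float = (float)(@float + 0.1f);
}
Console.WriteLine(@float == 1.0f); // expected: False either way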
I've searched a bit but couldn't find a reason for this behavior. Can anyone clue me in?