I was writing an instructive example for a colleague to show him why testing floats for equality is often a bad idea. The example I went with was adding 0.1 ten times and comparing against 1.0 (the one I was shown in my introductory numerical class). I was surprised to find that the two results were equal (code below).

float @float = 0.0f;
for(int @int = 0; @int < 10; @int += 1)
{
    @float += 0.1f;
}
Console.WriteLine(@float == 1.0f);

Some investigation showed that this result could not be relied upon (much like float equality). What I found most surprising was that adding code after the calculation could change its result (code below). Note that this example has exactly the same code and IL as the first, with one more line of C# appended.

float @float = 0.0f;
for(int @int = 0; @int < 10; @int += 1)
{
    @float += 0.1f;
}
Console.WriteLine(@float == 1.0f);
Console.WriteLine(@float.ToString("G9"));

I know I'm not supposed to use equality on floats and thus shouldn't care too much about this, but I found it quite surprising, as has just about everyone I've shown it to. Doing stuff after you've performed a calculation changes the value of the preceding calculation? I don't think that's the model of computation people usually have in their minds.
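
For reference, the kind of check this example was meant to motivate is a tolerance-based comparison rather than ==. A minimal sketch (the 1e-6f tolerance is an arbitrary illustrative value, not a universally correct one):

float sum = 0.0f;
for (int i = 0; i < 10; i += 1)
{
    sum += 0.1f;
}

// Compare against a tolerance instead of ==; the outcome no longer depends
// on whether the JIT kept the intermediate sum at higher precision.
const float tolerance = 1e-6f;
Console.WriteLine(Math.Abs(sum - 1.0f) < tolerance);   // True either way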

I'm not totally stumped; it seems safe to assume that there's some kind of optimization occurring in the "equal" case that changes the result of the calculation (building in debug mode prevents the "equal" case). Apparently, the optimization is abandoned when the CLR finds that it will later need to box the float.
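
If you want the loop to behave the same way regardless of what the JIT does, one option (a sketch, relying on the behavior that an explicit cast to float forces the value to be rounded to single precision) is to cast the intermediate result on every iteration:

float sum = 0.0f;
for (int i = 0; i < 10; i += 1)
{
    // The explicit (float) cast rounds the intermediate result to 32-bit
    // precision on each iteration, even if the JIT would otherwise keep
    // the running sum in an 80-bit x87 register.
    sum = (float)(sum + 0.1f);
}
Console.WriteLine(sum == 1.0f);   // False, consistently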

I've searched a bit but couldn't find a reason for this behavior. Can anyone clue me in?



1 Answer

This is a side effect of the way the JIT optimizer works. It does more work if there is less code to generate. The loop in your original snippet gets compiled to this:

                @float += 0.1f;
0000000f  fld         dword ptr ds:[0025156Ch]          ; push(intermediate), st0 = 0.1
00000015  faddp       st(1),st                          ; st0 = st0 + st1
            for (int @int = 0; @int < 10; @int += 1) {
00000017  inc         eax  
00000018  cmp         eax,0Ah 
0000001b  jl          0000000F 

When you add the extra Console.WriteLine() statement, it compiles it to this:

                @float += 0.1f;
00000011  fld         dword ptr ds:[00961594h]          ; st0 = 0.1
00000017  fadd        dword ptr [ebp-8]                 ; st0 = st0 + @float
0000001a  fstp        dword ptr [ebp-8]                 ; @float = st0
            for (int @int = 0; @int < 10; @int += 1) {
0000001d  inc         eax  
0000001e  cmp         eax,0Ah 
00000021  jl          00000011 

Note the difference at address 15 versus addresses 17 and 1a: the first loop keeps the intermediate result in the FPU, while the second loop stores it back to the @float local variable. While the value stays inside the FPU, the result is calculated with full precision (the x87 registers hold 80-bit extended precision). Storing it back, however, truncates the intermediate result to a float, losing lots of bits of precision in the process.
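
To make the precision difference concrete, here is a sketch that simulates both code paths in plain C#. A double stands in for the higher-precision intermediate kept in the x87 register (which actually holds 80 bits), and an explicitly rounded float stands in for the value stored back to the local:

double kept = 0.0;      // stands in for the running sum kept in st(0)
float stored = 0.0f;    // stands in for the value stored back to @float

for (int i = 0; i < 10; i += 1)
{
    kept += 0.1f;                       // error accumulates at higher precision
    stored = (float)(stored + 0.1f);    // rounded back to float every iteration
}

Console.WriteLine((float)kept == 1.0f);   // True:  the final rounding lands on 1.0f
Console.WriteLine(stored == 1.0f);        // False: stored ends up as 1.0000001f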

While unpleasant, I don't believe this is a bug. The x64 JIT compiler behaves differently again, by the way. You can make your case at connect.microsoft.com.

