Do the variables declared in a for loop affect the performance of the loop?


I have done my homework and found repeated assurances that it makes no difference in performance whether you declare your variables inside or outside your for loop, and it actually compiles to the very same MSIL. But I have been fiddling with it nevertheless and found that moving the variable declarations inside the loop does actually cause a considerable and consistent performance gain.

I have written a small console test class to measure this effect. I initialise a static double[] array items, and two methods perform loop operations on it, writing the results to a static double[] array buffer. Originally, my methods were the ones with which I noticed the difference, namely the magnitude calculation of a complex number. Running these 100 times on an items array of length 1000000, I got consistently lower run times for the version in which the variables (6 double variables) were declared inside the loop: e.g. 32.83±0.64 ms v 43.24±0.45 ms on an elderly configuration with an Intel Core 2 Duo @ 2.66 GHz. Executing them in a different order did not influence the results.

Then I realised that calculating the magnitude of a complex number is far from a minimum working example and tested two much simpler methods:

    static void Square1()
    {
        double x;

        for (int i = 0; i < buffer.Length; i++) {
            x = items[i];
            buffer[i] = x * x;
        }
    }

    static void Square2()
    {
        for (int i = 0; i < buffer.Length; i++) {
            double x;
            x = items[i];
            buffer[i] = x * x;
        }
    }

With these, the results came out the other way: declaring the variable outside the loop seemed more favourable: 7.07±0.43 ms for Square1() v 12.07±0.51 ms for Square2().
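For reference, a timing harness along these lines can be sketched as follows. The original console test class is not shown in the question, so the class name, the Stopwatch approach, and the warm-up step here are my assumptions, not the exact code that produced the figures above:

```csharp
using System;
using System.Diagnostics;

static class LoopBench
{
    const int N = 1000000;   // array length used in the question
    const int Runs = 100;    // number of repetitions used in the question

    static readonly double[] items = new double[N];
    static readonly double[] buffer = new double[N];

    static void Square1()
    {
        double x;
        for (int i = 0; i < buffer.Length; i++) {
            x = items[i];
            buffer[i] = x * x;
        }
    }

    // Mean per-call time in milliseconds over Runs calls.
    static double TimeMs(Action method)
    {
        method();                        // warm-up call so the JIT compiles before timing
        var sw = Stopwatch.StartNew();
        for (int run = 0; run < Runs; run++)
            method();
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds / Runs;
    }

    static void Main()
    {
        var rng = new Random(0);
        for (int i = 0; i < N; i++)
            items[i] = rng.NextDouble();

        Console.WriteLine($"Square1: {TimeMs(Square1):F2} ms");
    }
}
```

A fixed seed keeps the input data identical across runs; swapping the order in which the methods are timed (as done in the question) is still worthwhile, since the first method timed can pay extra JIT and cache-warming costs.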

I am not familiar with ILDASM, but I have disassembled the two methods, and the only difference seems to be the initialisation of the local variables:

      .locals init ([0] float64 x,
       [1] int32 i,
       [2] bool CS$4$0000)

in Square1() v

      .locals init ([0] int32 i,
       [1] float64 x,
       [2] bool CS$4$0000)

in Square2(). Accordingly, what is stloc.1 in one method is stloc.0 in the other, and vice versa. In the MSIL for the longer complex-magnitude calculation, even the code size differed: there was a stloc.s i in the external-declaration version where the internal-declaration version had stloc.0.

So how can this be? Am I overlooking something or is it a real effect? If it is, it can make a significant difference in the performance of long loops, so I think it deserves some discussion.

Your thoughts are much appreciated.

EDIT: The one thing I overlooked was to test it on several computers before posting. I have run it on an i5 now and the results are nearly identical for the two methods. My apologies for having posted such a misleading observation.


Any C# compiler worth its salt will perform such micro-optimisations for you. Only leak a variable outside a scope if it's necessary.

So keep double x; internal to the loop if possible.

Personally though, if items[i] is plain-old-data array access then I'd write buffer[i] = items[i] * items[i];. C and C++ would optimise to that, but I don't think C# does (yet); your disassembly implies that it doesn't.
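That is, assuming the same static items and buffer arrays as in the question (the method name here is mine), the loop with no temporary at all would be:

```csharp
static void Square3()
{
    // No local temporary: index the array directly and let the
    // JIT decide whether to cache items[i] in a register.
    for (int i = 0; i < buffer.Length; i++)
        buffer[i] = items[i] * items[i];
}
```

Reading items[i] twice in the source does not necessarily mean two memory loads at run time; whether the JIT folds them into one is exactly the kind of detail the disassembly in the question can reveal.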