Good practice
So I can say that either you misunderstood what you were told about declaring variables, or you were taught wrong, and that happens a lot.
I really see a lot of things that made sense in the '60s or '70s being repeated to this day as if they were still true. People learn by rote, like following a cake recipe: they memorize good practices without learning how something actually works or why it is done. Learning why is more important than learning what.
For decades we had less than a thousandth of the processing power we have today and less than a millionth of the memory. Compilers needed to be simple: they avoided hard work and forced the programmer to help them. That constraint no longer exists, but what was once required became legend.
In fact it was never ideal to do this. It is more readable to declare the variable as close as possible to its use; it is easier to follow what you are doing.
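A hypothetical sketch of the difference (CalculateTotal and items are made-up names):

    // Harder to follow: declared far from where it is actually used.
    int total;
    // ... many unrelated lines ...
    total = CalculateTotal(items);

    // Easier to follow: declared at the point of use.
    int total = CalculateTotal(items);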
Smaller surface
The smaller the scope, the less damage a variable can do when something goes wrong. The shorter its lifetime, the less memory it occupies, even if that is just stack space.
If a variable is not used outside a block, there is no reason to declare it outside of it; there is no gain in doing so. Even if it were a good practice, it would need a justification, and hardly anyone can give one.
Besides, following good practice should never be the goal of the code. Working properly, meeting the requirements, and being readable and easy to maintain is what should always happen.
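To illustrate with a minimal sketch (user, BuildMessage and Send are hypothetical names):

    // 'message' leaks into the rest of the method for no reason,
    // and is computed even when it is never sent.
    string message = BuildMessage(user);
    if (user.IsActive) {
        Send(message);
    }

    // Smaller surface: nothing outside the 'if' can touch it.
    if (user.IsActive) {
        string message = BuildMessage(user);
        Send(message);
    }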
I answered something about this in C.
Technically worse
Note how absurd it is to declare a variable, spend time assigning a value to it, and shortly after assign another value, discarding the first. An int is simple, but imagine having to assign a value that is expensive to produce. Every time I see someone assign a value that is never used I feel like crying.
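A sketch of that waste (LoadDefaults, LoadFromFile and path are hypothetical names):

    // Wasteful: an expensive value is computed and immediately discarded.
    var config = LoadDefaults();   // this result is never used
    config = LoadFromFile(path);

    // Simpler and cheaper: assign once, at the point of use.
    var config = LoadFromFile(path);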
Even if you don’t need all this performance, because the gain isn’t great, avoiding something totally unnecessary isn’t just optimizing, it’s simplifying.
Remember that declaring a variable always involves an assignment, at least in C#, even if an implicit one. Luckily, or unluckily, depending on your point of view, reference types that are declared but not explicitly assigned have a very low cost, because only the reference has to be zeroed; but it is not zero cost, it is similar to assigning an integer.
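A minimal sketch of what that implicit assignment means, using fields, which C# zero-initializes (locals must be definitely assigned before use, but the zeroing cost described above is the same idea):

    public class Example {
        string name; // implicitly set to null: only the reference is zeroed
        int count;   // implicitly set to 0
    }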
"I was always directed to create the variables before any operation"
This is true, you cannot use a variable before declaring it :) But it does not need to be declared well before; it can be just before.
ReSharper is right about this one.
Semantic difference
Note that there is a semantic difference between these codes. The first creates one variable and changes its value. There may be cases where you want to do that, but this does not seem to be one of them. The second creates several variables, one per loop iteration. Yes, each pass will generate a different a.
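The question's snippets are not reproduced here, but presumably the contrast is something like this:

    // First: one single variable, its value changed on each iteration.
    var a = 0;
    for (var i = 0; i < 5; i++) {
        a = i;
        // use a
    }

    // Second: a new variable created on each iteration.
    for (var i = 0; i < 5; i++) {
        var a = i;
        // use a
    }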
But don't think this costs more or takes up more memory: the variable is also destroyed at the end of each iteration, so the next one is created on top of where the previous one was. There is no cost in doing this beyond what assigning a value would cost anyway.
It might look the same, and in general it behaves the same, but because each variable has its own identity there can be a real, noticeable difference if the variable is captured by a closure, for example.
If the loop created closures and stored them in a list for later execution, the values would differ. The first code would capture the same single variable, so the value would be equal in all the closure instances. In the second code, each closure would have a different value, since it captures a new variable each time. This is very important. I believe that in this case ReSharper would not give this suggestion.
This code demonstrates it:
    using System;
    using static System.Console;
    using System.Collections.Generic;

    public class Program {
        public static void Main() {
            var acoes = new List<Func<int>>();
            var a = 0;
            for (var i = 0; i < 5; i++) {
                a = i;                    // the same variable is reused every iteration
                acoes.Add(() => a * 2);   // every closure captures that single 'a'
            }
            foreach (var acao in acoes) WriteLine(acao()); // prints 8 five times

            acoes = new List<Func<int>>();
            for (var i = 0; i < 5; i++) {
                int b = i;                // a fresh variable on each iteration
                acoes.Add(() => b * 2);   // each closure captures its own 'b'
            }
            foreach (var acao in acoes) WriteLine(acao()); // prints 0, 2, 4, 6, 8
        }
    }
See it working on ideone and on .NET Fiddle. I also put it on GitHub for future reference.
Until C# 4 there was a bug in the compiler, and even the second one worked wrong, the same as the first.
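Assuming this refers to the well-known scoping change in C# 5, where the foreach loop variable became a fresh variable per iteration, the pattern that used to misbehave looks like this:

    var acoes = new List<Func<int>>();
    foreach (var i in new[] { 0, 1, 2, 3, 4 }) {
        acoes.Add(() => i * 2); // C# 5 and later: each closure gets its own 'i';
                                // older compilers captured one shared 'i'
    }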
Some may think that ReSharper makes the user lazy, and sometimes it may suggest something wrong too. Everything leads one to believe that the variable a would make no sense there, but it all depends on a larger context, and on whether or not the variable a is used outside the for. The important thing is to keep your code consistent in where variables are declared, and not to depend on plugins to develop your code, as they often will not be useful. There are companies that abhor the use of ReSharper. – novic