Why You Should Always Use the ‘var’ Keyword in C#

Using the ‘var’ keyword in C# has always spurred a hot debate among developers.  I believe ‘var’ should be used at all times.  I believe this not because I choose to be “lazy,” as those who argue against it frequently claim.  Of all the reasons I use ‘var’, laziness is not one of them.

I’ve argued for the constant use of ‘var’ countless times; this blog post is a collection of thoughts that I have compiled resulting from my arguments.  Below are my reasons for using ‘var’ all of the time.

It decreases code-coupling

Using ‘var’ reduces coupling between code and the code that depends on it.  I do not mean coupling in an architectural sense, nor at the IL level (the type is inferred there anyway), but simply at the source-code level.


Imagine there are 20 explicit type references, spread across twenty code files, to a method that returns an object of type IFoo.  By explicit type references, I mean prefacing each variable name with IFoo.  What happens if IFoo changes to IBar, but the interface’s methods are kept the same?

Wouldn’t you have to change it in 20 distinct places?  Doesn’t this increase coupling?  If ‘var’ were used, would you have to change anything?  Now, one could argue that it is trivial to rename IFoo to IBar with a tool like ReSharper and have all of the references updated automatically.  However, what if IFoo is outside of our control?  It could live outside the solution, or it could come from a third-party library.
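To make the scenario concrete, here is a minimal sketch; the names FooRepository, GetFoo, and Process are illustrative, not from any real codebase:

```csharp
using System;

public interface IFoo
{
    void Process();
}

public class Foo : IFoo
{
    public void Process() => Console.WriteLine("processing");
}

public class FooRepository
{
    public IFoo GetFoo() => new Foo();
}

public static class Program
{
    public static void Main()
    {
        var repository = new FooRepository();

        // Explicit typing: if GetFoo() later returns IBar instead, this line
        // (and every line like it, across twenty files) must be edited.
        IFoo explicitFoo = repository.GetFoo();
        explicitFoo.Process();

        // With 'var', the declaration compiles unchanged as long as the
        // returned object still exposes a Process() method.
        var inferredFoo = repository.GetFoo();
        inferredFoo.Process();
    }
}
```

Only the line with the explicit IFoo annotation is coupled to the interface’s name; the ‘var’ declaration follows the method’s return type automatically.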

It is completely redundant with any expression involving the ‘new’ operator

Especially with generics:

ICalculator<GBPCurrency, GBPTaxType> calculator = new GBPCalculator<GBPCurrency, GBPTaxType>();

can be shortened to:

var calculator = new GBPCalculator<GBPCurrency, GBPTaxType>();

Even if the calculator is returned from a method (such as when implementing the repository pattern), if the method name is expressive enough it is obvious the object is a calculator.  The name of the variable should be expressive enough for you to know what the object it represents is.  This is important to realize:  the variable name expresses not what type it holds, but what the instance of that type actually is.  An instance of a type is truly an object, and should be treated as such.
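A sketch of the repository-pattern case; CalculatorRepository, GetGbpCalculator, and the 20% rate are assumptions made up for illustration:

```csharp
using System;

public class GBPCurrency { }
public class GBPTaxType { }

public interface ICalculator<TCurrency, TTaxType>
{
    decimal CalculateTax(decimal amount);
}

public class GBPCalculator<TCurrency, TTaxType> : ICalculator<TCurrency, TTaxType>
{
    // Assumed flat rate, purely for illustration.
    public decimal CalculateTax(decimal amount) => amount * 0.20m;
}

public static class CalculatorRepository
{
    // The method name already says what comes back; repeating the
    // type at the call site adds nothing.
    public static ICalculator<GBPCurrency, GBPTaxType> GetGbpCalculator()
        => new GBPCalculator<GBPCurrency, GBPTaxType>();
}

public static class Program
{
    public static void Main()
    {
        // Both the method name and the variable name say "calculator";
        // an explicit type annotation would be the third repetition.
        var calculator = CalculatorRepository.GetGbpCalculator();
        Console.WriteLine(calculator.CalculateTax(100m));
    }
}
```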

There is a distinction between an object and its type:  an object exists at runtime and has properties and behaviors; types simply describe what an object should be.  Spelling out what type an object should be simply adds more noise to the source code, distracting the coder from what an object really is.

An object may be brought into this world by following the rules governed by a type, but this is only secondary information.  What the object actually is and how it behaves is more important than its type.  When we use an object at runtime, we are dependent on its methods and properties, not its type.  These methods and properties are an object’s behaviors, and it is behaviors we are dependent upon.

The argument for knowing a variable's type has been brought up in the past.  The move from Hungarian notation in Microsoft-based C++ to non-Hungarian notation found in C# is a great example of this once hot topic.  Most Microsoft-based C++ developers at the time felt putting type identifiers in front of variable names was helpful, yet Microsoft published coding standards for C# that conflicted with these feelings.

It was a major culture change and mind-shift to get developers to accept non-Hungarian notation.  I was among those who thought that non-Hungarian variable naming was downright heresy, and that anyone following such a practice must be lazy and must not care about their profession.  If knowing a variable’s type is so important, shouldn’t we preface variable names in Hungarian style to convey even more information about an object’s type?

You shouldn't have to care what the type of an object is

You should only care what you are trying to do with an object, not what type the object comes from.  The methods you are attempting to call on an object are its object contract, not its type.  If variable names, methods, and properties are named appropriately, then the type is simply redundant.

In the previous example, the word "calculator" was repeated three times.  In that example, you only need to know that the instance of a type (the object) is a calculator, and it allows you to call a particular method or property.

The only reason a calculator object was created was so that other code could interact with its object contract.  Other code needs the calculator’s methods and properties to get something done.  This need has no dependency on any type, only on an object’s behaviors.

For example, as long as the object is a calculator, and the dependent code needs to call a method named “CalculateTax,” then the dependent code is coupled to an object with a method called “CalculateTax” and not a specific type.  This allows for much more flexibility, because now the variable can reference any type as long as that type supports the “CalculateTax” method.
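In C#, one way to make that behavioral coupling literal is the `dynamic` keyword (a shared interface is the statically typed alternative).  A hedged sketch, with invented UkCalculator/UsCalculator types and made-up rates:

```csharp
using System;

// Two unrelated types that happen to share the same "object contract":
// a CalculateTax method. Neither implements a common interface.
public class UkCalculator
{
    public decimal CalculateTax(decimal amount) => amount * 0.20m; // assumed rate
}

public class UsCalculator
{
    public decimal CalculateTax(decimal amount) => amount * 0.07m; // assumed rate
}

public static class Program
{
    public static void Main()
    {
        // With 'dynamic', the loop body is coupled only to the existence of
        // CalculateTax at runtime, not to any specific type name.
        foreach (dynamic calculator in new object[] { new UkCalculator(), new UsCalculator() })
        {
            Console.WriteLine(calculator.CalculateTax(100m));
        }
    }
}
```

In everyday code the interface-based version is usually preferable, since the compiler can still verify that CalculateTax exists; the point here is only that the dependency is on the behavior, not the type name.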

‘var’ is less noisy than explicitly referencing the type

As programming languages evolve, we spend less time telling the compiler and the computer what to do and more time expressing problems that exist in the specific domain we are working in.

For example, there are a number of things in C++ that are very technical with respect to the machine, but have nothing to do with the domain.  If you are a customer of Quicken or Microsoft Money, all you really want to do is manage your finances better.  These software packages allow you to do that.

The better a software package can do this for you, the more valuable it is to you.  Therefore, from a development perspective value is defined by how well a software package solves a user's problem.  When we set out to develop such software, the only code that is valuable is the code that contributes to solving a particular user’s problem.  The rest of the code is unfortunately a necessary waste, but is required due to limitations of technology.

If we had infinite memory, we would not need to worry about deleting pointers in C++ or garbage collection in C#.  However, memory is a limitation and therefore the technician in us has to find ways of coping with this limitation.

The inclusion of ‘var’ in the C# language was done for a reason and marks another iteration of the language (specifically C# 3.0).  It allows us to spend less time telling the compiler what to do and more time thinking about the problem we are trying to solve.

Often I hear dogma like "use var only when using anonymous types."  Why then should you use an anonymous type?  Under those conditions you usually do not have a choice, such as when assigning variables to the results of LINQ expressions.  Why do you not have a choice with LINQ expressions?  Because the projection produces a type with no name you could write down; the expression is accomplishing something functional, and spelling out types is the least of your worries.
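The "no choice" case looks like this; the orders data and the VAT rate are invented for the example:

```csharp
using System;
using System.Linq;

public static class Program
{
    public static void Main()
    {
        var orders = new[]
        {
            new { Customer = "Ann", Total = 120m },
            new { Customer = "Bob", Total = 80m },
        };

        // The Select projection creates an anonymous type, so there is no
        // type name to write on the left-hand side; 'var' is the only option.
        var summaries = orders
            .Where(o => o.Total > 100m)
            .Select(o => new { o.Customer, Vat = o.Total * 0.20m });

        foreach (var summary in summaries)
        {
            Console.WriteLine($"{summary.Customer}: {summary.Vat}");
        }
    }
}
```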

In the ideal C# world, we would not have to put any words in front of a variable name at all.  In fact, prefacing a variable with anything just confuses the developer even further, and allows for poor variable names to become a standard whereby everyone is reliant upon explicit type references.

Arguments against using ‘var’

Some of the arguments I have heard against using ‘var’ and my responses to these are:

  1. “It reduces clarity” – How?  By removing the noise in front of a variable name, your brain has only one thing to focus on:  the variable name.  It increases clarity.
  2. “It reduces readability and adds ambiguity” – This is similar to #1:  readability can be increased by removing words in front of the variable and by choosing appropriate variable names and method names.  Focusing on type distracts you from the real business problem you are trying to solve.
  3. “It litters the codebase” – This is usually an argument for consistency.  If your codebase uses explicit type references everywhere, then by all means do not use ‘var’.  Consistency is far more important.  Either change all explicit references in the codebase to ‘var’ or do not use ‘var’ at all.  This is a more general argument that applies to many more issues, such as naming conventions, physical organization policies, etc.


As a final thought, why do we preface interface names with “I” but not class names with “C” as we did in the days when Microsoft-C++ was the popular kid in school?