Performance Optimization in Visual Basic .NET
(http://msdn.microsoft.com/en-us/library/aa289513(VS.71).aspx)
Introduction
One of the primary goals of Visual Basic® .NET is to perform faster than previous versions. But performance still
depends on how you design your program. This article describes some important considerations that can help you
optimize your application's performance. The remainder of this introduction reviews the conditions and assumptions
underlying these optimization recommendations.
Intermediate Language
Visual Basic .NET and C#™ both compile to Microsoft intermediate language (MSIL). Equivalent source code in the two
languages usually compiles to the same MSIL code and results in the same performance for your application.
Performance should not be a criterion in choosing between the two languages. For more information, see Compiling to
MSIL.
Execution Frequency
Some of the recommendations in this article might represent a negligible difference within one statement, but the
performance gain can be greatly amplified inside a loop or a frequently called procedure. Therefore, code blocks that
are executed many times are good candidates for optimization.
Bottlenecks
The most productive approach to optimization is to identify the bottlenecks, or slow places, in your application and
work to improve them. Common bottlenecks are long loops and accesses to databases. Micro-optimizing every
expression and procedure call is not an efficient use of development effort.
Application Dependency
Performance is highly dependent on the characteristics of each individual application. There are no guarantees. The
recommendations in this article are guidelines only. It is possible that you might need to make some adjustments to
optimize your particular application.
Platform
Visual Studio® .NET is optimized for the recommended system hardware configuration, both for the integrated
development environment (IDE) and for the runtime. If you have less than the recommended amount of RAM, your
performance is likely to suffer. This is especially true if you are running large or multiple applications. For more
information, see Locating Readme Files to confirm that your system satisfies the prerequisites.
Testing Conditions
Some specialized testing for this article was performed using Visual Studio .NET 2002 on Microsoft® Windows® 2000
Professional, running on a 600 MHz Pentium III with 256 MB of RAM. The specialized testing consisted of tight loops
that did nothing more than exercise the code elements being compared. In other words, the loops contained the code
elements and nothing else. Therefore, the timing differences represent the most extreme cases, and you should not
expect such large differences in a normal application.
Preliminary Information
This article is based on preliminary information. The recommendations are subject to change as experience provides
updated and more refined information. Also, some of the underlying considerations might change in future releases.
Data Types
The data types you choose for your variables, properties, procedure arguments, and procedure return values can
affect the performance of your application.
Value types hold their data within their own allocated memory. Because each instance of a value type is isolated and
cannot be accessed through more than one variable, value types can be held and managed on the stack.
Reference types hold only a pointer to the memory location that stores the data. Because more than one variable can
point to the data, reference types must be held and managed on the heap.
Heap management costs more than stack allocation. There is overhead for heap allocation, object
access, and garbage collection (GC). This means that a small value type is a better choice than a reference type when
you do not need the flexibility of a reference type. However, value types lose this advantage as they become larger.
For example, it takes more time to make an assignment with a five-byte value type than with a reference type.
The performance effect of data type choice can range from unnoticeable to as much as 30 percent in favor of small
value types. Note that it also depends on other factors such as hardware platform, system loading, and data size. For
more information, see Value Types and Reference Types.
Object Type and Late Binding
A reference type variable declared as Object can point to data of any data type. This flexibility, however, can
compromise performance because an Object variable is always late bound.
A reference type variable is early bound if it is declared to be of a specific class, such as Form. This allows the Visual
Basic compiler to perform certain optimizations at compile time, such as type checking and member lookup. When you
access members on an early-bound object variable at run time, the compiler has already done much of the
management work.
A variable is late bound if it is declared as type Object or without an explicit data type. When your code accesses
members on such a variable, the common language runtime is obliged to perform type checking and member lookup
at run time. Note that, if Option Explicit is Off, a variable declared without an explicit data type is of type Object.
Early-bound objects have significantly better performance than late-bound objects. They also make your code easier
to read and maintain, and they reduce the number of run-time errors. This means you should declare your object
variables using a specific class type whenever you know it at development time. For more information, see Early and
Late Binding.
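For example, the following sketch contrasts the two kinds of declaration (the variable names are illustrative, and the late-bound call requires Option Strict Off):

    ' Early bound: the compiler resolves members of Form at compile time.
    Dim frmEarly As System.Windows.Forms.Form = New System.Windows.Forms.Form()
    frmEarly.Show()

    ' Late bound: member lookup for Show happens at run time.
    Dim objLate As Object = New System.Windows.Forms.Form()
    objLate.Show()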
You should avoid using the Object type when it is not necessary. In addition to being subject to late binding, Object
variables that point to data of a value type consume additional memory, both for the pointer and for an additional
copy of the data. For more information, see Object Type.
The most efficient data types are those that use the native data width of the run-time platform. On current platforms,
the data width is 32 bits, for both the computer and the operating system.
Consequently, Integer is currently the most efficient data type in Visual Basic .NET. Next best are Long, Short, and
Byte, in that order of efficiency. You can improve the performance of Short and Byte by turning off integer overflow
checking, for example by setting the RemoveIntegerChecks property, but this incurs the risk of incorrect
calculations due to undetected overflows. You cannot toggle this checking on and off during run time; you can only set
its value for the next build of your application.
If you need fractional values, the best choice is Double, because the floating-point processors of current platforms
perform all operations in double precision. Next best are Single and Decimal, in that order of efficiency.
Units of Representation
In some cases your application can use integral data types (Integer, Long, Short, Byte) in place of fractional types.
This is often true when you have a choice of units. For example, if you are representing the size of an image, you can
use pixels instead of inches or centimeters. Because the number of pixels in an image is always a whole number, you
can store it in an Integer variable. If you chose inches, you would probably need to deal with fractional values, so
you would need a Double variable, which is not as efficient. The smallest unit of representation can usually be
integral, whereas the larger units are typically fractional. If you perform many operations with these units, an integral
data type can improve performance.
Boxing and Unboxing
Boxing is the extra processing the common language runtime must do when you treat a value type as a reference
type. Boxing is necessary, for example, if you declare an Integer variable and then assign it to an Object variable or
pass it to a procedure that takes an Object argument. In this case, the common language runtime must box the
variable to convert it to type Object. It copies the variable, embeds the copy in a newly allocated object, and stores
its type information.
If you subsequently assign the boxed variable to a variable declared as a value type, the common language runtime
must unbox it, that is, copy the data from the heap instance into the value type variable. Furthermore, the boxed
variable must be managed on the heap, whether or not it is ever unboxed.
Boxing and unboxing cause very significant performance degradation. If your application frequently treats a value
type variable as an object, it is better to initially declare it as a reference type. An alternative is to box the variable
once, retain the boxed version as long as it is being used, and then unbox it when the value type is needed again.
You can eliminate inadvertent boxing by setting Option Strict On. This helps you find places where you
unintentionally box a value type, and it forces you to use explicit conversion, which is often more efficient than
boxing. Note, however, that you cannot bypass boxing by using explicit conversion. CObj(<value type>) and
CType(<value type>, Object) both box the value type.
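The following sketch shows both operations on a hypothetical Integer variable:

    Dim count As Integer = 42
    Dim boxed As Object = count            ' Boxing: the value is copied into a new heap object.
    Dim unboxed As Integer = CInt(boxed)   ' Unboxing: the value is copied back into a value type.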
Arrays
Avoid using a larger rank than necessary. The fewer dimensions an array has, the more efficiently it performs. The
difference is most significant between one- and two-dimensional arrays, because the common language runtime
optimizes for one dimension.
Jagged arrays (arrays of arrays) are currently more efficient than rectangular (multidimensional) arrays. In other
words, A(9)(9) performs more efficiently than A(9,9). This is because jagged arrays can profit from the optimization for
one-dimensional arrays. The difference can exceed 30 percent.
Note that jagged arrays are not compliant with the common language specification (CLS). This means you should not
expose jagged arrays from any class you want CLS-compliant code to consume. For more information, see Arrays.
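The following sketch declares the two kinds of array discussed above:

    ' Rectangular (two-dimensional) array.
    Dim rect(9, 9) As Integer
    rect(3, 4) = 1

    ' Jagged array (array of one-dimensional arrays).
    Dim jagged(9)() As Integer
    Dim i As Integer
    For i = 0 To 9
        jagged(i) = New Integer(9) {}
    Next
    jagged(3)(4) = 1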
ArrayList
The ArrayList class in the System.Collections namespace supports a dynamic array, which changes its size as
required. To use it, you declare a variable with the ArrayList data type instead of using the standard array
declaration. You can then call its Add, AddRange, Insert, InsertRange, Remove, and RemoveRange methods to
add and delete elements.
If your array changes size frequently and you need to retain the values of existing elements, the ArrayList object can
give you better performance than the ReDim statement with the Preserve keyword. The disadvantage of ArrayList
is that all its members are of type Object and are therefore late bound. Whether the advantage over ReDim
compensates for the disadvantage of late binding depends on your individual application. You should be prepared to
try both approaches and compare performance. For more information, see ArrayList Class.
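The following sketch shows basic ArrayList usage (Imports System.Collections is assumed):

    Dim list As New ArrayList()
    list.Add("First")
    list.Add("Second")
    list.Insert(1, "Between")
    list.Remove("First")
    Dim firstItem As String = CStr(list(0))   ' Members are of type Object; convert explicitly.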
Declarations
As previously discussed in the Object Type and Late Binding section, early binding, which is faster than late binding,
also provides better error checking. You should declare your object variables using the most specific, suitable class
type. As an example of this, consider the partial inheritance hierarchy of these classes in the
System.Windows.Forms namespace:
Object
Control
Button
Label
Form
Suppose you use an object variable in such a way that every object you assign to it is a Control, and most but not all
are of type Form. Although Form is a more specific type than Control, you cannot declare the variable as type
System.Windows.Forms.Form, because it might need to take some objects of type Button or Label. You should
declare the variable as type System.Windows.Forms.Control, because that is the most specific type that can
accept every object assigned to it. For more information, see System.Windows.Forms Namespace.
Variables perform faster than properties. A variable access generates a simple memory fetch or store. A property
access requires a call to the Get or Set method on that property, which sometimes does extra processing in addition
to fetching or storing the value. Note that the compiler implements WithEvents variables as properties, so they do
not share the performance advantage of other variables over properties.
Constants perform faster than variables because their values are compiled into the code. A constant access does not
even require a memory fetch, except for constants of type Date or Decimal.
Option Settings
Option Explicit On forces you to declare all your variables, which makes your code easier to read and maintain. Be
sure to use the As clause in every declaration, including procedure arguments. If you do not specify As, your
variables and arguments take data type Object, which is usually not the optimal type. Using As improves
performance because it moves type inference from run time to compile time. For more information, see Option Explicit
Statement.
Option Strict On disallows implicit narrowing, requires the As clause in every declaration, and disallows late binding
regardless of the Option Explicit setting. You can still perform narrowing type conversions, but you must use explicit
conversion keywords such as CInt and CType. Explicit declaration improves performance because it protects your
code from inadvertent late binding. For more information, see Option Strict Statement.
Option Compare Binary specifies that strings are to be compared and sorted based on the binary representation of
their characters, without considering equivalent characters such as uppercase/lowercase pairs. You should use binary
comparison whenever your application's logic permits it. It improves performance because the code does not need to
deal with case insensitivity, or with groups of characters considered alphabetically the same in a given culture.
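For example, a source file might begin with the following settings:

    Option Explicit On
    Option Strict On
    Option Compare Binary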
Arrays Versus Collections
When you have a set of related objects that you handle similarly, you can put them in an array of objects, or you can
create a collection with the objects as members. The following considerations can help you choose between these
schemes:
• The common language runtime can optimize the code for an array, while every access into a collection
requires one or more method calls. Therefore, arrays are usually preferable when they support all the
operations you need to perform.
• For indexed accesses, arrays are never slower, and are usually faster, than collections.
• For keyed accesses, you should use a collection. Arrays do not support access using a key field, so you would
have to write code to search through the elements of the array for the key.
• For insertions and deletions, collections are usually preferable. Arrays do not directly support adding and
removing elements. If you are inserting or deleting at the end of an array, you must use the ReDim
statement, which reduces performance. To insert or delete anywhere else, you must use an ArrayList object
instead of a standard array. By contrast, insertions and deletions are straightforward operations in a
collection, and they are equally fast regardless of the position of the elements involved.
Disk File I/O
Visual Basic .NET offers three principal ways of accessing disk files:
• The traditional runtime file functions, such as FileOpen, FileGet, and FilePut
• The File System Object (FSO) model
• The classes in the System.IO namespace
The traditional runtime file functions are provided for compatibility with earlier versions of Visual Basic. FSO is
provided for compatibility with scripting languages, and for applications that require its functionality. Each of these
models is implemented as a set of wrapper objects that call members of classes in the System.IO namespace.
Therefore, they are not as efficient as using System.IO directly. In addition, FSO requires your application to
implement COM interop, which incurs marshaling overhead.
You can improve your performance by using System.IO classes such as the following:
• Path, Directory, and File for processing at the drive and folder level
• FileStream for general reading and writing
• BinaryReader and BinaryWriter for files with binary or unknown data
• StreamReader and StreamWriter for text files
• BufferedStream to supply buffering for an I/O stream
In general, FileStream provides the most efficient disk performance. It does its own buffering and its own disk
operations, without wrapping anything other than the Windows I/O procedures. However, if disk I/O is not a
bottleneck in your application, one of the other classes might be more convenient. For example, you might prefer
StreamReader and StreamWriter if you are dealing only with text and the disk performance is not critical.
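For example, the following sketch reads a text file line by line with StreamReader (the path is hypothetical, and Imports System.IO is assumed):

    Dim reader As New StreamReader("C:\Temp\Sample.txt")
    Dim line As String = reader.ReadLine()
    While Not line Is Nothing
        ' Process the line here.
        line = reader.ReadLine()
    End While
    reader.Close()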
Buffers
Declare your buffers to be of reasonable size, particularly when using BufferedStream or FileStream. Usually the
ideal size is a multiple of 4 KB, although this could vary depending on the application. Buffers smaller than 4 KB can
degrade performance by causing too many I/O operations for a given amount of data. Buffers that are too large can
consume more memory than necessary to achieve a given performance improvement. Depending on the hardware,
the reasonable upper limit can vary from 8 KB to 64 KB.
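For example, the FileStream constructor accepts an explicit buffer size (the path shown is hypothetical, and Imports System.IO is assumed):

    ' A 16 KB buffer; the best size depends on the application and the hardware.
    Dim fs As New FileStream("C:\Temp\Data.bin", FileMode.Open, FileAccess.Read, FileShare.Read, 16384)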
Asynchronous I/O
If your application is disk bound, you can take advantage of the disk latency to perform other tasks while waiting for
an I/O transfer to complete. To do this, use the asynchronous I/O available with the FileStream class. This approach
can require more source code, but it optimizes your run-time performance because some of the code executes while a
read or write operation is in progress. If you are transferring large amounts of data, or if your disk latency is
significant, the improvement can be considerable.
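The following sketch outlines the pattern with BeginRead and EndRead (the path is hypothetical, and Imports System.IO is assumed):

    Dim buffer(16383) As Byte
    ' The final True argument requests asynchronous I/O.
    Dim fs As New FileStream("C:\Temp\Data.bin", FileMode.Open, FileAccess.Read, FileShare.Read, 16384, True)
    Dim result As IAsyncResult = fs.BeginRead(buffer, 0, buffer.Length, Nothing, Nothing)
    ' Perform other work here while the read operation is in progress.
    Dim bytesRead As Integer = fs.EndRead(result)
    fs.Close()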
Operations
Integer arithmetic is much faster than floating point or decimal. In calculations where you do not need decimal points
or fractional values, declare all your variables and constants as integral data types, preferably Integer. Keep in mind,
however, that converting them to and from floating point degrades performance.
Operators
When dividing integral values, use the integer division (\ operator) when you only need the quotient and not the
remainder. The \ operator can be more than ten times as fast as the / operator.
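For example:

    Dim quotient As Integer = 17 \ 5     ' Integer division; the result is 3.
    Dim ratio As Double = 17 / 5         ' Floating-point division; the result is 3.4.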
Assignment Operators
Assignment operators such as += are more concise than their constituent operators (separate = and +), and they
can make your code easier to read. If you are operating on an expression instead of a simple variable, for example an
array element, you can achieve a significant performance improvement with assignment operators. This is because
the expression, for example MyArray(SubscriptFunction(Arg1, Arg2)), has to be evaluated only once.
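The following sketch uses the expression from the preceding paragraph; the array, function, and arguments are assumed to be declared elsewhere:

    ' The subscript expression is evaluated only once.
    MyArray(SubscriptFunction(Arg1, Arg2)) += 1

    ' The equivalent long form evaluates the subscript expression twice.
    MyArray(SubscriptFunction(Arg1, Arg2)) = MyArray(SubscriptFunction(Arg1, Arg2)) + 1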
Concatenations
You should use the concatenation operator (&) instead of the plus operator (+) to concatenate strings. They are
equivalent only if both operands are of type String. When this is not the case, the + operator becomes late bound
and must perform type checking and conversions. The & operator is designed specifically for strings.
Boolean Tests
When you test a Boolean variable in an If statement, it is easier to read if you specify only the variable in the test
rather than using the = operator to compare it to True. Although there is no significant performance difference, the
second test in the following example represents better programming practice:
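    ' blnFound and ProcessResult are hypothetical names used for illustration.
    If blnFound = True Then    ' First test: explicit comparison to True.
        ProcessResult()
    End If

    If blnFound Then           ' Second test: better programming practice.
        ProcessResult()
    End If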
Short-Circuiting
When possible, you should use the short-circuiting Boolean operators, AndAlso and OrElse. These can save time by
bypassing the evaluation of one expression depending on the result of the other. In the case of AndAlso, if the result
of the expression on the left is False, the final result is already determined and the expression on the right is not
evaluated. Similarly, OrElse bypasses the expression on the right if the one on the left evaluates to True. Note also
that the Case statement can short-circuit a list of multiple expressions and ranges, if it finds a match value before the
end of the list.
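For example, in the following sketch (with hypothetical names), CheckDetails is never called when totalCount is zero, and LoadFromDatabase is skipped when isCached is True:

    If totalCount > 0 AndAlso CheckDetails(totalCount) Then
        ' ...
    End If

    If isCached OrElse LoadFromDatabase() Then
        ' ...
    End If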
Member Access
Some member accesses (. operator) call a method or a property that returns an object. In the
System.Windows.Forms.Form class, for example, the Controls property returns a ControlCollection object. Such
an access entails object creation, heap allocation and management, and garbage collection (GC). If you make this
kind of member access in a loop, you create a new object every time, resulting in slower performance. If your
intention is to deal with the same object in each loop iteration, this might also be a logic error, because every access
of this type creates a different object.
Avoiding Requalification
If you make many references to members of an element that is qualified, such as MyForm.Controls.Item(Subscript), you can
improve performance by using the With ... End With construction:
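    With MyForm.Controls.Item(Subscript)
        ' The member accesses inside this block are illustrative.
        .Text = "Ready"
        .Visible = True
        .BringToFront()
    End With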
The preceding code evaluates MyForm.Controls.Item(Subscript) only once. It can run more than twice as fast as requalifying
every member access. However, if the element is not qualified, for example AnswerForm or Me, there is no performance
improvement using With ... End With. For more information, see With...End With Statements.
Type Conversions
Two class types have an inheritance relationship when one is derived from the other. If you have objects of each of
these types and you need to convert one to the type of the other, you can use the DirectCast keyword instead of the
CType keyword. DirectCast can have somewhat better performance because it does not use run-time helper
functions. Note that DirectCast throws an InvalidCastException error if there is no inheritance relationship
between the two types. For more information, see DirectCast.
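For example, both of the following conversions succeed because Button derives from Control:

    Dim ctl As System.Windows.Forms.Control = New System.Windows.Forms.Button()
    Dim btn1 As System.Windows.Forms.Button = CType(ctl, System.Windows.Forms.Button)
    Dim btn2 As System.Windows.Forms.Button = DirectCast(ctl, System.Windows.Forms.Button)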
Property Caching
If you repeatedly access a property, for example within a loop that is executed a large number of times, you can
improve performance by caching the property value. To do this, you assign the property value to a variable before
entering the loop, and you then use the variable during the loop. If necessary, you can assign the variable value back
to the property when the loop has completed.
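The following sketch caches the Controls collection and its Count property before entering the loop (MyForm is assumed to be a form variable):

    Dim myControls As System.Windows.Forms.Control.ControlCollection = MyForm.Controls
    Dim controlCount As Integer = myControls.Count
    Dim i As Integer
    For i = 0 To controlCount - 1
        myControls(i).Enabled = True
    Next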
If accesses to repeatedly used properties represent a significant part of the code in the loop, caching them can allow
your loop to run as much as three times as fast.
You might not be able to cache a property, however, if its Get and Set methods do extra processing in addition to
fetching or storing the value.
Exception Handling
Traditional Visual Basic error handling uses the On Error GoTo and On Error Resume Next statements. These are
not always easy to design, and the resulting source code is often convoluted and difficult to read and maintain.
Visual Basic .NET offers structured exception handling with the Try ... Catch ... Finally statements. This is based on a
control structure that is flexible and easy to read. Structured exception handling can check a given block of code for
several different exceptions and handle each one differently.
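For example, the following sketch handles two different exceptions from a file operation (the path and the ProcessConfiguration procedure are hypothetical):

    Try
        Dim reader As New System.IO.StreamReader("C:\Temp\Config.txt")
        ProcessConfiguration(reader.ReadToEnd())
        reader.Close()
    Catch ex As System.IO.FileNotFoundException
        ' Handle a missing file here.
    Catch ex As System.IO.IOException
        ' Handle other I/O failures here.
    Finally
        ' This block runs whether or not an exception was thrown.
    End Try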
Both approaches carry some performance overhead. Using the On Error Resume Next statement obliges the
compiler to generate additional intermediate language (IL) for every source statement in the block following the On
Error Resume Next statement. A Catch block changes the state of the Err object on entry and on exit. Preliminary
testing indicates that the performance is roughly equivalent for both approaches when the block has fewer than 20
source statements. However, blocks of several hundred statements should perform better with Try and Catch,
because On Error Resume Next generates more IL for otherwise identical source code.
If you do not have a compelling reason to use On Error statements, you should use structured exception handling.
For more information, see Structured Exception Handling.
Throwing Exceptions
Although structured exception handling is useful, you should use it exclusively for exceptions. An exception is not
necessarily an error, but it should be something that happens infrequently and is not expected in normal operation.
Throwing exceptions takes more processing time than testing and branching, for example using a Select construction
or a While loop. Exceptions also make your code harder to read when used for normal flow control. You should not
use them as a way of branching or returning values.
Catching Exceptions
Try ... Catch ... Finally constructions incur very little performance overhead unless an exception is thrown. In other
words, creating an exception handler does not degrade performance, and you should not hesitate to use structured
exception handling when you expect the exception to happen rarely.
Strings
Instances of the String class are immutable. Consequently, every time you change a String variable, you leave the
existing String object allocated and create a new one. This can cause a high memory and performance overhead if
you manipulate the same String variable many times. The most common manipulation is concatenation, but String
methods such as Insert and PadRight also generate new instances.
StringBuilder
The StringBuilder class in the System.Text namespace supports a mutable string, which retains the same instance
after modification. To use it, you declare a string variable with the StringBuilder data type instead of String. You
can then call its Append, Insert, Remove, and Replace methods to manipulate the string.
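The following sketch appends to a StringBuilder inside a loop (Imports System.Text is assumed):

    Dim sb As New StringBuilder()
    Dim i As Integer
    For i = 1 To 1000
        sb.Append("Line ")
        sb.Append(i)
        sb.Append(Environment.NewLine)
    Next
    Dim result As String = sb.ToString()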
If you do a large number of concatenations or other alterations, the StringBuilder class can perform up to three
times as fast as the String data type. If desired, you can use the ToString method to copy the final string data to a
String object when your manipulations are finished.
However, if you do not expect to manipulate the same instance very often, String is a better choice. This is because
StringBuilder has one-time overhead that String does not. At creation time, the StringBuilder constructor takes
more time than the String constructor. At conclusion, you must call ToString in most cases. For more information,
see StringBuilder Class.
Concatenation
Sometimes you can combine all your string modifications into a single statement, for example:
MyString = PrefixString & ": " & MyString & " -- " & SuffixString
A statement like the one in the preceding example creates a new string only once, and there is no late binding
because it uses the & operator.
When you concatenate string constants in a statement, Visual Basic combines them at compile time. This generates
the final, resulting string in the intermediate language (IL), which improves performance at run time. Note that the
result of the ChrW function can be used as a constant if its argument is a constant.
String Functions
The Format function does a large amount of checking, type conversion, and other processing, including formatting
according to the current culture. If you do not need any of this special functionality, use the appropriate ToString
method instead. The ToString methods are faster than the CStr conversion keyword because CStr does additional
parsing before calling ToString.
The Asc and Chr functions work with single-byte character set (SBCS) and double-byte character set (DBCS) code
points. They must consult the code page for the current thread and then translate characters into and out of Unicode.
The AscW and ChrW functions are more efficient, because they work exclusively within Unicode and are independent
of the culture and code page settings for the current thread.
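For example:

    Dim total As Integer = 2048
    Dim text As String = total.ToString()   ' Faster than Format or CStr when no special formatting is needed.
    Dim code As Integer = AscW("A"c)        ' 65; works directly with Unicode code points.
    Dim letter As Char = ChrW(65)           ' "A"c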
Procedures
There is a trade-off between calling a procedure from within a loop and placing the body of the procedure inside the
loop. If you include the procedure code inside the loop, you avoid the overhead of the call mechanism. However, other
places in your application cannot access that code. Also, if you duplicate the code elsewhere, you make maintenance
more difficult, and you run the risk of update synchronization errors.
Defining the procedure outside the loop makes the loop code easier to read, and it also makes the procedure available
from other places in your application. The call overhead is not important if the procedure is large. However, the
overhead can become very significant if the procedure does only one small task, for example accessing a member of a
class object. In such a case, you achieve better performance by simply accessing the member directly inside the loop.
Procedure Size
Procedures are subject to just-in-time (JIT) compilation. A procedure is not compiled until the first time it is called.
The JIT compiler attempts to perform a number of optimizations on a procedure while compiling it, such as generating
inline code for small procedure calls. Very large procedures are not able to benefit from such optimizations. As a
guideline, a procedure containing more than about 1000 lines of code is less likely to profit from JIT optimization.
In previous versions of Visual Basic, it could be faster to call a procedure from its own module than from another
module, and faster to call a procedure from its own project than from another project. In Visual Basic .NET, neither of
these makes a difference, so the location of a procedure in relation to the calling code is not a performance criterion.
Use the Return statement whenever your logic permits it. For more information, see Return Statement. The compiler
can optimize the code better than if you use Exit Function, Exit Property, or Exit Sub, or allow the End Function,
End Get, End Set, or End Sub statement to generate a return.
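For example, the following sketch of a hypothetical function uses Return instead of assigning to the function name:

    Function Square(ByVal value As Integer) As Integer
        Return value * value
    End Function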
Virtual Calls
A virtual call is a call to an overridable procedure. Note that this depends only on whether the procedure is declared
by using the Overridable keyword, not on whether any overrides have been defined. When you make a virtual call,
the common language runtime must inspect the run-time type of the object to determine which override to invoke. By
contrast, a nonvirtual call can obtain all the required information from the compile-time type.
From a performance standpoint, virtual calls take approximately twice as much time as nonvirtual calls. This
difference is especially pronounced with a value type, for example when you call a procedure on a structure. (A
structure cannot declare Overridable members, but it inherits Equals, GetHashCode, and ToString from Object,
and it can implement Overridable members of interfaces.) You should define Overridable procedures only when
there is a clear architectural advantage, and you should limit your calls as much as possible to NotOverridable
procedures.
Procedure Arguments
When you pass an argument to a procedure by using the ByRef keyword, Visual Basic copies only a pointer to the
underlying variable, whether that variable is a value type or a reference type. When you pass an argument ByVal, the
contents of the underlying variable are copied. For a reference type, these contents consist only of a pointer to the
object itself. For a value type, however, they consist of all the variable's data.
The performance difference between ByRef and ByVal is usually insignificant, especially for reference types. For
value types, the difference depends on the data width of the type. Most value types have a data width that is about
the size of a pointer, and in these cases the performance is equivalent. However, a large value type, such as a lengthy
structure, can be more efficient to pass ByRef to avoid copying all the data. On the other hand, when you pass a
value type of optimal size (Integer or Double), ByVal can be preferable because the compiler can often optimize the
calling code, for example by holding the argument in a register.
Because the performance difference between ByVal and ByRef is usually not important, you can consider other
criteria when choosing between the two passing mechanisms. The advantage of passing an argument ByRef is that
the procedure can return a value to the calling code by modifying the contents of the variable you pass to the
argument. The advantage of passing an argument ByVal is that it protects a variable from being changed by the
procedure.
In the absence of a compelling reason to pass an argument ByRef, you should pass it ByVal. For more information,
see Argument Passing ByVal and ByRef.
Loops
Minimize the number of loops inside a Try block, and minimize the number of Try blocks inside a loop. A long loop
could amplify the overhead of structured exception handling.
There has been speculation regarding whether For, Do, or While loops are more efficient. Preliminary testing does
not reveal any significant or consistent difference between them. Therefore, performance is not a consideration when
choosing among these types of loops.
Collections
When you traverse a collection, you can use either a For loop or a For Each loop. If you expect to add, delete, or
rearrange elements of the collection during the traversal, a For loop can produce more reliable results. It also allows
you to determine the order of traversal. Furthermore, a collection derived from the CollectionBase class in the
System.Collections namespace throws an exception if you attempt to change its members during a For Each loop.
A For Each loop gives control of the collection to the enumerator object returned by the
IEnumerable.GetEnumerator method. This means that you cannot necessarily predict the order of traversal. For
Each is useful when you are not able to access a collection's members using the Item property.
The performance difference between For and For Each loops does not appear to be significant.
Adding Members
When you call a collection's Add method, avoid using the Before and After arguments. When you specify a position in
the collection with Before or After, you oblige the collection to find another member before it can add your new
member.
If you are using a collection that has an AddRange method, use it in preference over Add. AddRange adds an entire
list or collection in one call. Several collection classes expose the AddRange method, for example the ArrayList class
in the System.Collections namespace and the ComboBox.ObjectCollection class in the System.Windows.Forms
namespace.
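For example, the following sketch adds three elements to an ArrayList in a single call (Imports System.Collections is assumed):

    Dim items As New ArrayList()
    Dim newValues() As String = New String() {"Alpha", "Beta", "Gamma"}
    items.AddRange(newValues)   ' One call instead of three separate Add calls.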
Threading
If your application spends a high percentage of its time waiting for some of its operations to complete, consider using
asynchronous processing, making use of methods of the Thread class in the System.Threading namespace. This
can be useful when waiting for user responses, as well as when reading and writing to persistent media. Asynchronous
processing and threading entail additional coding, but they can make a significant performance difference.
Be aware, however, that threading does carry overhead and must be used carefully. A thread with a short lifetime is
inherently inefficient, and context switching takes a significant amount of execution time. You should use the
minimum number of long-term threads, and switch between them as rarely as you can.
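The following sketch starts a background thread (LoadCache is a hypothetical Sub that takes no arguments, and Imports System.Threading is assumed):

    Dim worker As New Thread(AddressOf LoadCache)
    worker.Start()
    ' The main thread continues running while LoadCache executes in the background.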
Multiple Buffering
If your application reads and writes extensively and you use asynchronous I/O, multiple buffering might be profitable.
In multiple buffering, you allocate two or more buffers for a file you are reading or writing. While waiting for I/O to
complete in one buffer, you can process the data in another. Multiple buffering is also called swing buffering. It is
commonly called double buffering when you use two buffers.
Interprocess Calls
You should minimize interprocess calls, remote calls, and calls across application domains because of the overhead for
marshaling. This is particularly true for calls across a COM interop boundary, that is, between managed code and
unmanaged code (COM). When you need to make such calls, try to combine them into a few "chunky" calls instead of
many "chatty" calls. A "chunky" call performs several tasks, such as initializing all the fields on an object. A "chatty"
call does only one short task before returning. For more information, see Programming with Application Domains and
Assemblies.
Blittable types, which have the same representation in both managed and unmanaged memory, can be copied across
the managed/unmanaged boundary without conversion. When your application makes COM interop calls, try to use
only blittable data types for the arguments. The blittable types in Visual Basic .NET are Byte, Short, Integer, Long,
Single, and Double. The common language runtime types System.SByte, System.UInt16, System.UInt32, and
System.UInt64 are also blittable. For more information, see Blittable and Non-Blittable Types.
Some composite data types can also be blittable. A structure with all blittable members is itself blittable. A class is not
automatically blittable even if all its members are, but you can still improve marshaling performance by using only
blittable members. You can also set the ExplicitLayout member of the TypeAttributes enumeration in the
System.Reflection namespace. This forces class members to be marshaled at the specified offsets without any
realignments by the common language runtime.
Marshaling
Marshaling represents a significant part of interprocess calls, and lessening it can improve performance. Blittable data
types and explicit member layout are among the most effective ways to minimize marshaling overhead. Using
structures instead of classes whenever possible usually speeds up performance.
Managed code can also use the platform invoke functionality to call unmanaged functions implemented in dynamic-
link libraries (DLLs), such as those in the Win32 API. To use platform invoke, you declare each external reference with
a Declare statement, using the Function and Lib keywords. Calling through Declare statements can be more
efficient than calling COM objects. For more information, see Consuming Unmanaged DLL Functions.
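For example, the following sketch declares and calls the Win32 GetTickCount function:

    Declare Function GetTickCount Lib "kernel32" () As Integer   ' Module or class level.

    Sub MeasureStart()
        Dim startTime As Integer = GetTickCount()   ' Milliseconds since the system started.
    End Sub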
Other Optimizations
Your source code is compiled to Microsoft intermediate language (MSIL) by the Visual Basic compiler. The MSIL
resides in your application's .exe file, which is read by the just-in-time (JIT) compiler of the common language
runtime. The JIT compiler normally compiles each procedure's MSIL to the platform's native code the first time that
procedure is called.
It can be useful to compile frequently used procedures even before they are called. The integrated development
environment (IDE) does this precompilation process for the standard libraries of Visual Basic, and it puts the native
code versions in a special section of the global assembly cache (GAC). This saves time by making JIT compilation
unnecessary for Visual Basic runtime functions.
In some cases, certain procedures in your application can be good candidates for precompilation. For example,
Windows Forms applications typically use many shared libraries and call many procedures at startup time. You might
be able to improve performance if you precompile such procedures.
You can precompile parts of your application by using ngen.exe during installation. Note that ngen.exe does not call the
JIT compiler, but instead performs its own compilation. For more information, see Native Image Generator
(Ngen.exe). You should review the considerations described in that topic before you precompile any of your code.
DLLs
Loading a dynamic-link library (DLL) takes a considerable amount of execution time. Bringing in a DLL only to call one
or two procedures is highly inefficient. You should try to generate the smallest possible number of DLLs, even if this
makes them relatively large. This means your application should use as few projects as possible and large solutions.
You can improve performance by compiling to a retail build instead of to a debug build. This enables compiler
optimizations, which make the resulting intermediate language (IL) smaller and faster. However, these optimizations
reorder code, making debugging more difficult. You might want to compile to a debug build while your application is
still under development.
In a Windows Forms application, sometimes you want to display important forms and controls as quickly as possible.
To optimize this, consider the following:
• Avoid unnecessary repainting of controls. One helpful approach is to hide controls while you are setting their
properties.
• When you need to repaint a control or any display object, try to repaint only the newly exposed areas of an
object. This reduces the time the user waits to see a completed display.
• There is no equivalent in Visual Basic .NET to the Image control of previous versions. You must use
PictureBox controls to display most graphics. The AutoRedraw property is also no longer supported. For
more information, see Introduction to the Windows Forms PictureBox Control.
When you do not want the user to believe that your application has stopped running, you can try to optimize the
perceived display speed. The following suggestions might help:
• Use progress indicators, such as the ProgressBar control, available in the ProgressBar class in the
System.Windows.Forms namespace. This assures the user that your application is still running. For more
information, see Introduction to the Windows Forms ProgressBar Control.
• During short operations that require one second or less, you can turn the mouse pointer into an hourglass by
setting the Cursor property of the form or control, for example to Cursors.WaitCursor.
• Preload critical data before your application needs it. This includes forms and controls, along with other data
items. Although it still takes the same amount of time to load these items, you reduce the time that users wait
for them to appear when they need to see them.
• When you have preloaded forms or controls, keep them hidden. This also minimizes the amount of painting
necessary.
• While your application is waiting for user input, use threads and timers to do small tasks in the background.
This can help prepare data for display when the user requests it.
• It is often useful to maximize the speed of the early displays of your application, that is, those it displays
when it first loads. The following points are worth considering:
• Keep your early forms and controls as simple as possible to reduce loading and initialization time.
• Call Me.Show as one of the first lines of code in each form load event.
• Avoid loading modules that are not needed immediately. Be careful to avoid calling procedures that
force such premature loads.
• If your display includes animation or changes a display element often, use double or multiple buffering to
prepare the next image while the current one is being painted. The ControlStyles enumeration in the
System.Windows.Forms namespace applies to many controls, and the DoubleBuffer member can help
prevent flickering.
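For example, the following sketch, placed in the constructor of a form or custom control, enables double buffering through the protected SetStyle method:

    Me.SetStyle(ControlStyles.UserPaint Or ControlStyles.AllPaintingInWmPaint Or ControlStyles.DoubleBuffer, True)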
It is important to keep the amount of your executable code to a minimum. You might also need to reduce the memory
requirements of your application's data. Such reductions often improve performance, because smaller executables
usually run faster, and eliminating memory swaps increases execution speed. In this respect, you might be able to
profit from the following recommendations:
• Minimize the number of forms that are loaded simultaneously. Delay the loading of a form until it is needed,
unless you wish to optimize the perceived display speed. When you are finished with a form, unload it and set
the variable to Nothing.
• Use as few controls as possible on a form. Use the smallest and simplest controls that do what you need. For
example, try to use labels instead of text boxes.
• Keep related procedures in the same module. This minimizes the loading of modules.
• Avoid using bigger data types than you need, especially in arrays.
• When you are finished with a variable of a large data type, set it to Nothing. This applies especially to
strings, arrays, and other potentially large objects.
• If you are finished with some elements of an array but need to keep others, use ReDim to release the
unneeded memory to garbage collection (GC).
Conclusions
When you are designing and writing a Visual Basic .NET application to optimize performance, the following are
important points to keep in mind:
• Concentrate your optimization efforts on code that runs within loops and frequently called procedures.
• Find the slowest places in your application and optimize them to achieve performance that is acceptable to the
user.
• Use strongly typed variables, and arrange for early binding on object variables whenever possible.
• Set Option Strict On and Option Explicit On.
• Minimize memory usage.
• Compile to a retail build when you do not need a debug build.
• Plan your application with large solutions and as few projects as possible.
• Measure your performance, rather than simply assuming that one technique is more efficient than another.
Disciplined coding, with proper design of the overall logic, is the first step in optimizing performance. Tuning is of little
help if your application is not well designed from the top down.