Detecting and Solving Memory Problems in .NET
Alexey Totin
This book is for sale at http://leanpub.com/detectingandsolvingmemoryproblemsinnet
This version was published on 2016-04-04
Contents

JetBrains Technical Series
About This Book
How the Book Is Organized
Acknowledgements

Memory Leaks
    How .NET Stores Objects in Memory
    Who Retains the Object? A Common Approach of Detecting Memory Leaks
    What If a Leak Is Not Obvious? Automatic Inspections
    How to Fight
    What If There Is Nothing Suspicious in the Object's Retention Path? GC Roots
    Summary

High Memory Traffic

Ineffective Memory Usage

dotMemory Unit
    How It Works
    When to Use dotMemory Unit
    Example 1. Checking for Specific Objects
    Example 2. Selecting Objects by a Number of Conditions
    Example 3. Checking Memory Traffic
    Example 4. Complex Scenarios for Checking Memory Traffic
    Example 5. Comparing Snapshots

Memory Profiling in Continuous Integration

Conclusion
    dotMemory and dotTrace
Acknowledgements
The author would like to give special thanks to Serjic Shkredov, Maarten Balliauw, Ed Pavlov, Fedor Reznik,
Anastasia Goncharova, and Mikhail Kropotov for their help with the content, and Hadi Hariri for
the idea of the book.
https://www.jetbrains.com
http://blog.jetbrains.com/dotnet/
With the exception of the last two chapters, which come as bonus content describing ways to automate the detection of memory issues in unit tests, the content of this book is based on posts from the JetBrains .NET tools blog.
Alexey Totin
Memory Leaks
A classic memory leak is a situation in which an object in memory cannot be accessed by the running
code. This situation is impossible in a .NET application, as the runtime tracks all unused objects
(objects that are not referenced by other objects) and removes them from memory when they are no
longer needed. This mechanism is called garbage collection (GC). Nevertheless, it cannot prevent
the case where your application constantly creates objects that are referenced by other objects through
references you don't know about (from the garbage collector's point of view, such objects are still
needed). So, sooner or later your application may run into an OutOfMemoryException.
The more memory the garbage collector has to collect, the slower our application runs. That's how high memory traffic
impacts application performance.
The classic high memory traffic examples that everyone likes are the ones based on the immutability
of the string type. Immutability means that each time you explicitly change string content, a new string
is created. For example, if you reverse a string using the + operator in a loop, it will create as many
strings as the total number of characters in the string. Reversing even a few-hundred-character string this way
will create almost 300 objects and take up more than 30 kilobytes of memory.
For medium- or large-sized applications, it's millions of objects and hundreds of megabytes. And
this is just a static memory dump. In dynamics, things get much, much worse: the memory traffic of a
typical application is tens of thousands of objects created and removed from memory every second.
Even if we know that our app has, say, a leak, how on earth are we supposed to find it in such a
mess? The answer is memory profiling. Memory profiling is a process of dynamic program analysis
that allows you not only to collect but also analyze instant data on objects in memory and memory
traffic data. Tools used to perform memory profiling are called memory profilers. With their help,
you can get answers to questions such as:

- Why is this object still in memory (what is causing the memory leak)?
- What takes so much memory (which exact objects)?
- How does garbage collection affect the performance of my application?
- What method is the origin of high memory traffic?
- Are any memory allocation/distribution patterns being violated?
We'll return to this example later in the book, so it's OK if this short explanation is not 100% clear.
We don't want to turn this book into a step-by-step dotMemory manual. Therefore, it does
not contain instructions on basic dotMemory usage: how to start profiling, get memory
snapshots, navigate through snapshots, and so on. If desired, you can get these details from
the official dotMemory documentation.
Throughout the book, we also refer to the Heap Allocations Viewer plugin for JetBrains ReSharper. The plugin highlights all places
in your code where memory is allocated. While not a must, it makes coding much more
convenient and, in some sense, forces you to avoid excessive allocations.
http://jetbrains.com/dotmemory/
https://www.jetbrains.com/dotmemory/help
https://resharper-plugins.jetbrains.com/packages/ReSharper.HeapView
Yes, you will need JetBrains ReSharper installed in your Visual Studio. Seeing how dotMemory is now available only as part of ReSharper Ultimate, chances are you have it already.
Memory Leaks
Let's start with definitions. According to Wikipedia, a memory leak is a result of incorrect memory
management, when an object is stored in memory but cannot be accessed by the running code. In
addition, memory leaks add up over time, and if they are not cleaned up, the system eventually runs
out of memory.
Actually, if we strictly follow the definition above, classic memory leaks in .NET applications are
impossible. The garbage collector fully controls memory release and removes all objects that cannot
be accessed by the code. Moreover, after an application is closed, the garbage collector entirely frees
the memory occupied by the application. Nevertheless, the second point (memory exhaustion because of
a leak) is quite real. It won't crash the system, but sooner or later the application will raise an
OutOfMemoryException.

The thing is, the garbage collector collects only unreferenced objects. If there's a reference to an object
that you don't know about, the garbage collector will not collect it.
To better understand what all this reference stuff means, let's make a little digression and talk
about how .NET stores objects in memory.
How .NET Stores Objects in Memory
Memory Allocation
- When a new process is started, the runtime reserves a region of address space for the process called the managed heap.
- Objects are allocated in the heap contiguously, one after another.
- Memory allocation is a very fast process, as it is just the adding of a value to a pointer.

In addition to the managed heap, an app always consumes some amount of unmanaged
memory, which is not managed by the garbage collector. Generally, it is required by the CLR itself,
dynamic libraries employed by the app, the graphics buffer, and so on.
If you were thinking of some stack where the runtime places blocks (new objects) one after another,
put this idea out of your mind. It is correct to some extent, but not really helpful. To find a better
analogy, let's take a look at how the allocated memory is released.
Memory Release
- The process of releasing memory is called garbage collection. It is performed by a CLR component called the garbage collector.
- When the garbage collector performs a collection, it releases only objects that are no longer in use by the application. (For example, a local variable in a method can be accessed only while the method executes; afterwards, the variable is no longer needed.)
- To determine whether an object is used or not, the garbage collector examines the application's roots - strong references that are global to the application. Typically, these are static object pointers, local variables, and CPU registers.
- For each active root, the garbage collector builds a graph that contains all objects reachable from these roots.
- If an object is unreachable, the garbage collector considers it no longer in use and removes it from the heap (releases the memory occupied by the object).
- After unreachable objects are removed, the garbage collector compacts the reachable objects in memory.
Therefore, the most appropriate representation of all objects in memory is a graph of objects. Next
time you think about how objects are stored in memory, instead of a plain stack imagine some
2D plan with interconnected blocks (e.g., a street map with buildings and roads).
Who Retains the Object? A Common Approach of Detecting Memory Leaks

So, what is the best way to fight memory leaks in your applications? Based on the theory above,
to fight a memory leak you need to determine the objects that add up over time causing the leak, and
then find the objects that prevent the former ones from being collected (that have a reference to them).
Let's take a look at a more elaborate workflow.
1. Start profiling your application in dotMemory.
2. At some point while working with the application, take a memory snapshot.
3. Work with the application for some time so that the leak might reveal itself more obviously, or reproduce the actions that, in your opinion, may lead to the leak.
4. Take one more memory snapshot.
5. Using specific dotMemory views:
   - Compare snapshots to find all objects that were not collected within your profiling time interval. Using dotMemory grouping views, determine the objects that should not be in memory at this execution point.
   - Using views that show object retention paths, determine what prevents these objects from being collected.
How to Fight

Binding Leak

There are a number of memory leak patterns related to WPF data binding. If the binding rules behind these patterns are not followed correctly, a memory leak can occur. Consider the following example:
class Person
{
    public Person(string name)
    {
        Name = name;
    }

    public string Name { get; set; }
}
When we bind to an instance's Name property, the binding target starts listening for property
change notifications. If the property is not a DependencyProperty or an object that implements the
INotifyPropertyChanged interface, WPF will resort to subscribing to the ValueChanged event of
the System.ComponentModel.PropertyDescriptor class to get notifications when the source object's
property value changes.

Why is this a problem? Well, since the runtime creates a reference to this PropertyDescriptor,
which in turn references our source object, and the runtime will never know when to deallocate
that initial reference (unless explicitly told to), both the PropertyDescriptor and the source object
will remain in memory.
Detecting

dotMemory has an automatic inspection for this issue. Suppose we have some control that binds to
our class and is then disposed of. After we profile our application and open the snapshot, the snapshot
overview page will immediately warn us about WPF binding leaks.

This should be all we need to know, but let's see if we can find proof of the theory above (about the
PropertyDescriptor's ValueChanged event handler keeping our objects in memory). After double-clicking the list entry, we can see the object set open. When we navigate to the Group by Similar
Retention view, we see the proof: it is ValueChangedEventManager that is retaining our object.

This view groups objects by the similarity of their retention paths. For each object set, the view shows the two shortest paths to roots. For more
details, see https://www.jetbrains.com/dotmemory/help/Similar_Retention.html
Solving

The simplest fix for a WPF binding leak would be making our Name property a DependencyProperty,
or implementing the INotifyPropertyChanged interface correctly in our Person class and its Name
property. For example:

// requires: using System.ComponentModel;
class Person : INotifyPropertyChanged
{
    private string _name;

    public Person(string name)
    {
        Name = name;
    }

    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            // notify listeners so WPF does not fall back to PropertyDescriptor
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}
If the object is of a type we cannot edit (say, it comes from a library we depend on), we can also explicitly clear the binding by calling:

BindingOperations.ClearBinding(textBox, TextBlock.TextProperty);

Note that if a binding has the OneTime mode, this leak won't exist, as the binding is done only once
and the binding target won't listen for changes in the source object.
Collection Binding Leak

A related leak occurs when a control binds to a collection that does not implement the
INotifyCollectionChanged interface. If we open this set of objects and look at the Group by Dominators view, we will see that
our collection is held in memory by the WPF DataBindEngine, an object that will be around for
the lifetime of our application. So, as long as our object's dominator stays in memory, the collection
stays as well.

This view allows you to answer the question, "Who exclusively retains the object?". We will discuss the concept of dominators in a later chapter,
titled Ineffective Memory Usage.
Solving

An easy way to fix the issue is to implement the INotifyCollectionChanged interface in our custom
collection type. If the collection does not need any specific implementation, we could also inherit
from the ObservableCollection type, as it handles the implementation for us.

public class MyBigCollection : ObservableCollection<int>
{
}
x:Name Leak
The WPF technology was a huge step forward for the .NET Framework that made our work with user
interfaces much easier than before. Unfortunately, like any other technology, it has some pitfalls. For
example, such a common and easy operation as removing a UI control can cause a memory leak.
The thing is that WPF creates a strong global reference to any UI element that is declared in XAML
with the x:Name directive.

<acmecompany:PersonEditorControl Grid.Row="0" x:Name="personEditor"/>

Removing such an element from code will not remove the control from memory, not even if we remove
it from the parent control's Children collection. This can be a real problem for an application that
dynamically creates and removes numerous UI elements (e.g., points on some real-time diagram).
Detecting

To detect the issue, we should take a snapshot in dotMemory right after the suspicious control is
removed. The leaked control will be shown on the snapshot overview page in the corresponding inspection
section.

If we need more details, we can drill down and use the Key Retention Paths view to see how WPF
retains the object in memory.

Solving

To ensure the control gets removed from memory, we have to call the UnregisterName method
of the parent control. The updated code that removes the control could look like this:

private void DeleteData_OnClick(object sender, RoutedEventArgs e)
{
    if (personEditor != null)
    {
        this.UnregisterName("personEditor");
        _grid.Children.Remove(personEditor);
        personEditor = null;
    }
}
Event Handler Leak

Another classic leak pattern: an object is subscribed to an event of another object but is never
unsubscribed from it. Consider an AdWindow class that changes its content every few seconds with
the help of a DispatcherTimer:
public AdWindow()
{
    adTimer = new DispatcherTimer();
    adTimer.Interval = TimeSpan.FromSeconds(3);
    adTimer.Tick += ChangeAds;
    adTimer.Start();
}
Now what happens if we close this AdWindow? That depends. If we do nothing, the DispatcherTimer
will keep on firing Tick events, and since we're still subscribed to it, the ChangeAds event handler
will be called. As this event handler has to remain in memory for it to be called, our AdWindow will
stay in memory too, even if we expect it to be released.
Detecting

There are a number of ways to detect this type of leak. The easiest way is to capture a snapshot after the
object was expected to be released. On the snapshot overview page, we will immediately see if the
object stays in memory because of an event handler leak.

See our AdWindow there? Now we should find who holds it in memory. If we double-click the entry,
we will see the details of the instance. The Key Retention Paths view will show us how the object
is retained: by the DispatcherTimer instance.
If we are familiar with the source code, we know where to look. But what if we're seeing the source
for the first time ever? How do we know where the subscription to this event handler takes place?

All we need to do is double-click the EventHandler entry (here, in the Key Retention Paths
diagram). This will open the specific event handler instance. The Creation Stack Trace view built
for this instance will show us that we're subscribing to the event handler in the AdWindow constructor.
The Shortest Paths to Roots | Tree view will tell us which event exactly we're subscribed to.
Solving

From the investigation above, we know which event and which event handler we've forgotten to
unsubscribe from (DispatcherTimer's Tick event), and where we subscribe to it in the first place
(the AdWindow constructor).

Unsubscribing from the event in the constructor is pointless in this case, as it would render our
functionality of rotating content every few seconds useless. A more logical place to unsubscribe is
when closing the AdWindow:
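The original listing is missing here; a minimal sketch, assuming AdWindow derives from Window and uses the adTimer and ChangeAds members shown above, could be:

protected override void OnClosed(EventArgs e)
{
    // stop the timer and detach the handler so it no longer references this window
    adTimer.Stop();
    adTimer.Tick -= ChangeAds;
    base.OnClosed(e);
}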
The DispatcherTimer example here is a special case, as the above still does not ensure that our
AdWindow is released from memory. If we profile the application, we will see
that the AdWindow instance is still there. The Key Retention Paths view will help us discover
that we also have to set the private variable adTimer to null in order to remove another
reference from the .NET runtime's list of active DispatcherTimers. Or, how one memory leak
can hide another.
What If There Is Nothing Suspicious in the Object's Retention Path? GC Roots

If nothing in an object's retention path looks suspicious, take a closer look at the roots that hold it.
Local variables on the stack are among the most common roots. Note that in release builds, a root's
lifetime may be shorter: the JIT can discard a variable right after it is no longer needed.
Static Reference

When the CLR meets a static object (a class member, variable, or event), it creates a global instance of this
object. The object can be accessed during the entire application lifetime, so static objects are almost never
collected. Thus, references to static objects are one of the main root types.

class StaticClass
{
    public static Collection<string> StCollection;
}

After the collection is initialized, the CLR will create a static instance of it. The reference to
the instance will exist for the lifetime of the application domain.

When a static object is referenced through a field, dotMemory shows you the field's name. Of
course, unnamed static references can also occur. One obvious example of such a root is a
reference to a string declared in a method.
Note that in the example above, the CLR also creates the Regular local variable reference. Nevertheless,
to simplify further analysis, dotMemory doesn't show you this root.
Pinning Handle

One additional problem for the garbage collector is the interaction between managed and unmanaged
code. For example, suppose you need to pass an object from the managed heap to some external API library.
As the small object heap is compacted during collection, the object can be moved; this is an issue for
the unmanaged code if it relies on the exact object location. One solution is to fix the object in the
heap. In this case, the garbage collector gets a pinning handle to the object, which implies that the object
cannot be moved.

Considering the above, if you see the Pinning handle root type, the object is probably retained by some
unmanaged code. For example, the App object always has a pinning reference.

You can also pin objects intentionally using the fixed block.
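For illustration, here is a minimal sketch of intentional pinning (SomeNativeApi is a hypothetical unmanaged function, and compiling this requires the /unsafe option):

static unsafe void PassToNative()
{
    byte[] buffer = new byte[256];
    fixed (byte* p = buffer)
    {
        // While inside the fixed block, the GC cannot move 'buffer',
        // so 'p' remains a valid pointer for the unmanaged call.
        SomeNativeApi(p, buffer.Length); // hypothetical P/Invoke function
    }
}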
RefCounted Handle

This root prevents garbage collection while the object's reference count remains above zero.

If an object is passed to a COM library using COM Interop, the CLR creates a RefCounted handle to this
object. This root is needed because COM is unable to perform garbage collection; instead, it uses reference
counting. When the object is no longer needed, COM sets the count to 0, which means the RefCounted
handle is no longer a root and the object can be collected.

Thus, if you see a RefCounted handle, the object was probably passed as an argument to
unmanaged code.
Weak Handle

As opposed to other roots, a Weak handle does not prevent the referenced object from being garbage-collected. Thus, the object can be collected at any time, but it can still be accessed by the application
while it is alive. Access to such objects is performed via an intermediate object of the WeakReference class. Such an
approach may be efficient when working with some temporary data structures like caches.
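A minimal sketch of such access via WeakReference (the byte array here is just a stand-in for any cached data):

// create a weakly referenced cache entry
var cacheEntry = new WeakReference(new byte[1024]);

// ... later, the target may or may not have been collected:
var data = cacheEntry.Target as byte[];
if (data != null)
{
    // the object is still alive, and 'data' is now a strong reference to it
}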
As weak references (typically) do not survive a full garbage collection, you will see them
mostly in combination with other handles, for example: Weak, RefCounted handle.
Regular Handle

When the handle type is undefined, dotMemory marks it as a Regular handle. Typically, these are
references to system objects required during the entire lifetime of the application. The OutOfMemoryException object is a prime example: to prevent its collection, the environment references the
object through a regular handle.
Summary
- Though classic memory leaks are impossible in .NET, uncontrolled memory consumption that ends up with an OutOfMemoryException is a definite possibility.
High Memory Traffic

Garbage collection is not free: when the managed heap contains many objects, a collection may
really take a while. Of course, .NET CLR developers have tried to lower the GC overhead with a number of
tricks: they organized the managed heap into generations, created another heap for large objects
(the Large Object Heap), and moved part of the collection work to a separate thread (background garbage
collection). Let's talk about these optimizations in more detail so you can better understand what
is going on when your application releases memory.
Generations

The first performance trick .NET runtime developers implemented was dividing the managed heap
into segments called generations: 0, 1, and 2. Why is this a trick? Because garbage collection is
also divided into separate steps, and performing each of them independently reduces the overall
performance impact. Here's how it works:

- Objects that are smaller than 85 KB are allocated on the so-called Small Object Heap (SOH). When objects are just created, they are placed in the Generation 0 (Gen 0) segment of the SOH.
- When Gen 0 is full (the size of the heap and generations is defined by the GC), the GC performs a garbage collection. During the collection, the GC removes all unreachable objects from the heap. All reachable objects are promoted to Generation 1 (Gen 1). The garbage collection of Gen 0 is a rather cheap operation from the performance perspective.
- When Gen 1 is full, a Gen 1 garbage collection is performed. All objects that survive the collection are promoted to Gen 2; a Gen 0 collection also takes place here.
- When Gen 2 is full, the GC performs a full garbage collection: first Gen 2 is collected, then Gen 1 and Gen 0. If at this point there is still not enough memory for new allocations, the GC raises an OutOfMemoryException. During a full garbage collection, the GC has to pass through all objects in the heap, so this process may have a great impact on system resources.

This is by no means a full list, but it covers the main aspects of GC you should definitely know about.
This means the worst-case scenario is high Gen 2 traffic: your application allocates a lot of
objects that become unneeded right after being promoted to Gen 2. This reduces the free space in
Gen 2 and means that heavy Gen 2 collections will occur more frequently.
Starting from .NET Framework 4.5.1, you can force the GC to compact the LOH during full garbage collection by using the
GCSettings.LargeObjectHeapCompactionMode property.
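A minimal usage sketch of this documented API:

// request LOH compaction during the next full blocking garbage collection;
// the property resets to Default once that collection has run
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();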
dotTrace can also show you the number of blocking garbage collections in a specific time interval
(high blocking GC values clearly identify high memory traffic as the main cause of performance
issues).
Switching the analysis subject from time (in ms) to memory allocation (in MB) will allow you to see
what threads/methods allocate the most memory.
Thus, using dotTrace and its timeline profiling mode, you are able to identify blocking garbage
collection as a cause of performance issues, as well as the threads and even methods that allocate the
most memory. But this doesn't give an exact answer about what is wrong with your code. This is where
memory profiling comes to the rescue.
1. Start profiling your application with memory traffic collection enabled.
2. Collect a memory snapshot after the method or functionality you're interested in finishes working.
3. Open the snapshot and select the Memory Traffic view.
Boxing

Boxing is the conversion of an instance of a value type to a reference type. Why is this a problem? Value types are stored on the stack, while reference types (objects) are stored
in the managed heap. Therefore, to assign an integer value to an object, the CLR has to take the value
from the stack and copy it to the heap. Of course, this movement impacts app performance.
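For instance, this innocuous-looking assignment triggers boxing:

int i = 123;
object o = i; // the int is copied from the stack into a new object on the heap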
Detecting

With dotMemory, finding boxing is an elementary task:

1. Open a memory snapshot and select the Memory Traffic view.
2. Find objects of a value type. All these objects are the result of boxing.
3. Identify the methods that allocate these objects and generate a major portion of the traffic.
The Heap Allocations Viewer plugin also highlights allocations made because of boxing.

The main concern here is that the plugin shows you only the fact of a boxing allocation. But from
the performance perspective, you're more interested in how frequently this boxing takes place. E.g.,
if the code with a boxing allocation is called just once, optimizing it won't help much. Taking this
into account, dotMemory is much more reliable in detecting whether boxing causes real problems.
Solving

First of all, before fixing a boxing issue, make sure it really is an issue, i.e., that it generates
significant traffic. If it does, your task is clear-cut: rewrite your code to eliminate boxing. When you
introduce some struct type, make sure that the methods that work with this struct don't convert it
to a reference type anywhere in the code. For example, one common mistake is passing variables of
value types to methods that work with strings (e.g., String.Format):
int i = 5;
String.Format("i = {0}", i);
A simple fix is to call the ToString() method of the appropriate value type:
int i = 5;
String.Format("i = {0}", i.ToString());
Resizing Collections

Dynamically-sized collections such as Dictionary, List, HashSet, and StringBuilder have the
following specifics: when the collection size exceeds its current bounds, .NET resizes the collection
by allocating a new internal array and copying the entire collection into it. Obviously, if this happens
frequently, your app's performance will suffer.
Detecting

The insides of dynamic collections can be seen in the managed heap as arrays of a value type (e.g.,
Int32 in the case of Dictionary) or of the String type (in the case of List<string>). The best way to find resized
collections is to use dotMemory. For example, to find out whether the Dictionary or HashSet objects in
your app are resized too often:

1. Open a memory snapshot and select the Memory Traffic view.
2. Find arrays of the System.Int32 type.
3. Check whether these arrays are created by the collections' internal resizing methods, such as Dictionary<>.Resize.
The workflow for List collections is similar. The only difference is that you should check the
System.String arrays and the List<>.SetCapacity method that creates them.
Solving

If the traffic caused by the resize methods is significant, the only solution is to reduce the number
of cases where resizing is needed. Try to predict the required size and initialize the collection with
this size or larger:
List<string> list = new List<string>(1000);
In addition, keep in mind that any allocation greater than or equal to 85,000 bytes goes onto the
Large Object Heap (LOH). Allocating memory in the LOH has some performance penalties: as the LOH is not
compacted, some additional interaction between the CLR and the free list is required at the time of
allocation. Nevertheless, in some cases allocating objects in the LOH makes sense; for example, in the
case of large collections that must endure for the entire lifetime of an application (e.g., a cache).
Enumerating Collections

When working with dynamic collections, pay attention to the way you enumerate them. A typical
major headache here is enumerating a collection using foreach when all the enumerating method knows
is that the collection implements the IEnumerable interface. Consider the following example:
class EnumerableTest
{
    private void Foo(IEnumerable<string> sList)
    {
        foreach (var s in sList)
        {
        }
    }

    public void Goo()
    {
        var list = new List<string>();
        for (int i = 0; i < 1000; i++)
            Foo(list);
    }
}
The list in the Foo method is cast to the IEnumerable interface, which implies further boxing of the
enumerator (for List<T>, the enumerator is a struct).
Detecting
As with any other boxing, the described behavior can be easily seen in dotMemory.
1. Open a memory snapshot and select the Memory Traffic view.
2. Find the System.Collections.Generic.List+Enumerator value type and check generated
traffic.
3. Find methods that originate those objects.
As you can see, a new enumerator was created each time we called the Foo method.

The same behavior applies to arrays as well. The only difference is that you should check traffic for
the SZArrayHelper+SZGenericArrayEnumerator<> class.

The Heap Allocations Viewer plugin will also warn you about hidden allocations:
Solving

Avoid casting a collection to an interface. In our example above, the best solution would be to create
a Foo method overload that accepts the List<string> collection:

private void Foo(List<string> sList)
{
    foreach (var s in sList)
    {
    }
}

If we profile the code after the fix, we'll see that the Foo method doesn't create enumerators anymore.
Strings

As we discussed in the beginning of the book, strings are immutable: every time you modify string
contents, a new string object is created. This fact is the main source of performance
issues caused by strings. The more you change string contents, the more memory is allocated. This,
in turn, triggers garbage collections that impact app performance. The straightforward remedy is to
optimize your code so as to minimize the creation of new string objects.
Detecting

Check all string instances that are created not by your code, but by the methods of the String class.
The most obvious example is the String.Concat method, which creates a new string each time you
combine strings with the + operator.

To do this in dotMemory:

1. In the Memory Traffic view, locate and select the System.String class.
2. Find all methods of the String class that create the selected strings.

Consider an example of a function that reverses strings:
internal class StringReverser
{
    public string Reverse(string line)
    {
        char[] charArray = line.ToCharArray();
        string stringResult = null;
        for (int i = charArray.Length; i > 0; i--)
            stringResult += charArray[i - 1];
        return stringResult;
    }
}
An app that uses this function to reverse a 1000-character line generates enormous memory traffic
(more than 5 MB of allocated and collected memory). A memory snapshot taken with dotMemory
reveals that most of the traffic (4 MB of allocations) comes from the String.Concat method, which,
in turn, is called by the Reverse method.
The Heap Allocations Viewer plugin will also warn you about allocations by highlighting the
corresponding line of code:
Solving

In most cases, the fix is to use the StringBuilder class or to handle the string as an array of chars using
specific array methods. Considering the reverse-string example, the code could look as follows:

public string Reverse(string line)
{
    var sb = new StringBuilder(line.Length);
    for (int i = line.Length; i > 0; i--)
        sb.Append(line[i - 1]);
    return sb.ToString();
}
dotMemory shows that after the fix, traffic dropped by over 99%:
Improving Logging

When seeking ways to optimize your project, take a look at the logging subsystem. In complex
applications, for the sake of stability and support convenience, almost all actions are logged. This
results in significant memory traffic from the logging subsystem, which is why it is important to
minimize allocations when writing messages to a log. There are multiple ways to improve logging.
Actually, the optimization approaches shown in this section are universal. The logging
subsystem was taken as an example because it works with strings most intensively.
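The listing of the logging method under discussion is missing from the extracted text; presumably it accepted a variable number of arguments via the params keyword, along these lines:

void LogMessage(string message, params object[] args) {...}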
What are the pitfalls of such an implementation? The main concern here is how you call this method.
For example, the call

LogMessage("message");

will cause an empty Object array to be allocated. In other words, this line will be equivalent to
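(given the params signature assumed above)

LogMessage("message", new object[0]);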
Detecting

The easiest way to detect the allocation of an empty Object array is to use the Heap Allocations Viewer
plugin:

Solving

The best solution is to create a number of method overloads with explicitly specified arguments. For
instance:
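The overload list itself did not survive extraction; it presumably looked something like this:

void LogMessage(string message) {...}
void LogMessage(string message, object arg0) {...}
void LogMessage(string message, object arg0, object arg1) {...}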
Hidden Boxing

The implementation above has a small drawback. What if you pass a value type to the following
method?

void LogMessage(string message, object arg0);

For example:

LogMessage("message", 123);

As the method accepts only an object argument, which is a reference type, boxing will take place.
Detecting
As with any other boxing, the main clue is a value type on the heap. So, all you need to do is look
at the memory traffic and find a value type. In our case this will look as follows:
Solving

The easiest way is to use generics: a mechanism for deferring type specification until it is declared
by client code. Thus, the revised version of the LogMessage method could look as follows:

void LogMessage<T>(string message, T arg0) {...}
Excessive Logging

If you use logging for debugging purposes, make sure log calls never reach the release build. You
can do this by using the [Conditional] attribute. In the example below, the LogMessage method will
be called only if the DEBUG compilation symbol is defined:
#define DEBUG
...

[Conditional("DEBUG")]
public void LogMessage(string message, object arg) {...}
Lambda Expressions

Lambda expressions are a very powerful .NET feature that can significantly simplify your code
in certain situations. Unfortunately, convenience has its price. If used wrongly, lambdas can
significantly impact app performance. Let's look at what exactly can go wrong.

The trick is in how lambdas work. To implement a lambda (which is a sort of local function), the
compiler has to create a delegate. Each time the lambda is called, a delegate is created as well. This
means that if the lambda stays on a hot path (is called frequently), it will generate huge memory
traffic.

Is there anything we can do? Fortunately, .NET Framework developers have already thought about
this and implemented a caching mechanism for delegates. For better understanding, consider the
example below:
class LambdaTest
{
    void Foo(Func<string, string> goo)
    {
    }

    public void Hoo()
    {
        Foo((x) => x);
    }
}
http://www.jetbrains.com/decompiler/
If we look at the decompiled code (e.g., in dotPeek), we can see that the delegate is made static and created only once: LambdaTest.CS$<>9__CachedAnonymousMethodDelegate1.
So, what pitfalls should we watch out for? At first glance, this behavior won't generate any traffic.
That's true, unless your lambda contains a closure. If you pass any context (this, an instance member,
or a local variable) to a lambda, caching won't work. That makes sense: the context may change
at any time, and that's what closures are made for - passing context.
Let's look at a more elaborate example. Suppose your app uses some Substring method to get
substrings from input strings, this code is called frequently, and the input strings are often the same. To optimize the
algorithm, you can create a cache that stores the results:

private Dictionary<string, string> myCache = new Dictionary<string, string>();

Your algorithm should check whether the substring is already in the cache:
private string GetOrCreate(string key, Func<string> evaluator)
{
    string ret;
    if (myCache.TryGetValue(key, out ret))
        return ret;
    ret = evaluator();
    myCache[key] = ret;
    return ret;
}
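The Substring method itself did not survive extraction; a minimal reconstruction that matches the discussion below (a lambda that captures the local parameter x) could look like this:

public string Substring(string x, int startIndex, int length)
{
    // the lambda captures x, startIndex, and length, so its delegate cannot be cached
    return GetOrCreate(x + ":" + startIndex + ":" + length,
        () => x.Substring(startIndex, length));
}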
As you pass the local variable x to the lambda, the compiler is unable to cache the created delegate.
Let's look at the decompiled code:
There it is: a new instance of the compiler-generated <>c__DisplayClass1 is created each time the Substring method
is called, and the parameter x that we pass to the lambda is implemented as a public field of <>c__DisplayClass1.
Detecting

As with any other example in this series, first of all make sure that a certain lambda is in fact causing
performance issues, i.e., generating huge traffic. This can be easily checked in dotMemory:

1. Open a memory snapshot and select the Memory Traffic view.
2. Find delegates that generate significant traffic. Objects of ...+<>c__DisplayClassN types are also a hint.
3. Identify the methods responsible for this traffic.
For instance, if the Substring method from the example above is run 10,000 times, the Memory
Traffic view will look as follows:
As you can see, the app has allocated and collected 10,000 delegates.

When working with lambdas, the Heap Allocations Viewer also helps a lot, as it can proactively detect
delegate allocation. In our case, the plugin's warning will look like this:

But once again, the data gathered by dotMemory is more reliable, because it shows you whether this
lambda is a real issue (i.e., whether it does in fact generate lots of traffic).
Solving

Considering how tricky lambda expressions may be, some companies even prohibit using lambdas
in their development processes. We believe that lambdas are a very powerful instrument which
definitely can and should be used, as long as particular caution is exercised.

The main strategy when using lambdas is to avoid closures. In such a case, a created delegate will
always be cached, with no impact on traffic.

Thus, for our example, one solution is to not pass the parameter x to the lambda. The fix could look
as follows:
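The fixed listing is also missing; one way to avoid the capture, assuming the substring bounds are fixed so the lambda needs no outside state, is to hand the cache key to the evaluator instead of capturing it:

private string GetOrCreate(string key, Func<string, string> evaluator)
{
    string ret;
    if (myCache.TryGetValue(key, out ret))
        return ret;
    ret = evaluator(key);
    myCache[key] = ret;
    return ret;
}

public string Substring(string x)
{
    // no captured variables: the compiler creates this delegate once and caches it
    return GetOrCreate(x, key => key.Substring(0, 5));
}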
The updated lambda doesn't capture any variables; therefore, its delegate should be cached. This
can be confirmed by dotMemory:
LINQ Queries

As we just saw in the previous section, lambda expressions always assume that a delegate is created.
What about LINQ? The concepts of LINQ queries and lambda expressions are closely connected and
have very similar implementations under the hood. This means that all the concerns we've discussed
for lambdas are also valid for LINQ queries.

If your LINQ query contains a closure, the compiler won't cache the corresponding delegate. For
example:
public List<string> GetLongNames(List<string> inList, int threshold)
{
    var result =
        from s in inList
        where s.Length > threshold
        select s;
    return result.ToList();
}
As the threshold parameter is captured by the query, its delegate will be created each time the
method is called. As with lambdas, traffic from delegates can be checked in dotMemory:
Unfortunately, there's one more pitfall to avoid when using LINQ. Any LINQ query (like any other
query) assumes iteration over some data collection, which, in turn, assumes creating an iterator. The
subsequent chain of reasoning should already be familiar: if a LINQ query stays on a hot path,
the constant allocation of iterators will generate significant traffic.

Consider this example:
class LinqTest
{
    private List<string> companies;

    private List<string> GetLongNames(List<string> inList)
    {
        var result =
            from s in inList
            where s.Length > 3
            select s;
        return result.ToList();
    }

    public void Foo()
    {
        var longNames = GetLongNames(companies);
    }
}
Each time GetLongNames is called, the LINQ query will create an iterator.
Detecting

With dotMemory, finding excessive iterator allocations is an easy task:

1. Open a memory snapshot and select the Memory Traffic view.
2. Find objects from the System.Linq namespace that contain the word "iterator". In our example we use the Where LINQ method, so we look for System.Linq.Enumerable+WhereListIterator<string> objects.
3. Determine the methods responsible for this traffic.
For instance, if we call the Foo method from our example 10,000 times, the Memory Traffic view
will look as follows:

The Heap Allocations Viewer plugin also warns us about allocations in LINQ queries, but only if they
explicitly call LINQ methods. For example:
Solving

Unfortunately, the only answer here is to not use LINQ queries on hot paths. In most cases, a LINQ
query can be replaced with a foreach loop. In our example, the fix could look like this:
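The fixed listing is missing from the extracted text; the straightforward foreach equivalent would be:

private List<string> GetLongNames(List<string> inList)
{
    var result = new List<string>();
    foreach (var s in inList)
    {
        // same filter as the LINQ query, but with no delegate or iterator allocations
        if (s.Length > 3)
            result.Add(s);
    }
    return result;
}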
Summary

- Garbage collection is a resource-consuming process. The golden rule is: the more memory is allocated, the more has to be collected, and the slower your application runs.
- To find out whether GC is the cause of performance issues, use a performance profiler.
- Use a memory profiler to find the exact objects and methods that cause high memory traffic.
- A number of bad design patterns are known to cause memory traffic. Exercise caution when working with value types (boxing), collections, strings, lambdas, and LINQ queries.
Ineffective Memory Usage

What is ineffective memory usage? When your app consumes more memory than it should, or could, we
call this usage ineffective. Sometimes you just feel that a particular algorithm consumes too much, but
nothing seems to explain why it does.
As we said earlier, the triggers of ineffective memory usage are numerous. Typically, they are all related
to bad code design, which is also why this chapter doesn't suggest any exact solutions. Nevertheless,
there are some basic considerations you should keep in mind when faced with ineffective memory
usage.

First and foremost, if you're not satisfied with memory consumption in your application, you should
perform memory profiling.
Then, after you have a memory snapshot, use your profiler to answer two main questions:

- What objects retain the most memory?
- What methods allocate the most memory?
Dominators

Object A dominates object B if every path to B from the application's roots goes through A. In other
words, object B is retained in memory exclusively by object A: if A is garbage-collected, B is garbage-collected as well. For example, an array is a dominator for its elements (in case there are no other
references to the array's elements). If there are multiple paths to an object from the app's roots, it is not
dominated by anyone.

The amount of memory exclusively retained (dominated) by an object is one of the most useful
characteristics when analyzing ineffective memory usage. Consider an example.
- There are 5 dominators in this figure: A, B, F, G, and I. C, D, and E are not dominators, as none of them dominates F. H and J do not dominate K.
- Object I retains 8 bytes of memory (J). If I is removed from memory, K will stay, as it will still be retained through the G-H path.
- Object F retains 52 bytes (G + I + H + J + K).
- Question (answer given below): how much memory does B retain?

So, when your application seems to consume too much memory, first determine the largest
dominators and analyze what objects they retain (and how).

B retains 120 bytes.
What are the possible ways of doing this? Earlier dotMemory versions offered just one way of
analyzing dominators: the Group by Dominators view, which shows the tree of dominators sorted
by retained memory size.

Starting with version 4.3, dotMemory offers a new, visual way of analyzing dominators: the
Sunburst Chart. In this view, the dominators hierarchy is shown on a sunburst chart; the more
memory a dominator retains, the larger its central angle.
A quick look at this chart shows which objects are crucial for your app and helps you evaluate the
largest structures.

If you click a particular dominator, the Domination Path on the right will show you the retention
path of this dominator. Double-click a dominator to zoom into the chart, showing the objects retained
by this dominator in more detail.

Our experience shows that the Dominators chart is also very effective when you need to quickly
evaluate how certain functionality works in your app. For example, below are two charts built for
an image editor application. The first one was plotted before anything was done in the app, and the
second one reflects memory usage after the user applied an image filter.
After some time, if you profile your app regularly, you'll be able to see not only how your
app works, but also how particular changes in the code affect memory usage.
Of course, the classic approach of digging through the allocation call tree is applicable to dotMemory as well. However, dotMemory 4.0 and later
offers a much easier way, called the Call Tree as Icicle Chart.
The idea behind the chart is simple: it's a graphical representation of the call tree. Each call is shown
as a horizontal bar whose length depends on the size of the objects allocated in the call's subtree. The
more memory allocated in the underlying subtree, the longer the bar. The bar's color value serves
as an additional indicator: the more memory allocated by the call itself, the darker the bar.

So, instead of looking at lots of numbers, start your analysis by opening the Call Tree as Icicle Chart
view. In just a glance, you can match a certain call with how much memory it allocates.

For example, the following chart shows the same data as the Call Tree table from the picture above.
Notice how there's no need to dive into the call tree: the main memory allocations can be seen instantly.
Object Lifetime

It's possible that your application stores objects longer than they are needed. Here are some basic
considerations on object lifetime in .NET.
Calling GC.SuppressFinalize(this) deletes the reference from the finalization queue, thereby eliminating the problem of an extended lifetime. All you need to do is either call the Dispose method
explicitly:

var fObj = new MyFinalizableClass();
... // do something
fObj.Dispose();
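...or, assuming MyFinalizableClass implements IDisposable (it must, since it exposes Dispose), wrap it in a using statement so that Dispose is called automatically:

using (var fObj = new MyFinalizableClass())
{
    ... // do something
}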
Cache

Try looking at your cache from the perspective of how long it stores data. For example, the simplest
Dictionary-based cache implementations store cached data forever; thus, such a cache may hold a lot of data that
will never be used again. To prevent this problem, consider implementing the cache using the Most
Recently Used (MRU) or Least Recently Used (LRU) model.

On the other hand, consider a cache implemented on weak references (each object in the cache is stored
via a WeakReference). Though it is not really useful as an ordinary cache (the data is wiped after each
full garbage collection), it may come in handy in some specific cases. In addition, you can use weak
references to enhance your MRU or LRU cache. For example, instead of removing a cache item that
is no longer needed, you can change its reference from strong to weak. Thus, there is a chance that
it will still be alive when needed (in this case, you return a strong reference to this object).
If we drill deeper into this object set and look at its dominators (the Group by Dominators view),
we can see that these object types are held in memory by several others. The first dominator here
(TextTreeRootNode) is our textbox control itself; of course it needs a few Char[] arrays to hold its
contents. The second one, however, UndoManager, is more interesting.

It seems the UndoManager is keeping a few Char[] arrays as well. This is because WPF's undo
behavior needs this information to be able to undo/redo changes made to the textbox.
Solving

First of all, this is not really an issue. It's a feature! It is important to know it's there, though, for two
reasons. First, when profiling WPF applications, we may see a number of Char[] arrays being created;
don't get distracted by the UndoManager, and try focusing on other dominators if the allocations are
too excessive. Second, when building applications where a lot of text editing is done, high memory
usage can be explained by this undo behavior.

To limit the number of entries the undo and redo stacks can hold, we can set the textbox's
UndoLimit property to a smaller number. The default value was -1 (unlimited) in earlier .NET
versions, but in recent ones it defaults to 100.
<Grid>
    <TextBox UndoLimit="10" HorizontalAlignment="Left"
             TextWrapping="Wrap" AcceptsReturn="True" />
</Grid>

We could also turn off undo entirely by setting the IsUndoEnabled property:

<Grid>
    <TextBox IsUndoEnabled="False" HorizontalAlignment="Left"
             TextWrapping="Wrap" AcceptsReturn="True" />
</Grid>
String Interning

One more automatic inspection in dotMemory that helps you fight ineffective memory usage is the
String duplicates inspection. The idea behind it is quite simple: it automatically checks memory for
string objects with the same value. After you open a memory snapshot, you will see the list of such
strings:

Why are string duplicates bad? We'll answer with another question: why create a new string if it is
already in memory?

Imagine, for example, that in the background our app parses some text files with repetitive content
(say, some XML logs).
So, dotMemory finds a lot of strings with identical content. What can we do?

The obvious answer is to rewrite our app so that it allocates strings with unique content just once.
There are at least two ways this can be done. The first one is to use the string interning
mechanism provided by .NET.
CLR Intern Pool

.NET automatically performs string interning for all string literals. This is done by means of an
intern pool: a special table that stores references to all unique strings. But why aren't the strings
in our example interned? The thing is that only explicitly declared string literals are interned at
compile time. Strings created at runtime are not checked for already being in the pool. For
example:

string s = "ABC"; // will be interned
string s1 = "A";
string s2 = s1 + "BC"; // will not be interned
You can circumvent this limitation by working with the intern pool directly. For this purpose,
.NET offers two methods: String.Intern and String.IsInterned. If the string value passed to
String.Intern is already in the pool, the method returns the reference to the string. Otherwise, the
method adds the string to the pool and returns a reference to it. If you just want to check whether a string
is already interned, use the String.IsInterned method. It returns the reference to the string if its
value is in the pool, or null if it isn't.

Thus, the fix for our log-parsing algorithm could look as follows:
public void ProcessLogFile(string file)
{
    using (XmlReader reader = XmlReader.Create(new StreamReader(file)))
    {
        ...
        // read an XML element
        string logEntry = String.Intern(reader.ReadElementContentAsString());
        LogFileData.Add(logEntry); // add the string to a list
        ...
        // some list processing goes here
        ...
    }
}
Further memory profiling would show that the strings are successfully interned.

Nevertheless, such an implementation has one disadvantage: the interned strings will stay in
memory forever (or, to be more precise, they will persist for the lifetime of the process that hosts
our application, as the intern pool will store references to the strings even if they are no longer
needed).

If, for example, our app has to parse a large number of different log files, this could be a problem. In
such a case, a better solution would be to create a local analogue of the intern pool.
Local Intern Pool

The simplest (though very far from optimal) implementation might look like this:
class LocalPool
{
    private readonly Dictionary<string, string> _stringPool =
        new Dictionary<string, string>();

    public string GetOrCreate(string entry)
    {
        string result;
        if (!_stringPool.TryGetValue(entry, out result))
        {
            _stringPool[entry] = entry;
            result = entry;
        }
        return result;
    }
}
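For illustration, the log-parsing method from before could use the pool like this (a sketch combining the two earlier listings):

public void ProcessLogFile(string file)
{
    var pool = new LocalPool(); // lives only while this method works
    using (XmlReader reader = XmlReader.Create(new StreamReader(file)))
    {
        ...
        string logEntry = pool.GetOrCreate(reader.ReadElementContentAsString());
        LogFileData.Add(logEntry);
        ...
    }
}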
In this case, the pool will be removed from memory with the next garbage collection after ProcessLogFile is done working.
Summary

- Ineffective memory usage occurs when your application consumes more memory than it should or could.
dotMemory Unit

dotMemory Unit is a framework that allows you to write unit tests that check your code for memory
issues. In other words, dotMemory Unit extends your unit testing framework with the functionality of a
memory profiler.
How It Works

dotMemory Unit is distributed as a NuGet package installed to your test project:

PM> Install-Package JetBrains.DotMemoryUnit

dotMemory Unit requires the ReSharper unit test runner, meaning you should have either
ReSharper 9.1 (or later) or dotCover 3.1 (or later) installed on your machine. Another option is
to run tests using the standalone dotMemory Unit launcher. You can take the launcher either
from the NuGet package or from the zip package available for download on the dotMemory
Unit page.
https://www.jetbrains.com/dotmemory/unit/
- After you install the dotMemory Unit package, ReSharper's menus for unit tests will include an additional item, Run Unit Tests under dotMemory Unit. In this mode, the test runner will execute dotMemory Unit calls as well as ordinary test logic. If you run a test the normal way (without dotMemory Unit support), all dotMemory Unit calls will be ignored.
- dotMemory Unit works with MSTest, NUnit, and most of the other unit testing frameworks available on the market.
- dotMemory Unit can be integrated with any continuous integration system using the standalone launcher. JetBrains TeamCity provides support for dotMemory Unit with a special plugin. For more details, please turn to the chapter Memory Profiling in Continuous Integration.
[Test]
public void TestMethod1()
{
    var foo = new Foo();
    foo.Bar();

    // 1
    dotMemory.Check(memory => // 2
        Assert.That(memory.GetObjects(where => where.Type.Is<Goo>()).ObjectsCount,
            Is.EqualTo(0))); // 3

    GC.KeepAlive(foo); // protect objects from GC if this is implied by test logic
}
1. A lambda is passed to the Check method of the static dotMemory type. This method creates a dump of the managed heap and will be called only if you run the test using Run Unit Tests under dotMemory Unit.
2. The memory object of the Memory type passed to the lambda contains all memory data for the current execution point.
3. The GetObjects method returns a set of objects that match the condition passed in another lambda. This line slices the memory, leaving only objects of the Goo type. The NUnit Assert expression asserts that there should be 0 objects of the Goo type.
Note that dotMemory Unit does not force you to use any specific Assert syntax; simply use the
syntax of the framework your test is written for. For example, the assertion shown above uses the NUnit
syntax but could be easily modified for MSTest:

Assert.AreEqual(0, memory.GetObjects(where => where.Type.Is<Goo>()).ObjectsCount);
Example 4. Complex Scenarios for Checking Memory Traffic

[Test]
public void TestMethod2()
{
    var memoryCheckPoint1 = dotMemory.Check(); // 1
    foo.Bar();

    var memoryCheckPoint2 = dotMemory.Check(memory =>
    {
        // 2
        Assert.That(memory.GetTrafficFrom(memoryCheckPoint1).Where(obj => obj.In\
terface.Is<IFoo>()).AllocatedMemory.SizeInBytes, Is.LessThan(1000));
    });

    bar.Foo();

    dotMemory.Check(memory =>
    {
        // 3
        Assert.That(memory.GetTrafficFrom(memoryCheckPoint2).Where(obj => obj.Ty\
pe.Is<Bar>()).AllocatedMemory.ObjectsCount, Is.LessThan(10));
    });
}
1. To mark time intervals where memory traffic can be analyzed, use checkpoints created by dotMemory.Check (as you've probably guessed, this method simply takes a memory snapshot).
2. The checkpoint that defines the starting point of the interval is passed to the GetTrafficFrom method. For example, this line asserts that the total size of objects implementing the IFoo interface created in the interval between memoryCheckPoint1 and memoryCheckPoint2 is less than 1000 bytes.
3. You can use any checkpoint created earlier as a base for analysis. Thus, this line gets traffic data between the current dotMemory.Check call and memoryCheckPoint2.
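Memory Profiling in Continuous Integration

The launcher command line discussed next did not survive extraction; based on the parameters explained below, it would look something along these lines (all paths are placeholders):

dotMemoryUnit.exe -targetExecutable="C:\NUnit\nunit3-console.exe" -returnTargetExitCode -- "C:\MyProject\bin\Release\MyProject.Tests.dll"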
Here:

- -targetExecutable is the path to the unit test runner that will run the tests.
- -returnTargetExitCode makes the launcher return the unit test runner's exit code. This is important for CI, as the build step must fail if any memory tests fail (test runners return a nonzero exit code in this case).
- The parameters passed after the double dash (--) are unit test runner arguments (in our case, the path to the dll with the tests).
Now it's easier than ever to make memory tests a part of your continuous integration builds. Simply
add the command shown above as a build step on your CI server, and it will run your tests with
dotMemory Unit support.

The tool's output contains data on successful and failed tests. For example:
https://www.jetbrains.com/dotmemory/download/#section=dotmemoryunit
https://www.nuget.org/packages/JetBrains.DotMemoryUnit/
...
Tests run: 3, Errors: 1, Failures: 0, Inconclusive: 0, Time: 28.3051788194675 seconds
  Not run: 0, Invalid: 0, Ignored: 0, Skipped: 0

Errors and Failures:
1) Test Error : MainTests.IntegrationTests.Method2
   AssertTrafficException : Allocated memory amount
   Expected: 50,000,000
   But was:  195,344,723
...
If you use JetBrains TeamCity as your CI server, you are a little bit luckier than others: a special
plugin integrates dotMemory Unit directly into TeamCity builds.
https://teamcity.jetbrains.com/project.html?projectId=TeamCityPluginsByJetBrains_DotMemoryUnit&tab=projectOverview
https://www.jetbrains.com/dotmemory/unit/
1. Now, update the step used to run tests in your build configuration. Open the corresponding build step in your build configuration.
2. Note that after we installed the dotMemory Unit plugin, this build step additionally contains the JetBrains dotMemory Unit section. Here you should:
   - Turn on Run build step under JetBrains dotMemory Unit.
   - Specify the path to the dotMemory Unit standalone launcher directory in Path to dotMemory Unit. Note that as we decided to use the launcher from the NuGet package referenced by our project (see step 3), we specify the path relative to the project checkout directory.
   - In Memory snapshots artifacts path, specify a path to the directory (relative to the build artifacts directory) where dotMemory Unit will store snapshots in case memory tests fail.
The Tests tab will show you the exact tests that have failed. For example, here the reason had to do
with the amount of memory traffic:
Now, you can investigate the issue more thoroughly by analyzing a memory snapshot that is saved
in build artifacts:
Conclusion

We hope our little book helped you get a general understanding, or at least refresh your knowledge,
of how you can fight memory issues in .NET applications. In fact, one of the main takeaways of this book
is that you should not be afraid of memory profiling. With modern memory profilers (and a little
background knowledge, which we hope you now have), it's not as complex and time-consuming as
is commonly believed. Moreover, with frameworks like dotMemory Unit, you can automate this
process once and for all.

If, for some reason, you missed the book's intro, it's worth mentioning one more time that the content of
this book is based on various posts from our ReSharper blog. If you liked the book, you'll definitely
like the blog as well. It's a great place to learn something new, not only about our tools but also about
best .NET practices the way we see them.
http://blog.jetbrains.com/dotnet/