Some objects require explicit tear-down to release resources such as open files, locks, operating system handles, and unmanaged objects. In .NET, this is called disposal and is supported by IDisposable:
```csharp
public interface IDisposable
{
    // Performs application-defined tasks associated with freeing,
    // releasing, or resetting unmanaged resources
    void Dispose();
}
```
The managed memory occupied by unused objects must also be reclaimed at some point; this is known as garbage collection. Disposal differs from garbage collection in that disposal is explicitly invoked, whereas garbage collection is automatic.
The C# using statement provides a syntactic shortcut for calling Dispose on objects that implement IDisposable:
```csharp
public void BasicUsing()
{
    using (FileStream fs = new FileStream("SomeFile.txt", FileMode.Open))
    {
        // Read or write to the file ...
    } // Dispose called here, even if an exception was thrown

    // The using block above is equivalent to the following:
    FileStream fs2 = new FileStream("SomeFile.txt", FileMode.Open);
    try
    {
        // Read or write to the file ...
    }
    finally
    {
        // The finally block runs even when an exception is thrown
        if (fs2 != null)
            (fs2 as IDisposable).Dispose();
    }
}
```
In very simple scenarios (e.g., a sealed class, or a simple class with no hierarchy), writing a disposable type is just a matter of implementing IDisposable.Dispose to perform the cleanup or teardown. However, a more elaborate pattern is required to account for class hierarchies and to provide a backup for consumers who forget to call Dispose.
.NET follows a set of rules for its disposal logic:
Once disposed, an object is beyond redemption. Calling methods or properties on a disposed object throws ObjectDisposedException.
Calling an object's Dispose method repeatedly causes no error.
A container object automatically disposes its child objects. In other words, if disposable object x contains/wraps disposable object y, x.Dispose must call y.Dispose, unless instructed otherwise.
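These rules can be sketched in a minimal disposable wrapper. The Logger type and its file-stream field below are hypothetical names for illustration:

```csharp
using System;
using System.IO;

// Sketch of a type that follows the three rules: it throws
// ObjectDisposedException once disposed, tolerates repeated Dispose calls,
// and disposes the child stream it wraps.
public class Logger : IDisposable
{
    private readonly FileStream _stream;   // child disposable owned by this object
    private bool _disposed;

    public Logger(string path) => _stream = new FileStream(path, FileMode.Create);

    public void Write(string message)
    {
        if (_disposed) throw new ObjectDisposedException(nameof(Logger)); // rule 1
        // write message to _stream ...
    }

    public void Dispose()
    {
        if (_disposed) return;   // rule 2: repeated calls cause no error
        _stream.Dispose();       // rule 3: dispose the child object
        _disposed = true;
    }
}
```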
Some types define a Close method in addition to implementing IDisposable.Dispose. .NET is not completely consistent with the semantics of a Close method, although in nearly all cases, it's either:
Functionally identical to Dispose.
A functional subset of Dispose. An example is IDbConnection.Close: a closed connection can be re-opened, but a disposed connection cannot.
Some classes (e.g., Timer, HttpListener) define a Stop method, which may release unmanaged resources (like Dispose) but still allows restarting (unlike Dispose).
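The Close-as-subset semantics can be sketched with a hypothetical Connection type (the class and its members are illustrative, not a real .NET API):

```csharp
using System;

// Sketch of the common convention: Close releases the resource but allows
// re-opening; Dispose puts the object beyond redemption.
public class Connection : IDisposable
{
    public bool IsOpen { get; private set; }
    private bool _disposed;

    public void Open()
    {
        if (_disposed) throw new ObjectDisposedException(nameof(Connection));
        IsOpen = true;
    }

    public void Close() => IsOpen = false;  // a closed connection can be re-opened

    public void Dispose()
    {
        Close();
        _disposed = true;  // a disposed connection cannot be re-opened
    }
}
```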
Rule of Thumb 1: When in doubt, dispose.
Rule of Thumb 2: Implement IDisposable if any field in your class is assigned an object that implements IDisposable.
Generally, when you've finished with an object, dispose it. Otherwise, it might cause trouble for other object instances, the app domain, the network or the database. Objects wrapping an unmanaged resource handle will always require disposal in order to free the handle. Examples include Windows Forms controls, file handles, network sockets, GDI pens/brushes/bitmaps.
There are three scenarios for not disposing:
When obtaining a shared object via a static field or property.
When an object's Dispose method does something that you don't want.
When an object's Dispose method is unnecessary by design.
The first case is rare with the main case being in the System.Drawing namespace where GDI+ objects obtained through static fields or properties (such as Brushes.Red) must never be disposed because the same instance is used throughout the application. Instances that you obtain from constructors (such as new SolidBrush), as well as instances obtained from static methods (such as Font.FromHdc), should be disposed.
The second case is more common as shown below:
| Type | Disposal Function | When not to dispose |
| --- | --- | --- |
| MemoryStream | Prevents further I/O | You later need to read/write the stream |
| StreamReader, StreamWriter | Flushes the reader/writer and closes the underlying stream | You want to keep the underlying stream open |
| IDbConnection | Releases the database connection and clears the connection string | You need to re-open the database connection |
| DataContext (LINQ to SQL) | Prevents further use | You lazily evaluate queries connected to that context |
The third case includes the following classes: WebClient, StringReader, StringWriter, and BackgroundWorker. You can ignore disposal for these objects.
A bad practice is to extend the use of IDisposable to non-essential activities, such as:
```csharp
public sealed class ClassA : IDisposable
{
    public void Dispose()
    {
        LogToFile();         // Non-essential
        CloseDBConnection(); // Essential
    }
}
```
The user may not know what's inside the Dispose method and may opt not to call it. The solution is the opt-in disposal pattern:
```csharp
public sealed class ClassA : IDisposable
{
    public bool LogOnDisposal { get; private set; }

    public ClassA(bool log)
    {
        LogOnDisposal = log;
    }

    public void Dispose()
    {
        if (LogOnDisposal)
            LogToFile();     // Non-essential, opt-in disposal
        CloseDBConnection(); // Essential
    }
}
```
Opt-in disposal also resolves the ownership question: does the consumer own the underlying resource, or is it merely renting it from someone else? The opt-in pattern avoids this problem by making ownership explicit and documented.
Generally, you do not need to clear an object's fields in its Dispose method. However, it is a good practice to:
Unsubscribe from events that the object has subscribed to internally. Unsubscribing from such events avoids receiving unwanted event notifications and avoids keeping the object alive for GC purposes.
Clear the object's own event handlers (by setting them to null) to eliminate the possibility of events firing during or after disposal.
Set a field to indicate that the object has been disposed, so that you can throw an ObjectDisposedException if a consumer later tries to use the object. A good pattern is to use a public property:
```csharp
public bool IsDisposed { get; private set; }
```
Clear high-value or sensitive fields such as encryption keys.
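The practices above can be combined in one Dispose method. The SecureSession and Ticker types below are hypothetical names for illustration:

```csharp
using System;

// Sketch combining the good practices: unsubscribe from external events,
// clear our own event handlers, set a disposed flag, and wipe sensitive data.
public class Ticker
{
    public event EventHandler Tick;
}

public class SecureSession : IDisposable
{
    public event EventHandler Expired;            // our own event
    private readonly Ticker _ticker;              // external event source
    private byte[] _encryptionKey = new byte[32]; // sensitive data

    public bool IsDisposed { get; private set; }

    public SecureSession(Ticker ticker)
    {
        _ticker = ticker;
        _ticker.Tick += OnTick;                   // internal subscription
    }

    private void OnTick(object sender, EventArgs e) { /* ... */ }

    public void Dispose()
    {
        if (IsDisposed) return;
        _ticker.Tick -= OnTick;                   // 1. unsubscribe from external events
        Expired = null;                           // 2. clear our own event handlers
        Array.Clear(_encryptionKey, 0, _encryptionKey.Length); // 4. wipe sensitive fields
        IsDisposed = true;                        // 3. mark as disposed
    }
}
```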
Regardless of whether an object requires a Dispose method, the memory it occupies must be freed at some time. The CLR handles this automatically using automatic garbage collection. You never de-allocate managed memory yourself. For example, consider the following method:
```csharp
public void Test()
{
    var myArray = new byte[1000];
    // ...
}
```
When Test executes, an array to hold 1000 bytes is allocated on the heap. The array is referenced by the variable myArray stored on the local variable stack. When the method exits, the local variable myArray pops out of scope, meaning that nothing is left to reference the array on the managed heap. The unreachable array then becomes eligible to be reclaimed in garbage collection.
Garbage collection does not happen immediately after an object becomes unreachable. Rather, garbage collection happens periodically although not to a fixed schedule. The CLR bases its decision on when to collect on a number of factors, such as available memory, amount of required memory allocation by the program, and the time since the last collection. In other words, garbage collection is indeterminate.
Note: Applications can consume more memory than they need, particularly if large temporary arrays are constructed. In Task Manager, the Memory Usage column includes memory that a process has internally deallocated and is willing to rescind immediately to the OS should another process need it. This is in contrast to Private Working Set, which describes the amount of memory a process is using that can't be shared with other processes.
A root is something that keeps an object alive. A root is one of the following:
A local variable or parameter in an executing method (or in any method in its call stack).
A static field.
An object on the queue of objects awaiting finalization.
If an object is not directly or indirectly referenced by a root, it is eligible for garbage collection. In the following figure, objects that cannot be reached by following the arrows (references) from a root object are unreachable, and subject to garbage collection:

A finalizer is declared as follows:
```csharp
class Test
{
    ~Test() // Note the use of ~ to declare a finalizer
    {
        // Finalizer logic
    }
}
```
Before an object is released from memory, its finalizer runs, if it has one. The GC identifies unreachable objects so they can be deleted. Objects without finalizers are deleted from memory right away. Objects with finalizers are kept alive (for now) and are put onto a special queue known as the finalization queue. The finalization queue acts as a root, so objects queued there are still live. This is pass one, and the garbage collection is then complete. The finalizer thread then runs in parallel with your program, picking objects off the finalization queue and running their finalizers. Once an object has been dequeued from the finalization queue and its finalizer executed, it becomes orphaned and will be deleted in the next collection.
Note the following about finalizers:
Finalizers slow the allocation and collection of memory as the GC needs to keep track of which finalizers have run.
Finalizers prolong the life of the object and any referred objects, as they must all await the next garbage collection for actual deletion.
The order in which finalizers are run cannot be determined.
You cannot control when the finalizer for an object will be called.
If code in a finalizer blocks, other objects cannot get finalized.
A finalizer can get called even if an exception is thrown during construction. It is good practice not to assume fields are correctly initialized when writing a finalizer.
Here are some guidelines for implementing finalizers:
Finalizers must execute quickly.
Never block in your finalizer.
Don't reference other finalizable objects.
Don't throw exceptions.
One excellent use for finalizers is to provide a backup for cases when you forget to call Dispose on a disposable object. The following is the standard pattern for implementing this:
```csharp
class DisposableObject : IDisposable
{
    ~DisposableObject()
    {
        Dispose(false);
    }

    // IDisposable implementation
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // Prevent the finalizer from running when the GC catches up
    }

    // Helper containing the actual disposal logic. Note the following:
    // 1. It is protected virtual, so it can be overridden and called in derived classes.
    // 2. The disposing flag means it's being called properly from the Dispose method,
    //    rather than in "last-resort mode" from the finalizer.
    // 3. When called with disposing = false, the method should not in general reference
    //    other objects with finalizers, as they may themselves have been finalized and
    //    may be in an unpredictable state.
    // 4. Any code that can throw exceptions must be wrapped in a try/catch block.
    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Call Dispose() on other objects owned by this instance.
            // You can reference other finalizable objects here.
            // ...
        }
        // Release any unmanaged resources owned by this instance. You can also
        // delete any temporary files created by the application here.
    }
}
```
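A derived class extends this pattern by overriding Dispose(bool), releasing its own resources, and then chaining to the base implementation. The sketch below repeats a minimal base class so it stands alone; the DerivedDisposed flag is illustrative:

```csharp
using System;

// Minimal base following the standard pattern above (abbreviated)
class DisposableObject : IDisposable
{
    ~DisposableObject() => Dispose(false);

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing) { /* base cleanup */ }
}

// A derived class overrides Dispose(bool), releases its own resources,
// then always chains to the base implementation.
class DisposableDerived : DisposableObject
{
    public bool DerivedDisposed { get; private set; }

    protected override void Dispose(bool disposing)
    {
        if (!DerivedDisposed)
        {
            if (disposing)
            {
                // dispose managed objects owned by the derived class here
            }
            // release unmanaged resources owned by the derived class here
            DerivedDisposed = true;
        }
        base.Dispose(disposing);   // always chain to the base implementation
    }
}
```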
Suppose the finalizer for object A modifies living object B such that B refers back to dying object A. In other words, object A is now reachable from living object B. When the next garbage collection happens, the CLR will note that the previously dying object A is now reachable from living object B, and object A will evade collection. This is called resurrection.
To illustrate, suppose we want to write class TempFileRef to manage a temporary file. When TempFileRef is garbage-collected, we'd like the finalizer to delete the file:
```csharp
class TempFileRef
{
    public string FilePath { get; private set; }

    public TempFileRef(string path)
    {
        FilePath = path;
    }

    ~TempFileRef()
    {
        File.Delete(FilePath); // Throws an exception and crashes the program
    }
}
```
```csharp
public void ResurrectionTest1() // In class GarbageCollectionTests
{
    var tfr = new TempFileRef(@"..."); // Some file already in use (use the current running instance)
}
```
```csharp
private static void TestGC() // Called from Main
{
    var tests = new GarbageCollectionTests();
    tests.ResurrectionTest1();
    GC.Collect();
}
```
File.Delete throws an exception because the file is in use, and the entire application crashes. We could swallow the exception with an empty catch block, but then we'd never know that something went wrong. We want to restrict finalization actions to those that are simple, reliable, and quick. For example, we could record the failure to a static collection as follows:
```csharp
class TempFileRef
{
    public string FilePath { get; private set; }

    static ConcurrentQueue<TempFileRef> queueFailedDeletions =
        new ConcurrentQueue<TempFileRef>();

    public TempFileRef(string path)
    {
        FilePath = path;
    }

    ~TempFileRef()
    {
        try
        {
            File.Delete(FilePath);
        }
        catch (Exception)
        {
            queueFailedDeletions.Enqueue(this); // Resurrection
        }
    }
}
```
ConcurrentQueue<T> is a thread-safe version of Queue<T>. A thread-safe collection is used because:
The CLR may execute finalizers on more than one thread in parallel. This means that two objects may be finalized at the same time, and hence may write to shared state (such as a static collection) concurrently.
At one point we may dequeue items from the shared collection (queueFailedDeletions) to process them, but a finalizer could also be enqueuing another object at the same time.
The finalizer for a resurrected object will not run a second time unless you call GC.ReRegisterForFinalize. The previous example is now modified to re-register the object for finalization, so as to try again in the next garbage collection:
```csharp
class TempFileRef
{
    public string FilePath { get; private set; }

    private static ConcurrentQueue<TempFileRef> queueFailedDeletions =
        new ConcurrentQueue<TempFileRef>();

    private int deleteAttemptCount;

    public TempFileRef(string path)
    {
        FilePath = path;
    }

    ~TempFileRef()
    {
        try
        {
            File.Delete(FilePath);
        }
        catch (Exception)
        {
            // After the third failed attempt, the finalizer gives up trying to delete
            // the file and logs it to the queue so that we have a record of it
            if (deleteAttemptCount++ < 3)
                GC.ReRegisterForFinalize(this);
            else
                queueFailedDeletions.Enqueue(this); // Resurrection
        }
    }
}
```
The CLR uses a generational mark-and-compact Garbage Collector that performs automatic memory management for objects stored on the managed heap. The GC is considered a tracing garbage collector meaning that it wakes up periodically and traces the graph of objects stored on the managed heap to determine which objects can be considered garbage and therefore collected.
The GC initiates a garbage collection upon performing a memory allocation (via the new keyword), either after a certain threshold of memory has been allocated, or at other times to reduce the application's memory footprint. The process can also be manually initiated by calling System.GC.Collect. During a garbage collection, all threads may be frozen.
The GC begins with its root object references and walks the object graph, marking every object it reaches as reachable. Once this process is complete, all objects that have not been marked are considered unreachable and are subject to garbage collection:
Unreachable objects without finalizers are immediately discarded.
Unreachable objects with finalizers are enqueued on the finalization queue for processing by the finalizer thread once the garbage collection cycle is complete. These objects then become eligible for collection in the next GC cycle (unless resurrected).
The remaining live objects are then shifted to the start of the heap (compacted), freeing space for more objects. This compaction serves two purposes:
It avoids memory fragmentation.
It allows the GC to allocate memory for new objects at the end of the heap. This helps avoid the time-consuming task of maintaining a list of free memory segments.
If there is insufficient memory to allocate for a new object after garbage collection, and the OS is unable to grant more memory, an OutOfMemoryException is thrown.
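The effect of a collection on the managed heap can be observed with GC.GetTotalMemory; passing true forces a collection before measuring. A rough sketch (figures are approximate and depend on the GC mode):

```csharp
using System;

// GC.GetTotalMemory reports the number of bytes currently thought to be
// allocated on the managed heap; passing true forces a collection first.
long before = GC.GetTotalMemory(true);
byte[] data = new byte[10_000_000];   // reachable: counts toward the total
long during = GC.GetTotalMemory(false);
Console.WriteLine(during - before);   // roughly the size of the array
data = null;                          // now unreachable
long after = GC.GetTotalMemory(true); // forces a collection: array is reclaimed
Console.WriteLine(during - after);
```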
The GC uses various optimization techniques to reduce garbage collection time:
Certain objects are long-lived and do not need to be traced during every collection. Basically, the GC divides the managed heap into three generations:
Objects that have just been allocated are in Gen0.
Objects that have survived one garbage collection cycle are in Gen1.
All other objects are in Gen2.
The CLR keeps the Gen0 section relatively small (16 MB or less). When the Gen0 section fills up, the GC initiates a Gen0 collection, which happens fairly frequently. The GC applies a similar memory threshold to Gen1. Full collections that include Gen2 take much longer (tens of milliseconds) and happen infrequently. The following figure shows the effect of a full collection:

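Promotion between generations can be observed with GC.GetGeneration, a sketch of which follows (the exact promotions depend on the GC mode, but an object typically moves up one generation per collection it survives):

```csharp
using System;

// Observing generational promotion with GC.GetGeneration
var obj = new object();
int gen0 = GC.GetGeneration(obj);  // 0: just allocated
GC.Collect();
int gen1 = GC.GetGeneration(obj);  // typically 1 after surviving one collection
GC.Collect();
int gen2 = GC.GetGeneration(obj);  // typically 2 after surviving two
Console.WriteLine($"{gen0} -> {gen1} -> {gen2}");
GC.KeepAlive(obj);                 // keep obj reachable throughout
```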
The GC uses a separate heap called the Large Object Heap (LOH) for objects larger than a certain threshold (currently 85,000 bytes). This avoids excessive Gen0 collections for large but short-lived objects. The LOH is not subject to compaction, because moving large blocks of memory during garbage collection would be very expensive. This has two consequences:
Allocations can be slower, because the GC has to look in the LOH for gaps big enough to hold the required memory.
The LOH is subject to fragmentation with lots of memory gaps that may be difficult to fill.
The LOH is also non-generational: all of its objects are treated as Gen2.
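This can be observed directly: an array over the threshold reports as Gen2 immediately, even though it has never survived a collection.

```csharp
using System;

// Large objects skip the generational heap: an array over the ~85,000-byte
// threshold is allocated on the LOH and reports as Gen2 from the start.
var small = new byte[1_000];    // ordinary heap: Gen0
var large = new byte[100_000];  // over the threshold: LOH
Console.WriteLine(GC.GetGeneration(small));
Console.WriteLine(GC.GetGeneration(large));
```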
The GC must freeze (block) your execution threads during a collection. This includes the period during which Gen0 and Gen1 collections take place. The GC makes a special attempt to allow threads to run during a Gen2 collection, to avoid freezing the application for a noticeable period. This optimization applies to the workstation version of the CLR only. For the server CLR, multiple cores are leveraged to perform full collections many times faster; in effect, the server GC is tuned for throughput rather than latency. The workstation optimization was called concurrent collection, but from CLR 4.0 it has been revamped and renamed background collection.
The server version of the CLR can send notifications just before a full GC will occur. This is intended for server farm configurations. The main idea is that you divert incoming requests from clients (WCF, Http, etc) to another server just before a collection. You then start the collection immediately and wait for it to finish before re-routing requests back to that server.
You can manually force a garbage collection anytime by calling GC.Collect. Calling GC.Collect without an argument starts a full collection. If you pass an integer, only generations to that value are collected; for example, GC.Collect(0) performs a fast Gen0 collection. Generally, the best performance is obtained by allowing the GC to decide when to collect. There are exceptions. The most common case for calling GC.Collect is when the application goes to sleep for a while. For example, a Windows service may initiate activity from 7am to 10pm, after which it sleeps with no activity until the next day. After 10pm, no code executes, which means no memory allocations are made, and so the GC has no opportunity to activate. The solution is to call GC.Collect right after 10pm. To ensure collection for objects with finalizers:
```csharp
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
```
In unmanaged languages such as C++, you must remember to manually deallocate memory when an object is no longer required; otherwise, a memory leak results. In managed languages such as C#, this kind of memory leak is much less likely to appear. Still, large and complex .NET applications can end up consuming more and more memory over their lifetime until eventually they have to be restarted. Fortunately, managed memory leaks are easier to diagnose and prevent.
Generally, managed memory leaks are due to unused objects remaining alive through unused or forgotten references. Common causes for managed memory leaks:
Event handlers: the object firing an event holds references to its subscribers' handlers, so subscribers remain alive as long as the publisher is alive (unless they unsubscribe).
Timers: the .NET Framework holds references to active timers so that it can fire their Elapsed events.
WPF data binding: see Knowledge Base Article 938416 for full details.
Consider the following case which shows memory leaks due to event handlers:
```csharp
class Publisher
{
    public event EventHandler<EventArgs> FireEvent;
}

class Subscriber
{
    public Publisher Publisher { get; private set; }

    public Subscriber(Publisher publisher)
    {
        Publisher = publisher;
        publisher.FireEvent += PublisherOnFireEvent;
    }

    private void PublisherOnFireEvent(object sender, EventArgs args)
    {
        Trace.WriteLine("Received event from publisher");
    }
}
```
```csharp
private static Publisher publisher = new Publisher();

public void TestMemoryLeaks()
{
    // Create 1,000 subscribers (ToList materializes the lazy query so the
    // Subscriber objects are actually constructed)
    var subscribers = Enumerable.Range(0, 1000)
                                .Select(i => new Subscriber(publisher))
                                .ToList();

    // Do something with the subscribers
    // ...
}
```
You might expect that after TestMemoryLeaks finishes executing, the 1,000 Subscriber objects become eligible for collection. However, each Subscriber instance is still referenced (unnecessarily) by the publisher object, which remains alive for the duration of the program. One solution is to use weak references (next section), but a simpler solution is to implement IDisposable on each subscriber, as shown below:
```csharp
class Subscriber : IDisposable
{
    public Publisher Publisher { get; private set; }
    public int Id { get; private set; }

    public Subscriber(Publisher publisher, int id)
    {
        Publisher = publisher;
        Id = id;
        Publisher.FireEvent += PublisherOnFireEvent;
    }

    private void PublisherOnFireEvent(object sender, EventArgs args)
    {
        Trace.WriteLine(string.Format("Received event from publisher on subscriber {0}", Id));
    }

    public void Dispose()
    {
        // Unhook any event handlers
        Trace.WriteLine(string.Format("Unhooking FireEvent from publisher on subscriber {0}", Id));
        Publisher.FireEvent -= PublisherOnFireEvent;
    }
}
```
```csharp
private static Publisher publisher = new Publisher();

public void TestMemoryLeaks()
{
    // Create 100 subscribers (ToList materializes the lazy query)
    int id = 0;
    var subscribers = Enumerable.Range(0, 100)
                                .Select(i => new Subscriber(publisher, id++))
                                .ToList();

    // Do something with the subscribers
    // ...

    // Clean up the subscribers
    subscribers.ForEach(s => s.Dispose());
}
```
Unhooking FireEvent from publisher on subscriber 0
Unhooking FireEvent from publisher on subscriber 1
Unhooking FireEvent from publisher on subscriber 2
Unhooking FireEvent from publisher on subscriber 3
...
Forgotten timers can cause memory leaks. There are two distinct scenarios, depending on the kind of timer:
Consider the following case where the Elapsed event handler is called every second:
```csharp
class Foo
{
    private System.Timers.Timer timer;

    public Foo()
    {
        timer = new Timer { Interval = 1000 };
        timer.Elapsed += TimerOnElapsed; // Causes a memory leak!
        timer.Start();
    }

    private void TimerOnElapsed(object sender, ElapsedEventArgs elapsedEventArgs)
    {
        // ...
    }
}
```
Foo instances cannot be collected while the application is running: this is because the .NET Framework holds a reference to the timer object so that it can call its Elapsed handler. And because the timer object is live, Foo instances are also live. The solution is simple once you recognize that System.Timers.Timer implements IDisposable: Dispose of the timer when it's no longer needed:
```csharp
class Foo : IDisposable
{
    private System.Timers.Timer timer;

    public Foo()
    {
        timer = new Timer { Interval = 1000 };
        timer.Elapsed += TimerOnElapsed;
        timer.Start();
    }

    private void TimerOnElapsed(object sender, ElapsedEventArgs elapsedEventArgs)
    {
        // ...
    }

    public void Dispose()
    {
        timer.Dispose();
    }
}
```
The .NET Framework does not hold references to active threading timers; it instead references the callback delegates directly. This means that if you forget to dispose of a threading timer, the finalizer will eventually fire, automatically stopping and disposing the timer. However, this can create a different problem, illustrated below:
```csharp
public void TestTimerLeak()
{
    var timer = new System.Threading.Timer(Callback, null, 1000, 1000);
    GC.Collect();
    System.Threading.Thread.Sleep(1000);
}

private void Callback(object state)
{
    Trace.WriteLine("timer called");
}
```
If this example is compiled in Release mode (debugging disabled and optimizations enabled), the timer will be collected and finalized before it has a chance to fire even once. We fix this by disposing of the timer when we're done with it.
```csharp
public void TestTimerLeak()
{
    using (var timer = new System.Threading.Timer(Callback, null, 1000, 1000))
    {
        GC.Collect();
        System.Threading.Thread.Sleep(1000);
    } // Calls timer.Dispose()
}
```
The implicit call to Timer.Dispose at the end of the using block ensures that the timer variable is used, and so the timer is not considered dead by the GC until the end of the block.
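GC.KeepAlive offers an alternative to the using block: it references the variable near the end of the scope, so the timer cannot be considered dead (and hence finalized) before that point. A sketch, assuming a short interval so the test can observe at least one callback:

```csharp
using System;
using System.Threading;

public class TimerKeepAliveDemo
{
    public static int CallbackCount;

    public static void Run()
    {
        var timer = new Timer(_ => Interlocked.Increment(ref CallbackCount), null, 50, 50);
        GC.Collect();           // without a later reference, the timer could die here
        Thread.Sleep(300);      // give the timer a chance to fire
        GC.KeepAlive(timer);    // timer stays reachable up to this point
        timer.Dispose();        // still dispose explicitly when done
    }
}
```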
If you have an application with managed memory leaks, tools that can help diagnose them include windbg.exe, the Microsoft CLR Profiler, SciTech Memory Profiler, Red Gate ANTS Memory Profiler, and various Performance Monitor counters.
Occasionally, it's useful to hold a reference to an object that is invisible to the GC in terms of keeping the object alive. This is called a weak reference and is implemented by the System.WeakReference class. To create a weak reference, construct a WeakReference object with a target object, as follows:
```csharp
public void TestWeakReference()
{
    // Constructing a weak reference
    var sb = new StringBuilder("this is the target object");
    var weak = new WeakReference(sb);
    Trace.WriteLine("weak.Target: " + weak.Target);

    // If a target is referenced only by WeakReference objects, the GC will consider
    // the target eligible for collection. When it gets collected, the
    // WeakReference.Target property becomes null
    GC.Collect();
    Trace.WriteLine("weak.Target: " + weak.Target);

    // To avoid the target being collected, assign it to a local variable. Once the
    // target is assigned to a local variable, it has a strong root and cannot be
    // collected while that variable is in use
    var weak2 = new WeakReference(new StringBuilder("Weak2"));
    StringBuilder sb2 = (StringBuilder) weak2.Target;
    if (sb2 != null)
        Trace.WriteLine(sb2); // Use the local variable
}
```
One use for WeakReference is to cache large object graphs. This allows memory intensive data to be cached briefly without causing extensive memory consumption:
```csharp
_weakCache = new WeakReference( ... );
...
var cache = _weakCache.Target;
if (cache == null)
{
    /* Recreate the cache and assign it to _weakCache */
}
```
This strategy is not very effective on its own, because you have little control over when the GC fires and which generation it chooses to collect. In particular, if the cache remains in Gen0, it may be collected within microseconds (recall that the GC collects regularly under normal memory conditions, not just under low memory conditions). At a minimum, you should employ a two-level cache, where you start out by holding strong references and convert them to weak references over time.
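The two-level idea can be sketched as follows. The class and member names are illustrative; here demotion is an explicit call for simplicity, where a real cache would typically demote on a timer or an LRU policy:

```csharp
using System;

// Sketch of a two-level cache: hold a strong reference initially (level 1),
// then demote to a weak reference (level 2) that the GC may reclaim.
public class TwoLevelCache<T> where T : class
{
    private T _strong;                 // level 1: guaranteed to survive GC
    private WeakReference _weak;       // level 2: collectable

    public void Set(T value)
    {
        _strong = value;
        _weak = new WeakReference(value);
    }

    public void Demote() => _strong = null;        // drop the strong root

    public T Get() => _strong ?? (T)_weak?.Target; // may be null after demotion + GC
}
```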
Events can cause managed memory leaks (see Leaks in Managed Memory). The simplest solution is usually to implement IDisposable and unsubscribe. Weak references offer another solution.
Imagine a delegate that holds only weak references to its targets. Such a delegate would not keep its targets alive - unless those targets had independent references. Of course this does not prevent a firing delegate from hitting an unreferenced target - in the time between the target being eligible for collection and the GC catching up with it. The following shows a weak delegate implementation:
```csharp
// TODO: Improve by adding locks for thread safety
public class WeakDelegate<TDelegate> where TDelegate : class
{
    List<WeakReference> _targets = new List<WeakReference>();

    public WeakDelegate()
    {
        // Check that the TDelegate generic type argument is a Delegate type. This check
        // is required because the following type constraint is illegal, as C# considers
        // System.Delegate a special type for which constraints are not supported:
        //     ... where TDelegate : Delegate
        // Instead we choose a class constraint and perform a runtime check
        var isDelegate = typeof(TDelegate).IsSubclassOf(typeof(Delegate));
        if (!isDelegate)
            throw new InvalidOperationException("TDelegate generic type parameter must be a Delegate type");
    }

    public void Combine(TDelegate target)
    {
        if (target == null) return;

        // We've already checked that TDelegate is a subclass of Delegate. Note the use
        // of the 'as' operator rather than the usual cast operator. This is because C#
        // disallows the cast operator with this type parameter, because of a potential
        // ambiguity between a custom conversion and a reference conversion. We then call
        // GetInvocationList, as the delegate might be a multicast delegate with multiple
        // methods to invoke
        var invocationList = (target as Delegate).GetInvocationList();
        foreach (Delegate del in invocationList)
            _targets.Add(new WeakReference(del));
    }

    public void Remove(TDelegate target)
    {
        if (target == null) return;

        // We've already checked that TDelegate is a subclass of Delegate
        var invocationList = (target as Delegate).GetInvocationList();
        foreach (Delegate del in invocationList)
        {
            var delegateToRemove = _targets.FirstOrDefault(
                weak => ReferenceEquals(del, weak.Target));
            if (delegateToRemove != null)
                _targets.Remove(delegateToRemove);
        }
    }

    public TDelegate Target
    {
        get
        {
            // Build up a multicast delegate that combines all the delegates referenced
            // by weak references whose targets are alive. We then clear out the remaining
            // dead references from the list (to avoid the _targets list endlessly growing)
            var deadRefs = new List<WeakReference>();
            Delegate combinedTargets = null;
            foreach (var weakReference in _targets)
            {
                Delegate target = (Delegate) weakReference.Target;
                if (target != null)
                    combinedTargets = Delegate.Combine(combinedTargets, target);
                else
                    deadRefs.Add(weakReference);
            }

            foreach (var weakReference in deadRefs)
                _targets.Remove(weakReference);

            return combinedTargets as TDelegate;
        }
        set
        {
            _targets.Clear();
            Combine(value);
        }
    }
}
```
```csharp
// Note on custom event accessors:
// An event is a special kind of multicast delegate that can only be invoked from within
// the class that declares it. Client code subscribes to the event by providing a
// reference to a method that should be invoked when the event fires. These methods are
// added to the delegate's invocation list through event accessors, which resemble
// property accessors, except that event accessors are named add and remove. In most
// cases you do not have to supply custom event accessors; when none are supplied, the
// compiler adds them automatically. However, in some cases, like the class below, you
// have to provide custom event accessors to be able to add the client's handler to the
// weak delegate's invocation list
public class WeakDelegateEventSource
{
    private readonly WeakDelegate<EventHandler> _click = new WeakDelegate<EventHandler>();

    public event EventHandler Click
    {
        add { _click.Combine(value); }
        remove { _click.Remove(value); }
    }

    // The standard way to declare events in a base class so that they can also be raised
    // from derived classes. Events can only be invoked from within the class that
    // declared them; derived classes cannot directly invoke events declared in the base
    // class. By calling or overriding this invoking method, derived classes can raise
    // the event indirectly.
    // Note: Do not declare virtual events in a base class and override them in a derived
    // class. The C# compiler does not handle these correctly
    protected virtual void OnClick(EventArgs args)
    {
        // Assign _click.Target to a temporary variable before checking and invoking it.
        // This avoids the possibility of targets being collected in the interim
        EventHandler target = _click.Target;
        if (target != null)
            target(this, args);
    }
}
```
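The custom add/remove accessor mechanism itself can be shown in isolation with an ordinary (strong) delegate. The Button type and its SubscriberCount property below are hypothetical names for illustration:

```csharp
using System;

// Minimal illustration of custom event accessors: the add and remove blocks
// run whenever client code subscribes or unsubscribes.
public class Button
{
    private EventHandler _click;
    public int SubscriberCount { get; private set; }

    public event EventHandler Click
    {
        add { _click += value; SubscriberCount++; }
        remove { _click -= value; SubscriberCount--; }
    }

    public void RaiseClick() => _click?.Invoke(this, EventArgs.Empty);
}
```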