C# Multi-Threading Part 3
By Manisha Mehta
Gradually, as you start picking up the threads of multi-threading, you will feel the need to manage shared resources. The .NET Framework provides a number of classes and data types that you can use to control access to shared resources.
Consider that you have a global variable or a shared class variable that you need to update from different threads. You can use the System.Threading.Interlocked class to get this done. This class provides atomic, non-blocking integer updates.
You can use the System.Threading.Monitor class to lock a section of code in an object's method that should not be accessed concurrently by many threads. System.Threading.WaitHandle objects help you respond to actions taken by other threads, especially when interoperating with unmanaged code. Wait-based techniques use this class to perform waits on one or more waitable objects.
The System.Threading.Mutex class allows more complex thread synchronization: it grants exclusive access to a shared resource, one thread at a time.
Thread synchronization refers to the act of guarding against multithreading issues such as data races, deadlocks and starvation.
The synchronization event classes ManualResetEvent and AutoResetEvent (both in the System.Threading namespace) allow one thread to notify other threads of some event.
No discussion of threading is complete without talking about synchronization, yet synchronization should be used judiciously. You have to determine carefully which objects and methods to synchronize; failure to do so leads to situations like deadlocks (where threads stop responding, each waiting for the other to complete) and data races (where an inconsistent result occurs because the outcome depends on the timing of two events). Say we have two threads, X and Y. Thread X reads data from a file and fills up a data structure; thread Y reads from that structure and sends the data across to some other computer. Now imagine that while Y is reading the data, X tries to write to the structure. If the two things happen simultaneously, inconsistent data gets transferred, which is highly undesirable. We should prevent such a situation by letting only one thread access the data structure at any point in time. Having a smaller number of threads also makes it easier to synchronize resources.
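The lost-update race described above can be sketched in a few lines. In this hypothetical example (the names RaceDemo, counter and sync are inventions for illustration), two threads each increment a shared counter 100,000 times; the lock statement, which this article covers in the Monitor discussion, serializes the updates so none are lost:

```csharp
using System;
using System.Threading;

// Sketch: two threads updating a shared counter. Without the lock,
// increments from the two threads can interleave and updates are lost;
// with the lock, only one thread touches the counter at a time.
public static class RaceDemo
{
    static int counter = 0;
    static readonly object sync = new object();

    public static int Run()
    {
        counter = 0;
        ThreadStart work = delegate
        {
            for (int i = 0; i < 100000; i++)
            {
                lock (sync)   // remove this lock to observe lost updates
                {
                    counter++;
                }
            }
        };
        Thread x = new Thread(work);
        Thread y = new Thread(work);
        x.Start(); y.Start();
        x.Join(); y.Join();
        return counter;       // 200000 when access is synchronized
    }
}
```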
The Common Language Runtime (CLR) provides three ways to synchronize access to global fields, specific blocks of code, and instance and static methods and fields.
- Synchronized code regions (SyncBlock based): You can synchronize entire static or instance methods, or just a part of them, using the Monitor class. There is no support for synchronized static fields. On instance methods, the current object (the this keyword in C#) is used for synchronization; on static methods, the class is used. We will look at this class shortly.
- Classic manual synchronization: Use the various synchronization classes (such as WaitHandle, Mutex, ReaderWriterLock, ManualResetEvent, AutoResetEvent and Interlocked) to create your own synchronization mechanisms. You have to manually synchronize the different fields and methods. The manual approach can be used for interprocess synchronization and offers deadlock-free waits on multiple resources; these are two improvements over the SyncBlock-based techniques. We will look at all of these classes in this article.
- Synchronized contexts: You can also use the SynchronizationAttribute to enable simple, automatic synchronization for ContextBoundObject objects. You can use this technique to synchronize only the instance fields and methods. All objects in the same context domain share the same lock.
The Monitor class (in the System.Threading namespace) is useful in situations where you want a particular region of code to be used by only a single thread at a given point in time. All the methods in this class are static, so you do not need to instantiate it. These static methods provide a mechanism to synchronize access to objects, protecting them against data races. A few of the methods are shown below:
- Enter, TryEnter, Exit
- Wait
- Pulse, PulseAll
You can synchronize access to a piece of code by locking and unlocking a particular object. The methods Monitor.Enter, Monitor.TryEnter and Monitor.Exit are used to lock and unlock an object. Once a lock is obtained on a particular piece of code (by calling Monitor.Enter(object)), no other thread can obtain a lock on the same region of code. Let us assume that a thread named X has acquired a lock on an object. The lock can be released by calling Monitor.Exit(object) or Monitor.Wait. When the lock is released, the methods Monitor.Pulse and Monitor.PulseAll signal the next thread(s) in the ready queue (the threads that want to lock the code) to proceed, and the next thread waiting in the ready queue gets a chance to lock the piece of code exclusively. Let's say thread X has released the lock and another thread Y has now acquired the lock. Meanwhile, the thread that had invoked Monitor.Wait (thread X) enters the object's waiting queue. It leaves the waiting queue and enters the ready queue when it receives a Pulse or PulseAll from the thread that currently holds the lock (thread Y). Monitor.Wait returns when the calling thread (thread X) reacquires the lock on the object. This method may block indefinitely if the thread that holds the lock (thread Y) never calls Pulse or PulseAll. The methods Pulse, PulseAll and Wait must be invoked from within a synchronized block of code. For each synchronized object, the runtime maintains a reference to the thread that currently holds the lock, a ready queue, and a waiting queue (containing the threads that are waiting for notification of a change in the state of the locked object).
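The Wait/Pulse handshake described above can be sketched as a tiny producer/consumer pair. The class and member names here (WaitPulseDemo, sync, queue) are invented for illustration; the essential pattern is that the consumer waits in a loop, and the producer pulses after changing the shared state:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Sketch of the Wait/Pulse mechanism: a consumer calls Monitor.Wait
// (releasing the lock and joining the waiting queue) until a producer
// calls Monitor.Pulse to move it back to the ready queue.
public static class WaitPulseDemo
{
    static readonly object sync = new object();
    static readonly Queue<int> queue = new Queue<int>();

    public static int Consume()
    {
        lock (sync)
        {
            while (queue.Count == 0)
                Monitor.Wait(sync);   // releases the lock, waits for a Pulse
            return queue.Dequeue();   // the lock has been reacquired here
        }
    }

    public static void Produce(int item)
    {
        lock (sync)
        {
            queue.Enqueue(item);
            Monitor.Pulse(sync);      // wake one thread in the waiting queue
        }
    }
}
```

Note that Wait and Pulse are called only inside the lock block, satisfying the rule that they must be invoked from within a synchronized region.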
You might wonder what would happen if two threads call Monitor.Enter at almost the same instant. However close they might be, one thread will always be first, and that thread gets to lock the object; because Monitor.Enter is an atomic operation, the CPU cannot preempt one thread in favor of another halfway through acquiring the lock. For better performance, you should delay acquiring a lock as long as possible and release it as soon as possible. It is advisable to acquire locks on private or internal objects; locking external objects might result in deadlocks, because unrelated code could choose the same objects to lock on for different purposes.
If it is a block of code on which you want to acquire a lock, it is better to place the set of instructions in a try block and to place the Monitor.Exit call in the finally block. If it is an entire method you want to lock (rather than a few lines of code), you can instead use the MethodImplAttribute class (in the System.Runtime.CompilerServices namespace), specifying the Synchronized value in the constructor of the attribute. Since the lock is applied to the entire method, it is released only when the method returns; if you want to release the lock sooner, use the Monitor class or the C# lock statement instead of this attribute.
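As a minimal sketch of the attribute-based approach (the Account class and its members are invented for illustration), marking a method with MethodImplOptions.Synchronized locks the whole method body, on the instance for instance methods:

```csharp
using System.Runtime.CompilerServices;

// Sketch: locking entire methods with MethodImplAttribute. On an instance
// method this behaves like wrapping the whole body in lock (this); the
// lock is held until the method returns.
public class Account
{
    private int balance = 0;

    [MethodImpl(MethodImplOptions.Synchronized)]
    public void Deposit(int amount)
    {
        balance += amount;   // the whole method runs under the lock
    }

    [MethodImpl(MethodImplOptions.Synchronized)]
    public int GetBalance()
    {
        return balance;
    }
}
```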
Let us create a small function to study the Monitor class methods as shown below:
public void some_method()
{
    int a = 100;
    int b = 0;
    Monitor.Enter(this);
    //say we do something here.
    int c = a / b;       // throws DivideByZeroException because b is 0
    Monitor.Exit(this);  // never reached if the exception is thrown
}
This kind of code will lead to problems. The moment execution reaches the instruction c = a / b (b being zero), an exception is thrown and Monitor.Exit never gets called. The lock is never released, and no other thread will ever get a chance to acquire it. There are two ways to handle such situations. First, you can place the code in a try...finally block and call Monitor.Exit in the finally block; that way it is certain the lock will be released, because the finally block always runs. The second way is to use the C# lock statement. Writing lock(this) is equivalent to calling Monitor.Enter(this), but as soon as execution leaves the scope of the lock block, the lock is released automatically. The above code, written using the C# lock statement, looks like this:
public void some_method()
{
    int a = 100;
    int b = 0;
    lock(this)
    {
        //say we do something here.
        int c = a / b;
    }   // the lock is released here automatically, even if an exception is thrown
}
The C# lock statement provides the same functionality as that provided by the Enter and Exit methods. Use lock when you have a section of code that should not be interrupted by code running on a separate thread.
The WaitHandle class (in the System.Threading namespace) is used as a base class for all synchronization objects that allow multiple wait operations. This class encapsulates the Win32 synchronization handles. WaitHandle objects signal the status of one thread to another, thereby notifying other threads that they need exclusive access to a resource. Other threads must then wait, until the wait handle is no longer in use, to use this resource. Classes derived from WaitHandle include:
- Mutex
- AutoResetEvent
- ManualResetEvent
These classes define a signaling mechanism to take or release exclusive access to a shared resource. They have two states: signaled and nonsignaled. A wait handle that is not owned by any thread is in the signaled state; otherwise it is nonsignaled. Threads that own a wait handle call the Set method when they no longer need it. Other threads can call Reset (to change the status to nonsignaled) or call one of the WaitHandle wait methods (shown below) to request ownership of a wait handle. These are:
- WaitOne: called on a single wait handle; causes the calling thread to wait until that wait handle is signaled by a call to Set.
- WaitAny: accepts an array of wait handles (as an argument) and causes the calling thread to wait until any one of the specified wait handles is signaled.
- WaitAll: accepts an array of wait handles (as an argument) and causes the calling thread to wait until all of the specified wait handles are signaled.
These wait methods block a thread (similar to the Join method, on individual threads) until one or more synchronization objects receive a signal.
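As a small sketch of waiting on several handles at once (the WaitAllDemo class and its fields are invented for illustration), two worker threads each signal a ManualResetEvent when they finish, and the main thread blocks in WaitAll until both have done so:

```csharp
using System;
using System.Threading;

// Sketch: WaitHandle.WaitAll blocks the caller until every handle in the
// array has been signaled. Each worker calls Set on its event when done.
public static class WaitAllDemo
{
    public static int Run()
    {
        ManualResetEvent done1 = new ManualResetEvent(false);
        ManualResetEvent done2 = new ManualResetEvent(false);
        int total = 0;

        new Thread(delegate() { Interlocked.Add(ref total, 1); done1.Set(); }).Start();
        new Thread(delegate() { Interlocked.Add(ref total, 2); done2.Set(); }).Start();

        // returns only after both events are signaled
        WaitHandle.WaitAll(new WaitHandle[] { done1, done2 });
        return total;   // both workers have finished by now
    }
}
```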
WaitHandle objects represent the ability to wait on many objects at once and are operating-system waitable objects that are useful for synchronizing between managed and unmanaged code. But they are less portable than Monitor objects. Monitor objects are fully managed and are more efficient in their use of operating system resources.
The Mutex class (in the System.Threading namespace) is another way of achieving synchronization between threads and across processes; it provides interprocess synchronization. It allows a thread to have exclusive access to a shared resource, preventing simultaneous access by multiple threads or processes. The name mutex itself suggests that ownership of the mutex is mutually exclusive. Once one thread acquires a mutex, any other thread that wants to acquire it is suspended until the first thread releases it. The method Mutex.ReleaseMutex must be called to release the mutex. A thread can request the same mutex in repeated calls to Wait, but it must then call Mutex.ReleaseMutex the same number of times to release ownership. If no thread owns a mutex (or the thread that owns it terminates normally), the state of the Mutex object is set to signaled; otherwise it is nonsignaled. Once the state is set to signaled, the next thread waiting in the queue acquires the mutex. The Mutex class corresponds to the Win32 CreateMutex call.
The creation of a Mutex object is very simple. There are three ways of doing so:
- public .ctor(); - creates a Mutex that is not initially owned by the calling thread.
- public .ctor(bool owner); - lets you specify whether the calling thread should initially own the Mutex.
- public .ctor(bool owner, string name); - also lets you specify the name of the Mutex.
As seen above, it is not necessary that the thread creating a mutex also owns it. A thread can always get ownership of the mutex by calling one of the methods WaitHandle.WaitOne, WaitHandle.WaitAny or WaitHandle.WaitAll. If no other thread currently owns the mutex, the calling thread is granted ownership and WaitOne returns immediately. But if another thread owns the mutex, WaitOne blocks indefinitely until it gets access. You may specify a timeout (in milliseconds) on the WaitOne method; this prevents an infinite wait on the mutex. You can call the Close method on the mutex to release the underlying operating-system handle. Since Mutex derives from WaitHandle, a created mutex can also be passed directly to the WaitHandle.WaitAny or WaitHandle.WaitAll methods.
The following code is a very simple illustration of how to create and use a Mutex object.
public void some_method()
{
    Mutex firstMutex = new Mutex(false);
    firstMutex.WaitOne();
    //some kind of processing can be done here.
    firstMutex.ReleaseMutex();
}
In this function, the thread creating the Mutex does not initially own it, so it calls the WaitOne method on the mutex to acquire ownership.
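Because a named mutex is visible across processes, the same pattern extends to interprocess synchronization. In this sketch the mutex name "MyAppMutex" and the class name are assumptions chosen for illustration, and a one-second timeout on WaitOne avoids blocking forever:

```csharp
using System;
using System.Threading;

// Sketch: a named mutex can guard a resource shared by two applications.
// A second process opening a Mutex with the same name contends for the
// same ownership.
public static class NamedMutexDemo
{
    public static bool RunExclusive()
    {
        Mutex m = new Mutex(false, "MyAppMutex");
        if (!m.WaitOne(1000, false))   // wait up to one second for ownership
            return false;              // another process holds the mutex
        try
        {
            // exclusive section: touch the shared resource here
            return true;
        }
        finally
        {
            m.ReleaseMutex();
        }
    }
}
```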
Synchronization events are wait handles that are used to notify other threads that something has occurred or that a resource is available. They have two states: signaled and nonsignaled. There are two synchronization event classes: AutoResetEvent and the ManualResetEvent.
The AutoResetEvent class notifies one or more waiting threads that an event has occurred. It automatically changes its status back to nonsignaled when a waiting thread is released. Instances of the AutoResetEvent class are set to signaled using Set, but the state automatically becomes nonsignaled again as soon as a single waiting thread has been notified that the event became signaled. If no threads are waiting to listen to the event, the state remains signaled. This class cannot be inherited.
The ManualResetEvent class also notifies one or more waiting threads that an event has occurred, but its state must be set and reset manually. The state of a manual-reset event remains signaled until the ManualResetEvent.Reset method sets it to the nonsignaled state, and it remains nonsignaled until the ManualResetEvent.Set method changes it back to signaled. This class cannot be inherited.
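The difference between the two event classes can be sketched directly (the EventDemo class is an invention for illustration; WaitOne with a zero timeout just polls the current state without blocking):

```csharp
using System;
using System.Threading;

// Sketch contrasting the two event types: an AutoResetEvent releases one
// waiter and resets itself; a ManualResetEvent stays signaled until Reset.
public static class EventDemo
{
    public static bool AutoResetReleasesOneWaiter()
    {
        AutoResetEvent auto = new AutoResetEvent(false);
        auto.Set();                            // signaled
        bool first = auto.WaitOne(0, false);   // consumes the signal, auto-resets
        bool second = auto.WaitOne(0, false);  // nonsignaled again
        return first && !second;
    }

    public static bool ManualResetStaysSignaled()
    {
        ManualResetEvent manual = new ManualResetEvent(false);
        manual.Set();                            // signaled until Reset is called
        bool first = manual.WaitOne(0, false);
        bool second = manual.WaitOne(0, false);  // still signaled
        manual.Reset();
        bool third = manual.WaitOne(0, false);   // nonsignaled now
        return first && second && !third;
    }
}
```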
The Interlocked class (in the System.Threading namespace) helps synchronize access to variables that are shared among threads. It provides atomic operations on such shared variables.
You can increment or decrement a shared variable by calling Interlocked.Increment or Interlocked.Decrement on it. The advantage is that these methods operate in an "atomic" manner: each takes an integer, increments (or decrements) it and returns its new value, all in one indivisible step. You can also use this class to set a variable to a specific value (with the Interlocked.Exchange method), or to compare two values and, if they are equal, replace one of the variables with a given value (the Interlocked.CompareExchange method).
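The four operations just described can be sketched in one short method (the InterlockedDemo name is an invention for illustration):

```csharp
using System.Threading;

// Sketch of the Interlocked operations described above: each call reads,
// modifies and writes the shared variable in a single atomic step.
public static class InterlockedDemo
{
    public static int Run()
    {
        int shared = 5;
        Interlocked.Increment(ref shared);               // shared is now 6
        Interlocked.Decrement(ref shared);               // shared is now 5
        int old = Interlocked.Exchange(ref shared, 10);  // shared = 10, old = 5
        // replace 10 with 20 only if shared still equals 10
        Interlocked.CompareExchange(ref shared, 20, 10); // shared is now 20
        return shared + old;                             // 20 + 5
    }
}
```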
The ReaderWriterLock class (in the System.Threading namespace) defines a lock that works on a single-writer/multiple-reader mechanism, offering read/write-aware synchronization. Any number of threads can read data concurrently; locking is needed only when threads are updating. Reader threads can acquire the lock if, and only if, there are no writer threads; writer threads can acquire it only if there are no reader and no writer threads. Hence, once a writer lock is requested, no new readers are accepted until the writer has had access. The class supports time-outs, thus helping to prevent deadlocks, and it also supports nested reader/writer locks.
The function that supports nested reader locks is ReaderWriterLock.AcquireReaderLock. The calling thread blocks if a different thread holds the writer lock.
The function that supports nested writer locks is ReaderWriterLock.AcquireWriterLock. The calling thread blocks if a different thread holds the reader lock; worse, it can deadlock if the calling thread itself already holds the reader lock.
To go from reading to writing, it is always safe to use the ReaderWriterLock.UpgradeToWriterLock function instead, which upgrades the reader thread to a writer.
You can also change a writer thread back to a reader with the ReaderWriterLock.DowngradeFromWriterLock function. You can call ReaderWriterLock.ReleaseLock to release the lock entirely, and ReaderWriterLock.RestoreLock to restore the lock state of the thread to what it was before ReaderWriterLock.ReleaseLock was called.
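The acquire/release and upgrade/downgrade calls above fit together as follows. This is a sketch under assumed names (RwDemo, rw, data); the try/finally blocks guarantee each acquired lock is released even if the guarded code throws:

```csharp
using System;
using System.Threading;

// Sketch of ReaderWriterLock usage: many readers may hold the lock at once,
// a writer needs exclusive access, and a reader can upgrade when it must write.
public static class RwDemo
{
    static readonly ReaderWriterLock rw = new ReaderWriterLock();
    static int data = 0;

    public static int Read()
    {
        rw.AcquireReaderLock(Timeout.Infinite);
        try { return data; }
        finally { rw.ReleaseReaderLock(); }
    }

    public static void Write(int value)
    {
        rw.AcquireWriterLock(Timeout.Infinite);
        try { data = value; }
        finally { rw.ReleaseWriterLock(); }
    }

    public static void ReadThenMaybeWrite(int value)
    {
        rw.AcquireReaderLock(Timeout.Infinite);
        try
        {
            if (data != value)
            {
                // upgrade: temporarily become the (sole) writer
                LockCookie cookie = rw.UpgradeToWriterLock(Timeout.Infinite);
                try { data = value; }
                finally { rw.DowngradeFromWriterLock(ref cookie); }
            }
        }
        finally { rw.ReleaseReaderLock(); }
    }
}
```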
This brings us to the end of our tour of thread synchronization on the .NET platform; the coming articles in this series will show, via examples, how to apply it. Remember that although thread synchronization is invaluable in multithreaded applications, it must be done very carefully, or it will lead to problems such as data races and deadlocks that can bring your entire application down. It is a difficult technique to master, and only with a lot of practice will you fully reap its benefits. Always do as little as possible within synchronized methods, and try to avoid operations inside a synchronization block that might not complete or might block indefinitely, particularly I/O. As far as possible, use local variables rather than global variables, and synchronize only the part of the code that touches state shared by more than one thread. Arrange your code so that each piece of data is controlled by exactly one thread; data that is not shared between threads is always safe. In the next article we will learn about the System.Threading.ThreadPool class and a few other classes and concepts related to multithreading.
About the Author: Manisha Mehta has been programming on the Java platform and is now actively involved with the .NET platform.