Race condition in C#

A race condition occurs in multi-threaded or distributed systems when two or more threads access shared data and try to change it at the same time. As a result, the values of variables become unpredictable, varying with the timing of context switches between the processes or threads.


In other words, a race condition leads to unpredictable behavior where the output depends on the sequence or timing of other uncontrollable events. It gets its name from the metaphor that the threads or processes are racing to complete an operation, and the outcome of the race determines the result of the operation.

Race conditions can lead to unpredictable results and subtle program bugs. A common example of a race condition in programming is a check-then-act sequence. For example, a multi-threaded program may check whether a variable is null before it creates an instance of an object. If two threads execute this sequence and creating the instance takes some time, the second thread may also find the variable to be null and create a new object, leading to two objects being created when only one was expected.
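As a rough sketch of that check-then-act pattern (the class and member names here are only illustrative), consider a lazily created object:

class Cache
{
    // Shared field that is checked and then assigned without any synchronization.
    private static ExpensiveObject instance;

    public static ExpensiveObject GetInstance()
    {
        if (instance == null)                 // check
        {
            // Another thread can pass the same null check here before the
            // assignment below completes, so two objects may be created.
            instance = new ExpensiveObject(); // act
        }
        return instance;
    }
}

class ExpensiveObject { }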

Synchronization primitives such as locks, semaphores, and monitors prevent race conditions by ensuring that certain sections of code (critical sections) do not execute concurrently. Used properly, they make multi-threaded code correct and easier to reason about; used incorrectly, they can lead to deadlocks or resource starvation, so care must be taken.
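For example, the check-then-act sequence above can be wrapped in a critical section with C#'s lock statement (which uses a monitor underneath). This sketch reuses the ExpensiveObject class from the previous example, and the lockObject field name is illustrative:

class Cache
{
    private static readonly object lockObject = new object();
    private static ExpensiveObject instance;

    public static ExpensiveObject GetInstance()
    {
        lock (lockObject)   // only one thread at a time can execute this block
        {
            if (instance == null)
                instance = new ExpensiveObject();
        }
        return instance;
    }
}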

Here’s an example of a race condition with a static counter variable in C#. One thread is incrementing the counter and another is decrementing it.

using System;
using System.Threading;

class Program
{
    static int counter = 0;

    static void Main()
    {
        Thread thread1 = new Thread(() =>
        {
            for (int i = 0; i < 1000000; i++)
                counter++;
        });

        Thread thread2 = new Thread(() =>
        {
            for (int i = 0; i < 1000000; i++)
                counter--;
        });

        thread1.Start();
        thread2.Start();

        thread1.Join();
        thread2.Join();

        Console.WriteLine(counter); // The output can be unpredictable because of the race condition
    }
}

In this example, the final output is unpredictable because of the race condition: both threads read, modify, and write the shared counter variable concurrently, and their individual steps can interleave.
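The underlying reason is that counter++ (and counter--) is not a single atomic operation. Conceptually it expands to a read-modify-write sequence like the sketch below, and the two threads' steps can interleave:

static void IncrementCounter()
{
    // counter++ is effectively three separate steps, not one atomic operation:
    int temp = counter;   // 1. read the current value
    temp = temp + 1;      // 2. increment the local copy
    counter = temp;       // 3. write the result back

    // If the other thread reads counter between steps 1 and 3,
    // one of the two updates is lost.
}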

To fix this issue, you can use the Interlocked.Increment and Interlocked.Decrement methods, which perform the increment and decrement operations atomically:

using System;
using System.Threading;

class Program
{
    static int counter = 0;

    static void Main()
    {
        Thread thread1 = new Thread(() =>
        {
            for (int i = 0; i < 1000000; i++)
                Interlocked.Increment(ref counter);
        });

        Thread thread2 = new Thread(() =>
        {
            for (int i = 0; i < 1000000; i++)
                Interlocked.Decrement(ref counter);
        });

        thread1.Start();
        thread2.Start();

        thread1.Join();
        thread2.Join();

        Console.WriteLine(counter); // The output will be 0
    }
}

In this fixed version, Interlocked.Increment and Interlocked.Decrement ensure that the increment and decrement operations are performed atomically, so the final output is 0 as expected: the number of increments and decrements is the same, so they cancel each other out and the counter ends at its initial value of 0, regardless of the order in which the operations run. This is a simple example; real-world multithreaded programs can be far more complex, and proper synchronization is essential to ensure their correctness.
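If you need to protect more than a single increment or decrement, the same counter could instead be guarded with a lock statement. The following sketch (the counterLock field and method names are illustrative) is an equivalent, though slightly heavier, alternative to the Interlocked calls:

static readonly object counterLock = new object();

static void SafeIncrement()
{
    lock (counterLock)   // critical section: one thread at a time
    {
        counter++;       // the read-modify-write can no longer interleave
    }
}

static void SafeDecrement()
{
    lock (counterLock)
    {
        counter--;
    }
}

The two threads would call SafeIncrement and SafeDecrement instead of the Interlocked methods, and the final output would again be 0.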

