
Understanding the problem of synchronization

In most cases, using multiple threads means that some data is accessed from more than one thread. For simplicity, let's assume there are three global variables that can be accessed from different threads:

// Global data definition with initial values
int    g_someCounter = 0;
double g_someValue   = 0.0;
bool   g_ready       = false;

Assume that the data is changed by some worker thread, which signals the end of its work by setting the ready flag.

// Thread changing global data
g_someCounter = 47;
g_someValue   = 3.14;
g_ready       = true;

Some other thread may want to process the data once it is available:

// Thread using global data
if (g_ready)
{
    myCounter += g_someCounter;
    myValue = g_someValue * 2;
}
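
Put together as one program, the two fragments above look roughly like the following sketch. It is only a minimal illustration; the use of std::thread, the worker/reader function names, and the main function are assumptions, the original only shows the fragments.

// Sketch: both fragments combined, still without any synchronization
#include <iostream>
#include <thread>

int    g_someCounter = 0;
double g_someValue   = 0.0;
bool   g_ready       = false;

void worker()
{
    // thread changing global data
    g_someCounter = 47;
    g_someValue   = 3.14;
    g_ready       = true;
}

void reader()
{
    int    myCounter = 0;
    double myValue   = 0.0;
    // thread using global data
    if (g_ready)
    {
        myCounter += g_someCounter;
        myValue = g_someValue * 2;
    }
    std::cout << myCounter << ", " << myValue << "\n";
}

int main()
{
    std::thread t1(worker);
    std::thread t2(reader);
    t1.join();
    t2.join();
    return 0;
}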

This code has at least the following problems:

  • possible: changed execution order
    The compiler may generate optimized code that first sets the ready flag and then changes the other values (the CPU may reorder the stores as well). As a consequence, the consuming thread may work with wrong (or half-written) data.
  • possible: caching strategy
    Without synchronization, changes made by one thread may stay in that core's cache or registers and never become visible to the other thread.
  • data race
    One thread writes while another thread reads or writes the same memory location (e.g. the double value g_someValue) without synchronization; the result depends on the unpredictable thread scheduling.
  • race condition
    The same problem at a higher level: two threads work on shared data, and the result depends on the unpredictable order in which they execute.

For more information about “race condition” and “data race”, see blog.regehr.org.
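
One common way to address these problems is to synchronize the accesses, for example by making the ready flag a std::atomic<bool> (a mutex would work as well). The following is only a minimal sketch of that idea, not code from the original fragments:

// Sketch: publishing the data through an atomic ready flag
#include <atomic>
#include <iostream>
#include <thread>

int    g_someCounter = 0;
double g_someValue   = 0.0;
std::atomic<bool> g_ready{false};   // atomic flag instead of a plain bool

void worker()
{
    g_someCounter = 47;
    g_someValue   = 3.14;
    // release store: the writes above become visible before g_ready reads as true
    g_ready.store(true, std::memory_order_release);
}

void reader()
{
    int    myCounter = 0;
    double myValue   = 0.0;
    // acquire load: once true is seen, the data written before the release is visible
    if (g_ready.load(std::memory_order_acquire))
    {
        myCounter += g_someCounter;
        myValue = g_someValue * 2;
    }
    std::cout << myCounter << ", " << myValue << "\n";
}

int main()
{
    std::thread t1(worker);
    std::thread t2(reader);
    t1.join();
    t2.join();
    return 0;
}

With the release/acquire pair, neither the compiler nor the CPU may move the data writes past the flag store, and the consuming thread is guaranteed to see the finished values once it observes g_ready == true.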