
Left-Right: Wait-free Reading While Writing


In some previous posts (here and here) I’ve described Pedro Ramalhete and Andreia Craveiro’s Left-Right, a concurrency construction that enables concurrent reading and writing of a data structure, while keeping reads wait-free. My last post described a variation of Left-Right where a writer may be forced to wait indefinitely — i.e., be starved. In this post, I’ll describe how writer starvation can be avoided.

Left-Right Features Refresher

As a quick refresher (once again), the key features of Left-Right are:

  1. It’s generic, works with any data structure, and doesn’t affect the time complexity of any operations.
  2. Reads and writes can happen concurrently.
  3. Reading is always wait free.
  4. No allocation is required, so no garbage is generated, and no garbage collector is required.
  5. A writer cannot be starved by readers.

In my last post, the implementation of Left-Right I presented had all these properties except the last. Let’s see how write starvation can be avoided.

Solving Write Starvation

Let’s remind ourselves how writing works on the variation of Left-Right I last described:

void Write()
{
    var readIndex = _readIndex;
    var nextReadIndex = Toggle(readIndex);
    write(_instances[nextReadIndex]);
    _readIndex = nextReadIndex;
    // This wait is the source of starvation:
    while (_readIndicator.IsOccupied) ;
    write(_instances[readIndex]);
}

Recall that readers mark _readIndicator to communicate their reads to the writer. It is the wait on _readIndicator.IsOccupied that opens up the possibility of writer starvation: new readers could continuously arrive, indefinitely delaying an in-progress write.
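The Toggle helper that appears in these snippets isn't defined anywhere in the listings; it simply flips an index between 0 and 1. A minimal version might be:

```csharp
// Flips a Left-Right index between 0 and 1.
int Toggle(int index) => index ^ 1;
```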

Clearly, for the writer to avoid starvation it cannot wait on a _readIndicator that new readers can mark. This need echoes the original idea of Left-Right: keep two copies of the data structure so that writes can proceed during reads. So the key to avoiding starvation is to also have two read-indicators.
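The posts don't pin down what a read-indicator is; any object exposing Arrive, Depart and IsOccupied will do. As a deliberately simple sketch (my own illustration: a single atomic counter suffices for correctness, though every reader then contends on one cache line, which is why real implementations often distribute the count):

```csharp
using System.Threading;

// What readers and writers need from a read-indicator.
interface IReadIndicator
{
    void Arrive();            // reader: "my read has begun"
    void Depart();            // reader: "my read is finished"
    bool IsOccupied { get; }  // writer: "is any read in progress?"
}

// A minimal read-indicator: one shared atomic counter.
class CounterReadIndicator : IReadIndicator
{
    private int _count;
    public void Arrive() => Interlocked.Increment(ref _count);
    public void Depart() => Interlocked.Decrement(ref _count);
    public bool IsOccupied => Volatile.Read(ref _count) != 0;
}
```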


Reading is much as with a single read-indicator; there is just one additional step: readers first read a variable _versionIndex that determines which read-indicator they will mark:

// An array of the two instances of the data
// structure we are coordinating access to.
List<int>[] _instances;
// An index (either 0 or 1) that tells readers
// which instance they can safely read.
int _readIndex;
// An index (either 0 or 1) that tells readers
// which read-indicator they should mark.
int _versionIndex;
// An array of two 'read indicators' so a
// reader can inform writers that it has a
// read in progress.
IReadIndicator[] _readIndicators;

int Read()
{
    var versionIndex = _versionIndex;
    var readIndicator = _readIndicators[versionIndex];
    // Mark the indicator so writers know this read is in progress.
    readIndicator.Arrive();
    var idx = _readIndex;
    var result = read(_instances[idx]);
    readIndicator.Depart();
    return result;
}

This is so similar to the “No Version” read mechanism of the previous post that there isn’t much to say about it; instead, let’s look at writing, where the real differences lie.


Writing is again similar to the simpler “No Version” Left-Right, but includes two waits. Roughly, writing works as follows: write to the instance we’re sure no readers are reading; change _readIndex so readers read the version we just wrote to; wait for any outstanding reads on the instance we want to write next; change _versionIndex so readers mark the other read-indicator; wait for the outstanding reads to clear; and do the final write. That explanation is a bit of a mess, hopefully the code makes things a little clearer:

void Write()
{
    var readIndex = _readIndex;
    var nextReadIndex = Toggle(readIndex);
    write(_instances[nextReadIndex]);
    _readIndex = nextReadIndex;
    var versionIndex = _versionIndex;
    var nextVersionIndex = Toggle(versionIndex);
    while (_readIndicators[nextVersionIndex].IsOccupied) ;
    _versionIndex = nextVersionIndex;
    while (_readIndicators[versionIndex].IsOccupied) ;
    write(_instances[readIndex]);
}

Hopefully writing makes some intuitive sense. Getting a confident feel for exactly why it both enforces mutual exclusion between readers and writers and ensures writing is starvation-free takes a little more effort. We’ll try to do that in the next section.
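Putting the pieces together, here is a self-contained, single-writer sketch of the whole construction guarding a List&lt;int&gt;. The class shape and the bare int-counter read-indicators are my own illustration (not an API from the posts), and memory fencing is deliberately ignored, just as in the listings above:

```csharp
using System.Collections.Generic;
using System.Threading;

// A single-writer sketch of Left-Right guarding a List<int>.
// Illustrative only: memory fencing is ignored, and multiple
// writers would additionally need to serialize with a mutex.
class LeftRightList
{
    private readonly List<int>[] _instances = { new List<int>(), new List<int>() };
    private readonly int[] _readCounts = new int[2]; // two trivial read-indicators
    private int _readIndex;                          // instance readers may read
    private int _versionIndex;                       // indicator readers mark

    private static int Toggle(int i) => i ^ 1;
    private bool IsOccupied(int v) => Volatile.Read(ref _readCounts[v]) != 0;

    public int Read(int idx)
    {
        var versionIndex = _versionIndex;
        Interlocked.Increment(ref _readCounts[versionIndex]); // Arrive
        var result = _instances[_readIndex][idx];
        Interlocked.Decrement(ref _readCounts[versionIndex]); // Depart
        return result;
    }

    public void Write(int v)
    {
        var readIndex = _readIndex;
        var nextReadIndex = Toggle(readIndex);
        _instances[nextReadIndex].Add(v);        // first mutation: the unread instance
        _readIndex = nextReadIndex;              // direct readers at it
        var versionIndex = _versionIndex;
        var nextVersionIndex = Toggle(versionIndex);
        while (IsOccupied(nextVersionIndex)) { } // drain readers on the other indicator
        _versionIndex = nextVersionIndex;
        while (IsOccupied(versionIndex)) { }     // drain readers on this indicator
        _instances[readIndex].Add(v);            // second mutation: now safe
    }
}
```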

Why This is Correct

Mutual Exclusion

The argument for correctness here is similar to the one for the version of Left-Right presented in the previous post. The key difference is that there is an additional piece of state to consider: _versionIndex. To keep the argument clear, let’s start with some definitions.

Let’s define the end-state of a reader as the values of _versionIndex and _readIndex it has respectively witnessed at the time it calls Depart. So there are four possible reader end-states: (0, 0), (0, 1), (1, 0) and (1, 1). After a reader calls Depart it is said to complete and is not defined to be in any state.

Let’s also divide readers into two categories: pre-arrival readers and post-arrival readers. Pre-arrival readers have not yet called Arrive on a read-indicator, while post-arrival readers have.

Let’s introduce a small bit of notation for writes. By a writer “performing” a Wait(V, R) we mean the writer waited until _readIndicators[V].IsOccupied returned false while _readIndex was equal to R.

Now we can note the elimination property of Wait(V, R): after a writer performs Wait(V, R), there can be no reader with end-state (V, Toggle(R)). This is because any pre-arrival reader will witness _readIndex = R, so its end-state must be of the form (·, R), while any post-arrival reader that has marked _readIndicators[V] completes before the wait ends.

Now consider the two waits performed during a Write. Let’s refer to the values of _versionIndex and _readIndex at the beginning of the write as V0 and R0 respectively:

  1. The first wait is of the form Wait(Toggle(V0), Toggle(R0)), which by the elimination property means there can be no reader with end-state (Toggle(V0), Toggle(Toggle(R0))) = (Toggle(V0), R0).
  2. The second wait is of the form Wait(V0, Toggle(R0)), which by the elimination property means there can be no reader with end-state (V0, Toggle(Toggle(R0))) = (V0, R0).

Now we can spell out how these waits enforce mutual exclusion. A write is mutually exclusive with a read if it is writing to instance _readIndex and there are no readers with end-states (0, _readIndex) or (1, _readIndex). But these are exactly the end-states eliminated by the two waits performed during a Write before it performs its second mutation of the data structure (i.e., write(_instances[readIndex])). Thus the second mutation is mutually exclusive with readers.

Given that this second mutation is mutually exclusive with readers, then clearly a subsequent call to Write can safely mutate this instance first, without waiting. This just leaves the very first mutation performed by the very first call to Write, which must be mutually exclusive with readers since it targets the toggle of the initial value of _readIndex.

The above stops short of formalising things into a proof (although it wouldn’t be too much more effort), but it’s enough to be fairly confident that Left-Right enforces mutual exclusion.

The whole point of this more complicated version of Left-Right compared to the one in my last post is that writers cannot be starved by readers, so let’s look at that next.

Starvation Freedom

What is meant by starvation freedom of the writer is that any wait performed by a Write operation must end once the readers that were in progress at the moment the wait began have completed.

It’s pretty clear that both of the waits performed in Write satisfy this: both are of the form Wait(Toggle(V), ·), where V is the current value of _versionIndex. That is, while new readers mark _readIndicators[V], the wait depends only on the readers that marked _readIndicators[Toggle(V)] before the wait began, and so the wait ends when those existing readers complete.

Next Time

There is still a large gap in the description of Left-Right given so far: it omits all details of memory fencing. The descriptions above rely on viewing the code as sequentially consistent, and an actual implementation is likely to diverge from this. I’ll try to close this gap in a subsequent post, using RelaSharp to help. On this last point: in my experience, a good working assumption for code implementing a mutex (or for lock-free or wait-free code in general) is that any implementation that hasn’t been model-checked in some way is buggy, even where the underlying construction has been proved correct. The source code for this Left-Right implementation (including a simple RelaSharp test) is here, and there’s also a NuGet package available: SharpLeftRight

Reading While Writing Without Waiting

Reading and Writing

It’s fairly common to have a data structure that is occasionally mutated but often read, with the writing and reading occurring on different threads. This type of scenario can arise during initialization or in a loading phase. A bunch of data structures are built, with potentially many mutating threads, and from then on (or for the next long while) are only read, again potentially from many threads. The usual ways of dealing with this are reader-writer locks, or occasionally a lock-free approach, or perhaps even RCU.

A while ago I wondered if there were other generic, efficient ways to deal with this scenario, and I happened upon the very interesting so-called Left-Right construction. I thought it was so nice I’d even resurrect my blog to describe it!

Left-Right to the Rescue

Left-Right is a genuinely new way to deal with concurrent reading and writing of data structures, first described by Pedro Ramalhete and Andreia Craveiro (see here for a paper). After I first read about it, I decided I’d throw together a little C# implementation, as one apparently didn’t exist. I must also say that Pedro Ramalhete was extremely helpful and friendly in answering my queries about Left-Right.

But before describing how the Left-Right construction works, I’ll motivate things a little by revisiting a simpler generic solution to the problem of concurrent reading and writing of a data structure, namely lock-free copy-on-write. There are other examples that could also be used to motivate things, like an optimised reader-writer lock. Copy-on-write is handy to describe because it’s fairly self-contained and has similar-ish progress guarantees to Left-Right, and perhaps most importantly it is similarly an extremely general construction.


Making a data structure thread-safe via copy-on-write simply copies the entire data structure when a write occurs. This sounds expensive, but in some situations where the read-path latency is critical, it can be justified.

For a single writer, and many readers, an example of copy-on-write is something like this:

List<int> _data;

void Write(int v)
{
    var original = Volatile.Read(ref _data);
    var copy = new List<int>(original);
    copy.Add(v);
    Volatile.Write(ref _data, copy);
}

int Read(int idx) => Volatile.Read(ref _data)[idx];

This is neat in a few ways:

  • It’s very simple.
  • It’s generic, and works with any data structure.
  • Reads and writes can happen concurrently.
  • It relies only on a store-release by the writer and a load-acquire by the reader. So, on say x86/x64, the thread synchronisation is free.
  • If we are mostly reading (or after an initialisation phase, always reading) the reading path is very efficient, hardly different than when only a single thread is involved.
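To make that concrete, here is the single-writer pattern wrapped in a self-contained class (the class and its name are my own framing for illustration):

```csharp
using System.Collections.Generic;
using System.Threading;

// A hypothetical wrapper around the single-writer
// copy-on-write snippet above, for illustration.
class CowList
{
    private List<int> _data = new List<int>();

    // Single writer: copy, mutate the private copy, then
    // publish it with a store-release. Readers never see a
    // half-mutated list.
    public void Write(int v)
    {
        var copy = new List<int>(Volatile.Read(ref _data)) { v };
        Volatile.Write(ref _data, copy);
    }

    // A load-acquire gives the reader an immutable snapshot.
    public int Read(int idx) => Volatile.Read(ref _data)[idx];
}
```

A reader that captures the snapshot once can keep using it across concurrent writes, which is what makes the read path so cheap.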

It also has some disadvantages:

  • For large data structures, writing is very inefficient. For example, an amortized Θ(1) add-to-end of a list becomes a Θ(n) operation.
  • It creates garbage. Maybe worse, it requires a garbage collector of some kind: something has to free the memory associated with _data when it is replaced with the copy, and this may only happen once all readers are finished with the pre-write version.
  • In the presence of multiple writers, scaling is very poor – worse than blocking – as described next.

In the presence of multiple writers, things get worse. To support multiple writers, we need to do something like this:

void Write(int v)
{
    List<int> original, copy;
    do
    {
        original = Volatile.Read(ref _data);
        copy = new List<int>(original) { v };
        // Retry if another writer published a new list in the meantime.
    } while (Interlocked.CompareExchange(ref _data, copy, original) != original);
}

For workloads that are not overwhelmingly read-heavy, or for larger lists, this scales very poorly indeed. In fact, simple blocking completes more quickly, as we’ll see next.

A Micro-benchmark of Copy-on-Write

To get a slightly better feel for how copy-on-write behaves, a micro-benchmark can tell us a little. The graphs below compare three simple list implementations:

  • MultipleWriterCOWList: This is the copy-on-write list described above, and is the one we’re really talking about.
  • LockList: This is a list that is protected by a simple lock, i.e., it’s fully blocking for reads and writes.
  • RWLockList: This is a list protected by a reader-writer lock (using the ReaderWriterLockSlim in C#).

The following two graphs show some of the higher percentiles for the latency of an operation (either a read or write) on the lists, for 4 threads, a 100-element list and a workload of 95% reads (left), and 5% reads (right):

Looking at the left plot, the copy-on-write approach fares quite well. The right plot reminds us how much copy-on-write depends on the number of writes being comparatively small though. Incidentally, the fact that the reader-writer lock is so bad is interesting too, but a topic for another day!

Next we can look at how copy-on-write scales with the number of threads involved. Jumping up to 20 threads, things don’t go well for it:

The left plot shows something any lock-free algorithm is vulnerable to: outliers. Lock-freedom guarantees some thread makes progress at each step, but makes no guarantees about fairness or starvation. In the case of the left plot, above about the 95th percentile, the blocking list (“LockList”) has lower latency. On the right, the setup is the same except it consists of only 5% reads. In this case, the y-axis is the logarithm of latency. In other words the copy-on-write list is a scaling disaster.

The last obvious knob we can turn is the capacity of the list in question: the copy-on-write list must do more work the larger the list. Switching to 1000-element lists with 95% reads for 4 threads (left) and 20 threads (right) shows the copy-on-write list is again vulnerable to outliers, more so than a trivial fully blocking list (“LockList”):

Aside from the latency distributions, we can look at the throughput of the various types of list:

The four graphs above show that for a 1000-element list, the copy-on-write list always has lower throughput than the simpler blocking “LockList” alternative (again, why the reader-writer lock is so poor is a topic for another day!). Its throughput is only higher for a small, 100-element list, and then only for workloads that are overwhelmingly reads.

All the same, copy-on-write can still be attractive in the case where there is just a single writer. In that case, there is no question of outliers due to multiple writers competing (although outliers might still occur if the data structure were large).

It seems natural to wonder if there is some intermediate solution between reads being as cheap as in the single-threaded case, and writing involving copying the entire data structure. That is, is there a solution where reads involve more work than under copy-on-write, but writes require less? It turns out that the Left-Right construction fills this gap.

A First Look at Left-Right

Copy-on-write creates a copy of the data structure only at the moment a write occurs. Instead, what if we always maintained two copies of the data structure?

A write operation would always write to both instances, but it would only write to an instance when it can be certain that no reader is reading that instance.

This is basically the first idea of Left-Right. The remaining details are the ingenious mechanism for coordinating reading and writing threads cheaply and with strong progress guarantees.
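The shape of this idea, with all of the coordination machinery deliberately missing, is roughly the following (the class is my own illustration only, and is not safe as written: nothing yet tells the writer when an instance is free of readers):

```csharp
using System.Collections.Generic;

// Illustrative shape only: two full copies of the data structure.
// The hard part - knowing when no reader is looking at an instance -
// is exactly what the rest of Left-Right provides.
class TwoInstances
{
    private readonly List<int>[] _instances = { new List<int>(), new List<int>() };
    private int _readIndex; // which instance readers currently use

    public int Read(int idx) => _instances[_readIndex][idx];

    public void Write(int v)
    {
        _instances[1 - _readIndex].Add(v); // write the instance readers aren't using
        _readIndex = 1 - _readIndex;       // ...then direct readers at it
        _instances[1 - _readIndex].Add(v); // finally update the other instance
    }
}
```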

Left-Right combines the following surprisingly powerful guarantees:

  • It’s generic, works with any data structure, and doesn’t affect the time complexity of any operations.
  • Reads and writes can happen concurrently.
  • Reading is always wait free.
  • No allocation is required, so no garbage is generated, and no garbage collector is required.
  • A writer cannot be starved by readers.

Some disadvantages of Left-Right are:

  • It’s more complicated than copy-on-write.
  • It requires performing every mutation of the data structure twice.
  • Multiple writers block each other (but actually, this scales better than copy-on-write, and at least matches a simple blocking solution).

Next Time

This post is getting a bit long, but hopefully it has piqued your interest in Left-Right!

There is an evolving landscape of solutions to the type of problem addressed by Left-Right (as well as simple old copy-on-write). If you’re interested in an overview and a ‘state of the art’ approach, Chen et al’s 2016 paper Fast Consensus Using Bounded Staleness for Scalable Read-Mostly Synchronization is a great read.

In some future posts I hope to describe a C# implementation of Left-Right, and how to validate the implementation using RelaSharp. Be sure to check out the blog of Pedro Ramalhete and Andreia Craveiro where you can find lots of information on the creation of Left-Right.

On the off chance you’d like to see the (not very nice!) benchmarking code used for copy-on-write, it’s available at