Concurrent Reads with Serialized Writes

Nov 20, 2013

If anyone knows the name for this pattern, I'd very much like to know.

Let's start with a simple example of using a read-write mutex:

var (
  lock   sync.RWMutex
  lookup = make(map[string]string)
)

func Get(key string) string {
  lock.RLock()
  defer lock.RUnlock()
  return lookup[key]
}

func Set(key, value string) {
  lock.Lock()
  defer lock.Unlock()
  lookup[key] = value
}

This allows us to have concurrent reads and a single writer. It's simple and effective. The problem I was facing was that my write, while infrequent, could take a relatively long time, which blocked readers. What I realized, though, was that while I needed to serialize writes, much of the writing code could in fact run concurrently with reads. For example, what if we're dealing with a slice we want to append to:

var values []string

func Len() int {
  lock.RLock()
  defer lock.RUnlock()
  return len(values)
}

func Append(value string) {
  lock.Lock()
  defer lock.Unlock()
  // the copy is the slow part, and the write lock is held for all of it
  l := len(values)
  newValues := make([]string, l+1)
  copy(newValues, values)
  newValues[l] = value
  values = newValues
}

Even though the shared values slice is only briefly updated, Append's write lock could be held for a relatively long time. A naive solution might be to do:

func Append(value string) {
  // do the slow copy under a read lock, so readers aren't blocked
  lock.RLock()
  l := len(values)
  newValues := make([]string, l+1)
  copy(newValues, values)
  lock.RUnlock()

  newValues[l] = value

  // only the final swap needs the write lock
  lock.Lock()
  values = newValues
  lock.Unlock()
}

This definitely performs better, but concurrent calls to Append could easily result in lost writes. Don't see how? Imagine we have an array ["a", "b", "c"] and two calls to Append happen at the same time: Append("d") and Append("e"). Both calls would create a new array with 4 slots, copy the original 3 values and append their respective value. Whichever call got the write lock last would win. However, the desired result would likely be an array with 5 values.
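If you want to see the lost writes for yourself, here's a minimal sketch (the main function and the count of 1000 are mine, not part of the pattern): hammer the naive Append from many goroutines and check the final length. Run it a few times, or with go run -race, and it will usually come up short:

func main() {
  var wg sync.WaitGroup
  for i := 0; i < 1000; i++ {
    wg.Add(1)
    go func(i int) {
      defer wg.Done()
      Append(strconv.Itoa(i)) // the naive version above
    }(i)
  }
  wg.Wait()
  // with the naive Append this frequently prints less than 1000
  fmt.Println(len(values))
}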

The solution? A second lock, used only for writing:

var writeLock sync.Mutex

func Append(value string) {
  // these two lines are new: only one writer at a time gets past here
  writeLock.Lock()
  defer writeLock.Unlock()

  // because writes are serialized, nothing can swap values out from
  // under us while we copy, so no read lock is needed here
  l := len(values)
  newValues := make([]string, l+1)
  copy(newValues, values)

  newValues[l] = value

  // readers are only blocked for this brief swap
  lock.Lock()
  values = newValues
  lock.Unlock()
}

The new lock ensures that writes are serialized, so one won't stomp another. With this guard in place, we can use our main lock at a finer granularity, allowing concurrent reads even during the slow part of our write (in this case, the copy).
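The same shape works for the map from the first example. Here's a rough sketch (SetAll and its rebuild loop are my invention, standing in for whatever slow work produces the new state):

func SetAll(data map[string]string) {
  writeLock.Lock()
  defer writeLock.Unlock()

  // slow part: build the replacement map while readers carry on
  rebuilt := make(map[string]string, len(data))
  for k, v := range data {
    rebuilt[k] = v
  }

  // brief part: swap in the new map
  lock.Lock()
  lookup = rebuilt
  lock.Unlock()
}

Readers calling Get only contend with that final swap, no matter how long the rebuild takes.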

This won't always work. It depends on what your write is doing. But I'm rather fond of the approach. (Thanks to @pchapuis for further reducing the amount of locking required.)