
Buying Concurrency with Memory

29 Oct 2014

A while ago, I wrote about a way to efficiently calculate percentiles. Whether you take that sampling approach or not, you end up with a slice of values that you want to process. Consider this naive implementation:

type Stats struct {
  sync.Mutex
  timings []time.Duration
}

// happens on every request
func (s *Stats) hit(d time.Duration) {
  s.Lock()
  s.timings = append(s.timings, d)
  s.Unlock()
}

// happens on a background job
func (s *Stats) calculate() {
  s.Lock()
  // calculate our percentile
  // relatively slow, we need to sort s.timings!
  s.Unlock()
}

Whenever we call calculate, we block everyone else. One solution is to copy the values:

// happens on a background job
func (s *Stats) calculate() {
  s.Lock()
  c := make([]time.Duration, len(s.timings))
  copy(c, s.timings)
  s.timings = s.timings[:0]
  s.Unlock()
  // calculate our percentile against c
}

This works as long as copying the values is quicker than the processing we need to do. We also have to consider that the code above allocates a new slice, and generates garbage, on every call.

The approach I decided to take, which is nothing novel, is to keep two slices and swap them:

type Stats struct {
  sync.Mutex
  timingsA []time.Duration
  timingsB []time.Duration
}

// happens on every request
func (s *Stats) hit(d time.Duration) {
  s.Lock()
  s.timingsA = append(s.timingsA, d)
  s.Unlock()
}

// happens on a background job
func (s *Stats) calculate() {
  s.Lock()
  // swap
  s.timingsA, s.timingsB = s.timingsB, s.timingsA
  s.timingsA = s.timingsA[:0]
  s.Unlock()
  // calculate based on s.timingsB
}

We've drastically reduced how long we hold the lock. We no longer allocate, copy or sort within it; we merely swap a couple of slice headers. And because the two slices are reused call after call, we avoid unnecessary allocations and GC.

Nothing groundbreaking, but still neat.
