# Knuth’s Wisdom

## Prologue

In a previous post I talked about trying to bring out the worst in sorting algorithms. Given the torrent of interest in that post, I’m going to continue with the sorting theme. So, in this post, I’m going to talk about a sorting algorithm that brings out the worst in a CPU: bubble sort. This is clearly a point of deep industrial and practical interest, so, fasten your seat belts!

It turns out that the real reason a bubble sort performs so badly on today’s CPUs is just a little surprising. Actually, I wrote about this some time ago with some friends in a paper, but it was a long paper, and I felt this fun little bubble sort nugget got a bit lost inside it.

## Bubble Sort

Mundane and all as it is, let’s remind ourselves what bubble sort is.

For an array of $n$ elements, bubble sort works by performing roughly $n$ sweeps over the array. A sweep just scans from left-to-right, swapping an element with its immediate right neighbour if that neighbour is smaller than it.

So in C, the sweep step might look like this:

    void maybe_swap(int* a, int i)
    {
        if (a[i + 1] < a[i])
        {
            /* swap a[i] and a[i + 1] */
            int tmp = a[i];
            a[i] = a[i + 1];
            a[i + 1] = tmp;
        }
    }

    void bubble_sweep(int* a, int n, int sweep_number)
    {
        for (int i = 0; i < n - sweep_number; ++i)
        {
            maybe_swap(a, i);
        }
    }

In all its glory, bubble sort just calls the function above with “sweep_number” ranging from 1 to $n - 1$. So it looks like this:

    void bubble_sort(int* a, int n)
    {
        for (int i = 1; i < n; ++i)
        {
            bubble_sweep(a, n, i);
        }
    }

I’m ignoring an improvement that can be made to bubble sort here. That is, if no elements are swapped in an entire sweep, the whole algorithm can just terminate. I’m leaving that improvement out because adding it makes the point I’m trying to make in this post a bit messier without adding anything very interesting.
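
Just for reference, that variant might look something like this (a sketch; none of the measurements below use it):

    void bubble_sort_early_exit(int* a, int n)
    {
        for (int i = 1; i < n; ++i)
        {
            int swapped = 0;
            for (int j = 0; j < n - i; ++j)
            {
                if (a[j + 1] < a[j])
                {
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                    swapped = 1;
                }
            }
            /* A sweep with no swaps means the array is already sorted. */
            if (!swapped)
            {
                return;
            }
        }
    }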

Without getting too side-tracked into implementations, I can’t resist including one in Clojure:

    (defn vec-swap [v i j] (assoc v i (v j) j (v i)))
    (defn maybe-swap [v i] (let [j (+ i 1)] (if (> (v i) (v j)) (vec-swap v i j) v)))
    (defn bubble-sweep [v sweep-num]
      (reduce maybe-swap v (range (- (count v) sweep-num))))
    (defn bubble-sort [v] (reduce bubble-sweep v (rest (range (count v)))))

With our coding done, we can run a simple experiment that has a surprising result.

## An Experiment

A quick look at the bubble sort implementation above should convince us it takes $\Theta(n^2)$ time. This is obvious: each call to “bubble_sweep” takes $\Theta(n)$ time, and we make $\Theta(n)$ calls.

If we were feeling very curious, we might try timing individual calls to “bubble_sweep” as it’s called by bubble sort, just to see that the time taken is indeed linear each time. If we do that, here’s what we get:

The vertical axis here is time (shown in CPU cycles), and the horizontal axis is the “sweep_number” from our code. The input array is 50,000 32-bit integers.
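
In case you want to gather numbers like these yourself, something like the following works on x86 with GCC or Clang (a rough sketch: no serialisation around the timestamp reads, and no averaging over repeated runs):

    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h> /* __rdtsc() */

    void time_sweeps(int* a, int n)
    {
        for (int sweep = 1; sweep < n; ++sweep)
        {
            uint64_t start = __rdtsc();
            bubble_sweep(a, n, sweep);
            uint64_t cycles = __rdtsc() - start;
            printf("%d %llu\n", sweep, (unsigned long long)cycles);
        }
    }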

I don’t know about you, but until now I felt like I understood the performance of the humble-looking “bubble_sweep”. Now, I’m confused. Of course we know it’s really taking $\Theta(n)$ time per sweep. It just looks like the constant hidden in there is varying pretty strangely. How come?

## Not the Caches

A natural first thought is that maybe the number of swaps varies per sweep, and more swaps are more expensive? Maybe things are exacerbated by a CPU cache effect?

This is jumping the gun a little. It’s not a cache effect, because the entire array fits in the (L2) cache where I did the measurements. The first half of this idea is good though.

The number of swaps does vary per sweep, and the real expense is due to branch mispredictions.

## Due to What?

Let’s quickly remind ourselves what branch mispredictions are. CPUs don’t work on just a single instruction at a time. Instead, they divide the work for each instruction into, say, 5 stages. Then, after one instruction finishes stage 1, the CPU fetches the next instruction and starts it in stage 1 (while the first instruction is now in stage 2). This is called pipelining. In this case we’d have a 5-stage pipeline.

So if processing a single instruction took time $T$, and we divide it into $N$ stages, we still take time $T$ to process a single instruction (and actually, a little longer because of the expense of administering the pipeline). On the other hand, once the pipeline is full, a new instruction is completed every $T/N$ units of time. This is great. There’s a little extra latency for each instruction, but we’ve improved throughput by a factor of $N$.
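
To put some made-up numbers on that: if $T = 5$ nanoseconds and $N = 5$, each instruction still spends about 5 nanoseconds in the pipeline, but once the pipeline is full, one instruction completes every $T/N = 1$ nanosecond.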

The wrinkle of course is that the CPU needs to know the next instruction to keep its pipeline full. When we have conditional branches (like that “if” statement in “bubble_sweep” above), the CPU can’t know the next instruction. So it has to make a guess. The hardware on the CPU to do this is called a branch predictor.

When the branch predictor correctly guesses the next instruction, the pipeline can stay full. When it doesn’t, the CPU has to abort the in-flight instructions and restart from the correct instruction. Depending on the length of the pipeline, this can be very expensive.

## Another Experiment

Let’s take a look at another experiment to try and confirm our branch prediction theory.

The top-most curve in this image, labelled “hardware predictor”, is the number of branch mispredictions caused by the calls to “bubble_sweep” in bubble sort, measured from the performance counter on the CPU.
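
For the curious, here’s roughly how such a per-sweep counter reading can be taken on Linux (a sketch using perf_event_open, with error handling omitted; I’m not claiming this is how the measurements above were gathered):

    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <string.h>

    /* Open a hardware counter for mispredicted branches on this thread. */
    int open_branch_miss_counter(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof attr);
        attr.size = sizeof attr;
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_BRANCH_MISSES;
        attr.disabled = 1;
        attr.exclude_kernel = 1;
        attr.exclude_hv = 1;
        return (int)syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }

    /* Mispredictions caused by a single call to bubble_sweep. */
    uint64_t sweep_mispredictions(int fd, int* a, int n, int sweep)
    {
        uint64_t count = 0;
        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        bubble_sweep(a, n, sweep);
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        if (read(fd, &count, sizeof count) != sizeof count)
        {
            count = 0;
        }
        return count;
    }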

It certainly looks like branch mispredictions are indeed what’s at work. But why?

## Intuitive Analysis

Looking at our experimental results again, it seems like bubble sort’s branches get harder and harder to predict, before becoming easier again. So what’s going on?

Thinking very intuitively, we might imagine that the swapping of array elements to the right in bubble sort is a kind of partial sorting. When the data is unordered, the branch is likely to be taken; when the data is somewhat ordered, it’s 50/50; and when the data is nearly sorted, it’s very unlikely to be taken. So perhaps it’s the passage between these two predictable extremes, through the unpredictable 50/50 middle, that confounds the branch predictor.

We can do better than this though.

## Detailed Analysis

Let’s think about the probability that a particular comparison branch is taken. So let $Q^k_l$ be the probability that the $l^{th}$ comparison branch of the inner-loop (i.e. “bubble_sweep”) is taken on the $k^{th}$ sweep of bubble sort.

Now, the $l^{th}$ inner-loop comparison branch can be taken only if $\texttt{a[l]}$ is not larger than all of $\texttt{a[0..l - 1]}$. For this to be the case, the $(l + 1)^{th}$ branch in the previous sweep must have been taken, since otherwise the previous sweep left the maximum of $\texttt{a[0..l - 1]}$ in $\texttt{a[l]}$. But the probability that the $(l + 1)^{th}$ branch of the previous sweep was taken is just $Q^{k-1}_{l + 1}$.

Given that the $(l + 1)^{th}$ branch of the previous sweep was taken, the probability that $\texttt{a[l]}$ is not larger than all of $\texttt{a[0..l - 1]}$ is $1 - 1/(l + 1)$, and so we have:

$Q^k_l = Q^{k-1}_{l + 1}\left(1 - \frac{1}{l + 1}\right)$

The penultimate step is just a result of the fact that the probability that the $l^{th}$ element of a random permutation is a left-to-right maximum is $1/l$.

Filling in the bases $Q^1_l = 1 - 1/(l + 1)$ for $1 \leq l < n$ and $Q^1_n = 0$, we see that when $l \leq n - k$ then $Q^k_l = \frac{l}{l + k}$, and $Q^k_l = 0$ otherwise.
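
(To check the closed form against the recurrence: if $Q^{k-1}_{l + 1} = \frac{l + 1}{(l + 1) + (k - 1)}$, then $Q^k_l = \frac{l + 1}{l + k}\left(1 - \frac{1}{l + 1}\right) = \frac{l}{l + k}$, and the base case matches too, since $1 - 1/(l + 1) = \frac{l}{l + 1}$.)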

So, as our experiment and our rough intuitive analysis both showed us, in the first sweep ($k = 1$) the branches are highly likely to be taken, while on the last sweep ($k = n - 1$) they are unlikely to be taken.

Or, said another way, this analysis tells us that initially left-to-right maxima are quite rare and branches are predictable, but the repeated shifting of them to the right peppers them throughout the array making branches unpredictable, until they are dense enough that things become predictable again.

## Matching the Hardware

To round off our mighty analysis, we can try and reproduce the curves we saw in our experiments where we measured the total number of branch mispredictions per sweep.

If we let $S(k)$ be the average total number of mispredictions in sweep number $k$, then we know
$S(k) = \sum_{l=1}^{n-k}M(Q^k_l)$, where $M(p)$ is the probability that a branch that is taken with probability $p$ is mispredicted.

One way of modelling $M$ is just to imagine an ideal branch predictor: if the branches are taken (independently) with probability $p$, then the ideal branch predictor can be wrong with probability at most $\min(p, 1 - p)$. This will, of course, lower bound the real number of branch mispredictions, because in reality branch predictors have to work hard just to guess what $p$ is.

Filling in for the ideal predictor gives $S(k) = \sum_{l=1}^{n-k}\frac{1}{l + k}\min(l, k)$. After a lot of messy algebra we’ll find that $S(k) = (1 + H_n + H_k - 2H_{2k})k$ when $k \leq n/2$ and $S(k) = n - (1 + H_n - H_k)k$ otherwise (here $H_n \approx \ln n$ is the $n^{th}$ harmonic number).
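
If you don’t fancy re-doing the algebra, a few lines of C can check the closed form against the direct sum (the 50,000 and the step size are just illustrative):

    #include <stdio.h>

    /* H_m = 1 + 1/2 + ... + 1/m */
    double harmonic(int m)
    {
        double h = 0.0;
        for (int i = 1; i <= m; ++i)
        {
            h += 1.0 / i;
        }
        return h;
    }

    int main(void)
    {
        int n = 50000; /* same size as the experiment above */
        for (int k = 1; k < n; k += 4999)
        {
            /* Direct sum of min(l, k) / (l + k) over l = 1 .. n - k. */
            double direct = 0.0;
            for (int l = 1; l <= n - k; ++l)
            {
                direct += (double)(l < k ? l : k) / (l + k);
            }
            double closed = (k <= n / 2)
                ? (1.0 + harmonic(n) + harmonic(k) - 2.0 * harmonic(2 * k)) * k
                : n - (1.0 + harmonic(n) - harmonic(k)) * k;
            printf("k = %5d: direct = %12.2f, closed form = %12.2f\n", k, direct, closed);
        }
        return 0;
    }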

Plotting this gives us the bottom-most curve, labelled “perfect predictor”, in the branch mispredictions plot above — lower bounding the real number of mispredictions, as we expect.

## Shaker Sort

Bubble sort has been around a long time, and people have tried to hack it into something more efficient. One attempt at this is shaker sort. In shaker sort, there are two “bubble_sweep”s that alternate. One shifts large elements right as we did above, and the other shifts small elements left (the opposite of what we did above).
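
Concretely, a shaker sort sketch reusing the maybe_swap from earlier might look like this (again without the early-exit refinement):

    void shaker_sort(int* a, int n)
    {
        int lo = 0;
        int hi = n - 1;
        while (lo < hi)
        {
            /* Forward sweep: push a large element to the right end. */
            for (int i = lo; i < hi; ++i)
            {
                maybe_swap(a, i);
            }
            --hi;
            /* Backward sweep: push a small element to the left end. */
            for (int i = hi; i > lo; --i)
            {
                maybe_swap(a, i - 1);
            }
            ++lo;
        }
    }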

Shaker sort still takes $\Theta(n^2)$ time, but is more efficient than a naive bubble sort in practice. Shaker sort is still a dog though. A selection sort implementation easily beats it, while being much simpler (and insertion sort is better still).

But, this is where the plot thickens: the reason shaker sort is so much slower than selection sort is almost entirely down to branch mispredictions. If you care to measure, shaker sort causes fewer cache misses than selection sort, and executes a small number of extra instructions — but performs much worse due to branch mispredictions. On an unpipelined machine, this bubble sort variant might actually beat selection sort.

## So What?

If there’s anything to conclude here it’s that, first and foremost, bubble sort is awful! Not only is it (in naive implementations) more instruction hungry and cache unfriendly than the other quadratic-time sorting algorithms — it manages to be fantastically confusing to branch predictors, on average at least.

The relative performance of the quadratic-time sorting algorithms is important in practice. This is the reason good quicksort implementations know to switch to insertion sort (which, as it happens, causes only a linear number of branch mispredictions despite executing a quadratic number of branches).

As a result, I think it’s important to understand what’s going on behind the simple looking code of the quadratic-time sorting algorithms. It also shows that even simple and very similar looking pieces of code can behave quite differently.

And lastly, all this messing with bubble sort reminds us of Don Knuth’s near omniscience. In Volume 3 of The Art of Computer Programming, he remarks:

In short, the bubble sort seems to have nothing to recommend it, except a catchy name and the fact that it leads to some interesting theoretical problems.

Knuth isn’t referring to instruction pipelines when he makes this comment, but somehow he’s right about how bubble sort gets on with them too!

Discussion at HackerNews
Code from our paper.

# Making good sorts go bad

## Prologue

Everyone loves sorting. Even people who don’t know they do, because our phones, tablets and laptops are always sorting. Keeping things in order is just part of life, and especially so if you’re a computer. As each internet packet boinks through the routers of the Internet, its IP address is compared in an ordered way. So these days little bits of sorting are happening almost constantly. As rain falls, sun shines and humanity breathes: computers sort.

A while ago, I needed to stress test a sorting-like algorithm. It turned out I needed something a bit smarter than say, McIlroy’s quicksort killer. So I came up with the naive (and somewhat impractical) algorithm described below. It’s just a bit of idle curiosity, but I hope it’s interesting. I’ll also present a minor “application” of the algorithm.

So: sorting algorithms sort data. Here, we’re going to enter the mind of the data. We’re also going to be bad. We’re going to think: As a bunch of data, how can I make the job of the sorting algorithm as hard as possible?

## Killing Quicksort

We all remember quicksort only runs in $\Theta(n \log n)$ time on average. Its worst case is quadratic. Yikes.

What’s more, the famous Doug McIlroy described a simple way of making pretty much any practical quicksort implementation go quadratic.

In C, we sort things by saying

    qsort(values, num_values, sizeof(int), compare);

McIlroy’s quicksort killer lives inside the compare function. It never lies, but it doesn’t have anything else in mind other than to make quicksort’s job difficult. To do this, it uses a simple heuristic to guess which element is the pivot, and assigns it the smallest possible value consistent with the results of previous calls to compare. The result is that any quicksort examining only a constant number of elements to determine its pivot will go quadratic, always.

## Inside the Killer

The killer works by regarding the entire array to be sorted as gas to begin with. An element of the array qualifies as gas if the sorting algorithm hasn’t asked enough questions about it yet to force the killer to reveal the element’s position in the sorted output. As the sorting algorithm asks questions about the input (by calling compare), it pins down where some elements belong in the output. The killer keeps track of this by freezing these elements into solid values. It always freezes gas into the smallest consistent solid value available.

The rules for returning results from the compare function are then simple, if we’re comparing:

- Two solid values: just compare them; the killer has already divulged their relative order.
- Solid to gas: the gas is always larger (when the killer freezes the gas later, it’ll be consistent about this).
- Gas to gas: freeze one of them with a value larger than any currently frozen value (i.e. $\texttt{num\_solid++}$).

The final detail is that, in gas to gas comparisons, we’ll freeze the one we think is the pivot. Since we know the partition step repeatedly compares elements to the pivot, we will freeze the gas element that has been involved in the previous comparison (tie-breaking with one or the other if neither has). This matches the goal of trying to freeze the pivot as soon as possible, and thus forcing it to be as small as possible.
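
To make those rules concrete, here’s a minimal sketch of a comparator along these lines (the names are mine, and McIlroy’s real code, linked below, is the definitive version). We “sort” an array of indices 0..n-1, and val[i] holds either a gas sentinel or the solid value frozen for index i:

    #include <stdlib.h>

    static int* val;       /* gas sentinel or frozen solid value, per index */
    static int gas;        /* sentinel value, larger than any solid value   */
    static int num_solid;  /* next solid value to freeze                    */
    static int candidate;  /* index we suspect is the current pivot         */

    static int killer_compare(const void* px, const void* py)
    {
        int x = *(const int*)px;
        int y = *(const int*)py;
        if (val[x] == gas && val[y] == gas)
        {
            /* Gas vs gas: freeze the suspected pivot as small as possible. */
            if (x == candidate)
            {
                val[x] = num_solid++;
            }
            else
            {
                val[y] = num_solid++;
            }
        }
        /* Whichever side is still gas was just compared: our new pivot guess. */
        if (val[x] == gas)
        {
            candidate = x;
        }
        else if (val[y] == gas)
        {
            candidate = y;
        }
        return val[x] - val[y]; /* the gas sentinel exceeds every solid value */
    }

    void run_killer(int n)
    {
        int* indices = malloc(n * sizeof *indices);
        val = malloc(n * sizeof *val);
        gas = n; /* any value >= n will do */
        num_solid = 0;
        candidate = 0;
        for (int i = 0; i < n; ++i)
        {
            indices[i] = i;
            val[i] = gas;
        }
        qsort(indices, n, sizeof *indices, killer_compare);
        free(indices);
        free(val);
    }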

McIlroy’s code is here.

## Keeping Secrets

McIlroy’s program is pretty neat, but it’s specific to quicksort. It’s imagining those pivot comparisons to decide what to freeze. Can we come up with something that’ll work against any old sorting algorithm, or sorting-like algorithm?

Let’s think about it this way — we’re the space of all possible input arrays. From the sorting algorithm’s point of view, we’re some particular array $x[1\ldots n]$. The sorting algorithm is going to ask us questions, like “Is $x[1] < x[2]$?”. Here’s the key: each time we answer, we want to give out as little information as possible.

## Secret Descendants

Let’s get down to business. We can represent the information the sorting algorithm has extracted about the input $x[1 \ldots n]$ as a directed acyclic graph (DAG) on $n$ nodes, labelled $V_1, \ldots, V_n$.

Initially, the DAG has no edges. An edge from $V_i$ to $V_j$ means that $x[i] < x[j]$. So if the sorting algorithm asks us “Is $x[i] < x[j]$?”, we either say “Yes”, and add an edge from $V_i$ to $V_j$ in the DAG, or say “No”, and add an edge from $V_j$ to $V_i$.

Now imagine $V_i$ has 10,000 descendants, and $V_j$ has only 1. If the sorting algorithm asks “Is $x[i] < x[j]$?” we had better say a loud “Yes”. Answering “No” tells it that $x[j]$ is smaller than $x[i]$, and therefore also smaller than the 10,000 elements corresponding to the descendants of $V_i$. Answering “Yes” instead tells the sorting algorithm about the relative order of just 3 elements.

## DAG in Pictures

Let’s look at a few pictures of how this plays out. Suppose the DAG adversary is being quizzed about an array $x[1\ldots 7]$, and it has already answered a few times, giving:

Now it gets asked “Is $x[1] < x[6]$?”. If it says “Yes” we are left with this DAG:

Whereas answering “No” gives this one:

Saying “No” told the sorting algorithm more than we had to. It got to know not only that $x[6] < x[1]$, but also that $x[6]$ is less than every one of $x[1]$’s descendants.

## Rules of Secret Descendants

Each time the sorting algorithm asks “Is $x[i] < x[j]$?”, the adversary answers as follows:

1. If there is a directed path from $V_i$ to $V_j$, it says “Yes”.
2. Otherwise, if there is a directed path from $V_j$ to $V_i$, it says “No”.
3. Otherwise, if $V_i$ has fewer descendants than $V_j$, it says “No” and adds an edge from $V_j$ to $V_i$.
4. Otherwise ($V_j$ has no more descendants than $V_i$), it says “Yes” and adds an edge from $V_i$ to $V_j$.

The first two rules just maintain consistency with previous answers the DAG adversary has given. The last two try and leak as little information as possible, just as was described earlier.
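
My actual implementation (mentioned at the end of the post) is in C#, but to pin the rules down, here’s an unoptimised sketch of the adversary in C. All the names are made up for illustration, and the recursive DFS is kept naive for clarity:

    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
        int n;
        int** adj;   /* adj[i]: direct successors of V_i (edge i -> j means x[i] < x[j]) */
        int* deg;    /* number of successors of V_i  */
        int* cap;    /* capacity of adj[i]           */
        char* seen;  /* scratch space for traversals */
    } dag;

    dag* dag_new(int n)
    {
        dag* d = calloc(1, sizeof *d);
        d->n = n;
        d->adj = calloc(n, sizeof *d->adj);
        d->deg = calloc(n, sizeof *d->deg);
        d->cap = calloc(n, sizeof *d->cap);
        d->seen = calloc(n, 1);
        return d;
    }

    static void add_edge(dag* d, int from, int to)
    {
        if (d->deg[from] == d->cap[from])
        {
            d->cap[from] = d->cap[from] ? 2 * d->cap[from] : 4;
            d->adj[from] = realloc(d->adj[from], d->cap[from] * sizeof(int));
        }
        d->adj[from][d->deg[from]++] = to;
    }

    /* Depth-first search, marking and counting every unvisited descendant of v. */
    static int dfs(dag* d, int v)
    {
        int count = 0;
        for (int e = 0; e < d->deg[v]; ++e)
        {
            int w = d->adj[v][e];
            if (!d->seen[w])
            {
                d->seen[w] = 1;
                count += 1 + dfs(d, w);
            }
        }
        return count;
    }

    /* Number of descendants of v; as a side effect, *reaches says whether
       there is a directed path from v to target. */
    static int descendants(dag* d, int v, int target, int* reaches)
    {
        memset(d->seen, 0, d->n);
        d->seen[v] = 1;
        int count = dfs(d, v);
        *reaches = d->seen[target];
        return count;
    }

    /* Answer "Is x[i] < x[j]?" while trying to leak as little as possible. */
    int adversary_says_less(dag* d, int i, int j)
    {
        int i_reaches_j, j_reaches_i;
        int di = descendants(d, i, j, &i_reaches_j);
        int dj = descendants(d, j, i, &j_reaches_i);
        if (i_reaches_j) return 1;   /* rule 1: stay consistent, "Yes" */
        if (j_reaches_i) return 0;   /* rule 2: stay consistent, "No"  */
        if (di < dj)                 /* rule 3 */
        {
            add_edge(d, j, i);
            return 0;
        }
        add_edge(d, i, j);           /* rule 4 */
        return 1;
    }

Wiring adversary_says_less up as a comparison function for a library sort is then just bookkeeping.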

## Does it Work?

If we implement the DAG adversary and give it a whirl against, say, C#’s List&lt;T&gt;.Sort function, here’s what happens:

Success! It certainly looks like we’ve hurt the C# library function — the red line is the number of comparisons against input size. The green line is a guessed-at quadratic matching the measured number of comparisons. This is kind of nice, because the adversary knows nothing in particular about the sorting algorithm’s implementation, unlike McIlroy’s nifty quicksort killer.

## Stepping Back

What’s really going on with this DAG? At any instant, it represents the information the sorting algorithm has extracted about the input. What’s more, any topological sorting of the DAG (i.e. a listing of the vertices such that $V_i$ always comes before $V_j$ if there is an edge from $V_i$ to $V_j$) is an ordering of $x[1\ldots n]$ consistent with the questions the sorting algorithm has asked.

Initially, there are no edges, and any ordering of the vertices of the DAG is a topological sort. When the sorting algorithm is done, there’ll be just a single topological sort possible. The DAG will just be one long chain of vertices.

So, the goal of the DAG adversary should be to ensure the number of topological sortings of the DAG is maximized after its answer. Looking at our pictures above, the “Yes” DAG has $5! = 120$ topological sortings, whereas the “No” DAG has only $2 \times 4! = 48$ topological sortings. Our “count the descendants” heuristic paid off. But why the heuristic, why not just count?

## Counting is Hard

So, we have a simple new adversary, right? When it gets asked a question, it sees whether adding the “Yes edge” or the “No edge” gives a DAG with a larger number of topological sortings, and answers accordingly.

Unfortunately, a little Googling convinces us we’d end up with an amazingly slow adversary. Counting topological sortings isn’t just NP-hard, it’s #P-complete.

What about an approximation? More Googling. That, too, is rather slow. Even approximately counting takes (roughly speaking) $O(n^3 (\log L(P))^2)$ time, where $L(P)$ is the number of topological sortings of the DAG (or, linear extensions of the partial order). In our case, early on, since the DAG has no edges, we’ll have $L(P) = n!$, and so the time just to answer one question (i.e. compare two elements!) will be, again ignoring messy terms, $O(n^5)$.

## Our DAG is Slow

Even with the simple count-the-descendants heuristic, it’s worth coming to terms with the fact that our DAG adversary is sadly quite slow. A simple implementation of the underlying DAG is an adjacency list representation. This will take $\Theta(1)$ time to add a new edge, $\Theta(n)$ time to query for a directed path, and $\Theta(n)$ time to count descendants.

This is pretty bad, especially if we’re driving a sorting algorithm into quadratic territory — our program will need cubic time! Perhaps worse, the DAG will have a quadratic number of edges in this case.

## Tweaking the DAG

There’s scarily extensive literature on maintaining transitive closures (or reductions) of DAGs — probably relevant if we want to optimize our implementation. Rather than do all that reading though, we’ll just attack the constant factors to get an implementation that’s fast enough to be of use.

It turns out just one idea gives about a factor of 2 speed-up: Our adversary is designed so that the queries for directed paths should almost always not find any path at all (or if you prefer, in the worst case, this is what must happen). This means our path queries will explore all the descendants of their source node. Aaaand, one more thing: we only ever count descendants having performed a path query.

With that little idea in hand, we can implement a DAG that’s only guaranteed to report descendant counts correctly if we’ve just performed a path query — which we’ll force to always explore all descendants. This should save about half the work of the adversary. You’ll be excited to know I’ve tested it and it does.

One or two other uninteresting tweaks give us about a 1.5x speed-up: removing some pretty LINQ and avoiding repeatedly clearing and initializing traversal data by working in “epochs”.
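
For the record, the “epochs” idea is the usual trick (a sketch, with invented names): stamp each node with the index of the traversal that last visited it, instead of clearing a visited array every time:

    static int* stamp;        /* allocated once, zero-initialised; stamp[v] == current_epoch means v was visited this traversal */
    static int current_epoch; /* starts at 0, so the first traversal uses epoch 1 */

    static void begin_traversal(void) { ++current_epoch; }
    static int  visited(int v)        { return stamp[v] == current_epoch; }
    static void mark_visited(int v)   { stamp[v] = current_epoch; }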

So, is there any use for this DAG adversary? Well, kind of.

A while ago, I wanted to stress test some selection algorithms. These are quicksort-like algorithms that, given some whole number $k$, rearrange an array $A$ such that $A[k]$ has everything less than it to its left, and everything greater to its right.

The details of the algorithms don’t matter too much. They’re based on Floyd and Rivest’s SELECT algorithm. Also, they’re variations on “introspective” algorithms. That is, they start by using one algorithm, and if they notice that algorithm approaching its worst case time, they switch to an algorithm with better worst case time. They do this because although the algorithm they switch to is better in the worst case, it’s much slower on average. The idea is to get the good average case performance of say, quicksort, with the worst case guarantee of heapsort.

In the case of these selection algorithms, the algorithm they switch to is labelled “Fallback” (red line). Here’s a plot of their performance against McIlroy’s quicksort killer:

The plot above should be a bit worrying. Both the Intro-SELECT and Hybrid-SELECT algorithms switch to the Fallback when they hit their primary algorithm’s worst case. They spend some time realizing things are going badly, so in those cases, they perform worse than just running the Fallback in the first place. Except that, in the plot above, Hybrid-SELECT is doing better than the Fallback: it hasn’t encountered its worst case, despite the efforts of McIlroy’s quicksort killer. Meanwhile, Intro-SELECT has hit its worst case — it’s taking longer than the Fallback.

The next plot shows what happens against the DAG adversary:

Here, we can see what we expect (well, except for that weird dip for Hybrid-SELECT — I haven’t looked into that!): the Fallback is fastest, and the two other algorithms perform worse. The DAG adversary has forced them to switch to the Fallback after some wasted work.

So, in this rather unusual situation, the DAG adversary is more powerful than McIlroy’s quicksort killer. Of course, it’s also much slower, but at least we’re getting something in return for the time and CPU heat.

## Code

I’ve implemented the adversaries described here in C#. I’m sure you’re just itching to take a look: