Quantum routing with fast reversals

We present methods for implementing arbitrary permutations of qubits under interaction constraints. Our protocols make use of previous methods for rapidly reversing the order of qubits along a path. Given nearest-neighbor interactions on a path of length $n$, we show that there exists a constant $\epsilon \approx 0.034$ such that the quantum routing time is at most $(1-\epsilon)n$, whereas any swap-based protocol needs at least time $n-1$. This represents the first known quantum advantage over swap-based routing methods and also gives improved quantum routing times for realistic architectures such as grids. Furthermore, we show that our algorithm approaches a quantum routing time of $2n/3$ in expectation for uniformly random permutations, whereas swap-based protocols require time $n$ asymptotically. Additionally, we consider sparse permutations that route $k \le n$ qubits and give algorithms with quantum routing time at most $n/3 + O(k^2)$ on paths and at most $2r/3 + O(k^2)$ on general graphs with radius $r$.


Introduction
Qubit connectivity limits quantum information transfer, which is a fundamental task for quantum computing. While the common model for quantum computation usually assumes all-to-all connectivity, proposals for scalable quantum architectures do not have this capability [MK13; Mon+14; Bre+16]. Instead, quantum devices arrange qubits in a fixed architecture that fits within engineering and design constraints. For example, the architecture may be grid-like [MG19; Aru+19] or consist of a network of submodules [MK13; Mon+14]. Circuits that assume all-to-all qubit connectivity can be mapped onto these architectures via protocols for routing qubits, i.e., permuting them within the architecture using local operations.
Long-distance gates can be implemented using swap gates along edges of the graph of available interactions. A typical procedure swaps pairs of distant qubits along edges until they are adjacent, at which point the desired two-qubit gate is applied to the target qubits. These swap subroutines can be sped up by parallelism and careful scheduling [SWD11; SSP13; SSP14; PS16; LWD15; Mur+19; ZW19]. Minimizing the swap circuit depth corresponds to the Routing via Matchings problem [ACG94; CSU19]. The minimal swap circuit depth needed to implement any permutation on a graph G is given by its routing number, rt(G) [ACG94]. Deciding rt(G) is NP-hard in general [BR17], but there exist algorithms for architectures of interest such as grids and other graph products [ACG94; Zha99; CSU19]. Furthermore, one can establish lower bounds on the routing number as a function of graph diameter and other properties.
Routing using swap gates does not necessarily give minimal circuit evolution time since it is effectively classical and does not make use of the full power of quantum operations. Indeed, faster protocols are already known for specific permutations in specific qubit geometries such as the path [Rau05; Bap+20]. These protocols tend to be carefully engineered and do not generalize readily to other permutations, leaving open the general question of devising faster-than-swap quantum routing. In this paper, we give a positive answer to this question.
Following [Rau05; Bap+20], we consider a continuous-time model of routing, where the protocol is defined by a Hamiltonian that can only include nearest-neighbor interactions. To make consistent comparisons with a gate-based model of routing, we bound the spectral norm of interactions [Bap+20] so that a swap gate takes unit time [VHC02], as determined by the canonical form of a two-qubit Hamiltonian [Ben+02]. We suppose that single-qubit operations can be performed arbitrarily fast, a common assumption [VHC02; Ben+02] that is practically well-motivated due to the relative ease of implementing single-qubit rotations.
Rather than directly engineering a quantum routing protocol, we consider a hybrid strategy that leverages a known protocol for quickly performing a specific permutation to implement general quantum routing. Specifically, we consider the reversal operation

ρ := ∏_{k=1}^{⌊n/2⌋} swap_{k, n+1−k}   (1)

that swaps the positions of qubits about the center of a length-n path. Fast quantum reversal protocols are known in the gate-based [Rau05] and time-independent Hamiltonian [Bap+20] settings. The reversal operation can be implemented in time [Bap+20]

T(ρ) ≤ (n + 1)/3 + 2 − p(n),   (2)

where p(n) ∈ {0, 1} is the parity of n. Both protocols exhibit an asymptotic time scaling of n/3 + O(1), which is asymptotically three times faster than the best possible swap-based time of n − 1 (lower bounded by the diameter of the graph) [ACG94]. The odd-even sort algorithm provides a nearly tight swap-based upper bound of n [LDM84] and will be our main point of comparison.

The Hamiltonian protocol of [Bap+20] can be understood by examining the time evolution of the site Majorana operators obtained by a Jordan-Wigner transformation of the spin chain. In this picture, the protocol can be interpreted as the rotation of a fictitious particle of spin n + 1/2 whose magnetization components are in one-to-one correspondence with the Majoranas on the chain. A reversal corresponds to a rotation of the large spin by an angle of π. The gate-based reversal protocol [Rau05] is a special case of a quantum cellular automaton with a transition function given by the (n + 1)-fold product of nearest-neighbor controlled-Z (cz) operations (an operation that can be performed 3 times faster than a swap gate) and Hadamard operations. In an open spin chain, this process spreads local Pauli observables at site i over the chain and "refocuses" them at site n + 1 − i in n + 1 steps, for every i.
The ability to spread local observables (which is present in the gate-based and Hamiltonian protocols but not in swap-based protocols) may be key to obtaining a speedup over swap-based algorithms.
We expect both the gate-based and Hamiltonian protocols to be implementable on near-term quantum devices. The gate-based protocol uses nearest-neighbor cz gates and Hadamard gates, both of which are widely used on existing quantum platforms. The Hamiltonian protocol involves nearest-neighbor Pauli XX interactions with non-uniform couplings, which is within the capabilities of, e.g., superconducting architectures [Kja+20].
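As a point of reference, the swap-based baseline is easy to simulate classically. The following sketch (an illustrative Python model of our own, not part of the routing protocols themselves) runs odd-even transposition sort on a permutation and counts the parallel rounds of disjoint nearest-neighbor swaps, which never exceed n:

```python
def odd_even_sort(perm):
    """Sort `perm` with odd-even transposition sort.

    Each round applies a matching of disjoint nearest-neighbor swaps
    (alternating even/odd pairs), so the round count models the depth
    of a swap-based routing circuit on the path.
    """
    a = list(perm)
    n = len(a)
    rounds = 0
    while a != sorted(a):
        start = rounds % 2          # alternate even and odd matchings
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
        rounds += 1
    return a, rounds

# The worst case is the full reversal, which takes n rounds.
arr, depth = odd_even_sort(range(7, -1, -1))
assert arr == list(range(8)) and depth <= 8
```

This is the time-n classical baseline against which the n/3 + O(1) reversal protocols are compared.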
Routing using reversals has been studied extensively due to its applications in comparative genomics (where it is known as sorting by reversals) [BP93; KS95]. References [Ben+08; PS02; NNN05] present routing algorithms where, much like in our case, reversals have length-weighted costs. However, these models assume reversals are performed sequentially, while we assume independent reversals can be performed in parallel, where the total cost is given by the evolution time, akin to circuit depth. To our knowledge, results from the sequential case are not easily adaptable to the parallel setting and require a different approach.
Routing on paths is a fundamental building block for routing on more general graphs. For example, a two-dimensional grid graph is the Cartesian product of two path graphs, and the best known routing routine applies a path routing subroutine 3 times [ACG94]. A quantum protocol for routing on the path in time cn, for a constant c > 0, would imply a routing time of 3cn on the grid. A similar speedup follows for higher-dimensional grids. More generally, routing algorithms for the generalized hierarchical product of graphs can take advantage of faster routing of the path base graph [CSU19]. For other graphs, it is open whether fast reversals can be used to give faster routing protocols for general permutations.
In the rest of this paper, we present the following results on quantum routing using fast reversals. In Section 2, we give basic examples of using fast reversals to perform routing on general graphs to indicate the extent of possible speedup over swap-based routing, namely a graph for which routing can be sped up by a factor of 3, and another for which no speedup is possible. Section 3 presents algorithms for routing sparse permutations, where few qubits are routed, both for paths and for more general graphs. Here, we obtain the full factor 3 speedup over swap-based routing. Then, in Section 4, we prove the main result that there is a quantum routing algorithm for the path with worst-case constant-factor advantage over any swap-based routing scheme. Finally, in Section 5, we show that our algorithm has average-case routing time 2n/3 + o(n) and any swap-based protocol has average-case routing time at least n − o(n).
Simple bounds on routing using reversals

Given the ability to implement a fast reversal ρ with cost given by Eq. (2), the largest possible asymptotic speedup of reversal-based routing over swap-based routing is a factor of 3. This is because the reversal operation, which is itself a particular permutation, cannot be performed faster than n/3 + o(n), and can be performed in time n classically using odd-even sort. As we now show, some graphs saturate the factor-of-3 speedup for general permutations, while other graphs do not admit any speedup over swaps.

Figure 1: K*_9 admits the full factor of 3 speedup in the worst case when using reversals over swaps, whereas K_5 admits no speedup when using reversals over swaps.
Maximal speedup: For n odd, let K*_n denote two complete graphs, each on (n + 1)/2 vertices, joined at a single "junction" vertex, for a total of n vertices (Figure 1a). Consider a permutation on K*_n in which every vertex is sent to the other complete subgraph, except that the junction vertex is sent to itself. To route with swaps, note that each vertex (other than that at the junction) must be moved to the junction at least once, and only one vertex can be moved there at any time. Because there are (n + 1)/2 − 1 non-junction vertices in each subgraph, implementing this permutation requires a swap-circuit depth of at least n − 1.
On the other hand, any permutation on K*_n can be implemented in time n/3 + O(1) using reversals. First, perform a reversal on a path that connects all vertices with opposite-side destinations. After this reversal, every vertex is on the side of its destination, and the remainder can be routed in at most 2 steps [ACG94]. The total time is at most (n + 1)/3 + 2, exhibiting the maximal speedup by an asymptotic factor of 3.
No speedup: Now, consider the complete graph on n vertices, K_n (Figure 1b). Every permutation on K_n can be routed in time at most 2 using swaps [ACG94]. Consider implementing a 3-cycle on three vertices of K_n for n ≥ 3 using reversals. Any reversal sequence that implements this permutation takes at least time 2. Therefore, no speedup is gained over swaps in the worst case.
We have shown that there exists a family of graphs that allows a factor of 3 speedup for any permutation when using fast reversals instead of swaps, and others where reversals do not grant any improvement. The question remains as to where the path graph lies on this spectrum. Faster routing on the path is especially desirable since this task is fundamental for routing in more complex graphs.

An algorithm for sparse permutations
We now consider routing sparse permutations, where only a small number k of qubits are to be moved. For the path, we show that the routing time is at most n/3 + O(k²). More generally, we show that for a graph of radius r, the routing time is at most 2r/3 + O(k²). (Recall that the radius of a graph G = (V, E) is min_{u∈V} max_{v∈V} dist(u, v), where dist(u, v) is the distance between u and v in G.) Our approach to routing sparse permutations using reversals is based on the idea of bringing all k qubits to be permuted to the center of the graph, rearranging them, and then sending them to their respective destinations.
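The radius is straightforward to compute with breadth-first search; the small Python helper below (illustrative only; the adjacency-list format is our own choice) evaluates min_u max_v dist(u, v):

```python
from collections import deque

def radius(adj):
    """Radius of a connected graph given as {vertex: iterable of neighbors}."""
    def eccentricity(src):
        # BFS from src; the eccentricity is the largest distance reached.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())
    return min(eccentricity(u) for u in adj)

# A path on 5 vertices has radius 2 (attained at the middle vertex).
path5 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
assert radius(path5) == 2
```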

Paths
A description of the algorithm on the path, called MiddleExchange, appears in Algorithm 3.1. Figure 2 presents an example of MiddleExchange for k = 6.
In Theorem 3.1, we prove that Algorithm 3.1 achieves a routing time of asymptotically n/3 when implementing a sparse permutation of k = o(√n) qubits on the path graph. First, let S_n denote the set of permutations on {1, . . . , n}, so |S_n| = n!. Then, for any permutation π ∈ S_n that acts on a set of labels {1, . . . , n}, let π_i denote the destination of label i under π. We may then write π = (π_1, π_2, . . . , π_n). Let ρ̄ denote an ordered series of reversals ρ_1, . . . , ρ_m, and let ρ̄_1 ++ ρ̄_2 be the concatenation of two reversal series. Finally, let S · ρ and S · ρ̄ denote the result of applying ρ and ρ̄ to a sequence S, respectively, and let |ρ| denote the length of the reversal ρ, i.e., the number of vertices it acts on.
Proof. Algorithm 3.1 consists of three steps: compression (Line 4-Line 9), inner permutation (Line 11), and dilation (Line 12). Notice that compression and dilation are inverses of each other.

As in the algorithm, let t be the largest index for which x_t ≤ ⌊n/2⌋. From all reversals in the first part of Algorithm 3.1, ρ̄, consider those performed on the left side of the median (position ⌊n/2⌋ of the path). The routing time of these reversals is at most n/6 + O(k²). By a symmetric argument, the same bound holds for the compression step on the right half of the median. Because both sides can be performed in parallel, the total cost for the compression step is at most n/6 + O(k²). The inner permutation step can be done in time at most k using odd-even sort. The cost to perform the dilation step is also at most n/6 + O(k²) because dilation is the inverse of compression. Thus, the total routing time for Algorithm 3.1 is at most 2(n/6 + O(k²)) + k = n/3 + O(k²).

Figure 3: Illustration of the token tree T in Theorem 3.2 for a case where G is the 5 × 5 grid graph. Blue circles represent vertices in S and orange circles represent vertices not in S. Vertex c denotes the center of G. Red-outlined circles represent intersection vertices. In particular, note that one of the blue vertices is an intersection because it is the first common vertex on the paths to c of two distinct blue vertices.

It follows that sparse permutations on the path with k = o(√n) can be implemented using reversals with a full asymptotic factor of 3 speedup.
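To make the three-step structure concrete, here is a simplified Python model of a MiddleExchange-style routine (our own illustrative reconstruction, not the paper's exact Algorithm 3.1): it compresses the tokens of the permutation's support toward the middle with one reversal per token, permutes them there, and dilates by undoing the compression reversals. In the cost model, a reversal of length ℓ is charged ℓ/3.

```python
def middle_exchange(n, pi):
    """Route the sparse permutation `pi` (a dict {position: destination},
    0-indexed; for a permutation, sources and destinations coincide as a
    set) on a path of n vertices.  Returns (final_array, cost)."""
    arr = list(range(n))
    revs = []

    def reverse_seg(a, b):                 # reversal on segment [a, b]
        arr[a:b + 1] = arr[a:b + 1][::-1]
        revs.append((a, b))

    support = sorted(pi)
    mid = n // 2
    left = [x for x in support if x < mid]
    right = [x for x in support if x >= mid]

    # Compression: bring left tokens to slots mid-t..mid-1 (rightmost
    # token first) and right tokens to slots mid..mid+s-1.
    t = len(left)
    for idx in range(t - 1, -1, -1):
        reverse_seg(left[idx], mid - 1 - (t - 1 - idx))
    for idx, y in enumerate(right):
        reverse_seg(mid + idx, y)

    # Inner permutation: dilation carries slot k back to position
    # final[k], so place there the token whose destination is final[k].
    slots = list(range(mid - t, mid + len(right)))
    final = left + right
    token_for = {pi[i]: i for i in support}   # destination -> token label
    for slot, pos in zip(slots, final):
        arr[slot] = token_for[pos]

    # Cost model: compression + dilation reversals at length/3, plus
    # at most k steps of odd-even sort for the inner permutation.
    cost = 2 * sum((b - a + 1) / 3 for a, b in revs) + len(support)

    # Dilation: undo the compression reversals in reverse order.
    for a, b in reversed(revs):
        arr[a:b + 1] = arr[a:b + 1][::-1]
    return arr, cost

# Example: exchange tokens 2<->16 and 5<->19 on a path of 21 vertices.
pi = {2: 16, 16: 2, 5: 19, 19: 5}
routed, cost = middle_exchange(21, pi)
expected = list(range(21))
for i, d in pi.items():
    expected[d] = i
assert routed == expected
```

Because dilation exactly inverts compression, all non-token qubits return to their original positions, as required for a sparse permutation.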

General graphs
We now present a more general result for implementing sparse permutations on an arbitrary graph.

Theorem 3.2. Let G be a graph with radius r. Any permutation π on G that moves only k qubits can be implemented with routing time at most 2r/3 + O(k²).

Proof. We route π using a procedure similar to Algorithm 3.1, consisting of the same three steps adapted to work on a spanning tree of G: compression, inner permutation, and dilation. Dilation is the inverse of compression, and the inner permutation step can be performed on a subtree consisting of just k = |S| nodes by using the Routing via Matchings algorithm for trees in 3k/2 + O(log k) time [Zha99]. It remains to show that compression can be performed in r/3 + O(k²) time.
We construct a token tree T that reduces the compression step to routing on a tree. Let c be a vertex in the center of G, i.e., a vertex with distance at most r to all vertices. Construct a shortest-path tree T′ of G rooted at c, say, using breadth-first search. We assign a token to each vertex in S. Now T is the subtree of T′ formed by removing all vertices v ∈ V(T′) for which the subtree rooted at v does not contain any tokens, as depicted in Figure 3. In T, call the first common vertex between paths to c from two distinct tokens an intersection vertex, and let I be the set of all intersection vertices. Note that if a token t_1 lies on the path from another token t_2 to c, then the vertex on which t_1 lies is also an intersection vertex. Since T has at most k leaves, there are O(k) intersection vertices. For any vertex v in T, let the descendants of v be the vertices u ≠ v in T whose path on T to c includes v. Now let T_v be the subtree of T rooted at v, i.e., the tree composed of v and all of the descendants of v. We say that all tokens have been moved up to a vertex v if, for all vertices u in T_v without a token, T_u also does not contain a token. The compression step can then be described as moving tokens up to c. We describe a recursive algorithm for doing so in Algorithm 3.2. The base case handles the trivial case of a subtree with only one token. Otherwise, we move all tokens on the subtrees of a descendant b up to the closest intersection w using recursive calls, as illustrated in Figure 4. Afterwards, we consider whether the path p between v and w has enough room to store all tokens. If it does, we use a Routing via Matchings algorithm for trees to route tokens from w onto p, followed by a reversal to move these tokens up to v. Otherwise, the path is short enough that we can move all tokens up to v by the same Routing via Matchings algorithm.
We now bound the routing time on T_{w_1} of MoveUpTo(w_1), for any vertex w_1 ∈ V(T). First note that all operations on subtrees T_b of T_{w_1} are independent and can be performed in parallel. Let w_1, w_2, . . . , w_t be the sequence of intersection vertices on which MoveUpTo(·) is recursively called that dominates the routing time of MoveUpTo(w_1). Let d_w, for w ∈ V(T_{w_1}), be the distance from w to the furthest leaf node in T_w. Assuming that the base case on Line 2 has not been reached, we have a routing time of

T(w_1) ≤ (d_{w_1} − d_{w_2})/3 + O(k) + T(w_2),

where O(k) bounds the time required to route m ≤ k tokens on a tree of size at most 2m following the recursive MoveUpTo(w_2) call [Zha99]. We expand the time cost T(w_i) of the recursive calls until we reach the base case of w_t to obtain

T(w_1) ≤ d_{w_1}/3 + O(kt).

Since d_{w_1} ≤ r and t ≤ k, this shows that compression can be performed in r/3 + O(k²) time.
In general, a graph with radius r and diameter d will have d/2 ≤ r ≤ d. Using Theorem 3.2, this implies that for a graph G and a sparse permutation with k = o( √ r), the bound for the routing time will be between d/3 + o(d) and 2d/3 + o(d). Thus, for such sparse permutations, using reversals will always asymptotically give us a constant-factor worst-case speedup over any swap-only protocol since rt(G) ≥ d. Furthermore, for graphs with r = d/2, we can asymptotically achieve the full factor of 3 speedup.
Algorithm 4.1: Divide-and-conquer algorithm for recursively sorting π. BinaryLabeling(π) is a subroutine that uses Eq. (6) to transform π into a bitstring, and BinarySorter is a subroutine that takes as input the resulting binary string and returns an ordered reversal sequence ρ̄ that sorts it.

Algorithms for routing on the path
Our general approach to implementing permutations on the path relies on the divide-and-conquer strategy described in Algorithm 4.1. It uses a correspondence between implementing permutations and sorting binary strings, where the former can be performed at twice the cost of the latter. This approach is inspired by [PS02] and [Ben+08], which use the same method for routing by reversals in the sequential case. First, we introduce a binary labeling using the indicator function

I(x) := 0 if x ≤ ⌈n/2⌉, and 1 otherwise.   (6)

This function labels any permutation π ∈ S_n by a binary string I(π) := (I(π_1), I(π_2), . . . , I(π_n)).
Let π be the target permutation, and σ any permutation such that I(πσ⁻¹) = (0^{⌈n/2⌉} 1^{⌊n/2⌋}). Then it follows that σ divides π into permutations π_L, π_R acting only on the left and right halves of the path, respectively, i.e., π = π_L · π_R · σ. We find and implement σ via a binary sorting subroutine, thereby reducing the problem to two subproblems of length at most ⌈n/2⌉ that can be solved in parallel on disjoint sections of the path. Proceeding by recursion until all subproblems are on sections of length at most 1, the only possible remaining permutation is the identity, and π has been implemented. Because disjoint permutations are implemented in parallel, the total routing time is T(π) = T(σ) + max(T(π_L), T(π_R)). As a small illustration of the labeling, for n = 4 and π = (3, 1, 4, 2), the binary labels below the corresponding destination indices are I(π) = (1, 0, 1, 0). We present two algorithms for BinarySorter, which performs the work in our sorting algorithm. The first of these binary sorting subroutines is Tripartite Binary Sort (TBS, Algorithm 4.2). TBS works by splitting the binary string into nearly equal (contiguous) thirds, recursively sorting these thirds, and merging the three sorted thirds into one sorted sequence. We sort the outer thirds forwards and the middle third backwards, which allows us to merge the three segments using at most one reversal. For example, the string 010 110 001 becomes 001 110 001 once its thirds are sorted, and a single reversal of the middle span yields the sorted string 000001111. For the sake of clarity, we implement an exhaustive search over all possible ways to choose the partition points. However, we note that the optimal partition points can be found in polynomial time by using a dynamic programming method [Ben+08].
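A compact Python sketch of TBS is below (our own illustrative version with fixed partition points at the thirds, not the paper's exact pseudocode). It sorts the outer thirds forwards, the middle third backwards, and merges with at most one reversal, returning the parallel time under a cost model of ℓ/3 per length-ℓ reversal:

```python
def tbs(bits, lo=0, hi=None, ascending=True):
    """Tripartite Binary Sort on bits[lo:hi], in place.

    Returns the parallel sorting time: the three thirds are sorted
    concurrently, and each merging reversal of length L is charged L/3.
    """
    if hi is None:
        hi = len(bits)
    n = hi - lo
    if n <= 1 or len(set(bits[lo:hi])) == 1:
        return 0.0
    t = 0.0
    if n >= 3:
        a, b = lo + n // 3, lo + 2 * (n // 3)
        # Outer thirds sorted in the target order, middle third backwards.
        t = max(tbs(bits, lo, a, ascending),
                tbs(bits, a, b, not ascending),
                tbs(bits, b, hi, ascending))
    # Merge: one reversal from the first misplaced high bit to the
    # last misplaced low bit.
    high = 1 if ascending else 0
    i = min(k for k in range(lo, hi) if bits[k] == high)
    j = max(k for k in range(lo, hi) if bits[k] != high)
    if i < j:
        bits[i:j + 1] = bits[i:j + 1][::-1]
        t += (j - i + 1) / 3
    return t

s = [0, 1, 0, 1, 1, 0, 0, 0, 1]
tbs(s)
assert s == sorted(s)
```

After the recursive calls, the string has the form 0^a 1^b 0^c 1^d, so the single reversal of the middle run 1^b 0^c always completes the sort.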
The second subroutine, Adaptive TBS (ATBS), instead chooses the partition points adaptively so as to minimize the resulting sorting time. Thus, for any permutation, the sequence of reversals found by Adaptive TBS costs no more than that found by TBS. However, TBS is simpler to implement and is faster than Adaptive TBS at finding the sorting sequence of reversals.

Worst-case bounds
In this section, we prove that all permutations of sufficiently large length n can be sorted in time strictly less than n using reversals. Let n_x(b) denote the number of times character x ∈ {0, 1} appears in a binary string b, and let T(b) (resp., T(π)) denote the best possible time to sort b (resp., implement π) with reversals. Assume all logarithms are base 2 unless specified otherwise.

Proof. To achieve this upper bound, we use TBS (Algorithm 4.2). There are ⌈log₃ n⌉ steps in the recursion, which we index by j ∈ {0, 1, . . . , ⌈log₃ n⌉}, with step 0 corresponding to the final merging step. Let |ρ_j| denote the size of the longest reversal in recursive step j that merges the three sorted subsequences of size n/3^{j+1}. The size of the final merging reversal ρ_0 can be bounded above by (c + 2/3)n + O(log n), because |ρ_0| is maximized when every x is contained in the leftmost third if x = 1 or the rightmost third if x = 0. So we have

T(b) ≤ Σ_{j=0}^{⌈log₃ n⌉} |ρ_j|/3 + O(log n) ≤ (c + 2/3)n/3 + Σ_{j≥1} n/(3 · 3^j) + O(log n),

where we used |ρ_j| ≤ n/3^j for j ≥ 1. If there are few zeros and ones in the leftmost and rightmost thirds, respectively, we can shorten the middle section so that it can be sorted quickly. Then, because each of the outer thirds contains far more zeros than ones (or vice versa), both can be sorted quickly as well.
By the inductive hypothesis, T(b_2) can be bounded from above; combining this bound with Eq. (11) and the fact that |ρ| ≤ n, we obtain the claimed bound.
This bound on the cost of a sorting sequence found by Adaptive TBS for binary strings can easily be extended to a bound on the minimum sorting sequence for any permutation of length n. Proof. To sort π, we turn it into a binary string b using Eq. (6). Then let ρ_1, ρ_2, . . . , ρ_m be a sequence of reversals that sorts b. If we apply this sequence to π to get π′ = πρ_1ρ_2 · · · ρ_m, every element of π′ will be on the same half as its destination. We can then recursively perform the same procedure on each half of π′, continuing down until every element has been sorted into place.
This process requires ⌈log n⌉ steps, and at step i, there are 2^i binary strings of length n/2^i being sorted in parallel. This gives the following bound on the time to implement π:

T(π) ≤ Σ_{i=0}^{⌈log n⌉} max_{b_i} T(b_i),

where b_i ∈ {0, 1}^{n/2^i}. Applying the bound from Theorem 4.2, we obtain the claimed result.
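To make the claimed factor of two explicit, suppose for the sake of a simplified sketch that every binary string of length m can be sorted in time at most c · m for a constant c (the actual bounds carry lower-order terms). Since the recursion halves the string length at every step,

```latex
T(\pi) \;\le\; \sum_{i=0}^{\lceil \log n \rceil} \max_{b_i \in \{0,1\}^{n/2^i}} T(b_i)
       \;\le\; \sum_{i=0}^{\lceil \log n \rceil} c\,\frac{n}{2^i}
       \;\le\; 2cn,
```

which is the sense in which implementing a permutation can be performed at twice the cost of binary sorting.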

Average-case performance
So far we have presented worst-case bounds that provide a theoretical guarantee on the speedup of quantum routing over classical routing. However, the bounds are not known to be tight and may not accurately capture the performance of the algorithm in practice. In this section, we show better performance for the average-case routing time, i.e., the expected routing time of the algorithm on a permutation chosen uniformly at random from S_n. We present both theoretical and numerical results on the average routing time of swap-based routing (such as odd-even sort) and quantum routing using TBS and ATBS. We show that, on average, GDC(TBS) (and GDC(ATBS), whose sorting time on any instance is at least as fast) beats swap-based routing by a constant factor of 2/3. We have the following two theorems, whose proofs can be found in Appendices A and B, respectively.

Figure 6: Average routing times of OES, GDC(TBS), and GDC(ATBS) on random permutations. We exhaustively search for n < 12 and sample 1000 permutations uniformly at random otherwise. We show data for GDC(ATBS) only for n ≤ 207 because it becomes too slow after that point. We find that the fit function μ_n = an + b√n + c fits the data with R² > 99.99% for all three algorithms. For OES, the fit gives a ≈ 0.9999; for GDC(TBS), a ≈ 0.6599; and for GDC(ATBS), a ≈ 0.6513. Similarly, for the standard deviation, we find that the fit function σ²_n = an + b√n + c fits the data with R² ≈ 99% for all three algorithms, suggesting that the normalized deviation of the performance about the mean scales as σ_n/n = Θ(n^{−0.5}) asymptotically.

Theorem 5.1. The average routing time of any swap-based procedure is lower bounded by n−o(n).
Theorem 5.2. The average routing time of GDC(TBS) is 2n/3 + O(n^α) for a constant α ∈ (1/2, 1).
These theorems provide average-case guarantees, yet do not give information about the nonasymptotic behavior. Therefore, we test our algorithms on random permutations for instances of intermediate size.
Our numerics [KSS21] show that Algorithm 4.1 has an average routing time that is well-approximated by c · n + o(n), where 2/3 ≤ c < 1, using TBS or Adaptive TBS as the binary sorting subroutine, for permutations generated uniformly at random. Similarly, the performance of odd-even sort (OES) is well-approximated by n + o(n). Furthermore, the advantage of quantum routing is evident even for fairly short paths. We demonstrate this by sampling 1000 permutations uniformly from S_n for a range of values of n. The results of our experiments are summarized in Figure 6. We find that the mean normalized time costs for OES, GDC(TBS), and GDC(ATBS) are similar for small n, but the latter two decrease steadily as the lengths of the permutations increase while the former steadily increases. Furthermore, the average costs for GDC(TBS) and GDC(ATBS) diverge from that of OES rather quickly, suggesting that GDC(TBS) and GDC(ATBS) perform better on average both for somewhat small permutations (n ≈ 50) and asymptotically.
The linear coefficient a of the fit of μ_n for OES is a ≈ 0.9999 ≈ 1, consistent with the asymptotic lower bound of Theorem 5.1, while the fits of the mean time costs for GDC(TBS) and GDC(ATBS) give a ≈ 0.6599 and a ≈ 0.6513, respectively, consistent with Theorem 5.2. The numerics suggest that the algorithm routing times agree with our analytics and are fast for instances of realistic size. For example, at n = 100, GDC(TBS) and GDC(ATBS) have routing times of ∼0.75n and ∼0.72n, respectively. On the other hand, OES routes in average time > 0.9n. For larger instances, the speedup approaches the full factor of 2/3 monotonically. Moreover, the fits of the standard deviations suggest σ_n/n = Θ(1/√n) asymptotically, which implies that as permutation length increases, the distribution of routing times gets relatively tighter for all three algorithms. This suggests that the average-case routing time may indeed be representative of typical performance for our algorithms on permutations selected uniformly at random.
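Fits of this form can be reproduced with ordinary least squares on the basis {n, √n, 1}. The standalone sketch below (using synthetic data of our own, not the paper's measurements) shows the procedure via the normal equations:

```python
import math

def fit_mu(ns, mus):
    """Least-squares fit of mu_n = a*n + b*sqrt(n) + c via normal equations."""
    cols = [[float(n) for n in ns],
            [math.sqrt(n) for n in ns],
            [1.0] * len(ns)]
    # Build the 3x3 normal system (X^T X) beta = X^T y.
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    y = [sum(ci * m for ci, m in zip(cols[i], mus)) for i in range(3)]
    # Gaussian elimination with partial pivoting, then back-substitution.
    for p in range(3):
        piv = max(range(p, 3), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        y[p], y[piv] = y[piv], y[p]
        for r in range(p + 1, 3):
            f = A[r][p] / A[p][p]
            for c in range(p, 3):
                A[r][c] -= f * A[p][c]
            y[r] -= f * y[p]
    beta = [0.0, 0.0, 0.0]
    for p in range(2, -1, -1):
        beta[p] = (y[p] - sum(A[p][q] * beta[q] for q in range(p + 1, 3))) / A[p][p]
    return beta

# Synthetic check: data generated from a known model is recovered.
ns = list(range(10, 300, 10))
mus = [0.66 * n + 1.5 * math.sqrt(n) - 2.0 for n in ns]
a, b, c = fit_mu(ns, mus)
assert abs(a - 0.66) < 1e-5 and abs(b - 1.5) < 1e-5
```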

Conclusion
We have shown that our algorithm, GDC(ATBS) (i.e., Generic Divide-and-Conquer with Adaptive TBS to sort binary strings), uses the fast state reversal primitive to outperform any swap-based protocol when routing on the path in the worst and average case. Recent work shows a lower bound on the time to perform a reversal on the path graph of n/α, where α ≈ 4.5 [Bap+20]. Thus we know that the routing time cannot be improved by more than a factor α over swaps, even with new techniques for implementing reversals. However, it remains to understand the fastest possible routing time on the path. Clearly, this is also lower bounded by n/α. Our work could be improved by addressing the following two open questions: (i) how fast can state reversal be implemented, and (ii) what is the fastest way of implementing a general permutation using state reversal?
We believe that the upper bound in Corollary 4.3 can likely be decreased. For example, in the proof of Lemma 4.1, we use a simple bound to show that the reversal sequence found by GDC(TBS) sorts binary strings with fewer than cn ones sufficiently fast for our purposes. It is possible that this bound can be decreased if we consider the reversal sequence found by GDC(ATBS) instead. Additionally, in the proof of Theorem 4.2, we only consider two pairs of partition points: one pair in each case of the proof. This suggests that the bound in Theorem 4.2 might be decreased if the full power of GDC(ATBS) could be analyzed.
Improving the algorithm itself is also a potential avenue to decrease the upper bound in Corollary 4.3. For example, the generic divide-and-conquer approach in Algorithm 4.1 focused on splitting the path exactly in half and recursing. An obvious improvement would be to create an adaptive version of Algorithm 4.1 in a manner similar to GDC(ATBS) where instead of splitting the path in half, the partition point would be placed in the optimal spot. It is also possible that by going beyond the divide-and-conquer approach, we could find faster reversal sequences and reduce the upper bound even further.
Our algorithm uses reversals to show the first quantum speedup for unitary quantum routing. It would be interesting to find other ways of implementing fast quantum routing that are not necessarily based on reversals. Other primitives for rapidly routing quantum information might be combined with classical strategies to develop fast general-purpose routing algorithms, possibly with an asymptotic scaling advantage. Such primitives might also take advantage of other resources, such as long-range Hamiltonians or the assistance of entanglement and fast classical communication.

A Average routing time using only swaps
In this section, we prove Theorem 5.1. First, define the infinity distance d_∞ : S_n → ℕ by d_∞(π) = max_{1≤i≤n} |π_i − i|. Note that 0 ≤ d_∞(π) ≤ n − 1. Finally, define the set of permutations of length n with infinity distance at most k as B_{k,n} = {π ∈ S_n : d_∞(π) ≤ k}.
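These definitions are easy to check computationally; the snippet below (illustrative, 0-indexed) computes d_∞ and counts B_{k,n} by brute force for small n. For example, |B_{1,3}| = 3, since of the six permutations of three elements only the identity and the two adjacent transpositions displace every element by at most 1:

```python
from itertools import permutations

def d_inf(pi):
    """Infinity distance: maximum displacement of any element (0-indexed)."""
    return max(abs(p - i) for i, p in enumerate(pi))

def ball_size(k, n):
    """|B_{k,n}|: number of permutations of length n with d_inf <= k."""
    return sum(1 for pi in permutations(range(n)) if d_inf(pi) <= k)

assert d_inf((0, 1, 2, 3)) == 0                 # identity
assert d_inf(tuple(reversed(range(5)))) == 4    # full reversal
assert ball_size(1, 3) == 3
assert ball_size(2, 3) == 6                     # all of S_3
```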
The infinity distance is crucially tied to the performance of odd-even sort and, indeed, of any swap-based routing algorithm. For any permutation π of length n, the routing time of any swap-based algorithm is bounded below by d_∞(π), since the element furthest from its destination must be swapped at least d_∞(π) times, and each of those swaps must occur sequentially. To show that the average routing time of any swap-based protocol is asymptotically at least n, we first show that |B_{(1−ε)n,n}|/n! → 0 for all 0 < ε ≤ 1/2. Schwartz and Vontobel [SV17] present an upper bound on |B_{k,n}| in terms of r := k/n, proved in [Klø08] for the case 0 < r ≤ 1/2 and in [TS10] for the case 1/2 ≤ r < 1; the asymptotic form follows from Stirling's approximation (see, for example, [Rob55]).
Now we prove the theorem.
Proof of Theorem 5.1. Let T̄ denote the average routing time of any swap-based protocol. Consider a random permutation π drawn uniformly from S_n. By Theorem A.3, π belongs to B_{(1−ε)n,n} with vanishing probability, for every fixed 0 < ε ≤ 1/2. Therefore, as n → ∞,

T̄ ≥ (1 − ε)n · Pr[π ∉ B_{(1−ε)n,n}] = (1 − ε)n − o(n).

This translates to an average routing time of at least n − o(n) because, asymptotically, (1 − ε)n ≤ T̄ for all such ε.

B Average routing time using TBS
In this section, we prove Theorem 5.2, which characterizes the average-case performance of TBS (Algorithm 4.2). The algorithm consists of two steps: a recursive call on three equal partitions of the path (of length n/3 each), and a merge step involving a single reversal. We denote the uniform distribution over a set S by U(S). The set of all n-bit strings is denoted B^n, where B = {0, 1}. Similarly, the set of all n-bit strings with Hamming weight k is denoted B^n_k. For simplicity, assume that n is even. We denote the runtime of TBS on b ∈ B^n by T(b).
While B^n_{n/2} is easier to work with than S_n, the constraint on the Hamming weight still poses an issue when we try to analyze the runtime recursively. To address this, Lemma B.2 below shows that relaxing from U(B^n_{n/2}) to U(B^n) does not affect expectation values significantly. We first give a recursive form for the runtime of TBS. We use the following convention for the substrings of an arbitrary n-bit string a: if a is divided into 3 segments, we label the segments a_{0.0}, a_{0.1}, a_{0.2} from left to right. Subsequent thirds are labeled analogously by ternary fractions; for example, the leftmost third of the middle third is denoted a_{0.10}, and so on. The runtime of TBS on the string a can then be bounded as

T(a) ≤ max_{i∈{0,1,2}} T(a_{0.i}) + (1/3)(n₁(a_{0.0}) + n/3 + n₁(ā_{0.2})),

where ā is the bitwise complement of the bit string a and n₁(a) denotes the Hamming weight of a.
Logically, the first term on the right-hand side is the (parallel) recursive call that sorts the thirds, while the remaining terms give the time taken to merge the sorted thirds using a single reversal. Each term $T(a_{0.i})$ can be broken down recursively until all subsequences have length 1. This yields the general formula
$$T(a) \le \sum_{r=1}^{\log_3 n} \left[\frac{1}{3}\left(\max_{i \in \{0,1,2\}^{r-1}} \left\{ n_1(a_{0.i0}) + n_1(\overline{a_{0.i2}}) \right\} + \frac{n}{3^r}\right) + 1\right], \qquad (30)$$
where at the top level $r = 1$ the index $i = \emptyset$ is the empty string.
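To make the recursion concrete, here is a small Python sketch of the runtime bound. It encodes our reading of the merge step (the reversal spans the 1s of the left third, all of the middle third, and the 0s of the right third) and assumes a reversal of length $m$ costs $m/3 + 1$; the function name and the restriction to lengths that are powers of 3 are ours.

```python
def n1(a):
    """Hamming weight of a bit string."""
    return sum(a)

def tbs_bound(a):
    """Evaluate the recursive runtime bound for TBS on bit string a.
    Assumes len(a) is a power of 3. Merge cost is (span length)/3 + 1, where
    the span covers: ones of left third + middle third + zeros of right third."""
    n = len(a)
    if n <= 1:
        return 0
    t = n // 3
    left, mid, right = a[:t], a[t:2 * t], a[2 * t:]
    span = n1(left) + len(mid) + (len(right) - n1(right))
    return max(tbs_bound(left), tbs_bound(mid), tbs_bound(right)) + span / 3 + 1
```

For a uniformly random string the span has expected length $2n/3$ at the top level, so the bound telescopes to roughly $n/3$ plus lower-order terms, matching the analysis below.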
The intuition behind this lemma is that by the law of large numbers, the deviation of the Hamming weight from n/2 is subleading in n, and the TBS runtime does not change significantly if the input string is altered in a subleading number of places.
Proof. Consider an arbitrary bit string $a$ and apply the following transformation: if $n_1(a) = k \ge n/2$, flip $k - n/2$ ones, chosen uniformly at random, to zeros; if $k < n/2$, flip $n/2 - k$ uniformly random zeros to ones. Call this stochastic function $f(a)$. Then $f(a) \in B^n_{n/2}$ for all $a$, and for a random string $a \sim U(B^n)$, we claim that $f(a) \sim U(B^n_{n/2})$. In other words, $f$ maps the uniform distribution on $B^n$ to the uniform distribution on $B^n_{n/2}$. We show this by calculating the probability $\Pr(f(a) = b)$ for arbitrary $b \in B^n_{n/2}$. A string $a$ can map to $b$ under $f$ only if $a$ and $b$ disagree in the same direction: if, WLOG, $n_1(a) \ge n_1(b)$, then $a$ must take value 1 wherever $a$ and $b$ disagree (and 0 if $n_1(a) \le n_1(b)$). We denote this property by $a \succeq b$. The number of strings $a$ with $a \succeq b$ and exactly $x$ disagreements is $\binom{n/2}{x}$, since $n_0(b) = n/2$. For such an $a$, the probability that $f$ maps $a$ to $b$ is $\binom{n/2+x}{x}^{-1}$, since $f$ flips a uniformly random size-$x$ subset of the $n/2 + x$ ones of $a$, and exactly one such subset yields $b$.
Combining these, we have
$$\Pr(f(a) = b) = 2^{-n}\left[1 + 2\sum_{x \ge 1} \binom{n/2}{x} \binom{n/2+x}{x}^{-1}\right] = \binom{n}{n/2}^{-1}.$$
Therefore $f(a) \sim U(B^n_{n/2})$. Thus $f$ allows us to simulate the uniform distribution on $B^n_{n/2}$ starting from the uniform distribution on $B^n$. Now we bound the runtime of TBS on $f(a)$ in terms of the runtime on a fixed $a$. Fix some $\alpha \in (\frac{1}{2}, 1)$. We know that $n_1(f(a)) = n/2$; suppose first that $|n_1(a) - n/2| \le n^\alpha$. Since $f(a)$ differs from $a$ in at most $n^\alpha$ places, at level $r$ of the TBS recursion (see Eq. (30)) the runtimes of $a$ and $f(a)$ differ by at most $\frac{1}{3}\min\{2n/3^r, n^\alpha\}$: the merge costs can differ by at most twice the length of the subsequence, or by the number of altered bits, whichever is smaller. Therefore, the total runtime difference is bounded by $\sum_{r \ge 1} \frac{1}{3}\min\{2n/3^r, n^\alpha\} = O(n^\alpha \log n)$. On the other hand, if $|n_1(a) - n/2| \ge n^\alpha/2$, we simply bound the runtime by that of OES, which is at most $n$. Now consider $a \sim U(B^n)$ and $b = f(a) \sim U(B^n_{n/2})$. Since $n_1(a)$ has the binomial distribution $B(n, 1/2)$, where $B(k, p)$ is the sum of $k$ Bernoulli random variables with success probability $p$, the Chernoff bound shows that deviations from the mean are exponentially suppressed, i.e.,
$$\Pr\left(|n_1(a) - n/2| \ge n^\alpha/2\right) \le 2e^{-c n^{2\alpha - 1}},$$
where $c$ is a constant. Finally, combining the two cases, we conclude that the expected runtimes of TBS under $U(B^n)$ and $U(B^n_{n/2})$ differ by at most an additive subleading term, as claimed.
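The stochastic map $f$ is easy to implement and check empirically. The sketch below (our own naming, small $n$) flips a uniformly random set of surplus bits and verifies that balanced strings come out approximately uniformly, as the lemma asserts.

```python
import random
from collections import Counter

def f(a):
    """Flip a uniformly random subset of surplus bits so the output has
    Hamming weight n/2 (a sketch of the proof's stochastic function f)."""
    n, b = len(a), list(a)
    k = sum(b)
    if k >= n // 2:
        ones = [i for i, x in enumerate(b) if x == 1]
        for i in random.sample(ones, k - n // 2):
            b[i] = 0
    else:
        zeros = [i for i, x in enumerate(b) if x == 0]
        for i in random.sample(zeros, n // 2 - k):
            b[i] = 1
    return tuple(b)

# Empirical uniformity check for n = 4: each of the C(4,2) = 6 balanced
# strings should appear with frequency close to 1/6.
random.seed(0)
counts = Counter(f([random.randint(0, 1) for _ in range(4)]) for _ in range(60000))
assert len(counts) == 6
assert all(abs(c / 60000 - 1 / 6) < 0.02 for c in counts.values())
```

This is exactly the simulation argument used in the proof: sampling $a \sim U(B^n)$ and applying $f$ yields a sample from $U(B^n_{n/2})$.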
Next, we prove the main result of this section: the expected runtime of GDC(TBS) is $2n/3$ up to additive subleading terms.
Proof of Theorem 5.2. We first prove properties of sorting a random $n$-bit string $a \sim U(B^n)$ and then apply them to the problem of sorting $b \sim U(B^n_{n/2})$ using Lemmas B.1 and B.2. The expected runtime of TBS can be calculated using the recursive formula in Eq. (30):
$$\mathbb{E}[T(a)] \le \sum_{r=1}^{\log_3 n} \left[\frac{1}{3}\left(\mathbb{E}\left[\max_{i \in \{0,1,2\}^{r-1}} \left\{ n_1(a_{0.i0}) + n_1(\overline{a_{0.i2}}) \right\}\right] + \frac{n}{3^r}\right) + 1\right].$$
The summand contains an expectation of a maximum over Hamming weights of i.i.d. uniformly random substrings of length $n/3^r$; each weight follows the binomial distribution $B(n/3^r, 1/2)$, i.e., $n/3^r$ Bernoulli trials with success probability $1/2$. By independence, if we sample $X_1, X_2 \sim B(n/3^r, 1/2)$, then $X_1 + X_2 \sim B(2n/3^r, 1/2)$. Using Lemma B.3 with $m = 3^{r-1}$, the expected maximum can be bounded by $n/3^r$ plus a lower-order deviation term, so that summing over $r$ gives $\mathbb{E}[T(a)] \le \frac{n}{3} + O(n^\alpha)$. Lemma B.2 then gives $\mathbb{E}[T(b)] \le \frac{n}{3} + O(n^\alpha)$. The routing algorithm GDC(TBS) proceeds by calling TBS on the full path, and then in parallel on the two disjoint sub-paths of length $n/2$. We show that the distributions of the left and right halves are uniform if the input permutation is sampled uniformly as $\pi \sim U(S_n)$. There exists a bijective mapping $g$ with $g(\pi) = (b, \pi_L, \pi_R) \in B^n_{n/2} \times S_{n/2} \times S_{n/2}$ for any $\pi \in S_n$, since
$$|S_n| = n! = \binom{n}{n/2} \left(\frac{n}{2}\right)! \left(\frac{n}{2}\right)! = \left|B^n_{n/2} \times S_{n/2} \times S_{n/2}\right|.$$
In particular, $g$ can be defined so that $b$ specifies which entries are taken to the first $n/2$ positions (say, without changing the relative ordering of the entries mapped to the first $n/2$ positions or of those mapped to the last $n/2$ positions), while $\pi_L$ and $\pi_R$ specify the residual permutations on the first and last $n/2$ positions, respectively. Given $g(\pi) = (b, \pi_L, \pi_R)$, TBS only has access to $b$. After sorting, TBS can only perform deterministic permutations $\mu_L(b), \mu_R(b) \in S_{n/2}$ on the left and right halves, respectively, that depend only on $b$. Thus TBS performs the mappings $\pi_L \mapsto \pi_L \circ \mu_L(b)$ and $\pi_R \mapsto \pi_R \circ \mu_R(b)$ on the output. It is now easy to see that when $\pi_L, \pi_R \sim U(S_{n/2})$, the output is also uniform, because the TBS mapping is independent of the relative permutations on the left and right halves. More generally, a uniform distribution $U(S_n)$ over permutations is mapped to uniform permutations on the left and right halves, respectively. Symbolically, for $\pi \sim U(S_n)$, we have $g(\pi) = (b, \pi_L, \pi_R) \sim U(B^n_{n/2} \times S_{n/2} \times S_{n/2}) = U(B^n_{n/2}) \times U(S_{n/2}) \times U(S_{n/2})$.
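One explicit choice of the bijection $g$ (ours; any order-preserving convention works) together with a brute-force bijectivity check for $n = 4$:

```python
from itertools import permutations

def g(pi):
    """Map pi in S_n to (b, pi_L, pi_R): b marks which positions are sent to
    the first half; pi_L, pi_R are the residual half-permutations, read off
    without changing relative order."""
    n = len(pi)
    b = tuple(int(v < n // 2) for v in pi)
    pi_L = tuple(v for v in pi if v < n // 2)
    pi_R = tuple(v - n // 2 for v in pi if v >= n // 2)
    return b, pi_L, pi_R

# |S_4| = C(4,2) * 2! * 2! = 24, and g produces 24 distinct images,
# so g is a bijection onto B^4_2 x S_2 x S_2.
images = {g(p) for p in permutations(range(4))}
assert len(images) == 24
```

Inverting $g$ is equally direct: positions marked 1 in $b$ receive the values of $\pi_L$ in order, and positions marked 0 receive the values of $\pi_R + n/2$ in order.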
As shown earlier, given uniform distributions over the left and right permutations, the output is also uniform. By induction, all permutations arising in the recursive steps are uniform. We therefore get a sum of expected TBS runtimes on bit strings of lengths $n/2^{r-1}$, i.e.,
$$\mathbb{E}[T_{\mathrm{GDC(TBS)}}] \le \sum_{r \ge 1} \mathbb{E}[T(b_r)] \le \sum_{r \ge 1} \left(\frac{n/2^{r-1}}{3} + O(n^\alpha)\right) = \frac{2n}{3} + O(n^\alpha \log n),$$
where, by Lemma B.1 and the uniformity of permutations in recursive calls, we need only consider $b_r \sim U(B^{n/2^{r-1}}_{n/2^r})$, and we bound the expected runtime using Lemma B.2 with $a_r \sim U(B^{n/2^{r-1}})$.

We end with a lemma about the order statistics of binomial random variables used in the proof of the main theorem. Let $X_1, \ldots, X_m$ be i.i.d. random variables with $X_i \sim B(n, p)$, let $Y = \max_i X_i$, and suppose $\epsilon > 0$ is such that $\Pr(X_i \ge (p + \epsilon)n) < (mn)^{-c}$ for every $i = 1, \ldots, m$. Then the probability that $Y < (p + \epsilon)n$ is identical to the probability that $X_i < (p + \epsilon)n$ for every $i$, which for i.i.d. $X_i$ is given by
$$\Pr(Y < (p + \epsilon)n) = \Pr(X < (p + \epsilon)n)^m > \left(1 - \frac{1}{(mn)^c}\right)^m. \qquad (52)$$
Using Bernoulli's inequality ($(1 + x)^r \ge 1 + rx$ for $x \ge -1$), we can simplify the above bound to
$$\Pr(Y < (p + \epsilon)n) > 1 - m^{1-c} n^{-c}.$$
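A quick Monte Carlo sanity check of the lemma's regime (constants and names here are illustrative, not from the paper): with $\epsilon$ on the Chernoff scale $\sqrt{c \log(mn)/(2n)}$, the maximum of $m$ binomials essentially never exceeds $(p + \epsilon)n$.

```python
import math
import random

def max_binomial(m, n, p, rng):
    """One sample of Y = max of m i.i.d. Binomial(n, p) random variables."""
    return max(sum(rng.random() < p for _ in range(n)) for _ in range(m))

rng = random.Random(42)
m, n, p, c = 8, 400, 0.5, 2.0
eps = math.sqrt(c * math.log(m * n) / (2 * n))  # Chernoff-scale deviation
# Count how often the max exceeds the threshold over 200 trials.
exceed = sum(max_binomial(m, n, p, rng) >= (p + eps) * n for _ in range(200))
# Exceedances at this threshold are exponentially rare, as the lemma predicts.
assert exceed <= 1
```

The union bound over the $m$ variables is what produces the $m^{1-c} n^{-c}$ failure probability in the lemma.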