Quantum Interior Point Methods for Semidefinite Optimization

We present two quantum interior point methods for semidefinite optimization problems, building on recent advances in quantum linear system algorithms. The first scheme, more similar to a classical solution algorithm, computes an inexact search direction and is not guaranteed to explore only feasible points; the second scheme uses a nullspace representation of the Newton linear system to ensure feasibility even with inexact search directions. The second is a novel scheme that might seem impractical in the classical world, but it is well-suited for a hybrid quantum-classical setting. We show that both schemes converge to an optimal solution of the semidefinite optimization problem under standard assumptions. By comparing the theoretical performance of classical and quantum interior point methods with respect to various input parameters, we show that our second scheme obtains a speedup over classical algorithms in terms of the dimension of the problem $n$, but has worse dependence on other numerical parameters.


Introduction
We develop quantum interior point methods (QIPMs) for the solution of semidefinite optimization problems (SDOPs). Let $b \in \mathbb{R}^m$ and $A^{(1)}, \dots, A^{(m)}, C \in \mathcal{S}^n$, where $\mathcal{S}^n$ is the space of $n \times n$ symmetric matrices. We consider the primal SDOP
$$z_P = \min_{X} \left\{ C \bullet X : A^{(i)} \bullet X = b_i \text{ for } i \in [m],\; X \succeq 0 \right\}, \tag{1}$$
where $[m] = \{1, \dots, m\}$, $U \bullet V = \operatorname{tr}(UV)$ defines the trace inner product of two matrices $U$ and $V$, and $U \succeq V$ indicates that $U - V$ is a symmetric positive semidefinite matrix. We assume that the matrices $A^{(1)}, \dots, A^{(m)}$ are linearly independent. The corresponding dual problem is
$$z_D = \max_{y, S} \left\{ b^{\top} y : \sum_{i \in [m]} y_i A^{(i)} + S = C,\; S \succeq 0 \right\}. \tag{2}$$
The so-called Interior Point Condition (IPC) holds if there is a primal feasible $X$ with $X \succ 0$, and there exists a dual feasible $(y, S)$ with $S \succ 0$. Although duality results are generally weaker for SDO when compared to linear optimization (LO), under the IPC strong duality holds: optimal solutions exist for both the primal and dual SDOPs, and their optimal objective values are equal, i.e., $z_P = z_D$.
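To make the primal-dual notation concrete, the following NumPy sketch (a toy example of ours, not taken from the paper) constructs a primal feasible $X \succ 0$ and a dual feasible $(y, S)$ with $S \succ 0$ by design, and checks weak duality: the duality gap $C \bullet X - b^{\top} y$ equals $X \bullet S \ge 0$.

```python
import numpy as np

def tdot(U, V):
    """Trace inner product U • V = tr(UV)."""
    return np.trace(U @ V)

rng = np.random.default_rng(7)
n, m = 4, 3

# Random symmetric constraint matrices A^(1), ..., A^(m)
A = []
for _ in range(m):
    B = rng.standard_normal((n, n))
    A.append((B + B.T) / 2)

# Primal feasible X > 0 by construction; b is defined so that A^(i) • X = b_i
L = rng.standard_normal((n, n))
X = L @ L.T + np.eye(n)
b = np.array([tdot(Ai, X) for Ai in A])

# Dual feasible (y, S) with S > 0; C is defined so that sum_i y_i A^(i) + S = C
y = rng.standard_normal(m)
L2 = rng.standard_normal((n, n))
S = L2 @ L2.T + np.eye(n)
C = sum(yi * Ai for yi, Ai in zip(y, A)) + S

# Weak duality: the duality gap equals X • S, which is nonnegative for PSD X, S
gap = tdot(C, X) - b @ y
assert np.isclose(gap, tdot(X, S))
assert gap > 0
```

The identity follows by substituting the dual constraint into $C \bullet X$ and using primal feasibility; nonnegativity holds because the trace inner product of two positive semidefinite matrices is nonnegative.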
Interior point methods (IPMs) are algorithms to solve convex optimization problems, including semidefinite optimization problems, in polynomial time. The first polynomial-time algorithm for linear optimization problems (LOPs), a special class of convex optimization problems, is the ellipsoid method [1]. Karmarkar's IPM improved on the complexity of the ellipsoid method [2]. The first polynomial-time IPM for SDOPs is attributed to Nesterov and Nemirovskii [3,4]. Their framework employs efficiently-computable self-concordant barrier functions.
The interest in SDO originates from the fact that SDO provides a framework that allows one to formulate fundamental problems in control [5], statistics, information theory [6], machine learning [7,8], finance [9,10], and quantum information science [11-13], to name a few. Additionally, SDO can provide tight approximations for various combinatorial optimization problems [14,15], and it can be used to study the properties of convex optimization problems [5]. SDOPs are attractive since they exhibit practical efficiency [16-18] and tractability [3,19]. LOPs are a special case of SDOPs in which each of the input matrices $C, A^{(1)}, \dots, A^{(m)}$ is diagonal. Considering the vast number of applications of SDO, it is natural to ask if quantum computers can accelerate their solution. Several works have investigated this possibility, which we discuss next.

Literature Review
When $m \le \sqrt{n}$, the cutting plane methods of [20,21] are the best performing classical algorithms for solving SDOPs, and can do so in time $\mathcal{O}\left(m(mn^2 + m^2 + n^{\omega}) \cdot \operatorname{polylog}\left(m, n, R, \frac{1}{\epsilon}\right)\right)$, where $\omega \in [2, 2.38]$ is the matrix multiplication exponent, $R$ is an upper bound on the trace of primal optimal solutions, $\epsilon$ is the precision parameter, and $mn^2$ is the input size of the SDO problem. However, in practical situations we typically have $m$ roughly between $n$ and $n^2$, in which case the cutting plane methods given in [20,21] are outperformed by the IPM for SDO from Jiang et al. [22]. Their IPM exhibits a worst-case running time of $\mathcal{O}\left(\sqrt{n}(mn^2 + m^{\omega} + n^{\omega}) \cdot \operatorname{polylog}\left(m, n, \frac{1}{\epsilon}\right)\right)$, where the term $m^{\omega} + n^{\omega}$ represents the per-iteration cost of inverting the Hessian and dual slack matrices. For IPMs, $\epsilon$-optimality means that the primal and dual feasible solutions exhibit a normalized duality gap bounded by $\epsilon$, i.e.,
$$\frac{X \bullet S}{n} \le \epsilon.$$
One can also note that when $m = \mathcal{O}(n^2)$, the running time of the IPM in [22] is shown to be $\mathcal{O}_{\frac{1}{\epsilon}}\left(n^{5.246}\right)$ (where the subscript of $\mathcal{O}$ indicates quantities in which polylogarithmic factors are suppressed), providing an enhancement over the running time bound of $\mathcal{O}_{\frac{1}{\epsilon}}\left(n^{6.5}\right)$ found in [23-25].
Although IPMs are prevalent in practice, they are not the only approach to solving SDOPs. The classical matrix multiplicative weights update (MWU) algorithm of Arora and Kale [26] is one such algorithm that has been quantized with notable gains. The classical MWU method has a running time of
$$\mathcal{O}\left(mns\left(\frac{Rr}{\epsilon_{\mathrm{abs}}}\right)^{4} + ns\left(\frac{Rr}{\epsilon_{\mathrm{abs}}}\right)^{7}\right),$$
where $r$ is an upper bound on the $\ell_1$-norm of dual optimal solutions, $s$ is a sparsity parameter (the maximum number of non-zero entries per row of the input matrices), and $\epsilon_{\mathrm{abs}}$ is an additive error to which the optimal objective value is approximated. That is, the objective value attained by the acquired solution is $\mathrm{OPT} \in [\varsigma - \mathcal{O}(\epsilon_{\mathrm{abs}}), \varsigma + \mathcal{O}(\epsilon_{\mathrm{abs}})]$, where $\varsigma$ is a bound on the optimal objective value determined using binary search. The MWU algorithm alternates between candidate solutions to the primal and dual SDO problems, and does not involve the solution of linear systems. A quantum SDO solver based on the Arora-Kale framework was constructed by Brandão and Svore [27], and van Apeldoorn et al. [28]. With successive improvements [29,30], the running time of the quantum MWU algorithm for SDOPs has been brought down to
$$\mathcal{O}\left(\left(\sqrt{m} + \sqrt{n}\,\frac{Rr}{\epsilon_{\mathrm{abs}}}\right) s \left(\frac{Rr}{\epsilon_{\mathrm{abs}}}\right)^{4} \cdot \operatorname{polylog}\left(m, n, R, r, \frac{1}{\epsilon_{\mathrm{abs}}}\right)\right).$$
While these methods achieve running times with a more favorable dependence on $m$ and $n$ as compared to classical IPM solvers, their polynomial dependence on the parameters $R$, $r$ and $1/\epsilon_{\mathrm{abs}}$ is exponentially worse than that of classical IPMs, which exhibit logarithmic dependence on the inverse precision.
(If $\epsilon_{\mathrm{abs}}$ is considered to be part of the input, it is expressed in binary format, and a running time scaling with $1/\epsilon_{\mathrm{abs}}$ is not polynomial in the usual sense.) The QIPM presented by Kerenidis and Prakash [31] (rather, the earlier arXiv version of that paper) initiated the other major research direction regarding the solution of SDOPs with quantum algorithms. By constructing block-encodings of the Newton linear system matrix, which can be efficiently constructed at runtime using quantum random access memory (QRAM), the authors in [31] solve the Newton linear system in time polylogarithmic in $n$ (but note that this does not include the time to obtain a classical description of the solution). As the solution of the Newton linear system is the most expensive step in classical IPMs, the hope is to obtain a quantum advantage. The paper [31] posits a worst-case running time of
$$\mathcal{O}\left(\frac{n^{2.5}}{\xi^{2}}\,\mu\kappa^{3} \cdot \operatorname{polylog}\left(n, \kappa, \frac{1}{\epsilon}\right)\right)$$
for SDOPs, where $\epsilon$ is the optimality gap tolerance to which we solve the SDOP, $\xi$ is a feasibility tolerance, $\kappa$ is an upper bound for the condition number of the Newton systems, and $\mu$ is upper bounded by the largest Frobenius norm of the coefficient matrices of the Newton systems. Note, however, that the algorithm of [31] does not converge to an $\epsilon$-optimal solution in the usual sense, for reasons that will be discussed below; thus, this running time expression does not settle the complexity of QIPMs for SDOPs. The term $\mathcal{O}\left(\frac{n^{2}\kappa^{2}}{\xi^{2}}\right)$ comes from a tomography subroutine that is used to map the quantum-mechanical encoding of the solution of the Newton linear system to a classical description used to construct the subsequent iterate. This reconstruction is performed in each step to $\ell_2$-norm error $\xi/\kappa$. When this algorithm is applied to solve LOPs, the running time is $\mathcal{O}\left(\frac{n^{1.5}}{\xi^{2}}\,\mu\kappa^{3} \cdot \operatorname{polylog}\left(n, \kappa, \frac{1}{\epsilon}\right)\right)$.
Similar ideas have been used in other recent papers [32,33], all of which follow the same structure: they apply a classical interior point method, and substitute the classical solution of the Newton linear system (or an equivalent system) with a quantum linear system algorithm, with the goal of improving the running time. The solution of the linear system is extracted from the quantum computer via tomography. The analysis of these algorithms requires nontrivial machinery and has led to interesting developments. One limitation of some of these existing QIPM works is the use of the feasibility tolerance $\xi$, which does not translate to the usual concept of primal feasibility. Another limitation is that the analysis of [31] is heavily based on classical exact IPMs, and does not account for some challenges that arise when translating those algorithms to the quantum setting. Similar challenges are found in [32,33]. These challenges are discussed below, and addressed in this paper for the specific case of semidefinite optimization.

The QIPM of [31] directly quantizes the classical IPM described in [34]. This leads to two major issues. The first is that this algorithm does not guarantee that the solutions to the Newton linear system remain symmetric [35]. As a consequence, the algorithm described in [31] may not converge when applied to SDOPs (this is not an issue for linear optimization, because all the involved matrices are diagonal). Additional safeguards must be employed to guarantee that the resulting solution is symmetric, and numerous techniques have been proposed for this purpose. These methods are generally grouped into families of search directions, such as those introduced by Kojima, Shindoh, and Hara [36], Monteiro and Zhang [23,37-39], Monteiro and Tsuchiya [40], and Tseng [41]. We refer the reader to the overview of these (as well as many other) search directions by Todd [35].
The second major issue is that the analysis of the IPM in [34] assumes that the Newton linear systems are solved exactly. This cannot be done in the quantum setting, because non-negligible errors are introduced in the tomography step, even if the quantum linear system algorithm can solve the problem with high precision. Notably, the primal and dual search directions resulting from tomography are not orthogonal; therefore, the convergence proof of [34] no longer applies. To ensure convergence when the search directions are not exact, we must rely on IPM frameworks that can deal with the errors introduced by tomography.
Infeasible IPMs were originally motivated by the need to have an initial point (and all subsequent iterates) in the interior of the semidefinite cone. It is easy to construct an initial interior point that is infeasible [42,43], hence the use of infeasible IPMs. Further, excessive CPU time and memory requirements may prevent solving large linear systems directly. In such a situation, one can employ an iterative method to determine the solutions to the linear systems used to compute search directions, and such solutions are not guaranteed to be exact (e.g., due to early stopping of the iterative method). The resulting algorithm is called an inexact-infeasible IPM. The first polynomial-time inexact-infeasible IPMs were proposed by Korzak [44] and Mizuno and Jarre [45]. Inexact-infeasible IPMs are natural candidates for quantization. Using this terminology, the "prototypical" classical IPM described in [34] can be called an exact-feasible IPM.
The little-studied inexact-feasible IPM (IF-IPM) bridges the gap between exact-feasible and inexact-infeasible IPMs. In an inexact-feasible IPM, it is typically assumed that primal and dual feasibility can be satisfied exactly, while a margin of error is allowed in solving a reduced Newton system. We argue that this framework is the most well-suited to serve as a foundation for QIPMs, compared to inexact-infeasible IPMs. Indeed, inexact-feasible IPMs exhibit the same iteration complexity as exact-feasible IPMs, and they impose milder restrictions on the inexactness of the Newton linear system; because of this, quantizing an inexact-feasible IPM leads to better theoretical performance than quantizing an inexact-infeasible IPM. Gondzio [46] presented an inexact-feasible IPM for convex quadratic optimization problems; his complexity analysis is the basis for the analysis of our novel IF-QIPM, although our QIPM bears little resemblance to the classical IF-IPMs. Indeed, when designing an inexact-feasible QIPM we make several choices that are very atypical for classical IPMs, yielding an algorithm with no direct classical analogue, even though its analysis relies on known classical tools. One key distinction is that the IPM in [46] assumes that primal and dual feasibility can be satisfied exactly, whereas our IF-QIPM guarantees primal-dual feasibility even with inexact search directions, thanks to the nullspace representation of the Newton linear system.

Contributions of this paper
This paper highlights several challenges in the literature on QIPMs that have previously been overlooked. To address some of these challenges, it develops two provably convergent QIPMs for SDO. One algorithm is a quantization of the classical inexact-infeasible IPM from [47]; the other algorithm is an inexact-feasible QIPM loosely based on [46], with significant differences introduced for the quantum setting. The inexact-infeasible QIPM converges to an $\epsilon$-approximate solution of an SDOP in $\mathcal{O}\left(n^{2} \log \frac{1}{\epsilon}\right)$ iterations, while the inexact-feasible QIPM converges in $\mathcal{O}\left(\sqrt{n} \log \frac{1}{\epsilon}\right)$ iterations.
Our first algorithm can be seen as a direct extension of [31] that properly addresses the two main issues discussed above, i.e., ensuring the existence of symmetric solutions and dealing with inexact search directions. This leads to the inexact-infeasible QIPM (II-QIPM). The running time of this algorithm, when using the HKM direction [23,36,48], depends polynomially on $n$, $1/\epsilon$, $\kappa_A$, and $\rho$, where $\kappa_A$ is the condition number of the matrix $A$ whose columns are the vectorized constraint matrices $A^{(1)}, \dots, A^{(m)}$, and $\rho > 0$ is used to define the size of the initial infeasible solution and is generally considered a constant in many papers (i.e., it is often ignored in the running time analysis). Comparing the running time of the inexact-infeasible QIPM to classical IPMs, there does not seem to be an advantage in any parameter: classical exact-feasible IPMs (with running time $\mathcal{O}\left(n^{2\omega + 0.5} \log \frac{1}{\epsilon}\right)$ on dense problems) are superior to the above quantum algorithm. While the II-QIPM does not achieve any advantage compared to classical algorithms, we discuss it because it shows one possible way of dealing with the issues of existing QIPMs in the open literature, highlighting the fact that quantum advantage seems unlikely in this setting: to improve upon classical algorithms at least in some parameters, we need to modify the basic classical IPM scheme. Our second algorithm, the inexact-feasible QIPM (IF-QIPM), solves the primal-dual SDO pair (1)-(2) with $m = \mathcal{O}(n^2)$ constraints to $\epsilon$-optimality using at most $\mathcal{O}\left(n^{3.5}\,\frac{\kappa^{2}}{\epsilon} \cdot \operatorname{polylog}\left(n, \kappa, \frac{1}{\epsilon}\right)\right)$ QRAM accesses and $\mathcal{O}\left(n^{4.5} \cdot \operatorname{polylog}\left(n, \kappa, \frac{1}{\epsilon}\right)\right)$ arithmetic operations. When applied to LO problems, the IF-QIPM requires $\mathcal{O}\left(n^{2}\,\frac{\kappa^{2}}{\epsilon} \cdot \operatorname{polylog}\left(n, \kappa, \frac{1}{\epsilon}\right)\right)$ QRAM accesses and $\mathcal{O}\left(n^{2.5} \cdot \operatorname{polylog}\left(n, \kappa, \frac{1}{\epsilon}\right)\right)$ arithmetic operations.
For SDO problems involving $n \times n$ matrices and $m = \mathcal{O}(n^2)$ constraints, the IF-QIPM achieves speedups in terms of $n$ compared to both classical feasible IPMs and the II-(Q)IPMs existing in the literature, though its dependence on $\kappa$ and $1/\epsilon$ prohibits an overall speedup; see Section 7.4. We also de-quantize our framework to obtain a classical IF-IPM, finding that our classical algorithm exhibits an overall running time of $\mathcal{O}\left(n^{4.5}\kappa \cdot \operatorname{polylog}\left(\kappa, \frac{1}{\epsilon}\right)\right)$ when applied to SDO problems with $m = \mathcal{O}(n^2)$. Like the IF-QIPM, our IF-IPM is faster in $n$ than all existing classical IPMs, and its dependence on $\epsilon$ is exponentially better than that of its quantum counterpart. Yet, we cannot decisively conclude that the IF-IPM outperforms the current state-of-the-art classical IPMs for SDO; its running time still depends (linearly) on $\kappa$. When applied to LO problems, our IF-IPM has a running time of $\mathcal{O}\left(n^{2.5}\kappa \cdot \operatorname{polylog}\left(\kappa, \frac{1}{\epsilon}\right)\right)$. Note that this does not outperform the best classical IPM for LO even with respect to the dimension $n$: classical randomized IPMs can solve LOPs with $m = n$ constraints in time $\mathcal{O}_{n, \frac{1}{\epsilon}}\left(n^{\omega}\right)$, see [49]. It should be noted that the running time dependence on the condition number $\kappa$ is due to the use of a QLSA for the solution of the Newton linear system. It is known that $\kappa$ can grow very large over the course of IPMs (regardless of problem class, e.g., LO or SDO), and in fact $\kappa \to \infty$ as we approach optimality; see the discussion in Section 7.4. The QIPM has better dependence on $1/\epsilon$ than the quantum MWU, but this comes at the cost of worse dependence on other parameters, most notably $n$ and $\kappa$. It is worth noting, however, that QIPMs (as they are presented in this work) classically output the primal-dual optimal solution $(X^*, y^*, S^*)$, whereas QMWU algorithms only output $y^*$ and a way to construct $X^*, S^*$.
In addition, the solutions we obtain using the IF-QIPM satisfy primal and dual feasibility exactly, while (Q)MWU algorithms can only obtain solutions which satisfy the constraints up to some additive error; e.g., in the primal case we have $A^{(i)} \bullet X \in [b_i - \mathcal{O}(\epsilon_{\mathrm{abs}}), b_i + \mathcal{O}(\epsilon_{\mathrm{abs}})]$ for $i \in [m]$. It should also be noted that (Q)IPMs achieve a normalized duality gap $\frac{X^* \bullet S^*}{n}$ of at most $\epsilon$, whereas with (Q)MWUs the guarantee is $\epsilon$-additive. For a more detailed comparison of (Q)IPMs and (Q)MWUs, see Section 7.3. The QIPMs developed here enjoy a lower complexity than classical IPMs only if $\epsilon$ is relatively large and $\kappa$ remains polynomially small over the course of the algorithm, a condition that cannot be guaranteed in theory. In practice, it is possible that these algorithms exhibit good performance if we are interested in obtaining a low-precision solution.
Summarizing, based on the current state of the art for classical and quantum algorithms, quantum algorithms for linear systems do not seem sufficient to obtain a speedup over classical IPMs, due to the challenges highlighted above. This was also observed in [50], which compares classical and quantum algorithms for a specific application of SDO in combinatorial optimization. It is known that the dependence of QLSAs on $\kappa$ cannot in general be reduced to $\kappa^{1-\delta}$ for any $\delta > 0$ unless BQP = PSPACE [51]. We therefore conjecture that a quantum speedup would have to rely on different techniques: the direct quantization of a classical IPM, with linear algebra performed on a quantum computer, does not seem to be sufficient for an end-to-end speedup in general.
Our paper identifies some open questions. First, can we reduce the dependence on $1/\epsilon$ from linear to polylogarithmic? A promising direction to attain this objective is that of iterative refinement techniques, which have been successfully applied in the classical optimization literature for LOPs [52-54]. We conjecture that they can be adapted to the setting of SDO, and this may help us improve the $\epsilon$-dependence of the QIPM. Second, is there a way to improve the dependence on $\kappa$? This seems difficult within the QLSA framework, due to the lower bounds of [51,55], but in the context of QIPMs it would still be advantageous to use techniques that allow us to trade $\kappa$ for a slightly higher dependence on $n$, which, crucially, is exponential in the number of qubits anyway.
The rest of this paper is organized as follows. In Section 2, we introduce our notation, some necessary concepts in linear algebra, and basic quantum procedures. In Section 3 we discuss the main components of classical IPMs. Section 4 discusses how to deal with tomography errors, as well as algorithmic variants that allow one to capture inexactness and infeasibility. Section 5 provides technical results that are crucial to the convergence analysis of the IF-QIPM. In Section 6 we discuss its quantization, and we provide a theoretical analysis of the running time for the inexact-feasible and inexact-infeasible quantum interior point methods in Section 7. Section 8 concludes the paper. Some technical results, and results from other papers, are presented in Appendices A and B, respectively.

Preliminaries
We write $A^{\dagger}$ to indicate the conjugate transpose of $A$. We denote by $G \otimes K$ the tensor product of two matrices. For a matrix $G \in \mathbb{R}^{n \times n}$, $\operatorname{vec}(G)$ is the $n^2 \times 1$ vector obtained by stacking the columns of $G$, i.e., $\operatorname{vec}(G) = (g_{11}, g_{21}, \dots, g_{n1}, g_{12}, \dots, g_{nn})^{\top}$.
We make extensive use of the functions svec and smat, formally defined as follows.

Definition 1.
For $G \in \mathcal{S}^n$, $\operatorname{svec}(G) \in \mathbb{R}^{\frac{1}{2}n(n+1)}$ is given by
$$\operatorname{svec}(G) = \left(g_{11}, \sqrt{2}\,g_{21}, \dots, \sqrt{2}\,g_{n1}, g_{22}, \sqrt{2}\,g_{32}, \dots, \sqrt{2}\,g_{n2}, \dots, g_{nn}\right)^{\top}.$$
Further, the operator $\operatorname{smat}$ is the inverse operator of $\operatorname{svec}$; that is, $\operatorname{smat}(\operatorname{svec}(G)) = G$ for every $G \in \mathcal{S}^n$. With this scaling, $\operatorname{svec}(G)^{\top} \operatorname{svec}(K) = G \bullet K$ for all $G, K \in \mathcal{S}^n$. We also use the symmetric Kronecker product, defined below.
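A minimal NumPy sketch of svec and smat (our own illustration; it assumes the column-major lower-triangular ordering with $\sqrt{2}$-scaled off-diagonal entries) verifies that smat inverts svec and that svec is an isometry for the trace inner product:

```python
import numpy as np

SQ2 = np.sqrt(2.0)

def svec(G):
    """Stack the lower triangle of G column by column, scaling off-diagonals by sqrt(2)."""
    n = G.shape[0]
    out = []
    for j in range(n):
        for i in range(j, n):
            out.append(G[i, j] if i == j else SQ2 * G[i, j])
    return np.array(out)

def smat(v):
    """Inverse of svec: rebuild the symmetric matrix from its svec representation."""
    n = int(round((np.sqrt(8 * len(v) + 1) - 1) / 2))
    G = np.zeros((n, n))
    k = 0
    for j in range(n):
        for i in range(j, n):
            G[i, j] = G[j, i] = v[k] if i == j else v[k] / SQ2
            k += 1
    return G

rng = np.random.default_rng(0)
n = 4
sym = lambda B: (B + B.T) / 2
G, K = sym(rng.standard_normal((n, n))), sym(rng.standard_normal((n, n)))

assert np.allclose(smat(svec(G)), G)                    # smat inverts svec
assert np.isclose(svec(G) @ svec(K), np.trace(G @ K))   # <svec(G), svec(K)> = G • K
```

The $\sqrt{2}$ scaling is exactly what makes the second assertion hold: each off-diagonal pair $g_{ij}, g_{ji}$ contributes twice to $\operatorname{tr}(GK)$.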

Definition 2.
Let $G, K \in \mathcal{S}^n$. Define the $\frac{n(n+1)}{2} \times n^2$ matrix $V$ as the unique matrix satisfying $V \operatorname{vec}(S) = \operatorname{svec}(S)$ and $V^{\top} \operatorname{svec}(S) = \operatorname{vec}(S)$ for every $S \in \mathcal{S}^n$ (so that $V V^{\top} = I$). Then, the symmetric Kronecker product $G \otimes_s K$ of $G$ and $K$ is defined as
$$G \otimes_s K = \frac{1}{2}\, V \left( G \otimes K + K \otimes G \right) V^{\top}.$$
We let $\mathcal{S}^n_+$ and $\mathcal{S}^n_{++}$ denote the cones of symmetric positive semidefinite and symmetric positive definite matrices, respectively. Given two vectors $x, y$, we denote by $x \bullet y$ the concatenation of the two vectors. For $x \in \mathbb{R}^n$, we denote its amplitude encoding by $|x\rangle$, defined as
$$|x\rangle = \frac{1}{\|x\|} \sum_{j} x_j |j\rangle.$$
Notice that |x⟩ is a (log n)-qubit state; for simplicity, we assume that the sizes of all spaces are powers of 2. All logarithms are base 2. The smallest and largest singular values of a matrix A are denoted σ min (A), σ max (A), and the smallest and largest eigenvalues are denoted λ min (A), λ max (A).

The condition number of a matrix $A$ is $\kappa(A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}$; we simply write $\kappa$ when referring to the upper bound on the condition number of the Newton linear system. For an element $x \in \mathbb{C}^d$, $\Re(x) \in \mathbb{R}^d$ is its real part. We write "II" for inexact-infeasible and "IF" for inexact-feasible.
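The svec map and the symmetric Kronecker product of Definitions 1 and 2 can be checked numerically. The following NumPy sketch (our own illustration; it assumes the column-major lower-triangular svec ordering) builds the matrix $V$ explicitly and verifies the characteristic action of $G \otimes_s K$ on symmetric matrices, $(G \otimes_s K)\operatorname{svec}(H) = \frac{1}{2}\operatorname{svec}(KHG^{\top} + GHK^{\top})$:

```python
import numpy as np

def svec(G):
    """Column-major lower triangle of G, off-diagonals scaled by sqrt(2)."""
    n = G.shape[0]
    r = []
    for j in range(n):
        for i in range(j, n):
            r.append(G[i, j] if i == j else np.sqrt(2) * G[i, j])
    return np.array(r)

def build_V(n):
    """V maps vec(S) (column-major) to svec(S) for symmetric S; rows are orthonormal."""
    V = np.zeros((n * (n + 1) // 2, n * n))
    k = 0
    for j in range(n):
        for i in range(j, n):
            if i == j:
                V[k, j * n + i] = 1.0
            else:
                V[k, j * n + i] = V[k, i * n + j] = 1.0 / np.sqrt(2)
            k += 1
    return V

def sym_kron(G, K):
    """Symmetric Kronecker product G ⊗_s K = (1/2) V (G⊗K + K⊗G) V^T."""
    V = build_V(G.shape[0])
    return 0.5 * V @ (np.kron(G, K) + np.kron(K, G)) @ V.T

rng = np.random.default_rng(3)
n = 3
sym = lambda B: (B + B.T) / 2
G, K, H = (sym(rng.standard_normal((n, n))) for _ in range(3))

# V has orthonormal rows and V vec(S) = svec(S) for symmetric S
V = build_V(n)
assert np.allclose(V @ V.T, np.eye(n * (n + 1) // 2))
assert np.allclose(V @ H.flatten(order="F"), svec(H))

# Defining action of the symmetric Kronecker product
lhs = sym_kron(G, K) @ svec(H)
rhs = svec(0.5 * (K @ H @ G.T + G @ H @ K.T))
assert np.allclose(lhs, rhs)
```

The check uses the column-major identity $(G \otimes K)\operatorname{vec}(H) = \operatorname{vec}(K H G^{\top})$, which is how NumPy's `kron` interacts with Fortran-order flattening.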

Basic concepts on block-encoded matrices
The QIPMs described in this paper make extensive use of quantum linear algebra subroutines; notably, matrix multiplication and matrix-vector products. We perform all these operations in the framework of block-encodings, as discussed in [56,57]. The block-encoding framework generalizes earlier work on high-accuracy QLSAs in the sparse-access input model [58] by expanding not only the possible input models, but also the set of operations allowed on the input matrix. Before giving formal definitions and an overview of key results from the literature, we give an informal introduction of the work in [56] to convey intuition.
To get a basic understanding of how to perform matrix-vector and matrix-matrix multiplication using this framework, suppose we want to multiply a matrix $A$ by some vector $u$. We assume that $u$ is encoded in the quantum state $|u\rangle$, which we want to update as follows:
$$|u\rangle \to \frac{A|u\rangle}{\|A|u\rangle\|}.$$
A block-encoding $U$ of $A$ is a unitary such that:
$$U = \begin{pmatrix} A/\alpha & \cdot \\ \cdot & \cdot \end{pmatrix},$$
where $\alpha$ is a normalization factor chosen to ensure that $\|A/\alpha\| \le 1$ (the dots indicate arbitrary blocks that make $U$ unitary). In particular, if $A$ is a $w$-qubit operator, an $(\alpha, a, \xi)$-block-encoding of $A$ uses $a$ extra qubits and implements $A$ up to total error $\xi$. Hence, we have
$$U\, |0\rangle^{\otimes a} |u\rangle = \frac{1}{\alpha}\, |0\rangle^{\otimes a} A|u\rangle + |\perp\rangle,$$
where $|\perp\rangle$ is orthogonal to every state with the first register in $|0\rangle^{\otimes a}$, and performing approximately $\frac{\alpha}{\|A|u\rangle\|}$ rounds of amplitude amplification yields $\frac{A|u\rangle}{\|A|u\rangle\|}$ with high probability; see, e.g., [56].
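As a numerical sanity check (our own construction, not a circuit from the paper), one can build a one-ancilla block-encoding explicitly via a unitary dilation of $A/\alpha$ and verify that it is unitary, that its top-left block is $A/\alpha$, and that post-selecting the ancilla on $|0\rangle$ yields $A|u\rangle/\alpha$:

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a Hermitian PSD matrix via eigendecomposition."""
    w, Q = np.linalg.eigh((M + M.conj().T) / 2)
    return (Q * np.sqrt(np.clip(w, 0, None))) @ Q.conj().T

def block_encoding(A, alpha):
    """One-ancilla unitary dilation with A/alpha as its top-left block (needs ||A/alpha|| <= 1)."""
    B = A / alpha
    n = A.shape[0]
    return np.block([
        [B, psd_sqrt(np.eye(n) - B @ B.conj().T)],
        [psd_sqrt(np.eye(n) - B.conj().T @ B), -B.conj().T],
    ])

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
alpha = np.linalg.norm(A, 2)          # spectral norm, so ||A/alpha|| <= 1
U = block_encoding(A, alpha)

assert np.allclose(U.conj().T @ U, np.eye(2 * n))   # U is unitary
assert np.allclose(U[:n, :n], A / alpha)            # top-left block is A/alpha

# Applying U to |0>|u> and post-selecting the ancilla on |0> yields A|u>/alpha
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
out = U @ np.concatenate([u, np.zeros(n)])
assert np.allclose(out[:n], A @ u / alpha)
```

The amplitude of the ancilla-$|0\rangle$ branch is $\|A|u\rangle\|/\alpha$, which is why roughly $\alpha/\|A|u\rangle\|$ rounds of amplitude amplification are needed to boost it.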
For matrix multiplication, let $U_A$ be an $(\alpha_1, a_1, \xi_1)$-block-encoding of a $w$-qubit operator $A$, and $U_B$ an $(\alpha_2, a_2, \xi_2)$-block-encoding of a $w$-qubit operator $B$. Then the product of the two unitaries yields an $(\alpha_1\alpha_2, a_1 + a_2, \alpha_1\xi_2 + \alpha_2\xi_1)$-block-encoding of $AB$. For matrix inversion, let $U$ be an $(\alpha, a, \xi)$-block-encoding of $A$ that can be implemented in time $T_U$. Then, Chakraborty et al. [56] prove that we can implement a block-encoding $V$ of the inverse of $A$, with cost polylogarithmic in the dimension and polynomial in $\alpha$, $T_U$, and the condition number of $A$. Hence, given $A$ as a block-encoding and $|u\rangle$, we can solve the linear system by applying amplitude amplification to the state $V |0\rangle^{\otimes a} |u\rangle$.

To construct block-encodings of the matrices used by our QIPMs, we use the following construction; see [56] for details. Suppose we want to construct a block-encoding of a matrix $A$, and we have a way of constructing the quantum states $|\psi_i\rangle = \sum_j \frac{A_{ij}}{\|A_i\|} |i, j\rangle$, where $A_i$ is the $i$-th row of $A$, and $|\phi_j\rangle = \sum_i \frac{\|A_i\|}{\|A\|_F} |i, j\rangle$. Then we construct unitaries $U_L$ and $U_R$ that prepare these states, where we use the fact that
$$\langle \phi_j | \psi_i \rangle = \frac{A_{ij}}{\|A\|_F}.$$
Thus, $U_R^{\dagger} U_L$ is a block-encoding of $A$ with normalization factor $\alpha = \|A\|_F$. For the above construction to work, we must have a procedure to implement $U_R, U_L$. Note that these unitaries are easy to obtain starting from controlled operations to prepare the states $|\psi_i\rangle, |\phi_j\rangle$, i.e., operations of the form $|i\rangle|0\rangle \to |i\rangle|\psi_i\rangle$, $|j\rangle|0\rangle \to |j\rangle|\phi_j\rangle$, and these controlled operations can in turn be constructed with a procedure similar to the state preparation algorithm of Grover and Rudolph [59] for efficiently integrable distributions. This can be made more efficient if we allow pre-processing to create certain data structures that can be stored in quantum-accessible storage, i.e., QRAM. A quantum RAM (QRAM) is a form of storage that allows for querying a superposition of addresses.
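The inner-product identity behind this construction is easy to verify numerically. Below (our own sketch) we assemble the states $|\psi_i\rangle$ and $|\phi_j\rangle$ as explicit vectors and check that their Gram matrix reproduces $A/\|A\|_F$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
frob = np.linalg.norm(A, "fro")
row_norms = np.linalg.norm(A, axis=1)

# |psi_i> = sum_j (A_ij / ||A_i||) |i,j>;  |phi_j> = sum_i (||A_i|| / ||A||_F) |i,j>
Psi = np.zeros((n * n, n))          # column i holds |psi_i>
Phi = np.zeros((n * n, n))          # column j holds |phi_j>
for i in range(n):
    for j in range(n):
        Psi[i * n + j, i] = A[i, j] / row_norms[i]
        Phi[i * n + j, j] = row_norms[i] / frob

# All columns are unit vectors, i.e., valid quantum states
assert np.allclose(np.linalg.norm(Psi, axis=0), 1.0)
assert np.allclose(np.linalg.norm(Phi, axis=0), 1.0)

# <phi_j | psi_i> = A_ij / ||A||_F: the Gram matrix is the encoded block of A
# with normalization factor alpha = ||A||_F
assert np.allclose(Psi.T @ Phi, A / frob)
```

In the quantum algorithm these states are never written out as dense vectors; they are prepared by $U_L$ and $U_R$ from a QRAM data structure, which is what makes the construction efficient.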
Given a QRAM that stores the classical vector $v \in \mathbb{R}^{2^q}$ with entries $v_j$, and a quantum state $\sum_{j=0}^{2^q-1} \beta_j |j\rangle$, the QRAM assumption is that the mapping
$$\sum_{j=0}^{2^q-1} \beta_j |j\rangle |0\rangle \to \sum_{j=0}^{2^q-1} \beta_j |j\rangle |v_j\rangle$$
can be performed in time $\mathcal{O}_n(1)$, i.e., polylogarithmic in the size of the vector. Throughout this paper, we assume that we have access to a classical-write, quantum-read QRAM that is large enough to store all input matrices, as well as a number of vectors so that we can efficiently perform indexed SWAP operations in our tomography subroutine (see Section 2.3), i.e., of size $\mathcal{O}_{n, \frac{1}{\delta}, \frac{1}{\epsilon}}\left(mn^2 + 2^{n^2/\epsilon}\right)$. The exponential term in the QRAM size comes from the tomography subroutine if we insist on a classical-write QRAM, but as we discuss in Section 2.3, there are several alternative models that reduce this cost: for example, if the QRAM can be written in superposition, or if we have access to indexed SWAP gates, then the necessary QRAM size is only polynomial. For more details about the QRAM data structure to prepare the amplitude encoding of a vector, or the matrices $U_R, U_L$ discussed above, we refer the reader to [31,56].

Useful results on block-encoded matrices
We now provide formal definitions for the concepts informally discussed in the previous section, as well as other results that are used in the remainder of this paper. The material in this section is mostly taken from [56,57], which provide improvements over the framework discussed in [31,60].
Note that the definition of a block-encoding is equivalent to the following property:
$$\left\| A - \alpha \left( \langle 0 |^{\otimes a} \otimes I \right) U \left( |0\rangle^{\otimes a} \otimes I \right) \right\| \le \xi.$$
The next proposition formalizes an idea discussed in the previous section. We do not provide details on the necessary data structure, referring the reader to [56] for an extensive discussion of this topic.

Proposition 1 (Lemma 50 in [57]). Let $A \in \mathbb{C}^{m \times m}$ with $m = 2^w$ and $\xi > 0$.
(i) Fix $q \in [0, 2]$ and define $\mu_q(A) = \sqrt{n_q(A)\, n_{2-q}(A^{\top})}$, where $n_q(A) = \max_i \|a_i\|_q^q$ is the $q$-th power of the maximum $q$-norm of the rows of $A$. If $A^{(q)}$ and $(A^{(2-q)})^{\dagger}$ are both stored in QRAM data structures, then there exist unitaries $U_R$ and $U_L$ that can be implemented in time $\mathcal{O}(\operatorname{poly}(w \log \frac{1}{\xi}))$ and such that $U_R^{\dagger} U_L$ is a $(\mu_q(A), w + 2, \xi)$-block-encoding of $A$.
(ii) If $A$ is stored in a QRAM data structure, then there exist unitaries $U_R$ and $U_L$ that can be implemented in time $\mathcal{O}(\operatorname{poly}(w \log \frac{1}{\xi}))$ and such that $U_R^{\dagger} U_L$ is an $(\|A\|_F, w + 2, \xi)$-block-encoding of $A$.

Definition 4 (State preparation pair).
Let $y \in \mathbb{C}^m$ with $\|y\|_1 \le \beta$. The pair of unitaries $(P_L, P_R)$, acting on $p$ qubits, is called a $(\beta, p, \xi)$-state-preparation pair for $y$ if $P_L |0\rangle^{\otimes p} = \sum_{j=0}^{2^p - 1} c_j |j\rangle$ and $P_R |0\rangle^{\otimes p} = \sum_{j=0}^{2^p - 1} d_j |j\rangle$, such that $\sum_{j=0}^{m-1} \left| \beta\, c_j^* d_j - y_j \right| \le \xi$ and $c_j^* d_j = 0$ for $j = m, \dots, 2^p - 1$.

Proposition 2 (Lemma 52 in [57]; linear combination of block-encoded matrices, with weights given by a state preparation pair). Let $A = \sum_{j=0}^{m-1} y_j A^{(j)}$ be a $w$-qubit operator, let $(P_L, P_R)$ be a $(\beta, p, \xi_1)$-state-preparation pair for $y$, and suppose that
$$W = \sum_{j=0}^{m-1} |j\rangle\langle j| \otimes U_j + \left( I - \sum_{j=0}^{m-1} |j\rangle\langle j| \right) \otimes I_a \otimes I_w$$
is a $(w + a + p)$-qubit unitary with the property that $U_j$ is an $(\alpha, a, \xi_2)$-block-encoding of $A^{(j)}$. Then we can implement an $(\alpha\beta, a + p, \alpha\xi_1 + \alpha\beta\xi_2)$-block-encoding of $A$ with a single use of $W$, $P_R$ and $P_L^{\dagger}$.
The following two propositions are critical to the efficiency of the QIPMs. They state that one can construct a block-encoding of the product of two block-encoded matrices, with overhead that is merely polylogarithmic in the size of the matrices. The difference between the two propositions is that in the second one, the normalization factor of the block-encoding of the product is fixed, rather than dependent on the input.

Proposition 3 (Lemma 4 in [56]; product of block-encoded matrices). If $U_A$ is an $(\alpha_1, a_1, \xi_1)$-block-encoding of an $s$-qubit operator $A$, and $U_B$ is an $(\alpha_2, a_2, \xi_2)$-block-encoding of an $s$-qubit operator $B$, then $(I_{a_2} \otimes U_A)(I_{a_1} \otimes U_B)$ is an $(\alpha_1\alpha_2, a_1 + a_2, \alpha_1\xi_2 + \alpha_2\xi_1)$-block-encoding of $AB$.
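Proposition 3 can be checked numerically for exact encodings ($\xi_1 = \xi_2 = 0$) with one ancilla qubit each. The dilation-based construction below is our own illustration; the register bookkeeping behind the identity factors $I_{a_2}$, $I_{a_1}$ is handled with an explicit swap of the two ancilla qubits:

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a Hermitian PSD matrix via eigendecomposition."""
    w, Q = np.linalg.eigh((M + M.conj().T) / 2)
    return (Q * np.sqrt(np.clip(w, 0, None))) @ Q.conj().T

def block_encoding(A, alpha):
    """One-ancilla unitary dilation with A/alpha as its top-left block."""
    B = A / alpha
    n = A.shape[0]
    return np.block([
        [B, psd_sqrt(np.eye(n) - B @ B.conj().T)],
        [psd_sqrt(np.eye(n) - B.conj().T @ B), -B.conj().T],
    ])

rng = np.random.default_rng(5)
n = 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
aA, aB = np.linalg.norm(A, 2), np.linalg.norm(B, 2)
UA, UB = block_encoding(A, aA), block_encoding(B, aB)

I2, In = np.eye(2), np.eye(n)
# Register order: (anc_A, anc_B, system).  I ⊗ U_B acts on (anc_B, system).
opB = np.kron(I2, UB)
# U_A acts on (anc_A, system); conjugate by a swap of the two ancilla qubits
# so that anc_A sits next to the system register.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
P = np.kron(SWAP, In)
opA = P @ np.kron(I2, UA) @ P
W = opA @ opB

# With both ancillas in |0>, the top-left n x n block of W is AB/(alpha_A * alpha_B)
assert np.allclose(W[:n, :n], A @ B / (aA * aB))
```

This matches the claimed $(\alpha_1\alpha_2, a_1 + a_2, 0)$-block-encoding of $AB$ in the exact case.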

Proposition 4 (Lemma 5 in [56]; product of preamplified block-encoded matrices). Suppose $A$ and $B$ have sizes such that $AB$ is a valid matrix product, and $\|A\|, \|B\| \le 1$. Given an $(\alpha_1, a_1, \xi_1)$-block-encoding $U_A$ of an $s$-qubit operator $A$, and an $(\alpha_2, a_2, \xi_2)$-block-encoding $U_B$ of an $s$-qubit operator $B$, with $\alpha_1, \alpha_2 \ge 1$, we can implement a $(2, a_1 + a_2 + 2, \xi)$-block-encoding of $AB$, where $\xi$ depends on $\xi_1$ and $\xi_2$, in time proportional to $\alpha_1 T_{U_A} + \alpha_2 T_{U_B}$ up to polylogarithmic factors, where $T_{U_A}$ and $T_{U_B}$ are the implementation times for $U_A$ and $U_B$, respectively.

If one has a block-encoded matrix, one can also implement a block-encoding of (positive or negative) powers of that matrix; see [56, Lemmas 9 and 10]. Block-encodings can also be used to solve quantum linear systems problems; we refer to [56] for the formal statement of the corresponding result.

Proposition 6. Let
$$A = \begin{pmatrix} M_{11} & \cdots & M_{1c} \\ \vdots & & \vdots \\ M_{r1} & \cdots & M_{rc} \end{pmatrix} \in \mathbb{R}^{n \times n},$$
where for simplicity $r, c$ are powers of 2, and each $M_{ij}$ is a matrix, of appropriate dimension, that is stored in a QRAM data structure. Suppose further that the row norms and column norms of each matrix $M_{ij}$ are known. Then we can construct an $(rc\|A\|_F, \mathcal{O}(\log n), \xi)$-block-encoding of $A$ in time $\mathcal{O}(\operatorname{poly}(rc, \log n, \log \frac{1}{\xi}))$.
Proof. Let $A^{(1)}, \dots, A^{(m)}$ denote the $m = rc$ matrices in $\mathbb{R}^{n \times n}$ that each store one of the $M_{ij}$ matrices in the same position in which it appears in $A$, with all other entries being 0. Then, $A$ can be written as a linear combination of the $m$ matrices $A^{(k)}$ as $A = \sum_{k=1}^{m} y_k A^{(k)}$, where $y_k = 1$ for all $k$. We normalize all matrices using the Frobenius norm of $A$, to ensure that all spectral norms are at most 1.
Notice that if a matrix $M_{ij}$ is stored in QRAM, we can construct the block-encoding of the corresponding $A^{(k)}$ using Proposition 1, with straightforward modifications of the underlying algorithm, because the elements and norms of $A^{(k)}$ can be efficiently computed from those of $M_{ij}$. Thus, a $(\|A\|_F, \log n + 2, \xi')$-block-encoding of $A^{(k)}$ can be efficiently constructed, for any $\xi' > 0$.
Let $(P_L, P_R)$ be an $(m, \log m, 0)$-state-preparation pair for $y$, which can be constructed by simply taking $P_L = P_R = H^{\otimes \log m}$, where $H$ is the Hadamard gate. We then use Proposition 2, where $W$ can be obtained by adding a control register to the circuits used to construct the block-encoding of each $A^{(k)}$, and the parameter $\xi_2$ is chosen to be $\xi/(m\|A\|_F)$. This yields an $(m\|A\|_F, \log n + \log m, \xi)$-block-encoding of $A$ with a single use of $W$, $P_R$ and $P_L^{\dagger}$; since $P_R$ and $P_L^{\dagger}$ only require $\mathcal{O}(\log m)$ gates each, and $W$ takes $\mathcal{O}(\operatorname{poly}(m, \log n, \log \frac{1}{\xi}))$ time, the proof is complete.

When symmetrizing the Newton linear system, some components of the coefficient matrix can be conveniently defined using Kronecker and symmetric Kronecker products of the input matrices. We conclude this section by formalizing how one can block-encode $G \otimes_s K$ up to error $\xi$ in time $\mathcal{O}_{n, \frac{1}{\xi}}(1)$, provided that the matrices $G$ and $K$ are stored in a QRAM data structure.
To do so in an efficient manner, we construct a block-encoding using the sparse-access input model; in particular, we use this input model to construct block-encodings of the matrix V and its transpose V^⊤, which are used to define the symmetric tensor product of two matrices as given in Definition 2. In this setting, it is assumed that the matrix M ∈ C^{M×N} we wish to block-encode has s_r-sparse rows and s_c-sparse columns, and that we can query the elements of M using an oracle. Likewise, it is assumed that we have access to oracles O_r and O_c that are capable of querying the indices of the non-zero elements of each row and column. This input model is the most suitable for constructing V, since we can easily implement an efficient algorithmic description of its sparsity pattern and element values. The following result from [57] establishes how one can efficiently implement block-encodings of sparse-access matrices.

Lemma 1 (Lemma 48 in [57]).
Let M ∈ C^{2^w × 2^w} be a matrix that is s_r-row-sparse and s_c-column-sparse, and each element of M has absolute value at most 1. Suppose that we have access to the following sparse-access oracles acting on two (w+1)-qubit registers: r_ij is the index of the j-th non-zero entry of the i-th row of M, or j + 2^w if the row has fewer than j non-zero entries; similarly, c_ij is the index of the i-th non-zero entry of the j-th column of M, or i + 2^w if the column has fewer than i non-zero entries. Additionally, assume that we have access to an oracle O_M that returns the entries of M in a binary description.

The above result allows us to establish how to block-encode the symmetric Kronecker product of two matrices.
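The sparse-access oracles have a simple classical counterpart, which may help fix the index conventions. Below is an illustrative classical model (names and parameters are ours, not from [57]) of r_ij and c_ij, including the j + 2^w convention for out-of-range queries:

```python
import numpy as np

# Classical stand-in for the sparse-access oracles of Lemma 1: for a matrix M
# of size 2^w x 2^w, oracle_r(i, j) returns the column index of the j-th
# non-zero entry of row i, or j + 2^w if row i has fewer than j non-zeros.
w = 3
N = 2 ** w
rng = np.random.default_rng(1)
M = rng.standard_normal((N, N)) * (rng.random((N, N)) < 0.3)  # ~30% dense

def oracle_r(i, j):            # j is 1-indexed, as in the lemma statement
    nz = np.flatnonzero(M[i])
    return int(nz[j - 1]) if j <= len(nz) else j + N

def oracle_c(j, i):            # i-th non-zero entry of the j-th column
    nz = np.flatnonzero(M[:, j])
    return int(nz[i - 1]) if i <= len(nz) else i + N
```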
Using the description of V, we can construct the sparse-access oracles O_r and O_c as defined in Lemma 1 (which act on two (log n² + 1)-qubit registers). Additionally, from the definition of V we can construct an oracle O_V that returns the entries of V in a binary description, where v_ij is a p-bit binary description of the (i,j) matrix element of V. By Lemma 1, we can therefore efficiently implement a block-encoding of V, and similarly of V^⊤. It remains to block-encode G ⊗ K and K ⊗ G, for some error parameters to be specified later. Indeed, note that a (∥G ⊗ K∥_F, O(log n), ξ_{G⊗K})-block-encoding of G ⊗ K is trivial to construct: by the definition of block-encoding, it suffices to take the tensor product of block-encodings for G and K, constructed using Proposition 1, keeping the ancilla qubits separate and ensuring that the sum of the errors is less than ξ_{G⊗K}.
From here, we use a linear combination of our block-encodings of G ⊗ K and K ⊗ G to block-encode G ⊗ K + K ⊗ G. Using our block-encodings of V and G ⊗ K + K ⊗ G, we apply Proposition 3 to block-encode their product, and a final application of Proposition 3 with the block-encoding of V^⊤ yields a block-encoding of G ⊗_s K in time O(poly(log n, log(1/ξ))), which completes the proof.
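As a classical sanity check of this construction, the matrix V and the defining identity of the symmetric Kronecker product, (G ⊗_s K) svec(H) = ½ svec(GHK^⊤ + KHG^⊤) for symmetric H, can be verified numerically for small n. The sketch below fixes a row-major vec convention; the paper's Definition 2 fixes the precise convention used in the proofs.

```python
import numpy as np

n = 3
rng = np.random.default_rng(2)
G = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
H = rng.standard_normal((n, n)); H = (H + H.T) / 2   # symmetric test matrix

# V maps vec(S) (row-major) to svec(S); rows are indexed by pairs (i, j), i <= j.
pairs = [(i, j) for i in range(n) for j in range(i, n)]
V = np.zeros((len(pairs), n * n))
for row, (i, j) in enumerate(pairs):
    if i == j:
        V[row, i * n + i] = 1.0
    else:
        V[row, i * n + j] = V[row, j * n + i] = 1.0 / np.sqrt(2)

def svec(S):
    return V @ S.reshape(-1)

# Symmetric Kronecker product built exactly as in the proof:
GsK = 0.5 * V @ (np.kron(G, K) + np.kron(K, G)) @ V.T

lhs = GsK @ svec(H)
rhs = 0.5 * svec(G @ H @ K.T + K @ H @ G.T)
```

Note that the rows of V are orthonormal (V V^⊤ = I), which is what makes the block-encoding of V well behaved.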

Extracting the solution of a linear system: tomography algorithm
In each iteration of our QIPMs, we solve a linear system of equations (known as the Newton linear system) to obtain search directions leading to the next iterate. Because we solve this system using a QLSA, the solution we obtain is encoded in a quantum state, and a classical description is necessary to construct the linear system that arises in the subsequent iteration. Namely, in order to update the current solutions to the primal and dual SDOPs, X and (y, S), we require a procedure to map the state proportional to the solution of the Newton linear system to a classical solution pair. There are many such procedures (e.g., [61,62]); in this paper we can take advantage of the fact that we have access to a unitary preparing the quantum state, as well as its inverse.
To obtain an ℓ 2 -norm estimate of the quantum state, we rely on the following result.
Theorem 2. Let |ψ⟩ = Σ_{j=0}^{d−1} y_j |j⟩ be a quantum state, y ∈ C^d the vector with elements y_j, and U a unitary with U|0⟩ = |ψ⟩. There is a quantum algorithm that, with probability at least 1 − δ, outputs a classical estimate ỹ of y with ∥ỹ − y∥_2 ≤ ε.

Proof. This follows from [63, Prop. 22], setting the ℓ∞-norm error to ε/√d. For the gate complexity, [63] uses a "QRAM-like" indexed-SWAP gate, which is used to efficiently implement the mapping from a binary description of a vector x to its amplitude encoding |x⟩. It is straightforward to note that this mapping can also be stored in a classical-write, quantum-read QRAM: as the vectors x ∈ [−1, 1]^d necessary for the construction are known in advance, and each coordinate requires precision O(1/ε), we can employ the usual QRAM data structure for each of these vectors. This operation needs to be performed only once, and it depends only on the dimension and the precision. Alternatively, the operation can be performed with a quantum-writable QRAM (the data structure can be computed in superposition and written onto the QRAM).
It is shown in [63] that the above sample complexity is optimal, i.e., the number of applications of U cannot be reduced in general. For the purposes of the QIPMs presented in this paper, we treat the algorithm from [63] as a black-box oracle that performs our tomography steps. For simplicity, and since we are already assuming access to a classical-write, quantum-read QRAM, we use that input model, although, as noted, other models are possible and potentially more efficient.
In light of the normalization of quantum states, in quantum state tomography it is assumed that the vector to be extracted (for our purposes, the solution to a linear system) has unit norm. The tomography error bound is therefore relative; hence, in order to satisfy the absolute error bound for the unscaled vector, we need to divide ξ_k by the maximum norm of a solution vector, which we denote by ϱ. Setting ε = ξ/ϱ and T_U = T_LS (i.e., the time to prepare and solve our linear system using a quantum computer), Theorem 2 asserts that we can extract a classical estimate of a solution to the linear system with Euclidean norm error ξ in time proportional to (ϱ/ξ)·T_LS, up to a factor depending on the dimension. Note that previous work on QIPMs used the tomography subroutine in [31], which has quadratic, rather than linear, dependence on the inverse precision; this improves our final running time, but it is not the only reason why we are able to give a faster QIPM than previously known. Indeed, as we describe in Section 6, the most impactful source of improvement is our development of an inexact-feasible QIPM, which overcomes the issues with feasibility present in other approaches.
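The rescaling argument (divide the target precision by the norm bound ϱ, run tomography on the unit vector, then rescale) can be illustrated with a small numerical sketch. The dimensions and tolerances below are illustrative only, and the "tomography output" is modeled as an adversarial perturbation of the unit vector:

```python
import numpy as np

rng = np.random.default_rng(3)
d, xi, rho = 50, 1e-2, 4.0          # dimension, target absolute error, norm bound

x = rng.standard_normal(d)
x *= rho * rng.random() / np.linalg.norm(x)   # unscaled solution, ||x|| <= rho

# Tomography returns an estimate of the *unit* vector x/||x|| to precision eps;
# requesting eps = xi / rho guarantees absolute error <= xi after rescaling.
eps = xi / rho
noise = rng.standard_normal(d)
estimate_unit = x / np.linalg.norm(x) + eps * noise / np.linalg.norm(noise)
estimate = np.linalg.norm(x) * estimate_unit   # rescale by the (known) norm

err = np.linalg.norm(estimate - x)            # = ||x|| * eps <= rho * eps = xi
```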

Primal and Dual SDOs and the Central Path
Recall that the symmetric matrices A^(1), ..., A^(m) are linearly independent by assumption, and S ∈ S^n is the slack matrix of the dual problem, as defined in (2). We denote the feasible sets of (1) and (2) by P and D, respectively. For a feasible IPM we must assume that a strictly feasible pair X and (y, S) with X ≻ 0 and S ≻ 0 exists, i.e., that the interior point condition (IPC) is satisfied [64]. It is known that with the self-dual embedding model this condition may be assumed without loss of generality [64]. We denote the corresponding sets of interior feasible solutions by P^0 and D^0.

With the IPC satisfied, it is guaranteed that the primal and dual optimal sets are nonempty and bounded, with z_P = z_D, i.e., the duality gap is zero. In particular, this holds for all optimal solutions X* and (y*, S*), which satisfy X* • S* = 0; this in turn implies X*S* = S*X* = 0, as X* and S* are symmetric positive semidefinite matrices. For primal-dual IPMs, however, the complementarity condition XS = 0 is perturbed to XS = σνI, where I is the n × n identity matrix, σ ∈ [0, 1] is the centering parameter, and ν > 0 is the so-called central path parameter, which is progressively reduced to zero in the course of the optimization algorithm. For all ν > 0, assuming the IPC and linear independence of the matrices A^(i) for i ∈ [m], the central path equation system has a unique solution [3]. The set of these solutions for all ν > 0 forms the central path of the primal and dual SDOPs. In IPMs we aim to follow the central path as ν → 0, i.e., as we approach optimality; in fact, we merely seek to stay in a certain neighborhood of the central path.
A classical IPM, as outlined in Algorithm 1, begins with a strictly feasible primal-dual pair (X^0, S^0) ∈ P^0 × D^0 for the primal and dual SDO problems (1) and (2). This pair has a duality gap of X^0 • S^0 = ν_0 n, where ν_0 = (X^0 • S^0)/n, and a distance to the central path of d(X^0, S^0) ≤ γν_0 for γ ∈ (0, 1), for some appropriate distance metric d(X, S). Without loss of generality, one can assume that ν_0 = O(1). As mentioned earlier, it is always possible to obtain an interior feasible solution using the self-dual embedding model; in particular, the analytic center X^0 = S^0 = n^{−1}I is always strictly feasible in this setting.
In our work on the IF-QIPM, we consider the narrow (or Frobenius) neighborhood N_F(γ) = {(X, y, S) ∈ P^0 × D^0 : ∥X^{1/2}SX^{1/2} − νI∥_F ≤ γν}. Another popular alternative is the so-called negative-infinity neighborhood, a large neighborhood defined as N^−_∞(γ) = {(X, y, S) ∈ P^0 × D^0 : λ_min(X^{1/2}SX^{1/2}) ≥ (1 − γ)ν}. It is well known that classical feasible primal-dual path-following IPMs using the former neighborhood exhibit an iteration complexity of order O(√n log(1/ϵ)), whereas long-step IPMs, using the latter neighborhood, have iteration complexity O(n log(1/ϵ)). In practice, however, long-step algorithms tend to converge faster than their short-step counterparts; in this paper we focus on the theoretical running time and therefore use the narrow (Frobenius) neighborhood N_F(γ). Thus, we define our centrality measure at a point (X, S) as d(X, S) = ∥X^{1/2}SX^{1/2} − νI∥_F. Then, in each iteration, we solve the so-called Newton linear system in order to obtain ∆X and ∆S, and update the solutions by taking a step along (∆X, ∆y, ∆S). If we were to directly linearize the complementarity condition, the Newton linear system (7) would consist of the feasibility conditions, requiring ∆X ∈ Null(·) and ∆S ∈ R(·), together with the linearized complementarity condition; here Null(·) and R(·) denote the nullspace (or kernel) and rowspace of the constraint matrices, respectively (we provide explicit definitions of these subspaces in the following sections). However, it is known that (7) does not have a symmetric matrix solution with respect to ∆X; see, e.g., [19]. This must be addressed, because both the primal and dual solutions must be symmetric.
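The Frobenius-norm centrality measure and the neighborhood test are straightforward to evaluate classically. The sketch below (illustrative dimensions; the measure d(X, S) = ∥X^{1/2}SX^{1/2} − νI∥_F is the standard Frobenius choice) checks that the analytic center X = S = n^{−1}I lies exactly on the central path, i.e., d = 0:

```python
import numpy as np

def centrality(X, S):
    """d(X, S) = || X^{1/2} S X^{1/2} - nu I ||_F with nu = (X . S)/n."""
    n = X.shape[0]
    nu = np.trace(X @ S) / n
    w, U = np.linalg.eigh(X)
    Xh = U @ np.diag(np.sqrt(w)) @ U.T          # symmetric square root of X
    d = np.linalg.norm(Xh @ S @ Xh - nu * np.eye(n), 'fro')
    return d, nu

n, gamma = 4, 0.05
X0 = S0 = np.eye(n) / n        # analytic center of the self-dual embedding
d0, nu0 = centrality(X0, S0)   # d0 = 0, so (X0, S0) is in N_F(gamma)
```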

Algorithm 1 Classical interior point method
Choose a nonsingular matrix P from the Monteiro-Zhang family of search directions, and compute P_0

Symmetrizing the Newton System
As has been extensively studied in the IPM literature (see, e.g., [19,24,25]), the Newton steps ∆S and ∆X have to be symmetric matrices. More specifically, even though ∆S is guaranteed to be symmetric by the definition of a dual feasible solution, there is no symmetric ∆X that solves system (7). Thus, we need a different approach to guarantee symmetry.
In an effort to generalize the scaling methods required for primal-dual symmetry, following Zhang [39], we define the linear transformation H_P(M) = (1/2)[PMP^{−1} + (PMP^{−1})^⊤], which symmetrizes a matrix M using a given invertible matrix P. Observe that H_P(M) = σνI ⇐⇒ M = σνI; we can then write the central path equations in symmetric form as H_P(XS) = σνI. In this light, we can also symmetrize the linearized complementarity condition of the Newton system, X∆S + S∆X = σνI − XS, as H_P(X∆S + S∆X) = σνI − H_P(XS). Expressing equation (9) explicitly in terms of Kronecker products, we can write the symmetrized Newton linear system, where I_{n²} is the identity matrix of order n². Following Todd et al. [65], we can also write the Newton linear system using the svec notation, in which case the system has order n(n+1)/2, I is the identity matrix of order n(n+1)/2, and E_s = P ⊗_s P^{−⊤}S and F_s = PX ⊗_s P^{−⊤}. One can note that the matrices E_s and F_s are nonsingular whenever X and S are positive definite [65]; the stated equalities follow from the definition of the symmetric Kronecker product detailed earlier.
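Zhang's symmetrization operator is simple to implement and probe numerically. The sketch below (illustrative sizes and a generic nonsingular P) checks two properties used above: H_P(M) is symmetric for every M, and H_P fixes multiples of the identity, which is the "⇐" direction of H_P(M) = σνI ⇔ M = σνI:

```python
import numpy as np

def H(P, M):
    """Zhang's symmetrization: H_P(M) = (1/2)[P M P^{-1} + (P M P^{-1})^T]."""
    PMPinv = P @ M @ np.linalg.inv(P)
    return 0.5 * (PMPinv + PMPinv.T)

rng = np.random.default_rng(5)
n, sigma, nu = 4, 0.5, 0.1
P = rng.standard_normal((n, n)) + n * np.eye(n)   # generic nonsingular P
M = rng.standard_normal((n, n))

# H_P(M) is symmetric for any M, and H_P(c I) = c I for any scalar c,
# so H_P(XS) = sigma*nu*I is a valid symmetric rewriting of XS = sigma*nu*I.
HM = H(P, M)
```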
We then need to choose P to obtain specific instances of this system; we refer to the class of search directions parameterized by P as the Monteiro-Zhang family of search directions [38]. We present results involving three possible choices of P: P = I, the AHO direction [19]; P = S^{1/2}, the HKM direction [23,36,48]; and P = W^{−1/2}, where W is the Nesterov-Todd [25] scaling matrix, defined as W = X^{1/2}(X^{1/2}SX^{1/2})^{−1/2}X^{1/2}. To define the HKM Newton linear system for the k-th iteration, we use symmetry to drop all of the transpose terms involving P and set P = S^{1/2} in the quantities above. In defining the AHO Newton linear system, taking P = I simplifies the above quantities further. Although this guarantees that our solutions to the Newton system are symmetric, the AHO direction only guarantees a solution if we are within a small (local) neighborhood of the central path: the ∞-norm neighborhood with opening less than 1/3. Finally, for the Nesterov-Todd Newton system, we set P = W^{−1/2}. Symmetry is not the only issue overlooked by previous works on QIPMs; we still need to account for the consequences of inexact tomography in the context of QIPMs.
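The Nesterov-Todd scaling matrix can be computed with two symmetric matrix square roots, and its defining property W S W = X verified directly. A small sketch with illustrative well-conditioned test matrices:

```python
import numpy as np

def matfun(A, f):
    """Apply f to the eigenvalues of a symmetric positive definite matrix A."""
    w, U = np.linalg.eigh(A)
    return U @ np.diag(f(w)) @ U.T

rng = np.random.default_rng(6)
n = 4
B = rng.standard_normal((n, n)); X = B @ B.T + n * np.eye(n)   # X > 0
C = rng.standard_normal((n, n)); S = C @ C.T + n * np.eye(n)   # S > 0

# Nesterov-Todd scaling matrix W = X^{1/2} (X^{1/2} S X^{1/2})^{-1/2} X^{1/2}.
Xh = matfun(X, np.sqrt)
Z = Xh @ S @ Xh
W = Xh @ matfun(Z, lambda w: 1.0 / np.sqrt(w)) @ Xh
```

The identity W S W = X (equivalently, W^{−1/2} scales the primal and dual symmetrically) is what makes the NT direction primal-dual symmetric.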

Newton Linear Systems for QIPMs
We now turn our attention to solving the Newton linear system using quantum linear system solvers. We present two different Newton linear systems that account for the inexactness of the search directions resulting from the use of tomography. The first system, based on the infeasible central path and its neighborhood, allows us to quantize an Inexact-Infeasible IPM; however, in subsequent sections we will show that this direct methodology leads to an algorithm with a poor overall running time. Thus, we subsequently introduce a new methodology that allows us to design an inexact but feasible QIPM framework, which yields a speedup in n over the classical variants.

An Infeasible Central Path and its Neighborhood
Our first effort to deal with the tomography errors is to quantize an Inexact-Infeasible IPM framework. That is, we work with a solution to a Newton linear system in which the right hand side includes residuals to capture (i) infeasibility of the iterate sequence and (ii) inexactness of the search direction. In this section we describe the infeasible central path, and the resulting Newton linear system.
To illustrate the issue at hand precisely, previous works on QIPMs [31][32][33] (if we apply symmetrization to their Newton systems) assume we can exactly solve the Newton linear system (10). Yet solving the system (10) with a quantum computer introduces errors due to our use of QLSA and tomography. As a result, a classical estimate (∆X, ∆y, ∆S) of the quantum state |∆X • ∆y • ∆S⟩ will satisfy the system only up to residual errors, where (ξ_d, ξ_p, ξ_c) are the errors to which we satisfy dual feasibility, primal feasibility, and the complementarity condition, respectively.
The feasible IPM framework, upon which the QIPMs in [31,33] are built, therefore cannot be applied (even if these works had properly accounted for symmetry). The analysis of feasible primal-dual IPMs relies on the fact that ∆X • ∆S = 0; this property is used throughout the classical analyses. However, the presence of the errors ξ_d and ξ_p means the primal and dual search directions ∆X and ∆S are not guaranteed to lie in orthogonal subspaces. While this alone implies that the QIPMs presently in the literature are not convergent, one also has to account for the inexactness to which we solve the complementarity condition.
Let (X^0, y^0, S^0) be an initial point, not necessarily feasible, where X^0 = S^0 = ρI for some ρ > 0. Following [47], it is assumed that ρ is specified such that, for a primal-dual optimal solution (X*, y*, S*) and some 0 < γ_1 < 1, suitable bounds hold; we then consider the residual quantities associated with the current iterate. For τ ∈ (0, 1], the infeasible central path equations define a set of infeasible centers that approaches the solution of (1)-(2) as τ → 0. Letting γ_2 ∈ (0, 1), we can define a neighborhood of the infeasible central path, which we denote N_I.

The following results from Zhou and Toh [47] are crucial for the complexity analysis of the II-QIPM later in the paper; they correspond to the case where the HKM search direction is chosen, which is the one for which we give a full analysis of the method.

Lemma 2 (Lemma 1 in [47]). For any r_p and r_d satisfying ∥A^+ r_p∥ ≤ γ_1 ρ and ∥r_d∥ ≤ γ_1 ρ, there exists (X̃, ỹ, S̃) that satisfies the stated conditions.

Lemma 3 (Lemma 2 in [47]). If the conditions (15) hold, then the iterates satisfy ∥X∥ ≤ tr(X) ≤ O(ρn) and ∥S∥ ≤ tr(S) ≤ O(ρn).

From the above, one can derive Frobenius norm bounds on the solutions X and S by noting the relationship ∥G∥_F ≤ √n∥G∥ for G ∈ R^{n×n}, i.e., ∥X∥_F ≤ O(ρn^{3/2}) and ∥S∥_F ≤ O(ρn^{3/2}). We do, however, note that the bounds on the operator norms and traces of the solutions, ∥X∥ ≤ tr(X) ≤ O(ρn) and ∥S∥ ≤ tr(S) ≤ O(ρn), given in the above remarks are only relevant to II-IPMs: they are properties associated with the infeasible central path, and our discussion of the IF-QIPM will not use these bounds.
If τ_k = θ_k, then the sequence {(X^k, S^k)} is also bounded.

Inexact Search Directions
Let η_1 ∈ (0, 1], η_2 ∈ (0, 1) with η_1 ≥ η_2, and τ_0 = θ_0 = 1. Given a point (τ_k, θ_k, X^k, y^k, S^k) ∈ N_I, we try to generate a new point by computing an update along the search direction (∆X^k, ∆y^k, ∆S^k) for the solution (X, y, S); this naturally leads to an iterative algorithm. We choose the search direction to be a solution of the Newton system of equations (21). As previously discussed, in the quantum setting both the QLSA and tomography necessarily introduce errors in the solution of (21). We therefore consider an inexact search direction (∆X^k, ∆y^k, ∆S^k), defined next, following [47]. Let {ϑ_i}_{i=0}^∞ be a sequence of reals in (0, 1]. We say that (∆X^k, ∆y^k, ∆S^k) is an inexact search direction at the k-th iteration if it solves the linear system (22), where r_c^k ∈ S^n and the residual terms satisfy (23) and (24), with 0 < θ̄ ≤ σ_min(A). Upon obtaining a classical estimate of the solution to (22) using QLSA and tomography, it is possible that the complementarity residual r_c is not symmetric (as we have no safeguards to ensure that the tomography errors are symmetric). Therefore, in order to ensure that we always have r_c ∈ S^n, we take one further precaution and utilize the svec(·) characterization of the Newton system (22). Then, in each iteration of the II-QIPM, we use the HKM direction with P = S^{1/2} and solve the resulting svec system, where E_s^k and F_s^k are defined according to (13a)-(13b). We can now give a full description of the classical Inexact-Infeasible IPM, as given in [47], in Algorithm 2. This algorithm converges in polynomial time provided that (23) and (24) hold; this will be discussed in Section 6.

The Nullspace Representation of the Newton Linear System
Analogous to Gondzio's [46] IF-IPM for linearly-constrained quadratic optimization (LCQO), in our novel IF-QIPM approach the first two equations of (10) hold exactly, while we assume that the third equation is satisfied only up to some residual error R_r ∈ S^n, which in our setting is due to the use of quantum state tomography (subsequent discussion will show that we can guarantee the residual is symmetric). Here R_c^s = svec(σνI − H_P(XS)) = svec(R_c). As is standard in the inexact Newton method literature [46,66,67], we assume that the residual term satisfies ∥R_r∥_F ≤ β∥R_c∥_F for some β ∈ (0, 1). In this way, we ensure that the residual norms are driven towards zero as we approach optimality, since ∥R_c∥_F → 0 as ν → 0. Additionally, choosing the level of error in this manner is not very restrictive, as the algorithm terminates upon reaching a solution satisfying ν ≤ ϵ.
The ensuing result from [65] establishes the uniqueness of the solution to the system (25).

Proof. The proof is the same as the one given in [65], but we repeat it here for completeness. In what follows, positive definiteness does not imply symmetry. We seek to show that the homogeneous system is satisfied only by the all-zero solution; note that, for convenience, we have reordered the block rows of the Newton system in order to apply block-Gaussian elimination. From (11a) it follows that E_s is invertible. Hence, applying block-Gaussian elimination to (26) yields the Schur complement equation. By assumption, E_s^{−1}F_s ≻ 0 and A_s has full row rank, implying that the matrix A_s E_s^{−1} F_s A_s^⊤ is positive definite; therefore ∆y = 0. The second equation in (26) then gives ∆S = −smat(A_s^⊤∆y) = 0, and from the third equation it follows that ∆X = −E_s^{−1}F_s∆S = 0. This completes the proof.

If the Newton linear system (25) is solved using QLSA and tomography, the first two block equations in this system will not be satisfied exactly either. To design our IF-QIPM, we therefore rewrite the optimality conditions (10) in nullspace form (27). If the dual problem (2) is given in canonical form (inequalities instead of equalities), we can trivially obtain bases for the nullspace and rowspace directly from the coefficient matrix. Alternatively, one can recast the SDOP as a slightly larger one using the self-dual embedding model, for which the nullspace basis matrix can be constructed trivially from the coefficient matrix (i.e., without needing factorization or Gaussian elimination). We can also calculate a basis of Null(A_s) using either Gaussian elimination or the QR factorization. Recall that A_s ∈ R^{m × n(n+1)/2}. The QR factorization of A_s^⊤ is given by A_s^⊤ = [Q_1 Q_2][R_1; 0], where the columns of Q_1 form an orthonormal basis of R(A_s) and the columns of Q_2 form an orthonormal basis of Null(A_s). The following result defines the Newton linear system to be solved at each iteration of the IF-QIPM.

The Newton linear system for the IF-QIPM is given by (E_s Q_2)∆z − (F_s A_s^⊤)∆y = R_c^s, where svec(∆X) = Q_2∆z (28a) and svec(∆S) = −A_s^⊤∆y (28b).
Proof. Substituting equations (28a) and (28b) into the left-hand side of the third equation in system (27) yields precisely the system (29). Observe that the coefficient matrix of equation (29) is full rank: the columns of A_s^⊤ and Q_2 are mutually orthogonal, and each of these matrices has linearly independent columns. Thus, equation (29) has a unique solution. Upon using QLSA to solve the system (29) for |∆z • ∆y⟩, we will need to map a classical estimate (∆z, ∆y) of the solution to a classical estimate (∆X, ∆y, ∆S) in order to proceed to the next iteration. Define ∆X and ∆S by (28a) and (28b), respectively. As (∆z, ∆y) is the unique solution of (29), ∆X and ∆S are uniquely determined as well.
Our use of inexact tomography causes ∆z and ∆y to be computed with some error. Yet, by using the bases of the nullspace and the rowspace to calculate ∆X and ∆S, we ensure that these search directions always come from orthogonal subspaces, regardless of the errors introduced by tomography or the stepsize taken. We formalize this result in the following proposition.
Proof. First, note that by construction svec(∆X) = Q_2∆z ∈ Null(A_s) and svec(∆S) = −A_s^⊤∆y ∈ R(A_s). Therefore, for any X ∈ P, the updated primal iterate remains feasible for (1); similarly, for any (y, S) ∈ D, the updated dual iterate remains feasible for (2).

Having established that our search directions computed from inexact estimates of the solution to (29) preserve primal and dual feasibility, we next certify the validity of using the system (29) to obtain a solution to the system (25). Further, we demonstrate that if the system (25) has a unique solution, then the system (29) does as well. The proof of the next result is straightforward using the definition of the quantities involved.
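The key consequence, ∆X • ∆S = 0 no matter how noisy the estimates of ∆z and ∆y are, can be demonstrated with a few lines of linear algebra. The sketch below uses illustrative dimensions and the sign convention svec(∆X) = Q_2∆z, svec(∆S) = −A_s^⊤∆y assumed above:

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 4, 3
N = n * (n + 1) // 2
A_s = rng.standard_normal((m, N))
Q = np.linalg.qr(A_s.T, mode='complete')[0]
Q2 = Q[:, m:]                          # orthonormal basis of Null(A_s)

# "Tomography" output: completely arbitrary estimates of Delta z and Delta y
# (think of them as corrupted by any amount of noise).
dz = rng.standard_normal(N - m)
dy = rng.standard_normal(m)

# Search directions built from the nullspace/rowspace representation:
# svec(dX) = Q2 dz lies in Null(A_s), svec(dS) = -A_s^T dy lies in R(A_s).
dX_s = Q2 @ dz
dS_s = -A_s.T @ dy

gap = dX_s @ dS_s                      # equals Delta X . Delta S
```

Since A_s Q_2 = 0, the inner product is zero for every choice of dz and dy, which is exactly why the analysis of feasible IPMs remains applicable despite inexact tomography.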

Technical Results
In this section, we present the technical results required to prove that our IF-QIPM for SDO exhibits a polynomial iteration complexity. Our convergence analysis can be viewed as a generalization of Gondzio's [46] analysis from LCQO to SDO. To accomplish this, we adapt the work of Monteiro [23] to account for the inexactness of the complementarity condition in order to prove polynomial convergence of the IF-QIPM presented in Section 6. Though in this work we choose the Nesterov-Todd direction P = W^{−1/2}, the convergence analysis we perform is presented in general terms of P, and holds for any member of the Monteiro-Zhang family of search directions [38], which includes the NT, AHO and HKM directions. We leave the results used to establish convergence of the II-QIPM to Appendix B.
For the rest of this section, we work under the assumption that (X, y, S) ∈ P 0 × D 0 .
We remark that in the scaled setting, our assumption on the residual term in (AR1) reads ∥R̄_r∥_F ≤ β∥R̄_c∥_F, where β ∈ (0, 1) and R̄_c = σνI − H_I(X̄S̄). For the analysis in this section, we assume that the error tolerances are chosen so that the residual term R̄_r for the scaled problem (34) satisfies this bound in each iteration, for some β ∈ (0, 1). Note that these assumptions regarding the residual are mild: in light of Lemma 4(a), R_r and R̄_r satisfy the same Frobenius norm bound, and the right-hand sides of the inequalities in (AR1), (AR2) and (AR3) are all invariant under the stated scaling.
The following result can be viewed as the inexact analogue of Lemma 3.4 in [23].
Proof. From (32c) we obtain the stated identity, where the final equality follows from (36). Dividing the resulting expression by n and applying (33) yields (37).
The following result from [23] plays a crucial role in the analysis.

Lemma 8 (Lemma 3.5 in [23]).
Let M ∈ R^{n×n} be such that H_Q(M) = 0 for some nonsingular Q ∈ R^{n×n}. Then tr(M) = 0; in particular, if M = U_1 + U_2 for some U_1 ∈ S^n and U_2 ∈ R^{n×n}, then tr(U_1) = −tr(U_2). We can now use the above result in order to prove the following lemma, which is adapted from Lemma 3.6 in [23].

Lemma 9.
For every θ ∈ R, inequality (41) holds, and for all α ∈ [0, 1], inequality (42) holds, where the quantities δ_x and δ_s are as defined below. Proof. We follow a similar strategy to [23], but modify the decomposition (namely the term U_2) to account for the inexactness residual term. Let θ ∈ R be given. By (32f), we have W_X̄ = U_1 + U_2, where U_1 = X̄^{1/2}∆S̄X̄^{1/2} + θνX̄^{−1/2}∆X̄X̄^{−1/2} + X̄^{1/2}S̄X̄^{1/2} − σνI ∈ S^n, and U_2 collects the remaining residual terms. Applying (35), it follows that H_Q(W_X̄) = 0 for Q = X̄^{1/2}. This fact, combined with Lemma 8, the inequality ∥AB∥_F ≤ ∥A∥_F∥B∥ for every A, B ∈ R^{n×n}, and the definitions of U_2 and δ_x, implies that (41) holds. From here, we apply (6) and (38), and note that for α ∈ [0, 1], the definitions of δ_x and δ_s yield the required bound. Combining this with (41) implies (42), and the proof is complete.
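The condition H_Q(M) = 0 appearing in Lemma 8 and the proof above has a consequence that is easy to probe numerically: it forces QMQ^{−1} to be skew-symmetric, and hence tr(M) = 0. A small sketch with illustrative data (the matrices below are ours, not from [23]):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 4
Q = rng.standard_normal((n, n)) + n * np.eye(n)    # generic nonsingular Q
K = rng.standard_normal((n, n)); K = K - K.T       # skew-symmetric

# Any M with H_Q(M) = 0 arises by conjugating back a skew-symmetric matrix:
M = np.linalg.inv(Q) @ K @ Q
QMQinv = Q @ M @ np.linalg.inv(Q)
H_QM = 0.5 * (QMQinv + QMQinv.T)                   # = 0 by construction

# Similar matrices share their trace, and a skew matrix has trace 0.
trace_M = np.trace(M)
```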
For ease of notation, we follow [23] in defining a quantity for every (A, B) ∈ S^n_{++} × S^n_{++} and θ ∈ R. Lemma 10 (Lemma 3.7 in [23]). If d(X, S) ≤ γν for some γ ∈ (0, 1), then: The following result establishes upper bounds on the Frobenius norms of terms involving the residuals.

Further, applying Lemma 4(a), one can observe that the scaled and unscaled residuals obey the same bounds.
Combining this fact with assumptions (AR1) and (AR2), we conclude that (46a) holds. By assumption (AR3), the bound on the remaining term follows. Recall that for two matrices A, B ∈ R^{n×n}, tr(AB) ≤ ∥A∥_F∥B∥_F; thus, noting that ∥R_r∥_F ≤ β∥R_c∥_F ≤ βσγν, the claimed inequality follows, which completes the proof.
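The trace inequality invoked here, tr(AB) ≤ ∥A∥_F∥B∥_F, is a Cauchy-Schwarz bound on the Frobenius inner product (tr(AB) = ⟨A^⊤, B⟩_F). It can be spot-checked numerically:

```python
import numpy as np

# Spot-check of tr(AB) <= ||A||_F ||B||_F over random matrix pairs.
rng = np.random.default_rng(10)
checks = []
for _ in range(100):
    A = rng.standard_normal((5, 5))
    B = rng.standard_normal((5, 5))
    checks.append(np.trace(A @ B)
                  <= np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro'))
```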
In the next result, we establish an upper bound on the terms δ x and δ s by adapting Lemma 3.8 of [23] to the inexact setting.
The final result of this section, analogous to Lemma 3.9 of [23], follows from the above lemmas.

Quantum Interior Point Methods for SDO
In this section we present our Inexact-Feasible and Inexact-Infeasible QIPMs. For the IF-QIPM, we establish polynomial convergence, discuss how to implement the block-encoding of the Newton linear system, and conduct the overall running time analysis. Though we present the II-QIPM in its entirety, we leave the discussion of its convergence and of the block-encodings for the corresponding Newton linear system to the Appendix.

An Inexact-Feasible QIPM for SDO
We quantize a general IF-IPM; the resulting scheme is described in Algorithm 3. There is no classical counterpart in the literature for SDO, as the nullspace representation destroys the symmetry of the Newton system, and is thus impractical for classical computing. The algorithm has several parameters, the most important of which is the optimality gap tolerance ϵ. Details for all steps of Algorithm 3 are discussed subsequently in this paper. In our initialization steps, we must calculate bases of the nullspace and rowspace of A_s. For this step, one can use either Gaussian elimination or the QR factorization of A_s, which is typically more stable from a numerical point of view. Theoretically, this can be accomplished using O(n^{2ω}) classical arithmetic operations, where ω ≈ 2.37 is the matrix multiplication exponent [68,69], although this cost can potentially be reduced to much closer to O(n^4) if the initial data is sparse and we use sparse matrix-vector multiplication; see [70]. It is also possible to trivially obtain a basis for the nullspace and the range space for "free" if the SDO problem is given in canonical form (inequalities instead of equalities): in that case, using the self-dual embedding model for that form, the nullspace basis matrix can be constructed trivially from the coefficient matrix (without needing a QR factorization or Gaussian elimination). Crucially, the computation of these bases need only be carried out once. That is, we calculate A_s and Q_2 one time, and store these matrices in QRAM before the algorithm begins, as they remain unchanged for the duration of the algorithm. This step is akin to the preprocessing steps (e.g., Cholesky factorization, ordering of the basis) typical of many optimization algorithms.
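This one-time preprocessing step can be sketched classically: a full QR factorization of A_s^⊤ simultaneously produces an orthonormal basis Q_1 of the rowspace R(A_s) and an orthonormal basis Q_2 of Null(A_s). Illustrative dimensions below:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 4, 3
N = n * (n + 1) // 2                    # A_s has n(n+1)/2 columns
A_s = rng.standard_normal((m, N))       # full row rank with probability 1

# One-time preprocessing: full QR of A_s^T gives A_s^T = [Q1 Q2] [R1; 0];
# the columns of Q2 form an orthonormal basis of Null(A_s), and those of
# Q1 an orthonormal basis of the rowspace R(A_s).
Q, R = np.linalg.qr(A_s.T, mode='complete')
Q1, Q2 = Q[:, :m], Q[:, m:]
```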
In steps 2 and 3 of Algorithm 3, we require classical matrix multiplication and inversion of n × n matrices; these steps take O(n^ω) classical arithmetic operations. A detailed analysis of step 3 can be found in Proposition 13. In steps 4 and 5, we employ a QLSA and the tomography routine, which reconstructs, with high probability, a classical description of the solution to the Newton linear system. Following our discussion on tomography in Section 2.3, this can be accomplished in O_{n,1/ξ_k}((n²/ξ_k)·T_LS) time, where, for the NT direction, T_LS is the time needed to prepare and solve the NT linear system, and ξ_k is the error to which we perform our tomography steps to extract a classical estimate of the solution to the Newton linear system. A discussion on how to set ξ_k at each iteration is provided at the beginning of Section 6.1.1. Then, in step 6, we use our classical estimates of ∆z and ∆y to classically calculate ∆X and ∆S. This step amounts to matrix-vector multiplication of dimension n², and requires O(n^4) arithmetic operations. Finally, we classically update the current solutions to the SDOP, X^k, y^k and S^k, and the central path parameter ν_k.
Here we establish that Algorithm 3 converges for a specific choice of the error parameters, discuss how to construct the Newton linear system for the Nesterov-Todd direction, and provide a complexity analysis for the chosen parameters.

Polynomial Convergence of the IF-QIPM
In order to have a convergent algorithm, we need to ensure that the errors ξ_k introduced by tomography are properly controlled. As we discussed earlier, the tomography error bound is relative (it is stated in additive form, but assumes that the vector to be extracted has unit norm): to ensure that we satisfy the absolute error bound for the unscaled vector, we need to divide ξ_k by the maximum norm of a solution vector, which we denote by ϱ. Now, by definition ∥R_c^k∥_F ≤ σγν_k holds at every iteration, and we terminate once ν_k ≤ ϵ. Hence, for every iteration of the algorithm before termination with an ϵ-optimal solution, the inequality ∥R_c^k∥_F > σγϵ holds, where γ = 1/20 and σ ≈ 1 for large n. In other words, the requirement on the residual bound never becomes much smaller than ϵ, if we aim at ϵ-optimality. Thus, we can set ξ_k accordingly, where ϱ is the maximum norm of a solution to the Newton linear system.

Algorithm 3 Inexact-Feasible Quantum Interior Point Method
Input: ϵ, δ > 0; σ = 1 − δ/√n; β, γ ∈ (0, 1)
Choose a nonsingular matrix P from the Monteiro-Zhang family of search directions
Choose (X^0, y^0, S^0) ∈ N_F(γ)
Set ν_0 = (X^0 • S^0)/n
Store (X^0, y^0, S^0) in QRAM
Compute bases A_s^⊤ for R(A_s) and Q_2 for Null(A_s)
Store A_s and Q_2 in QRAM
while ν > ϵ do
  1. Update the central path parameter ν_k ← (X^k • S^k)/n
  2. Compute the matrices X_k^{−1} and S_k^{−1} classically, and store these matrices in QRAM
  3. Compute the scaling matrix P_k, the right-hand-side matrix, and the necessary matrix products for the Newton system (29), and store them in QRAM
  4. Using block-encodings, solve (29) to construct the inexact search direction |∆z_k • ∆y_k⟩
  5. Obtain a classical estimate (∆z_k, ∆y_k) of (∆z_k, ∆y_k) using vector state tomography
  6. Use the classical estimate (∆z_k, ∆y_k) to obtain a classical estimate (∆X_k, ∆S_k) of (∆X_k, ∆S_k)
  7. Update the current solution
end
Following from Theorem 4, we obtain the convergence result for Algorithm 3.
Then, every iterate (X_k, y_k, S_k) generated by the IF-QIPM in Algorithm 3 is an element of the neighborhood N_F(γ), and satisfies the stated convergence relation. Examples of constants γ, δ and β satisfying the conditions of Corollary 1 are γ = δ = 1/20 and β = 1/4.
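As a sanity check on this convergence behavior, the geometric decrease ν_{k+1} = σν_k with σ = 1 − δ/√n can be simulated directly. The sketch below is our own illustration (function name and toy starting value ν_0 = 1 are assumptions, not part of the algorithm); the iteration count grows like √n · log(1/ϵ), so quadrupling n roughly doubles it.

```python
import math

def ifqipm_iterations(n, eps, delta=1/20, nu0=1.0):
    """Count iterations under the model nu_{k+1} = sigma * nu_k with
    sigma = 1 - delta/sqrt(n), stopping once nu <= eps (Corollary 1)."""
    sigma = 1 - delta / math.sqrt(n)
    nu, k = nu0, 0
    while nu > eps:
        nu *= sigma
        k += 1
    return k
```

For example, comparing `ifqipm_iterations(100, 1e-6)` and `ifqipm_iterations(400, 1e-6)` exhibits the O(√n log(1/ϵ)) scaling.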

Implementing the Nesterov-Todd linear system
As in [71], we factorize the NT Newton linear system given by (29), and construct block-encodings for its factors. In this paper we use block-encodings and a QLSA within an IPM by pre-computing some matrix inverses and products classically, and storing the results in QRAM so that the block-encodings of the factors of the Newton linear system can be constructed in time O_{n,1/ξ}(1), i.e., polylogarithmic in n and 1/ξ, where ξ is some error tolerance. This is more efficient than the other schemes that we explored. Note that the largest matrices appearing in the Newton system are only calculated once before the algorithm begins, so their size is not prohibitive. Hence, compared to solving the Newton linear system, classical computation of matrix powers and products has an insignificant cost, and we circumvent increased dependence on the condition number.

Proof. First, given that Q_2 is stored in QRAM, we can construct a (∥Q_2∥_F, O(log n), ξ_1)-block-encoding of Q_2 in time O_{n,1/ξ_1}(1). Next, with P and P^{-⊤}S stored in QRAM, we can apply Proposition 7 to construct a block-encoding of their product, again in time polylogarithmic in n and the inverse precision. From here we apply Proposition 3 to multiply these block-encodings, obtaining a block-encoding of the desired factor in polylogarithmic time, as desired.
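The classical pre-computation step described above can be sketched as follows; the dictionary is a classical stand-in for the QRAM data structure, and the function and key names are hypothetical, chosen only to mirror the quantities the paper stores (X^{-1}, S^{-1}, P, P^{-⊤}, PX, P^{-⊤}S).

```python
import numpy as np

def precompute_factors(X, S, W_inv_sqrt):
    """Classical pre-computation feeding the block-encodings: with the
    NT scaling P = W^{-1/2}, form the inverses and matrix products that
    the paper stores in QRAM, so that each block-encoding can then be
    built in polylogarithmic time.  A dict stands in for QRAM."""
    P = W_inv_sqrt
    P_invT = np.linalg.inv(P).T
    return {
        "X_inv": np.linalg.inv(X),
        "S_inv": np.linalg.inv(S),
        "P": P,
        "P_invT": P_invT,
        "PX": P @ X,            # product needed for one factor
        "P_invT_S": P_invT @ S, # product needed for another factor
    }
```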

Proposition 12. Let
Proof. The steps of the proof follow those of the previous result. Further, with PX and P^{-⊤} stored in QRAM, we apply Proposition 7 to construct a block-encoding of the corresponding product in polylogarithmic time. Applying Proposition 3, we obtain a (∥M_2∥_F, O(log n), ξ_{M_2})-block-encoding of M_2, and the proof is complete.
One should note that the construction of the Newton linear system provided in Proposition 13 is only one of many possible ways to obtain a block-encoding of our Newton linear system while utilizing the Nesterov-Todd direction. Further, though the Nesterov-Todd direction is the search direction we study in this manuscript, our construction is written in general terms of P, so it can be used for any suitable choice of P, such as the AHO (P = I) and HKM (P = S^{1/2}) directions. Note that by employing a factorization of the Newton linear system, we are able to theoretically improve the running time of implementing the block-encodings of the Newton linear system, which in turn allows us to improve the efficiency with which we solve the system. Next, we use this factorization to solve the Newton system.

Theorem 5.
Using the Nesterov-Todd direction P = W^{-1/2}, there is a quantum algorithm which, given |svec(R_c)⟩ and access to QRAM data structures encoding P, P^{-⊤}, −A_s^⊤, Q_2, PX and P^{-⊤}S, outputs a state ξ-close to |∆z • ∆y⟩ in time O_{n,κ,1/ξ}(κ max{∥M_1∥_F, ∥M_2∥_F}). We can also output an estimate of ∥∆z • ∆y∥ with relative error δ by increasing the running time by a factor of 1/δ.

Proof. This is a direct consequence of Proposition 13 and Theorem 1.

Proposition 14. The norm ϱ of the solution of the quantum Newton linear system (29) satisfies ϱ = O(κ_{M_NT}/∥M_NT∥).
Proof. We have (∆z, ∆y) = (M_NT)^{-1} svec(R_c + R_r).
By Lemma 11, choosing a constant c_0 ≥ (1 + β)σγ, it follows that ∥R_c + R_r∥_F ≤ c_0 ν, and therefore ∥(∆z, ∆y)∥ ≤ ∥(M_NT)^{-1}∥ ∥svec(R_c + R_r)∥ ≤ c_0 ν κ_{M_NT}/∥M_NT∥ = O(κ_{M_NT}/∥M_NT∥), where the final equality follows from the definition of the condition number and ν = O(1). To see this, note that across every iteration of the QIPM, we have max_k{ν_k} = ν_0 = (X_0 • S_0)/n = O(1), since in each iteration we strictly decrease the quantity X • S (which consequently decreases ν).
The bound on the norm of the solution is thus ϱ = O(κ_{M_NT}/∥M_NT∥), and the proof is complete.
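The role of ϱ can be illustrated with a bound computable from any concrete system: ∥M^{-1}r∥ ≤ ∥r∥/σ_min(M) = κ_M ∥r∥/∥M∥, which is the quantity by which the tomography precision is rescaled. A minimal sketch (the function name is our own):

```python
import numpy as np

def solution_norm_bound(M, r):
    """Upper bound on ||M^{-1} r||: at most ||r|| / sigma_min(M),
    i.e. kappa_M * ||r|| / ||M||, the norm bound used to rescale the
    tomography precision for the unscaled solution vector."""
    sigma_min = np.linalg.svd(M, compute_uv=False).min()
    return np.linalg.norm(r) / sigma_min
```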
Recall (see page 30) that T_LS denotes the time needed to prepare and solve the Newton linear system (29) with the Nesterov-Todd direction. Proof. The result follows from Proposition 14 after noting the bound on max{∥M_1∥_F, ∥M_2∥_F}; the final equality in the claimed bound then follows from the definition of the condition number.

An Inexact-Infeasible QIPM for SDO
The II-QIPM follows a similar algorithmic structure to that of the IF-QIPM, with two key distinctions: the Newton linear system to be solved at each iteration is different, and we do not need to compute bases of the nullspace and rowspace of A.
We now discuss convergence of the algorithm, using the main result from the work of Zhou and Toh [47], adapted to our case. Note that the proof in [47] is only explicitly given for the HKM direction (P = S 1/2 ), but convergence for the other cases can be shown analogously. For a detailed discussion on the analysis required to prove convergence, and implementing the block-encodings for the II-QIPM Newton linear system, we refer the reader to the Appendix.
To prove convergence, we first need to show that Algorithm 4 satisfies several properties. More specifically, we need to show that the inexact search direction computed at the end of step 4 satisfies (23)-(24). This requires choosing error parameters for the QLSA and for the tomography step. Recall the required bounds on the inexactness residuals, reported here for convenience from (23). Note that to obtain a classical estimate in step 4, we must determine the norm of the inexact search direction. Recall that the QLSA also introduces some error in the inexact search direction. In line with our convergence discussion for the IF-QIPM, we assume that all of the error comes from tomography.
To determine appropriate error parameters, we first discuss the sequence {ϑ_k}_{k=0}^∞, as ϑ_k determines the rate at which the residual norms must be decreased with respect to τ_k, where τ ∈ (0, 1] is defined in (17). We choose the sequence given in (55). This sequence, which is admissible according to [47], has the advantage of a relatively slow decrease of the error terms, and it greatly simplifies our running time analysis. Then θ̄ ≤ n² + 10, and for sufficiently large n, ϑ_k ≈ 1/(log n · log(1/ϵ)) for all iterations k before reaching ϵ-optimality. Therefore, choosing the sequence as defined by (55), the residual norms will decrease by a factor 1/(log n · log(1/ϵ)) faster than τ_k.
At iteration k, we then choose ξ_k accordingly, where ϱ is the maximum norm of a solution to the Newton linear system (24) corresponding to the II-QIPM. Setting ϑ_k according to (55), for sufficiently large n we simply have ϑ_k = 1/(log n · log(1/ϵ)) before convergence. Hence, we can take the error ξ_k as in (56). Lemma 15. Choosing ξ_k according to (56), the inexact search direction computed at the end of step 4 of Algorithm 4 satisfies (23).
Proof. For infeasible IPMs, the termination condition requires the feasibility gap be less than or equal to ϵ, and we scale down by ϱ due to normalization.
Proof. Follows from [47, Theorem 1], after noting that, as a consequence of Lemma 15, the inexact search direction used in Algorithm 4 satisfies the conditions of [47].
By the definition of the neighborhood of the infeasible central path, N I , we have τ k ≤ θ k for every iteration k. In view of Theorem 6, after k = O(n 2 ln(1/ϵ)) iterations we have θ k ≤ ϵ, which implies τ k ≤ ϵ. Accordingly, the infeasible central path parameter has been sufficiently reduced to obtain an ϵ-optimal solution.
Having established convergence of both the IF- and II-QIPMs, we now turn our attention to their overall running times, which we discuss next.

Complexity
In this section, we formally analyze the worst-case overall running times of the IF- and II-QIPMs. The per-iteration cost of either QIPM can be generically written as

T_iter = T_NT + T_progress,

where T_NT is the quantum gate complexity of obtaining a classical solution to the Newton linear system (29), and T_progress is the number of classical arithmetic operations required to prepare the next iteration. Thus, upon bounding T_iter for the IF- and II-QIPMs, one can obtain the worst-case overall complexity of each algorithm using the iteration bounds that were established in the previous section.
We conclude this section by comparing the performance of our methods to known classical and quantum algorithms for SDO from the literature, before discussing the impact of the dependence on the condition number bound κ on the overall running time of QIPMs.

Running time of the IF-QIPM
The following result establishes per-iteration cost of the IF-QIPM.

Proposition 16. Each iteration k of the IF-QIPM in Algorithm 3 has a quantum gate complexity of

O((n³κ²/ϵ) · polylog(n, κ, 1/ϵ)),

and requires O(n⁴ log(κ)) classical arithmetic operations.
Proof. In every iteration, we need to prepare and solve the NT Newton linear system (29), and obtain a classical estimate of the quantum state encoding its solution. Furthermore, we apply the state tomography algorithm to obtain a classical description of the solution of the Newton linear system; we denote the corresponding running time by T_TO(T_LS, ξ_k) for precision ξ_k. For simplicity and without loss of generality, we consider the tomography precision ξ_k but neglect the QLSA precision, as only the tomography precision appears polynomially in the final expression for the iteration cost. Applying Theorem 2 with d = n² and T_U = T_LS, running the tomography step to precision ξ_k chosen according to (49) yields the stated bound on the number of QRAM accesses required to prepare the Newton linear system and obtain a classical estimate of its solution at iteration k.
To progress to the next iteration, we must compute the primal and dual search directions ∆X and ∆S from the nullspace representation provided in equations (28a)-(28b), and update the solution (X, y, S) to the SDO problem. We must also compute the scaling matrix P and its inverse. Computing ∆X and ∆S using (28a)-(28b) amounts to classically computing matrix-vector products in which the dimension is n², and hence requires O(n⁴) arithmetic operations, while updating the solution is matrix addition involving n × n matrices, which can be accomplished using O(n²) arithmetic operations. Finally, noting that all of the matrices involved in the computation of P have size n × n, we can compute both P and P^{-1} using O(n^ω) arithmetic operations. Summarizing, we have a total of

T^IF_progress = O(n⁴ + n² + n^ω) = O(n⁴)

arithmetic operations at each iterate, as computing ∆X and ∆S from our estimate of the solution to (29) is the dominant operation. Note, however, that at each iterate the QLSA circuit requires the condition number of the Newton linear system coefficient matrix M_NT (or an upper bound κ̄ on it) as part of the input. Computing the condition number classically is too costly, so we instead use the following scheme to ensure correctness of steps 4 and 5 in Algorithm 3 (i.e., to ensure that κ̄ was chosen large enough). Use κ̄ = 1 as a starting guess, and then:
1. Execute steps 4 and 5 of Algorithm 3 with κ̄ as input.
2. Use the classical estimate of the solution to compute the residual error ε.
3. If ε ≤ ϱξ_k, then accept (∆z_k, ∆y_k) and terminate the loop. Otherwise, set κ̄ = 2κ̄ and go to step 1.
One can observe that ε can be computed using O(n⁴) arithmetic operations, and at most log(κ) such verification steps will be necessary in the worst case. Summarizing, we arrive at the stated per-iteration complexity. We are now in a position to bound the overall running time of our IF-QIPM given in Algorithm 3, which is formalized in the following result.
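The guess-and-double verification loop described above can be sketched as follows. Here `mock_qlsa` is a hypothetical stand-in for steps 4-5 of Algorithm 3 (QLSA plus tomography): it returns a reliable estimate only when the condition-number guess is large enough, which is exactly the situation the residual check detects.

```python
import numpy as np

def solve_with_kappa_doubling(M, r, solver, tol):
    """Guess-and-double scheme for the condition-number bound fed to the
    (Q)LSA.  `solver(M, r, kappa)` stands in for steps 4-5 of Algorithm 3.
    We accept once the residual check ||M u - r|| <= tol passes, doubling
    kappa otherwise; at most log(kappa) doublings occur."""
    kappa = 1.0
    while True:
        u = solver(M, r, kappa)
        if np.linalg.norm(M @ u - r) <= tol:  # verification step
            return u, kappa
        kappa *= 2.0                          # guess too small: double

def mock_qlsa(M, r, kappa):
    """Hypothetical stand-in: succeeds only once kappa >= cond(M)."""
    if kappa >= np.linalg.cond(M):
        return np.linalg.solve(M, r)
    return np.zeros_like(r)                   # junk output otherwise
```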
Proof. By Corollary 1, the IF-QIPM requires O(√n log(1/ϵ)) iterations to converge to an ϵ-optimal solution. Hence, letting χ be a nonnegative constant and applying Proposition 16, the stated bounds on the number of QRAM accesses and classical arithmetic operations required by Algorithm 3 follow. Recall that LOPs are simply a special instance of SDO in which all of the matrices are diagonal. Hence, our algorithm can be applied to these problems as well.
Proof. The quantum gate complexity can be obtained by repeating the analysis used in the proof of Corollary 2, noting that for LO the dimension of the problem is d = n instead of d = n². Then, recall that in the case of LO all of the matrices are diagonal, and therefore no symmetrization is required. Hence, the only classical progression steps in this case are computing the primal and dual search directions from their nullspace representation and subsequently updating the solution, each of which requires O(n²) arithmetic operations.
Since our IF-QIPM utilizes a hybrid quantum-classical scheme, one can trivially obtain a fully classical IF-IPM by replacing the quantum subroutine for solving the Newton linear system with an inexact classical linear system algorithm. Note that the system we obtain in (29) using the nullspace representation is not guaranteed to be positive semidefinite, so we cannot directly apply the Conjugate Gradient Method (CG) to solve it. For M ∈ R^{d×d} and v ∈ R^d with M ∉ S^n_+, one could first symmetrize the system Mu = v as M^⊤Mu = M^⊤v and subsequently apply CG, because we need to carry out a matrix-matrix multiplication to form M^⊤M, and the condition number of the resulting coefficient matrix is κ²_M. Alternatively, we could employ one of the generalizations of CG such as the Generalized Minimum Residual Method (GMRES); however, not only is the coefficient matrix we employ indefinite, it is also nonsymmetric and dense: applying GMRES to solve a dense, nonsymmetric and indefinite system of dimension d to precision ξ leads to a complexity of O(d³ log(1/ξ)). In our case d = n², so classically performing this step will require time O(n^{2ω} + n⁴κ_M log(1/ξ)) if we symmetrize and use CG, or time O(n⁶ log(1/ξ)) using GMRES.
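The symmetrize-then-CG option can be sketched classically as follows, assuming the normal-equations form M^⊤Mu = M^⊤v; the function is a minimal textbook CG, not the paper's implementation, and forming M^⊤M explicitly is exactly the O(d^ω) step the text mentions.

```python
import numpy as np

def cg_normal_equations(M, v, tol=1e-10, maxiter=1000):
    """Solve the nonsymmetric system M u = v by CG on the (symmetric
    positive definite) normal equations M^T M u = M^T v.  The price is
    a squared condition number: cond(M^T M) = cond(M)^2."""
    A, b = M.T @ M, M.T @ v        # explicit symmetrization, O(d^omega)
    u = np.zeros_like(b)
    r = b - A @ u
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # residual of the normal equations
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u
```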
This outcome is undesirable because it would lead to a slower algorithm than what already exists in the classical literature. Alternatively, one can use a polynomial approximation q of 1/u on the interval [1/κ²_M, 1]. Such a polynomial q with degree roughly κ_M log(1/ξ) exists (see, e.g., [72, Section 6.11]), and the approximate solution would be q(M²)Mv, which is close to M^{-1}v whenever the linear system Mu = v has a solution. Crucially, q(M²)Mv can be obtained using 2·deg(q) + 1 matrix-vector products, and does not require computing (or even writing down) the matrix M², thus avoiding the n^{2ω} term present in the complexity associated with symmetrizing and subsequently using CG. Rather, in our case the complexity becomes O(n⁴κ_M log(1/ξ)). For an explicit discussion of this method, we refer the reader to Section 16.5 of the monograph [73]. This argument is the basis for the next result, which bounds the running time of the classical IF-IPM for SDO. Proof. Simply note that applying [73, Theorem 16.6] with d = n² yields the stated bound for solving the Newton linear system, and we still have T^IF_progress = O(n⁴). Just as in the case of the IF-QIPM, the linear system solver we use for our IF-IPM takes the condition number of the Newton linear system as input. Using the verification scheme described in the proof of Proposition 16 implies that we might need to repeat these steps log(κ) times to ensure the solution we obtain is accepted. The stated result then follows upon accounting for the iteration bound provided by Corollary 1.
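The matvec-only evaluation of q(M²)Mv can be sketched as follows for a symmetric (possibly indefinite) M with spec(M²) ⊂ (0, 1]. As a simplification, we use a truncated Neumann series for q, which needs degree on the order of κ²_M log(1/ξ) rather than the κ_M log(1/ξ) achievable with the Chebyshev-type polynomial referenced above; the evaluation pattern (only products with M, never forming M²) is the same.

```python
import numpy as np

def apply_inverse_poly(M, v, degree):
    """Approximate M^{-1} v using only matrix-vector products with M.
    We evaluate q(M^2) M v with q(u) = sum_{j < degree} (1 - u)^j,
    the truncated Neumann series for 1/u, so that
    q(M^2) M v -> M^{-1} v as the degree grows.
    M^2 is never formed: each (M^2) x is computed as M @ (M @ x)."""
    w = M @ v                # w = M v
    acc = w.copy()           # j = 0 term
    term = w.copy()
    for _ in range(degree - 1):
        term = term - M @ (M @ term)  # term <- (I - M^2) term
        acc += term
    return acc               # uses 2*degree - 1 matvecs in total
```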

Running time of the II-QIPM
Similar to our analysis for the IF-QIPM, we begin by formally stating the per-iteration cost of the II-QIPM.

Proposition 17. Every iteration k of the II-QIPM in Algorithm 4 has a quantum gate complexity of

O(n^{3.5} κ κ_A ρ (∥A∥_F + ρn^{1.5}) · (1/ϵ) · polylog(n, κ, 1/ϵ)),

and requires O(n^ω log(κ)) classical arithmetic operations.
Proof. In each iteration of the II-QIPM, we prepare and solve the HKM linear system; let T^II_LS denote the time required to do so. As in the IF-QIPM, we utilize the vector state tomography algorithm from [63] to obtain a classical description of the solution of the Newton linear system (24); similar to the proof of Proposition 16, we denote the corresponding running time by T_TO(T^II_LS, ξ_k) for precision ξ_k. Applying Proposition 23, we have T^II_LS = O_{n,κ}(κ(∥A∥_F + ∥S∥_F)). From here, we point out that for II-(Q)IPMs, ϱ = O(κ_A ρ n^{1.5}); this is a consequence of the known bounds ∥∆X∥, ∥∆S∥ = O(ρn) (see, e.g., [47]), the relationship between the spectral norm and the Frobenius norm, and the definition of the neighborhood of the central path (18). We therefore obtain the stated tomography cost, as ∥S∥_F = O(ρn^{1.5}) by (20). We must also update the solution to the SDO problem, and compute the scaling matrix P and its inverse. Updating the solution is again standard matrix addition involving n × n matrices, which can be accomplished using O(n²) arithmetic operations, and the computation of P = S^{1/2} and its inverse can be performed using O(n^ω) arithmetic operations, as all of the involved matrices are of size n × n.
Just as in the case of the IF-QIPM, the QLSA we use for our II-QIPM takes the condition number of the Newton linear system as input. Using the scheme described in the proof of Proposition 16 implies that we might need to repeat these steps (both quantum and classical) log(κ) times to ensure the solution we obtain is accepted. Thus, each iteration of Algorithm 4 has a quantum gate complexity of

O(n^{3.5} κ κ_A ρ (∥A∥_F + ρn^{1.5}) · (1/ϵ) · polylog(n, κ, 1/ϵ)),

and requires O(n^ω log(κ)) classical arithmetic operations. The proof is complete.
Combining the preceding result with the iteration bound in Theorem 6 immediately gives the overall running time of the II-QIPM, which is formalized next.
Proof. We begin by expanding the expression for the per-iteration cost provided in Proposition 17. Then, noting that by Theorem 6 the II-QIPM requires O(n² ln(1/ϵ)) iterations, the total running time of the algorithm is the product of the two bounds, and the proof is complete.

Comparison with existing algorithms for SDO
Before proceeding further, we remind the reader of a number of key differences between (Q)IPMs and (Q)MWU algorithms. The output of (Q)IPMs is a classical primal-dual pair (X, y, S) that satisfies primal and dual feasibility exactly and has duality gap at most ϵ. Conversely, primal-dual (Q)MWUs output a dual solution y ∈ R^m whose slack matrix satisfies the positive semidefiniteness condition only approximately, up to −ϵ_abs·I, and the objective value attained by this solution is OPT ∈ [ς − O(ϵ_abs), ς + O(ϵ_abs)], where ς is a bound on the optimal objective value determined using binary search. Observe that this is another distinction between (Q)IPMs and (Q)MWUs: the output of our QIPMs always satisfies primal and dual feasibility exactly, whereas the solution obtained by (Q)MWU methods satisfies primal and dual feasibility only approximately.
The QMMWUs (specifically, all papers by Brandao et al. [27,50,74], as well as the papers by van Apeldoorn et al. [28,30]) also assume that the SDO problem data is normalized: ∥A⁽ⁱ⁾∥ ≤ 1 for i = 1, …, m, and ∥C∥ ≤ 1. Our QIPMs do not require this assumption. For the QIPMs presented in this work, only the Newton linear system must be normalized in each iteration (so that we can apply a QLSA to solve it), and this normalization is accounted for in our accuracy requirements and the subsequent running time analysis. (For classical MMWUs, the choice of normalization varies across papers; the analysis of the classical algorithm in [28] assumes the above normalization.) Assuming the data is normalized with respect to the operator norm means that for more general problems (i.e., without normalized data) the error scales as θϵ_abs, where θ = max{∥A⁽¹⁾∥, …, ∥A⁽ᵐ⁾∥, ∥C∥}.
Thus, for MMWUs one would lose a factor of θ in the precision. Note that being able to compute θ to normalize the data as θ^{-1}A⁽¹⁾, …, θ^{-1}A⁽ᵐ⁾, θ^{-1}C could require classical access to these matrices, and an additional O(mns) cost in the running time to compute θ. Table 1 presents the running times of our algorithms alongside well-known algorithms for solving SDO problems. For ease of comparison, we assume that m = O(n²) and that the problem is fully dense (i.e., s = n). It can be readily seen that the II-QIPM is worse in every parameter when compared to the other algorithms. Conversely, the IF-QIPM obtains a speedup with respect to the dimension n over each of the listed classical solution methodologies. Yet, the IF-QIPM exhibits quadratic dependence on the condition number bound κ and linear dependence on the inverse precision, implying an asymptotically worse overall running time.
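The normalization cost discussed above amounts to one pass over the data to compute θ; a minimal sketch (the function name is our own, and dense numpy arrays stand in for the s-sparse input matrices):

```python
import numpy as np

def normalize_for_mmwu(As, C):
    """Rescale SDO data so operator norms are at most 1, as assumed by
    (Q)MMWU methods: theta is the largest spectral norm over
    A^(1), ..., A^(m), C, and all matrices are divided by it.
    Computing theta requires classical access to the matrices."""
    theta = max(max(np.linalg.norm(A, 2) for A in As),
                np.linalg.norm(C, 2))
    return [A / theta for A in As], C / theta, theta
```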

Table 1: Method and runtime comparison for SDO solvers (columns: References, Method, Runtime). Among the listed entries are the IPMs of [23-25], with runtime O_{n,1/ϵ}(n^{6.5}), and the IPM of [22].

When compared to the QMWU algorithm, it can be observed that the QIPM achieves a favorable dependence on the error. Although the QMWU is faster with respect to the dimension, it has polynomial dependence on the trace bound R, and for certain applications this may negate any speedup, such as the SDO approximation of Quadratic Unconstrained Binary Optimization (QUBO) problems, i.e., problems of the form

max{ x^⊤Qx : x ∈ {−1, 1}^n }.

QUBOs play an important role in the quantum information sciences, and the resulting SDO approximation is given by

max{ Q • X : diag(X) = e, X ⪰ 0 },

where e is the all-ones vector of dimension n. Hence, any optimal solution of (58) satisfies tr(X) = n, suggesting that for these instances the trace bound R = n enters the worst-case running time of the QMWU polynomially. For SDO problems with diagonal constraints such as (58), the Arora-Kale method can be specialized (see, e.g., [75]) to obtain an Õ(ns·ϵ^{−3.5}) algorithm. Note that to achieve this running time, the algorithm avoids explicitly computing or reporting the primal solution X, and instead makes use of techniques to estimate its diagonal entries and trace inner products of the form A • X. As an alternative, these methods report a "gradient" G ∈ S^n such that X = W exp(G)W for a diagonal matrix W. While the algorithm of Lee and Padmanabhan [75] offers speedups with respect to the dimension, its running time is exponentially slower than that of classical IPMs with respect to the inverse precision.
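The observation that tr(X) = n for the QUBO relaxation is easy to verify on a small instance; in the sketch below (names our own), any sign vector x yields a feasible X = xx^⊤ attaining the QUBO objective, so the relaxation upper-bounds the QUBO value:

```python
import numpy as np

def qubo_vs_relaxation(Q, x):
    """For x in {-1, 1}^n, X = x x^T is feasible for the relaxation
    max{ Q . X : diag(X) = e, X psd } and attains Q . X = x^T Q x;
    the constraint diag(X) = e forces tr(X) = n for every feasible X."""
    X = np.outer(x, x)
    assert np.allclose(np.diag(X), 1.0)      # diag(X) = e
    return float(np.sum(Q * X)), float(x @ Q @ x)
```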
To obtain additive ϵ-convergence using the algorithm found in [75], the error parameter would have to be set to ε̃ = ϵ/∥C∥_{ℓ1}.
It can then be seen that our QIPM scales better than the classical MWU in both n and ϵ^{-1} in this case, but depends linearly on κ. However, one could solve the QUBO problem to ϵ-optimality using the IPM from Jiang et al. [22] in time O(n^{ω+0.5} log(n/ϵ)), or even our classical IF-IPM with overall running time O_{n,κ,1/ϵ}(n^{4.5}κ). One potential use case of our quantum and classical IF-IPMs is as a limited-precision oracle in an iterative refinement method. Iterative refinement (IR) methods solve optimization problems to extended precision by using (possibly fixed) low-precision calls to an optimization subroutine (such as an IPM) to solve a sequence of simpler problems [52-54]. In this setting, low-precision solutions are completely acceptable, and since both the classical and quantum IF-IPMs we propose report the solution classically, the IF-QIPM would seem to be the preferable choice of subroutine.
However, the behavior of κ as we traverse the central path towards optimality poses a significant hurdle to the performance of QIPMs. This is highlighted in detail next.

The relationship between κ and optimality in IPMs
Noting the presence of the condition number κ in our running time results for the QIPMs, it is of crucial importance to determine whether bounds on κ can be established. This is of particular importance as the coefficient matrices M_NT change at each iteration of the QIPMs. It is well understood that, without any numerical safeguards, the condition number κ will in theory tend to ∞ as we approach the optimal set of the optimization problem at hand. In what follows, we highlight how the condition number κ is related to the optimality gap ϵ, and also note that ξ, the precision of the quantum linear solver, must also be of order ϵ or less.
To keep our discussion simple, let us first consider the case of LO. Following Roos, Terlaky and Vial [76], let

B = {i : x_i > 0 for some optimal solution},  N = {i : s_i > 0 for some optimal solution},

where x and s denote the diagonals of X and S. Then (B, N) forms the so-called optimal partition of the LO case of (1). It is known that B and N are disjoint and together cover the full index set. Further, letting SP* denote the primal-dual optimal set, we define the quantities

σ^x_SP = min_{i∈B} max_{(x,y,s)∈SP*} x_i,   σ^s_SP = min_{i∈N} max_{(x,y,s)∈SP*} s_i.

By the boundedness of the optimal set SP*, it follows that σ^x_SP is finite if B is nonempty, and similarly σ^s_SP is finite if N is nonempty. As Roos, Terlaky and Vial [76] note, by the definition of B and N, at least one of the two sets must be nonempty. Therefore, taking σ_SP = min{σ^x_SP, σ^s_SP}, it follows that the condition number of the LO problem must be positive and finite. As we follow the central path, for the case of LO we have s_i x_i = ν for all i, and as we approach optimality the central path entries obey the bounds of the following lemma.

Lemma 16 (Lemma I.43 in [76]). For any ν > 0, the central path entries x_i(ν) and s_i(ν) are bounded below on B and N, respectively, in terms of σ_SP and n, and bounded above on the complementary index sets in terms of ν, σ_SP and n.

As in the Newton system of LO, at each iteration we need to calculate the coefficient matrix A^⊤XS^{-1}A of the Newton system in compact form (see, e.g., [76]). The condition number of this matrix will be of the order of the largest diagonal element of XS^{-1} divided by the smallest diagonal element. Thus, considering the bounds of Lemma 16, this ratio goes to infinity as ν → 0, i.e., as the QIPM approaches the optimal set. It is also known that the condition number σ of the LO problem can be as small as 2^{−L} if the problem data is integer; see, e.g., [76]. Now, let us assume, as is customary in the classical setting [76], that all of the data of the LO problem is integer. Then the binary encoding of the LO problem requires at most

L = Σ_{i,j} (1 + ⌈log₂(|a_{ij}| + 1)⌉) + Σ_i (1 + ⌈log₂(|b_i| + 1)⌉) + Σ_j (1 + ⌈log₂(|c_j| + 1)⌉)

bits.
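The blow-up of the ratio max_i(x_i/s_i)/min_i(x_i/s_i) along the central path can be illustrated with a toy model of the optimal partition; the function below and the choice σ_SP = 1 are our own illustration, not taken from [76].

```python
import numpy as np

def scaling_ratio(nu, n_B, n_N):
    """Lower-bound proxy for cond(A^T X S^{-1} A) on the LO central path:
    the ratio of the largest to smallest entry of D = X S^{-1}.
    Toy model of the partition (B, N): x_i ~ sigma for i in B and
    x_i ~ nu/sigma for i in N, with centrality x_i * s_i = nu, so the
    ratio behaves like sigma^4 / nu^2 and diverges as nu -> 0."""
    sigma = 1.0   # toy value for the problem's condition number sigma_SP
    x = np.array([sigma] * n_B + [nu / sigma] * n_N)
    s = nu / x    # centrality: x_i * s_i = nu for all i
    d = x / s
    return d.max() / d.min()
```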
It is also known that σ_SP ≥ 2^{−L}, and by [76, Theorem I.53] that if ϵ < σ_SP⁵/(4n^{1.5}), then an exact optimal solution can be identified by solving a linear system of equations. For simplicity, the bound given here is not the tightest possible; some quantities in the bound of [76, Theorem I.53] are estimated by σ_SP.
Another way to look at this is to estimate the total running time needed to reach a near-optimal primal-dual feasible solution pair with optimality gap less than ϵ. To reach this precision, the central path parameter must be reduced below ϵ, and we can then identify an exact optimal solution of the LO problem once ϵ is below the threshold of [76, Theorem I.53]. We also need ξ < ϵ < c_0. As LO is a special case of SDO, the general bounds for the condition number of the Newton linear systems (24) and (29) are bounded below by the condition number bound of LO problems. For the condition number of SDO problems, we refer to Mohammad-Nezhad and Terlaky [77]. Let B = R(X*) and N = R(S*), where R(·) denotes the range space and (X*, y*, S*) denotes a maximally complementary solution. Define n_B = dim(B) and n_N = dim(N). Then, letting Q = [Q_B, Q_T, Q_N] be orthonormal bases for B, T and N, respectively, it follows from [77] that for every primal-dual optimal solution (X, y, S) ∈ P* × D*, one can write X and S as X = Q_B U_X Q_B^⊤ and S = Q_N U_S Q_N^⊤, where U_X and U_S are symmetric n_B × n_B and n_N × n_N matrices, respectively. Then, similar to the result for linear optimization, the condition number σ is defined via Γ_B and Γ_N, where Γ_B and Γ_N denote the sets of all orthonormal bases for B and N, respectively. Finally, we arrive at the following results from [77].
Lemma 18 (Lemma 8 in [77]). Let the SDO problems (P) and (D) be given by integer data, let L denote the binary input length of the largest absolute value of the entries in b, C and A⁽ⁱ⁾ for i = 1, …, m, and let ∥·∥_F be the Frobenius norm. Then the condition number σ is bounded below by a positive quantity depending on L. This implies that σ is positive for SDO as well; consequently, just as for LO, the condition number κ of the Newton system diverges to ∞ as we approach optimality.

Conclusion
In this work we presented two provably convergent quantum interior point methods for semidefinite optimization problems, building on recent advances in quantum linear system solvers.
The quantization of classical interior point methods is the subject of several recent papers in the literature. We compare the theoretical performance of classical and quantum interior point methods with respect to various input parameters, concluding that our Inexact-Feasible QIPM provides a speedup in terms of the size of the problem. Yet, due to the dependence on the condition number of the QLSA, as well as a 1/ϵ dependence picked up from repeated tomography, it is not faster than the classical small-neighborhood IPM in the worst case. The dependence on the condition number in particular presents a significant obstacle toward a theoretical quantum speedup. In practice, the QIPM may still have an advantage if low-precision solutions are acceptable.

Proof. The proof follows the same argument as in the proof of Proposition 19.
The construction described in Proposition 21 is only one of the possible ways to obtain a block-encoding of the Newton linear system corresponding to the Nesterov-Todd direction. We choose this specific decomposition because, among the ones that we analyzed, it minimizes the amount of classical work while keeping the quantum gate complexity unchanged. We can then use this factorization to solve the Newton system.

Theorem 7. Let P = W^{-1/2}, where W is the Nesterov-Todd scaling matrix. There is a quantum algorithm that, given the state encoding the right-hand side and access to QRAM data structures encoding A_s, P, P^{-⊤}, PX, and P^{-⊤}S, outputs a state ξ-close to |∆X • ∆y • ∆S⟩ in time O_{n,κ,1/ξ}(κ max{∥M_1∥_F, ∥M_2∥_F, ∥M_3∥_F}), using the NT direction. We can also output an estimate of ∥∆X • ∆y • ∆S∥ with relative error δ by increasing the running time by a factor of 1/δ.

Proof. This is a direct consequence of Proposition 21 and Theorem 1.

A.1.1 Implementing the HKM linear system
We now analyze the version of our II-QIPM that uses the HKM direction. As before, we could implement a (2κ_S^{1/2}, O(1), ξ_2)-block-encoding of S^{1/2}XS^{1/2} in time O(∥S∥_F κ_S) using Proposition 1 and [56, Lemmas 9 and 10], but it is more efficient to perform these computations classically; see the discussion in the previous subsection.
Having established a compact factorization of the Newton linear system corresponding to the HKM direction, similar to our work for the NT direction, we may now use the factorization to solve the Newton system. The following two results establish the validity of our factorization and bound the running time for solving this system in a quantum setting, respectively.

Proposition 23. Using the HKM direction P = S^{1/2}, there is a quantum algorithm which, given the state encoding the right-hand side and access to QRAM data structures encoding A_s, P, P^{-⊤}, PX, and P^{-⊤}S, outputs a state ξ-close to |∆X • ∆y • ∆S⟩ in time O_{n,κ,1/ξ}(κ(∥A∥_F + ∥S∥_F)). We can also output an estimate of ∥∆X • ∆y • ∆S∥ with relative error δ by increasing the running time by a factor of 1/δ.

Proof. Observe that adding (i.e., taking a linear combination of) M_1, M_2 and M_3 yields the Newton system coefficient matrix. The result then follows from Theorem 7.

A.1.2 Implementing the AHO linear system
We now analyze the version of our II-QIPM that uses the AHO direction.

B Full proofs of useful results from the literature
We begin with a proof of Theorem 6.

B.1 Proof of Lemma 19
Following the framework presented in [47], we make use of quantities that originally appeared in [39]. Choosing P = S^{1/2} (i.e., the HKM direction), we define the quantities used in the analysis below.

Proof. See Lemmas 3.1 and 3.3 in [39].
As a consequence of (79), we have the desired inequality, and thus (78a) is proved.