Loss-tolerant architecture for quantum computing with quantum emitters

We develop an architecture for measurement-based quantum computing using photonic quantum emitters. The architecture exploits spin-photon entanglement as resource states and standard Bell measurements of photons for fusing them into a large spin-qubit cluster state. The scheme is tailored to emitters with limited memory capabilities since it only uses an initial non-adaptive (ballistic) fusion process to construct a fully percolated graph state of multiple emitters. By exploring various geometrical constructions for fusing entangled photons from deterministic emitters, we improve the photon loss tolerance significantly compared to similar all-photonic schemes.


Introduction
Measurement-based quantum computing requires the generation of large graph states (cluster states) followed by measurements on them [1,2,3]. This approach is particularly promising for photonic systems since quantum operations can be implemented with linear optics and photon detectors only. The required photonic cluster states can be created from small resource states by connecting them through so-called fusion processes [4,5,6]. This approach, however, has several challenges: first, fusions are probabilistic and consume photonic qubits [7]. Furthermore, photons travel at immense speed, which necessitates long delay lines to implement conditional feedback operations [8]. Most critically, however, photons are easily lost, which puts stringent bounds on the required photon efficiency. Recent schemes require efficiencies > 97% [5,9], which is above the typical values of photonic platforms [10,11,12,13]. Furthermore, schemes using few-photon resource states require boosting [14,15] the fusion success probability with ancillary photons [5,6,9], which further complicates the experimental implementation. Improved architectures have been developed based on larger initial resource states [9], but even few-photon states can only be made with a low probability using the typically employed parametric down-conversion sources [16,10,11]. New approaches are thus required to make the generation of large-scale cluster states experimentally feasible.
Fortunately, there are promising new methods to generate large resource states [17,18]. In particular, quantum emitters, such as quantum dots, even enable generating them in a deterministic and thus scalable way [19,20,21,22,23]. At the same time, these emitters have high photon efficiencies on-chip [24,25] and end-to-end [12,13]. Indeed, the largest photonic resource states that have ever been generated are GHZ states made with a quantum emitter [26]. Several resource states that can be generated with a single emitter are shown in Fig. 1(a). Fig. 1(b) illustrates a lattice on which photonic resource states are geometrically arranged and fused. Besides purely photonic resource states, quantum emitters can also generate states where a stationary spin is entangled with the photons [27,28,29]. Here, we exploit these properties to construct an architecture which uses star-shaped resource states (locally equivalent to GHZ states) with a central spin qubit. The spin is entangled with several photonic leaf qubits, where the connections represent the entanglement properties (see Fig. 1(c)).
From these resource states, we generate a large spin cluster state via rotated type-II fusions (Bell measurements on the photons) [30]. We propose an implementation with semiconductor quantum dots [31,32], but the scheme can also be applied to atoms [26] or color centers [33]. By using spin qubits as the building blocks of the cluster state, we remove the need to implement feedback on flying qubits and avoid unheralded loss of the qubits in the final cluster state; the latter poses a challenge to purely photonic approaches [34]. The proposed hybrid approach combines the advantages of spin-based and photon-based platforms: spins in quantum dots are excellent photon emitters with coherence times [35,36] much longer than qubit initialization, readout, and manipulation [37,38,39], but so far no clear strategy existed for how to scale these systems. In our proposal, the photons provide a fast link between the static spins, enabling full-scale quantum computing.
Previous proposals for generating large spin-spin entanglement use a repeat-until-success strategy [40,41,42] that creates an immense overhead in the number of qubits [40] and requires long coherence times of the qubits. This overhead can be circumvented to some extent with more than one qubit per emitter, one for storage and one for generating the entanglement [43,44,45], but this introduces additional experimental complexity. Our cluster-state generation does not use repeat-until-success, and all fusions are performed in one shot (ballistically), enabling a high overall clock speed. In contrast to previously proposed ballistic schemes [5,6], our architecture can operate loss-tolerantly without boosting [14,15] the fusion success probability with ancillary photons. It only requires rotated type-II fusions [30,46] where success, failure, and photon loss are heralded by the detection pattern. All these features keep the experimental overhead low and make our approach particularly feasible.
To optimize the tolerance to photon loss, we explore several lattices on which star-shaped resource states are arranged and fused. Since there are no locality constraints for entanglement generated by Bell measurements on photons, we consider lattices in several dimensions and search for ideal lattices with a discrete optimization algorithm.

Figure 1: (a) Resource states that can be generated with a single quantum emitter [19,21]. (b) Several star-shaped photonic resource states are arranged on a simple cubic lattice. Fusions of the resource states are performed by using the photons on the leaves of the resource states. Sufficiently many successful fusions generate a large connected cluster state. (c) Several star-shaped resource states with a quantum emitter spin as the central qubit can be fused into a distributed spin graph state. (d) As the fusions succeed with a finite probability, parts of the desired graph state are missing. The state can be renormalized into a cluster state with a well-defined lattice (here a square lattice) via Y and Z measurements [47].

Building lattices by fusing resource states
Our approach starts by arranging resource states on a fusion lattice as illustrated in Fig. 1(c). Here the resource state is represented as a star-shaped graph with the central spin in the middle and photons as the leaves of the star. It is defined by

∏_{j=1}^{N} C_sj |+⟩_s |+⟩^⊗N,

where the first qubit is a spin (s) and the N other qubits are photons. The operator C_sj represents a controlled-Z gate between the spin and photon number j (see Appendix 8.1 for details). The central spin qubit of the star-shaped state will be part of the final cluster state, and the photons on the leaves are used to perform fusions with other resource states. The fusion that we consider succeeds with a probability of p_s = 0.5 [30,46], in which case a connection between two central qubits is established [5] (see Appendix 8.2 for the required setup, known as rotated type-II fusion). When the fusion fails, there is no connection and the leaves are erased. When enough fusions succeed, a large connected entangled state is created, which is a resource for measurement-based quantum computing [1,4]. The required success probability of the fusions can be quantified by a so-called percolation threshold [4,5,6]: when the success probability of a fusion is below the bond-percolation threshold of the lattice, the generated graph state consists of many small pieces and is useless for quantum computing (this applies, for instance, to the two-dimensional honeycomb lattice [48]). Above the percolation threshold, a cluster state with a large connected component spanning the entire lattice is generated. Such a graph state can then be renormalized into a lattice [47] by local Pauli measurements [49,50], as illustrated in Fig. 1(d). From here quantum computation can be performed by measurements on the generated graph [1].
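The graph-level effect of such a fusion can be illustrated with a small sketch (plain Python; the adjacency-dictionary representation and the `fuse` helper are our own illustration, not the paper's implementation). On success, the two central qubits inherit a connection; in both cases the leaf photons are consumed:

```python
def fuse(graph, a, b, success):
    """Consume leaf qubits a and b; on success, connect their
    former neighbors (here: the two resource-state centers)."""
    na = graph.pop(a) - {b}
    nb = graph.pop(b) - {a}
    for v in graph:
        graph[v] -= {a, b}
    if success:
        for u in na:
            for w in nb:
                if u != w:
                    graph[u].add(w)
                    graph[w].add(u)
    return graph

# Two star-shaped resource states: centers s1, s2, each with leaf photons.
g = {"s1": {"a", "p1"}, "a": {"s1"}, "p1": {"s1"},
     "s2": {"b", "p2"}, "b": {"s2"}, "p2": {"s2"}}
fuse(g, "a", "b", success=True)   # s1 and s2 are now connected
```

This simplified rule only covers leaf qubits with a single neighbor each; the general rotated-fusion graph transformation is discussed in Appendix 8.2.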
However, photon losses make it challenging to generate a large-scale percolated cluster state as they lead to a mixed quantum state [50]. To retain a pure state, the neighborhood of every lost qubit is removed from the graph state by measurements in the Z-basis [5]. When fusion photons are lost, the fusion heralds the loss but not which of the two fusion qubits was lost. Therefore, the neighborhoods of both fusion qubits are removed from the graph state [5] (see Fig. 2(a)).
In our model, every edge of the fusion lattice has two fusion photons, and a loss thus occurs with probability 1 − (1 − p_loss)^2 per edge, where p_loss is the probability that an individual photon is lost (p_loss is assumed to be the same for all photons). In contrast, fusion success occurs with probability (1 − p_loss)^2 · p_s and fusion failure with (1 − p_loss)^2 · (1 − p_s) (both without loss). With this model, we compute percolation thresholds for photon loss: only when the photon efficiency η = 1 − p_loss is above the percolation threshold λ_c^η is a percolated graph state with a large connected component created.
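This per-edge model translates directly into a sampling routine; the following is a minimal sketch (plain Python, function names are ours), together with a Monte Carlo check that the three heralded outcomes occur with the stated probabilities:

```python
import random

def sample_edge(p_loss, p_s=0.5, rng=random):
    """Sample the heralded outcome of one fusion edge carrying two photons."""
    eta = 1.0 - p_loss                # single-photon efficiency
    if rng.random() > eta * eta:      # at least one fusion photon lost
        return "loss"
    if rng.random() < p_s:            # both photons detected: fusion succeeds
        return "success"
    return "failure"

random.seed(1)
n = 100_000
counts = {"loss": 0, "success": 0, "failure": 0}
for _ in range(n):
    counts[sample_edge(p_loss=0.05)] += 1
```

For p_loss = 0.05 the model predicts a loss fraction of 1 − 0.95^2 = 0.0975 and a success fraction of 0.95^2 · 0.5 ≈ 0.451, which the sampled frequencies reproduce.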
Our simulations generally consist of three steps: we first build a lattice defining the used resource states and the performed fusions, we then apply the described model for fusion failure and photon loss, and finally we evaluate if the resulting graph state percolates the lattice (if it is a spanning cluster).
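A toy end-to-end version of these three steps on a two-dimensional square fusion lattice (a stand-in for the multi-dimensional lattices studied below; the names and the free-boundary, left-to-right spanning criterion are our simplifications):

```python
import random
from collections import deque

def percolates(L, p_loss, p_s=0.5, rng=random):
    """(1) Build an L x L square fusion lattice, (2) sample fusion
    failure and photon loss per edge, (3) test for a spanning cluster."""
    nodes = [(x, y) for x in range(L) for y in range(L)]
    edges = [((x, y), (x + 1, y)) for x in range(L - 1) for y in range(L)]
    edges += [((x, y), (x, y + 1)) for x in range(L) for y in range(L - 1)]
    eta = 1.0 - p_loss
    adj = {v: set() for v in nodes}
    dead = set()  # centers that must be Z-measured after a heralded loss
    for a, b in edges:
        if rng.random() > eta * eta:      # a fusion photon was lost
            dead.update((a, b))
        elif rng.random() < p_s:          # fusion succeeded: add the bond
            adj[a].add(b)
            adj[b].add(a)
    for v in dead:
        adj.pop(v, None)
    for v in adj:
        adj[v] -= dead
    # Breadth-first search from the left boundary to the right boundary.
    seen = {v for v in adj if v[0] == 0}
    queue = deque(seen)
    while queue:
        v = queue.popleft()
        if v[0] == L - 1:
            return True
        for u in adj[v] - seen:
            seen.add(u)
            queue.append(u)
    return False
```

With perfect photons and deterministic fusions the grid always spans; when every photon is lost it never does.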

Lattice construction
The loss tolerance of the constructions described above depends on the lattice on which the resource states are geometrically arranged and fused. Therefore, we consider various fusion lattices and analyze their tolerance to photon loss. Our lattice construction is based on the d-dimensional hypercubic lattice Z^d [51,52]. Every point of the lattice represents the central qubit of a star-shaped resource state, and the edges represent which fusions are performed. Fig. 1(b) corresponds to a simple cubic fusion lattice, for instance. We represent all the connections to the neighbors of a lattice point (qubit) by integer vectors ⃗z ∈ Z^d that describe the corresponding geometric differences. We consider graphs with a local neighborhood such that for every connection vector ⃗z, its maximum integer value per dimension is restricted. Initially, the maximum integer value per dimension is set to k = 1 (⃗z ∈ {0, ±1}^d \ {0}). Lattices of this type are, for instance, the hypercubic or the fcc lattice. Further lattices such as the d-dimensional brickwork representation of the diamond lattice [5,51] can be obtained by removing particular edges from these lattices. A more detailed description of all these lattices can be found in Appendix 8.3. To validate our implementation of these lattices, we compare the corresponding classical site- and bond-percolation thresholds [51,52] in Appendix 8.4.
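The connection-vector representation maps directly onto code; a short sketch (our naming) that expands a set of connection vectors into the edge list of a finite L^d patch with free boundaries:

```python
from itertools import product

def lattice_edges(L, d, vectors):
    """Link every site v of an L^d grid to v + z for each connection
    vector z (free boundaries; edges stored as unordered pairs)."""
    sites = list(product(range(L), repeat=d))
    edges = set()
    for v in sites:
        for z in vectors:
            w = tuple(a + b for a, b in zip(v, z))
            if all(0 <= c < L for c in w):
                edges.add(frozenset((v, w)))
    return sites, edges

# Simple cubic lattice (d = 3, k = 1): unit steps along each axis.
d = 3
unit_steps = [tuple(int(i == j) for i in range(d)) for j in range(d)]
sites, edges = lattice_edges(4, d, unit_steps)
```

Other lattices of the k = 1 family (fcc, bcc) follow by passing the corresponding vectors from {0, ±1}^d \ {0}.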
Note that, in practice, the construction of these high-dimensional lattices is obtained in a standard set-up by collapsing them into (3+1)-dimensional space. Such a construction is suitable for the photonic platform as the long-range links that are generated in such a collapse can be implemented with minimal loss via optical fibers.

Figure 2: (a) When a fusion photon is lost, the neighborhoods of both fusion photons are removed from the graph state by Z-measurements to retain a pure quantum state. (b) Percolation simulation for estimating the tolerance to photon loss. The simulated lattice is two-dimensional and has been optimized for a low percolation threshold. Every curve is an average over 10^3 simulations. Above a percolation threshold λ_c^η, the probability for a connection between the edges of a d-dimensional lattice of size L^d approaches unity as L → ∞. (c) Percolation thresholds λ_c^η (minimum required value for η = 1 − p_loss). The thresholds are obtained by extrapolating the simulation results of lattices with finite sizes L (see part (b)) towards infinity using the method from Ref. [51]. The considered lattices are hypercubic (hc), diamond, body-centered cubic combined with hc (bcc+hc), face-centered cubic combined with hc (fcc+hc), as well as lattices obtained by an optimization (L^a_2d, L^a_3d). The simulations are plotted for all-photonic star-shaped resource states (dashed lines) as well as resource states with a quantum emitter spin as a static (st) central qubit (solid lines).

Simulating photon loss
As a metric for the loss tolerance of a graph state construction, we use the percolation probability (the probability of a cluster spanning from one edge of the simulated lattice to the other) when fusion failures and photon losses are probabilistically applied (see Fig. 2(a)). A corresponding percolation simulation of a two-dimensional lattice is shown in Fig. 2(b). In all simulations, we consider the emitter to be the central qubit of the star-shaped resource state (see Fig. 1(c)). This type of qubit cannot be lost, in contrast to the fused photonic qubits. The percolation thresholds for several multi-dimensional lattices are shown in Fig. 2(c), and the corresponding values can be found in Appendix 8.5. For the best lattices, the percolation thresholds λ_c^η are below 0.94, showing that a photon loss probability p_loss of about 1 − λ_c^η > 6% can be tolerated. Associated numerical results for the size of the largest connected cluster component are given in Appendix 8.6.
For completeness, we also simulate the loss tolerance when using purely photonic resource states where the central qubits can also suffer unheralded loss [34], reported as dashed lines in Fig. 2(c). Note that, in contrast to our spin-based approach, the obtained loss thresholds for the all-photonic cases only provide necessary conditions for creating a useful cluster state after lattice renormalization [4,47].
For the considered lattices, Fig. 2(c) shows that the hypercubic lattices perform best in dimensions 3 and 4, whereas the diamond lattices are ideal for higher dimensions. A remarkable feature is the dependence of the percolation thresholds on the dimension. For the hypercubic (hc) lattices and the lattices fcc+hc, bcc+hc, we observe an optimum dimension where the corresponding fusion lattice has the lowest percolation threshold. For the diamond lattice, it is not obvious where the optimum is, but we expect such an optimum: with increasing dimension, the vertex degree of the corresponding lattices increases. On the one hand, a higher vertex degree is desirable because more fusion attempts lead to a higher chance of establishing connections in the cluster state. On the other hand, in the presence of photon loss, too high a vertex degree (more fusion photons) is problematic since the loss of a fusion qubit makes the central qubit useless (it must be measured in the Z-basis, see Fig. 2(a)). We further analyze this point in Appendix 8.6 in the context of the largest connected cluster state component. A dimension where the percolation threshold of a certain lattice type reaches a minimum is a feature of the fragile nature of entanglement. This differs from classical bond- and site-percolation, where the percolation thresholds always decrease when adding more bonds to the same lattice.

Lattice optimization
Fig. 2(c) illustrates that different lattices show quite different performance under photon loss. Therefore, we search for ideal lattices in a given dimension by a discrete optimization algorithm. As before, we virtually place vertices on a hypercubic lattice Z^d and represent the connections by a set E of integer vectors,

E ⊆ { ⃗z ∈ Z^d : 0 < max_i |z_i| ≤ k },

where k ∈ N^+ bounds the maximum number of steps that a connection vector ⃗z makes per dimension. The lattice representation by connection vectors allows constructing lattices with so-called complex neighborhoods [54,55], yet it is more general and flexible than the typical approach of specifying lattices by nearest-neighbor connections [54,55]. Therefore, the chosen representation is particularly suited for optimizations within a large space of geometries. We only consider lattices where all nodes have an identical neighborhood, so the presence of a vector ⃗z ∈ E implies that there is also a connection to a neighbor in the opposite direction, −⃗z ∈ E. Therefore, all lattices that we consider have an even vertex degree.
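Since −⃗z is always included together with ⃗z, it suffices to enumerate one representative per pair. A small sketch (our naming) listing the candidate vectors for given d and k:

```python
from itertools import product

def candidate_vectors(d, k):
    """One representative per +-z pair of nonzero vectors in {-k..k}^d."""
    return [
        z for z in product(range(-k, k + 1), repeat=d)
        if any(z) and z > tuple(-c for c in z)  # lexicographic pick of z vs -z
    ]

# For d = 2, k = 1 there are (3^2 - 1) / 2 = 4 candidate pairs.
```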
A flow chart of our algorithm is shown in Fig. 3(a). The algorithm starts from a lattice where every node has only two connection vectors; the remaining candidate vectors form a reservoir R. In every step, a pair of vectors ±⃗z is randomly picked from R and added to the lattice (resp. moved from R to E). If the percolation threshold λ_c^η(E) improves by adding these vectors, the algorithm adds further vectors from R. In contrast, if the percolation threshold gets worse, the algorithm removes the two most recently added vectors from E and adds another pair of vectors ±⃗z ∈ R instead. If adding any vector from R just makes the percolation threshold worse, the algorithm terminates. In this case, E likely represents a good lattice that cannot be improved by adding any other vector remaining in R.
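A deterministic, simplified rendering of this greedy loop is sketched below (plain Python). `estimate_threshold` stands in for the Monte Carlo percolation simulation of the main text; here it is replaced by a toy cost function purely so the sketch runs:

```python
def optimize_lattice(candidates, estimate_threshold):
    """Greedy sketch: grow the vector set E from a reservoir R,
    keeping a candidate pair only if it lowers the estimated
    percolation threshold; stop when nothing improves."""
    E = list(candidates[:2])          # minimal starting lattice
    R = list(candidates[2:])          # reservoir of remaining pairs
    best = estimate_threshold(E)
    improved = True
    while improved and R:
        improved = False
        for z in list(R):
            trial = estimate_threshold(E + [z])
            if trial < best:          # keep the pair and keep growing
                E.append(z)
                R.remove(z)
                best = trial
                improved = True
                break
    return E, best

# Toy cost: threshold drops with vertex degree down to a floor of 0.5.
toy = lambda E: max(0.5, 1.0 - 0.1 * len(E))
pairs = [(1, 0), (0, 1), (1, 1), (1, -1), (2, 1), (1, 2)]
E, thr = optimize_lattice(pairs, toy)
```

The paper's algorithm additionally randomizes the order in which pairs are drawn from R; the stopping rule (no remaining vector improves the threshold) is the same.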
In two and three dimensions, we find lattices that show better performance in comparison to the simple lattices studied above. These lattices (L^a_2d and L^a_3d) are shown in Fig. 3(b,c), and the vectors representing the different lattices are given in Appendix 8.7. They can already tolerate up to 6.5% photon loss when the standard type-II fusions only succeed with p_s = 0.5. The lattice L^a_2d has an interesting spider structure with some local connections plus some long legs making connections to nodes that are further away. We believe that this lattice has a high loss tolerance because it imitates properties of higher-dimensional lattices. In a high-dimensional lattice, a loss may break a connection along a certain dimension, but connections in the other dimensions find a way around it. The long connections of the spider lattice may play a similar role by bridging local interruptions caused by photon loss. In four dimensions and with a photon as the central qubit, the algorithm finds a lattice with a performance that is almost identical to the 4d hypercubic lattice. Remarkably, when the central qubit is a spin, the algorithm finds exactly the 4d hypercubic lattice (resp. an identical lattice, up to distortions).

Discussion
We have proposed a scheme for generating large spin cluster states for measurement-based quantum computing. The basic building blocks are star-shaped resource states that can be generated on demand by various types of quantum emitters with a spin [19,21], a resource that is experimentally in reach [20,26,23,22]. We fuse these resource states performing all fusions simultaneously (ballistically), allowing a fast clock rate for generating large-scale entanglement.

Figure 3: (a) Flow chart of the optimization algorithm. (b) Two-dimensional fusion lattice L^a_2d with high loss tolerance. In the abstract representation of the fusion lattice (upper part), every connection is a fusion, and every node is the center of a resource state. For one exemplary star-shaped resource state, an explicit representation of the fusions is shown (lower part). The red and blue graph structures represent the fusions connected to two qubits in the center of the plot. Every node on the two-dimensional square lattice (gray) has the same connection pattern. (c) Optimized fusion lattice L^a_3d in three dimensions. Every qubit (node) is linked to eight neighbors by fusions (edges). An exemplary pair of connection vectors ±⃗z is illustrated in the lower part.

We only use non-boosted rotated type-II fusion [30] and find fusion lattices in two and higher dimensions that can tolerate about 6−8% photon loss. This loss tolerance is an improvement by more than a factor of two compared to similar ballistic schemes [5] that are all-photonic, and by an order of magnitude compared to intrinsically fault-tolerant schemes [9,56], even though those schemes assume boosted fusion and some even employ larger resource states. The reason for the improvement is a combination of our lattice optimization and the fact that some adaptiveness is moved to the required lattice renormalization step [4,47]. Alternatively, repeat-until-success schemes can be used to further improve the loss tolerance by a divide-and-conquer strategy [40,42], which, however, creates a significant overhead since large parts of the lattice construction may have to be repeated in case of fusion failure. Furthermore, the need to do operations sequentially puts a high demand on quantum memories. Another proposal [43] applies a repeat-until-success scheme to generate entanglement between one type of quantum emitter spins, which is swapped to a second type of spins upon success. Such a highly adaptive scheme can improve the loss tolerance to arbitrary values, but increases the requirements on the emitters, since they now need to contain two spins. These schemes can therefore only be applied to certain emitters, such as NV centers [45], but are e.g. not directly applicable to quantum dots. A suitable compromise might be a minimally adaptive repeat-until-success scheme where each entanglement attempt can be repeated if it is unsuccessful. Allowing a fixed number n_r of fusion attempts would decrease the overall rate by the constant factor n_r, but would increase the fusion success probability to 1 − 1/2^{n_r} in the absence of losses (see Ref.
[53]). In contrast to such approaches, the scheme discussed in this paper optimizes the loss tolerance while at the same time keeping the requirements on the photon emitters to a minimum.
Our results can guide experimental work towards scalable quantum computing with quantum emitters and provide performance thresholds that such emitters need to meet. To further reduce the experimental requirements, we see several possible extensions of our work: (1) We only consider a certain type of lattices; more general classes of lattices could be simulated by using combinatorial tiling theory [57] or quasiperiodic tilings [58]. (2) In our approach, lattices are built from star-shaped (GHZ) states. Using more complex resource states might further improve the loss tolerance and potentially also enable the exploitation of loss- and error-tolerant subspaces [59]. However, generating such states is more involved and might require direct spin-spin gates between emitters [60,61,62]. (3) Our scheme uses quantum emitter spins as the qubits and thus will require a large number of emitters. It would be interesting to estimate this number and investigate methods to reduce it by recycling qubits, keeping only part of the cluster state active during computation [34]. (4) Here we focus on loss tolerance. Integrating the approach with techniques for quantum error correction against logical errors can yield a fully fault-tolerant architecture. In this regard, it is encouraging that fault tolerance is related to similar percolation concepts [63,64,65], facilitating its integration with the current approach. Furthermore, error correction benefits from high-dimensional structures, which also favors the developed architecture [66,67,68].

Generating the resource states
As we have pointed out in the main text, quantum emitters with a spin degree of freedom can be used to generate GHZ states or linear cluster states. This concept has been the subject of various theoretical proposals [17,19,18,69,21], and some experimental realizations have recently been achieved [20,22,23,26]. In this work, star-shaped resource states with a central spin qubit are used, and we therefore explain how such states can be created. The typical idea is to make use of a level scheme where an optical π-pulse on the emitter in the |↑⟩ state leads to a photon in state |0⟩, which is orthogonal to the photon state |1⟩ that is generated when starting with the spin state |↓⟩. The typical protocol uses polarization encoding [19,20,26,22,23] (see Fig. A4(a)), but other degrees of freedom such as time-bin encoding can be used as well [21,69] (see Fig. A4(b)). To generate a resource state, the spin is initialized in the state |+⟩_s = 1/√2 (|↑⟩ + |↓⟩)_s. Applying N π-pulses therefore leads to the following GHZ state:

1/√2 (|↑⟩_s |0⟩^⊗N + |↓⟩_s |1⟩^⊗N).    (A2)

Applying a single-qubit Hadamard gate H to all photonic qubits transforms this state into:

∏_{j=1}^{N} C_sj |+⟩_s |+⟩^⊗N,    (A3)

with |±⟩ = 1/√2 (|0⟩ ± |1⟩) and C_sj representing a controlled-Z gate between the spin and photon number j. This state is exactly a star-shaped graph state with the spin as the central qubit.
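On a few qubits, the equivalence between the Hadamard-transformed GHZ state and the star-shaped graph state can be checked by brute force. The numpy sketch below (our conventions: qubit 0 is the spin, |↑⟩ ↦ |0⟩, first tensor factor most significant) builds both sides and compares them:

```python
import numpy as np
from functools import reduce

def kron(*ops):
    return reduce(np.kron, ops)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)

N = 3  # number of photons; qubit 0 is the spin

# Spin-photon GHZ state: (|0,0..0> + |1,1..1>) / sqrt(2)
ghz = np.zeros(2 ** (N + 1))
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

# Hadamard on every photon, identity on the spin
transformed = kron(I2, *([H] * N)) @ ghz

# Star graph state: CZ between spin and each photon, acting on |+>^(N+1)
state = kron(*([plus] * (N + 1)))
for j in range(1, N + 1):
    diag = np.array([
        -1.0 if ((idx >> N) & 1) and ((idx >> (N - j)) & 1) else 1.0
        for idx in range(2 ** (N + 1))
    ])
    state = diag * state  # CZ_{s,j} as a diagonal sign flip

assert np.allclose(transformed, state)
```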
Graph states and GHZ states are stabilizer states [70,49], and creating a star-shaped resource state from a GHZ state can thus be explained by transforming stabilizers. The stabilizer generators of the GHZ state (A2) are

X_s X_1 ⋯ X_N  and  Z_s Z_j,

where the first operator acts on the spin qubit and the index j ∈ {1..N} labels the corresponding photon. Applying H^⊗N to the GHZ state transforms its stabilizers in the Heisenberg picture. This leads to new stabilizer generators of the form

X_s Z_1 ⋯ Z_N  and  Z_s X_j,

which are the stabilizers of the star-shaped resource state with a central spin qubit and N photonic leaves. Note that the state in Eq. (A2) can be transferred to a purely photonic GHZ state by measuring the spin in the X-basis. Measuring |+⟩_s projects the system into the corresponding N-photon GHZ state. When measuring |−⟩_s, an additional Z-gate needs to be applied to an arbitrary photon to retain the same photonic GHZ state. Therefore, photonic star-shaped resource states can be generated with the above method as well.
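Because conjugation by H simply exchanges the Pauli letters X and Z, this stabilizer transformation can be verified symbolically with Pauli strings (a stdlib-only sketch in our notation; position 0 is the spin):

```python
# Conjugation by H exchanges X and Z (H X H = Z, H Z H = X); I is fixed.
def conj_h(pauli):
    swap = {"I": "I", "X": "Z", "Z": "X"}
    return "".join(swap[p] for p in pauli)

N = 3
# GHZ generators (qubit 0 = spin, then N photons): X_s X_1..X_N and Z_s Z_j
ghz_gens = ["X" * (N + 1)]
ghz_gens += ["Z" + "I" * (j - 1) + "Z" + "I" * (N - j) for j in range(1, N + 1)]

# H acts on the photons only; the spin letter is untouched.
star_gens = [g[0] + conj_h(g[1:]) for g in ghz_gens]

# Star-graph generators: X_s Z_1..Z_N (center) and Z_s X_j (leaves)
expected = ["X" + "Z" * N]
expected += ["Z" + "I" * (j - 1) + "X" + "I" * (N - j) for j in range(1, N + 1)]
assert star_gens == expected
```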
Figure A4: (a) Pulse sequence to generate a linear cluster state with polarization encoding [19]. By leaving out the second π/2-pulse (light green), one creates a GHZ state. When starting with spin |↑⟩, a photon with σ+ polarization is emitted. When starting with spin |↓⟩, a photon with σ− polarization is emitted. (b) Corresponding pulse sequence for generating a linear cluster state or GHZ state with time-bin encoding [21]. When starting in the state |↑⟩, a photon |e⟩ is emitted in the early time-bin. When starting in the state |↓⟩, no photon is generated in the early time-bin, but rather in the late time-bin |l⟩ after the spin has been flipped.

Fusion circuits
In this section, we describe the fusion operations that we assume for fusing star-shaped resource states (such as the two states in Fig. A5(a)). The process is known as type-II fusion [30] which, upon success, corresponds to two simultaneous parity measurements involving the detection of two photons. For pedagogical reasons, we first describe two fusion operations that may be useful for other applications [72], yet do not correspond to the fusion model in Figs. 1(b,c). We then give a detailed explanation of a rotated Bell measurement that has exactly the desired properties [5]. The fusion circuit in Fig. A5(b) is a Bell measurement where the states |Ψ±⟩ are heralded by specific detection patterns. Sending, for instance, the state |Ψ+⟩ to the depicted fusion circuit results in the detection patterns 13 or 24, which are only measured for this particular Bell basis state (here, e.g., 13 corresponds to clicks in detectors 1 and 3). |Ψ+⟩ and |Ψ−⟩ are the joint eigenstates of −Z_A Z_B and ±X_A X_B with eigenvalue +1 (these operators are known as the stabilizers of the states). A successful fusion operation corresponds to a projection onto |Ψ+⟩ or |Ψ−⟩ and thus is a simultaneous measurement of the stabilizers −Z_A Z_B and ±X_A X_B. The states |Φ+⟩ and |Φ−⟩ cannot be distinguished by the circuit as they both result in the same detection patterns 11, 22, 33, or 44. Their detection pattern corresponds to an independent measurement of both qubits in the Z-basis. Failure of the fusion operation thus corresponds to the measurement of the operators Z_A and Z_B. Simultaneously measuring Z_A Z_B and X_A X_B connects graph components that were detached before. However, it does not connect star-shaped resource states in the desired way, since it produces the graph state illustrated in Fig. A5(b), which is not locally equivalent to the desired graph in Fig.
A5(d) [73,74]. (Note that the derivation for the corresponding graph transformation is very similar to the derivation of the rotated fusion that we will discuss in the course of this section.) The effect of the fusion circuit can be modified by adding single-qubit gates before the Bell measurement [72,5]. When applying such a gate U, the stabilizers S_i of the graph state get rotated in the Heisenberg picture: S_i → U S_i U†. To understand the effect on the graph state, one can analyze the effect of the Bell measurement on the rotated stabilizers. Alternatively, one can rotate the Bell measurement as well as the stabilizers by applying U† from the left and U from the right. This operation leaves the stabilizers the same (U† U S_i U† U = S_i) and rotates the Bell measurement: when U acts on qubit A, the measured operators X_A X_B and Z_A Z_B are transformed into (U† X_A U) X_B and (U† Z_A U) Z_B. A typical modification is rotating the Bell measurement by adding a single Hadamard gate before one of the two qubits [75,72]. Upon fusion success, the operators X_A Z_B and Z_A X_B are then measured, which corresponds to the desired fusion operation on the graph state [75,72]

Figure A5: (a) The four Bell basis states are stabilized by the operators ±XX and ±ZZ. We consider the effect of fusing the qubits A and B in the graph that is shown on the right, using three different circuits. (b) Upon success (s), the depicted circuit (left half) measures the Bell states |Ψ±⟩ and therefore the corresponding stabilizers. The Bell states |Φ±⟩ result in the same detection patterns and therefore cannot be distinguished (fusion failure). In the failure case (f), the circuit measures both qubits in the Z-basis. Fusion success and failure transform the graph state as illustrated on the right. To obtain the displayed graph state, the gate H (Hadamard) needs to be applied after the fusion on the qubit highlighted in red. (c) Rotated fusion with the gate H applied to the first qubit before the fusion. (d) Modified fusion that can be used to merge star-shaped resource states into cluster states. The phase gate R = diag(1, i) is applied post-fusion to one qubit (red) to obtain the shown graph state.
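The stabilizer statements about the Bell states can be confirmed with a few lines of numpy (a minimal check, not part of the original derivation): |Ψ±⟩ are +1 eigenstates of −Z_A Z_B, while X_A X_B distinguishes them with eigenvalues ±1.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

psi_plus = np.array([0, 1, 1, 0]) / np.sqrt(2)    # |01> + |10>
psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)  # |01> - |10>

ZZ = np.kron(Z, Z)
XX = np.kron(X, X)

assert np.allclose(-ZZ @ psi_plus, psi_plus)      # -Z_A Z_B: eigenvalue +1
assert np.allclose(-ZZ @ psi_minus, psi_minus)    # -Z_A Z_B: eigenvalue +1
assert np.allclose(XX @ psi_plus, psi_plus)       # X_A X_B: +1 for |Psi+>
assert np.allclose(XX @ psi_minus, -psi_minus)    # X_A X_B: -1 for |Psi->
```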
(see Fig. A5(c)). Upon failure, however, it corresponds to measuring qubit A in the X- and qubit B in the Z-basis. The X-measurement of one fusion qubit also measures the central qubit of the corresponding star-shaped state in the Z-basis [50]. This Z-measurement destroys one star-shaped resource state, as illustrated in Fig. A5(c). For this reason, the corresponding fusion circuit is not suited for our purposes.

Figure: With this definition, the hypercubic lattice corresponds to n = 1, the fcc lattice corresponds to n = 2, and the bcc lattice corresponds to n = d. For all these cases, an exemplary connection vector ⃗z is given in red.
In this representation, the hypercubic lattice corresponds to every node being connected to its first nearest neighbor, while the fcc lattice corresponds to every node being connected to its second nearest neighbor.
With this method, we construct all lattices from Ref. [51] except the Kagome lattice, as well as lattices with complex neighborhoods (see e.g. Refs. [52,81,82]). In our simulations, we consider the hypercubic, the bcc, and the fcc lattice as well as combinations of them. In three dimensions, for instance, the combination of hypercubic (simple cubic) and fcc lattice leads to the lattice NN+2NN [52].
Except for the diamond lattice, all the lattices that we consider can be constructed with the above method. A graph with the structure of the generalized diamond lattice can be created by removing edges from the hypercubic lattice [51]. In this construction, all hypercubic edges pointing in the first dimension (|z_1| = 1) stay untouched. Every second edge with a connection vector stepping in any of the other dimensions is removed. In particular, when ⃗z connects the vertices V_A and V_B with V_A having the smaller coordinate in the dimension l (V_A,l < V_B,l, |z_l| = 1, l > 1), the corresponding edge e = (V_A, V_B) is removed. (The d-dimensional bcc lattice alone consists of 2^{d−1} mutually unconnected partitions, and the fcc lattice alone has d(d − 1) mutually unconnected components. Combining these lattices with the hypercubic lattice makes them fully connected.)

Classical percolation simulations
In our simulation, we first build a certain lattice, second, apply a particular percolation model, and third, analyze the resulting graph structure. To verify that our implementation of various lattices in the first step is correct, we perform classical bond- and site-percolation simulations of these lattices and compare the resulting values to the literature [51,52]. The site-percolation thresholds, λ_c^site, resulting from these simulations are shown in the upper part of Table A1. The simulated lattices are mainly the lattices described in section 8.3. Additionally, we simulate the lattices 2NN+3NN and NN+2NN+3NN [52] (lattices with complex neighborhoods) for which we find λ_c^site = 0.1039(6) and λ_c^site = 0.0978(1), respectively (in reasonable agreement with the literature values 0.1036(1) and 0.0976(1) [52]).
To compensate for finite size effects in our simulations, we use the approach from Ref. [51], but more elaborate methods that do not rely on cluster spanning could be used instead [81]. Most percolation thresholds agree very well with the literature values. The only exceptions are the six-dimensional bcc and diamond lattices. The deviation by a few standard deviations might be caused by an underestimation of the systematic error (by us and/or the literature [51]) when using the method from Ref. [51] to compensate for finite size effects.

Table A1: Results of classical site-percolation (λ_c^site, upper part) and bond-percolation (λ_c^bond, lower part) simulations of various lattices together with corresponding literature values. Note that some of the lattices are identical; for instance, hc and bcc are the same in two dimensions. (a: Reviewed site-percolation threshold for the NN+3NN lattice, private communication, K. Malarz, 2023.)
The lower part of Table A1 shows corresponding bond-percolation simulations. In the case of no photon loss, the resulting percolation thresholds, λ_c^bond, specify the fusion success probability that is minimally required for creating a large percolating cluster state [56]. Also for the bond-percolation thresholds, we find good agreement with the literature values.
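The core of such a bond-percolation simulation can be sketched as follows. This is a minimal union-find version on a periodic simple-cubic lattice; it only measures the relative size of the largest cluster and does not implement the spanning-cluster analysis and finite-size extrapolation of Ref. [51].

```python
import random
from itertools import product

def sc_edges(L, d):
    """Nearest-neighbour bonds of a periodic simple (hyper)cubic L^d lattice."""
    edges = []
    for site in product(range(L), repeat=d):
        for axis in range(d):
            nb = list(site)
            nb[axis] = (nb[axis] + 1) % L
            edges.append((site, tuple(nb)))
    return edges

def largest_cluster_fraction(L, d, p_bond, rng):
    """Keep each bond independently with probability p_bond and return the
    relative size of the largest connected cluster (union-find)."""
    parent = {s: s for s in product(range(L), repeat=d)}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in sc_edges(L, d):
        if rng.random() < p_bond:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb

    sizes = {}
    for s in parent:
        r = find(s)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / L**d

rng = random.Random(1)
# Far below / above the 3d simple-cubic bond threshold (~0.2488),
# the largest cluster is tiny / near-spanning.
low = largest_cluster_fraction(12, 3, 0.05, rng)
high = largest_cluster_fraction(12, 3, 0.50, rng)
```

Sweeping p_bond and locating where the largest-cluster fraction jumps gives a rough threshold estimate consistent with the literature value for the simple-cubic lattice.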

Percolation thresholds
In Fig. 2 of the main text, we plot percolation thresholds for an architecture where cluster states are built from star-shaped resource states. In Table A2, we give the corresponding numerical values for the case that the central qubit of the star-shaped resource states is either a spin or a photon. The percolation thresholds for the purely photonic architecture are higher compared to the case of a central spin qubit because of unheralded photon loss. When unheralded loss is present, the corresponding percolation thresholds represent only a necessary condition for quantum computing after lattice renormalization, because unheralded loss cannot be foreseen during the renormalization/path-finding [90].

Size of the largest connected component
In the main text, we have presented percolation thresholds that specify the loss tolerance of different fusion lattices. When the efficiency η = 1 − p_loss is above this threshold, the created graph state has a large connected component that grows linearly with the system size. From a practical point of view, it is interesting to know the size of the largest connected component of the graph. We simulate the largest connected component size as a function of η using periodic boundary conditions (to reduce finite size effects). The results of these simulations are presented in Fig. A7. When a precise number for η is known in an experiment, such simulations show with which fusion lattice the largest connected graph state can be obtained. A smaller percolation threshold does not necessarily mean that the lattice performs better for all values of η. An example is the four-dimensional diamond lattice, which has a slightly higher percolation threshold than the four-dimensional hypercubic lattice, yet performs better in terms of component size in a certain range of η.
An interesting point is that the size of the largest connected component decreases linearly for very low losses. The slope of this decrease is directly connected to the vertex degree d_v of the lattice. Consider a fusion lattice made of star-shaped resource states with a central spin qubit in a quantum emitter. If |V| is the number of spins, then the number of fusion photons is d_v·|V|. On average, the number of lost photons is then d_v·|V|·p_loss. Every lost photon on a fusion edge E will make both spins connected to E useless (as they are measured in the Z-basis). For very low photon losses, there are practically no spin qubits that are affected by two photon losses simultaneously, and the overall number of active spin qubits in the graph thus becomes |V|·(1 − 2·d_v·p_loss). The slope with which the relative size of the largest connected component decreases at low losses is, therefore, −2·d_v·p_loss. We have illustrated this slope for the six-dimensional hypercubic lattice and the six-dimensional diamond lattice in Fig. A7(a). The six-dimensional hypercubic lattice has a vertex degree of d_v = 12 and the corresponding slope matches the simulation well. For the diamond lattice, the match is less good, which we attribute to its lower vertex degree of d_v = 8 in combination with the finite fusion success probability. Fusion failure leads to missing bonds, and the size of the largest connected component can therefore be smaller than the number of active spins (some spins may be unconnected even for zero loss).
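The counting argument above can be checked numerically. In the toy estimate below, each spin's 2·d_v fusion photons are sampled independently; this ignores the correlation between neighbouring spins that share an edge, but by linearity of expectation it reproduces the mean fraction of active spins, which approaches 1 − 2·d_v·p_loss for small p_loss. The function name and sample size are illustrative.

```python
import random

def active_spin_fraction(dv, p_loss, n_spins=200_000, rng=random):
    """Monte-Carlo estimate of the fraction of spins untouched by photon
    loss: a spin with vertex degree dv stays active iff none of the
    2*dv photons on its dv fusion edges is lost."""
    active = sum(
        all(rng.random() > p_loss for _ in range(2 * dv))
        for _ in range(n_spins)
    )
    return active / n_spins

# Example: d_v = 12 (six-dimensional hypercubic lattice) at very low loss,
# where the exact value (1 - p_loss)^(2*dv) is close to 1 - 2*dv*p_loss.
frac = active_spin_fraction(dv=12, p_loss=0.002, rng=random.Random(0))
```

At larger p_loss the exact expression (1 − p_loss)^(2·d_v) bends above the linear law, which is the first of the two slope effects discussed below.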
When the photon loss increases, the slope of the largest connected component as a function of loss can change for two reasons. (1) For more losses, the chance that the same spin qubit is affected by two or more different photon losses increases. Consequently, fewer spin qubits are affected and the slope bends towards less negative values (compared to −2·d_v·p_loss), lowering the percolation threshold. This effect can be seen for lattices with high vertex degrees such as the five-dimensional fcc+hc lattice. The high vertex degree increases the chance that a qubit is affected by two losses. (2) When a lattice is already heavily affected by photon loss, both loss and the finite fusion probability may cause a large component to be split into two pieces. In this case, the size of the largest component rapidly decreases, which causes the slope to become more negative (compared to −2·d_v·p_loss), making the percolation threshold higher. This effect can be seen for the three-dimensional lattices in Fig. A7.

Table A2: Percolation thresholds λ_c^η for the loss tolerance of fusion lattices in several dimensions (with rotated type-II fusion and star-shaped resource states). In the upper part, the central qubit of every star-shaped resource state is a spin that cannot suffer unheralded loss. In the lower part, the star-shaped resource states are purely photonic and the central qubits can suffer unheralded loss. For the two-dimensional diamond brickwork lattice (honeycomb), no percolation is possible since the bond-percolation threshold of the lattice is above the fusion probability of 0.5 [30]. The two-dimensional simple cubic (square) lattice has a bond-percolation threshold of exactly 0.5, and η = 1.0 is therefore required.

Results of lattice search
We have performed a search for particularly loss-tolerant fusion lattices with a greedy optimization described in the main text. Every lattice is represented by a set of connection vectors {⃗z^(j)} that specify connections between lattice points Z^d. Note that the presence of a connection vector ⃗z^(j) means that there is also a connection along the vector −⃗z^(j), since we only consider lattices where all nodes are equivalent. So if we give, say, 4 different vectors ⃗z^(j), then every node is connected to 8 neighbors. The results of this search in dimensions two, three, and four are given in Table A3 together with the corresponding percolation thresholds. First, we consider the case that the central qubits of the resource states are spins. In two dimensions, we allow connection vectors with bounded entries |⃗z^(j)_i|.

Table A3: Fusion lattices with particularly high tolerance to photon loss and the corresponding percolation thresholds. For the upper part of the table, the central qubit of each star-shaped resource state is a spin. For the lower part, the resource states are purely photonic.

Figure 2: (a) Loss on a fusion lattice where star-shaped resource states are geometrically arranged and fused. When a fusion photon is lost, the neighborhoods of both fusion photons are removed from the graph state by Z-measurements to retain a pure quantum state. (b) Percolation simulation for estimating the tolerance to photon loss. The simulated lattice is two-dimensional and has been optimized for a low percolation threshold. Every curve is an average over 10^3 simulations. Above a percolation threshold λ_c^η, the probability for a connection between the edges of a d-dimensional lattice of size L^d approaches unity as L → ∞. (c) Percolation thresholds λ_c^η (minimum required value for η = 1 − p_loss). The thresholds are obtained by extrapolating the simulation results of lattices with finite sizes L (see part (b)) towards infinity using the method from Ref. [51]. The considered lattices are hypercubic (hc), diamond, body-centered cubic combined with hc (bcc+hc), face-centered cubic combined with hc (fcc+hc), as well as lattices obtained by an optimization (L^a_2d, L^a_3d). The simulations are plotted for all-photonic star-shaped resource states (dashed lines) as well as resource states with a quantum emitter spin as a static (st) central qubit (solid lines).

Figure 3: (a) Flow chart of the algorithm used to optimize the loss tolerance of fusion lattices. The lattice is represented by a set of nodes on Z^d together with connection vectors between the nodes specified by the set E. The algorithm adds pairs of connection vectors ±⃗z ∈ R to E, trying to improve the percolation threshold λ_c(E). (b) Two-dimensional fusion lattice L^a_2d with high loss tolerance. In the abstract representation of the fusion lattice (upper part), every connection is a fusion, and every node is the center of a resource state. For one exemplary star-shaped resource state, an explicit representation of the fusions is shown (lower part). The red and blue graph structures represent the fusions connected to two qubits in the center of the plot. Every node on the two-dimensional square lattice (gray) has the same connection pattern. (c) Optimized fusion lattice L^a_3d in three dimensions. Every qubit (node) is linked to eight neighbors by fusions (edges). An exemplary pair of connection vectors ±⃗z is illustrated in the lower part.

Figure A6: Construction of different lattices by adding connection vectors to the lattice Z^d. The illustration shows three-dimensional lattices, but the construction works in any dimension. In the illustration, every connection vector ⃗z fulfills |z_i| ≤ k = 1 for all its elements, i.e. ⃗z ∈ {0, ±1}^d. At the same time, the number of dimensions in which the connection vector is non-zero is fixed to a specific value n = Σ_{i=1}^{d} |z_i|. With this definition, the hypercubic lattice corresponds to n = 1, the fcc lattice corresponds to n = 2, and the bcc lattice corresponds to n = d. For all these cases, an exemplary connection vector ⃗z is given in red.

In two dimensions, we find the lattices L^a_2d, L^b_2d, L^c_2d (with L^a_2d being shown in Fig. 3(b) of the main text). In three dimensions, we allow all connection vectors with |⃗z^(j)_i| ≤ 1 and we find the lattice L^a_3d, which is shown in Fig. 3(c) of the main text. Allowing |⃗z^(j)_i| ≤ 2 leads to the slightly better lattice L^b_3d. The optimized four-dimensional lattice (with |⃗z^(j)_i| ≤ 1) is equivalent to the hypercubic lattice up to distortions. Finally, we also consider purely photonic resource states. The results of these simulations are shown in the lower part of Table A3.
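The greedy optimization of Fig. 3(a) can be sketched as follows. The threshold estimator is passed in as a black box (in practice it would run the percolation simulation described above); the function signature, the candidate bound k, and the stopping rule are our own illustrative choices, not the exact algorithm implementation of the paper.

```python
from itertools import product

def greedy_lattice_search(d, k, estimate_threshold, max_vectors=6):
    """Greedy search for a loss-tolerant fusion lattice: repeatedly add
    the pair of connection vectors +/-z (entries bounded by k) that most
    lowers the estimated percolation threshold; stop when no candidate
    improves it. `estimate_threshold` maps a list of connection vectors
    to a percolation-threshold estimate."""
    # One representative per +/-z pair of nonzero candidate vectors.
    candidates = [z for z in product(range(-k, k + 1), repeat=d)
                  if any(z) and z > tuple(-c for c in z)]
    E = []
    best = float("inf")
    while len(E) < max_vectors:
        trials = [(estimate_threshold(E + [z]), z)
                  for z in candidates if z not in E]
        new_best, z_best = min(trials)
        if new_best >= best:
            break  # no pair of vectors improves the threshold anymore
        best, E = new_best, E + [z_best]
    return E, best
```

With a real Monte-Carlo estimator this is expensive, since every candidate edge set requires its own percolation simulation; the sketch only captures the control flow of the search.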

Figure A7: Size of the largest connected component relative to the initial number of star-shaped resource states in the fusion lattice. The largest component size is plotted against the efficiency η = 1 − p_loss. Smaller lattice sizes are indicated by a lighter color, but the curves almost overlap for the chosen lattice sizes. (a) Simulation with a spin qubit as the central qubit of every star-shaped resource state. The dashed lines illustrate how the number of active qubits in the six-dimensional hypercubic and diamond lattices decreases in the regime of low photon loss. (b) The corresponding simulations for purely photonic resource states.