A robust W-state encoding for linear quantum optics

Error detection and correction are necessary prerequisites for any scalable quantum computing architecture. Given the inevitability of unwanted physical noise in quantum systems and the propensity for errors to spread as computations proceed, computational outcomes can become substantially corrupted. This observation applies regardless of the choice of physical implementation. In the context of photonic quantum information processing, there has recently been much interest in passive linear optics quantum computing, which includes boson-sampling, as this model eliminates the highly challenging requirements for feed-forward via fast, active control; that is, these systems are passive by definition. In usual scenarios, error detection and correction techniques are inherently active, making them incompatible with this model and arousing suspicion that physical error processes may be an insurmountable obstacle. Here we explore a photonic error-detection technique, based on W-state encoding of photonic qubits, which is entirely passive, based on post-selection, and compatible with these near-term photonic architectures of interest. We show that this W-state redundant encoding technique enables the suppression of dephasing noise on photonic qubits via simple fan-out style operations, implemented by optical Fourier transform networks, which can be readily realised today. The protocol effectively maps dephasing noise into heralding failures, with zero failure probability in the ideal no-noise limit. We present our scheme in the context of a single photonic qubit passing through a noisy communication or quantum memory channel; generalisation to the context of full quantum computation remains open.


Introduction
Within quantum information processing systems, the ability to detect errors is an absolute prerequisite for the road towards fault-tolerance. In the standard approach to fault-tolerant quantum computing, one first constructs error-detection circuits, upon which we build error-correction capabilities, finally revisiting the construction to ensure error transversality, facilitating recursive nesting of the protocol to suppress error rates [20,21,31]. In the absence of the initial error detection stage, such a construction for mitigating errors cannot function.
The standard framework when considering quantum error-correction is the context of universal quantum computation [15]. Given universality, multiple levels of error correcting codes can be implemented. In general this requires large, but sub-exponential, resource overheads of physical components, each with sub-threshold error rates. Although such constructions are essential for realising the full potential of quantum computing, they remain a distant target. Hence there is currently a pursuit to find utility for achievable near-term devices with post-classical capabilities, even if not universal [12,18]. This has led to the alternative target where universality is discarded as a requirement and the sole purpose is demonstrating some form of quantum computational advantage with pragmatically reasonable resources. Two examples of such paradigms, whose quantum power is proven by links to widely presumed structures in computational complexity theory, are IQP (Instantaneous Quantum Polynomial) and boson-sampling devices, both examples of so-called sampling problems. IQP is the class of sampling problems consisting of commuting gates acting on qubits that are prepared and measured in a superposition basis relative to the eigenbasis of the commuting gates [5,29]. Boson-sampling is the set of problems that can be constructed from the preparation and measurement of individual bosons subject to evolution via passive linear interferometers [1,2].
In the development of classical hardness arguments for these restricted models, the consideration of errors under a trace-norm induced distance has been of prime importance. Sampling from these distributions with bounded error (which is actually an input to the problem definition) is called approximate sampling. The arguments for the classical computational hardness of approximate sampling do not utilise any form of extra resources to deal with errors, as the main purpose of these models was the minimisation of quantum resources. Therefore, in standard error analysis for restricted models the objective has been to find the scaling relationships between trace-norm distance and the parameters defined within the error process (e.g. loss rates, mode overlaps, unitary noise, etc.) [3,4,14,16,22,27,28,32]. However, some quantum resources come cheaper than others within these models. In particular, additional modes prepared with vacuum states within the boson-sampling paradigm are considered to have much lower cost than additional modes prepared with single photons. However, given that this model is passive, one may suspect that it is not possible to perform any kind of error correction without leaving the constraints of the model, and hence that dealing with errors defaults back to the requirements associated with universal models.
Marshman et al. [19] have shown that, for boson-sampling, it is possible to detect the presence of random phase errors without leaving the paradigm and that the conditional state on detecting the error has a lower error than would otherwise be the case. This was done using a redundant encoding of the passive linear interferometer with a particular network chosen for encoding and decoding of input single photons. The presence of the photon within a particular mode was used as the error detection mechanism. Devices requiring higher photon numbers could be accommodated by parallel combinations of single photon interferometers. This is distinct from the considerations of [4] for errors within unitary networks as there it was assumed that there was no redundancy utilizing additional resources.
In this paper, we extend this result by considering single photon encodings that involve W-state path entanglement encoding of photonic qubits encoded in dual-rail form. These states can be generated from single photons through passive linear interferometers, and resemble a generalisation of an optical fan-out operation, having desirable properties for error correction such as the maintenance of path entanglement when single systems are lost. The expansion in mode number can be conceptually related to conventional error-correction schemes based on redundancy, such as Shor's original 3-qubit code [30]. We show that this encoding yields an improvement against local phase-shift errors [26], much like the previous work, but also that photon loss is the constraining factor in the heralded fidelity for this localised noise model. We also show that this performance is independent of the type of distribution underlying the random phase errors, provided that the errors acting on different modes are independent (i.e., no correlated errors), identical (all modes are treated the same) and the characteristic function for the distribution is well defined. Under these conditions any level of encoding will improve fidelity when conditioned on detecting no error, and with a large enough encoding we can fully mitigate the dephasing error.
To present our results we will first discuss different classes of multipartite entangled states, elaborate on why W-states are a good candidate for encoding, and define the W-basis in section 2. In section 3 we introduce our W-state based encoding using only linear optics and single-photon inputs, describe how to post-select to filter out errors, and describe the linear optics error model that we will consider. Then in section 4 we compute the success probability of the protocol, compute the fidelity of the output logical state with the input logical state, and show that the performance improves as the level of encoding increases. We discuss how to implement qubit gates on the logical qubit while in the W-state encoding in section 5. Finally, we discuss some implications of this work and make some concluding remarks in section 6. Importantly, in Sec. 6.2 we present an elementary argument from the perspective of engineering economics that, regardless of the state of precision engineering, this approach is likely to be economically justified in some regimes, complementary to investment into improved precision engineering. This observation is based on the intuitive notion that investment into enhanced engineering precision scales exponentially with precision requirements, whereas redundancy scales roughly linearly in economic overhead; from this arises the inevitability of a crossover point between how resources should be allocated to maximise economic efficiency.

Conceptual basis: Redundancy & entanglement robustness
An inherent feature of any kind of multi-qubit entangled state is that, by virtue of its entanglement, loss or decoherence of a single constituent qubit diminishes its degree of entanglement, similarly reducing its purity (equivalently, increasing its collective entropy). Some entangled states are more robust than others in this respect and, as discussed below, the W-states are a quintessential example of entangled states with this robustness property. Note that the resultant state following a partial trace operation upon a qubit (equivalent to loss when using single photon encoding) is independent of anything done only to the traced-out qubit prior to the partial tracing operation. Therefore considering loss via partial trace is completely sufficient to understand the worst-case degradation of an entangled state under any kind of local noise process.

GHZ states
The worst-case scenario is the GHZ state, a maximally-entangled n-qubit state of the form |GHZ_n⟩ = (|0⟩^⊗n + |1⟩^⊗n)/√2, whereby all qubits are collectively perfectly correlated. That is, measurement of any one qubit (in the computational Z-basis) reveals the equivalent measurement outcome of all others. However, this directly implies that losing access to this information on one qubit similarly implies loss of knowledge of the others; loss or dephasing directly correspond to such loss of information. For this reason, dephasing a single qubit, or losing it outright, implies complete decoherence of the entire n-qubit state. Specifically, partially tracing out any single qubit i from a GHZ state leaves behind the hopelessly mixed state tr_i(|GHZ_n⟩⟨GHZ_n|) = (|0⟩⟨0|^⊗(n−1) + |1⟩⟨1|^⊗(n−1))/2.
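This complete loss of coherence can be checked directly with a small numerical sketch (illustrative only; the state size n = 4 and qubit ordering are arbitrary choices):

```python
import numpy as np

n = 4
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)        # (|0...0> + |1...1>)/sqrt(2)

# Trace out the last qubit: reshape the state vector into (rest, last)
# and contract over the traced-out index.
m = ghz.reshape(2 ** (n - 1), 2)
rho = m @ m.T

# Expected: an incoherent 50/50 mixture of |0...0> and |1...1| --
# no off-diagonal coherence survives the loss of a single qubit.
expected = np.zeros((2 ** (n - 1), 2 ** (n - 1)))
expected[0, 0] = expected[-1, -1] = 0.5
assert np.allclose(rho, expected)
```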

Cluster states
Cluster (or graph) states [24,25] are a highly useful class of states, enabling universal quantum computation using the measurement based model for quantum computing (MBQC). Despite being more computationally useful than GHZ states, they are far less entangled, and hence far more robust against localised noise processes. For example, by measuring out the immediate neighbours of a lost qubit from within a graph state, a reduced, yet perfect graph state is recovered, given by the sub-graph of the original graph, with the neighbourhood of the lost qubit removed.

W-states
An especially robust (and so far not particularly useful) class of entangled states are the W-states [8,35], given by the equal superposition of a single excitation across n sites. In qubit form this can be expressed |W_n⟩ = (|10…0⟩ + |01…0⟩ + … + |00…1⟩)/√n. Alternately, this can be expressed in terms of creation or excitation operators â†_i for the ith site, |W_n⟩ = (1/√n) Σ_i â†_i |Ω⟩, where |Ω⟩ is the collective ground or optical vacuum state. The latter representation is the one we will focus on here, given its direct applicability to photonic encoding. These states exhibit complete permutational symmetry under qubit interchange; that is, the state is invariant under any permutation π̂ ∈ S_n in the symmetric group, π̂|W_n⟩ = |W_n⟩. Tracing out a single qubit from a W-state yields tr_i(|W_n⟩⟨W_n|) = ((n−1)/n)|W_{n−1}⟩⟨W_{n−1}| + (1/n)(|0⟩⟨0|)^⊗(n−1). That is, upon loss of a single qubit, with probability p = (n − 1)/n the state simply undergoes a reduction in its level of encoding to a |W_{n−1}⟩ state, preserving its W-type structure entirely, otherwise collapsing to the |0⟩^⊗(n−1) state. This implies that for large n, W-states are extremely robust (indeed almost invariant) against single-qubit loss. As discussed earlier, this directly implies similar single-qubit robustness against other noise channels.
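The partial-trace decomposition above can be verified numerically; the following sketch (with an arbitrary choice of n = 5) traces out one qubit and compares against the stated mixture:

```python
import numpy as np

def w_state(n):
    """|W_n>: equal superposition of a single excitation over n qubits."""
    psi = np.zeros(2 ** n)
    for i in range(n):
        psi[1 << i] = 1 / np.sqrt(n)
    return psi

n = 5
psi = w_state(n)

# Trace out one qubit (bit 0 of the index): reshape into (rest, last)
# and contract over the traced-out index.
m = psi.reshape(2 ** (n - 1), 2)
rho_rest = m @ m.conj().T

# Expected: (n-1)/n * |W_{n-1}><W_{n-1}| + 1/n * |0...0><0...0|
w_small = w_state(n - 1)
ground = np.zeros(2 ** (n - 1))
ground[0] = 1
expected = ((n - 1) / n) * np.outer(w_small, w_small) \
         + (1 / n) * np.outer(ground, ground)
assert np.allclose(rho_rest, expected)
```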
Note that atomic ensemble qubits [7] are a direct alternate physical manifestation of W-type encoding, whereby an ensemble (or cloud) of collectively-addressed atomic qubits undergoes collective excitation, mathematically of the form given in Eq. (4). This approach to realising physical qubits has attracted much attention, especially as a good candidate for quantum memories, given their notably high coherence lifetimes, often at room temperature, which can be intuitively associated with the described robustness of their underlying W-type entanglement structure: if a few atoms go missing from the cloud, little is lost.
The n-qubit W-state can be easily generalised to an entire orthonormal W-basis, by appropriately manipulating the phase relationships within the n terms in the superposition. One way in which to choose these phases is by taking the elements from the Quantum Fourier Transform (QFT) matrix, or generalised Hadamard matrices, both of which have equal 1/√n amplitudes across all matrix entries, with phase relationships ensuring orthonormality.
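As a minimal sketch of this construction, the QFT matrix can be built and its columns checked to be a valid W-basis: each has uniform 1/√n amplitudes, and the set is orthonormal (the size n = 6 is an arbitrary choice):

```python
import numpy as np

n = 6
jk = np.outer(np.arange(n), np.arange(n))
Q = np.exp(2j * np.pi * jk / n) / np.sqrt(n)   # QFT matrix

# Column k of Q gives the site amplitudes of |W_k>.
# Orthonormality of the columns <=> orthonormality of the W-basis.
assert np.allclose(Q.conj().T @ Q, np.eye(n))
# Every entry has magnitude 1/sqrt(n): all phase information, equal weight.
assert np.allclose(np.abs(Q), 1 / np.sqrt(n))
```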
These different phase relationships do not change the earlier observations about the states' robustness against local noise. This immediately leads to the intuition that, by choosing such a W-basis for encoding logical qubits, the encoded logical qubit must inherit, via linearity, these same robustness characteristics. This makes them a direct candidate for optical encoding, given that photonic implementation of QFT mappings may be achieved via passive linear optics, in the absence of any active control, and is realisable with today's integrated waveguide technology across a large number of modes.

Protocol
Figure 1: Photonic W-state error-correction and -detection protocol. Encoding of a single dual-rail photonic qubit proceeds via a Quantum Fourier Transform (QFT), which maps the 2-mode encoding across a larger number of redundant modes. The independent dephasing noise channel is denoted by E. Decoding proceeds via the inverse Quantum Fourier Transform (QFT†). Post-selection upon detecting the single photon within the desired 2 output modes defining the single qubit projects the logical state into one with reduced noise action.
The error-detection and correction protocol is shown in Fig. 1. Consider N optical modes, the first two of which contain a single photon state, defining a dual-rail-encoded photonic qubit. This qubit defines a logical qubit, |L⟩ = (α â†_0 + β â†_1)|Ω⟩, where |Ω⟩ is the N-mode vacuum state. To W-encode the logical qubit we pass the N modes through a linear optical network implementing the N-mode quantum Fourier transform, where Q_{jk} = ω_N^{jk}/√N are matrix elements of the N-dimensional quantum Fourier transform operator Q̂, with ω_N = e^{2πi/N}. This transforms the logical qubit to the encoded qubit, which represents the same state of quantum information, but in expanded form. Next the W-encoded state passes through a noisy channel that independently adds a random phase to each optical mode, â†_j → e^{iθ_j} â†_j, where the θ_j represent random variables, whose distribution is considered arbitrary at this point, forming a vector θ describing the phases applied to each mode. The state |W_θ⟩ denotes the W-encoded state following application of the phase noise channel. We now apply the decoding operation (the inverse QFT operation), and the first two output modes represent the decoded logical state ρ_L. Because of the noise in the channel we are not guaranteed to observe the photon strictly within the first two modes. Thus we post-select, treating cases where the photon is found in the other modes as heralding a failure. The intuition is that, for the heralded success cases, the phase noise errors have been filtered out. The fidelity of the decoded state compared to the input logical qubit is F_N = ⟨L|ρ_L|L⟩, where |L⟩ is implicitly a function of the superposition parameters α and β. Note that the overlap between two states is invariant under common unitary operations. As the encoding and decoding operations are unitary, it suffices to consider the fidelity of the W-encoded state, where the subscript in F_N indicates that the fidelity depends on the number of modes N used for the encoding.

This expression assumes knowledge of the phase errors in each mode, as represented by θ, but these are of course unknown. However we can model them as independent random variables acting on each mode separately according to some arbitrary distribution p, where p_j is the distribution for mode j; the encoded state after application of the error channel on average is denoted ρ̄_W, and the fidelity between the output and input of the error channel is computed with respect to this averaged state. As with all quantum operations, the noise channel is a linear map on the state space. Let the channel map be denoted by L_θ; then for the encoded qubit state we have |W⟩ = c_0|W_0⟩ + c_1|W_1⟩, where |W_k⟩ = Ŵ†_k|Ω⟩ following the definition in Eq. (8), with c_0 = α and c_1 = β. These equations can now be used to compute ρ̄_W. The characteristic function of a probability distribution p(x) is defined as φ_p(z) = ∫ p(x) e^{izx} dx. If we assume all θ_j are identically and independently distributed as p(θ), then the average of e^{i(θ_j − θ_k)} factorises as |φ_p(1)|² whenever the indices j and k are different. Thus

ρ̄_W = λ |W⟩⟨W| + (1 − λ) Δ(|W⟩⟨W|),    λ = |φ_p(1)|²,

where Δ is the completely dephasing map in the photon number basis, which removes all off-diagonal matrix elements. We can see from Eq. (24) that our error channel is essentially a dephasing channel with dephasing parameter λ. To analyse our protocol further we will choose a particular error model by assuming that the phase error in each mode is distributed as a Gaussian with mean μ and variance δ² [1]. This is a natural choice when we do not have any knowledge about the nature of the processes that generate the errors, beyond that many underlying random distributions average to give a final contribution to the error (à la the central limit theorem). The characteristic function of a normal distribution is φ_p(z) = e^{iμz − δ²z²/2}, leading to λ = e^{−δ²}. We can interpret the error channel as performing the identity with probability e^{−δ²} and applying the Fock-basis dephasing operator with probability (1 − e^{−δ²}). This channel is thus a dephasing channel with probability of no error occurring p = e^{−δ²}. In practical terms, the variance δ² of the phase error will depend on the physical implementation of the quantum channel. For fibre-optic cables we would generally expect the variance to increase with the length of the cable L, or equivalently the propagation time of the photon in the cable, t_p = L/v, where v is the propagation velocity of the photon in the fibre. If we model the variance as increasing linearly with propagation time, i.e. δ² = t_p/T_2, where T_2 is a constant defining a characteristic time for the dephasing channel, we can write down our error channel in the standard dephasing channel notation with λ = e^{−t_p/T_2}.
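The averaged-channel form ρ̄_W = λ|W⟩⟨W| + (1 − λ)Δ(|W⟩⟨W|) with λ = e^{−δ²} can be checked by Monte Carlo. The sketch below (with arbitrary N, δ and logical amplitudes) applies i.i.d. Gaussian phases to the single-photon amplitude vector of the encoded state and compares the averaged density matrix with the prediction:

```python
import numpy as np

rng = np.random.default_rng(1)
N, delta = 8, 0.3
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)     # logical qubit amplitudes

# N-mode QFT and W-encoding of the dual-rail amplitudes.
Q = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
psi_in = np.zeros(N, complex)
psi_in[0], psi_in[1] = alpha, beta
enc = Q @ psi_in

# Average the dephased encoded state over many Gaussian phase samples.
S = 20000
phases = rng.normal(0.0, delta, (S, N))
samples = enc * np.exp(1j * phases)               # (S, N) noisy amplitude vectors
rho = np.einsum("sj,sk->jk", samples, samples.conj()) / S

# Prediction: rho_bar = lam * |W><W| + (1 - lam) * Delta(|W><W|), lam = e^{-delta^2}
lam = np.exp(-delta ** 2)
predicted = lam * np.outer(enc, enc.conj()) + (1 - lam) * np.diag(np.abs(enc) ** 2)
assert np.allclose(rho, predicted, atol=2e-2)
```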

Error heralding
In implementing the protocol as described in Fig. 1, we can perform the post-selection in two different ways:

• Presence heralding: success is assumed based upon the detection of exactly one photon between the output modes 0 and 1, which define the logical qubit space. Note that in the absence of quantum non-demolition measurements, this is necessarily destructive, limiting its applicability. The heralding operator is effectively the projector onto the single-photon subspace of modes 0 and 1.

• Absence heralding: success is inferred via the detection of no photons in any of the remaining modes outside the logical qubit space. This is non-destructive on the logical qubit, broadening its utility. However, photon loss contributes to the occurrence of this signature, implying higher error rates on the remaining logical qubit. The heralding operator is equivalently a projection onto the vacuum state of modes 2 to (N − 1).

[1] This assumption allows for values of θ larger than single multiples of 2π, but the theory used here does not need to be changed to incorporate this. The operations used in defining the phase shift channel are periodic, and hence having larger values of phase does not invalidate this description. However, it does mean that there is no unique probability density function for any given distribution on the range [0, 2π).

Absence heralding
We define the absence heralded probability P_Ha as the probability that no photons are detected in modes 2 to (N − 1), and ρ_out as the N-mode state of the system at the end of the protocol before the final measurement. Assuming a uniform loss model parameterised by η, for our choice of input states the probability of detecting the photon in mode m is (1 − η) times the mean photon number ⟨â†_m â_m⟩ in that mode, where |Ω⟩ is the global vacuum state. Using this expression for the probability of detection under loss we can see that

P_Ha = Pr(photon is in modes 0 or 1) + Pr(photon is in modes 2 to (N − 1)) × Pr(loss).

Using Eq. (24), the first term on the right-hand side is simply λ + (1 − λ)(2/N), corresponding to the values of k that preserve the encoded qubit. The second term can be calculated as η(1 − λ)(N − 2)/N, where we have used the fact that the photon in the error state is equally spread over all the modes after decoding. If the state contains an error, the heralding will detect it with probability (N − 2)/N and miss it with probability 2/N, so there is a linear advantage in error detection with the number of modes. Substituting these results in Eq. (35) we get

P_Ha = λ + (1 − λ)[2/N + η(N − 2)/N].

If we assume the Gaussian error model in Eq. (27) this reduces to

P_Ha = e^{−δ²} + (1 − e^{−δ²})[2/N + η(N − 2)/N].

As the number of modes N increases, the heralded probability decreases; this is because the probability of the error state being found in modes 0 and 1 is inversely proportional to N. As we connected the phase error variance to a T_2 time via Eq. (30), we can also reparameterise the loss probability as η = 1 − e^{−t_p/T_1}.
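The heralding probability can be cross-checked numerically against this reconstructed closed form: decode the averaged channel output with the inverse QFT, read off the photon population in the logical modes, and account for loss on the error modes (parameter values are arbitrary illustrative choices):

```python
import numpy as np

N, delta, eta = 8, 0.4, 0.1
alpha, beta = np.sqrt(0.7), np.sqrt(0.3)
lam = np.exp(-delta ** 2)

Q = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
psi = np.zeros(N, complex)
psi[0], psi[1] = alpha, beta
enc = Q @ psi

# Averaged encoded state after the dephasing channel (Eq. (24)),
# then decode with the inverse QFT.
rho_enc = lam * np.outer(enc, enc.conj()) + (1 - lam) * np.diag(np.abs(enc) ** 2)
rho_dec = Q.conj().T @ rho_enc @ Q

p01 = np.real(rho_dec[0, 0] + rho_dec[1, 1])      # photon in the logical modes
p_err = 1 - p01                                   # photon in modes 2..N-1
P_Ha = p01 + p_err * eta                          # heralded iff error photon is lost

# Closed form: lam + (1-lam)*[2/N + eta*(N-2)/N]
assert np.isclose(P_Ha, lam + (1 - lam) * (2 / N + eta * (N - 2) / N))
```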
In terms of the T_1 and T_2 parameters and propagation time t_p, the absence heralded probability can be written as P_Ha = e^{−t_p/T_2} + (1 − e^{−t_p/T_2})[2/N + (1 − e^{−t_p/T_1})(N − 2)/N].

Presence heralding
The presence heralded case is the case where we post-select on there being no photon loss. The presence heralded probability is the probability of finding the photon in modes 0 and 1, and this can easily be seen to be P_Hp = (1 − η)[λ + (1 − λ)(2/N)].
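Using the reconstructed expressions above, a quick sketch relates the two heralding probabilities; note that, under these expressions, their difference is exactly the loss probability η (parameter values are arbitrary):

```python
import numpy as np

N, delta, eta = 8, 0.4, 0.1
lam = np.exp(-delta ** 2)

# Presence heralding: the photon must survive loss AND land in modes 0 or 1.
P_Hp = (1 - eta) * (lam + (1 - lam) * 2 / N)

# Absence heralding: photon in modes 0/1, or an error photon that was lost.
P_Ha = lam + (1 - lam) * (2 / N + eta * (N - 2) / N)

# Presence heralding is the stricter condition; the gap is exactly eta.
assert P_Hp < P_Ha
assert np.isclose(P_Ha - P_Hp, eta)
```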

Absence heralding
The absence heralded state is the state in the output modes 0 and 1 when no photons are detected in modes 2 to (N − 1). This can happen in two mutually exclusive ways: either the photon is lost and there is no photon in any mode, or there is no loss and our negative measurement of modes 2 to (N − 1) projects the quantum state into the subspace spanned by â†_0|Ω⟩ and â†_1|Ω⟩. So, the absence heralded state is given by

ρ_Ha = [η |Ω⟩⟨Ω| + (1 − η) Π̂_{0,1} ρ̄_L Π̂_{0,1}] / P_Ha,

where Π̂_{0,1} is the projector onto the subspace of modes 0 and 1. The fidelity of the heralded state with our logical input state is then

F_Ha = ⟨L|ρ_Ha|L⟩ = (1 − η) ⟨L| Π̂_{0,1} ρ̄_L Π̂_{0,1} |L⟩ / P_Ha,

where we have used the fact that the vacuum term contributes nothing to the overlap, since Π̂_{0,1}|Ω⟩⟨Ω|Π̂_{0,1} is orthogonal to |L⟩. We can interpret this as saying that heralding improves the output fidelity of our protocol by a factor of 1/P_Ha. Writing the logical amplitudes in terms of Bloch variables θ and φ, with α = cos(θ/2) and β = e^{iφ} sin(θ/2), the diagonal elements [|W⟩⟨W|]_jj of the encoded state in the photon-number basis can be summed explicitly, yielding

F_Ha = (1 − η)[λ + (1 − λ)(2 + sin²θ)/(2N)] / P_Ha.

In the limit of large N the heralded fidelity will approach (1 − η). This implies that the only error will be from photon loss. In terms of the dephasing and amplitude damping channel parameters T_2 and T_1 and a propagation time t_p, this can be written as

F_Ha = e^{−t_p/T_1} [e^{−t_p/T_2} + (1 − e^{−t_p/T_2})(2 + sin²θ)/(2N)] / P_Ha.
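The Bloch-angle closed form for the overlap can be verified numerically; this sketch (arbitrary parameters, and using the averaged-channel expression from Eq. (24)) compares the direct sum over mode populations with the reconstructed formula:

```python
import numpy as np

N, delta, eta = 8, 0.4, 0.1
theta, phi = 1.1, 0.7                             # Bloch angles of the logical qubit
alpha, beta = np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)
lam = np.exp(-delta ** 2)

Q = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
psi = np.zeros(N, complex)
psi[0], psi[1] = alpha, beta
enc = Q @ psi

# Overlap <W|rho_bar|W> = lam + (1 - lam) * sum_j |enc_j|^4
overlap = lam + (1 - lam) * np.sum(np.abs(enc) ** 4)

# Closed form in Bloch variables: lam + (1-lam)*(2 + sin^2(theta))/(2N)
assert np.isclose(overlap, lam + (1 - lam) * (2 + np.sin(theta) ** 2) / (2 * N))

# Heralded fidelity using the reconstructed P_Ha expression.
P_Ha = lam + (1 - lam) * (2 / N + eta * (N - 2) / N)
F_Ha = (1 - eta) * overlap / P_Ha
assert 0 < F_Ha <= 1
```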

Presence heralding
In the presence heralded case, we are post-selecting on there being no photon loss in the system, so the post-measurement state is the projection of ρ̄_L onto the single-photon subspace of modes 0 and 1, renormalised. Relative to the absence heralded case this removes the vacuum contribution; the loss factor (1 − η) cancels between the heralded overlap and the heralding probability, giving the fidelity as

F_Hp = [λ + (1 − λ)(2 + sin²θ)/(2N)] / [λ + (1 − λ)(2/N)].

The heralded probability and fidelity are plotted in Fig. 2 as functions of δ and T_2, with fixed values of η and T_1 respectively. From these plots it is evident that the heralded fidelity improves with the number of modes N. The choice of the parameter values T_1 and η does not influence the ordering of these plots.

Probability and fidelity plots
The heralding probabilities and associated post-selected state fidelities are shown as functions of the channel parameters in Fig. 2, with the respective analytic expressions in Tab. 1. We note that, while we are specifically plotting for a Gaussian noise model, the qualitative features can be expected to be the same for any i.i.d. error model. This is because the dephasing parameter λ is related to the characteristic function φ_{p(θ)}(z) through equation (23). A function and its Fourier transform have their variances inversely related, like quadrature variances, so even though the exact expressions for the fidelity and heralding probabilities may vary, we can expect the qualitative behaviour to remain the same and the average map to be a dephasing channel.
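This distribution-independence is easy to demonstrate by estimating λ = |φ_{p(θ)}(1)|² from samples of two different i.i.d. phase models; the specific distributions and widths below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def lam_from_samples(samples):
    """Dephasing parameter lam = |phi_p(1)|^2 estimated from phase samples."""
    return np.abs(np.mean(np.exp(1j * samples))) ** 2

# Gaussian phases: lam = e^{-delta^2} (independent of the mean mu).
delta = 0.5
gauss = rng.normal(0.3, delta, 200_000)
assert abs(lam_from_samples(gauss) - np.exp(-delta ** 2)) < 1e-2

# A non-Gaussian i.i.d. model (uniform phases on [-w/2, w/2]) still yields
# a dephasing channel; only the value of lam changes:
# lam = [sin(w/2)/(w/2)]^2 = sinc(w/(2*pi))^2 with numpy's normalised sinc.
w = 1.0
uniform = rng.uniform(-w / 2, w / 2, 200_000)
assert abs(lam_from_samples(uniform) - np.sinc(w / (2 * np.pi)) ** 2) < 1e-2
```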

Table 1: Heralding probabilities and post-selected logical qubit fidelities of a single photon qubit under the W-state encoding protocol, according to the two different modes of post-selection operation (rows: presence and absence heralding; columns: heralding probability and heralded fidelity). Note that here λ = |φ_{p(θ)}(1)|², as defined in equation (23).

Single-qubit unitary operations
Once in the encoded basis, can we directly perform single-qubit unitary operations, without the rigmarole of decoding and re-encoding? The answer is yes. Consider a single-qubit operation Û in the logical qubit basis. In the encoded photonic basis, this can be expressed by first extending it to all N modes as Û ⊕ Î_{N−2}, where we have inserted the identity operation Î_{N−2} on the ancillary input photonic modes, and Î = Q̂†Q̂. This yields the equivalent redundantly-encoded photonic unitary operation (i.e. between encoding and decoding), Ũ = Q̂(Û ⊕ Î_{N−2})Q̂†, the redundantly-encoded equivalent of the logical two-mode operation, obtained by conjugating with Q̂.
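The conjugation identity can be sketched numerically; here a logical Hadamard (an arbitrary example gate) is padded with the identity, conjugated by the QFT, and shown to act on the encoded state exactly as the bare gate acts on the logical amplitudes:

```python
import numpy as np

N = 8
Q = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)

# Logical Hadamard on the dual-rail qubit, identity on the N-2 ancilla modes.
U = np.eye(N, dtype=complex)
U[:2, :2] = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Redundantly-encoded gate: U_tilde = Q (U + I_{N-2}) Q^dagger
U_tilde = Q @ U @ Q.conj().T

alpha, beta = 0.6, 0.8j
psi = np.zeros(N, complex)
psi[0], psi[1] = alpha, beta

# Acting with U_tilde between encoding and decoding equals acting with U
# directly on the bare logical amplitudes.
lhs = Q.conj().T @ (U_tilde @ (Q @ psi))
rhs = U @ psi
assert np.allclose(lhs, rhs)
```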

Pros and cons
Our scheme has the following advantages: 1. It can be implemented in a number of quantum memory architectures such as atomic ensembles, optical cavities and delay lines.
2. Any independent uncorrelated phase noise can be corrected for with sufficient levels of encoding.
The physical source of these phase errors will depend on the particular architecture. For example, in a delay line temporal mismatch between photon arrival times will manifest as a phase error. If this is influenced by thermal fluctuations, it will be manifested as a dephasing error reflecting our theoretical calculations of the post error state. In an optical cavity array, the source of the phase error could be decay rate mismatch between cavities. All these cases are consistent with our model.
3. Normally, to correct phase mismatch, one would either thermally or mechanically isolate the system, or use a high-intensity source to periodically measure phase errors and actively correct them using feedback. Our scheme removes the need for such active stabilisation.
4. Because we only need passive linear optics without feed-forward, this is quite scalable with present-day technology, notably integrated photonic waveguide chips.
5. Robustness against mode loss. Standard QEC codes require the use of entangling gates such as CNOTs, and the code states themselves can be highly entangled, such as GHZ states. These states, however, are not robust against loss in the sense of a partial trace operation, while the W-state encoding will be robust against such loss, increasingly so with higher levels of encoding.
The disadvantages of our scheme are: 1. Inability to correct correlated phase fluctuations (e.g. a uniform phase shift across all redundant memory cells).

2. A multiplier in production cost and resource overhead, determined by the degree of redundancy.

Economic justification
There is a strong economic argument for the merit of our scheme, regardless of the state of engineering. The main economic overhead associated with the protocol is the substitution of a single quantum memory with a bank of N identical ones, a roughly linear cost overhead. However, via this trade-off, dephasing processes inherent within them can be asymptotically suppressed, enabling the construction of a quantum memory bank that overcomes the fidelity bounds of a single one.
Given that the engineering and production cost of a single quantum memory unit can increase exponentially with inverse infidelity, an N -fold cost overhead is expected to be a more economically efficient mechanism for noise-suppression in the regime of very high fidelity targets.
The net cost of the memory bank scales roughly linearly with N, whereas the cost of a single cell within it grows exponentially with fidelity. The economically optimal configuration is determined by the minimum over N of the total cost N · C_unit(F_unit) for a given target fidelity F_target, where C_unit(F_unit) is the engineering cost of a single memory cell with fidelity F_unit, and the relationship between the target and unit fidelity follows from the respective heralded fidelity given in Tab. 1, related by N. The crossover point, at which it becomes economically efficient to begin utilising our encoding, occurs when the cost of manufacturing a single memory cell with the target fidelity matches that of a redundant bank of cells with lower unit cost [7], C_unit(F_target) = N · C_unit(F_unit).
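This trade-off can be illustrated with a toy cost model. Everything below is an assumption for the sketch, not a result from the analysis: the exponential cost law C_unit(F) = c0·exp(k/(1 − F)), the constants c0 and k, and the simplistic relation that an N-fold bank tolerates N times the per-cell infidelity:

```python
import numpy as np

# ASSUMED cost law: exponential in inverse infidelity (illustrative only).
c0, k = 1.0, 0.05

def unit_cost(F):
    return c0 * np.exp(k / (1.0 - F))

def bank_cost(N, F_target):
    # Toy assumption: the infidelity budget splits N ways across the bank.
    F_unit = 1.0 - N * (1.0 - F_target)
    return N * unit_cost(F_unit) if F_unit > 0 else float("inf")

F_target = 0.999
costs = {N: bank_cost(N, F_target) for N in range(1, 200)}
best_N = min(costs, key=costs.get)

# For a sufficiently demanding target, some N > 1 bank is cheaper than a
# single ultra-precise cell: the crossover point of the argument above.
assert best_N > 1
assert costs[best_N] < costs[1]
```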

Robustness against different noise models
Whilst we have used a Gaussian noise model for the detailed analysis in Section 3, it is by no means an absolute requirement for many of the results we present. As mentioned in Section 4 C, the property that determines the output state is the characteristic function of the random variables in the noise model for the unitary errors, evaluated at z = 1. Due to the nature of the characteristic function, this value is well defined for virtually all possible distributions, even ones that do not have a well-defined moment generating function. The exact details of specific properties of the scheme will change under different distributions (e.g. the T_1 and T_2 decoherence factors identified here will not be well defined in general), but the analysis from the point of view of the encoded state will be essentially the same as what we have presented here.

Compatibility with the no-go theorem
It might seem that this protocol permits the distillation of entanglement present within the input state using only operations that maintain the Gaussian form of a Gaussian state, which has been proven impossible by a no-go theorem [9]. The conditions for the no-go theorem do not apply here: the final heralding measurement, which heralds success on a (Gaussian) vacuum state, is overall not a Gaussian measurement under our proposal to measure in the Fock basis, and the input Fock states are also non-Gaussian. This is similar to how the Gaussian no-go distillation theorem is avoided in current bosonic entanglement distillation schemes [6,23,33,34].

Comparison with other schemes
The idea of error filtration in a passive linear optic network has been explored in [11,13,17]. Broadly these schemes transmit a photon through a linear optical network such that some measurement outcomes will indicate an uncorrupted state in some output. We have formalized this intuition by giving an explicit code space and showed how it is robust against mode loss and i.i.d. dephasing noise.
Other schemes have explored the use of probabilistic gates to protect against transmission loss such as [10] where optical Bell measurements are used along with a parity encoding. However the encoding states used for these schemes are highly entangled GHZ-like states. These states cannot be deterministically prepared using passive linear optics without introducing active feed-forward and additional photons to accommodate the higher level of encoding -making such schemes highly impractical using present-day technology.
It is important to clarify the distinction between this protocol, which can be regarded as a form of error correction, and the more general concept of fault-tolerance, where gate errors are also accounted for.
Here we have assumed that our encoding and decoding operations are ideal, and all the dephasing errors are associated with the channel between them. Furthermore, we are not considering full quantum computations, but rather the storage or communication of just a single photonic qubit.
While future work might consider the effects of errors in the encoding and decoding operations of this protocol, the presented analysis is nonetheless reasonably well justified in most practical circumstances.
Current linear optics technology, both using discrete elements or in integrated wave-guides, has become extremely mature and precise, enabling passive linear networks to be implemented with very high degrees of fidelity.
On the other hand, photonic qubits communicated over long-distance links, via any medium, or which are held in quantum memories by coupling them to non-optical physical systems, are far more likely to suffer these noise processes.
A further distinction between our scheme and conventional error correction schemes, is that we don't rely on any notion of code concatenation to asymptotically improve error thresholds. Instead, we directly expand our level of encoding by increasing the number of optical modes in the fan-out operation implemented by the QFT encoding operation.
Unlike most well-known codes whereby error syndrome measurements are used to apply feed-forward corrections to encoded qubits, this protocol does not rely on any form of active correction via syndrome extraction. Rather, dephasing noise is effectively mapped to non-determinism, such that upon success the effective dephasing rate has been reduced.
The final important distinction between this scheme and conventional QEC schemes is that we do not create our encoded state via the introduction of additional qubits (i.e. photons), but via the introduction of additional optical modes, where the number of photons is preserved.
Owing to these conceptual differences compared to more familiar QEC and fault-tolerance techniques, our scheme as presented is especially suited to the context of photonic quantum communication or storage via coupling into quantum memories.

Conclusion
We have proposed a passive linear optics encoding, using W-states, which have the property of being strongly robust against entanglement degradation from qubit loss. This encoding was shown to be robust against any dephasing error modelled as an uncorrelated, independent and identically distributed dephasing process on each subsystem. We showed that the effective error probability is inversely related to the level of encoding N, vanishing in the large-N limit. The loss rate upper-bounds the fidelity and success probability, but its effect does not scale with N, given that uniform losses can be commuted through passive linear optics systems.
The protocol is naturally suited to optical quantum memories (e.g. via atomic ensembles, cavities, or delay lines), where the dominant error processes are independent dephasing and loss. Single-qubit operations are readily implementable within the encoded basis using conjugated, passive, linear optics operations.
We argued that, for high-fidelity quantum memories, utilising this technique complementary to improving engineering precision has merit from an economic perspective, given the merely linear overhead in cost associated with simple redundancy, versus the far greater cost of improving engineering precision.