A significant research effort is being made to develop near-term quantum devices that fall short of fault-tolerance requirements and the ability to perform universal computation, but are able to perform specific computational tasks beyond the reach of today’s best classical computers.
Although much attention in this area is given to Google’s plan to perform a large random quantum circuit sampling experiment, Aaronson and Arkhipov’s BosonSampling problem [1] continues to be considered a strong contender as the basis for a demonstration of significant quantum advantage.
The appeal of BosonSampling is in the way it translates so naturally to an experiment: sample from the output state generated when $N$ single photons are injected into a large linear optical circuit with $M \in \Theta(N^2)$ modes, implementing a random transformation on the optical modes. Remarkably, Aaronson and Arkhipov were able to show (whilst making only a few reasonable conjectures) that the existence of an efficient classical computing strategy to perform BosonSampling, even approximately, would have surprising and far-reaching consequences in complexity theory.
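The computational core of BosonSampling is that the probability of detecting the $N$ photons in a given set of output modes is proportional to $|\mathrm{Perm}(U_S)|^2$, where $U_S$ is an $N \times N$ submatrix of the circuit’s unitary; permanents are #P-hard to compute, which underpins the hardness argument. A minimal illustrative sketch (the function names and the use of Ryser’s formula are our own, not taken from the paper):

```python
import itertools
import numpy as np

def permanent(a: np.ndarray) -> complex:
    """Permanent of a square matrix via Ryser's formula, O(2^n * n) time."""
    n = a.shape[0]
    total = 0.0
    for subset in itertools.product([0, 1], repeat=n):
        k = sum(subset)
        if k == 0:
            continue
        mask = np.array(subset, dtype=bool)
        row_sums = a[:, mask].sum(axis=1)     # sum each row over chosen columns
        total += (-1) ** (n - k) * np.prod(row_sums)
    return total

# Haar-random 4-mode interferometer; inject photons into modes 0 and 1 and ask
# for the (collision-free) probability of detecting them in modes 2 and 3.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
u, _ = np.linalg.qr(z)                        # Haar-distributed unitary
sub = u[np.ix_([2, 3], [0, 1])]               # rows = output modes, cols = input modes
prob = abs(permanent(sub)) ** 2               # outcome probability
```

The exponential cost of the permanent is exactly what a large experiment would sidestep; for $N$ in the tens, no known classical method computes these probabilities efficiently.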
But is this the most natural problem solved by such an experiment? Work by García-Patrón, Renema and Shchesnovich [2] (complemented by concurrent work by Oszmaniec and Brod [3]) suggests not. They show that, in fact, the tendency for photons to get lost to the environment means that the problem solved by conducting such an experiment in typical architectures can be simulated using an efficient classical algorithm. As it turns out, it’s hard to convince photons to be part of a demonstration of BosonSampling – they would rather do something much more classically accessible.
The authors show that, by commuting loss to before the interferometer, the input multiphoton state can be well approximated by a thermal state. This, coupled with the fact that CPTP maps contract the trace distance, means that the thermal state remains a good approximation at the output of the experiment. Evolving and sampling the thermal state is efficient, and the approximation is shown to be a good one when the depth $D$ of the linear optical circuit satisfies $D \in \Omega(\log(M))$. Furthermore, the authors show that letting the depth grow more slowly is no obvious route to re-establishing classical hardness, by presenting a classical quasi-polynomial-time algorithm based on tensor network methods.
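A rough sketch of why a thermal input makes classical sampling easy: a thermal state has a positive Glauber–Sudarshan $P$ function, so it is a classical mixture of coherent states; a linear interferometer maps coherent states to coherent states, and ideal photodetection of coherent light gives Poissonian counts. The sketch below illustrates this standard argument under those assumptions (ideal detectors, illustrative parameter names) – it is not the authors’ algorithm verbatim:

```python
import numpy as np

def sample_thermal_output(u, nbar, rng):
    """Sample output photon counts when independent thermal states with mean
    photon numbers `nbar` are sent through the interferometer `u`.

    Uses the positive P representation: draw coherent amplitudes from a
    complex Gaussian, evolve them linearly, then draw Poissonian counts.
    """
    m = u.shape[0]
    # Complex Gaussian amplitudes with E|alpha_i|^2 = nbar_i per input mode.
    alpha = (rng.normal(size=m) + 1j * rng.normal(size=m)) * np.sqrt(np.asarray(nbar) / 2)
    beta = u @ alpha                           # linear-optical evolution
    return rng.poisson(np.abs(beta) ** 2)      # photon counts per output mode

rng = np.random.default_rng(1)
z = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
u, _ = np.linalg.qr(z)                         # Haar-random 6-mode interferometer
counts = sample_thermal_output(u, nbar=[0.5, 0.5, 0.5, 0, 0, 0], rng=rng)
```

Every step here costs time polynomial in the number of modes, in stark contrast to the permanent computations needed for ideal single-photon inputs.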
Beyond the limits imposed by this work, the utility of tricks such as post-selecting on only a small number of photons being lost has been constrained by the same authors. Daunting levels of improvement in the engineering of photon sources, linear optical circuitry and single photon detectors seem to be required unless a new approach is taken.
García-Patrón, Renema and Shchesnovich have not only provided novel classical simulation methods, but also an increased incentive to devise and utilise schemes to mitigate loss-induced errors in multiphoton linear optics experiments. This could, potentially, make the quest for quantum advantage by BosonSampling more relevant to the development of a wider range of photonic quantum computing devices.
[1] Scott Aaronson and Alex Arkhipov, "The computational complexity of linear optics", in Proc. 43rd Annu. ACM Symp. Theory of Comput., 333–342 (2011). doi:10.1145/1993636.1993682.
[2] Raúl García-Patrón, Jelmer J. Renema, and Valery Shchesnovich, "Simulating boson sampling in lossy architectures", Quantum 3, 169 (2019). doi:10.22331/q-2019-08-05-169.
[3] Michał Oszmaniec and Daniel J. Brod, "Classical simulation of photonic linear optics with lost particles", New J. Phys. 20, 092002 (2018). doi:10.1088/1367-2630/aadfa8.
This View is published in Quantum Views under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions.