Abstract
Boson sampling represents a promising approach to obtaining evidence of the supremacy of quantum systems as a resource for the solution of computational problems. The classical hardness of boson sampling has been related to the so-called Permanent-of-Gaussians Conjecture and has been extended to generalizations such as Scattershot Boson Sampling and approximate and lossy sampling under reasonable constraints. However, it is still unclear how demanding these techniques are for a quantum experimental sampler. Starting from a state-of-the-art analysis and taking into account the foreseeable practical limitations, we evaluate and discuss the bound for quantum supremacy for different recently proposed approaches, according to today's best known classical simulators.
Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
1. Introduction
The boson sampling (BS) problem is a well-defined example of a dedicated task that cannot be efficiently solved with classical resources (unless the polynomial hierarchy collapses to its third level), though it can be tackled with a quantum approach [1]. More specifically, it consists in sampling from the output probability distribution of n non-interacting bosons evolving through an m × m unitary transformation. Together with applications in quantum simulation [2] and searching problems [3], the aim of a boson sampling device is to outperform its classical simulator counterpart. This would provide strong evidence against the Extended Church–Turing Thesis and would represent a demonstration of quantum supremacy. Following the initial proposal, many experiments have been performed so far using linear optical interferometers [4–7], where indistinguishable photons are sent through an interferometric lattice made up of passive optical elements such as beam splitters and phase shifters. In the perspective of implementing a scalable device, one of the main differences with respect to a universal quantum computer is that only passive operations are permitted before detection. This implies that it is not known whether it is possible to apply quantum error correction and fault tolerance [8–10].
This apparent limitation was already considered in the first proposal [1], where the problem was proved to be classically hard also when lowering the demand to approximate boson sampling, under mild constraints. Many papers have focused on this issue [11] as well as on several possible causes of experimental errors [10, 12–16]. The intensive discussion on this topic has triggered a number of both theoretical [17–23] and experimental [24–27] studies on the validation of a boson sampler, i.e. the assessment that the output data sets are not generated by other efficiently computable models. Moreover, an advantageous variant of the problem called Scattershot Boson Sampling has been theoretically proposed [28, 29] and experimentally implemented [30] in order to better exploit the peculiarities of the experimental apparatus based on spontaneous parametric down conversion (SPDC). It was very recently proved that the same hardness result holds when a constant number of photons is lost before the input, which in turn can presumably be extended to constant losses at the output [31].
In this paper we review the fundamental issue of experimental limitations in order to understand what requirements make an implementation suitable to reach quantum supremacy. We define the latter as the regime where the quantum agent samples faster than its classical counterpart. We analyze the state of the art together with all the complexity requirements, reviewing the whole process in light of recent theoretical extensions and experimental proposals [31, 32]. Starting from the already established idea of sampling with constant losses occurring only at the input, we discuss the extension of boson sampling to a more general lossy case, where photons might be lost at the input and/or at the output. This method provides a gain from the experimental perspective both in terms of efficiency and of effectiveness. Indeed, we estimate a new threshold for the achievement of quantum supremacy and we show how the application of such generalizations could pave the way towards beating this updated bound.
2. Standard and scattershot boson sampling
BS consists in sampling from the probability distribution over the possible Fock states of n indistinguishable photons distributed over m spatial modes, after their evolution through an m × m interferometer which operates a unitary transformation U on their initial, known, Fock state |s⟩ = |s_1, …, s_m⟩. If s_i (t_j) denotes the occupation number of input mode i (output mode j), the transition amplitude from the input configuration |s⟩ to the output configuration |t⟩ is proportional to the permanent of the n × n matrix U_{s,t} obtained by taking s_i copies of the i-th column and t_j copies of the j-th row of U [33]:

⟨t| φ(U) |s⟩ = per(U_{s,t}) / √(s_1! ⋯ s_m! t_1! ⋯ t_m!),
where φ(U) represents the transformation associated with U on the Fock space. Given an n × n matrix A = (a_{ij}), its permanent is defined as per(A) = Σ_σ Π_{i=1}^{n} a_{i,σ(i)}, where the sum extends over all permutations σ of the columns of A. For a complex (Haar-random) unitary the permanent is #P-hard even to approximate [34]. Conversely, for a matrix with nonnegative entries it can be classically approximated in probabilistic polynomial time [35]. The most efficient way to compute the permanent of an n × n matrix A with elements a_{ij} is currently Glynn's formula [36]

per(A) = 2^{1−n} Σ_δ (Π_{k=1}^{n} δ_k) Π_{j=1}^{n} (Σ_{k=1}^{n} δ_k a_{kj}),
where the outer sum is over all possible n-dimensional vectors δ = (δ_1, …, δ_n) with δ_k ∈ {−1, +1} and δ_1 = 1. Processing these vectors in Gray-code order (i.e. changing only one entry per step, so that the number of update operations is minimized to O(n)) allows the total number of steps to scale as O(n 2^{n−1}).
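As a concrete illustration, Glynn's formula with the Gray-code iteration can be sketched as follows (a minimal Python implementation for small matrices; function and variable names are our own):

```python
def permanent_glynn(A):
    """Permanent of an n x n matrix via Glynn's formula, enumerating the
    2**(n-1) sign vectors delta (delta[0] fixed to +1) in Gray-code order
    so that each step flips a single sign and updates the row sums in O(n)."""
    n = len(A)
    if n == 0:
        return 1
    # Column sums for the all-(+1) sign vector.
    row_sums = [sum(A[k][j] for k in range(n)) for j in range(n)]
    prod = 1
    for s in row_sums:
        prod *= s
    total = prod
    delta = [1] * n
    sign = 1  # running product of the delta_k
    for i in range(1, 2 ** (n - 1)):
        k = (i & -i).bit_length()  # Gray-code bit flip, k in 1..n-1
        delta[k] = -delta[k]
        sign = -sign
        # delta[k] changed by +/- 2, so each sum is updated in O(1).
        for j in range(n):
            row_sums[j] += 2 * delta[k] * A[k][j]
        prod = 1
        for s in row_sums:
            prod *= s
        total += sign * prod
    return total / 2 ** (n - 1)
```

Even with the Gray-code update, the cost remains exponential in n, which is the heart of the hardness argument.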
While in the original proposal all the samples are derived from the same input, Scattershot BS consists in injecting each time a random, though known, input state. To this end, each input mode of a linear interferometer is fed with one output of an SPDC source (see figure 1). Successful detection of the corresponding twin photon heralds the injection of a photon into a specific mode of the device. It has been proved that the scattershot version of the BS problem maintains at least the same computational complexity as the original problem [28, 29]. Since BS was proved to be hard only in the dilute regime m ≫ n², attention can be restricted to those outputs with no more than one photon per mode. This also helps to overcome the experimental difficulty of resolving the number of photons in each output mode.
To give an idea of the computational complexity behind the BS problem, we show in figure 2 the real time an ordinary PC requires to calculate permanents of various sizes. The time needed to perform an exact classical calculation of a complete BS distribution is further enhanced by a factor given by the number of possible output configurations. The values for the most powerful existing computer, which is approximately one million times faster, can be obtained by straightforward rescaling. Currently, no approaches other than brute-force simulation, that is, calculation of the full distribution followed by (efficient) sampling of a finite number of events, have been reported in the literature for the classical simulation of BS experiments with a general interferometer.
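The brute-force strategy just described can be sketched as follows (a toy Python illustration restricted to collision-free, single-photon inputs and outputs; names are our own, and the naive permanent limits it to very small n):

```python
import random
from itertools import combinations, permutations

def naive_permanent(A):
    """Permanent by direct expansion over permutations -- O(n! n)."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        p = 1
        for i in range(n):
            p *= A[i][sigma[i]]
        total += p
    return total

def brute_force_boson_sampler(U, input_modes, shots, rng=random):
    """Compute the full collision-free output distribution of a BS device
    via permanents, then sample `shots` events from it.
    U is an m x m (unitary) matrix, input_modes lists the n modes fed with
    one photon each. Exponentially expensive -- illustration only."""
    n, m = len(input_modes), len(U)
    outputs = list(combinations(range(m), n))
    weights = []
    for out in outputs:
        # Submatrix with the rows of the output modes, columns of the inputs.
        sub = [[U[r][c] for c in input_modes] for r in out]
        weights.append(abs(naive_permanent(sub)) ** 2)
    norm = sum(weights)  # renormalize after discarding collision events
    probs = [w / norm for w in weights]
    return rng.choices(outputs, weights=probs, k=shots)
```

Once the distribution is tabulated, the sampling step itself is efficient; the exponential cost is entirely in computing the permanents.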
3. Scattershot boson sampling in lossy conditions
We now discuss how a Scattershot BS experiment with optical photons depends on the parameters of the setup. We analyze how errors in the input state preparation and system inefficiencies (i.e. losses and failed detections) affect the scalability of the experimental apparatus. We do not consider here issues such as partial photon distinguishability and imperfections in the implementation of the optical network, since under certain conditions they do not affect the scalability of the system. For the input state, the average mutual fidelity of the single photons must satisfy a bound that tightens as the number of photons grows [15, 16]. Necessary conditions in terms of fidelity [11] and sufficient conditions in terms of operator distance [37] have also been investigated for the amount of tolerable noise on the network optical elements.
Spontaneous parametric down conversion is the most suitable technique known to date to prepare heralded single-photon states in the optical regime. Photon pairs are emitted probabilistically into two spatial modes, and one of the photons is measured to witness the presence of its twin. Note that without post-selecting upon the heralded photons, the input state would be Gaussian, and thus the distribution would not be hard to sample if detected with a system performing Gaussian measurements [17, 38]. The main drawback of SPDC sources is the need for a compromise between the generation rate and multiple-pair emission. Indeed, the single-pair probability g has to be kept low so as to avoid injecting more than one photon into the same optical mode. Hence, it is essential to consider at least the noise introduced by the second-order terms characterizing double-pair generation, which scale as ∼g² (see appendix A). The probability that, out of m sources, s generate a single pair and t generate a double pair can then be written as

p(s, t) = (m; s, t) g^s (g²)^t (1 − g − g²)^{m−s−t},

where (m; s, t) = m! / (s! t! (m − s − t)!) is the multinomial coefficient. This expression includes all possible combinations of s sources generating one pair (g^s) and t sources generating two pairs (g^{2t}). We show in figure 3(a) a schematic representation of a Scattershot BS setup where we depict all the experimental parameters defined below. We denote by p_T the probability to trigger on a single photon, leaving out dark counts. If we do not employ photon-number-resolving detectors (in accordance with the performance of current technology), the probability that a detector clicks with n input photons is then given by 1 − (1 − p_T)^n. Meanwhile, we call p_in the probability that a single photon is correctly injected into the interferometer, while p_out is the probability that the injected photon is not lost in the network and is eventually detected at the output.
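Under the simple model in which each source emits one pair with probability g and a double pair with probability g² (higher orders neglected), the probability of s single-pair and t double-pair emissions among m sources can be sketched as follows (an assumed reconstruction, not necessarily the paper's exact expression):

```python
from math import factorial

def generation_probability(m, s, t, g):
    """Probability that, among m SPDC sources with single-pair emission
    probability g and double-pair probability g**2, exactly s emit one
    pair and t emit two pairs (higher-order emissions neglected)."""
    multinomial = factorial(m) // (factorial(s) * factorial(t) * factorial(m - s - t))
    return multinomial * g ** s * g ** (2 * t) * (1 - g - g ** 2) ** (m - s - t)
```

Summing over all admissible (s, t) returns 1, since in this model the three outcomes per source (nothing, one pair, two pairs) are exhaustive.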
In addition to the original scheme for Scattershot Boson Sampling, optical shutters, that is, a set of vacuum stoppers, are placed on each of the m input modes. The shutters are open only in the presence of a click on the corresponding heralding detector, thus ruling out the possibility of injecting photons from unheralded modes. The hypothesis of working in a post-selected regime (with shutters) is helpful in this context: indeed, we are interested only in those events where exactly n photons enter and exit the chip, disregarding every other possible combination. After some combinatorial manipulation, we derive the probability to successfully perform a Scattershot BS experiment with n photons (i.e. an experiment where n triggers click and n single photons are injected and successfully detected at the output),
where p₂ = 1 − (1 − p_T)² is the probability that a trigger detector clicks on a photon pair, p_T being the single-photon trigger efficiency. The outer sums run over all possible scattershot single-photon and pair generations, while the inner sum constrains the number of correctly injected single photons to n (among these, only n₁ derive from singly generated pairs).
However, from an experimental point of view we only know that n detectors have clicked both at the input and at the output. Hence, we cannot rule out that this was the result of a fake sampling where additional photons were injected and some erroneous compensation occurred (e.g. unsuccessful injection of single photons, losses in the interferometer, failures in the output detection). Indeed, the probability to carry out an experiment from a (non-verifiable) incorrect input state is given by
where we sum the probability to inject a fake state, while triggering and detecting n photons, over all possible generations with t double pairs (see appendix B).
We plot in figure 3(b) a numerical analysis of the ratio between correct and fake samplings for different numbers of photons, varying the number of modes and accordingly changing the detection probability in a feasible way. We find that the ratio of correctly sampled events over fake ones is highly dependent on the number of extra undetected photons. Indeed, this ratio is a decreasing function of the parameters that increase the weight of multiphoton emission and injection, such as the single-pair probability g, and an increasing function of the trigger and output detection efficiencies.
4. Validation with losses
We will discuss here some extensions of the system that could boost quantum experiments towards reaching the classical limit. A major contribution in this direction came from Scott Aaronson and Daniel Brod, who generalized BS to the case where a constant number of losses occurs at the input [31], while setting the stage for losses at the output as well. Following their proposal, we discuss here the problem of successfully validating these lossy models against the output distribution of distinguishable photons, which represents a significant benchmark. Indeed, it is still an open question whether it is possible to discriminate true multiphoton events from data sampled from easy-to-compute distributions. A non-trivial example is given by the output distribution obtained when the same unitary is injected with distinguishable photons. The latter presents rather close similarities with the true BS distribution, and at the same time provides a physically motivated alternative model to be excluded. A possible approach to validate BS data against this alternative hypothesis is a statistical likelihood-ratio test [24, 39], which requires calculating the output probability assigned to each sampled event by both distributions (i.e. a permanent). In this case a validation parameter is defined as the product, over a given number of samples, of the ratios between the probabilities assigned to the occurred outcomes by the BS distribution and by the distinguishable one. The certification is considered successful if this parameter is greater than one with a 95% confidence level after a fixed number of samples. On the one hand, the number of data required to validate scales inversely with the number of photons and is constant with respect to the number of modes. This means that with this method there is no exponential overhead in terms of the number of necessary events. Conversely, the need to evaluate matrix permanents to apply the test implies an exponential (in n) computational overhead.
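A minimal sketch of the likelihood-ratio test in Python (assuming collision-free events and single-photon inputs; the distinguishable-photon probability is taken as the permanent of the element-wise squared moduli, a standard identity):

```python
from itertools import permutations

def permanent(A):
    """Permanent by direct expansion -- adequate for small test matrices."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        p = 1
        for i in range(n):
            p *= A[i][sigma[i]]
        total += p
    return total

def likelihood_ratio(U, input_modes, samples):
    """Product, over the sampled (collision-free) outputs, of the ratio
    between the BS probability |per(U_sub)|^2 and the distinguishable-
    photon probability per(|U_sub|^2 element-wise). Values above 1 after
    many samples favour the genuine BS hypothesis."""
    V = 1.0
    for out in samples:
        sub = [[U[r][c] for c in input_modes] for r in out]
        p_bs = abs(permanent(sub)) ** 2
        p_dist = permanent([[abs(x) ** 2 for x in row] for row in sub])
        V *= p_bs / p_dist
    return V
```

As a sanity check, a balanced beam splitter assigns zero BS probability to the two-photon coincidence outcome (Hong–Ou–Mandel interference), so any such event drives the ratio to zero.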
A relevant question is then whether lossy BS with indistinguishable photons can in principle be discriminated from lossy sampling with distinguishable particles. The same likelihood-ratio technique can be adopted to validate a sample in which some losses have occurred. Indeed, for each event we apply the protocol by including in each output probability all the cases that could have yielded the given outcome. This calculation is performed both in the BS and in the distinguishable-photon picture. We thus verified that the scaling in n and m obtained in the lossless case is preserved when a constant (with respect to n) number of losses at the input is considered. We will then show in section 5 that constant losses still boost the system performances.
Additionally, we have considered the case where losses happen at the output, after the evolution, and the combined case where they might occur both at the input and at the output. We plot in figure 4(b) the validation of a 30-mode BS device for these lossy cases, verifying the scalability with respect to the number of photons. This result confirms the findings of [31] that losses that are constant with respect to the number of photons should not affect the complexity of the problem. It is thus a relevant basis for the definition of a new problem, lossy Scattershot Boson Sampling, which, as we are going to show, allows one to lower the bound for quantum supremacy.
5. The bound for quantum supremacy
We can now discuss a threshold for quantum supremacy by gathering all the considerations and experimental details related to the implementation of Scattershot BS with optical photons presented so far, including losses at the input and at the output. Let us call t_C the time required to classically sample a single BS event and t_Q the time for a successful experimental run; our aim is then to identify the set of parameters defining the region where t_Q < t_C. As discussed in section 2, if m is the number of modes, the time required by a classical computer to simulate a single Scattershot BS run with n photons using a brute-force approach (classical computation of the full distribution and efficient sampling of an output event) is given by

t_C ≃ N_out τ n 2^{n−1},

where N_out is the number of possible output configurations and τ is the estimated time per elementary operation on Tianhe-2, the most efficient existing computer, capable of 34 petaFLOPS (a first run has been recently reported in [40]). On the other hand, a quantum competitor that arranges m single-photon sources connected in parallel to m inputs could in principle sample from events with any number of photons. However, runs with too many or too few photons will be strongly suppressed: in particular, we will have to wait on average

t_Q = 1 / (R P_n)

to sample from an n-photon generalized Scattershot BS run, i.e. either a successful or a lossy experiment. Here R is the rate at which the laser pumps the SPDC sources and P_n is the probability to correctly perform an n-photon BS given m sources. Its full expression takes into account all possible cases where q − t single pairs and t double pairs are generated, n trigger detectors successfully click (n₁ on single-photon inputs and the rest on two-photon inputs, with the corresponding detection probabilities), some photons are lost at the input (each injected with efficiency p_in), j is the fraction of lost photons coming from correctly generated single pairs, and finally n photons are detected at the output (each with detection efficiency p_out).
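The comparison can be sketched numerically as follows (the FLOPS figure is from the text; the cost model of n·2^(n−1) floating-point operations per permanent times the number of collision-free outputs, and the example pump rate in the test, are our assumptions):

```python
from math import comb

TIANHE2_FLOPS = 34e15  # ~34 petaFLOPS, as quoted in the text

def classical_time(n, m, flops=TIANHE2_FLOPS):
    """Brute-force estimate of t_C: one permanent per collision-free
    output, each costing roughly n * 2**(n-1) floating-point operations."""
    return comb(m, n) * n * 2 ** (n - 1) / flops

def quantum_time(rate, p_n):
    """Average waiting time t_Q = 1/(R * P_n) for an n-photon event,
    given pump rate R and per-shot success probability P_n."""
    return 1.0 / (rate * p_n)
```

The classical cost explodes combinatorially with n and m, while the quantum side is governed entirely by how rarely a valid n-photon event occurs.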
As we have just shown, t_Q and the success probabilities depend on experimental parameters such as the detector efficiencies, the coupling among the various segments of the interferometer and the single-photon sources. If ℓ is the difference between the number of heralded and detected photons, the probability of a lossy BS run with n detected photons will be the sum of all possible cases in which ℓ = ℓ_in + ℓ_out, where ℓ_in (ℓ_out) is the number of photons lost at the input (output). We remark that the different distributions which yield the same outcome in the lossy case present a significant total variation distance with respect to the lossless one (see appendix C).
We display in figure 5 the results of the comparison between a classical and a quantum agent for a traditional Scattershot BS, together with data for a case with constant losses. We vary the number of photons and sources and we look for all the n-photon events satisfying t_Q < t_C. The detection efficiency is supposed to decrease when we increase the dimension of the optical network, since it includes the transmission through the interferometer. Indeed, let p_loss be the probability to lose a photon in an integrated beam splitter (a directional coupler) with current technology. The overall single-photon transmittivity then scales as (1 − p_loss)^m for interferometer architectures where the number of beam-splitter layers scales as m. Assuming a feasible improvement in experimental techniques to come alongside the realization of larger devices, we obtain that the bound for quantum supremacy lies in a regime with fewer photons, sources and modes than previously estimated. Despite being experimentally demanding, this generalized Scattershot BS turns out to be a step forward compared with the previously estimated regime of 20 photons in 400 modes. In fact, on the one hand it requires a smaller interferometric network, less sensitive to losses, and on the other hand the lower number of photons increases the rate and loosens the requirements on the single-photon and optical-element fidelities [11, 15, 16].
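The loss scaling just described amounts to a one-liner (per-coupler loss p_loss and circuit depth proportional to m are the stated assumptions):

```python
def network_transmittivity(p_loss, m):
    """Overall single-photon transmission through an interferometer whose
    number of beam-splitter layers grows linearly with the mode number m,
    each directional coupler losing the photon with probability p_loss."""
    return (1 - p_loss) ** m
```

For example, with a 2% per-coupler loss and m = 30 the transmission already drops below 55%, which is why smaller networks are advantageous.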
6. Boson sampling with quantum dot sources
As we have highlighted in sections 3–5, the main issue of BS with optical photons is the low scalability of SPDC sources due to the occurrence of multiple-pair events. Recent experiments have tried to overcome this problem by relying on quantum dot sources [41–43], where a train of single-photon pulses is deterministically generated (with up to 99% fidelity) by an InGaAs quantum dot embedded in a micro-cavity and excited by a quasi-resonant laser beam [44, 45]. The emitted pulses are subsequently collected into a single-mode fiber with a total source efficiency η, which depends on the laser pump power due to saturation effects in the quantum dot. The most common approaches to convert a train of single photons equally separated in time into an n-photon Fock state are passive [42] and active [43] demultiplexing. The former can be achieved by arranging a single array of beam splitters whose reflectivities and transmittivities are tuned such that the probability for each photon to escape the cascade is always 1/n (i.e. numbering the beam splitters from 1 to n − 1, their reflectivities scale as R_i = 1/(n − i + 1)). Maintaining the previous notation, the probability to successfully perform a BS experiment with n photons injected from the first n ports of the array reads
We observe from equation (9) that BS with quantum dots suffers a significant drawback when passive demultiplexing is adopted, since the probability of a successful event is suppressed by a factor related to the factorial of the number of photons. We plot in figure 6(a) a comparison of the performance of Scattershot BS with SPDC single-photon sources and BS with a quantum dot source and passive demultiplexing. While the advantage is quite remarkable for a small number of photons, the adoption of passive demultiplexing reduces the efficiency for increasing n. A substantial improvement can be achieved by exploiting an efficient active demultiplexing method, thus rendering quantum dot sources a promising platform to reach the quantum supremacy regime (see figure 6(b)). In the latter case the probability of a successful BS run is supposed to scale as (η η_D)^n, where η_D is the efficiency of the demultiplexing procedure (see [46] for a technique with heralded photons).
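The two demultiplexing options can be compared with the following sketch (an assumed model, not the paper's equation (9): passive routing sends each photon to any port with probability 1/n, active routing is deterministic up to an efficiency η_D):

```python
from math import factorial

def passive_demux_success(n, eta):
    """Probability that n quantum-dot photons end up on n distinct ports of
    a passive 1-to-n splitter array: n!/n**n collision-free routings, with
    each photon also surviving with source/collection efficiency eta."""
    return eta ** n * factorial(n) / n ** n

def active_demux_success(n, eta, eta_demux):
    """Active switching routes every photon to its own port, succeeding
    with a per-photon demultiplexer efficiency eta_demux."""
    return (eta * eta_demux) ** n
```

Already at n = 3 the collision-free fraction n!/n^n is only 2/9, and it decays roughly as e^{−n}, which illustrates why active demultiplexing becomes essential at larger photon numbers.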
7. Boson sampling with microwave photons
Now that we have addressed the strengths and weaknesses of Scattershot BS for SPDC and quantum dot sources with optical photons, we discuss the prospects offered by a completely new approach [32]. BS with microwave photons is an experimental proposal that meets all the requirements (e.g. Fock states of indistinguishable photons at the input, a Haar-random unitary transformation and entangled Fock states at the output) while implementing them in a different way. More specifically, n photons are deterministically generated by exciting n X-mon qubits in a chain composed of m qubits (potentially at a very high repetition rate), each coupled both to a storage and to a measurement resonator through a Jaynes–Cummings interaction. By tuning their frequencies through an external magnetic field, the n selected qubits are set in resonance with their storage resonators, thus creating n single-photon Fock states with high efficiency [47, 48]. The interferometric network of beam splitters and phase shifters implementing the m × m unitary transformation is replaced by the chain of time-dependently interacting cavities. A superconducting ring with a Josephson junction is used to tune the coupling between resonators i and i + 1 and acts as a beam splitter [49], described by a tunable hopping interaction between the two cavity modes. This interaction can be turned on and off very rapidly (on the order of nanoseconds) and high coupling ratios have already been reported [50, 51]. Meanwhile, a phase-shift operation can be realized by exciting the qubit associated with a cavity and pushing it off resonance for a given time, inducing a frequency shift among the resonators which, after an appropriate time, is turned into a phase shift. By sequentially applying beam-splitter and phase-shifter operations, an m × m unitary transformation can be implemented in O(m) steps [52], each requiring a fixed operation time.
Considering the typical cavity decoherence times reported in [53, 54], this potentially permits performing more than one hundred steps (assuming a sufficiently high fidelity for each operation). Eventually, the input preparation procedure is inverted to measure the output state: the qubits are put in resonance with the cavities and, through a Jaynes–Cummings interaction, those whose corresponding resonators contain a photon are naturally excited [47]. A non-demolition measurement of the qubit state can finally be performed with more than 90% efficiency by coupling it to a low-quality cavity [55]. Even without photon-number-resolving detectors, in the regime m ≫ n² (where BS was proved to be hard) attention can be restricted to those outcomes with 0 or 1 photons in each output mode due to the Boson Birthday Paradox [1, 56]. Furthermore, superconducting qubits are a promising tool towards generalizations employing photon-number-resolving detectors [57].
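As a back-of-the-envelope check (the nanosecond-scale figures in the test below are illustrative assumptions consistent with the text, not measured values):

```python
def max_unitary_depth(t_step_ns, t_decoherence_ns):
    """Number of sequential beam-splitter/phase-shifter steps that fit
    within the cavity decoherence time (both durations in nanoseconds)."""
    return t_decoherence_ns // t_step_ns
```

With, say, ~100 ns per step and ~10 μs of coherence, one gets about 100 steps, matching the "more than one hundred steps" regime quoted above.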
Using the feasible experimental parameters provided in [32], we can evaluate the threshold for a generalized version of BS with microwave photons. The calculations are similar to those for the case of optical photons, without the issue of double-pair generation but with the constraint that the experimental rate is bounded by the time needed to implement the unitary, scaling inversely with the number of steps (modes) m. We refer to appendix D for the complete expressions, which also take into account losses and dark counts (the latter proved to be theoretically equivalent to losses at the input [31]). We present a summary of the results in figure 7, where the ratio t_C/t_Q is plotted as a function of the number of modes m, i.e. the dimension of the unitary. In this case we keep the detection efficiency constant, since we expect negligible photon losses for times well below the cavity decoherence time. The final theoretical result leads to a significant improvement in efficiency and an additional step towards quantum supremacy, which could be achieved with a 7-photon, 50-mode experiment.
8. Conclusions
We have reviewed the problem of BS together with its most recent extensions and variations: scattershot and lossy sampling, and the proposal to adopt photons in the microwave spectrum. In particular, we have highlighted the strengths and weaknesses of the model under reasonable experimental assumptions in order to understand if and how BS can be an effective approach to assess quantum supremacy. Using SPDC sources for single-photon generation has the unavoidable drawback of multiple pairs, leading to experimentally inaccurate results that worsen as the number of photons increases. Besides, not only is the generation of an initial state with large n hard to achieve, it also requires higher mutual fidelities among the particles and higher accuracy of the optical elements to preserve scalability. Recent experimental results on quantum dot sources [41–43] can open the way to new perspectives in the implementation of boson sampling with large photon numbers, due to their high generation efficiency and high photon indistinguishability.
Performing a state-of-the-art analysis, we have shown that the threshold t_Q < t_C, i.e. the regime where the quantum agent samples faster (in time t_Q) than its classical counterpart (in time t_C), can be achieved with a Scattershot BS experiment with SPDC sources in a regime far less demanding than the original estimate of 20 photons in 400 modes. While on the one hand the permanent guarantees the complexity of the problem, on the other hand it turns out to be much more convenient to increase the sampling rate rather than focusing on the size of the permanent. Indeed, a crucial role in reaching this regime is played by the availability of many sources in parallel, together with the possibility of sampling every time from a different random input and, most of all, the inclusion of events with a constant number of lost photons. The same analysis conducted with quantum dot sources and an active demultiplexing approach showed that the bound can also be theoretically attained.
Aiming to maximize the efficiency and accuracy of the protocol, we have finally analyzed a new proposal by Peropadre et al [32] consisting in the adoption of on-demand microwave photons. This proposal overcomes the problem of erroneous input states and provides a remarkable decrease of losses, thus enhancing the experimental rate. Since in this case the unitary is implemented in time (it is decomposed into a number of steps performed one after the other), the sampling rate scales inversely with its dimension. However, this does not constitute a relevant issue considering that the bound for quantum supremacy is lowered to 7 photons in 50 modes, which requires a running time appreciably below the decoherence time of the system.
Acknowledgments
The authors acknowledge E F Galvao, D J Brod and J Huh for very useful discussions on the subject of this paper. This work was supported by the H2020-FETPROACT-2014 Grant QUCHIP (Quantum Simulation on a Photonic Chip; Grant Agreement No. 641039, www.quchip.eu).
Appendix A.: SPDC sources
The two-mode SPDC state reads |ψ⟩ = √(1 − λ) Σ_s λ^{s/2} |s, s⟩ with λ = tanh²χ, the related photon-number probability distribution being

p(s) = (1 − tanh²χ) tanh^{2s}χ, (A.1)

where s is the number of photons per mode and χ is the squeezing parameter. Since quantum supremacy is expected to require quite a large number of possible input states and sources, it is mandatory to evaluate the contribution of second-order generation terms. If we define g = p(1) as the probability of generating a single pair, then from equation (A.1) the second order scales as p(2) ≈ g².
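The distribution (A.1) can be evaluated directly (a short Python check of the ∼g² scaling of the double-pair term; the tanh parametrization is the standard two-mode squeezed vacuum form):

```python
from math import tanh

def spdc_prob(s, chi):
    """Photon-number distribution of a two-mode squeezed vacuum state:
    p(s) = (1 - tanh(chi)**2) * tanh(chi)**(2*s)."""
    lam = tanh(chi) ** 2
    return (1 - lam) * lam ** s
```

For small squeezing, p(2) ≈ p(1)², i.e. the double-pair term indeed scales as g².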
Appendix B.: Boson sampling from erroneous input state
Given an apparently correct sample with n photons triggered at the input and output, this could be the result of the injection of photon pairs such that, even though the total number of particles is n, the effective input state is different from the expected one. The probability of injecting n₁ single photons while triggering n entries turns out to be
where the sums run over the total number of injected photons, z and y being the fractions coming respectively from single photons and from pairs impinging on the input ports of the interferometer. The outer sum carries two constraints: one preventing more than n photons from being triggered at the input, and one preventing fewer than n photons from being detectable at the end of the chip, x being the number of erroneously injected pairs.
In particular, if we generate s single photons and t pairs, the probability to inject a fake state (with the wrong partition of single photons and pairs), despite having heralded an apparently correct one, and to detect n photons at the output is given by
Here the sum considers all possible cases with at least one extra injected photon coming from a double pair, where n trigger detectors successfully click (n₁ on single-photon inputs and the rest on two-photon inputs, with the corresponding detection efficiencies), and n photons are detected at the output. We remark that we did not distinguish those events in which a greater number of photons is injected into the interferometer, some of them are lost within the chip and finally only n are successfully detected. Indeed, we included all kinds of losses (i.e. those in the interferometer and those at the detection stage) in a single output-efficiency parameter. The effect of losses during the unitary evolution on the BS distribution would be very subtle to estimate and might be addressed in future works; however, it does not affect our aim of finding a lower bound for quantum supremacy.
Appendix C.: Validation data
We report hereafter additional simulated data on the validation of lossy BS with m = 40 modes against the distinguishable sampler in the case of losses at the input, at the output and both (see tables C1, C2 and C3 respectively). The probability assigned to each event of a lossy BS distribution is obtained by averaging over all possible samplings that could have led to the given lossy outcome. More specifically, when photons are lost at the input and n are detected at the output, we average over the distributions corresponding to each possible n-photon input state. The same applies when we know that losses occur before the output detection: this time we average over the possible output configurations. Conversely, if we assume that photons can be lost both at the input and at the output, we average over all events in which the total number of losses is shared between input and output. This means that for every such splitting we average over the combinations of possible inputs to reconstruct the theoretical output distribution, from which in turn we deduce the probability of the detected events.
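The "Inputs" column of table C1 follows a simple combinatorial rule, which can be checked as below (our reading: with n photons detected and ℓ lost at the input, one counts the ways of choosing which ℓ of the n + ℓ heralded photons were lost):

```python
from math import comb

def lossy_input_combinations(n_detected, n_lost):
    """Number of possible input configurations compatible with n_detected
    photons at the output when n_lost heralded photons were lost at the
    input: choose the lost ones among the n_detected + n_lost heralded."""
    return comb(n_detected + n_lost, n_lost)
```

For instance, 3 detected photons with 2 input losses admit C(5, 2) = 10 compatible heralded inputs, matching the table.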
Table C1. Minimum data set size to validate BS with losses occurring at the input against a sampling with distinguishable photons with a 95% confidence level. The results have been averaged over 100 Haar random 40 × 40 unitaries. Inputs indicates the number of possible input combinations, given the number of injected photons and losses.
Photons [n] | Losses | Inputs | # samples |
---|---|---|---|
3 | 0 | 1 | 19 ± 3 |
3 | 1 | 4 | 50 ± 6 |
3 | 2 | 10 | 88 ± 9 |
4 | 0 | 1 | 14 ± 2 |
4 | 1 | 5 | 37 ± 4 |
4 | 2 | 15 | 64 ± 6 |
5 | 0 | 1 | 12 ± 2 |
5 | 1 | 6 | 31 ± 3 |
5 | 2 | 21 | 54 ± 6 |
6 | 0 | 1 | 11 ± 1 |
6 | 1 | 7 | 29 ± 3 |
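The Inputs column of table C1 is consistent with a simple counting argument: when k of the n + k injected photons are lost at the input, the n surviving photons can occupy any of the C(n + k, n) subsets of the heralded input modes. This closed form is our inference from the table, not a formula stated in the text:

```python
from math import comb

def input_combinations(n, k):
    """Number of possible input states when k of the n + k injected
    photons are lost at the input: choose which n modes keep a photon.
    (Inferred pattern, matching the Inputs column of table C1.)"""
    return comb(n + k, n)

# Reproduces the Inputs column of table C1:
checks = {(3, 1): 4, (3, 2): 10, (4, 1): 5, (4, 2): 15,
          (5, 1): 6, (5, 2): 21, (6, 1): 7}
assert all(input_combinations(n, k) == v for (n, k), v in checks.items())
```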
We finally report in table C4 the simulated error distance between BS distributions corresponding to different possible input states (defined through the sum of |p(i) − q(i)| over all possible events i, where p(i) and q(i) are the probabilities assigned to event i by the two distributions). In particular, we compare the single n-photon Fock-state distribution to those of some other inputs, respectively with second-order terms, vacuum states and an extra photon that is lost at the detection.
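The error distance between two discrete distributions can be computed directly; a minimal sketch follows, where the 1/2 prefactor is the standard total-variation normalization (assumed here, since the printed formula is not reproduced in this excerpt):

```python
def variational_distance(p, q):
    """Total-variation distance between two discrete distributions.

    p, q: dicts mapping events to probabilities.  The 1/2 prefactor is
    the conventional normalization, assumed here."""
    events = set(p) | set(q)
    return 0.5 * sum(abs(p.get(e, 0.0) - q.get(e, 0.0)) for e in events)
```

With this normalization the distance is 0 for identical distributions and 1 for distributions with disjoint support.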
Table C2. Minimum data set size to validate BS with losses occurring at the output against a sampling with distinguishable photons with a 95% confidence level. The results have been averaged over 100 Haar random 40 × 40 unitaries. Outputs indicates the number of possible n-photon output combinations from which a generic sampled event could originate.
Photons [n] | Losses | Outputs | # samples |
---|---|---|---|
3 | 0 | 1 | 19 ± 2 |
3 | 1 | 38 | 101 ± 14 |
4 | 0 | 1 | 14 ± 2 |
4 | 1 | 37 | 53 ± 6 |
4 | 2 | 703 | 208 ± 24 |
5 | 0 | 1 | 12 ± 2 |
5 | 1 | 36 | 41 ± 4 |
5 | 2 | 66 | 109 ± 12 |
Table C3. Minimum data set size to validate BS with losses occurring at the input, at the output, or both, against a sampling with distinguishable photons with a 95% confidence level. The results have been averaged over 100 Haar random 40 × 40 unitaries. The number of photons indicates how many photons are actually detected.
Modes [m] | Photons [n] | Losses | # samples (in) | # samples (out) | # samples (both) |
---|---|---|---|---|---|
30 | 3 | 1 | 95 ± 12 | 103 ± 13 | 179 ± 35 |
30 | 4 | 1 | 52 ± 6 | 56 ± 7 | 80 ± 10 |
30 | 5 | 2 | 95 ± 11 | 121 ± 13 | — |
30 | 5 | 1 | 38 ± 4 | 43 ± 6 | 55 ± 6 |
30 | 6 | 2 | 69 ± 7 | 93 ± 9 | — |
30 | 6 | 1 | 33 ± 4 | 39 ± 4 | 49 ± 5 |
30 | 7 | 2 | 59 ± 7 | 86 ± 7 | — |
40 | 3 | 1 | 93 ± 12 | 101 ± 14 | 181 ± 31 |
40 | 4 | 1 | 50 ± 6 | 53 ± 6 | 78 ± 8 |
40 | 5 | 2 | 88 ± 9 | 109 ± 12 | — |
40 | 5 | 1 | 37 ± 4 | 41 ± 4 | 52 ± 7 |
Table C4. Variational error distance averaged over 100 samples between the ideal BS distribution with n single input photons in the state 1-⋯-1 and some wrong samplings. The state 2-1-0 (2-1-1) generically represents the 2-1-⋯-1-0 (2-1-⋯-1) state with a pair of photons and n − 2 (n − 1) remaining single photons (e.g. 2-1-1-0 and 2-1-1-1 for n = 4). The error distance with respect to the state 1-1-1-1 tells how far BS with n photons is from BS with n + 1 photons, one of which is lost at the output.
Photons [n] | Modes [m] | 2-1-0 | 2-1-1 | 1-0-1-1 | 1-1-1-1 |
---|---|---|---|---|---|
3 | 15 | 0.473 ± 0.054 | 0.222 ± 0.016 | 0.316 ± 0.026 | 0.337 ± 0.029 |
3 | 25 | 0.460 ± 0.036 | 0.210 ± 0.013 | 0.318 ± 0.018 | 0.326 ± 0.020 |
3 | 40 | 0.466 ± 0.031 | 0.211 ± 0.011 | 0.321 ± 0.018 | 0.327 ± 0.015 |
3 | 50 | 0.444 ± 0.024 | 0.205 ± 0.013 | 0.320 ± 0.015 | 0.315 ± 0.013 |
4 | 15 | 0.482 ± 0.041 | 0.263 ± 0.013 | 0.328 ± 0.022 | 0.349 ± 0.018 |
4 | 20 | 0.467 ± 0.032 | 0.251 ± 0.011 | 0.333 ± 0.019 | 0.346 ± 0.019 |
4 | 30 | 0.464 ± 0.021 | 0.245 ± 0.009 | 0.329 ± 0.014 | 0.332 ± 0.010 |
5 | 20 | 0.473 ± 0.023 | 0.276 ± 0.010 | 0.338 ± 0.015 | 0.353 ± 0.011 |
5 | 25 | 0.475 ± 0.027 | 0.267 ± 0.008 | 0.336 ± 0.012 | 0.350 ± 0.011 |
5 | 30 | 0.461 ± 0.010 | 0.256 ± 0.008 | 0.336 ± 0.012 | — |
6 | 25 | 0.468 ± 0.018 | 0.280 ± 0.007 | 0.341 ± 0.012 | — |
For example, considering the case n = 3, we compare the correct input 1-1-1 (all other m − n entries are 0) with the input states 2-1-0 (the fourth photon is triggered but not injected), 2-1-1 (there is an extra pair but one photon is not detected), 1-1-0-1 (1-1-1-1 is generated and one photon is lost at the input) and 1-1-1-1 (one photon is lost at the detection). We report an overview of the variational error distance for several values of the number of photons n and of modes m. To better understand the significance of the values in the table, we also computed the variational error distance for distributions with a completely wrong input state: always with respect to the 1-1-1 case, for n = 3 photons in m = 50 modes we evaluated the inputs 3-0-0, 0-0-0-3 and 0-0-0-1-1-1.
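Minimum data set sizes like those reported in the tables above can be estimated numerically. The sketch below uses a simple sequential likelihood-ratio test between the two candidate distributions; this is one plausible approach under the stated assumptions (full support of both distributions, true samples drawn from p), not necessarily the validation protocol used to produce the tables.

```python
import math
import random

def samples_to_validate(p, q, confidence=0.95, max_samples=10**6, seed=0):
    """Draw events from the true distribution p and accumulate the
    log-likelihood ratio log(p(x)/q(x)) until the alternative q can be
    rejected at the given confidence level (Wald's sequential test).
    Assumes p and q share support and p != q; illustrative only."""
    rng = random.Random(seed)
    threshold = math.log(confidence / (1.0 - confidence))
    events, weights = list(p), list(p.values())
    llr, count = 0.0, 0
    while llr < threshold and count < max_samples:
        x = rng.choices(events, weights=weights)[0]
        llr += math.log(p[x] / q[x])
        count += 1
    return count
```

Averaging the returned count over many Haar-random unitaries (i.e. many pairs of distributions) would yield figures analogous to the "# samples" columns.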
Appendix D.: Boson sampling with microwave photons
If we call the probability to successfully excite an X-mon qubit and then create a single photon in the coupled cavity through a Jaynes–Cummings interaction, then the probability to lose ni photons at the input is
From this quantity we can evaluate the probability to perform a microwave BS losing photons overall (i.e. these losses can occur either at the input or at the output)
This result assumes that dark counts are negligible, i.e. that the chance to erroneously detect photons in vacuum modes is very low. If this is not the case, the probability for a microwave BS with photons becomes
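The displayed equations of this appendix are not reproduced in this excerpt. As a hedged illustration of the first one, assuming each of the n sources independently delivers its photon with probability p (both the binomial form and the symbol names are our assumptions, suggested by the text but not confirmed by it):

```python
from math import comb

def p_input_losses(n, n_i, p):
    """Probability that exactly n_i of n photons are lost at the input,
    assuming each X-mon source independently creates its photon with
    probability p (hypothetical parameter names):
        P(n_i) = C(n, n_i) * (1 - p)**n_i * p**(n - n_i)."""
    return comb(n, n_i) * (1.0 - p) ** n_i * p ** (n - n_i)
```

Under this model the overall loss probability combining input and output would follow the same binomial structure with an effective per-photon efficiency.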
Footnotes
- 4
Extended Church–Turing Thesis conjectures that a probabilistic Turing machine can efficiently simulate any realistic model of computation, where efficiently means up to polynomial-time reductions.