
Towards quantum supremacy with lossy scattershot boson sampling


Published 4 November 2016 © 2016 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft
Citation: Ludovico Latmiral et al 2016 New J. Phys. 18 113008. DOI: 10.1088/1367-2630/18/11/113008


Abstract

Boson sampling represents a promising approach to obtaining evidence of the supremacy of quantum systems as a resource for the solution of computational problems. The classical hardness of boson sampling has been related to the so-called Permanent-of-Gaussians Conjecture and has been extended to generalizations such as Scattershot Boson Sampling and approximate and lossy sampling under reasonable constraints. However, it is still unclear how demanding these techniques are for a quantum experimental sampler. Starting from a state-of-the-art analysis and taking into account the foreseeable practical limitations, we evaluate and discuss the bound for quantum supremacy for different recently proposed approaches, according to today's best known classical simulators.


Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

The boson sampling (BS) problem is a well-defined example of a dedicated task that cannot be efficiently solved with classical resources (unless the polynomial hierarchy collapses to its third level), though it can be tackled with a quantum approach [1]. More specifically, it consists in sampling from the output probability distribution of n non-interacting bosons evolving through an m × m unitary transformation. Together with applications in quantum simulation [2] and searching problems [3], the aim of a Boson Sampling device is to outperform its classical simulator counterpart. This would provide strong evidence against the Extended Church–Turing Thesis and would represent a demonstration of quantum supremacy (see footnote 4). Following the initial proposal, many experiments have been performed so far using linear optical interferometers [4–7], where indistinguishable photons are sent into an interferometric lattice made up of passive optical elements such as beam splitters and phase shifters. In the perspective of implementing a scalable device, one of the main differences with respect to a universal quantum computer is that only passive operations are permitted before detection. This implies that it is not known whether quantum error correction and fault tolerance can be applied [8–10].

This apparent limitation was already considered in the first proposal [1], where the problem was proved to be classically hard even when the demand is lowered to approximate Boson Sampling, under mild constraints. Many papers have focused on this issue [11] as well as on several possible causes of experimental errors [10, 12–16]. The intensive discussion on this topic has triggered a number of both theoretical [17–23] and experimental [24–27] studies on the validation of a Boson Sampler, i.e. the assessment that the output data sets are not generated by other efficiently computable models. Moreover, an advantageous variant of the problem called Scattershot Boson Sampling has been theoretically proposed [28, 29] and experimentally implemented [30] in order to better exploit the peculiarities of experimental apparatuses based on spontaneous parametric down conversion (SPDC). It was very recently proved that the same hardness result holds when a constant number of photons is lost at the input, and this can presumably be extended to constant losses at the output [31].

In this paper we review the fundamental issue of experimental limitations to understand what requirements make an implementation suitable for reaching quantum supremacy. We define the latter as the regime where the quantum agent samples faster than its classical counterpart. We analyze the state of the art together with all the complexity requirements, reviewing the whole process in light of recent theoretical extensions and experimental proposals [31, 32]. Starting from the already established idea of sampling with constant losses occurring only at the input, we discuss the extension of Boson Sampling to a more general lossy case, where photons may be lost at the input, at the output, or both. This method provides a gain from the experimental perspective both in terms of efficiency and of effectiveness. Indeed, we estimate a new threshold for the achievement of quantum supremacy and show how the application of such generalizations could pave the way towards beating this updated bound.

2. Standard and scattershot boson sampling

BS consists in sampling from the probability distribution over the possible Fock states $| T\rangle $ of n indistinguishable photons distributed over m spatial modes, after their evolution through an m × m interferometer which applies a unitary transformation U to their initial, known, Fock state $| S\rangle $. If si (tj) denotes the occupation number of mode i (j), the transition amplitude from the input to the output configuration is proportional to the permanent of the n × n matrix ${U}_{S,T}$ obtained by repeating si times the ${i}^{\mathrm{th}}$ column and tj times the ${j}^{\mathrm{th}}$ row of U [33]

Equation (1): $\langle T| {U}_{{\rm{F}}}| S\rangle =\mathrm{per}({U}_{S,T}){\left({\prod }_{i}{s}_{i}!\,{\prod }_{j}{t}_{j}!\right)}^{-1/2}$

where ${U}_{{\rm{F}}}$ represents the associated transformation on the Fock space. Given a square matrix ${A}_{n\times n}$, its permanent is defined as $\mathrm{per}(A)={\sum }_{\sigma }{\prod }_{i=1}^{n}{a}_{i,\sigma (i)}$, where the sum extends over all permutations σ of the columns of A. If A is a complex (Haar) unitary the permanent is #P-hard even to approximate [34]. Conversely, for a nonnegative matrix it can be classically approximated in probabilistic polynomial time [35]. The most efficient known way to compute the permanent of an n × n matrix A with elements ${a}_{i,j}$ is currently Glynn's formula [36]

Equation (2): $\mathrm{per}(A)=\dfrac{1}{{2}^{n-1}}{\sum }_{\vec{\delta }}\left({\prod }_{k=1}^{n}{\delta }_{k}\right){\prod }_{j=1}^{n}{\sum }_{i=1}^{n}{\delta }_{i}{a}_{i,j}$

where the outer sum is over all ${2}^{n-1}$ possible n-dimensional vectors $\vec{\delta }=({\delta }_{1}=1,{\delta }_{2},\cdots ,{\delta }_{n})$ with ${\delta }_{i\ne 1}\in \{\pm 1\}$. Processing these vectors in Gray code order (i.e. changing only one bit at a time, so that the update cost per step is reduced to O(n)) allows the total number of operations to scale as $O(n\,{2}^{n})$.
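As an illustration of this scaling, the following Python sketch implements Glynn's formula directly, enumerating the ${2}^{n-1}$ sign vectors without the Gray-code optimization; function and variable names are ours and it is meant only as an illustration, not as an optimized simulator.

```python
import numpy as np

def permanent_glynn(A):
    """Permanent of an n x n matrix via Glynn's formula.

    Plain enumeration of the 2**(n-1) sign vectors (no Gray-code update),
    so the cost per term is O(n^2) here instead of O(n)."""
    A = np.asarray(A)
    n = A.shape[0]
    total = 0.0 + 0.0j
    for k in range(2 ** (n - 1)):
        delta = np.ones(n)
        for i in range(n - 1):
            if (k >> i) & 1:
                delta[i + 1] = -1.0
        # product over columns j of the delta-weighted row sums sum_i delta_i a_ij
        total += np.prod(delta) * np.prod(delta @ A)
    return total / 2 ** (n - 1)

# sanity check on a 2 x 2 matrix: per([[a, b], [c, d]]) = a*d + b*c
print(permanent_glynn([[1, 2], [3, 4]]))   # expected 1*4 + 2*3 = 10
```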

While in the original proposal all the samples are derived from the same input, Scattershot BS consists of injecting a random, though known, input state at each run. To this end, each input mode of a linear interferometer is fed with one output of an SPDC source (see figure 1). Successful detection of the corresponding twin photon heralds the injection of a photon into a specific mode of the device. It has been proved that the Scattershot version of the BS problem maintains at least the same computational complexity as the original problem [28, 29]. Since BS was proved to be hard only in the regime $m\gg {n}^{2}$, attention can be restricted to those $\left(\genfrac{}{}{0em}{}{m}{n}\right)$ outputs with no more than one photon per mode among all possible $\left(\genfrac{}{}{0em}{}{m+n-1}{n}\right)$ output states. This also helps to overcome the experimental difficulty of resolving the number of photons in each output mode.
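The following minimal Python sketch illustrates the scattershot idea under a simplified model in which each source heralds at most one photon with probability roughly g (multiple-pair emission, discussed in section 3, is neglected here); it also prints the counts of collision-free and total output configurations mentioned above.

```python
import numpy as np
from math import comb

rng = np.random.default_rng()

def draw_scattershot_input(m, g):
    """One scattershot shot: each of the m SPDC sources heralds a single
    photon independently with probability ~ g (multi-pair emission neglected
    in this toy model).  Returns the indices of the heralded input modes."""
    return np.flatnonzero(rng.random(m) < g)

m, n = 36, 4
print(draw_scattershot_input(m, g=0.02))
print(comb(m, n))          # collision-free n-photon output configurations
print(comb(m + n - 1, n))  # all possible n-photon output Fock states
```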


Figure 1. (a) Conventional Boson Sampling: the linear transformation is sampled with n sources, injecting a fixed input state for each run. (b) Scattershot Boson Sampling: m SPDC sources are connected in parallel to the m ports of a linear transformation. Each event is sampled from a random (though known) input state.


To give an idea of the computational complexity behind the BS problem, we show in figure 2 the actual time an ordinary PC requires to calculate permanents of various sizes. The time needed to perform an exact classical calculation of a complete BS distribution is further increased by a factor $\left(\genfrac{}{}{0em}{}{m}{n}\right)$. The values for the most powerful existing computer, which is approximately one million times faster, can be obtained by straightforward rescaling. Currently, no approach other than brute-force simulation, that is, calculation of the full distribution and (efficient) sampling of a finite number of events, has been reported in the literature for the classical simulation of BS experiments with a general interferometer.
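A brute-force classical simulation of this kind can be sketched as follows (Python, restricted for simplicity to collision-free inputs and outputs and reusing the permanent_glynn routine above; the Haar-random unitary is drawn with SciPy).

```python
import numpy as np
from itertools import combinations
from scipy.stats import unitary_group

def brute_force_bs(U, input_modes, shots, rng=None):
    """Brute-force BS simulation: one permanent per collision-free output,
    normalization over that subspace, then efficient sampling."""
    rng = rng or np.random.default_rng()
    m, n = U.shape[0], len(input_modes)
    outputs = list(combinations(range(m), n))          # binom(m, n) outputs
    probs = np.array([abs(permanent_glynn(U[np.ix_(out, tuple(input_modes))])) ** 2
                      for out in outputs])
    probs /= probs.sum()                               # renormalize on this subspace
    return [outputs[k] for k in rng.choice(len(outputs), size=shots, p=probs)]

U = unitary_group.rvs(12)                  # Haar-random 12-mode interferometer
print(brute_force_bs(U, input_modes=(0, 1, 2), shots=5))
```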


Figure 2. Computer simulations of the time required to compute permanents of different sizes n on a four-core 2.3 GHz processor. The fitting function is of the form $A\,n\,{2}^{{Bn}}$, with $A=4.47\times {10}^{-8}$ and B = 1.05: the fact that B is slightly greater than one can be explained by the exponential increase in memory resources. The time required for the complete calculation of a boson sampling output probability distribution of n photons in m modes scales as $\left(\genfrac{}{}{0em}{}{m}{n}\right)A\,n\,{2}^{{Bn}}$.


3. Scattershot boson sampling in lossy conditions

We now discuss how a Scattershot BS experiment with optical photons depends on the parameters of the setup. We analyze how errors in the input state preparation and the system's inefficiencies (i.e. losses and failed detections) affect the scalability of the experimental apparatus. We do not consider here issues such as partial photon distinguishability and imperfections in the implementation of the optical network, since under certain conditions they do not affect the scalability of the system. For the input state, the average mutual fidelity of the single photons must satisfy $1-\langle F\rangle \sim O({n}^{-1})$ [15, 16]. Necessary conditions in terms of fidelity ${F}_{\mathrm{el}}=1-O({n}^{-2})$ [11] and sufficient conditions in terms of operator distance $| | A-\tilde{A}| {| }_{\mathrm{op}}=O({n}^{-2}/\mathrm{log}m)$ [37] have also been investigated for the amount of tolerable noise on the network optical elements.

Spontaneous parametric down conversion is the most suitable technique known to date to prepare optical heralded single-photon states. Photon pairs are emitted probabilistically into two spatial modes, and one of the photons is measured to witness the presence of its twin. Note that without post-selecting on the heralded photons, the input state would be Gaussian and thus the distribution would not be hard to sample if detected with a system performing Gaussian measurements [17, 38]. The main drawback of using SPDC sources is the need for a compromise between the generation rate and multiple-pair emission. Indeed, the single-pair probability g has to be kept low so as to suppress the emission of multiple pairs into the same optical mode. Hence, it is essential to consider at least the noise introduced by the second-order terms that characterize double-pair generation, which scale as ∼g2 (see appendix A for additional information). The probability for m SPDC sources in parallel to generate s single pairs and t double pairs then reads

Equation (3)

where $\left(\genfrac{}{}{0em}{}{m}{s,t}\right)$ is the multinomial coefficient $m!/((m-s-t)!s!t!)$. This expression includes all possible combinations $\left(\genfrac{}{}{0em}{}{m}{s,t}\right)$ of s sources generating one pair (gs) and t sources generating two pairs (g2t). We show in figure 3(a) a schematic representation of a Scattershot BS setup in which we depict all the experimental parameters defined below. We denote by ${\eta }_{{\rm{T}}}$ the probability to trigger a single photon, leaving out dark counts. If we assume that we do not employ photon-number-resolving detectors (in accordance with the performance of current technology), the probability that a detector clicks with n input photons is then given by $1-{(1-{\eta }_{{\rm{T}}})}^{n}$. Meanwhile, we call ${p}_{\mathrm{in}}$ the probability that a single photon is correctly injected into the interferometer, while ${\eta }_{{\rm{D}}}$ is the probability that the injected photon does not get lost in the network and is eventually detected at the output.
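A hedged numerical sketch of these quantities is given below; the per-source no-emission probability is taken as $1-g-{g}^{2}$, which is an assumption consistent with truncating the SPDC emission at second order.

```python
from math import factorial

def multinomial(m, s, t):
    """The coefficient m! / ((m - s - t)! s! t!) appearing in equation (3)."""
    return factorial(m) // (factorial(m - s - t) * factorial(s) * factorial(t))

def p_generation(m, s, t, g):
    """Probability that, out of m SPDC sources, s emit one pair and t emit two.
    The per-source probability of emitting nothing is taken as 1 - g - g**2
    (an assumption; higher-order emissions are neglected)."""
    return multinomial(m, s, t) * g**s * g**(2 * t) * (1 - g - g**2)**(m - s - t)

def p_click(eta_T, n_photons):
    """Click probability of a non-number-resolving detector hit by n photons."""
    return 1 - (1 - eta_T)**n_photons

print(p_generation(m=20, s=4, t=1, g=0.02))
print(p_click(0.6, 1), p_click(0.6, 2))    # eta_T and eta_T2 of the text
```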


Figure 3. (a) Schematic view of Scattershot BS, which consists in connecting many parallel SPDC sources to different input modes of the interferometer and post-selecting on the heralded photons. Optical shutters are placed before the input modes to avoid photon injection into wrong ports (i.e. without proper heralding). Losses are divided into ${\eta }_{{\rm{T}}}$ (single-photon triggering probability), ${p}_{\mathrm{in}}$ (injection losses) and ${\eta }_{{\rm{D}}}$ (detection losses). (b) Probability to successfully carry out a correct Scattershot BS experiment, i.e. to sample from the single-photon Fock states corresponding to those heralded by the triggers, expressed by the ratio ${P}_{\mathrm{SBS}}/{P}_{\mathrm{SBS}}^{(\mathrm{fake})}$. The probability decreases as the number of modes and photons increases: blue circles n = 4, black squares n = 6, red triangles n = 8 and green stars n = 10. Experimental parameters are set as: g = 0.02, ${\eta }_{{\rm{T}}}=0.6$, ${p}_{\mathrm{in}}=0.7$ and ${\eta }_{{\rm{D}}}=0.6-0.25\,(m-10)/90$ (the probability for a photon to propagate through the interferometer and be finally detected decreases when we increase the dimension).


In addition to the original scheme for Scattershot Boson Sampling, optical shutters, that is, a set of vacuum stoppers, are placed on each of the m input modes. The shutters are open only in the presence of a click on the corresponding heralding detector, thus ruling out the possibility of injecting photons from unheralded modes. The hypothesis of working in a post-selected regime (with shutters) is helpful in this context: indeed, we are interested only in those events where exactly n photons enter and exit the chip, disregarding every other possible combination. After some combinatorial manipulation, we derive the probability to successfully perform a Scattershot BS experiment with n photons (i.e. an experiment where n triggers click, and n single photons are injected and successfully detected at the output)

Equation (4)

where ${\eta }_{{{\rm{T}}}_{2}}=[1-{(1-{\eta }_{{\rm{T}}})}^{2}]$ is the probability to detect a pair of photons. The outer sums consider all possible Scattershot single-photon and pair generations, while the inner sum constrains the number of correctly injected single photons to n (among these, only n1 derive from single generated pairs).

However, from an experimental point of view we only know that n detectors have clicked both at the input and at the output. Hence, we cannot rule out that this was the result of a fake sampling in which additional photons were injected and some erroneous compensation occurred (e.g. unsuccessful injection of single photons, losses in the interferometer, failures in the output detection). Indeed, the probability to carry out an experiment from a (non-verifiable) incorrect input state is given by

Equation (5)

where we sum the probability to inject a fake state, while triggering and detecting n photons, over all possible generations with t double pairs: ${P}_{\mathrm{trig},\det }^{(\mathrm{fake})}(n| (q-t),t)$ (see appendix B for full details on the calculation).

We plot in figure 3(b) a numerical analysis of the ratio ${P}_{\mathrm{SBS}}/{P}_{\mathrm{SBS}}^{(\mathrm{fake})}$ for different numbers of photons, varying the number of modes and changing the detection probability ${\eta }_{{\rm{D}}}$ accordingly in a feasible way. We also find that the ratio of correctly sampled events to fake ones depends strongly on the number of extra undetected photons. Indeed, this ratio is a decreasing function of g and ${p}_{\mathrm{in}}$, since higher values of these parameters increase the weight of multiphoton emission and injection, and an increasing function of ${\eta }_{{\rm{T}}}$ and ${\eta }_{{\rm{D}}}$.

4. Validation with losses

We discuss here some extensions of the scheme that could boost quantum experiments towards reaching the classical limit. A major contribution in this direction came from Scott Aaronson and Daniel Brod, who generalized BS to the case where a constant number of photons is lost at the input [31], setting the stage for losses at the output as well. Following their proposal, we discuss here the problem of successfully validating these lossy models against the output distribution of distinguishable photons, which represents a significant benchmark to be addressed. Indeed, it is still an open question whether it is possible to discriminate true multiphoton events from data sampled from easy-to-compute distributions. A non-trivial example is given by the output distribution obtained when the same unitary is injected with distinguishable photons. The latter presents rather close similarities with the true BS distribution, and at the same time provides a physically motivated alternative model to be excluded. A possible approach to validate BS data against this alternative hypothesis is a statistical likelihood ratio test [24, 39], which requires calculating the output probability assigned to each sampled event by both distributions (i.e. a permanent). In this case a validation parameter ${ \mathcal V }$ is defined as the product, over a given number of samples, of the ratios between the probabilities assigned to the occurred outcomes by the BS distribution and by the distinguishable one. The certification is considered successful if ${ \mathcal V }$ is greater than one with a 95% confidence level after a fixed number of samples. On the one hand, the amount of data required to validate scales inversely with the number of photons and is constant with respect to the number of modes. This means that with this method there is no exponential overhead in terms of the number of necessary events. On the other hand, the need to evaluate matrix permanents to apply the test implies an exponential (in n) computational overhead.
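Schematically, the test can be sketched as follows; the probability functions p_bs and p_dist are placeholders for the exact BS and distinguishable-photon models, each requiring a permanent-like computation per event.

```python
import numpy as np

def validate_likelihood_ratio(events, p_bs, p_dist):
    """Likelihood-ratio test of [24, 39]: the validation parameter V is the
    product over the observed events of p_BS(event) / p_dist(event); V > 1
    favours genuine boson sampling.  p_bs and p_dist are callables returning
    the probability each model assigns to an outcome (evaluating p_bs costs
    one permanent per event, hence the exponential overhead in n).
    Working in log space avoids under/overflow for large data sets."""
    log_v = sum(np.log(p_bs(e)) - np.log(p_dist(e)) for e in events)
    return log_v > 0

# The 95% confidence level quoted in the text refers to the statistics of this
# decision over repeated data sets of fixed size, not to a single evaluation.
```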

A relevant question is then whether lossy BS with indistinguishable photons can in principle be discriminated from lossy sampling with distinguishable particles. The same likelihood ratio technique can be adopted to validate a sample in which some losses have occurred. Indeed, for each event we apply the protocol by including in each output probability all the cases that could have yielded the given outcome. This calculation is performed both in the BS and in the distinguishable-photon picture. We thus verified that the scaling in n and m obtained in the lossless case is preserved when constant losses at the input are considered, that is, ${n}_{\mathrm{lost}}^{\mathrm{in}}$ constant with respect to n. We will then show in section 5 that constant losses still boost the system performance.

Additionally, we have considered the case where losses happen at the output, after the evolution, and the combined case where they may occur both at the input and at the output. We plot in figure 4(b) the validation of a 30-mode BS device for these lossy cases, verifying the scalability with respect to the number of photons. This result confirms the findings of [31] that losses that are constant with respect to the number of photons should not affect the complexity of the problem. It is thus a relevant basis for the definition of a new problem, lossy Scattershot Boson Sampling, which, as we are going to show, allows the bound for quantum supremacy to be lowered.


Figure 4. Minimum data set size to validate lossy Boson Sampling against a sampling with distinguishable photons with a 95% confidence level. The results have been averaged over 100 Haar-random 30 × 30 unitaries, though they are almost independent of the dimension within the regime $m\gt {n}^{2}$ (see appendix C). (a) Losses occur only at the input: $n+{n}_{\mathrm{lost}}^{\mathrm{in}}$ photons are triggered but only n are injected and finally detected at the output. The number of samples decreases as ${\#}_{\mathrm{samples}}=A+B\,{n}^{-3}$ for fixed ${n}_{\mathrm{lost}}^{\mathrm{in}}$ and increases with the losses (vertically aligned data). (b) Losses occur at the output: n photons are triggered and injected, but only $n-{n}_{\mathrm{lost}}^{\mathrm{out}}$ are detected (${n}_{\mathrm{lost}}^{\mathrm{out}}=1$ blue circles and ${n}_{\mathrm{lost}}^{\mathrm{out}}=2$ black squares). The red triangles represent the case in which one photon can be lost with equal probability either at the input or at the output. The number of samples necessary to validate decreases as ${\#}_{\mathrm{samples}}=A+B\,{\tilde{n}}^{-3}$, where $\tilde{n}$ is the number of detected photons.


5. The bound for quantum supremacy

We can now discuss a threshold for quantum supremacy by bringing together all the considerations and experimental details related to the implementation of Scattershot BS with optical photons presented so far, including losses at the input and at the output. Let us call ${t}_{{\rm{c}}}$ the time required to classically sample a single BS event and ${t}_{{\rm{q}}}$ the time for a successful experimental run; our aim is then to calculate the set of parameters that define the region where ${t}_{{\rm{c}}}/{t}_{{\rm{q}}}\gt 1$. As discussed in section 2, if m is the number of modes, the time required by a classical computer to simulate a single Scattershot BS run with n photons using a brute-force approach (classical computation of the full distribution and efficient sampling of an output event) is given by:

Equation (6)

where ${A}^{\prime }\sim 1.2\times {10}^{-14}\,{\rm{s}}$ is the estimated time scaling for Tianhe-2, the most powerful existing computer, capable of 34 petaFLOPS (a first run with ${A}^{\prime }\sim 6\times {10}^{-14}\,{\rm{s}}$ has recently been reported in [40]). On the other hand, a quantum competitor that arranges m single-photon sources connected in parallel to m inputs could theoretically sample from any event with $n\leqslant m$ photons. However, runs with too many or too few photons will be strongly suppressed: in particular, we will have to wait on average

Equation (7)

to sample an n-photon generalized Scattershot BS run, i.e. either a successful or a lossy experiment. Here, ${F}_{\mathrm{pump}}^{\mathrm{rate}}$ is the rate at which the laser pumps the SPDC sources, ${P}_{\mathrm{SBS}}(m,n)$ is the probability to correctly perform an n-photon BS given m sources, and ${P}_{\mathrm{SBS}}^{\mathrm{lossy}}(m,n,{n}_{\mathrm{lost}})$ reads

Equation (8)

In this expression, we consider all possible cases where q − t single pairs and t double pairs are generated, n trigger detectors successfully click (n1 single-photon inputs with detection probability ${\eta }_{{\rm{T}}}$, $n-{n}_{1}$ two-photon inputs with detection probability ${\eta }_{{{\rm{T}}}_{2}}$), $i={n}_{\mathrm{lost}}^{\mathrm{in}}$ photons are lost at the input (each one with efficiency ${p}_{\mathrm{in}}$), j is the fraction of lost photons coming from correctly generated single pairs, and finally $n-{n}_{\mathrm{lost}}$ photons are detected at the output (each one with detection efficiency ${\eta }_{{\rm{D}}}$).

As we have just shown, ${P}_{\mathrm{SBS}}$ and ${P}_{\mathrm{SBS}}^{\mathrm{lossy}}$ depend on experimental parameters such as the detector efficiencies, the coupling among the various segments of the interferometer and the single-photon sources. If ${n}_{\mathrm{lost}}$ is the difference between the number of heralded and detected photons, the probability of a lossy BS with $n-{n}_{\mathrm{lost}}$ photons will be the sum over all possible cases in which ${n}_{\mathrm{lost}}={n}_{\mathrm{lost}}^{\mathrm{in}}+{n}_{\mathrm{lost}}^{\mathrm{out}}$, where ${n}_{\mathrm{lost}}^{\mathrm{in}}$ (${n}_{\mathrm{lost}}^{\mathrm{out}}$) is the number of photons lost at the input (output). We remark that the different distributions which yield the same outcome in the lossy case present a significant total variation distance with respect to the lossless one (see appendix C). Besides, the time required to classically simulate a lossy Scattershot BS event is a weighted average between the computation of the $\left(\genfrac{}{}{0em}{}{n+{n}_{\mathrm{lost}}^{\mathrm{in}}}{{n}_{\mathrm{lost}}^{\mathrm{in}}}\right)$ n-photon distributions when losses happen at the input and the $\left(\genfrac{}{}{0em}{}{m-n+{n}_{\mathrm{lost}}^{\mathrm{out}}}{{n}_{\mathrm{lost}}^{\mathrm{out}}}\right)$ possible evolutions for an output with $n-{n}_{\mathrm{lost}}^{\mathrm{out}}$ photons. Note however that to simulate an event with $n-{n}_{\mathrm{lost}}^{\mathrm{out}}$ detected photons we still need to evolve an n-photon state through the unitary.
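For orientation, the following sketch evaluates the classical simulation time of equation (6) with the fit parameters of figure 2 and the estimated A' for Tianhe-2, together with the counting factors quoted above for the lossy case; parameter names are ours.

```python
from math import comb

A_PRIME = 1.2e-14   # estimated seconds per elementary term on Tianhe-2 (equation (6))
B = 1.05            # exponent fitted in figure 2

def t_classical(m, n):
    """Brute-force time to compute one full n-photon, m-mode BS distribution."""
    return comb(m, n) * A_PRIME * n * 2 ** (B * n)

def n_input_loss_distributions(n, lost_in):
    """Distributions to average over when lost_in photons are lost at the input."""
    return comb(n + lost_in, lost_in)

def n_output_loss_distributions(m, n, lost_out):
    """Possible evolutions compatible with an output missing lost_out photons."""
    return comb(m - n + lost_out, lost_out)

print(t_classical(80, 8))                       # seconds, near the estimated bound
print(n_input_loss_distributions(8, 1), n_output_loss_distributions(80, 8, 1))
```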

We display in figure 5 the results of the comparison between a classical and a quantum agent for traditional Scattershot BS together with data for a case with constant losses. We vary the number of photons and sources and we consider all the n-photon events consistent with the requirement ${n}^{2}\lt m$. The detection efficiency is assumed to decrease when we increase the dimension of the optical network, since it includes the transmission through the interferometer. Indeed, let us call $(1-{p}_{l}^{\mathrm{dc}})$ the probability to lose a photon in an integrated beam splitter (a directional coupler) with current technology. The overall single-photon transmittivity then scales as ${({p}_{l}^{\mathrm{dc}})}^{m}$ for interferometer architectures where the number of beam-splitter layers scales as m. Assuming a feasible improvement in experimental techniques to come alongside the realization of larger devices, we obtain that the bound for quantum supremacy lies in a regime with ${n}_{\mathrm{th}}\lesssim 8$ photons and ${m}_{\mathrm{th}}\simeq 80$ sources and modes. Despite being experimentally demanding, this generalized Scattershot BS proves to be a step forward compared with the previously estimated regime of 20 photons in 400 modes. In fact, on the one hand it requires a smaller interferometric network, less sensitive to losses, and on the other hand the lower number of photons increases the rate and loosens the requirements on the single-photon and optical-element fidelities [11, 15, 16].


Figure 5. Ratio between the time required to compute a single Scattershot BS event and to perform a single experimental run. Blue circles correspond to correct BS, red triangles identify the one lost photon case (equally likely at input and output) and black squares are the generalized BS (i.e. the sum of both). Experimental parameters are set as: g = 0.02, ${\eta }_{{\rm{T}}}=0.6$, ${p}_{\mathrm{in}}=0.7$ and ${\eta }_{{\rm{D}}}=0.6-0.25\,(m-10)/90$ (the chance that a photon crosses the whole interferometric network decreases linearly with the dimension). The plot is the result of the weighted average over all Scattershot BS events with $3\leqslant n\lt \sqrt{m}$.


6. Boson sampling with quantum dot sources

As we have highlighted in sections 3–5, the main issue of BS with optical photons is the limited scalability of SPDC sources due to the occurrence of multiple-pair events. Recent experiments have tried to overcome this problem by relying on quantum dot sources [41–43], where a train of single-photon pulses is deterministically generated (with up to 99% fidelity) by an InGaAs quantum dot embedded in a micro-cavity and excited by a quasi-resonant laser beam [44, 45]. The emitted pulses are subsequently collected in a single-mode fiber with a total source efficiency η, which depends on the laser pump power due to saturation effects in the quantum dot. The most common approaches to convert a train of single photons equally separated in time into an n single-photon Fock state are passive [42] and active [43] demultiplexing. The former can be achieved by arranging a single array of $n-1$ beam splitters whose reflectivities and transmittivities are tuned such that the probability for each photon to escape the cascade is always $1/n$ (i.e. numbering the beam splitters from 1 to $n-1$, their reflectivities scale as $1/(n-i+1)$). Maintaining the previous notation, the probability to successfully perform a BS experiment with $i\leqslant n$ photons injected from the first i ports of the array reads

Equation (9)

We observe from equation (9) that BS with quantum dots suffers a significant drawback when passive demultiplexing is adopted, since the probability of a successful event scales inversely with the factorial of the number of photons. We plot in figure 6(a) a comparison of the performances of Scattershot BS with SPDC single-photon sources and BS with a quantum dot source and passive demultiplexing. While the advantage is quite remarkable for a small number of photons, the adoption of passive demultiplexing reduces the efficiency for increasing n. A substantial improvement, proportional to ${n}^{i}$, can be achieved by exploiting an efficient active demultiplexing method, thus rendering quantum dot sources a promising platform to reach the quantum supremacy regime (see figure 6(b)). In the latter case the probability of a successful BS run is expected to scale as ${P}_{\mathrm{QD}}^{\mathrm{BS}}={\eta }^{i}{\eta }_{\mathrm{dm}}^{i}{p}_{\mathrm{in}}^{i}{\eta }_{{\rm{D}}}^{i}$, where ${\eta }_{\mathrm{dm}}$ is the efficiency of the demultiplexing procedure (see [46] for a technique with heralded photons).
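As a rough numerical illustration, the scaling quoted above for active demultiplexing can be evaluated as follows; keeping ${\eta }_{{\rm{D}}}$ fixed is a simplification of ours, since in figure 6 it decreases with the number of modes.

```python
def p_qd_active(i, eta=0.35, eta_dm=0.7, p_in=0.7, eta_D=0.6):
    """Success probability of a BS run with i photons from a quantum dot source
    and active demultiplexing, P = (eta * eta_dm * p_in * eta_D)**i.
    Parameter values follow figure 6; eta_D is kept fixed here for simplicity."""
    return (eta * eta_dm * p_in * eta_D) ** i

# Passive demultiplexing is further suppressed by a factor growing factorially
# with the number of photons (see the discussion of equation (9)), so the
# active scheme dominates rapidly as i grows.
for i in (3, 5, 7):
    print(i, p_qd_active(i))
```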


Figure 6. Performance of the quantum dot source case considering (a) a passive demultiplexing approach and (b) an active one, in comparison with heralded Scattershot Boson Sampling with SPDC sources (red points). (a) Blue points: passive demultiplexing. (b) Blue points: lossless active demultiplexing. Purple points: lossy active demultiplexing with efficiency ${\eta }_{\mathrm{dm}}=0.7$. Square, circle and spade points depict a correct BS experiment, while triangle, star and rhombus points stand for the lossy case where a single photon is lost either at the input or at the output. Experimental parameters are set as: $\eta =0.35$, ${\eta }_{{\rm{T}}}=0.6$, ${p}_{\mathrm{in}}=0.7$ and ${\eta }_{{\rm{D}}}=0.6-0.25\,(m-10)/90$. The number of photons for the quantum dot case is increased in steps so as to fulfil the complexity requirement $m\gt {n}^{2}$, thus giving rise to the jumps appearing in the plot.


7. Boson sampling with microwave photons

Now that we have addressed the strengths and weaknesses of Scattershot BS with SPDC and quantum dot sources of optical photons, we discuss the prospects offered by a completely different approach [32]. BS with microwave photons is a new experimental proposal that meets all the requirements (e.g. Fock states with indistinguishable photons at the input, Haar-random unitary transformation and entangled Fock states at the output) while implementing them in a different way. More specifically, n photons are deterministically generated by exciting n X-mon qubits among a chain composed of m qubits (potentially at a very high repetition rate $\sim {10}^{2}\,\mathrm{MHz}$), each coupled both to a storage and to a measurement resonator through a Jaynes–Cummings interaction. By tuning their frequencies through an external magnetic field, the n selected qubits are set in resonance with the storage resonator, thus creating n single-photon Fock states with high efficiency [47, 48]. The interferometric network of beam splitters and phase shifters implementing the m × m unitary transformation is replaced by the chain of time-dependently interacting cavities. A superconducting ring with a Josephson junction is used to tune the coupling between resonators i and $i+1$ and acts as a beam splitter [49] described by the interaction Hamiltonian ${H}_{\mathrm{int}}={g}_{i,i+1}({a}_{i}{a}_{i+1}^{\dagger }+h.c.)$, with ${g}_{i,i+1}\sim 50\,\mathrm{MHz}$. Lasting only ${t}_{\mathrm{bs}}\sim 0.02\,\mu s$, this interaction can be turned on and off very rapidly (on the order of nanoseconds), and high on/off coupling ratios of $O({10}^{4})$ have already been reported [50, 51]. Meanwhile, a phase-shift operation can be realized by exciting the qubit associated with the cavity and pushing it off resonance for a time ${t}_{\mathrm{ps}}$, inducing a frequency shift among the resonators that after an appropriate time is turned into a phase shift. By subsequently applying beam-splitter and phase-shifter operations, an m × m unitary transformation can be implemented in O(m) steps [52], each requiring ${t}_{\mathrm{step}}={t}_{\mathrm{ps}}+{t}_{\mathrm{bs}}\sim 0.3\,\mu s$. Considering that the typical cavity decoherence time is around $\tau =1/\kappa \sim 100\,\mu s$ [53, 54], this potentially permits more than one hundred steps to be performed (assuming a sufficiently high fidelity for each operation). Eventually, the input preparation procedure is inverted to measure the output state: the qubits are put in resonance with the cavities and, through a Jaynes–Cummings interaction, those whose corresponding resonators contain a photon are naturally excited [47]. A non-demolition measurement of the qubit state can finally be performed with more than 90% efficiency by coupling it with a low-quality cavity [55]. Even though this readout does not resolve photon number, in the regime $m=O({n}^{2})$ (where BS was proved to be hard) attention can be restricted to those outcomes with 0 or 1 photon in each output mode due to the Boson Birthday Paradox [1, 56]. Furthermore, superconducting qubits are a promising tool towards generalizations employing photon-number-resolving detectors [57].

Using feasible experimental parameters provided in [32], we can evaluate the threshold ${t}_{{\rm{c}}}/{t}_{{\rm{q}}}\gt 1$ for a generalized version of BS with microwave photons. The calculations are similar to the ones for the case of optical photons, without the issue of double-pair generation, but with the constraint that the experimental rate is bounded by ${(m\times 0.3\,\mu s)}^{-1}$, scaling inversely with the number of steps (modes) m. We refer to appendix D for the complete expressions, which also take into account losses and dark counts (the latter have been proved to be theoretically equivalent to losses at the input [31]). We present a summary of the results in figure 7, where the ratio ${t}_{{\rm{c}}}/{t}_{{\rm{q}}}$ is plotted as a function of the number of modes m, i.e. the dimension of the unitary. In this case we keep the detection efficiency ${\eta }_{{\rm{D}}}$ constant, since we expect negligible photon losses for times well below the cavity decoherence time. The final theoretical result leads to a significant improvement in efficiency and an additional step towards quantum supremacy, which could be achieved with a 7-photon experiment in 50 modes.
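The timing budget implied by these numbers can be summarized in a short sketch; the values are taken from the text, and state preparation and readout times are neglected.

```python
T_STEP = 0.3e-6      # seconds per beam-splitter + phase-shifter step (section 7)
TAU_CAVITY = 100e-6  # typical cavity decoherence time 1/kappa [53, 54]

def microwave_bs_budget(m):
    """Rough timing budget for an m-mode microwave BS run: O(m) steps of
    duration T_STEP, compared with the cavity decoherence time, and the
    resulting bound ~ 1 / (m * T_STEP) on the experimental rate."""
    t_unitary = m * T_STEP                      # time to implement the unitary
    max_rate = 1.0 / t_unitary                  # upper bound on the rate (Hz)
    steps_within_coherence = TAU_CAVITY / T_STEP
    return t_unitary, max_rate, steps_within_coherence

print(microwave_bs_budget(50))   # ~15 us per run, ~6.7e4 runs/s, ~330 steps
```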


Figure 7. Expected quantum supremacy bound for Boson Sampling with microwave photons. Red squares represent the case of a correct BS experiment, purple triangles consider the ratio ${t}_{{\rm{c}}}/{t}_{{\rm{q}}}$ when one photon is lost either at the input or at the output and there is a certain non-zero dark count probability (${p}_{\mathrm{dark}}$), i.e. when photons are erroneously detected in vacuum modes. Blue dots represent the sum of the two cases. Experimental parameters are set as: ${p}_{\mathrm{dark}}=0.1$, ${\eta }_{{\rm{D}}}=0.7$, ${p}_{\mathrm{in}}=0.9$ and ${t}_{\mathrm{step}}=0.3\,\mu s$. The number of photons is increased in steps so as to fulfil the complexity requirement $m\gt {n}^{2}$, thus giving rise to the jumps appearing in the plot.


8. Conclusions

We have reviewed the problem of BS together with its most recent extensions and variations: Scattershot and lossy sampling and the proposal to adopt photons in the microwave spectrum. In particular, we have highlighted the strengths and weaknesses of the model under reasonable experimental assumptions in order to understand if and how BS can be an effective approach to assess quantum supremacy. Using SPDC sources for single-photon generation has the unavoidable drawback of multiple-pair emission, hence leading to experimentally inaccurate results that worsen as the number of photons increases. Besides, not only is the generation of an initial state with large n hard to achieve, but it also requires higher mutual fidelities among the particles and higher accuracy of the optical elements to preserve scalability. Recent experimental results on quantum dot sources [41–43] can open the way to new perspectives in the implementation of Boson Sampling with large photon numbers, due to their high generation efficiency and high photon indistinguishability.

Performing a state-of-the-art analysis, we have shown that the threshold ${t}_{{\rm{c}}}\gt {t}_{{\rm{q}}}$, i.e. the regime where the quantum agent samples faster (in time ${t}_{{\rm{q}}}$) than its classical counterpart (in time ${t}_{{\rm{c}}}$), can be achieved with a Scattershot BS experiment with ${m}_{\mathrm{th}}\simeq 80$ SPDC sources, far fewer than the original regime of ${n}_{\mathrm{th}}=20$ photons and ${m}_{\mathrm{th}}=400$ modes. While on the one hand the permanent guarantees the complexity of the problem, on the other hand it turns out to be much more convenient to increase the sampling rate rather than to focus on the size of the permanent. Indeed, a crucial role in reaching the ${t}_{{\rm{c}}}\gt {t}_{{\rm{q}}}$ regime is played by the availability of many sources in parallel, together with the possibility of sampling each time from a different random input and, most of all, the inclusion of events with constant losses of photons. The same analysis conducted with quantum dot sources and an active demultiplexing approach shows that the bound can theoretically be attained with ${n}_{\mathrm{th}}\simeq 7$ photons and ${m}_{\mathrm{th}}\simeq 50$ modes.

Aiming to maximize the efficiency and accuracy of the protocol, we have finally analyzed a recent proposal by Peropadre et al [32] consisting in the adoption of on-demand microwave photons. This approach overcomes the problem of erroneous input states and provides a remarkable decrease in losses, thus enhancing the experimental rate. Since in this case the unitary is implemented in time (it is decomposed into a number of steps performed one after the other), the sampling rate scales inversely with the dimension. However, this does not constitute a relevant issue considering that the bound for quantum supremacy is lowered to ${n}_{\mathrm{th}}\simeq 7$ photons in ${m}_{\mathrm{th}}\simeq 50$ modes, which requires a running time appreciably below the decoherence time of the system.

Acknowledgments

The authors acknowledge E F Galvao, D J Brod and J Huh for very useful discussions on the subject of this paper. This work was supported by the H2020-FETPROACT-2014 Grant QUCHIP (Quantum Simulation on a Photonic Chip; Grant Agreement No. 641039, www.quchip.eu).

Appendix A.: SPDC sources

The two-mode SPDC state reads $| {{\rm{\Psi }}}^{\mathrm{SPDC}}\rangle ={\sum }_{s}{\lambda }_{s}\ | s,s\rangle $, the related photon number probability distribution being

Equation (A.1): ${P}^{\mathrm{SPDC}}(s)=| {\lambda }_{s}{| }^{2}={\tanh }^{2s}\chi /{\cosh }^{2}\chi $

where s is the number of photons per mode and χ is the squeezing parameter. Since quantum supremacy is expected to require quite a large number of possible input states and sources, it is mandatory to evaluate the contribution of second-order generation terms. If we define $g={P}^{\mathrm{SPDC}}(1)$ as the probability of generating a single pair, then from equation (A.1) the second-order term scales as ${P}^{\mathrm{SPDC}}(2)\sim {g}^{2}$.
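For reference, the following sketch evaluates this distribution numerically, assuming the standard two-mode squeezed-vacuum photon-number statistics written above; the value of χ is chosen only to reproduce a single-pair probability g close to the 0.02 used in the main text.

```python
import numpy as np

def p_spdc(s, chi):
    """Photon-number distribution of a two-mode squeezed vacuum (SPDC source):
    P(s) = tanh(chi)**(2*s) / cosh(chi)**2, with s pairs per pulse."""
    return np.tanh(chi) ** (2 * s) / np.cosh(chi) ** 2

chi = 0.142                        # chosen so that g = P(1) is close to 0.02
g = p_spdc(1, chi)
print(g, p_spdc(2, chi), g ** 2)   # P(2) is indeed of order g**2
```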

Appendix B.: Boson sampling from erroneous input state

Given an apparently correct sample with n photons triggered at the input and detected at the output, this could be the result of the injection of pairs of photons such that, even though the total number of particles is n, the effective input state is different from the expected one. The probability of injecting n1 single photons while triggering n entries turns out to be

Equation (B.1)

where $w=z+y$ is the total number of injected single photons, z and y being the fractions coming respectively from single photons and from pairs impinging on the input ports of the interferometer. The outer sum has the constraints $x+y+z\leqslant n$ (otherwise we would trigger $n^{\prime} \geqslant n$ photons at the input) and $2x+y+z\geqslant n$ (otherwise we would detect fewer than n photons at the end of the chip), where x is the number of erroneously injected pairs.

In particular, if we generate s single photons and t pairs, the probability to inject a fake state ($0\leqslant {n}_{1}\lt n$ single photons and $n-{n}_{1}$ pairs), despite having heralded an apparently correct one, and detect n photons at the output is given by

Equation (B.2)

Here the sum considers all possible cases with at least one extra injected photon coming from a double pair ($n-{n}_{1}$), where n trigger detectors successfully click (n1 with single-photon inputs and $n-{n}_{1}$ with two-photon inputs, with detection efficiencies ${\eta }_{{\rm{T}}}$ and ${\eta }_{{{\rm{T}}}_{2}}$ respectively), and n photons are detected at the output (efficiency ${\eta }_{{\rm{D}}}$). We remark that we did not distinguish those events in which a greater number of photons is injected into the interferometer, some of them get lost within the chip and finally only n are successfully detected. Indeed, we included all kinds of losses (i.e. those in the interferometer and those at the detection stage) in the parameter ${\eta }_{{\rm{D}}}$. The effect of losses during the unitary evolution on the BS distribution would be very subtle to estimate, and might be addressed in future works; however, it does not affect our aim of finding a lower bound for quantum supremacy.

Appendix C.: Validation data

We report hereafter additional simulated data on the validation of m = 40 mode lossy BS against the distinguishable sampler in the case of losses at the input, at the output and both (see tables C1, C2 and C3 respectively). We finally sum up the most relevant cases. The probability assigned to each event of a lossy BS distribution is obtained by averaging over all possible samplings that could have led to the given lossy outcome. More specifically, when ${n}_{\mathrm{lost}}^{\mathrm{in}}$ photons are lost at the input and n are detected at the output we have to average over $\left(\genfrac{}{}{0em}{}{n+{n}_{\mathrm{lost}}^{\mathrm{in}}}{{n}_{\mathrm{lost}}^{\mathrm{in}}}\right)$ distributions, each corresponding to a possible input state with n photons. The same applies when we know that losses occur before the output detection: this time we have $\left(\genfrac{}{}{0em}{}{m-n+{n}_{\mathrm{lost}}^{\mathrm{out}}}{{n}_{\mathrm{lost}}^{\mathrm{out}}}\right)$ possible distributions to weight. Conversely, if we assume that photons can be lost at the input and at the output we have to average over all events such that ${n}_{\mathrm{lost}}^{\mathrm{in}}+{n}_{\mathrm{lost}}^{\mathrm{out}}={n}_{\mathrm{lost}}$. This means that for every value of ${n}_{\mathrm{lost}}^{\mathrm{in}}$ we average over the combinations of possible inputs to reconstruct the theoretical output distribution, from which in turn we deduce the probability for ${n}_{\mathrm{her}}-{n}_{\mathrm{lost}}$ photon events (where ${n}_{\mathrm{her}}=n+{n}_{\mathrm{lost}}^{\mathrm{in}}$ is the number of heralded input photons).

Table C1.  Minimum data set size to validate BS with losses occurring at the input against a sampling with distinguishable photons with a 95% confidence level. The results have been averaged over 100 Haar-random 40 × 40 unitaries. Inputs indicates the number of possible input combinations, given the number of injected photons and losses.

Photons [n] Losses $[{n}_{\mathrm{lost}}^{\mathrm{in}}]$ Inputs # samples
3 0 1 19 ± 3
3 1 4 50 ± 6
3 2 10 88 ± 9
4 0 1 14 ± 2
4 1 5 37 ± 4
4 2 15 64 ± 6
5 0 1 12 ± 2
5 1 6 31 ± 3
5 2 21 54 ± 6
6 0 1 11 ± 1
6 1 7 29 ± 3

We finally report in table C4 the simulated error distance between BS distributions corresponding to different possible input states (${\nu }_{\mathrm{err}}=1/2{\sum }_{i}| p(i)-{p}^{\prime }(i)| $, where p(i) and ${p}^{\prime }(i)$ are the probabilities assigned to event i by the two distributions and the sum runs over all possible events). In particular, we compare the single n-photon Fock state distributions to those of some other inputs, respectively with second-order terms, vacuum states and an extra photon that is lost at the detection.

Table C2.  Minimum data set size to validate BS with losses occurring at the output against a sampling with distinguishable photons with a 95% confidence level. The results have been averaged over 100 Haar-random 40 × 40 unitaries. Outputs indicates the number of possible n-photon output combinations from which a generic sampled event could have originated.

Photons [n] Losses $[{n}_{\mathrm{lost}}^{\mathrm{out}}]$ Outputs # samples
3 0 1 19 ± 2
3 1 38 101 ± 14
4 0 1 14 ± 2
4 1 37 53 ± 6
4 2 703 208 ± 24
5 0 1 12 ± 2
5 1 36 41 ± 4
5 2 66 109 ± 12

Table C3.  Minimum data set size to validate BS with losses occurring at the input, at the output, or both against a sampling with distinguishable photons with a 95% confidence level. The results have been averaged over 100 Haar-random 40 × 40 unitaries. The number of photons indicates how many photons are actually detected.

Modes [m] Photons [n] Losses $[{n}_{\mathrm{lost}}]$ # samples (in) # samples (out) # samples (both)
30 3 1 95 ± 12 103 ± 13 179 ± 35
  4 1 52 ± 6 56 ± 7 80 ± 10
  5 2 95 ± 11 121 ± 13
  5 1 38 ± 4 43 ± 6 55 ± 6
  6 2 69 ± 7 93 ± 9
  6 1 33 ± 4 39 ± 4 49 ± 5
  7 2 59 ± 7 86 ± 7
40 3 1 93 ± 12 101 ± 14 181 ± 31
  4 1 50 ± 6 53 ± 6 78 ± 8
  5 2 88 ± 9 109 ± 12
  5 1 37 ± 4 41 ± 4 52 ± 7

Table C4.  Variational error distance averaged over 100 samples between the ideal BS distribution with n single input photons in the state 1-⋯-1 and some wrong samplings. The state 2-1-0 (2-1-1) generically represents the 2-1-⋯-1-0 (2-1-⋯-1) state with a couple of photons and $n-2$ ($n-1$) remaining single photons (e.g. 2-1-1-0 and 2-1-1-1 for n = 4). The error distance with respect to the state 1-1-1-1 tells how far BS with n photons is from BS with $n+1$ photons, one of which is lost at the output.

Photons [n] Modes [m] 2-1-0 2-1-1 1-0-1-1 1-1-1-1
3 15 0.473 ± 0.054 0.222 ± 0.016 0.316 ± 0.026 0.337 ± 0.029
  25 0.460 ± 0.036 0.210 ± 0.013 0.318 ± 0.018 0.326 ± 0.020
  40 0.466 ± 0.031 0.211 ± 0.011 0.321 ± 0.018 0.327 ± 0.015
  50 0.444 ± 0.024 0.205 ± 0.013 0.320 ± 0.015 0.315 ± 0.013
4 15 0.482 ± 0.041 0.263 ± 0.013 0.328 ± 0.022 0.349 ± 0.018
  20 0.467 ± 0.032 0.251 ± 0.011 0.333 ± 0.019 0.346 ± 0.019
  30 0.464 ± 0.021 0.245 ± 0.009 0.329 ± 0.014 0.332 ± 0.010
5 20 0.473 ± 0.023 0.276 ± 0.010 0.338 ± 0.015 0.353 ± 0.011
  25 0.475 ± 0.027 0.267 ± 0.008 0.336 ± 0.012 0.350 ± 0.011
  30 0.461 ± 0.010 0.256 ± 0.008 0.336 ± 0.012
6 25 0.468 ± 0.018 0.280 ± 0.007 0.341 ± 0.012

For example, considering the case n = 3, we compare the correct input 1-1-1 (all other $m-n$ entries are 0) with the input states 2-1-0 (the fourth photon is triggered but not injected), 2-1-1 (there is an extra pair but one photon is not detected), 1-1-0-1 (1-1-1-1 is generated and one photon is lost at the input) and 1-1-1-1 (one photon is lost at the detection). We report an overview of the variational error distance for several values of the number of photons n and modes m. To better understand the significance of the values in the table, we computed the variational error distance for distributions with a completely wrong input state. Again with respect to the 1-1-1 case, we obtained for n = 3 photons in m = 50 modes: 3-0-0 $\to \,(0.593\pm 0.050)$, 0-0-0-3 $\to \,(0.699\pm 0.050)$ and 0-0-0-1-1-1 $\to \,(0.630\pm 0.035)$.
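For completeness, the variational error distance used in table C4 can be computed as follows; this is a minimal sketch operating on two arrays of output probabilities defined over the same set of events.

```python
import numpy as np

def variational_error_distance(p, q):
    """nu_err = 1/2 * sum_i |p(i) - q(i)| between two output distributions
    defined over the same set of events, as used in table C4."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

# example with two toy three-outcome distributions
print(variational_error_distance([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))  # 0.1
```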

Appendix D.: Boson sampling with microwave photons

If we call ${p}_{\mathrm{in}}$ the probability to successfully excite an X-mon qubit and then create a single photon in the coupled cavity through a Jaynes–Cummings interaction, then the probability to lose ni photons at the input is

Equation (D.1)

From this quantity we can evaluate the probability to perform a microwave BS run losing ${n}_{\mathrm{lost}}$ photons overall (i.e. these losses can occur either at the input or at the output)

Equation (D.2)

This result assumes that dark counts are negligible, i.e. that the probability to erroneously detect photons in vacuum modes (${p}_{{\rm{d}}}$) is very low. If this is not the case, the probability for a microwave BS with ${n}_{\mathrm{lost}}$ lost photons becomes

Equation (D.3)

Footnotes

  • (4) The Extended Church–Turing Thesis conjectures that a probabilistic Turing machine can efficiently simulate any realistic model of computation, where 'efficiently' means up to polynomial-time reductions.
