Report on Progress

Randomness in quantum mechanics: philosophy, physics and technology


Published 6 November 2017 © 2017 IOP Publishing Ltd
Citation: Manabendra Nath Bera et al 2017 Rep. Prog. Phys. 80 124001. DOI: 10.1088/1361-6633/aa8731


Abstract

This progress report covers recent developments in the area of quantum randomness, an extraordinarily interdisciplinary area that belongs not only to physics, but also to philosophy, mathematics, computer science and technology. For this reason the article contains three parts devoted to different aspects of quantum randomness, each directed at, although not restricted to, a different audience: a philosophical part, a physical part and a technological part. The article is written at an elementary level, combining simple and non-technical descriptions with a concise review of more advanced results, so that readers of various backgrounds will be able to benefit from reading it.


1. Introduction

Randomness is a very important concept with many applications in modern science and technology. At the same time it is quite controversial and may have different meanings depending on the field of science it concerns. In this short introduction to our report we explain, in a very general manner, why randomness plays such an important role in various branches of science and technology. In particular we elaborate on the concept of 'apparent randomness', to contrast it with what we understand under the name 'intrinsic randomness'.

Apparent randomness, as an element of a more efficient description of nature, is used in practically all sciences, and in physics in particular, see Penrose (1979), Schrödinger (1989), Khinchin (2014), Tolman (2010) and Halmos (2013). This kind of randomness expresses our lack of full knowledge of the considered system. A paradigmatic example concerns the classical mechanics of many-body systems, which are simply too complex to be treated in all detail. The complexity of the dynamics of systems consisting of many interacting constituents makes predictions, even assuming perfect knowledge of the initial conditions, practically impossible. This fact motivated the development of statistical mechanics and thermodynamics. Descriptions that employ probability distributions and statistical ensembles, or the even more reduced thermodynamic description, are more adequate and useful. Another paradigmatic example concerns chaotic systems. In deterministic chaos theory, see Bricmont (1995), Ivancevic and Ivancevic (2008) and Gleick (2008), even for small systems involving only a few degrees of freedom, the inherent lack of precision in our knowledge of the initial conditions makes long-time predictions impossible. This is due to an exponential separation of trajectories, whereby small differences at the start lead to large differences at the end. Here too, intrinsic ergodicity allows one to use the tools of statistical ensembles.
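
To make this sensitivity to initial conditions concrete, here is a minimal numerical sketch (our own illustration, not part of the original report) using the chaotic logistic map $x\mapsto 4x(1-x)$ as a stand-in for a chaotic system: two trajectories starting $10^{-12}$ apart become macroscopically different after a few dozen iterations, which is why only a statistical description remains useful in practice.

def logistic_trajectory(x0, steps):
    # iterate the fully chaotic logistic map x -> 4 x (1 - x)
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.2 + 1e-12, 50)   # almost identical initial condition

for t in (0, 10, 20, 30, 40, 50):
    print(t, abs(a[t] - b[t]))             # separation grows roughly exponentially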

In quantum mechanics apparent (a.k.a. epistemic) randomness also plays an important role and reflects our lack of full knowledge of the state of a system. A state of a system in quantum mechanics corresponds to a vector in a Hilbert space, and is described by the projector onto that vector. Such states and the corresponding rank-one projectors are termed pure states. In general, we never know the actual (pure) state of the system precisely. Such a situation may be caused by our own imperfection in determining the state in question; it may also arise from measurements that result in statistical ensembles of many pure states. The appropriate way of describing such states is a density matrix, i.e. a probabilistic mixture of the projectors onto the pure states; the pure states are simply represented by those density matrices that are rank-one projectors. In fact, describing quantum systems about which we lack full knowledge of the state constitutes the main reason for the introduction of the density matrix formalism (Cohen-Tannoudji et al 1991, Messiah 2014).
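
Explicitly (standard textbook material, recalled here for completeness), an ensemble in which the pure states $\vert \psi_k\rangle$ occur with probabilities pk is described by the density matrix $\rho=\sum_k p_k \vert \psi_k\rangle\langle\psi_k\vert$ , with $p_k\geqslant 0$ and $\sum_k p_k=1$ ; ρ represents a pure state precisely when it is a rank-one projector, $\rho=\vert \psi\rangle\langle\psi\vert$ , i.e. when $\rho^2=\rho$ .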

However, in quantum physics there is a new form of randomness, which is rather intrinsic or inherent to the theory. Namely, even if the state of the system is pure and we know it exactly, the predictions of quantum mechanics could be intrinsically probabilistic and random! Accepting quantum mechanics, that is assuming that the previous sentence is true, we should consequently accept that quantum mechanics could be intrinsically random. We adopt this position in this paper.

To summarize the above discussion let us define:

Def. 1—Apparent (a.k.a. epistemic) randomness.

Apparent randomness is randomness that results exclusively from a lack of full knowledge about the state of the system in consideration. Had we known the initial state of the system exactly, we could have predicted its future evolution exactly. Probabilities and stochastic processes are used here as an efficient tool to describe at least partial knowledge about the system and its features. Apparent randomness implies and requires the existence of a so-called underlying hidden variable theory. It is the lack of knowledge of the hidden variables that causes apparent randomness. Had we known them, we could have made predictions with certainty.

Def. 2—Intrinsic (a.k.a. inherent or ontic) randomness.

Intrinsic randomness is randomness that persists even if we have full knowledge about the state of the system in consideration. Even exact knowledge of the initial state does not allow us to predict the future evolution exactly: we can only make probabilistic predictions. Probabilities and stochastic processes are used here as a necessary and inevitable tool to describe our knowledge about the system and its behavior. Of course, intrinsic randomness might coexist with the apparent one—for instance, in quantum mechanics, when we have only partial knowledge about the state of the system, expressed by the density matrix, both causes of randomness are present. Moreover, intrinsic randomness does not exclude the existence of effective hidden variable theories that could allow for partial predictions of the evolution of the system with certainty. As we shall see, in quantum mechanics of composite systems, effective local hidden variable theories in general cannot be used to make predictions about local measurements, and the local outcomes are intrinsically random.

Having defined the main concepts, we present here short summaries of the subsequent parts of the report, where our focus will be mostly on quantum randomness:

  • Quantum randomness and philosophy. Inquiries concerning the nature of randomness have accompanied European philosophy from its beginnings. We give a short review of classical philosophical attitudes to the problem and their motivations. Our aim is to relate them to contemporary physics and science in general. This is intimately connected to a discussion of various concepts of determinism and its understanding in classical mechanics, commonly treated as an exemplary deterministic theory, where chance has only an epistemic status and leaves room for indeterminism only in the form of a statistical-physics description of the world. In this context, we briefly discuss another kind of indeterminism in classical mechanics, caused by the non-uniqueness of solutions of Newton's equations and requiring the theory to be supplemented with additional, unknown laws. We argue that this situation shares similarities with that of quantum mechanics, where quantum measurement theory à la von Neumann provides such laws. This brings us to the heart of the problem of intrinsic randomness of quantum mechanics from the philosophical point of view. We discuss it in two quantum aspects: contextuality and nonlocality, paying special attention to the question: can quantum randomness be certified in any sense?
  • Quantum randomness and physics. Unlike in classical mechanics, randomness is considered to be inherent in the quantum domain. From a scientific point of view, we examine arguments as to whether this randomness is intrinsic or not. We start by briefly reviewing the standard approach to randomness in quantum theory. We shortly recall the postulates of quantum mechanics and the relation between quantum measurement theory and randomness. Nonlocality, as an important ingredient of the contemporary physical approach to randomness, is then discussed. We proceed with a more recent approach to randomness generation based on the so-called 'device independent' scenario, in which one talks exclusively about correlations and probabilities to characterize randomness and nonlocality. We then describe several problems of classical computer science that have recently found elegant quantum mechanical solutions, employing the nonlocality of quantum mechanics. Before doing this we devote a section to a contemporary information-theoretic approach to the characterization of randomness and random bit sources. In continuation, we discuss protocols for Bell-certified randomness generation (i.e. protocols based on Bell inequalities to generate and certify randomness in the device independent scenario), such as quantum randomness expansion (i.e. generating a larger number of random bits from a shorter seed of random bits) and quantum randomness amplification (i.e. transforming weakly random sequences of, say, bits into 'perfectly' random ones). It should be noted that certification, expansion and amplification of randomness are classically impossible, or require extra assumptions in comparison with what quantum mechanics offers. Our goal is to review the recent state-of-the-art results in this area, and their relations and applications to device independent quantum secret key distribution. We also review briefly and analyze critically the 'special status' of quantum mechanics among the so-called no-signaling theories. These are theories in which the choice of observable measured by, say, Bob does not influence the outcomes of measurements of Alice and all other parties (for a precise definition in terms of conditional probabilities for an arbitrary number of parties, observables and outcomes see equation (9)). While quantum mechanical correlations fulfill the no-signaling conditions, correlations resulting from no-signaling theories form a strictly larger set. No-signaling correlations were first considered in relation to quantum mechanical ones by Popescu and Rohrlich (1992). In many situations, it is the no-signaling assumption and Bell nonlocality that permit certification of randomness and perhaps novel possibilities of achieving security in communication protocols.
  • Quantum randomness and technology. We start this part by briefly reminding the readers why random numbers are useful in technology and what they are used for. The drawbacks of classical random number generation, based on classical computer science ideas, are also mentioned. We describe proof-of-principle experiments in which certified randomness was generated using nonlocality. We then focus on describing existing 'conventional' methods of quantum random number generation and certification. We also discuss the current status of detecting non-locality and Bell violations. We then review the current status of commercial implementations of quantum protocols for random number generation, and the first steps toward device independent, or at least partially device independent, implementations. A complementary review of quantum random number generators may be found in Herrero-Collantes and Garcia-Escartin (2017).
  • Quantum randomness and the future. In the conclusions we outline new and interesting perspectives for the foundations of quantum mechanics that are stimulated by the studies of quantum randomness: What is the relation between randomness, entanglement and non-locality? What are the ultimate limits for randomness generation using quantum resources? How does quantum physics compare to other theories in terms of randomness generation? What is the maximum amount of randomness that can be generated in a theory restricted only by the no-signaling principle?

Randomness in physics has been a subject of extensive studies, and our report has neither the ambition nor the objective to cover all existing literature on this subject. We stress that there are of course various highly recommended reviews on randomness in physics, such as the excellent articles by Bricmont (1995) or the recent book by Vallejo and Sanjuan (2017). The main novelty of our report lies in the incorporation of the contemporary approach to quantum randomness and its relation to quantum nonlocality and quantum correlations, and to emerging device independent quantum information processing and quantum technologies.

In fact, our report focuses on certain aspects of randomness that have become particularly relevant in view of the recent technical (i.e. qualitative and quantitative, theoretical and experimental) developments in quantum physics and quantum information science: quantum randomness certification, amplification and expansion are paradigmatic examples of these developments. The technological progress in constructing publicly or even commercially available, highly efficient quantum random number generators is another important aspect: it has in particular led to the first loophole-free experimental proofs of quantum nonlocality, i.e. loophole-free violations of Bell inequalities (Hensen et al 2015, Giustina et al 2015, Shalm et al 2015). In particular:

  • In the philosophical part we concentrate on the distinction between apparent (epistemic) and intrinsic (inherent or ontic) randomness, and on the question of whether the intrinsic randomness of quantum mechanics can be certified in a certain sense. We devote considerable attention to the recent discussion of non-deterministic models in classical physics, in which (in contrast to the standard Newtonian–Laplacian mechanics) similar questions may be posed. Based on recently proposed protocols, we argue that the observation of nonlocality of quantum correlations can be directly used to certify randomness; moreover, this can be achieved in a secure, device independent way. Similarly, the contextuality of quantum mechanics (i.e. the fact that results of measurements depend on the context in which they are performed, or, more precisely, on which compatible quantities are simultaneously measured) can be used to certify randomness, although not in a device independent way.
  • In the physical part we concentrate on a more detailed presentation of the recent protocols of randomness certification, amplification and expansion.
  • In the technological part we first discuss certified randomness generation (Pironio et al 2010), accessible as the open NIST Beacon (National Institute of Standards and Technology 2011). Then we concentrate on the recent technological developments that have led to the first loophole-free detection of nonlocality, and are triggering important commercial applications.

Here we limit ourselves to the contemporary, but traditional approach to quantum mechanics and its interpretation, as explicated in the books of Messiah (2014) or Cohen-Tannoudji et al (1991). In this sense this review is not complete, and important relevant philosophical aspects are not discussed. Thus, we do not describe other interpretations and approaches, such as the pilot-wave theory of Bohm (1951) or the many-worlds interpretation (MWI) of Everett (1957), as they are far beyond the scope of this report. Of course, the meanings of randomness and non-locality are completely different in these approaches.

For instance, one can consider the de Broglie–Bohm interpretation of quantum theory, also known as the pilot-wave theory, Bohmian mechanics, the Bohm (or Bohm's) interpretation, and the causal interpretation of quantum mechanics. There, the wave function on the space of all possible configurations not only captures the epistemic knowledge of the system's state but also carries a 'hidden variable' encoding its ontic information, and this 'hidden variable' may not be accessible or observed. In addition to the wave function, the Bohmian particle positions also carry information. Thus, Bohmian quantum mechanics has two ontological ingredients: the wave function and the positions of the particles. As we explain below, the theory is non-local, and that is why we do not discuss it in the present review in detail.

The time evolution of the system (say, the positions of all particles or the configuration of all fields) is guided by Schrödinger's equation. By construction, the theory is deterministic (Bohm 1952) and explicitly non-local. In other words, the velocity of one particle relies on the value of the guiding equation, which depends on the configuration of the system given by its wave function. The latter is constrained by the boundary conditions of the system, which could, in principle, be the entire universe. Thus, as explicitly stated by Bohm (1952): 'in contrast to the usual interpretation, this alternative interpretation permits us to conceive of each individual system as being in a precisely definable state, whose changes with time are determined by definite laws, analogous to (but not identical with) the classical equations of motion. Quantum-mechanical probabilities are regarded (like their counterparts in classical statistical mechanics) as only a practical necessity and not as an inherent lack of complete determination in the properties of matter at the quantum level'.

So Bohm's theory has to be regarded as a non-local hidden variable theory, and therefore it does not allow intrinsic randomness; similarly, the many-worlds interpretation (MWI) suggests that intrinsic randomness is an illusion (Vaidman 2014). The MWI asserts the objective reality of a 'universal' wave function and denies any possibility of wave function collapse. It implies that all possible pasts and futures are elements of reality, each representing an objective 'world' (or 'universe'). In simpler words, the interpretation states that there is a very large number of universes, and everything that could possibly have occurred in our past, but did not, has occurred in the past of some other universe or universes. Therefore, the MWI indeed does not leave much space for any kind of probability or randomness, since formally all outcomes take place with certainty. This is already a sufficient reason not to consider the MWI in the present review. But, obviously, the whole problem is whether one can speak about probabilities within the MWI or not. This problem has been extensively discussed by several authors, e.g. Saunders (1998, 2010), Papineau (2010) and Albert (2010).

We stress that we adopt in this review the 'traditional' interpretation, in which quantum mechanics is intrinsically random, but nonlocal. This adoption is the result of our free choice. Other readers may freely, or better to say deterministically, but nonlocally, adopt the Bohmian point of view.

2. Quantum randomness and philosophy

2.1. Epistemic and ontic character of probability

Randomness is a fundamental resource, indispensable in numerous scientific and practical applications like Monte Carlo simulations, opinion polls, cryptography, etc. In each case one has to generate a 'random sample', or simply a random sequence of digits. A variety of methods to extract such a random sequence from physical phenomena have been proposed and, in general successfully, applied in practice. But how do we know that a sequence is 'truly random'? Or, at least, 'random enough' for all practical purposes? Such problems become particularly acute for cryptography, where provably unbreakable security systems are based on the possibility of producing a string of perfectly random, uncorrelated digits used later to encode data. Such a random sequence must be unpredictable for an adversary wanting to break the code, and here we touch a fundamental question concerning the nature of randomness. If all physical processes are uniquely determined by their initial conditions, and the only cause of unpredictability is our inability to determine them with arbitrary precision, or a lack of detailed knowledge of the actual conditions that can influence their time evolution, then security can be compromised if an adversary finds finer methods to predict the outcomes. On the other hand, if there are processes that are 'intrinsically' random, i.e. random by their nature and not due to gaps in our knowledge, unconditionally secure coding schemes are conceivable.

The two attitudes concerning the nature of randomness in the world mentioned above can be dubbed epistemic and ontic. Both agree that we observe randomness (indeterminacy, unpredictability) in nature, but they differ in identifying the source of the phenomenon. The first claims that the world is basically deterministic, and the only way in which a random behavior demanding a probabilistic description appears is due to lack of knowledge of the actual state of the observed system or of the details of its interaction with the rest of the universe. In contrast, according to the second, the world is nondeterministic; randomness is its intrinsic property, independent of our knowledge and resistant to attempts aiming at circumventing its consequences by improving the precision of our observations. In other words, 'intrinsic' means that this kind of randomness cannot be understood in terms of a deterministic 'hidden variable' model. The debate on the epistemic versus ontic nature of randomness can be traced back to the pre-Socratic beginnings of European philosophy. For the early atomists, Leucippus4 and Democritus5, the world was perfectly deterministic. Any occurrence of chance is a consequence of our limited abilities6. One century later Epicurus took the opposite side. To accommodate an objective chance, the deterministic motion of atoms must be interrupted, without a cause, by 'swerves'. Such an indeterminacy then propagates to the macroscopic world. The main motivation was to explain, or at least to give room for, human free will, hardly imaginable in a perfectly deterministic world7. It should be clear, however, that a purely random nature of human actions is as far from free will as the latter is from a completely deterministic process. A common feature of both extreme cases, pure randomness and strict determinism, is the lack of any possibility to control or influence the course of events. Such a possibility is definitely an indispensable component of free will. The ontological status of randomness is thus irrelevant here, and the discussion of whether 'truly random theories' (as quantum mechanics supposedly is) can 'explain the phenomenon of free will' is pointless. This does not mean that the problems of free will and intrinsic randomness are not intertwined. On the one hand, as we explain later, the assumption that we may perform experiments in which we can freely choose what we measure is an important ingredient in arguing that the violation of Bell-like inequalities implies 'intrinsic randomness' of quantum mechanics. On the other hand, as strict determinism in fact precludes free will, intrinsic randomness seems to be a necessary condition for its existence. But we need more to produce a condition that is sufficient. An interesting recent discussion of connections between free will and quantum mechanics may be found in Part I of Suarez and Adams (2013). In Gisin (2013) and Brassard and Robichaud (2013) the many-worlds interpretation of quantum mechanics, which is sometimes treated as a cure against the odds of orthodox quantum mechanics, is either dismissed as a theory that can accommodate free will (Gisin 2013) or, taken seriously in Brassard and Robichaud (2013), as admitting the possibility that free will might be a mere illusion. In any case it is clear that one needs much more than any kind of randomness to understand how free will appears.
In Suarez (2013) the most radical attitude to the problem (apparently present also in Gisin (2013)) is that 'not all that matters for physical phenomena is contained in space-time'.

2.2. Randomness in classical physics

A seemingly clear distinction between the two possible sources of randomness outlined in the previous section becomes less obvious if we try to make the notion of determinism more precise. Historically, its definition usually had a strong epistemic flavor. Probably the most famous characterization of determinism is that of Laplace (1814): 'Une intelligence qui, pour un instant donné, connaîtrait toutes les forces dont la nature est animée, et la situation respective des êtres qui la composent, si d'ailleurs elle était assez vaste pour soumettre ces données à l'analyse, embrasserait dans la même formule les mouvemens des plus grands corps de l'univers et ceux du plus léger atome : rien ne serait incertain pour elle, et l'avenir comme le passé, serait présent à ses yeux8'. Two hundred years later Karl Raimund Popper writes: 'We can ... define 'scientific' determinism as follows: the doctrine of 'scientific' determinism is the doctrine that the state of any closed physical system at any given future instant of time can be predicted, even from within the system, with any specified degree of precision, by deducing the prediction from theories, in conjunction with initial conditions whose required degree of precision can always be calculated (in accordance with the principle of accountability) if the prediction task is given' (Popper 1982). By contraposition, then, unpredictability implies indeterminism. If we now equate indeterminism with the existence of randomness, we see that a sufficient condition for the latter is unpredictability. But events about which we simply do not have enough information can be just as unpredictable as those that are 'random by themselves'. Consequently, as should have been obvious, Laplacean-like descriptions of determinism are of no help when we look for the sources of randomness.

Let us thus simply say that a course of events is deterministic if there is only one future way for it to develop. Usually we may also assume that its past history is unique. In such cases the only kind of randomness is the epistemic one.

As an exemplary theory describing such situations one usually invokes classical mechanics. Arnol'd, in his treatise on ordinary differential equations, after adopting the very definition of determinism advocated above9, writes: 'thus for example, classical mechanics considers the motion of systems whose past and future are uniquely determined by the initial positions and velocities of all points of the system'10. The same can be found in his treatise on classical mechanics11. He also gives a kind of justification: 'It is hard to doubt this fact, since we learn it very early'12. But what he really means is that a mechanical system is uniquely determined by the positions and momenta of its constituents: 'one can imagine a world, in which to determine the future of a system one must also know the acceleration at the initial moment, but experience shows us that our world is not like this'13. This is clearly exposed in another classical mechanics textbook, Landau and Lifschitz's Mechanics: 'if all the co-ordinates and velocities are simultaneously specified, it is known from experience that the state of the system is completely determined and that its subsequent motion can, in principle, be calculated. Mathematically, this means that, if all the co-ordinates q and velocities $\dot{q}$ are given at some instant, the accelerations $\ddot{q}$ at that instant are uniquely defined'14. Apparently, also here the 'experience' concerns only the observation that positions and velocities, and not their higher time-derivatives, are sufficient to determine the future.

In such a theory there are no random processes. Everything is in fact completely determined and can be predicted with the desired accuracy once we improve our measuring and computing devices. Statistical physics, which is based on classical mechanics, is a perfect example of an indeterministic theory where measurable quantities like pressure or temperature are determined by mean values of microscopic 'hidden variables', for example the positions and momenta of gas particles. These hidden variables, however, are completely determined at each instant of time by the laws of classical mechanics, and with an appropriate effort they can, in principle, be measured and determined. What makes the theory 'indeterministic' is the practical impossibility of following the trajectories of individual particles because of their number and/or the sensitivity to changes of the initial conditions. In fact such sensitivity was pointed to as a source of chance by Poincaré15 and Smoluchowski16 soon after modern statistical physics was born, but it is hard to argue that this gives chance an ontological status. It is, however, worth mentioning that Poincaré was aware that randomness might have not only an epistemic character. In the above-cited introduction to his Calcul des probabilités he states 'Il faut donc bien que le hasard soit autre chose que le nom que nous donnons à notre ignorance'17 ('So it must be that chance is something other than the name we give to our ignorance'18).

Still, the very existence of deterministic chaos implies that classical mechanics is, in any practical sense, unpredictable in general. The technical question of how important this unpredictability can be has actually been the subject of intensive studies in recent decades (for recent monographs see Vallejo and Sanjuan (2017) and Rajasekar and Sanjuan (2016)).

It is commonly believed (and consistent with the above-cited descriptions of determinism in mechanical systems) that on the mathematical level the deterministic character of classical mechanics takes the form of Newton's second law

$$\frac{{\rm d}^{2}{\bf x}(t)}{{\rm d}t^{2}}={\bf F}({\bf x}(t), t), \qquad\qquad (1)$$

where the second derivatives of the positions, ${\bf x}(t)$ , are given in terms of some (known) forces ${\bf F}({\bf x}(t), t)$ . But to be able to determine uniquely the fate of the system we need something more than merely the initial positions ${\bf x}(0)$ and velocities ${\rm d}{\bf x}(t)/{\rm d}t\vert _{t=0}$ . To guarantee uniqueness of the solutions of Newton's equations (1), we need some additional assumptions about the forces ${\bf F}({\bf x}(t), t)$ . According to the Picard theorem19 (Coddington and Levinson 1955), an additional technical condition that is sufficient for uniqueness is the Lipschitz condition, limiting the variability of the forces with respect to the positions. Breaking it opens the possibility of having initial positions and velocities that do not determine uniquely the future trajectory. A world in which there are systems governed by equations admitting non-unique solutions is not deterministic according to our definition. We can either defend determinism in classical mechanics by showing that such pathologies never occur in our world, or agree that classical mechanics admits, at least in some cases, a nondeterministic evolution. Each choice is hard to defend. In fact it is relatively easy to construct examples of more or less realistic mechanical systems for which uniqueness is not guaranteed. Norton (2007) (see also Norton (2008)) provided a model of a point particle sliding on a curved surface under the gravitational force, for which the Newton equation reduces to $\frac{{\rm d}^2r}{{\rm d}t^2}=\sqrt{r}$ . For the initial conditions $r(0)=0, \frac{{\rm d}r}{{\rm d}t}\vert _{t=0}=0$ , the equation has not only the obvious solution $r(t)=0$ , but, in addition, a one-parameter family given by

$$r(t)=\begin{cases} 0 & t\leqslant T, \\ \dfrac{(t-T)^{4}}{144} & t\geqslant T, \end{cases} \qquad\qquad (2)$$

where T is an arbitrary parameter. For a given T the solution describes the particle staying at rest at $r=0$ until T and starting to accelerate at T. Since T is arbitrary, we cannot predict when the change from the state of rest to the one with a non-zero velocity takes place.
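
A quick symbolic check of this family (our own sketch, using the sympy library; the closed form $(t-T)^4/144$ is the standard non-trivial solution quoted for this equation) confirms that it satisfies $\frac{{\rm d}^2r}{{\rm d}t^2}=\sqrt{r}$ for $t\geqslant T$:

import sympy as sp

t, T = sp.symbols('t T', real=True)
r = (t - T)**4 / 144                # candidate non-trivial solution for t >= T

acc = sp.diff(r, t, 2)              # d^2 r / dt^2 = (t - T)**2 / 12, non-negative
print(sp.simplify(acc**2 - r))      # prints 0, so acc equals sqrt(r) for t >= T

# glued to r(t) = 0 for t <= T, this gives a solution for every T >= 0,
# all with the same initial data r(0) = 0 and dr/dt(0) = 0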

The example triggered discussions (Korolev 2007a, 2007b, Kosyakov 2008, Malament 2008, Roberts 2009, Wilson 2009, Zinkernagel 2010, Fletcher 2012, Laraudogoitia 2013), raising several problems, in particular its physical relevance in connection with the simplifications and idealizations made to construct it. However, these do not seem to be different from those commonly adopted in descriptions of similar mechanical situations, where the answers given by classical mechanics are treated as perfectly adequate. At this point classical mechanics is not a complete theory of the part of physical reality it aspires to describe. We are confronted with the necessity of supplementing it with additional laws dealing with situations where Newton's equations do not possess unique solutions.

The explicit assumption of the incompleteness of classical mechanics has its history, astonishingly longer than one would expect. Possible consequences of the non-uniqueness of solutions attracted the attention of Boussinesq, who in his Mémoire for the French Academy of Moral and Political Sciences writes: '...les phénomènes de mouvement doivent se diviser en deux grandes classes. La première comprendra ceux où les lois mécaniques exprimées par les équations différentielles détermineront à elles seules la suite des états par lesquels passera le système, et où, par conséquent, les forces physico-chimiques ne laisseront aucun rôle disponible à des causes d'une autre nature. Dans la seconde classe se rangeront, au contraire, les mouvements dont les équations admettront des intégrales singulières, et dans lesquels il faudra qu'une cause distincte des forces physico-chimiques intervienne, de temps en temps ou d'une manière continue, sans d'ailleurs apporter aucune part d'action mécanique, mais simplement pour diriger le système à chaque bifurcation d'intégrales qui se présentera'20.

Boussinesq does not introduce any probabilistic ingredient into the reasoning, but definitely there is room to go from mere indeterminism to the awaited 'intrinsic randomness'. To this end, however, we need to postulate an additional law supplementing classical mechanics by attributing probabilities to different solutions of non-Lipschitzian equations21. It is hard to see how to discover (or even look for) such a law, and how to check its validity. What we find interesting is the explicit introduction into the theory of a second kind of motion. It is strikingly similar to what we encounter in quantum mechanics, where to explain all observed phenomena one has to introduce two kinds of kinematics: a perfectly deterministic Schrödinger evolution and indeterministic state reductions during measurements. The similarity consists in the fact that the deterministic (Schrödinger, Newton) equations are not sufficient to describe the full evolution: they have to be completed, for instance by a probabilistic description of the results of measurements in quantum mechanics, or by a probabilistic choice among the non-unique solutions in Norton's example22.

It is interesting to note that the ideas of Boussinesq have in fact been a subject of intensive discussion in recent years in the philosophy of science, within the so-called 'second Boussinesq debate'. The first Boussinesq debate took place in France between 1874 and 1880. As stated by Michael Mueller (2015): 'in 1877, a young mathematician named Joseph Boussinesq presented a mémoire to the Académie des Sciences, which demonstrated that some differential equations may have more than one solution. Boussinesq linked this fact to indeterminism and to a possible solution to the free will versus determinism debate'. The more recent debate discovered, in fact, that some hints of the Boussinesq ideas can also be found in the works of James Clerk Maxwell (Isley 2017). The views of Maxwell, important in this debate and not well known by physicists, show that he was very much influenced by the work of Joseph Boussinesq and Adhémar Jean Claude Barré de Saint-Venant (Michael Mueller 2015). What is also largely unknown to many scientists is that Maxwell learned statistical ideas from Adolphe Quetelet, a Belgian mathematician considered to be one of the founders of statistics. An excellent account of the concepts of determinism versus indeterminism, of the notion of uncertainty, also associated with the idea of randomness, as well as of the different meanings that randomness has for different audiences, may be found in the blogs of Sanjuán (2009a, 2009b, 2009c) and in the outstanding book by Dahan-Dalmedico et al (1992). These references also cover many details of the first and the recent Boussinesq debates. A Polish text by Koleżyński (2007) discusses related quotations from Boussinesq, Maxwell and Poincaré in the philosophical context of determinism.

Of course, to a great extent the Boussinesq debate was stimulated by attempts to understand nonlinear dynamics and hydrodynamics in general, and the phenomenon of turbulence in particular. A nice review of various approaches and ideas up to the 1970s is presented in the lecture by Farge (1991). The contemporary approach to turbulence is very much related to the Boussinesq suggestions, and the use of non-Lipschitzian, i.e. nondeterministic, hydrodynamics has been developed in recent years by Falkovich, Gawedzki, Vergassola and others (for outstanding reviews see Falkovich et al (2001) and Gawedzki (2001)). The history of these works is nicely described in the presentation (Bernard et al 1998), while the most important particular articles include the series of papers by Bernard et al (1998), Gawedzki and Vergassola (2000), Vanden Eijnden and Vanden Eijnden (2000), Weinan and Vanden Eijnden (2001), Le Jan and Raimond (1998, 2002) and Le Jan and Raimond (2004).

2.3. Randomness in quantum physics

The chances of proving the existence of 'intrinsic randomness' in the world seem to be much higher when we switch to quantum mechanics. Born's interpretation of the wave function implies that we can count only on a probabilistic description of reality; in this sense quantum mechanics is inherently probabilistic.

Obviously, one should ask what the source of randomness in quantum physics is. As pointed out by one of the referees: 'in my view all the sources of randomness originate because of interaction of the system (and/or the measurement apparatus) with an environment. The randomness that affects pure states due to measurement is, in my view, due to the interaction of the measurement apparatus with an environment. The randomness that affects open systems (those that directly interact with an environment) is again due to environmental effects'. This point of view is, as many physicists consider, parallel to the contemporary theory of quantum measurements and the collapse of the wave function (Wheeler and Zurek 1983, Zurek 2003, 2009).

Still, the end result of such an approach to randomness and quantum measurements is that Born's rule and the traditional Copenhagen interpretation are not far from being rigorously correct. At the same time, quantum mechanics viewed from the device independent point of view, i.e. by regarding only the probabilities of outcomes of individual or correlated measurements, incorporates randomness that cannot be reduced to our lack of knowledge or to the imperfectness of our measurements (this will be discussed in more detail below). In this sense, for the purpose of the present discussion, the detailed form of the major source of the randomness is not relevant, as long as this randomness leads to contextual results of measurements, or to nonlocal correlations.

Let us repeat: both Born's rule alone and the advanced theory of quantum measurement imply that measurement outcomes (or the expectation value of an observable) may exhibit some randomness. However, a priori there are no obvious reasons for leaving the Democritean ground and switching to the Epicurean view. It might be that quantum mechanics, just like statistical physics, is an incomplete theory admitting deterministic hidden variables whose values are beyond our control. To be precise, one may ask how 'intrinsic' this randomness is and whether it can be considered an epistemic one. To illustrate this further, we consider two different examples in the following.

2.3.1. Contextuality and randomness.

Let us consider the case of a spin-s particle. If the spin is measured along the z-direction, there are $2s+1$ possible outcomes, and each appears with a certain probability. Say the outcomes are labeled by $\{m\}$ , where $m \in \{-s, -s+1, \ldots, s-1, s\}$ , and the corresponding probabilities by $\{\,p_m\}$ . This means that, with many repetitions, the experimenter will observe the outcome m with a frequency approaching pm, as predicted by Born's rule of quantum mechanics. The outcomes contain some randomness, as they appear probabilistically. Moreover, these probabilities are indistinguishable from classical probabilities. Therefore, the randomness here could be explained with the help of a deterministic hidden-variable model23 and would simply be a consequence of the ignorance of the hidden variable(s).
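
As a standard illustration (our own worked example): a spin- $\frac{1}{2}$ particle prepared in the pure state $\vert \psi\rangle=\cos\frac{\theta}{2}\vert {+}z\rangle+{\rm e}^{{\rm i}\varphi}\sin\frac{\theta}{2}\vert {-}z\rangle$ yields, upon measurement of the z-component of the spin, the outcome $m=+\frac{1}{2}$ with probability $p_+=\cos^2\frac{\theta}{2}$ and $m=-\frac{1}{2}$ with probability $p_-=\sin^2\frac{\theta}{2}$ . Exactly the same outcome statistics would be produced by a classical coin of bias $\cos^2\frac{\theta}{2}$ , which is why this experiment alone cannot distinguish intrinsic from merely apparent randomness.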

But, as we stressed in the definitions in the introduction, intrinsic randomness of quantum mechanics does not exclude the existence of hidden variable models that can describe outcomes of measurements. Obviously, if the system is in the pure state corresponding to m0, the outcome of a measurement of the z-component of the spin will be deterministic: m0 with certainty. If we measured the x-component of the spin, however, the result would again be non-deterministic and described only probabilistically. In fact, this is an instance of the existence of so-called non-commuting observables in quantum mechanics, which cannot be measured simultaneously with certainty. The uncertainty of measurements of such non-commuting observables is quantitatively bounded from below by the generalized Heisenberg uncertainty principle (Cohen-Tannoudji et al 1991, Messiah 2014).

One of the important consequences of the existence of non-commuting observables is the fact that quantum mechanics is contextual, as demonstrated in the famous Kochen–Specker theorem (Kochen and Specker 1967; for philosophical discussion see Bub (1999) and Isham and Butterfield (1998)). The Kochen–Specker (KS) theorem (Kochen and Specker 1967), also known as the Bell–Kochen–Specker theorem (Bell 1966), is a 'no go' theorem (Bub 1999), proved by Bell in 1966 and by Kochen and Specker in 1967. The KS theorem places certain constraints on the permissible types of hidden variable theories that try to explain the randomness of quantum mechanics as an apparent randomness, resulting from lack of knowledge of hidden variables in an underlying deterministic model. The version of the theorem proved by Kochen and Specker also gave an explicit example of this constraint in terms of a finite number of state vectors (see Peres (1995)). The KS theorem deals with single quantum systems and is thus a complement to Bell's theorem, which deals with composite systems.

The KS theorem exhibits a contradiction between two basic assumptions of the hidden variable theories intended to reproduce the results of quantum mechanics: that all hidden variables corresponding to quantum mechanical observables have definite values at any given time, and that the values of those variables are intrinsic and independent of the measurement devices. The contradiction arises because of the non-commuting observables allowed by quantum mechanics. If the Hilbert space dimension is at least three, it turns out to be impossible to simultaneously embed all the non-commuting sub-algebras of the algebra of these observables in one commutative algebra, which would be expected to represent the classical structure of the hidden variable theory24.

The Kochen–Specker theorem excludes hidden variable theories that require elements of physical reality to be non-contextual (i.e. independent of the measurement arrangement). As succinctly worded by Isham and Butterfield (1998), the Kochen–Specker theorem 'asserts the impossibility of assigning values to all physical quantities whilst, at the same time, preserving the functional relations between them'.

In a more recent approach to contextuality, i.e. to the fact that measurement results depend on the context in which they are measured, one proves that non-contextual hidden variable theories lead to probabilities of measurement outcomes that fulfill certain inequalities (Cabello 2008), similar to Bell's inequalities for composite systems. More specifically, there are Bell-type inequalities for non-contextual theories that are violated by any quantum state. Many of these inequalities between the correlations of compatible measurements are particularly suitable for testing this state-independent violation in an experiment, and indeed violations have been experimentally demonstrated (Kirchmair et al 2009, Bartosik et al 2009). Quantifying and characterizing the contextuality of different physical theories is particularly elegant in a general graph-theoretic framework (Cabello et al 2014, Acín et al 2015).
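
A compact way to see state-independent contextuality at work is the Peres–Mermin 'magic square' of nine two-qubit observables, a standard construction underlying inequalities of the Cabello (2008) type; the following few lines are our own illustrative sketch, not taken from the original article. Each row and each column of the square consists of mutually commuting (hence compatible) observables; every row and the first two columns multiply to $+\mathbb{1}$ , while the third column multiplies to $-\mathbb{1}$ , so no non-contextual assignment of values ±1 to the nine observables can reproduce all six product relations.

import numpy as np

# Peres-Mermin square of two-qubit Pauli observables (our own sketch)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

square = [[np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
          [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
          [np.kron(X, Z), np.kron(Z, X), np.kron(Y, Y)]]

def product(ops):
    out = np.eye(4, dtype=complex)
    for op in ops:
        out = out @ op
    return out

for i in range(3):
    row = product(square[i])
    col = product([square[j][i] for j in range(3)])
    print('row', i, np.allclose(row, np.eye(4)),      # True for every row
          'col', i, np.allclose(col, -np.eye(4)))     # True only for the last column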

This novel approach to contextuality is, on the one hand, parallel to the earlier observation by Bohr (1935) that EPR-like paradoxes may occur in quantum systems without the need for entangled composite systems. On the other hand, it offers a way to certify the intrinsic randomness of quantum mechanics. If Cabello-like inequalities are violated in an experiment, it implies that there exists no non-contextual deterministic hidden variable theory that can reproduce the results of this experiment, ergo the results are intrinsically random. Unfortunately, this kind of randomness certification is not very secure, since it explicitly depends on which non-commuting observables are measured, and in effect is not device independent.

2.3.2. Nonlocality and randomness.

It is important to extend the situation beyond the one mentioned above to multi-party systems and local measurements. For example, consider a multi-particle system with each particle placed in a separate region. Now, instead of observing the system as a whole, one may be interested in observing only a part of it, i.e. in performing local measurements. Given two important facts, namely that quantum mechanics allows superposition and that no quantum system can be absolutely isolated, spatially separated quantum systems can be non-trivially correlated, beyond the classical correlations allowed by classical mechanics. In such a situation, the information contained in the whole system is certainly larger than the sum of the information in the individual local systems. The information residing in the nonlocal correlations cannot be accessed by locally observing individual particles of the system. This means that local descriptions cannot completely describe the total state of the system. Therefore, in the presence of nonlocal correlations, the outcomes of any local observation are bound to incorporate randomness, as long as we do not have access to the global system or ignore the nonlocal correlations. In technical terms, the randomness appearing in the local measurement outcomes cannot be understood in terms of a deterministic local hidden variable model, and a 'true' local indeterminacy is present25. Moreover, randomness at the local level appears even if the global state of the system is pure and completely known—the necessary condition for this is just entanglement of the pure state in question. This is what is typically referred to as 'intrinsic' randomness in the literature, and this is the point of view we adopt in this report.
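
A worked example of this last point (standard material, added here for concreteness): for two spin- $\frac{1}{2}$ particles in the pure entangled singlet state $\vert \Psi^-\rangle=\frac{1}{\sqrt{2}}(\vert 01\rangle-\vert 10\rangle)$ , the state of either particle alone is the maximally mixed density matrix $\rho_A={\rm Tr}_B\vert \Psi^-\rangle\langle\Psi^-\vert =\frac{1}{2}\mathbb{1}$ , so every local spin measurement gives uniformly random outcomes even though the global state is pure and completely known.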

Before we move further in discussing quantum randomness in the presence of quantum correlations, let us make a short detour through the history of the foundations of quantum mechanics. The possibility of nonlocal correlations, also known as quantum entanglement, was initially put forward, together with the question of whether quantum mechanics respects local realism, by Einstein et al (1935). According to EPR, two main properties any reasonable physical theory should satisfy are realism and locality. The first states that if a measurement outcome of a physical quantity, pertaining to some system, is predicted with unit probability, then there must exist 'an element of reality' corresponding to this physical quantity, having a value equal to the predicted one at the moment of measurement. In other words, the values of observables, revealed in measurements, are intrinsic properties of the measured system. The second, locality, demands that elements of reality pertaining to one system cannot be affected by measurements performed on another, sufficiently distant system. Based on these two essential ingredients, EPR studied the measurement correlations between two entangled particles and concluded that the wave function describing the quantum state 'does not provide a complete description of physical reality'. Thereby they argued that quantum mechanics is an incomplete but effective theory, and conjectured that a complete theory describing physical reality is possible.

In these discussions, one needs to clearly understand what locality and realism mean. In fact, they can be replaced with no-signaling and determinism, respectively. The no-signaling principle states that infinitely fast communication is impossible. The relativistic limitation of the speed of communication by the velocity of light is just a special case of the no-signaling principle. If two observers are separated and perform space-like separated measurements (as depicted in figure 1), then the principle ascertains that the statistics seen by one observer, when measuring her particle, are completely independent of the measurement choice made on the space-like separated other particle. Clearly, if this were not the case, one observer could, by changing her measurement choice, produce a noticeable change on the other side and thereby instantaneously communicate with infinite speed.
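
In the bipartite scenario of figure 1 this condition can be written explicitly (a two-party special case of the multiparty formulation referred to as equation (9) later in the report): for all outcomes and settings, $\sum_b p(a, b\vert x, y)=\sum_b p(a, b\vert x, y')\equiv p(a\vert x)$ and $\sum_a p(a, b\vert x, y)=\sum_a p(a, b\vert x', y)\equiv p(b\vert y)$ , i.e. the marginal statistics of each party are independent of the other party's choice of measurement.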


Figure 1. Schematic of a two-party Bell-like experiment. The experimenters Alice and Bob are separated and cannot communicate, as indicated by the black barrier. The measurement settings and outcomes of Alice and Bob are denoted by $x, \ y$ and $a, \ b$ , respectively.


Determinism, the other important ingredient, implies that correlations observed in an experiment can be decomposed as mixtures of deterministic ones, i.e. of correlations occurring in situations where all measurements have deterministic outcomes. A deterministic theory accepts the fact that the apparently random outcomes in an experiment, like in coin tossing, are only consequences of ignorance of the actual state of the system. Therefore, each run of the experiment does have an a priori definite result, but we only have access to averages.

In 1964, Bell showed that all theories that satisfy locality and realism (in the sense of EPR) are incompatible with quantum mechanics (Bell 1964, 1966). In a simple experiment, mimicking Bell's scenario, two correlated quantum particles are sent to two spatially separated measuring devices (see figure 1), and each device can perform two different measurements with two possible outcomes. The measurement processes are space-like separated and no communication is possible while they are performed. With this configuration a local-realistic model gives bounds, called Bell inequalities (Bell 1964), on the correlations between the outcomes observed in the two measurement devices. In other words, the impossibility of instantaneous communication (no-signaling) between spatially separated systems, together with full local determinism, implies that all correlations between measurement results must obey the Bell inequalities.

Strikingly, these inequalities are violated by correlated (entangled) quantum particles, and the resulting correlations therefore have no explanation in terms of deterministic local hidden variables. In fact, the correlations allowed by no-signaling together with determinism are exactly the same as those allowed by the EPR local-realistic model; the two formulations are equivalent. The experimental violations of the Bell inequalities in 1972 (Freedman and Clauser 1972), in 1981 (Aspect et al 1981) and in 1982 (Aspect et al 1982), along with the recent loophole-free Bell-inequality violations (Hensen et al 2015, Giustina et al 2015), confirm that any local-realistic theory is unable to predict the correlations observed in quantum mechanics. This immediately implies that either no-signaling or local determinism has to be abandoned. Most physicists favor abandoning local determinism and saving no-signaling. Assuming that nature respects the no-signaling principle, any violation of a Bell inequality thus implies that the outcomes could not have been predetermined in advance.
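
For concreteness, here is a minimal numerical sketch (our own, not from the original article) of the paradigmatic Clauser–Horne–Shimony–Holt (CHSH) form of a Bell inequality, $\vert \langle A_0B_0\rangle+\langle A_0B_1\rangle+\langle A_1B_0\rangle-\langle A_1B_1\rangle\vert \leqslant 2$ for any local-realistic model: with a two-qubit singlet state and suitably chosen spin measurements the quantum value reaches $2\sqrt{2}$ .

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):                         # spin observable in the x-z plane
    return np.cos(theta) * Z + np.sin(theta) * X

psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # singlet state

def corr(a, b):                          # <psi| A(a) (x) B(b) |psi>
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

A0, A1 = 0.0, np.pi / 2                  # Alice's two settings
B0, B1 = np.pi / 4, -np.pi / 4           # Bob's two settings

S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print(abs(S), 2 * np.sqrt(2))            # |S| = 2*sqrt(2) > 2: local bound violated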

Thus, once the no-signaling principle is accepted to be true, the experimental outcomes of local measurements cannot be deterministic and therefore are random. Of course, a valid alternative is to abandon the no-signaling principle, allowing for non-local hidden variables but saving determinism, as is done for instance in Bohm's theory (Bohm 1951, 1952). In any case, some kind of non-locality is needed to explain Bell correlations. One can also abandon both no-signaling and local determinism; such a sacrifice is, however, hard to accept for the majority of physicists, and for scientists in general.

Another crucial assumption in Bell experiments is that the measurements performed with the local measurement devices have to be chosen 'freely'. In other words, the measurement choices cannot, in principle, be predicted in advance. If the free-choice condition is relaxed and the chosen measurements could be predicted in advance, then it is easy to construct a no-signaling but deterministic theory that leads to Bell violations. It has been shown in Hall (2010) and Koh et al (2012) that one does not have to give up measurement independence completely to violate Bell inequalities: even relaxing the free-choice condition to a certain degree, the Bell inequalities can be maximally violated using a no-signaling and deterministic model (Hall 2010). However, in Bell-like experimental scenarios where the local observers are separated, it is very natural to assume that the choices of the experimenters are completely free (this is often referred to as the free-will assumption). Therefore, the Bell-inequality violation in the quantum regime, together with the no-signaling principle, implies that local measurement outcomes are 'intrinsically' random.

The lesson that we should learn from the above discussion is that the question raised by Einstein, Podolsky and Rosen found its operational meaning in Bell's theorem, which showed the incompatibility of local hidden-variable theories with quantum mechanics (Bell 1964, 1966). Experiment could now decide about the existence or non-existence of nonlocal correlations. Exhibiting non-local correlations in an experiment gave, under the assumption of no-signaling, a proof of the non-deterministic nature of quantum mechanical reality, and allowed certifying the existence of truly random processes. These experiments require, however, random adjustments of the measuring devices (Bell 1964). There must exist a truly random process controlling their choice. This, ironically, leads to an unavoidable circulus vitiosus: we can check the indeterministic character of physical reality only by assuming that it is, in fact, indeterministic.

3. Quantum randomness and physics

In this section we consider randomness from the point of view of physics, in particular quantum physics. First we briefly introduce quantum measurements, nonlocality and information-theoretic measures of randomness. Then we outline how quantum features such as nonlocality can be exploited not only to generate 'true' randomness but also to certify, expand and amplify randomness.

3.1. Quantum measurements

According to the standard textbook approach, quantum mechanics (QM) is an inherently probabilistic theory (see Messiah (2014), Cohen-Tannoudji et al (1991) and Wheeler and Zurek (1983)): the predictions of QM concerning results of measurements are typically probabilistic. Only in very rare instances do measurements give deterministic outcomes—this happens when the system is in an eigenstate of the observable to be measured. Note that in general, even if we have full information about the quantum mechanical state of the system, the outcome of a measurement is in principle random. A paradigmatic example is provided by a d-state system (a qudit), whose space of states is spanned by the states $\vert 1\rangle$ , $\vert 2\rangle$ ,..., $\vert d\rangle$ . Suppose that we know the system is in the superposition state $\vert \phi\rangle=\sum_{j=1}^d\alpha_j\vert\,j\rangle$ , where $\alpha_j$ are complex probability amplitudes and $\sum_{j=1}^d\vert \alpha_j\vert ^2 =1$ , and we ask whether it is in the state $\vert i\rangle$ . To find out, we measure the observable $\hat P=\vert i\rangle \langle i\vert $ that projects on the state $\vert i\rangle$ . The result of such a measurement will be one (yes, the system is in the state $\vert i\rangle$ ) with probability $\vert \alpha_i\vert ^2$ and zero with probability $1-\vert \alpha_i\vert ^2=\sum_{j \ne i}\vert \alpha_j\vert ^2$ .
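
For readers who like to see the statistics emerge, here is a minimal simulation of this yes/no measurement (our own sketch; the amplitudes are arbitrary illustrative numbers): repeated measurements of the projector $\hat P=\vert i\rangle \langle i\vert $ on identically prepared copies of $\vert \phi\rangle$ return 'yes' with a relative frequency approaching $\vert \alpha_i\vert ^2$ .

import numpy as np

rng = np.random.default_rng(seed=1)

alpha = np.array([0.5, 0.5j, -0.5, 0.5], dtype=complex)   # illustrative amplitudes, d = 4
alpha /= np.linalg.norm(alpha)          # enforce sum_j |alpha_j|^2 = 1

i = 0                                   # ask: is the system in state |i> ?
p_yes = abs(alpha[i])**2                # Born rule probability of outcome 'one'

shots = 100_000
outcomes = rng.random(shots) < p_yes    # simulate independent yes/no outcomes
print(outcomes.mean(), p_yes)           # empirical frequency vs Born probability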

We do not want to enter here deeply into the subject of the foundations of QM, but we do want to remind the readers of the 'standard' approach to QM.

3.1.1. Postulates of QM.

The postulates of QM for simple mechanical systems (single or many particle), as given in Cohen-Tannoudji et al (1991), read:

  • First postulate. At a fixed time t0, the state of a physical system is defined by specifying a wave function $\psi(x; t_0)$ , where x represents a collection of parameters specifying the state.
  • Second postulate. Every measurable physical quantity Q is described by an operator $\hat Q$ ; this operator is called an observable.
  • Third postulate. The only possible result of the measurement of a physical quantity Q is one of the eigenvalues of the corresponding observable $\hat Q$ .
  • Fourth postulate (non-degenerate case). When the physical quantity Q is measured on a system in the normalized state ψ, the probability $P(q_n)$ of obtaining the non-degenerate eigenvalue qn of the corresponding observable $\hat Q$ is $P(q_n)=\vert \langle \varphi_n\vert \psi\rangle\vert ^2$ ,
    where $ \varphi_n$ is the normalized eigenvector of $\hat Q$ associated with the eigenvalue qn.
  • Fifth postulate (collapse). If the measurement of the physical quantity Q on the system in the state ψ gives the result qn, the state of the system immediately after the measurement is $\varphi_n$ .
  • Sixth postulate (time evolution). The time evolution of the wave function $\psi(x; t)$ is governed by the Schrödinger equation ${\rm i}\hbar\,\frac{\partial}{\partial t}\psi(x; t)=\hat H\,\psi(x; t)$ ,
    where $\hat H$ is the observable associated with the total energy of the system.
  • Seventh postulate (symmetrization). When a system includes several identical particles, only certain wave functions can describe its physical states (leads to the concept of bosons and fermions). For electrons (which are fermions), the wave function must change sign whenever the coordinates of two electrons are interchanged. For hydrogen atoms (regarded as composite bosons) the wave function must not change whenever the coordinates of two bosons are interchanged.

3.1.2. Measurement theory.

Evidently, the inherent randomness of QM is associated with the measurement process (Fourth and Fifth Postulates). Quantum measurement theory has been a subject of intensive studies and long debate, see e.g. Wheeler and Zurek (1983). In particular, the question of the physical meaning of the wave-function collapse has been partially resolved only in the last 30 years by analyzing interactions of the measured system with the environment (reservoir) describing the measuring apparatus (see the seminal works of Zurek (2003, 2009)).

In the abstract formulation of the early days of QM, one considered von Neumann measurements (Neumann 1955), defined in the following way. Let the observable $\hat Q$ have (possibly degenerate) eigenvalues qn and let $\hat E_n$ denote the projectors on the corresponding invariant subspaces (one-dimensional for non-degenerate eigenvalues, k-dimensional for k-fold degenerate eigenvalues). Since the invariant subspaces are orthogonal, we have $\hat E_n\hat E_m=\delta_{nm}\hat E_n$ , where $\delta_{mn}$ is the Kronecker delta. If $\hat P_\psi$ denotes the projector which describes the state of the system, the measurement outcome corresponding to the eigenvalue qn of the observable appears with probability $p_ n={\rm Tr}(\hat P_\psi\hat E_n)$ , where ${\rm Tr}(.)$ denotes the matrix trace operation. Moreover, after the measurement the system is found in the state $\hat E_n\hat P_\psi\hat E_n/p_n$ with probability pn.
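The following minimal NumPy sketch implements the von Neumann rule just stated for a qubit observable; the particular state and observable are arbitrary illustrative choices. It computes the outcome probabilities ${\rm Tr}(\hat P_\psi\hat E_n)$ and the post-measurement states $\hat E_n\hat P_\psi\hat E_n/p_n$.

```python
import numpy as np

# Hypothetical example: measure sigma_x on the qubit state |psi> = cos(t)|0> + sin(t)|1>.
t = 0.3
psi = np.array([np.cos(t), np.sin(t)], dtype=complex)
P_psi = np.outer(psi, psi.conj())            # projector describing the (pure) state

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
eigvals, eigvecs = np.linalg.eigh(sigma_x)   # eigenvalues q_n and eigenvectors of the observable

for q, v in zip(eigvals, eigvecs.T):
    E_n = np.outer(v, v.conj())              # projector on the eigenspace of q_n
    p_n = np.real(np.trace(P_psi @ E_n))     # Born probability p_n = Tr(P_psi E_n)
    post = E_n @ P_psi @ E_n / p_n           # post-measurement state (collapse)
    print(f"outcome q = {q:+.0f}: p = {p_n:.4f}, post-measurement purity = "
          f"{np.real(np.trace(post @ post)):.3f}")
```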

In contemporary quantum measurement theory the measurements are generalized beyond the von Neumann projective ones. To define the so-called positive-operator-valued measures (POVMs), one still considers von Neumann measurements, but on the system plus an additional ancilla system (Peres 1995). A POVM is defined by a set of Hermitian positive semidefinite operators $\{F_i\}$ on a Hilbert space $\mathcal{H}$ that sum to the identity operator,

$\sum_{i} F_i=\mathbb{1}.$
This is a generalization of the decomposition of a (finite-dimensional) Hilbert space by a set of orthogonal projectors, $\{E_i\}$ , defined for an orthogonal basis $\{\left\vert \phi_{i}\right\rangle\}$ by

$E_i=\left\vert \phi_{i}\right\rangle\left\langle \phi_{i}\right\vert, $

hence,

$\sum_{i} E_i=\mathbb{1}, \qquad E_i E_j=\delta_{ij} E_i.$
An important difference is that the elements of POVM are not necessarily orthogonal, with the consequence that the number K of elements in the POVM can be larger than the dimension N of the Hilbert space they act on.

The post-measurement state depends on the way the system plus ancilla are measured. For instance, consider the case where the ancilla is initially in a pure state $\vert 0\rangle_B$ . We entangle the ancilla with the system, taking

$\vert \psi\rangle_A\otimes\vert 0\rangle_B \;\longrightarrow\; \sum_{i} M_i\vert \psi\rangle_A\otimes\vert i\rangle_B, $

and perform a projective measurement on the ancilla in the $\{\vert i\rangle_B\}$ basis. The operators of the resulting POVM are given by

$F_i=M_i^{\dagger}M_i.$
Since the Mi are only constrained by the requirement $M_i^{\dagger}M_i=F_i$ (they need not be positive), there is an infinite number of solutions to this equation. This means that there are infinitely many different experimental apparatuses giving the same probabilities for the outcomes. Since the post-measurement state of the system (expressed now as a density matrix),

$\rho^{\prime}_i=\frac{M_i\rho M_i^{\dagger}}{{\rm Tr}\left(M_i\rho M_i^{\dagger}\right)}, $

depends on the Mi, in general it cannot be inferred from the POVM alone.
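The construction above can be made concrete with a small NumPy sketch. The Kraus operators below describe a hypothetical unsharp qubit measurement, chosen only for illustration; the code verifies that $\sum_i M_i^{\dagger}M_i=\mathbb{1}$, evaluates the outcome probabilities ${\rm Tr}(F_i\rho)$ and the corresponding post-measurement states.

```python
import numpy as np

# Hypothetical unsharp measurement of sigma_z on a qubit, with sharpness parameter eta (assumed).
eta = 0.8
P0 = np.diag([1.0, 0.0])                     # projector |0><0|
P1 = np.diag([0.0, 1.0])                     # projector |1><1|
M = [np.sqrt((1 + eta) / 2) * P0 + np.sqrt((1 - eta) / 2) * P1,
     np.sqrt((1 - eta) / 2) * P0 + np.sqrt((1 + eta) / 2) * P1]   # Kraus operators M_i

# POVM elements F_i = M_i^dagger M_i; they must sum to the identity.
F = [m.conj().T @ m for m in M]
assert np.allclose(sum(F), np.eye(2))

# Example input state: |+><+|
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

for i, (m, f) in enumerate(zip(M, F)):
    p_i = np.real(np.trace(f @ rho))          # outcome probability Tr(F_i rho)
    post = m @ rho @ m.conj().T / p_i         # post-measurement state M_i rho M_i^dag / p_i
    print(f"outcome {i}: p = {p_i:.3f}, post-measurement <sigma_z> = "
          f"{np.real(np.trace(np.diag([1, -1]) @ post)):+.3f}")
```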

If we accept quantum mechanics and its inherent randomness, then it is possible in principle to generate a set of perfect random numbers by measuring an observable on copies of a state that is not an eigenstate of this observable. Early experiments and commercial devices attempted to mimic a perfect coin, with probability 1/2 of getting heads or tails. To this aim quantum two-level systems were used, for instance single photons in one of two orthogonal circular polarizations. If such photons are transmitted through a linear polarizer of arbitrary orientation, they pass (or do not pass) with probability 1/2. In practice, the generated numbers are never perfect, and randomness extraction is required to generate good random output. The challenges of sufficiently approximating the ideal two-level scenario, and the complexity of detectors for single quantum systems, have motivated the development of other randomness generation strategies. In particular, continuous-variable techniques are now several orders of magnitude faster, and allow for randomness extraction based on known predictability bounds. See section 4.
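To illustrate why post-processing is needed when the physical 'coin' is not perfectly balanced, the sketch below simulates a slightly biased quantum coin (the bias value is an arbitrary assumption) and applies the classic von Neumann debiasing procedure, which outputs exactly unbiased bits from independent, identically biased ones at the cost of discarding part of the raw data.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Raw source: independent bits with a small, unknown-to-the-user bias (here 0.52, an assumption).
p_one = 0.52
raw = (rng.random(200_000) < p_one).astype(int)

def von_neumann_debias(bits):
    """Map non-overlapping pairs 01 -> 0, 10 -> 1; discard 00 and 11 pairs."""
    pairs = bits[: len(bits) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    return pairs[keep, 0]            # first bit of each kept pair encodes the output

out = von_neumann_debias(raw)
print(f"raw bias      : {raw.mean():.4f}  ({len(raw)} bits)")
print(f"debiased bias : {out.mean():.4f}  ({len(out)} bits kept)")
```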

It is worth mentioning that the Heisenberg uncertainty relation (Heisenberg 1927) and its generalized version, i.e. the Robertson–Schrödinger relation (Robertson 1929, Schrödinger 1930, Wheeler and Zurek 1983), often mentioned in the context of quantum measurements, signify how precisely two non-commuting observables can be measured on a quantum state. Quantitatively, for a given state ρ and observables X and Y, it gives a lower bound on the uncertainties when they are measured simultaneously, as

$\delta X \, \delta Y \geqslant \frac{1}{2}\left\vert {\rm{Tr}}\left(\rho\, [X, Y]\right)\right\vert, \qquad (3)$

where $\delta X^2={\rm{Tr}}\rho X^2-({\rm{Tr}} \rho X){\hspace{0pt}}^2$ is the variance and $[X, Y]=XY-YX$ is the commutator. A non-vanishing $\delta X$ represents randomness in the measurement process; it may arise from non-commutativity (misalignment of eigenbases) between the state and the observable, or it may appear due to classical uncertainty present in the state (i.e. not due to quantum superposition). In fact, equation (3) does allow either $\delta X$ or $\delta Y$ to vanish, but not both simultaneously for a given state ρ with $[X, Y]\neq 0$ . However, when $\delta X$ vanishes, it is nontrivial to infer anything about $\delta Y$ , and vice versa. To overcome this drawback, the uncertainty relation has been extended to sum-uncertainty relations, both in terms of variances (Maccone and Pati 2014) and entropic quantities (Beckner 1975, Białynicki-Birula and Mycielski 1975, Deutsch 1983, Maassen and Uffink 1988). We refer to Coles et al (2015) for an excellent review on this subject. The entropic uncertainty relation was also considered in the presence of quantum memory (Berta et al 2010). It has been shown that, in the presence of quantum memory, any two observables can simultaneously be measured with arbitrary precision. Therefore the randomness appearing in the measurements can be compensated by the side information stored in the quantum memory. As we mentioned in the previous section, Heisenberg uncertainty relations are closely related to the contextuality of quantum mechanics at the level of single systems. Non-commuting observables are indeed responsible for the fact that there do not exist non-contextual hidden-variable theories that can explain all results of quantum mechanical measurements on a given system.
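A quick numerical check of relation (3) for a qubit: the code below evaluates both sides for $X=\sigma_x$, $Y=\sigma_y$ and a randomly chosen pure state (the choice of observables and state is an arbitrary illustration).

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Random pure qubit state rho = |psi><psi|.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

X = np.array([[0, 1], [1, 0]], dtype=complex)       # sigma_x
Y = np.array([[0, -1j], [1j, 0]])                    # sigma_y

def variance(rho, A):
    """Variance Tr(rho A^2) - (Tr(rho A))^2 of a Hermitian observable A."""
    return np.real(np.trace(rho @ A @ A) - np.trace(rho @ A) ** 2)

lhs = np.sqrt(variance(rho, X)) * np.sqrt(variance(rho, Y))
rhs = 0.5 * np.abs(np.trace(rho @ (X @ Y - Y @ X)))

print(f"delta_X * delta_Y = {lhs:.4f} >= {rhs:.4f} = |Tr(rho[X,Y])|/2 :", lhs >= rhs - 1e-12)
```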

The inherent randomness considered in this work stems from the Born rule of quantum mechanics, irrespective of whether more than one observable is measured simultaneously. Furthermore, the existence of nonlocal correlations (and quantum correlations) in the quantum domain gives rise to the possibility of, in a sense, a new form of randomness. In the following we consider such randomness and its connection to nonlocal correlations. Before we do so, we shall discuss nonlocal correlations in more detail.

3.2. Nonlocality

3.2.1. Two-party nonlocality.

Let us now turn to nonlocality, i.e. the property of correlations that violate Bell inequalities (Bell 1964, Brunner et al 2014). As we will see below, nonlocality is intimately connected to intrinsic quantum randomness. In the traditional scenario a Bell nonlocality test relies on two spatially separated observers, say Alice and Bob, who perform space-like separated measurements on a bipartite system possibly produced by a common source. For a schematic see figure 1. Suppose Alice's measurement choices are $x\in \mathcal{X}=\{1, \ldots, M_A\}$ and Bob's choices $y\in \mathcal{Y}=\{1, \ldots, M_B\}$ , with corresponding outcomes $a\in \mathcal{A}=\{1, \ldots, m_A\}$ and $b\in \mathcal{B}=\{1, \ldots, m_B\}$ respectively. After many repetitions, Alice and Bob communicate their measurement settings and outcomes to each other and estimate the joint probability $p(a, b\vert x, y)=p(A=a, B=b\vert X=x, Y=y)$ , where X, Y are the random variables that govern the inputs and A, B are the random variables that govern the outputs. The outcomes are considered to be correlated, for some $x, y, a, b$ , if

$p(a, b\vert x, y)\neq p(a\vert x)\, p(b\vert y). \qquad (4)$

Observing such correlations is not surprising, as there are many classical sources and natural processes that lead to correlated statistics. These can be modeled with the help of another random variable Λ, with outcomes λ, which has a causal influence on both measurement outcomes and is inaccessible to the observers or ignored.

In a local hidden-variable model, considering all possible causes Λ, the joint probability can then be expressed as

$p(a, b\vert x, y)=\sum_{\lambda} p(\lambda\vert x, y)\, p(a, b\vert x, y, \lambda). \qquad (5)$

One could thereby explain any observed correlation, in accordance with the fact that Alice's outcomes depend solely on her local measurement setting x and on the common cause λ, and are independent of Bob's measurement setting; similarly, Bob's outcomes are independent of Alice's choices. This assumption—the no-signaling condition—is crucial: it is required by the theory of relativity, where nonlocal causal influence between space-like separated events is forbidden. Therefore, any joint probability, under the local hidden-variable model, becomes

$p(a, b\vert x, y)=\sum_{\lambda} p(\lambda)\, p(a\vert x, \lambda)\, p(b\vert y, \lambda), \qquad (6)$

with the implicit assumption that the measurement settings x and y can be chosen independently of λ, i.e. $p(\lambda\vert x, y)=p(\lambda)$ . Note that so far we have not assumed anything about the nature of the local measurements, whether they are deterministic or not. In a deterministic local hidden-variable model, Alice's outcomes are completely determined by the choice x and by λ. In other words, for an outcome a, given input x and hidden cause λ, the probability $p(a\vert x, \lambda)$ is either 1 or 0, and likewise for Bob's outcomes. Importantly, the deterministic local hidden-variable model has been shown to be fully equivalent to the local hidden-variable model (Fine 1982). Consequently, observed correlations that admit a joint probability distribution as in (6) have an explanation based on a deterministic local hidden-variable model.

In 1964, Bell showed that any local hidden-variable model is bound to respect a set of linear inequalities, which are commonly known as Bell inequalities. In terms of joint probabilities they can be expressed as

$\sum_{a, b, x, y}\alpha^{xy}_{ab}\; p(a, b\vert x, y) \leqslant \mathcal{S}_L, \qquad (7)$

where $\alpha^{xy}_{ab}$ are some coefficients and $\mathcal{S}_L$ is the classical bound. Any violation of the Bell inequalities (7) implies the presence of correlations that cannot be explained by a local hidden-variable model, and therefore have a nonlocal character. Remarkably, there indeed exist correlations violating Bell inequalities that can be observed with certain choices of local measurements on a quantum system, and hence do not admit a (deterministic) local hidden-variable model.

To understand this better, let us consider an example of the most studied two-party Bell inequality, also known as the Clauser–Horne–Shimony–Holt (CHSH) inequality, introduced in Clauser et al (1969). Assume the simplest scenario (as in figure 1) in which Alice and Bob each choose one of two local measurements $x, y \in \{0, 1\}$ and obtain one of two measurement outcomes $a, b \in \{-1, 1\}$ . Let the expectation values of the local measurements be $\langle a_x b_y \rangle = \sum_{a, b} a\cdot b \cdot p(a, b\vert x, y)$ ; then the CHSH inequality reads

$I_{\rm CHSH}=\langle a_0 b_0 \rangle + \langle a_0 b_1 \rangle + \langle a_1 b_0 \rangle - \langle a_1 b_1 \rangle \leqslant 2. \qquad (8)$

One can try to maximize ICHSH using a local deterministic strategy; to do so one needs to achieve the highest possible values of $\langle a_0 b_0 \rangle, \ \langle a_0 b_1 \rangle, \ \langle a_1 b_0 \rangle$ and the lowest possible value of $ \langle a_1 b_1 \rangle$ . By choosing $p(1, 1\vert 0, 0)=p(1, 1\vert 0, 1)=p(1, 1\vert 1, 0)=1$ , the first three expectation values can be maximized. However, in such a situation also $p(1, 1\vert 1, 1)=1$ , so that ICHSH can at most be saturated to 2 and the inequality is respected. It can, however, be violated in a quantum setting. For example, considering the quantum state $\vert \Psi^+\rangle=\frac{1}{\sqrt{2}}(\vert 00\rangle + \vert 11\rangle)$ and measurement choices $A_0=\sigma_z$ , $A_1=\sigma_x$ , $B_0=\frac{1}{\sqrt{2}}(\sigma_z+\sigma_x)$ , $B_1=\frac{1}{\sqrt{2}}(\sigma_z-\sigma_x)$ , one can check that for the quantum expectation values $\langle a_\alpha b_\beta \rangle = \langle \Psi^+ \vert A_\alpha \otimes B_\beta \vert \Psi^+ \rangle $ we get $I_{\rm CHSH}=2\sqrt{2}$ . Here $\sigma_z$ and $\sigma_x$ are the Pauli spin matrices and $\vert 0\rangle$ and $\vert 1\rangle$ are the two eigenvectors of $\sigma_z$ . Therefore the joint probability distribution $p(a, b\vert x, y)$ cannot be explained in terms of a local deterministic model.
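The quantum value $2\sqrt{2}$ quoted above is easy to verify numerically; the short script below builds the state and observables of this example and evaluates $I_{\rm CHSH}$.

```python
import numpy as np

# Pauli matrices and the two-qubit state (|00> + |11>)/sqrt(2).
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)

A = [sz, sx]                                          # Alice's settings A_0, A_1
B = [(sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)]  # Bob's settings B_0, B_1

def corr(Aop, Bop):
    """Quantum expectation value <psi| A (x) B |psi>."""
    return np.real(psi @ np.kron(Aop, Bop) @ psi)

I_chsh = corr(A[0], B[0]) + corr(A[0], B[1]) + corr(A[1], B[0]) - corr(A[1], B[1])
print(f"I_CHSH = {I_chsh:.6f}  (classical bound 2, quantum maximum {2*np.sqrt(2):.6f})")
```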

3.2.2. Multi-party nonlocality and device independent approach.

Bell-type inequalities can also be constructed in the multi-party scenario. Their violation signifies nonlocal correlations distributed over many parties. A detailed account may be found in Brunner et al (2014).

Here we introduce the concept of nonlocality using the contemporary language of the device independent approach (DIA) (Brunner et al 2014). Recent successful hacking attacks on quantum cryptographic devices (Lydersen et al 2010) triggered this novel approach to quantum information theory, in which protocols are defined independently of the inner working of the devices used in the implementation. This has led to an avalanche of works in the field of device independent quantum information processing and technology (Brunner 2014, Pironio et al 2015).

The idea of the DIA is shown schematically in figure 2. We consider here the following scenario, usually referred to as the $(n, m, d)$ scenario. Suppose there are n spatially separated parties $A_1, \ldots, A_n$ . Each of them possesses a black box with m measurement choices (or observables) and d measurement outcomes. In each round of the experiment every party is allowed to perform one choice of measurement and acquires one outcome. The accessible information, after the measurements, is contained in a set of $(md){\hspace{0pt}}^n$ conditional probabilities $p(a_1, \ldots, a_n\vert x_1, \ldots, x_n)$ of obtaining outputs $a_1, a_2, \ldots, a_n$ , provided observables $x_1, x_2, \ldots, x_n$ were measured. The set of all such probability distributions forms a convex set; in fact, it is a polytope in the probability manifold. From the physical point of view (causality, special relativity) the probabilities must fulfill the no-signaling condition, i.e. the choice of measurement by the k-th party cannot be instantaneously signalled to the others. Mathematically it means that for any $k=1, \ldots, n$ and any pair of inputs $x_k, x^{\prime}_k$ , the condition

$\sum_{a_k} p(a_1, \ldots, a_k, \ldots, a_n\vert x_1, \ldots, x_k, \ldots, x_n)=\sum_{a_k} p(a_1, \ldots, a_k, \ldots, a_n\vert x_1, \ldots, x^{\prime}_k, \ldots, x_n) \qquad (9)$

is fulfilled.
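As a concrete two-party instance of condition (9), the sketch below builds the quantum distribution $p(a, b\vert x, y)$ of the CHSH example above from the spectral projectors of the local observables and checks that each party's marginal does not depend on the other party's input.

```python
import numpy as np
from itertools import product

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)

A = [sz, sx]
B = [(sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)]

def projector(obs, outcome):
    """Spectral projector of a +/-1-valued qubit observable: (I + outcome*obs)/2."""
    return (np.eye(2) + outcome * obs) / 2

# p[(a, b, x, y)] = <psi| Pi_a^x (x) Pi_b^y |psi>
p = {(a, b, x, y): np.real(psi @ np.kron(projector(A[x], a), projector(B[y], b)) @ psi)
     for a, b, x, y in product([+1, -1], [+1, -1], [0, 1], [0, 1])}

# No-signaling: Alice's marginal p(a|x) is independent of Bob's setting y (and vice versa by symmetry).
for a, x in product([+1, -1], [0, 1]):
    marginals = [sum(p[(a, b, x, y)] for b in [+1, -1]) for y in [0, 1]]
    assert np.allclose(marginals[0], marginals[1])
print("No-signaling condition satisfied for the quantum CHSH distribution.")
```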


Figure 2. Schematic representation of the device independent approach. In this approach several users can access uncharacterized black boxes (shown as squares), possibly prepared by an adversary. The users choose inputs $(x_1, \ldots, x_k, \ldots, x_n)$ for the boxes and acquire outputs $(a_1, \ldots, a_k, \ldots, a_n)$ as results. The joint probability with which the outputs appear is $p(a_1, \ldots, a_k, \ldots, a_n \vert x_1, \ldots, x_k, \ldots, x_n)$ .


Local correlations are defined via the concept of a local hidden variable λ with associated probability $q_{\lambda}$ . The correlations that the parties are able to establish in such a case are of the form

$p(a_1, \ldots, a_n\vert x_1, \ldots, x_n)=\sum_{\lambda} q_{\lambda}\prod_{k=1}^{n} D(a_k\vert x_k, \lambda), \qquad (10)$

where $D(a_k\vert x_k, \lambda)$ are deterministic probabilities, i.e. for any λ, $D(a_k\vert x_k, \lambda)$ equals one for some outcome, and zero for all others. What is important in this expression is that the measurements of different parties are independent, so that the probability is a product of terms corresponding to the different parties. In this n-party scenario the local hidden-variable model bounds the joint probabilities to obey the Bell inequalities, given as

$\sum_{a_1, \ldots, a_n, x_1, \ldots, x_n}\alpha_{a_1, \ldots, a_n}^{x_1, \ldots, x_n}\; p(a_1, \ldots, a_n\vert x_1, \ldots, x_n) \leqslant \mathcal{S}_L^n, \qquad (11)$

where $\alpha_{a_1, \ldots, a_n}^{x_1, \ldots, x_n}$ are some coefficients and $\mathcal{S}_L^n$ is the classical bound.

The probabilities that follow local (classical) correlations form a convex set that is also a polytope, denoted $\mathbb {P}$ (see figure 3). Its extremal points (or vertices) are given by $\prod_{i=1}^n D(a_i\vert x_i, \lambda)$ with fixed λ. The Bell theorem states that the quantum-mechanical probabilities, which also form a convex set $\mathcal{Q}$ , may stick out of the classical polytope (Bell 1964, Fine 1982). The quantum probabilities are given by the trace formula for the set of local measurements

$p(a_1, \ldots, a_n\vert x_1, \ldots, x_n)={\rm Tr}\left(\rho\; M_{a_1}^{x_1}\otimes\cdots\otimes M_{a_n}^{x_n}\right), \qquad (12)$

where ρ is some n-partite state and $M_{a_i}^{x_i}$ denote the measurement operators (POVMs) for any choice of the measurement xi and party i. As we do not impose any constraint on the local dimension, we can always choose the measurements to be projective, i.e. the measurement operators additionally satisfy $M_{a^{\prime}_i}^{x_i}M_{a_i}^{x_i}=\delta_{a^{\prime}_i, a_i} M_{a_i}^{x_i}$ .


Figure 3. Schematic representation of different sets of correlations: classical (grey area) and quantum (the area bounded by the thick line). Clearly, the former is a subset of the latter and, as shown by Bell (1964), they are not equal—there are quantum correlations that do not fall into the grey area. The black dots represent the vertices of the classical polytope $\mathbb {P}$ —deterministic classical correlations—satisfying deterministic local hidden-variable models. The dashed lines represent Bell inequalities. In particular, the black dashed line is tight and corresponds to a facet of the classical set.


This approach towards the Bell inequalities is explained in figure 3. Any hyperplane in the space of probabilities that separates the classical polytope from the rest determines a Bell inequality: everything that is above the upper horizontal dashed line is obviously nonlocal. But the most useful are the tight Bell inequalities corresponding to the facets of the classical polytope, i.e. its walls of maximal dimensions (lower horizontal dashed line).

In general $(n, m, d)$ scenarios, the complexity of characterizing the corresponding classical polytope is enormous. It is fairly easy to see that, even for $(n, 2, 2)$ , the number of its vertices (extremal points) is equal to $2^{2n}$ , hence it grows exponentially with n. Nevertheless, considerable effort has been made in recent years to characterize multi-party nonlocality (Brunner et al 2014, Tura et al 2014a, 2014b, Liang et al 2015, Tura et al 2015, Rosicka et al 2016).

Among many other device independent applications, nonlocality appears to be a valuable resource in random number generation, certification, expansion and amplification, which we outline in the following sections. In fact, it has been shown that Bell nonlocal correlations constitute a genuine resource, in the framework of a resource theory in which the allowed operations are restricted to device independent local operations (Gallego et al 2012, Vicente 2014).

3.3. Randomness: information theoretic approach

Before turning to the quantum protocols involving randomness, we discuss in this section randomness from the information-theory standpoint. It is worth mentioning the role of randomness in various applications, beyond its fundamental implications. In fact, randomness is a resource in many different areas—for a good overview see Motwani and Raghavan (1995) and Menezes et al (1996). Random numbers play an important role in cryptographic applications, in numerical simulations of complex physical, chemical, economical, social and biological systems, not to mention gambling. That is why much effort has been put into (1) developing good, reliable sources of random numbers, and (2) designing reliable certification tests for a given source of random numbers.

In general, there exist three types of random number generators (RNGs): 'true' RNGs, pseudo-RNGs and quantum RNGs. True RNGs are based on physical processes that are hard to predict, like noise in electrical circuits, thermal or atmospheric noise, radioactive decay, etc. Pseudo-RNGs rely on the output of a deterministic function applied to a shorter random seed, possibly generated by a true RNG. Finally, quantum RNGs use genuine quantum features to generate random bits.

We consider here a finite sample space, denoted by the set Ω. The notions of ideal and weak random strings describe distributions over Ω with certain properties. When a distribution is uniform over Ω, we say that it has ideal randomness. A uniform distribution over n-bit strings is denoted by Un. Uniform distributions are very natural to work with. However, when we are working with physical systems, the processes or measurements are usually biased, and the bit strings resulting from such sources are not uniform. A string with a nonuniform distribution, due to some (possibly unknown) bias, is said to have weak randomness, and the sources of such strings are termed weak sources.

Consider random variables denoted by the letters $(X, Y, \ldots)$ . Their values will be denoted by $(x, y, \ldots)$ . The probability that a random variable X takes a value x is denoted $p(X=x)$ , and when the random variable in question is clear we use the shorthand notation $p(x)$ . Here we briefly introduce the operational approach to defining the randomness of a random variable. In general, the degree of randomness or bias of a source is unknown, and it is insufficient to define a weak source by a random variable X with a fixed probability distribution $P(X)$ . Instead, one needs to model weak randomness by a random variable with an unknown probability distribution; in other words, one needs to characterize a set of probability distributions with the desired properties. If we suppose that the probability distribution $P(X)$ of the variable X comes from a set Ω, then the degree of randomness is determined by the properties of the set, or more specifically, by the least random probability distribution(s) in the set. The types of weak randomness differ with the types of distribution $P(X)$ on Ω and the set Ω itself—they are determined by the allowed distributions motivated by a physical source. There are many ways to classify weak random sources, and the interested reader may consult Pivoluska and Plesch (2014). Here we shall consider two types of weak random sources, Santha–Vazirani (SV) and min-entropy (ME) sources, which will be sufficient for our later discussions.

A Santha–Vazirani (SV) source (Santha and Vazirani 1986) is defined as a sequence of binary random variables $(X_1, X_2, \ldots, X_n)$ , such that

$\frac{1}{2}-\epsilon \;\leqslant\; p(x_i=1\vert x_1, \ldots, x_{i-1}) \;\leqslant\; \frac{1}{2}+\epsilon, \qquad (13)$

where the conditional probability $p(x_i=1\vert x_1, \ldots, x_{i-1})$ is the probability of the value $x_i=1$ conditioned on the values $x_1, \ldots, x_{i-1}$ . The parameter $0 \leqslant \epsilon \leqslant \frac{1}{2}$ represents the bias of the source. For fixed $\epsilon$ and n, the SV source represents a set of probability distributions over n-bit strings. If a random string satisfies (13), then we say that it is $\epsilon$ -free. For $\epsilon=0$ the string is perfectly random—a uniformly distributed sequence of bits Un. For $\epsilon=\frac{1}{2}$ , nothing can be inferred about the string, and it can even be deterministic. Note that in SV sources the bias can not only change for each bit Xi, but can also depend on the previously generated bits. The definition requires that, whenever $\epsilon \neq \frac{1}{2}$ , each produced bit must contain some amount of randomness, even conditioned on the previous ones.
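The following sketch simulates one member of the set of distributions allowed by (13): an adversarially biased SV source in which the bias of each bit is pushed to the allowed extreme $\frac{1}{2}+\epsilon$ in a direction that depends on the previous bit (the specific adversarial rule is an arbitrary assumption made for illustration).

```python
import numpy as np

rng = np.random.default_rng(seed=11)

def sv_source(n_bits, eps):
    """One admissible epsilon-SV process: bias each bit towards repeating the previous one."""
    bits = np.empty(n_bits, dtype=int)
    prev = 0
    for i in range(n_bits):
        p_one = 0.5 + eps if prev == 1 else 0.5 - eps   # always within [1/2 - eps, 1/2 + eps]
        bits[i] = int(rng.random() < p_one)
        prev = bits[i]
    return bits

x = sv_source(100_000, eps=0.1)
print(f"fraction of ones            : {x.mean():.3f}")
print(f"P(x_i = x_(i-1)) (empirical): {(x[1:] == x[:-1]).mean():.3f}  (ideal source: 0.5)")
```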

To generalize further, one considers a block source (Chor and Goldreich 1988), where the randomness is not guaranteed in every single bit, but rather for a block of n bits. Here, in general, the randomness is quantified by the min-entropy, defined as

$H_{\infty}(Y)=-{\rm{log}}_2\left[ {\rm{max}}_{y}\, p(y) \right] \qquad (14)$

for an n-bit random variable Y. For a block source, the randomness is guaranteed by bounding the probability of the most probable n-bit string appearing as the outcome of the variable—the string one would obtain simply by guessing the most probable element—provided that this probability is less than one. A block $(n, k)$ source can then be modeled, for n-bit random variables $(X_1, X_2, ..., X_n)$ , by requiring that

$H_{\infty}(X_i\vert X_1=x_1, \ldots, X_{i-1}=x_{i-1}) \geqslant k \qquad \text{for all } i \text{ and all } x_1, \ldots, x_{i-1}. \qquad (15)$

These block sources are generalizations of SV sources; the latter are recovered with $n=1$ and $\epsilon = 2^{-H_{\infty}(X)}-\frac{1}{2}$ . Block sources can be generalized further to sources of randomness of finite output size for which no internal structure is assumed—neither guaranteed randomness in every bit (SV sources) nor in every block of a certain size (block sources)—and the randomness is only guaranteed by the overall min-entropy. Such sources are termed min-entropy sources (Chor and Goldreich 1988) and are defined, for an n-bit random variable X, by the condition

$H_{\infty}(X) \geqslant k. \qquad (16)$

Therefore, a min-entropy source represents a set of probability distributions whose randomness is guaranteed only through a bound on the probability of the most probable element, as measured by the min-entropy.
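The min-entropy of definition (14) is straightforward to estimate for an empirical distribution; the sketch below compares it with the Shannon entropy for a biased 8-bit block source (the bias value is an arbitrary illustration), showing that $H_\infty$ is the more conservative of the two measures.

```python
import numpy as np

rng = np.random.default_rng(seed=5)

# Empirical distribution of 8-bit blocks built from independent bits with bias 0.6 (assumed).
n_blocks, block_len, p_one = 200_000, 8, 0.6
blocks = (rng.random((n_blocks, block_len)) < p_one).astype(int)
values = blocks @ (1 << np.arange(block_len))           # encode each block as an integer
probs = np.bincount(values, minlength=2**block_len) / n_blocks
probs = probs[probs > 0]

H_min = -np.log2(probs.max())                           # min-entropy: worst-case guessing probability
H_shannon = -np.sum(probs * np.log2(probs))             # Shannon entropy: average surprise

print(f"min-entropy      H_inf = {H_min:.3f} bits per 8-bit block")
print(f"Shannon entropy  H     = {H_shannon:.3f} bits per 8-bit block")
```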

Let us now briefly outline randomness extraction (RE), as it is one of the most common operations applied in the post-processing of weak random sources. Randomness extractors are algorithms that produce nearly perfect (ideal) randomness useful for applications. The aim of RE is to convert the randomness of a string from a weak source into a possibly shorter string of bits that is close to a perfectly random one. The closeness is defined as follows. Two random variables X and Y over the same domain Ω are ε-close if

$\frac{1}{2}\sum_{\omega\in\Omega}\left\vert\, p(X=\omega)-p(Y=\omega)\right\vert \;\leqslant\; \varepsilon. \qquad (17)$

With respect to RE, weak sources can be divided into two classes—extractable sources and non-extractable sources. Only from extractable sources can a perfectly random string be extracted by a deterministic procedure. Though there exist many non-trivial extractable sources (see for example Kamp et al (2011)), most of the natural sources, defined by entropies, are non-extractable, and in such cases non-deterministic (stochastic) randomness extractors are necessary.

Deterministic extraction fails for random strings from SV sources, but it is possible to post-process them with the help of an additional random string. As shown in Vazirani (1987), for any $\epsilon$ and two mutually independent $\epsilon$ -free strings from SV sources, it is possible to efficiently extract a single almost perfect bit ($\epsilon^\prime \rightarrow 0$ ). For two n-bit independent strings $X=(X_1, ..., X_n)$ and $Y=(Y_1, ...Y_n)$ , the post-processing function Ex is defined as

$Ex(X, Y)= X_1\cdot Y_1 \oplus X_2\cdot Y_2 \oplus \cdots \oplus X_n\cdot Y_n, \qquad (18)$

where $\oplus$ denotes the sum modulo 2. The function Ex is thus the inner product of the n-bit strings X and Y modulo 2. Randomness extraction from SV sources is sometimes referred to as randomness amplification, as two $\epsilon$ -free strings from SV sources are converted into a one-bit string that is $\epsilon^\prime$ -free with $\epsilon^\prime < \epsilon$ .
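A direct implementation of the two-source extractor (18), applied block by block to two independently generated weak strings, illustrates the reduction of the bias in the extracted bits; the bias value and block length used below are arbitrary choices, and the toy source uses independent bits rather than a fully general SV process.

```python
import numpy as np

rng = np.random.default_rng(seed=13)

def biased_bits(n, eps):
    """Toy epsilon-SV-type string: independent bits with probability 1/2 + eps of being 1."""
    return (rng.random(n) < 0.5 + eps).astype(int)

def inner_product_extractor(x, y, block):
    """Eq. (18): inner product modulo 2, applied to consecutive blocks of length `block`."""
    x = x[: len(x) // block * block].reshape(-1, block)
    y = y[: len(y) // block * block].reshape(-1, block)
    return (x * y).sum(axis=1) % 2

eps, block = 0.1, 32
x = biased_bits(1_000_000, eps)          # two mutually independent weak strings
y = biased_bits(1_000_000, eps)
z = inner_product_extractor(x, y, block)

print(f"bias of X        : {abs(x.mean() - 0.5):.4f}")
print(f"bias of Y        : {abs(y.mean() - 0.5):.4f}")
print(f"bias of Ex(X, Y) : {abs(z.mean() - 0.5):.5f}   ({len(z)} output bits)")
```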

Deterministic extraction is also impossible for min-entropy sources. Nevertheless, extraction is possible with the help of a seeded extractor, in which an extra resource of a uniformly distributed string, called the seed, is exploited. A function $Ex: \ \{0, 1\}^n \ \times \ \{0, 1\}^r \mapsto \{0, 1\}^m $ is a seeded $(k, \varepsilon)$ -extractor if, for every string from a block $(n, k)$ -source described by a random variable X, the output $Ex(X, U_r)$ is ε-close (in the sense of (17)) to the uniform distribution,

$\frac{1}{2}\sum_{s\in\{0, 1\}^m}\left\vert\, p\left(Ex(X, U_r)=s\right)-2^{-m}\right\vert \;\leqslant\; \varepsilon. \qquad (19)$

Here Ur (Um) is the uniformly distributed r-bit (m-bit) string. In fact, for a variable X, the min-entropy gives the upper bound on the number of extractable perfectly random bits (Shaltiel 2002). Randomness extraction is a well-developed area of research in classical information theory. There are many randomness extraction techniques using multiple strings (Nisan and Ta-Shma 1999, Shaltiel 2002, Dodis et al 2004, Raz 2005, Gabizon and Shaltiel 2008, Barak et al 2010), such as the universal hashing extractor, the Hadamard extractor, the DEOR extractor, the BPP extractor, etc, useful for different post-processing tasks.
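As one concrete example in the universal-hashing family mentioned above, the sketch below implements a seeded extractor based on a random binary Toeplitz matrix (a standard 2-universal hash widely used in randomness post-processing); the input length, output length and input bias are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(seed=17)

def toeplitz_extract(raw_bits, m, seed_bits):
    """Multiply the raw n-bit block by an m x n binary Toeplitz matrix defined by the seed."""
    n = len(raw_bits)
    assert len(seed_bits) == n + m - 1          # a Toeplitz matrix needs n + m - 1 seed bits
    rows = [seed_bits[i:i + n][::-1] for i in range(m)]   # row i depends only on (i - j): Toeplitz
    T = np.array(rows)
    return T @ raw_bits % 2                      # GF(2) matrix-vector product

n, m = 256, 128                                  # extract 128 bits from a 256-bit weak block
raw = (rng.random(n) < 0.6).astype(int)          # weak input: bits with bias 0.6 (assumed)
seed = rng.integers(0, 2, n + m - 1)             # uniform seed (in practice from a trusted source)

out = toeplitz_extract(raw, m, seed)
print(f"raw block ones-fraction    : {raw.mean():.3f}")
print(f"output block ones-fraction : {out.mean():.3f}")
```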

3.4. Nonlocality, random number generation and certification

Here we link the new form of randomness, i.e. the presence of nonlocality (in terms of Bell violation) in the quantum regime, to random number generation and certification. To do so, we outline how nonlocal correlations can be used to generate new types of random numbers, as has been experimentally demonstrated in Pironio et al (2010). Consider the Bell-experiment scenario (figure 1), as explained before. Two separated observers perform different measurements, labeled x and y, on two quantum particles in their possession and get measurement outcomes a and b, respectively. With many repetitions they can estimate the joint probability $p(a, b\vert x, y)$ for the outcomes a and b with the measurement choices x and y. With the joint probabilities the observers can check whether the Bell inequalities are respected. If a violation is observed, the outcomes are guaranteed to be random. The generation of these random numbers is independent of the working principles of the measurement devices; hence, this is device independent random number generation. In fact, there is a quantitative relation between the amount of Bell-inequality violation and the observed randomness. Therefore, these random numbers are (a) certifiable, (b) private, and (c) device independent (Colbeck 2007, Pironio et al 2010). The resulting string of random bits, obtained by N uses of the measurement devices, is made up of N pairs of outcomes, $(a_1, b_1, ..., a_N, b_N)$ , and their randomness is guaranteed by the violation of Bell inequalities.

There is, however, an important point to be noted. A priori, the observers do not know whether the measurement devices violate Bell inequalities or not. To confirm this they need to perform statistical tests, and such tests cannot be carried out in a predetermined way. Indeed, if the measurement settings were known in advance, an external agent could prepare completely deterministic devices whose outcomes reproduce a Bell-inequality violation even in the absence of nonlocal correlations. There is thus an apparent contradiction between the aim of building a random number generator and the requirement of random choices to test the nonlocal nature of the devices. However, it is natural to assume that the observers can make free choices when they are separated.

Initially it was speculated that the more particles are nonlocally correlated (in the sense of Bell violation), the stronger would be the observed randomness. However, this intuition is not entirely correct: as shown in Acín et al (2012), a maximal production of random bits can be achieved with a non-maximal CHSH violation. To establish a quantitative relation between nonlocal correlations and the generated randomness, let us assume that the devices follow quantum mechanics. There exist, thus, a quantum state ρ and measurement operators (POVMs) $M^x_a$ and $M^y_b$ of each device such that the joint probability distribution $P(a, b\vert x, y)$ can be expressed, through the Born rule, as

$P(a, b\vert x, y)={\rm Tr}\left(\rho \; M^x_a\otimes M^y_b\right), \qquad (20)$

where the tensor product reflects the fact that the measurements are local, i.e. there are no interactions between the devices while the measurement takes place. The set of quantum correlations consists of all such probability distributions. Consider a linear combination of them,

$\mathcal{S}=\sum_{a, b, x, y}\alpha^{xy}_{ab}\; P(a, b\vert x, y), \qquad (21)$

specified by real coefficients $\alpha^{xy}_{ab}$ . For a local hidden-variable model, with certain coefficients $\alpha^{xy}_{ab}$ , the Bell inequalities can then be expressed as

$\mathcal{S} \leqslant \mathcal{S}_L. \qquad (22)$

This bound can be violated ($\mathcal{S} > \mathcal{S}_L$ ) for some quantum states and measurements, indicating that the state contains nonlocal correlations.

Let us consider the measure of randomness quantified by the min-entropy. For a d-dimensional probability distribution $P(X)$ , describing a random variable X, the min-entropy is defined as $H_{\infty}(X)=-{\rm{log}}_2\left[ {\rm{max}}_x p(x) \right]$ . Clearly, for a perfectly deterministic distribution this maximum equals one and the min-entropy is zero. On the other hand, for a perfectly random (uniform) distribution, the entropy acquires its maximum value, ${\rm{log}}_2 d$ . In the Bell scenario, the randomness in the outcomes generated by the pair of measurements x and y reads $H_{\infty}(A, B\vert x, y)=-{\rm{log}}_2 c_{xy}$ , where $c_{xy}={\rm{max}}_{ab}P_Q(a, b\vert x, y)$ . For a given observed value $\mathcal{S} > \mathcal{S}_L$ , violating a Bell inequality, one can find the quantum realization, i.e. the quantum state and set of measurements, that minimizes the min-entropy of the outcomes $H_{\infty}(A, B\vert x, y)$ (Navascués et al 2008). Thus, for any violation of Bell inequalities, the randomness of a pair of outcomes satisfies

$H_{\infty}(A, B\vert x, y) \geqslant f(\mathcal{S}), \qquad (23)$

where f is a convex function that vanishes in the case of no Bell-inequality violation, $\mathcal{S} \leqslant \mathcal{S}_L$ . Hence, (23) states quantitatively that a violation of Bell inequalities guarantees some amount of randomness. Intuitively, if the joint probabilities satisfy (22), then for each setting $x, \ y$ and hidden cause λ the outcomes a and b can be deterministically assigned; the violation of (22) rules out this possibility. As a consequence, the observed correlations cannot be understood with a deterministic model and the outcomes are fundamentally undetermined at the local level.
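For the CHSH case, a frequently quoted analytic bound of this type, reported in Pironio et al (2010), takes the form $f(\mathcal{S}) = 1-\log_2\left(1+\sqrt{2-\mathcal{S}^2/4}\right)$. Taking this expression as given (the original reference should be consulted for the exact statement and its assumptions), the sketch below evaluates the certified min-entropy per pair of outcomes for a few illustrative values of the violation.

```python
import numpy as np

def min_entropy_bound(S):
    """Certified min-entropy per output pair as a function of the CHSH value S,
    using the analytic bound quoted above from Pironio et al (2010):
    f(S) = 1 - log2(1 + sqrt(2 - S^2/4)), for 2 <= S <= 2*sqrt(2)."""
    return 1.0 - np.log2(1.0 + np.sqrt(2.0 - S**2 / 4.0))

for S in [2.0, 2.2, 2.5, 2.7, 2 * np.sqrt(2)]:
    print(f"S = {S:.3f}  ->  H_min(A,B|x,y) >= {min_entropy_bound(S):.3f} bits")
```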

Although there are many different approaches to generating random numbers (Marsaglia 2008, Bassham et al 2010), the certification of randomness is highly non-trivial. This problem can, however, be solved in one stroke if the random sequence is produced in an experiment showing a Bell violation, as the violation certifies a new form of 'true' randomness that has no deterministic classical analogue.

3.5. Nonlocality and randomness expansion

Nonlocal correlations can also be used for the construction of novel types of randomness expansion protocols. In these protocols, a user expands an initial random string, known as a seed, into a larger string of random bits. Here, we focus on protocols that achieve this expansion by using randomness certified by a Bell-inequality violation. Since the first proposals in Colbeck (2007) and Pironio et al (2010), there have been several works studying Bell-based randomness expansion protocols, see for instance (Colbeck 2007, Pironio et al 2010, Colbeck and Kent 2011, Vazirani and Vidick 2012, Coudron and Yuen 2013, Miller and Shi 2016, Chung et al 2014, Miller and Shi 2014, Arnon-Friedman et al 2016). It is not the scope of this section to review the contributions of all these works, which in any case should be understood as a representative but non-exhaustive list. However, most of them consider protocols with the structure described in what follows. Note that the description aims at providing the main intuitive steps of a general randomness expansion protocol; technicalities are deliberately omitted (for details see the references above).

The general scenario consists of a user who is able to run a Bell test. The user thus has access to $n\geqslant 2$ devices on which he can implement m local measurements with d outputs. For simplicity, we restrict the description in what follows to protocols involving only two devices, which are also more practical from an implementation point of view. The initial seed is used to randomly choose the local measurement settings in the Bell experiment. The choice of settings does not need to be uniformly random. In fact, in many situations, there is a combination of settings in the Bell test that produces more randomness than the rest. It is then convenient to bias the choice of measurements towards these settings so that (i) the number of random bits consumed from the seed, denoted by Ns, is minimized and (ii) the amount of randomness produced during the Bell test is maximized.

The choice of settings is then used to perform the Bell test. After N repetitions of the Bell test, the user has acquired enough statistics for a proper estimation of the nonlocality of the generated data. If not enough confidence about a Bell violation is obtained in this process, the protocol is aborted or more data are generated. From the observed Bell violation, it is possible to bound the amount of randomness in the generated bits. This is often done by means of the so-called min-entropy, $H_{\infty}$ . In general, for a random variable X, the min-entropy is expressed in bits and is equal to $H_{\infty}=-\log_2\max_x P(X=x)$ . The observed Bell violation is used to establish a lower bound on the min-entropy of the generated measurement outputs. Usually, after this process, the user concludes that with high confidence the $N_g\leqslant N$ generated bits have an entropy at least equal to $R\leqslant N_g$ , that is $H_{\infty}\geqslant R$ .

This type of bound is what is needed to run the last step of the protocol: the final randomness distillation using a randomness extractor (Nisan and Ta-Shma 1999, Trevisan 2001). This consists of classical post-processing of the measurement outcomes, in general assisted by some extra Ne random bits from the seed, which maps the Ng bits with entropy at least R to R bits with the same entropy, that is, R random bits. Putting all things together, the final expansion of the protocol is given by the ratio $R/(N_s+N_e)$ .

Every protocol comes with a security proof, which guarantees that the final list of R bits is unpredictable to any possible observer, or eavesdropper, who could share correlated quantum information with the devices used in the Bell test. Security proofs are also considered for eavesdroppers who can go beyond the quantum formalism, yet without violating the no-signaling principle. All the works mentioned above represent important advances in the design of randomness expansion protocols. At the moment, it is for instance known that (i) any Bell violation is enough to run a randomness expansion protocol (Miller and Shi 2016) and (ii) in the simplest configuration presented here, there exist protocols attaining an exponential randomness expansion (Vazirani and Vidick 2012). More complicated variants, where devices are nested, even attain an unbounded expansion (Coudron and Yuen 2013, Chung et al 2014). Before concluding, it is worth mentioning another interesting scenario, in which a trusted source of public randomness is available. Even if public, this trusted randomness can safely be used to run the Bell test and does not need to be taken into account in the final rate.

3.6. Nonlocality and randomness amplification

Here we discuss the usefulness of nonlocal correlations for randomness amplification, a task related but in a way complementary to randomness expansion. While in randomness expansion one assumes the existence of an initial list of perfect random bits and the goal is to generate a longer list, in randomness amplification the user has access to a list of imperfect randomness and the goal is to generate a source of higher, ideally arbitrarily good, quality. As above, the goal is to solve this information task by exploiting Bell-violating correlations.

Randomness amplification based on nonlocality was introduced in Colbeck and Renner (2012). There, the initial source of imperfect randomness consisted of an SV source. Recall that the amplification of SV sources is impossible by classical means. A protocol was constructed, based on the two-party chained Bell inequalities, that was able to map an SV source with parameter $\epsilon < 0.058$ into a new source with $\epsilon$ arbitrarily close to zero. This result is only valid in an asymptotic regime in which the user implements the chained Bell inequality with an infinite number of measurements. Soon after, a more complicated protocol attained full randomness amplification (Gallego et al 2013), that is, it was able to map SV sources of arbitrarily weak randomness, $\epsilon<1/2$ , to arbitrarily good sources of randomness, $\epsilon\rightarrow 0$ . The result was again asymptotic, in the sense that to attain full randomness amplification the user now requires an infinite number of devices. Randomness amplification protocols have been studied in several other works, see for instance (Coudron and Yuen 2013, Bouda et al 2014, Chung et al 2014, Grudka et al 2014, Mironowicz et al 2015, Wojewódka et al 2016, Ramanathan et al 2016, Brandão et al 2016). As above, the scope of this section is not to provide a complete description of all the works studying the problem of randomness amplification, but rather to provide a general framework that encompasses most of them. In fact, randomness amplification protocols (see e.g. figure 4) have a structure similar to randomness expansion ones.


Figure 4. Scheme of randomness amplification using four devices, as in Brandão et al (2016). The devices are shielded from each other, as indicated by the black barriers. The local measurement choices in each run are governed by part of the SV source, x, and the corresponding outputs form the random bits a. After n runs a Bell test is performed (denoted Test). If the test shows a Bell-inequality violation, the outputs and the rest of the initial SV source, t, are fed into an extractor (denoted Extractor) in order to obtain the final output. If the test does not show a violation, the protocol is aborted.


The starting point of a protocol is a source of imperfect randomness. This is often modelled by an SV source, although some works consider a weaker source of randomness, known as a min-entropy source, for which the user only knows a lower bound on the min-entropy of the symbols generated by the source (Bouda et al 2014, Chung et al 2014). The bits of imperfect randomness generated by the user are used to perform N repetitions of the Bell test. If the observed Bell violation is large enough, with enough statistical confidence, the bits defining the final source are constructed from the measurement outputs, possibly assisted by new random bits from the imperfect source. Note that, contrary to the previous case, the extraction process cannot be assisted by a seed of perfect random numbers, as this seed could trivially be used to produce the final source. As in the case of expansion protocols, any protocol should be accompanied by a security proof showing that the final bits are unpredictable to any observer sharing a system correlated with the devices in the user's hands.

4. Quantum randomness and technology

Random numbers have been a part of human technology since ancient times. If Julius Caesar indeed said 'Alea iacta est' ('the die is cast') when he crossed the Rubicon, he referred to a technology that had already been in use for thousands of years. Modern uses for random numbers include cryptography, computer simulations, dispute resolution, and gaming. The importance of random numbers in politics, social science and medicine should also not be underestimated; randomized polling and randomized trials are essential methodology in these areas.

A major challenge for any modern randomness technology is quantification of the degree to which the output could be predicted or controlled by an adversary. A common misconception is that the output of a random number generator can be tested for randomness, for example using statistical tests such as Diehard/Dieharder (Brown 2004, Marsaglia 2008), NIST SP800-22 (Rukhin et al 2010), or TestU01 (L'Ecuyer and Simard 2007). While it is true that failing these tests indicates the presence of patterns in the output, and thus a degree of predictability, passing the tests does not indicate randomness. This becomes clear if you imagine a device that on its first run outputs a truly random sequence, perhaps from ideal measurements of radioactive decay, and on subsequent runs replays this same sequence from a recording it kept in memory. Any of these identical output sequences will pass the statistical tests, but only the first one is random; the others are completely predictable. We can summarize this situation with the words of John von Neumann: 'there is no such thing as a random number—there are only methods to produce random numbers' (von Neumann 1951).

How can we know that a process does indeed produce random numbers? In light of the difficulties in determining the predictability of the apparent randomness seen in thermal fluctuations and other classical phenomena, using the intrinsic randomness of quantum processes is very attractive. One approach, described in earlier sections, is to use device-independent methods. In principle, device-independent randomness protocols can be implemented with any technology capable of a strong Bell-inequality violation, including ions (Pironio et al 2010), photons (Giustina et al 2015, Shalm et al 2015), nitrogen-vacancy centres (Hensen et al 2015), neutral atoms (Rosenfeld et al 2011) and superconducting qubits (Jerger et al 2016).

Device-independent randomness expansion based on Bell inequality violations was first demonstrated using a pair of ${\rm Yb}^+$ ions held in spatially-separated traps (Pironio et al 2010). In this protocol, each ion is made to emit a photon which, due to the availability of multiple decay channels with orthogonal photon polarizations, emerges from the trap entangled with the internal state of the ion. When the two photons are combined on a beamsplitter, the Hong-Ou-Mandel effect causes a coalescence of the two photons into a single output channel, except in the case that the photons are in a polarization-antisymmetric Bell state. Detection of a photon pair, one at each beamsplitter output, thus accomplishes a projective measurement onto this antisymmetric Bell state, and this projection in turn causes an entanglement swapping that leaves the ions entangled. Their internal states can then be detected with high efficiency using fluorescence readout. This experiment strongly resembles a loophole-free Bell test, with the exception that the spatial separation of about one meter is too short to achieve space-like separation. Due to the low probability that both photons were collected and registered on the detectors, the experiment had a very low success rate, but this does not reduce the degree of Bell inequality violation or the quality of the randomness produced. The experiment generated 42 random bits in about one month of continuous running, or $1.6 \times 10^{-5}$ bits ${\rm s}^{-1}$ .

A second experiment, in principle similar but using very different technologies, was performed with entangled photons and high-efficiency detectors (Christensen et al 2013) to achieve a randomness extraction rate of 0.4 bits ${\rm s}^{-1}$ . While further improvements in speed can be expected in the near future (National Institutes of Standards and Technology 2011), at present device-independent techniques are quite slow, and nearly all applications must still use traditional quantum randomness techniques.

It is also worth noting that device-independent experiments consume a large quantity of random bits in choosing the measurement settings in the Bell test. Pironio et al used publicly available randomness sources drawn from radioactive decay, atmospheric noise, and remote network activity. Christensen et al used photon-arrival-time random number generators to choose the measurement settings. Although it has been argued that no additional physical randomness is needed in Bell tests (Pironio 2015), there does not appear to be agreement on this point. At least in practice if not in principle, it seems likely that there will be a need for traditional quantum randomness technology also in device-independent protocols.

If one does not stick to the device-independent approach, it is in fact fairly easy to obtain signals from quantum processes, and devices to harness the intrinsic randomness of quantum mechanics have existed since the 1950s. This began with devices to observe the timing of nuclear decay (Isida and Ikeda 1956), followed by a long list of quantum physical processes including electron shot noise in semiconductors, splitting of photons on beamsplitters, timing of photon arrivals, vacuum fluctuations, laser phase diffusion, amplified spontaneous emission, Raman scattering, atomic spin diffusion, and others. See Herrero-Collantes and Garcia-Escartin (2017) for a thorough review.

While measurements on many physical processes can give signals that contain some intrinsic randomness, any real measurement will also be contaminated by other signal sources, which might be predictable or of unknown origin. For example, one could make a simple random number generator by counting the number of electrons that pass through a Zener diode in a given amount of time. Although electron shot noise will make an intrinsically-random contribution, there will also be an apparently-random contribution from thermal fluctuations (Johnson–Nyquist noise), and a quite non-random contribution due to technical noises from the environment. If the physical understanding of the device permits a description in terms of the conditional min-entropy (see section 3.3)

$H_{\infty}(X_i\vert h_i)=-\log_2 \max_{x_i} P(X_i=x_i\vert h_i), \qquad (24)$

where Xi is the i'th output string and hi is the 'history' of the device at that moment, including all fluctuating quantities not ascribable to intrinsic randomness, then randomness extraction techniques can be used to produce arbitrarily-good output bits from this source. Establishing this min-entropy level can be an important challenge, however.

The prevalence of optical technologies in recent work on quantum random number generators is in part in response to this challenge. The high coherence and relative purity of optical phenomena allows experimental systems to closely approximate idealized quantum measurement scenarios. For example, fluorescence detection of the state of a single trapped atom is reasonably close to an ideal von Neumann projective measurement, with fidelity errors at the part-per-thousand level (Myerson et al 2008). Some statistical characterizations can also be carried out directly using the known statistical properties of quantum systems. For example, in linear optical systems shot noise can be distinguished from other noise sources based purely on scaling considerations, and provides a very direct calibration of the quantum versus thermal and technical contributions, without need for detailed modeling of the devices used. Considering an optical power measurement, the photocurrent I1 that passes in unit time will obey

${\rm var}(I_1)=A + B \langle I_1 \rangle + C \langle I_1 \rangle^2, \qquad (25)$

where A is the 'electronic noise' contribution, typically of thermal origin, $C \langle I_1 \rangle^2$ is the technical noise contribution, and $B \langle I_1 \rangle$ is the shot-noise contribution. Measuring $ {\rm var}(I_1)$ as a function of $\langle I_1 \rangle$ then provides a direct quantification of the noise contributed by each of these distinct sources. This methodology has been used to estimate entropies in continuous-wave phase diffusion random number generators (Xu et al 2012).
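The scaling analysis of equation (25) amounts to a simple polynomial fit of measured variance against mean photocurrent; the sketch below illustrates it on synthetic data with assumed noise coefficients (all numbers are invented for illustration, not taken from any experiment).

```python
import numpy as np

rng = np.random.default_rng(seed=23)

# Synthetic calibration data: var(I1) = A + B*<I1> + C*<I1>^2 with assumed coefficients.
A_true, B_true, C_true = 0.02, 0.50, 0.003          # electronic, shot-noise, technical terms (arbitrary)
mean_I = np.linspace(1.0, 50.0, 25)                  # swept optical power levels
var_I = A_true + B_true * mean_I + C_true * mean_I**2
var_I *= 1 + 0.02 * rng.normal(size=mean_I.size)     # add a little measurement scatter

# Least-squares fit of a quadratic in <I1>; the linear term isolates the shot-noise contribution.
C_fit, B_fit, A_fit = np.polyfit(mean_I, var_I, deg=2)
print(f"A (electronic) = {A_fit:.3f}, B (shot noise) = {B_fit:.3f}, C (technical) = {C_fit:.5f}")

# Fraction of the variance attributable to shot noise at a given operating power:
I_op = 30.0
shot_fraction = B_fit * I_op / (A_fit + B_fit * I_op + C_fit * I_op**2)
print(f"shot-noise fraction at <I1> = {I_op}: {shot_fraction:.2%}")
```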

To date, the fastest quantum random number generators are based on laser phase diffusion (Jofre et al 2011, Xu et al 2012, Abellán et al 2014, Yuan et al 2014), with the record at the time of writing being 68 Gbits ${\rm s}^{-1}$ (Nie et al 2015). These devices, illustrated in figure 5, work entirely with macroscopic optical signals (the output of lasers), which greatly enhances their speed and signal-to-noise ratios. It is perhaps surprising that intrinsic randomness can be observed in the macroscopic regime, but in fact laser phase diffusion (and before it maser phase diffusion) was one of the first predicted quantum-optical signals, described by Schawlow and Townes (1958).


Figure 5. Laser phase-diffusion quantum random number generator (LPD-QRNG). (a) Schematic diagram showing the components of an LPD-QRNG using a pulsed laser and single-bit digitization. A laser, driven with pulses of injection current, produces optical pulses with very similar wave-forms and with relative phases randomized by phase diffusion between the pulses. The pulses enter a single-mode-fiber unbalanced Mach–Zehnder interferometer (uMZI), which produces interference between subsequent pulses, converting the random relative phase into a strong variation of the power reaching the detector, a linear photodiode. A comparator produces a digital output as a function of the detector signal. (b) Statistics of the pulse shapes produced by the laser, obtained by blocking one arm of the interferometer and recording on an oscilloscope. The main image shows the distribution of the pulse shapes; warmer colors show higher probability density. Strong 'relaxation oscillations' are seen, but are highly reproducible; all traces show the same behavior. The side image shows a histogram taken in the region labelled in orange, with a narrow peak indicating very small variation in power from one pulse to the next. (c) The same acquisition strategy, but using both arms of the interferometer and thus producing interference. The variation due to the random phase $\Delta \phi$ is orders of magnitude stronger than the noise, and the minimum values approach zero power, indicating high interference visibility. The histogram shows the classic 'arcsine' distribution expected for the cosine of a random phase. (d) Illustration of the digitization process. The curve and points show expected and observed frequencies for the input voltage, approximating an arcsine distribution. The finite width of the peaks is a result of convolving the ideal arcsine distribution with the noise distribution, of order 10 mV. The comparator assigns a value $d=0$ or $d=1$ as a function of the input voltage. The probability of a noise-produced error can be bounded by considering the effect of noise on digitization, giving an experimentally-guaranteed min-entropy for the output bits.


Because stimulated emission is always accompanied by spontaneous emission, the light inside a laser cavity experiences random phase-kicks due to spontaneous emission. The laser itself has no phase-restoring mechanism; its governing equations are phase-invariant, and the phase diffuses in a random walk. As the kicks from spontaneous emission accumulate, the phase distribution rapidly approaches a uniform distribution on $[0, 2\pi)$ , making the laser field a macroscopic variable with one degree of freedom fully randomized by intrinsic randomness. The phase diffusion accumulated in a given time can be detected simply by introducing an optical delay and interfering earlier output with later output in an unbalanced Mach–Zehnder interferometer.

It is worth noting that the phase distribution is fully insensitive to technical and thermal contributions: an additional phase shift introduced by the environment or by an adversary is irrelevant once the phase, a cyclic variable, is already fully randomized, i.e. uniformly distributed on $[0, 2\pi)$ .

Considerable effort has gone into determining the min-entropy due to intrinsic randomness of laser phase-diffusion random number generators (Mitchell et al 2015), especially in the context of Bell tests (Abellán et al 2015). To date, laser phase diffusion random number generators have been used to choose the settings for all loophole-free Bell tests (Hensen et al 2015, Giustina et al 2015, Shalm et al 2015). Here we outline the modeling and measurement considerations used to bound the min-entropy of the output of these devices.

Considering an interferometer with two paths, short (S) and long (L), with relative delay τ, fed by the laser output $E(t) = \vert E(t)\vert \exp[i \phi(t)]$ , the instantaneous power that reaches the detector is

$$p_I(t) = p_{\rm S}(t) + p_{\rm L}(t) + 2\, {\mathcal V} \sqrt{p_{\rm S}(t)\, p_{\rm L}(t)}\, \cos \Delta \phi(t) \qquad\qquad (26)$$

where $p_{\rm S}(t) \equiv \frac{1}{4} \vert E(t)\vert ^2$ , $p_{\rm L}(t) \equiv \frac{1}{4} \vert E(t-\tau)\vert ^2$ , $\Delta \phi(t) = \phi(t) - \phi(t-\tau)$ and ${\mathcal V}$ is the interference visibility. Assuming τ gives sufficient time for full phase diffusion, $\Delta \phi(t)$ is uniformly distributed on $[0, 2\pi)$ due to intrinsic quantum randomness. The contributions of $p_{\rm S}(t)$ and $p_{\rm L}(t)$ , however, may reflect technical or thermal fluctuations, and constitute a contamination of the measurable signal $p_I(t)$ . The process of detection converts this to a voltage $V(t)$ , and in doing so adds other technical and thermal noises. Moreover, the necessarily finite speed of the detection system implies that $V(t)$ is a function not only of $p_I(t)$ , but also, to a lesser extent, of prior values $p_I(t^{\prime})$ , $t^{\prime}<t$ . This 'hangover', which is predictable from the previous values, must be accounted for so as not to overestimate the entropy in $p_I(t)$ .
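The following sketch evaluates equation (26) with an intrinsically random phase and small, assumed technical fluctuations of the single-arm powers, and then adds illustrative detection noise; it ignores the finite detector bandwidth ('hangover') discussed above. All parameter values are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n = 200_000                        # number of pulses (illustrative)

# Intrinsically random relative phase, uniform on [0, 2*pi).
dphi = rng.uniform(0.0, 2.0 * np.pi, size=n)

# Single-arm powers p_S, p_L with small technical/thermal fluctuations
# (1% rms around a nominal value, arbitrary units, assumed).
p_S = 1.0 + 0.01 * rng.normal(size=n)
p_L = 1.0 + 0.01 * rng.normal(size=n)
visibility = 0.98                  # interference visibility V (assumed)

# Equation (26): instantaneous power reaching the detector.
p_I = p_S + p_L + 2.0 * visibility * np.sqrt(p_S * p_L) * np.cos(dphi)

# Detection: convert power to voltage and add electronic noise
# (gain and noise level are purely illustrative).
gain = 0.25                        # volts per power unit (assumed)
sigma_noise = 0.010                # 10 mV rms electronic noise (assumed)
V = gain * p_I + rng.normal(0.0, sigma_noise, size=n)

# V follows the 'arcsine' law of cos(dphi), slightly broadened by the
# technical fluctuations and the electronic noise.
print("V spans %.3f V to %.3f V" % (V.min(), V.max()))
```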

Digitization is the conversion from the continuous signal V to a digital value d. Considering only the simplest case of binary digitization, we have

$$d = \begin{cases} 0, & V < V_0 \\ 1, & V \geqslant V_0 \end{cases} \qquad\qquad (27)$$

where V0 is the threshold voltage, itself a random variable influenced by a combination of thermal and technical noises.
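A minimal sketch of this digitization step, with an assumed arcsine-shaped input signal and illustrative noise levels (not the parameters of any real device), looks as follows:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 200_000

# Illustrative detector voltages: arcsine-shaped interference signal of
# equation (26), centred at 0.5 V with 0.25 V amplitude, plus 10 mV rms
# electronic noise (all values assumed).
dphi = rng.uniform(0.0, 2.0 * np.pi, size=n)
V = 0.5 + 0.25 * np.cos(dphi) + rng.normal(0.0, 0.010, size=n)

# Equation (27): compare against the threshold V0, which itself fluctuates
# with thermal/technical noise (assumed 5 mV rms, illustrative).
V0 = 0.5 + rng.normal(0.0, 0.005, size=n)
d = (V >= V0).astype(int)

print("fraction of ones: %.4f" % d.mean())  # close to 1/2 for a centred threshold
```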

We can now express the distribution of d as a function of the total noise $V_{\rm noise}$:

$$P(d=1 \,\vert\, V_{\rm noise}) = \frac{1}{2} + \frac{1}{\pi} \arcsin\!\left(\frac{V_{\rm noise}}{\Delta V}\right) \qquad\qquad (28)$$

where $2 \Delta V \propto 4 {\mathcal V} \sqrt{p_{\rm S}\, p_{\rm L}}$ is the peak-to-peak range of the signal due to the random $\Delta \phi$ . This derives from the 'arcsine' distribution that describes the cumulative distribution function of the cosine of a uniformly random phase.
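For completeness, the arcsine law invoked here follows from an elementary calculation (not spelled out in the original text): if $\Theta$ is uniform on $[0, 2\pi)$ then, for $x \in [-1, 1]$ ,

$$P(\cos\Theta \leqslant x) = 1 - \frac{1}{\pi}\arccos x = \frac{1}{2} + \frac{1}{\pi}\arcsin x,$$

so a noise voltage that shifts the effective threshold by $V_{\rm noise}$ within a signal of half-range $\Delta V$ biases the bit probability by $\frac{1}{\pi}\arcsin(V_{\rm noise}/\Delta V)$ , which is the content of equation (28).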

The noise contributions can all be measured in ways that conservatively estimate their variation; for example, by interrupting one or the other path of the interferometer we can measure the distributions of $p_{\rm S}$ and $p_{\rm L}$ , and by comparing the digitizer input to its output we can upper bound the variation in V0. With the measured distributions in hand, we can assign probabilities to $V_{\rm noise}$ and thus to the min-entropy of d. For example, if the total noise $V_{\rm noise}$ is described by a normal distribution with zero mean and width $\sigma_{\rm noise} = 10$ mV, and $\Delta V = 0.5$ V, a probability $P(d\vert V_{\rm noise}) > P(d=1\vert 8 \sigma_{\rm noise}) \approx \frac{1}{2} + 0.0511$ will occur only as often as $V_{\rm noise}$ exceeds $8 \sigma_{\rm noise}$ , which is to say with probability $\approx 6 \times 10^{-16}$ . Since $P(d\vert V_{\rm noise}) \leqslant \frac{1}{2} + 0.0511$ implies a single-bit min-entropy $H_{\infty}(d\vert V_{\rm noise}) \approx 0.86$ , a randomness extractor based on this min-entropy can then be applied to give fully random output strings.
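The quoted numbers can be reproduced directly from equation (28) and the Gaussian tail bound; the sketch below uses only the values given in the text ($\sigma_{\rm noise} = 10$ mV, $\Delta V = 0.5$ V):

```python
import math

sigma_noise = 0.010   # total noise, 10 mV rms (value from the text)
delta_V = 0.5         # half of the peak-to-peak signal range, 0.5 V (from the text)
k = 8                 # consider noise excursions up to k standard deviations

# Worst-case single-bit bias when the noise reaches k*sigma, equation (28).
p_max = 0.5 + math.asin(k * sigma_noise / delta_V) / math.pi
print("max P(d = 1 | 8 sigma): %.4f" % p_max)      # ~ 0.5 + 0.0511

# Probability that zero-mean Gaussian noise exceeds +8 sigma.
tail = 0.5 * math.erfc(k / math.sqrt(2.0))
print("P(noise > 8 sigma)    : %.1e" % tail)       # ~ 6e-16

# Single-bit min-entropy conditioned on the noise staying within 8 sigma.
h_min = -math.log2(p_max)
print("min-entropy per bit   : %.3f" % h_min)      # ~ 0.86
```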

It is worth emphasizing that the characterizations used to guarantee randomness of this kind of source are not measurements of the digital output of the source, which, as already mentioned, can never demonstrate randomness. Rather, they are arguments based on physical principles, backed by measurements of the internal workings of the device. To summarize the argument, the trusted random variable $\Delta \phi(t)$ is known to be fully randomized by the intrinsic quantum randomness of spontaneous emission. This statement relies on general principles of laser physics, such as Einstein's A and B coefficient argument linking spontaneous emission to stimulated emission, and the fact that lasers have no preferred phase, due to the time-translation invariance of physical law. The next step of the guarantee follows from a model of the interference process, equation (26), whose simplicity mirrors the simplicity of the experimental situation, in which single-mode devices (fibres) are used to ensure a low-dimensional field characterized only by the time variable. Finally, there is an experimental characterization of the noises and a simple computation to bound their effects on the distribution of outputs. The computation can and should be performed with worst-case assumptions, assuming for example that all noise contributions are maximally correlated, unless the contrary has been experimentally demonstrated.

5. Quantum randomness and future

Randomness is a fascinating concept that has absorbed human attention for centuries. Nowadays we are witnessing a novel situation, in which the theoretical and experimental developments of quantum physics allow us to investigate quantum randomness from completely new points of view. The present review provides an overview of the problem of quantum randomness, and covers the implications and new directions emerging from the studies of this problem.

From a philosophical and fundamental perspective, recent results have significantly improved our understanding of what can and cannot be said about randomness in nature using quantum physics. While the presence of randomness cannot be proven without making some assumptions about the systems, these assumptions are constantly being weakened, and it is an interesting open research problem to identify the weakest set of assumptions sufficient to certify the presence of randomness.

From a theoretical physics perspective, recent results have provided a much better understanding of the relation between non-locality and randomness in quantum theory. Still, the exact relation between these two fundamental concepts is not fully understood. For instance, a small amount of non-locality, or even of entanglement, sometimes suffices to certify the presence of maximal randomness in the measurement outputs of a Bell experiment (Acín et al 2012). The relation between non-locality and randomness can also be studied in the larger framework of no-signaling theories, that is, theories limited only by the no-signaling principle, which can go beyond quantum physics (Popescu and Rohrlich 1994). For instance, it is known that maximal randomness certification is impossible in these theories, while it is possible in quantum physics (de la Torre et al 2015).

From a more applied perspective, quantum protocols for randomness generation follow different approaches and require different assumptions. Until very recently, all quantum protocols required a precise knowledge of the devices used in the protocol and certified the presence of randomness by means of standard statistical tests. The resulting protocols are cheap and feasible to implement in practice, including as commercial products, and lead to reasonably high randomness generation rates. Device-independent solutions provide a completely different approach, in which no modeling of the devices is needed and the certification comes from a Bell inequality violation. Their implementation is, however, more challenging, and only a few, much slower, experimental realizations have so far been reported 26.

Given the importance of, and need for, random numbers in our information society, we expect important advances in all these approaches, resulting in a large variety of quantum-empowered solutions for randomness generation.

Acknowledgments

We thank anonymous referees for constructive criticism and valuable suggestions. We are very grateful to Krzysztof Gawędzki, Alain Aspect, Philippe Grangier and Miguel A F Sanjuan for enlightening discussions about non-deterministic theories and unpredictability in classical physics. We acknowledge financial support from the John Templeton Foundation, the European Commission (FETPRO QUIC, STREP EQuaM and RAQUEL), the European Research Council (AdG OSYRIS, AdG IRQUAT, StG AQUMET, CoG QITBOX and PoC ERIDIAN), the AXA Chair in Quantum Information Science, the Spanish MINECO (Grants No. FIS2008-01236, No. FIS2013-40627-P, No. FIS2013-46768-P FOQUS, FIS2014-62181-EXP, FIS2015-68039-P, FIS2016-80773-P, and Severo Ochoa Excellence Grant SEV-2015-0522) with the support of FEDER funds, the Generalitat de Catalunya (Grants No. 2014-SGR-874, No. 2014-SGR-875, No. 2014-SGR-966 and No. 2014-SGR-1295 and CERCA Program), and Fundació Privada Cellex.

Footnotes

  • 'Nothing happens at random; everything happens out of reason and by necessity', from the lost work Perí noû (On Mind), see Diels (1906), p 350, Freeman (1948), p 140, fr. 2.

  • 'All things happen by virtue of necessity', Laertius (1925), IX, 45.

  • 'Men have fashioned an image of chance as an excuse for their own stupidity', Diels (1906), p 407, Freeman (1948), p 158, fr. 119.

  • 'Epicurus saw that if the atoms traveled downwards by their own weight, we should have no freedom of the will, since the motion of the atoms would be determined by necessity. He therefore invented a device to escape from determinism (the point had apparently escaped the notice of Democritus): he said that the atom while traveling vertically downward by the force of gravity makes a very slight swerve to one side' Cicero (1933), I, XXV.

  • 'We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes'. Laplace (1951) p 4.

  • 'A process is said to be deterministic if its entire future course and its entire past are uniquely determined by its state at the present instant of time', Arnol'd (1973), p 1.

  • 10 

    See footnote 9.

  • 11 

    'The initial state of a mechanical system (the totality of positions and velocities of its points at some moment of time) uniquely determines all of its motion', Arnol'd (1989), p 4.

  • 12 

    See footnote 11.

  • 13 

    See footnote 11.

  • 14 

    Landau and Lifshitz (1960), p 1.

  • 15 

    'Le premier exemple que nous allons choisir est celui de l'équilibre instable; si un cône repose sur sa pointe, nous savons bien qu'il va tomber, mais nous ne savons pas de quel côté; il nous semble que le hasard seul va en décider', Poincaré (1912) page 4, ('The first example we select is that of unstable equilibrium; if a cone rests upon its apex, we know well that it will fall, but we do not know toward what side; it seems to us chance alone will decide' Newman (1956), vol 2, p 1382).

  • 16 

    '...ein ganz wesentliches Merkmal desjenigen, was man im gewöhnlichen Leben oder in unserer Wissenschaft als Zufall bezeichnet ... läßt sich ... kurz in die Worte fassen: kleine Ursache—große Wirkung', Smoluchowski (1918) ('...fundamental feature of what one calls chance in everyday life or in science allows a short formulation: small cause—big effect').

  • 17 

    Poincaré (1912) p 3.

  • 18 

    Newman (1956), vol 2, p 1381.

  • 19 

    In the mathematics of differential equations, Picard's existence theorem (also known as the Cauchy–Lipschitz theorem) ensures the existence and uniqueness of solutions to first-order equations with given initial conditions. Consider an initial value problem, say, $y^{\prime}(t)=f(t, y(t))$ with $y(t_0)=y_0$ . Assume also that $f(\cdot, \cdot)$ is uniformly Lipschitz continuous in y (i.e. the Lipschitz constant can be taken independent of t) and continuous in t. Then for some $\varepsilon >0$ there exists a unique solution $y(t)$ satisfying the initial condition on the interval $[t_0 - \varepsilon, t_0 + \varepsilon]$ .

  • 20 

    Boussinesq (1878), p 39. 'The movement phenomena should be divided into two major classes. The first one comprises those for which the laws of mechanics expressed as differential equations will determine by themselves the sequence of states through which the system will go and, consequently, the physico-chemical forces will not admit causes of different nature to play a role. On the other hand, to the second class we will assign movements for which the equations will admit singular integrals, and for which one will need a cause distinct from physico-chemical forces to intervene, from time to time, or continuously, without using any mechanical action, but simply to direct the system at each bifurcation point which will appear'. The 'singular integrals' mentioned by Boussinesq are the additional trajectories coexisting with 'regular' ones when conditions guaranteeing uniqueness of solutions are broken.

  • 21 

    Thus, in Norton's model, the new law of nature should, in particular, ascribe a probability $p(T)$ to the event that a point staying at rest at $r=0$ starts to move at time T.

  • 22 

    Similar things seem to happen also in so-called 'general no-signaling theories' where, in comparison with quantum mechanics, the only physical assumption concerning the behavior of a system is the impossibility of transmitting information with infinite velocity; see Tylec and Kuś (2015).

  • 23 

    Note that here we do not impose any constraint on the hidden variables; they could even be nonlocal. In fact, quantum theory becomes deterministic if one assumes the hidden variables to be nonlocal (Gudder 1970).

  • 24 

    In fact, it was Gleason (1975) who first pointed out that quantum contextuality may exist in dimensions greater than two. For a single qubit, i.e. for the especially simple case of a two-dimensional Hilbert space, one can explicitly construct non-contextual hidden variable models that describe all measurements (see Wódkiewicz (1985), Wódkiewicz (1995) and Scully and Wódkiewicz (1989)). In this sense, a single qubit does not exhibit intrinsic randomness. For consistency of approach, we should thus consider that intrinsic randomness could appear everywhere in quantum mechanics, with the exception of the quantum mechanics of single qubits. In this report we neglect this subtlety, and talk about intrinsic randomness for the whole of quantum mechanics without exceptions, remembering, however, Gleason's result.

  • 25 

    Of course, one could argue that such randomness only appears to be 'intrinsic', since it is essentially epistemic in nature and arises from the inaccessibility, or ignorance, of the information that resides in the nonlocal correlations. In other words, this kind of randomness at the local level is caused by our lack of knowledge of the global state and, further, it can be explained using deterministic nonlocal hidden variable models.

  • 26 

    An important discussion of the commercial and practical aspects of quantum random number generation, and of cryptography based on device-dependent and device-independent protocols, can be found in lecture No 7 by Aspect and Brune (2016).


Biographies

Manabendra Nath Bera

Manabendra Nath Bera is currently a post-doctoral researcher at ICFO-The Institute of Photonic Sciences, Barcelona. He received his PhD in experimental atomic physics in 2011 from Laboratoire Aimé Cotton, CNRS & Université Paris-Sud 11, Campus d'Orsay, France. He then shifted to theoretical physics, beginning with his first post-doctoral position at Harish-Chandra Research Institute, Allahabad, India. He carries out research in the area of quantum information theory and its possible role in understanding fundamental problems in physics, e.g. quantum foundations, thermodynamics, quantum optics, quantum field theory and relativity.

Antonio Acín

Antonio Acín is an ICREA Professor at ICFO-The Institute of Photonic Sciences, where he leads the Quantum Information Theory group. He received his PhD in Theoretical Physics in 2001 from the University of Barcelona. Acín's work focuses on quantum information theory, but also covers aspects of the foundations of quantum physics, quantum optics, quantum thermodynamics and condensed matter physics. He has been awarded three grants from the European Research Council: a Starting Grant, a Proof of Concept Grant and, more recently, a Consolidator Grant. In 2014 he was also awarded the AXA Chair in Quantum Information Science.

Marek Kuś

Marek Kuś received his PhD in Physics from Warsaw University. He is now a Professor at the Center for Theoretical Physics, Polish Academy of Sciences, and the head of the International Center for Formal Ontology at Warsaw Technical University. His research focuses on the mathematical foundations of quantum information theory, quantum chaotic phenomena, and the fundamentals of quantum theory.

Morgan W. Mitchell

Morgan W. Mitchell is ICREA Professor of Quantum Optics at ICFO - The Institute of Photonic Sciences. He got his PhD in Physics in 1999 from the University of California at Berkeley. His research interests include fundamental quantum optics, quantum sensing, quantum random number generation and Bell tests. His laboratory efforts include quantum optics with room-temperature, cold and ultra-cold atoms, squeezed light and entangled photons.

Maciej Lewenstein

Maciej Lewenstein is currently an ICREA Professor of Quantum Optics Theory at ICFO - The Institute of Photonic Sciences, Barcelona. He received his PhD from Universität Essen, Germany. His research interests span a wide range of theoretical physics, e.g. standard quantum optics, the physics of matter in ultra-intense and ultra-short laser pulses, attosecond physics, quantum information theory (mathematical foundations and implementations in atomic, quantum optical and condensed matter systems, and quantum simulators), and the physics of ultracold/degenerate gases of atoms, ions, molecules and photons (from weakly interacting systems, non-linear atom optics or optics, to strongly correlated systems, lattice gases, Hubbard models, etc).
