
Quantum collapse rules from the maximum relative entropy principle


Published 8 January 2016 © 2016 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft
Citation: Frank Hellmann et al 2016 New J. Phys. 18 013022. DOI: 10.1088/1367-2630/18/1/013022


Abstract

We show that the von Neumann–Lüders collapse rules in quantum mechanics always select the unique state that maximises the quantum relative entropy with respect to the premeasurement state, subject to the constraint that the postmeasurement state has to be compatible with the knowledge gained in the measurement. This way we provide an information theoretic characterisation of quantum collapse rules by means of the maximum relative entropy principle.


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

The dynamics of quantum states in the orthodox (von Neumann's) foundations of quantum mechanics consist of two different prescriptions: the unitary evolution and the so-called 'collapse' of a quantum state to a subspace encoding the knowledge gained in the outcome of a measurement. The mappings (rules) describing this collapse were originally formulated by von Neumann [1] and later improved by Lüders [2]. There are two different forms of collapse. When one knows only that a measurement corresponding to an observable (a self-adjoint operator with a discrete spectrum) $O$ has taken place, the 'weak' rule applies. It is defined as $\rho \mapsto {\displaystyle \sum }_{i\in I}{P}_{i}\rho {P}_{i}$, where ρ is the original quantum state (in general, a density operator), while $O={\displaystyle \sum }_{i\in I}{\lambda }_{i}{P}_{i}$ is a spectral decomposition with some countable index set I (hence, ${\displaystyle \sum }_{i\in I}{P}_{i}={\mathbb{I}}$, ${P}_{i}{P}_{j}={P}_{i}{\delta }_{{ij}}$, and ${\lambda }_{i}\in {\mathbb{R}}$ $\forall i,j\in I$). If a measurement corresponding to $O$ has resulted in a specific value ${\lambda }_{k}\in \{{\lambda }_{i}| i\in I\}$ associated to a projector ${P}_{k}\in \{{P}_{i}| i\in I\}$, then the 'strong' rule, $\rho \mapsto {P}_{k}\rho {P}_{k}/\mathrm{tr}(\rho {P}_{k})$, is applied.
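The two rules can be illustrated numerically. The following sketch is ours, not code from the paper; the projectors and the sample density matrix are arbitrary choices:

```python
# Illustrative sketch of the weak and strong collapse rules
# (example state and projectors are arbitrary choices, not from the paper).
import numpy as np

# Spectral projectors of an observable on C^3: P1 has rank 2, P2 rank 1.
P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0])

# A premeasurement density matrix rho (hermitian, positive, trace 1).
rho = np.array([[0.5 , 0.2 , 0.1 ],
                [0.2 , 0.3 , 0.05],
                [0.1 , 0.05, 0.2 ]])

# Weak rule: rho -> sum_i P_i rho P_i (removes coherences between blocks).
weak = P1 @ rho @ P1 + P2 @ rho @ P2

# Strong rule for the outcome lambda_1: rho -> P1 rho P1 / tr(rho P1).
strong = P1 @ rho @ P1 / np.trace(rho @ P1)

print(np.trace(weak), np.trace(strong))  # both ≈ 1
print(weak[0, 2])                        # inter-block coherence is gone
```

Note that both rules return valid density matrices: positivity and unit trace are preserved by construction.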

The negative of Umegaki's quantum relative entropy [3, 4], $D(\rho ,\sigma )=-S(\rho ,\sigma ):= \mathrm{tr}(\rho \mathrm{ln}\rho -\rho \mathrm{ln}\sigma )\in [0,\infty ]$, can be used as a measure of distinguishability, or relative information content, of the quantum state σ from the state ρ. The use of $D$ instead of $S$ follows Wiener's idea that the 'amount of information is the negative of the quantity defined as entropy' [5]. Note that we call $S=-D$ the relative entropy, following the convention of [6] that makes the Gibbs–Shannon and von Neumann entropies the special cases of $S$, after adding a constant: ${S}_{\mathrm{vN}}(\rho )=S(\rho ,{\mathbb{I}}/n)+\mathrm{log}(n)$.
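For concreteness, $D$ can be computed from eigendecompositions with the convention $0\,\mathrm{ln}\,0=0$. The sketch below is ours (the helper name and the sample state are our choices); the final assertion checks the stated relation between the von Neumann entropy and $S(\rho ,{\mathbb{I}}/n)$:

```python
# Sketch: Umegaki's D(rho, sigma) = tr(rho ln rho - rho ln sigma),
# with the convention 0 ln 0 = 0. Helper name and example are our choices.
import numpy as np

def umegaki_D(rho, sigma):
    """Return D(rho, sigma); np.inf if supp(rho) is not inside supp(sigma)."""
    kr = np.linalg.eigvalsh(rho)
    ks, Us = np.linalg.eigh(sigma)
    D = sum(k * np.log(k) for k in kr if k > 1e-12)    # tr(rho ln rho)
    for j, k in enumerate(ks):
        w = np.real(Us[:, j].conj() @ rho @ Us[:, j])  # <s_j| rho |s_j>
        if k > 1e-12:
            D -= w * np.log(k)                         # -tr(rho ln sigma) term
        elif w > 1e-12:
            return np.inf                              # rho outside supp(sigma)
    return D

# Check S_vN(rho) = S(rho, I/n) + ln(n) = -D(rho, I/n) + ln(n) on an example.
n = 2
rho = np.diag([0.7, 0.3])
S_vN = -(0.7 * np.log(0.7) + 0.3 * np.log(0.3))
assert np.isclose(S_vN, -umegaki_D(rho, np.eye(n) / n) + np.log(n))
```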

The function $D$ can be considered as a nonsymmetric distance: in general, $D(\rho ,\sigma )\ne D(\sigma ,\rho )$. If a given state is σ and we believe it to be ρ, it can be easier or harder to find our error than if their roles were reversed. Say, $\sigma =P$ with P some projector and $\rho ={\mathbb{I}}/n$. If we measure the property corresponding to ${\mathbb{I}}$ − P, a single measurement can tell us that the state is not σ, whereas no single measurement could reveal the same of ρ. See e.g. [7, 8] for an overview of reasons for using $D(\rho ,\sigma )$ as a measure of distinguishability and relative information content.
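The asymmetry example can be made explicit; in this sketch of ours we take $n=2$ and a rank-1 projector:

```python
# Sketch of the asymmetry example: sigma = P (rank-1 projector), rho = I/2.
import numpy as np

P = np.diag([1.0, 0.0])   # sigma = P
rho = np.eye(2) / 2       # maximally mixed state

# D(sigma, rho) = tr(sigma ln sigma) - tr(sigma ln rho). Since sigma is a
# pure state, tr(sigma ln sigma) = 0 (convention 0 ln 0 = 0), and
# ln rho = -ln(2) I.
D_sigma_rho = 0.0 - np.trace(P @ (np.log(0.5) * np.eye(2)))
print(D_sigma_rho)        # ln 2 ≈ 0.693, finite

# D(rho, sigma) is +inf: tr(rho ln sigma) hits ln 0 on ker(sigma),
# where rho carries weight 1/2.
```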

A key information theoretic property of the strong collapse rule is that the probability of measuring the value ${\lambda }_{k}$ again, after having measured it once, is 1, which follows from $\mathrm{tr}({P}_{k}\frac{{P}_{k}\rho {P}_{k}}{\mathrm{tr}(\rho {P}_{k})})=1$. Repeated measurements add no new information. Clearly, the state ${P}_{k}\rho {P}_{k}/\mathrm{tr}(\rho {P}_{k})$ is not the only state that has this property (note that Pk is not necessarily a rank 1 projector). What we demonstrate in this letter is that, among all states that have this property, the strong collapse rule selects the state that is least distinguishable from the initial state ρ, that is, it has the minimum relative information $D(\rho ,\cdot )$, in a suitably regularised sense. This allows for an information theoretic characterisation of the strong collapse rule: the state after measurement is the state that is least distinguishable from the previous state, while being compatible with the new information gained by the measurement.
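The repeatability property, and its non-uniqueness for a projector of rank greater than 1, can be checked directly (a sketch with arbitrary example matrices):

```python
# Sketch: repeatability of the strong collapse, and its non-uniqueness
# for a rank-2 projector P_k (example matrices are arbitrary choices).
import numpy as np

P = np.diag([1.0, 1.0, 0.0])          # rank-2 projector P_k
rho = np.array([[0.5, 0.2, 0.1],
                [0.2, 0.3, 0.0],
                [0.1, 0.0, 0.2]])

collapsed = P @ rho @ P / np.trace(rho @ P)
other = np.diag([0.5, 0.5, 0.0])      # a different state supported in ran(P_k)

# Measuring the same outcome again has probability 1 for both states.
print(np.trace(P @ collapsed), np.trace(P @ other))
```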

In order to derive the strong collapse rule, we will need two intermediate results. First we will show that the weak collapse rule produces the least distinguishable state among the block diagonal states. We then show that a weighted version of the strong collapse rule, $\rho \mapsto {\displaystyle \sum }_{i}{p}_{i}{P}_{i}\rho {P}_{i}/\mathrm{tr}(\rho {P}_{i})$, is the least distinguishable amongst the states with blocks of fixed trace. This rule can be interpreted as corresponding to a measurement where we believe that the result Pi occurred with probability pi. This intermediate step regularises the problem of a strong collapse, which is then obtained as a limiting case, by taking ${p}_{i}\mapsto {\delta }_{{ik}}$ with k = 1.

Our derivation of the collapse rules from the constrained maximisation of Umegaki's quantum relative entropy is of special importance in the context of epistemic and information theoretic approaches to the foundations of quantum theory. In this context, collapse rules have been considered as analogues of the Bayes–Laplace rule [9–12]. This analogy rested on mathematical and conceptual similarity, but was not derived from any single unifying principle. In the meantime, the Bayes–Laplace rule has been shown to be a special case of the constrained maximisation of the Kullback–Leibler relative entropy [13–16]. Our result provides the missing piece of the puzzle. Both the Bayes–Laplace and von Neumann–Lüders rules are special cases of a single epistemic principle of inductive inference (or, in other words, information theoretic state updating). This issue will be discussed in more detail in section 5.

After finishing this paper, we were informed about reference [17], where it is shown that the state $\sigma ={\displaystyle \sum }_{i}{P}_{i}\rho {P}_{i}$, where the Pi are rank 1 projectors, minimises the functional $D(\rho ,\sigma )$. This is a special case of our result for the weak collapse rule. The generalisation to arbitrary projectors is suggested in [18], but without a proof or an indication of a method of proving this statement. The technique we use to prove the general theorem is essentially different from the one applied in [17] (and it shows that the result for the rank $\gt 1$ case is more substantial and nontrivial than for the rank 1 case).

A closely related paper [26] deals with the same type of problem as addressed here, but using a different mathematical approach, allowing for treatment of the infinite dimensional case. Further conceptual and mathematical discussion associated with the results of both papers is carried out there and in [28]. A recent work [19] proves that a partial trace is also a constrained maximiser of quantum relative entropy.

2. The setup

We will consider the finite dimensional case. Hence, quantum states will be identified with non-negative matrices of trace 1, which form the convex set ${ \mathcal D }$ in the space of all hermitian n × n complex matrices.

The function $D(\cdot ,\cdot )$ is jointly convex in both arguments [20], which implies that $D(\rho ,\cdot )$ is convex on ${ \mathcal D }$ for all $\rho \in { \mathcal D }$. Due to the finite dimensionality of the problem, we can use the first order condition for the existence of a minimum of a convex function (see e.g. [21], Theorems 1.2.7 and 2.2.1): if ${ \mathcal V }$ is a convex subset of a finite dimensional topological vector space, and $f:{ \mathcal V }\to {\mathbb{R}}$ is convex then x is a global minimum of f on ${ \mathcal V }$ if and only if all directional derivatives of f at x are non-negative.

For a function differentiable at x this condition states that if x is in the interior of ${ \mathcal V }$ then the derivatives of f need to vanish. If x belongs to some stratum of the boundary of ${ \mathcal V }$ then all tangential derivatives need to vanish, whereas derivatives in the inward transversal directions need to be non-negative.

In our minimisation problem we have a subset ${ \mathcal V }\subset { \mathcal D }$ of density matrices that is defined by a linear equation and thus is a subsimplex. The function $D(\rho ,\cdot )$ restricts to a convex and differentiable function on ${ \mathcal V }$ and we want to find its minimum. Thus we simply differentiate in the directions preserving ${ \mathcal V }$ and require the derivatives to be non-negative. We will denote this condition by

Equation (1):

$$C_{{ \mathcal V }}^{\rho }(\sigma ) := {\partial }_{{ \mathcal V }}D(\rho ,\sigma )\geqslant 0.$$

The next two sections will be concerned with evaluating this set of equations.

3. Weak collapse

In the case of a weak collapse due to the measurement of $O={\displaystyle \sum }_{i}{\lambda }_{i}{P}_{i}$, the constraint set is given by the block diagonal density matrices

Equation (2):

$${{ \mathcal V }}_{w} := \left\{\sigma \in { \mathcal D }\ \middle|\ {P}_{i}\sigma {P}_{j}=0\ \ \forall i\ne j\right\}.$$

Condition (2) is equivalent to requiring that $\sigma \in {{ \mathcal V }}_{w}$ iff $\sigma ={\displaystyle \sum }_{i}{P}_{i}\sigma {P}_{i}$, as well as that $\sigma \in {{ \mathcal V }}_{w}$ iff $[O,\sigma ]=0$ (see [22] for a discussion).
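These equivalent characterisations are easy to verify on an example (our sketch; the observable eigenvalues 2 and 5 and the sample state are arbitrary):

```python
# Sketch checking the equivalent characterisations of V_w on an example.
import numpy as np

P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0])
O = 2.0 * P1 + 5.0 * P2               # observable with eigenvalues 2 and 5

sigma = np.array([[0.4, 0.1, 0.0],    # block diagonal w.r.t. P1, P2
                  [0.1, 0.3, 0.0],
                  [0.0, 0.0, 0.3]])

assert np.allclose(sigma, P1 @ sigma @ P1 + P2 @ sigma @ P2)  # sum_i P_i sigma P_i
assert np.allclose(O @ sigma, sigma @ O)                      # [O, sigma] = 0
```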

We can parametrise ${{ \mathcal V }}_{w}$ in terms of the singular value decomposition of σ. Every element of ${{ \mathcal V }}_{w}$ is of the form

Equation (3):

$$\sigma =U{\rm{\Lambda }}{U}^{*},$$

with ${\rm{\Lambda }}$ a trace 1 diagonal matrix with positive entries, and U a unitary that is a product $U={\displaystyle \prod }_{i}{U}_{i}$, where Ui acts as the identity on the range of ${\mathbb{I}}-{P}_{i}$. We have that $[U,{P}_{i}]=0$ and $[{\rm{\Lambda }},{P}_{i}]=0$, and thus, writing ${\sigma }_{i}=\sigma {| }_{\mathrm{ran}({P}_{i})}$, we have $f(\sigma )={\displaystyle \oplus }_{i}f({\sigma }_{i});$ that is, functions (in the sense of the functional calculus) act blockwise on the space ${{ \mathcal V }}_{w}$.

Let us first consider the variation ${\partial }_{{{ \mathcal V }}_{w}}\mathrm{tr}(\rho \mathrm{ln}(\cdot ))=0$ in the directions parametrised by the Ui. Given a function f(U) on a Lie group, we can take the directional derivative by looking at the parameter derivative of a one parameter group of diffeomorphisms on U. As multiplication in a Lie group is differentiable, we can pick the one parameter group of diffeomorphisms generated by left multiplication with the one-dimensional subgroup $\mathrm{exp}({tL})$,

Equation (4):

$${\phi }_{t}(U) := \mathrm{exp}({tL})\,U.$$

We then define the directional derivative in direction L as the derivative of the pushforward of f along ${\phi }_{t}$, ${\partial }_{L}f(\cdot )=\displaystyle \frac{{\rm{d}}}{{\rm{d}}t}{\phi }_{t}^{\sharp }f(\cdot ){| }_{t=0}$. For a function that is the trace of U in a particular representation this can be easily evaluated:

Equation (5):

$$\frac{{\rm{d}}}{{\rm{d}}t}{\phi }_{t}^{\sharp }\,\mathrm{tr}({AU}){\Big| }_{t=0}=\mathrm{tr}({ALU}).$$

A straightforward calculation shows that we further have $\displaystyle \frac{{\rm{d}}}{{\rm{d}}t}{\phi }_{t}^{\sharp }\mathrm{tr}({{AUBU}}^{*}){| }_{t=0}=$ $\mathrm{tr}({{ALUBU}}^{*})-\mathrm{tr}({{AUBU}}^{*}L)$.

Note that $[{L}_{i},{P}_{j}]=0$, and in particular ${L}_{i}{P}_{j}={\delta }_{{ij}}{L}_{i}$. The derivative then takes the form

Equation (6):

$${\partial }_{{L}_{i}}D(\rho ,\sigma )=-\mathrm{tr}\left(\left[\mathrm{ln}\,{\sigma }_{i},{\rho }_{i}\right]{L}_{i}\right),$$

where ${\rho }_{i} := {P}_{i}\rho {P}_{i}$.

We thus see that if σ and ${\displaystyle \sum }_{i}{P}_{i}\rho {P}_{i}$ are simultaneously diagonalisable, the above derivative vanishes. In fact, since $[\mathrm{ln}{\sigma }_{i},{\rho }_{i}]$ is traceless and $\{{L}_{i},{\rm{i}}{L}_{i}\}$ spans the space of all traceless matrices in the ith matrix block, this is also a necessary condition.

Let us next consider the variation in the direction of the spectrum, that is, the direction of ${\rm{\Lambda }}$. We are interested in the case where σ and ${\displaystyle \sum }_{i}{P}_{i}\rho {P}_{i}$ are simultaneously diagonalisable. Let ${\kappa }_{k}^{\sigma }$ and ${\kappa }_{k}^{\rho }$ be the eigenvalues of σ and ${\displaystyle \sum }_{i}{P}_{i}\rho {P}_{i}$, respectively. If ${\kappa }_{i}^{\rho }\ne 0$ and ${\kappa }_{i}^{\sigma }=0$ then $D(\rho ,\sigma )=\infty $, so this cannot be the minimum if a state with finite relative entropy exists, and we can disregard this case here.

Let us first consider the case that all ${\kappa }_{i}^{\rho }\ne 0$. We have the condition

Equation (7):

$${\partial }_{{{\rm{\Lambda }}}_{\sigma }}D(\rho ,\sigma )=-{\partial }_{{{\rm{\Lambda }}}_{\sigma }}\sum _{k}{\kappa }_{k}^{\rho }\,\mathrm{ln}\,{\kappa }_{k}^{\sigma }=0.$$

The derivatives ${\partial }_{{{\rm{\Lambda }}}_{\sigma }}$ have to preserve the trace. An overcomplete basis of such derivatives is given by ${\partial }_{{\kappa }_{k}^{\sigma }}-{\partial }_{{\kappa }_{l}^{\sigma }}$. Thus, for all $k,l$

Equation (8):

$$\left({\partial }_{{\kappa }_{k}^{\sigma }}-{\partial }_{{\kappa }_{l}^{\sigma }}\right)D(\rho ,\sigma )=-\frac{{\kappa }_{k}^{\rho }}{{\kappa }_{k}^{\sigma }}+\frac{{\kappa }_{l}^{\rho }}{{\kappa }_{l}^{\sigma }}=0.$$

So, the ratios of the eigenvalues of ${\displaystyle \sum }_{i}{P}_{i}\rho {P}_{i}$ and σ are fixed. As they both are trace 1, this implies they are the same.

Let us now assume that I is the index set of all i such that ${\kappa }_{i}^{\rho }=0$. If this set is nonempty then the above conditions cannot be satisfied. However, there is still a possibility that the minimum is on the boundary. The condition for a minimum on the boundary is weaker than the above. Namely, all derivatives in directions pointing inward toward the set need to be non-negative. Such directions can be written as a linear combination

Equation (9):

$$\sum _{i,j}{\alpha }_{{ij}}\left({\partial }_{{\kappa }_{i}^{\sigma }}-{\partial }_{{\kappa }_{j}^{\sigma }}\right),$$

with ${\alpha }_{{ij}}\geqslant 0$ for $i\in I$, $j\notin I$ and otherwise ${\alpha }_{{ij}}$ arbitrary, since the derivatives with negative coefficients at ${\partial }_{{\kappa }_{i}^{\sigma }}$ would otherwise point outside the set. It is then enough to check the basis derivatives

Equation (10):

$$\left({\partial }_{{\kappa }_{i}^{\sigma }}-{\partial }_{{\kappa }_{j}^{\sigma }}\right)D(\rho ,\sigma )=-\frac{{\kappa }_{i}^{\rho }}{{\kappa }_{i}^{\sigma }}+\frac{{\kappa }_{j}^{\rho }}{{\kappa }_{j}^{\sigma }}$$

Equation (11):

$$=\frac{{\kappa }_{j}^{\rho }}{{\kappa }_{j}^{\sigma }}\geqslant 0\qquad (i\in I,\ j\notin I),$$

and we see this is a global minimum.

Recall that if ${\kappa }_{i}^{\rho }\ne 0$ when ${\kappa }_{i}^{\sigma }=0$ then $D(\rho ,\sigma )=\infty $. We now also need to consider the case that ${\kappa }_{i}^{\rho }=0$ when ${\kappa }_{i}^{\sigma }\ne 0$. In that case we would get the full derivatives in the i direction, thus the equations (11) apply with equality, which cannot be satisfied unless all ${\kappa }_{j}^{\rho }=0$, which cannot occur in ${ \mathcal D }$.

Combining this with the above we have that

Equation (12):

$${C}_{{{ \mathcal V }}_{w}}^{\rho }(\sigma )\geqslant 0\iff \sigma =\sum _{i}{P}_{i}\rho {P}_{i}.$$

The state $\sigma ={\displaystyle \sum }_{i}{P}_{i}\rho {P}_{i}$ is the only state σ satisfying ${C}_{{{ \mathcal V }}_{w}}^{\rho }(\sigma )\geqslant 0$. The set ${{ \mathcal V }}_{w}$ is convex, so from (12) and convexity of $D(\rho ,\cdot )$, this is the unique global minimum.
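This minimality can be spot-checked numerically. The sketch below is ours (dimension, projectors, and random seed are arbitrary choices); all states involved are full rank, so no infinities arise:

```python
# Spot-check (sketch): among block diagonal states, sigma* = sum_i P_i rho P_i
# minimises D(rho, .). Example dimension, projectors, and seed are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def rel_entropy(rho, sigma):
    """D(rho, sigma) for full-rank sigma."""
    kr = np.linalg.eigvalsh(rho)
    ks, Us = np.linalg.eigh(sigma)
    ks = np.maximum(ks, 1e-15)          # guard against numerical noise
    ln_sigma = Us @ np.diag(np.log(ks)) @ Us.conj().T
    return (sum(k * np.log(k) for k in kr if k > 1e-12)
            - np.real(np.trace(rho @ ln_sigma)))

def random_state(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    M = A @ A.conj().T
    return M / np.trace(M)

P1 = np.diag([1.0, 1.0, 0.0]); P2 = np.diag([0.0, 0.0, 1.0])
rho = random_state(3)
sigma_star = P1 @ rho @ P1 + P2 @ rho @ P2
D_star = rel_entropy(rho, sigma_star)

for _ in range(200):
    s = random_state(3)
    s = P1 @ s @ P1 + P2 @ s @ P2       # project onto V_w (trace is preserved)
    assert rel_entropy(rho, s) >= D_star - 1e-9
```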

4. Strong collapse

The conditions defining 'strong' collapse that were specified in the Introduction lead us to a troubling situation, because for such states (containing zero eigenvalues) the relative entropy is almost always infinite. We will overcome this problem by deriving a generalised version of the strong collapse rule that is a quantum counterpart of Jeffrey's rule. The ordinary strong collapse rule will then be obtained by a limiting procedure.

Consider a constraint set, given in terms of ${p}_{i}\geqslant 0$ such that ${\displaystyle \sum }_{i}{p}_{i}=1$, by

Equation (13):

$${{ \mathcal V }}_{s} := \left\{\sigma \in { \mathcal D }\ \middle|\ {P}_{i}\sigma {P}_{j}=0\ \ \forall i\ne j,\ \ \mathrm{tr}(\sigma {P}_{i})={p}_{i}\ \ \forall i\in I\right\},$$

where $\{{P}_{i}| i\in I\}$ is again determined by the spectral decomposition of an observable $O={\displaystyle \sum }_{i\in I}{\lambda }_{i}{P}_{i}$. The set (13) can be interpreted as encoding the knowledge that the measurement outcome ${\lambda }_{i}$ corresponding to a projection Pi occurs with a probability pi.

Here we encounter a problem. If some pi is nonzero but $\mathrm{tr}(\rho {P}_{i})=0$, then every state in ${{ \mathcal V }}_{s}$ will have relative entropy $-\infty $ to ρ. Moreover, even if we subtract the infinite constant, we find that the regularised distance does not depend on the state in the block Pi, so there is no unique minimum. We will thus always assume that $\mathrm{tr}(\rho {P}_{i})\ne 0$ whenever ${p}_{i}\ne 0$.

The variation in the Ui directions goes through as before. However, the variation in the direction of the spectrum changes in that a basis is now given in terms of ${\partial }_{{\kappa }_{k}^{{\sigma }_{i}}}-{\partial }_{{\kappa }_{l}^{{\sigma }_{i}}}$, with ${\kappa }_{k}^{{\sigma }_{i}}$ and ${\kappa }_{l}^{{\sigma }_{i}}$ belonging to the same block Pi and thus being eigenvalues of ${\sigma }_{i}$. Thus only the ratios of eigenvalues within each block are fixed. This implies that the eigenvalues of ${\sigma }_{i}$ are uniformly scaled relative to the eigenvalues of ${\rho }_{i}$. The condition ${\displaystyle \sum }_{k}{\kappa }_{k}^{{\sigma }_{i}}={p}_{i}$ then fixes ${\sigma }_{i}$ to be ${p}_{i}{\rho }_{i}/\mathrm{tr}({\rho }_{i})$.

This shows that

Equation (14):

$${\partial }_{{{ \mathcal V }}_{s}}D(\rho ,\sigma )\geqslant 0\iff \sigma =\sum _{i}{p}_{i}\frac{{P}_{i}\rho {P}_{i}}{\mathrm{tr}(\rho {P}_{i})}.$$

The state $\sigma ={\displaystyle \sum }_{i}{p}_{i}\frac{{P}_{i}\rho {P}_{i}}{\mathrm{tr}({P}_{i}\rho {P}_{i})}$ is the only state σ satisfying ${\partial }_{{{ \mathcal V }}_{s}}D(\rho ,\sigma )\geqslant 0$.
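As with the weak rule, this can be spot-checked numerically (our sketch; the weights (0.9, 0.1), the dimension, and the seed are arbitrary choices):

```python
# Spot-check (sketch): with block traces fixed to (p1, p2), the weighted state
# sigma* = sum_i p_i P_i rho P_i / tr(rho P_i) minimises D(rho, .).
import numpy as np

rng = np.random.default_rng(1)

def rel_entropy(rho, sigma):
    """D(rho, sigma) for full-rank sigma."""
    kr = np.linalg.eigvalsh(rho)
    ks, Us = np.linalg.eigh(sigma)
    ks = np.maximum(ks, 1e-15)          # guard against numerical noise
    ln_sigma = Us @ np.diag(np.log(ks)) @ Us.conj().T
    return (sum(k * np.log(k) for k in kr if k > 1e-12)
            - np.real(np.trace(rho @ ln_sigma)))

def random_state(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    M = A @ A.conj().T
    return M / np.trace(M)

P1 = np.diag([1.0, 1.0, 0.0]); P2 = np.diag([0.0, 0.0, 1.0])
p1, p2 = 0.9, 0.1
rho = random_state(3)
sigma_star = (p1 * P1 @ rho @ P1 / np.trace(rho @ P1)
              + p2 * P2 @ rho @ P2 / np.trace(rho @ P2))
D_star = rel_entropy(rho, sigma_star)

for _ in range(200):
    s = random_state(3)
    b1, b2 = P1 @ s @ P1, P2 @ s @ P2                    # project onto V_w ...
    s = p1 * b1 / np.trace(b1) + p2 * b2 / np.trace(b2)  # ... fix block traces
    assert rel_entropy(rho, s) >= D_star - 1e-9
```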

The strong collapse is a limiting case of the above projection, with all pi going to zero except one, p1, corresponding to a projection P1 that, in turn, corresponds to a measurement result given by an eigenvalue ${\lambda }_{1}$. We obtain this by taking the weak limit

Equation (15):

$$\underset{{p}_{1}\to 1}{\mathrm{lim}}\ \sum _{i}{p}_{i}\frac{{P}_{i}\rho {P}_{i}}{\mathrm{tr}(\rho {P}_{i})}=\frac{{P}_{1}\rho {P}_{1}}{\mathrm{tr}(\rho {P}_{1})}.$$

Note that in the finite dimensional case that we consider here the weak topology and norm topology coincide.
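The convergence of this limit is easy to observe numerically (our sketch; the example state is an arbitrary choice):

```python
# Sketch of the limit (15): as p1 -> 1 the weighted state converges in norm
# (weak = norm topology in finite dimensions) to the strong-collapse state.
import numpy as np

P1 = np.diag([1.0, 1.0, 0.0]); P2 = np.diag([0.0, 0.0, 1.0])
rho = np.array([[0.5, 0.2, 0.1],
                [0.2, 0.3, 0.0],
                [0.1, 0.0, 0.2]])

strong = P1 @ rho @ P1 / np.trace(rho @ P1)
for p1 in [0.9, 0.99, 0.999]:
    sigma = (p1 * P1 @ rho @ P1 / np.trace(rho @ P1)
             + (1 - p1) * P2 @ rho @ P2 / np.trace(rho @ P2))
    print(p1, np.linalg.norm(sigma - strong))   # distance shrinks towards 0
```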

5. The foundational view

In the orthodox formulation of quantum mechanics the 'collapse rules' are postulated. Thus, they are not deduced from any other more fundamental principle. They can be derived from several different conditions, see [23, 24] for a review, but none of these conditions possesses the status of a fundamental principle of quantum theory. The weak collapse rule can be derived by taking the tensor product with an auxiliary state, followed by unitary evolution and a partial trace. This may serve as a derivation independent of interpretational issues (when this procedure is interpreted as an interaction with some ontic environment, it is usually considered as an instance of decoherence). However, no such construction exists for the strong rule. This fact, as well as the unclear relationship between the strong collapse rule and unitary evolution, renders the orthodox mathematical foundations conceptually insufficient and calls for further insights.

In general, an ontic interpretation of the quantum state leads to considering quantum collapse as a change of the 'state of being' of some 'material object/thing'. On the other hand an epistemic interpretation leads to considering quantum collapse as a change of the 'state of information' of some 'experiencing user/agent'. (There is also a corresponding difference in the meaning of the term 'measurement'.) In particular, the dynamical reduction approach of [25] belongs to the former class, providing an ontic explanation by means of a general dynamical principle from which the quantum collapse rule is derived. On the other hand, an epistemic interpretation of collapse rules as quantum mechanical analogues of the Bayes–Laplace rule $p(x)\mapsto p(x)p(b| x)/p(b)$ was proposed in [9–12]. However, no epistemic explanation, understood as a derivation from some fundamental principle of information theory (or statistical inference theory), has been offered. Our paper (as well as the closely related paper [26]) provides such a derivation.

Following the postulates of [27, 28] (which aim at reapproaching the foundations of quantum theory in the spirit of [2932]), we demonstrated that the mapping to the unique solution of constrained minimisation of the relative information $D$,

Equation (16):

$$\rho \mapsto \underset{\sigma \in { \mathcal Q }}{\mathrm{arginf}}\left\{D(\rho ,\sigma )\right\},$$

can serve as the general principle of quantum state change due to the acquisition of new information (represented by the constraints ${ \mathcal Q }$). This amounts to selecting the quantum state that is the least distinguishable from the original state among all states that are in a strict agreement with the new knowledge (represented by the constraints).

In order to derive the quantum collapse rules from the principle (16), we needed to identify the information theoretic constraints that define the situations of weak and strong collapse. The 'weak' collapse amounts to encoding the information that a specific observable $O$ has been subjected to measurement. A quantum state σ that carries such information has to be compatible with the possibility of measuring all eigenvalues of $O$ precisely. Such a situation can be characterised by the condition $[\sigma ,O]=0$ (or, equivalently, $[{P}_{i},\sigma ]=0$ $\forall {P}_{i}$). The 'strong' collapse should additionally result in a state that would reproduce the result of measurement of a particular eigenvalue with certainty (that is, with probability equal to 1). That is, given a projector P encoding the outcome λ of the measurement, the post-collapse density operator σ should satisfy the condition of a 'weak' collapse, as well as $\mathrm{tr}(P\sigma )=1$. This provides an interesting general insight into the structure of quantum theory: while it is possible to use (16) to derive various quantum state change rules without assuming the probabilistic interpretation carried under the label of the 'Born rule', the latter seems to be required to justify the choice of constraints leading to a specific class of rules, including the 'strong' collapse.

Our results can be considered as a quantum counterpart of the derivations [13–16] of the Bayes–Laplace rule from the constrained maximisation of the Kullback–Leibler relative entropy [33], $S(p,q) := -{\displaystyle \int }_{{ \mathcal X }}\mu (x)p(x)\mathrm{log}(p(x)/q(x))$, where $x\in { \mathcal X }$, while p and q are densities of probability measures with respect to a measure μ on ${ \mathcal X }$. The functional $S(p,q)$ is a special case of Umegaki's quantum relative entropy $S(\sigma ,\rho )$ for discrete ${ \mathcal X }$ and $[\sigma ,\rho ]=0$. This strengthens the analogy between the Bayes–Laplace and the von Neumann–Lüders rules: they are just two special cases of a single general principle of inductive inference, given by (16). From the Bayesian perspective, the state ρ is a prior, while σ, satisfying the constraints and maximising $S(\rho ,\sigma )$, is a posterior.
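The reduction to the classical case is immediate for commuting states; in this sketch of ours we take diagonal density matrices, so that Umegaki's $D$ and the Kullback–Leibler divergence of the spectra coincide:

```python
# Sketch: for commuting (here diagonal) states, Umegaki's D reduces to the
# classical Kullback-Leibler divergence of the spectra. Example values only.
import numpy as np

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.5, 0.25, 0.25])
rho, sigma = np.diag(p), np.diag(q)

D_umegaki = np.trace(rho @ (np.diag(np.log(p)) - np.diag(np.log(q))))
D_kl = np.sum(p * np.log(p / q))
assert np.isclose(D_umegaki, D_kl)
```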

6. Remarks

It has been known for quite a long time (see e.g. [34]) that a 'weak' collapse leads to an increase of the absolute entropy $-\mathrm{tr}(\rho \mathrm{log}\rho )$. Our result uncovers an unexpectedly strong inverse of this fact: a 'weak' collapse is a result of maximisation of the relative entropy $-\mathrm{tr}(\rho \mathrm{log}\rho -\rho \mathrm{log}\sigma )$ under specific constraints.
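The entropy increase under the weak collapse (a pinching) is also easy to verify numerically (our sketch; dimension, projectors, and seed are arbitrary):

```python
# Spot-check (sketch): a weak collapse never decreases von Neumann entropy.
import numpy as np

rng = np.random.default_rng(2)

def S_vN(rho):
    k = np.linalg.eigvalsh(rho)
    return -sum(x * np.log(x) for x in k if x > 1e-12)

P1 = np.diag([1.0, 1.0, 0.0]); P2 = np.diag([0.0, 0.0, 1.0])
for _ in range(100):
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    rho = A @ A.conj().T
    rho = rho / np.trace(rho)
    collapsed = P1 @ rho @ P1 + P2 @ rho @ P2
    assert S_vN(collapsed) >= S_vN(rho) - 1e-9
```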

All earlier results on derivation of weak and strong collapse rules from minimisation of two point functionals on the space of quantum states [22, 35–42] were obtained for (various) symmetric quantum information distances. The importance of our result stems from the importance of (the negative of) Umegaki's relative entropy in quantum information theory as opposed to symmetric quantum information distances, which do not carry a similar semantic significance. This statement can be approached either axiomatically or pragmatically. On the axiomatic side, $D(\rho ,\sigma )$ is characterised [43] by the direct sum property, invariance under automorphisms (so, in particular, unitaries), additive decomposition under conditional expectations (onto subalgebra), and measurability over the state space. These properties eliminate all above symmetric information distances. An extensive discussion of the reasons for (and, in particular, applications of) these properties can be found in [48]. Furthermore, $D(\rho ,\sigma )$ is a direct quantum generalisation of $-S(p,q)$, which can be also characterised 'dynamically' as a unique functional ${\rm{\Phi }}(p,q)$ such that the mapping $q\mapsto {\mathrm{arginf}}_{p\in C}\{{\rm{\Phi }}(p,q)\}$ satisfies a few very reasonable desiderata for information processing [15, 32, 44]. On the pragmatic side, $D(\rho ,\sigma )$ is widely used in quantum information theory as the most fundamental measure of distinguishability of quantum states (see e.g. [7, 8, 45–47]). Hence, from the perspective of quantum information theoretic approaches to foundations of quantum theory, our results provide an essential, new perspective on the mathematical form of collapse rules due to quantum measurement.

As noted by one of the referees, this leads to the question whether the results of this paper can be reproduced (or extended) in the setting of generalised probabilistic theories [49, 50]. This setting lacks a general analogue of the collapse rules, but it allows us to introduce a well-defined notion of information distance [51, 52] (which reduces to the Umegaki and Kullback–Leibler distances in the quantum mechanical and probabilistic cases, respectively; see the footnote below). Hence, the possible extension of our result to generalised probabilistic theories can bring in new foundational insights (in particular—as suggested by a referee—one can ask whether defining a post-measurement state as a minimiser of a specific information distance given some type of constraints preselects some type of theories). The main open technical problem is how to replace the use of block diagonal decomposition and variational analysis of the spectrum of operators by some other method. It may be that a restriction to a subclass of theories satisfying some sort of spectral condition (see e.g. [56, 57]) will be necessary for this. We hope to return to this problem in another paper.

Acknowledgments

We would like to thank Carlos S Guedes for many important and insightful discussions throughout the development of this result. We also thank Patrick Coles for informing us about [17, 18], and Daniel Ranard for some suggestions and comments. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. This research was also partially financed by the National Science Center of the Republic of Poland (Narodowe Centrum Nauki) through the grant number DEC-2011/01/N/HS3/03273.

Footnotes

  • This problem seems to reflect a quite similar issue in quantum logic, where the collapse rules are also not a part of the basic framework, so they require an additional justification or derivation (see e.g. [53]), pointing towards a possibility of some more fundamental principle. Such derivations were provided (under some assumptions) by means of minimisation of suitable symmetric distances (see e.g. [38, 39, 42]), corresponding (in some cases) to symmetric transition probability functionals. It was shown in [54] that nonsymmetric transition probability [55] plays a more fundamental role in quantum logic, but its relationship to nonsymmetric distances and to the problem of derivation of collapse rules was not investigated.
