
Uncertainty relations for angular momentum

Lars Dammeier, René Schwonnek and Reinhard F Werner

Published 25 September 2015 © 2015 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft
Citation: Lars Dammeier et al 2015 New J. Phys. 17 093046, DOI 10.1088/1367-2630/17/9/093046


Abstract

In this work we study various notions of uncertainty for angular momentum in the spin-s representation of SU(2). We characterize the 'uncertainty regions' given by all vectors, whose components are specified by the variances of the three angular momentum components. A basic feature of this set is a lower bound for the sum of the three variances. We give a method for obtaining optimal lower bounds for uncertainty regions for general operator triples, and evaluate these for small s. Further lower bounds are derived by generalizing the technique by which Robertson obtained his state-dependent lower bound. These are optimal for large s, since they are saturated by states taken from the Holstein–Primakoff approximation. We show that, for all s, all variances are consistent with the so-called vector model, i.e., they can also be realized by a classical probability measure on a sphere of radius $\sqrt{s(s+1)}.$ Entropic uncertainty relations can be discussed similarly, but are minimized by different states than those minimizing the variances for small s. For large s the Maassen–Uffink bound becomes sharp and we explicitly describe the extremalizing states. Measurement uncertainty, as recently discussed by Busch, Lahti and Werner for position and momentum, is introduced and a generalized observable (POVM) which minimizes the worst case measurement uncertainty of all angular momentum components is explicitly determined, along with the minimal uncertainty. The output vectors for the optimal measurement all have the same length $r(s),$ where $r(s)/s\to 1$ as $s\to \infty .$


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

The textbook literature on quantum mechanics seems to agree that the uncertainty relations for angular momentum, and indeed for any pair of quantum observables $A,B$, should be given by Robertson's inequality [25]

Equation (1)
${{\rm{\Delta }}}_{\rho }^{2}(A)\,{{\rm{\Delta }}}_{\rho }^{2}(B)\;\geqslant\; \tfrac{1}{4}{\left|\mathrm{tr}\,\rho \,[A,B]\right|}^{2}$

valid for any density operator ρ, with ${{\rm{\Delta }}}_{\rho }^{2}(A)$ denoting the variance of the outcomes of a measurement of A on the state ρ. Perhaps the main reason for the ubiquity of this relation in textbooks is that it is such a convenient intermediate step to the proof of uncertainty relations for position and momentum. In that case the right-hand side is ${{\hslash }}^{2}/4,$ independently of the state ρ. For any pair $A,B$ other than a canonical pair, however, the relation (1) makes a much weaker statement, requiring some prior information about the state. This raises the question: when, and with what bounds, is it true that

[Preparation uncertainty:] One cannot choose a state ρ so that ${{\rm{\Delta }}}_{\rho }^{2}(A)$ and ${{\rm{\Delta }}}_{\rho }^{2}(B)$ simultaneously become arbitrarily small.

Robertson's relation supports no such conclusion, but on the other hand such a statement does hold in many situations. In fact, in a finite dimensional context it is true whenever A and B do not have a common eigenvector. In this paper we will provide optimal bounds for angular momentum components, establishing the methods for deriving optimal bounds in the general case along the way.

The second reason that (1) is unsatisfactory is that it addresses only the preparation side of uncertainty, in the sense loosely described in the italicized sentence above. However, there is always also a measurement aspect to uncertainty, for which Heisenberg's γ-ray microscope [8] is a paradigm. The error-disturbance tradeoff would be stated as

[Error-disturbance uncertainty:] An approximate measurement of A of accuracy ${\rm{\Delta }}A$ disturbs the system in such a way that from the post-measurement state and the measurement result for A the distribution for observable B can only be inferred with accuracy ${\rm{\Delta }}B,$ where ${\rm{\Delta }}A$ and ${\rm{\Delta }}B$ cannot be simultaneously arbitrarily small.

It is often easier to think of the whole experiment as a joint measurement of A and B, and state relations of the kind:

[Measurement uncertainty:] For any measurement device with both an A-type and a B-type output, the marginals will have worst case errors ${\rm{\Delta }}A,$ ${\rm{\Delta }}B$ with respect to ideal measurements of A and B, satisfying a tradeoff relation.

Again, generic observables and angular momenta satisfy non-trivial relations of this kind. Errors ${\rm{\Delta }}A={\rm{\Delta }}B=0$ can occur only if A and B commute, i.e., under an even more stringent condition than for preparation uncertainty. In this paper we will provide some sharp measurement uncertainty relations for angular momentum, establishing along the way some methods which may be of interest in more general cases.

There is a third reason that one should not be satisfied with (1) with $A={L}_{1},B={L}_{2}:$ it involves only two of the three components of angular momentum. But there is no reason why tradeoff relations as described above should not be stated for more than two observables. For angular momentum this seems especially natural. Moreover, it seems natural to state relations for all components simultaneously, i.e., not only for the three components along the axes of an arbitrarily chosen Cartesian reference frame, but for the angular momenta along arbitrary rotation axes, restoring the rotational symmetry of the problem.

Indeed the idea that uncertainty should involve just pairs of observables can be traced to Bohr's habit of expressing complementarity as a relation between 'opposite' aspects, like 'in vitro' and 'in vivo' biology. This dualistic preference had more to do with his philosophy than with the actual structure of quantum mechanics. Other founding fathers of quantum mechanics did not share this preference. As Wigner said in an interview [34] in 1963:

I always felt also that this duality is not solved and in this I may have been under Johnny's (John von Neumann's) influence, who said, 'Well, there are many things which do not commute and you can easily find three operators which do not commute.' I also was very much under the influence of the spin where you have three variables which are entirely, so to speak, symmetric within themselves and clearly show that it isn't two that are complementary; and I still don't feel that this duality is a terribly significant and striking property of the concepts.

In this spirit, an uncertainty relation for triples of canonical operators was recently proposed and proved [16], and further generalizations are clearly possible. However, we will stick to angular momentum in this paper, and particularly seek to establish relations which do not break rotation invariance.

Our paper responds to an increasing interest in quantitative uncertainty relations. This interest is connected to an increasing number of experiments reaching the uncertainty dominated regime, so that rather than qualitative or order-of-magnitude statements one is more interested in the precise location of the quantum limits. Measurement uncertainty was made rigorous in [2, 5, 26]. There is also a controversial [3, 4] state-dependent version [22]. That adequate uncertainty relations are sometimes better stated in terms of the sum of variances rather than their product has been noted repeatedly [10, 14, 20]. There has been some renewed interest also in the uncertainty between angular momentum and angular position [7], angular momentum of certain states [24] and other non-standard complementary pairs [18].

1.1. Setting and notation

In physics angular momentum appears as orbital or as spin angular momentum. Our theory applies to both, but it must be noted that the bounds obtained do depend on the quantum number for ${{\bf{L}}}^{2}.$ For example, there are states with vanishing orbital angular momentum uncertainties (precisely the rotation invariant ones, i.e., s = 0) but none for an $s=\frac{1}{2}$ degree of freedom. Therefore, one first has to decompose the given space into irreducible angular momentum components (integer or half integer), and then use the results for the appropriate s. Hence we will consider throughout a system of spin s, with $s=1/2,1,3/2,\ldots $ in its $(2s+1)$-dimensional Hilbert space ${\mathcal{H}}={{\mathbb{C}}}^{2s+1}.$ The three angular momentum components will be denoted by Lk, $k=1,2,3,$ and the component along a unit vector ${\bf{e}}\in {{\mathbb{R}}}^{3}$ by ${\bf{e}}\cdot {\bf{L}}.$ We denote by $| m\rangle $ the eigenvectors of L3, so that ${L}_{3}| m\rangle =m| m\rangle $ where $-s\leqslant m\leqslant s$ and, with ${L}_{\pm }={L}_{1}\pm {\rm{i}}{L}_{2},$

Equation (2)
${L}_{\pm }\,| m\rangle =\sqrt{s(s+1)-m(m\pm 1)}\;| m\pm 1\rangle $

Rotation matrices, whether they are considered as elements of SO(3) or of SU(2), will typically be denoted by R, the corresponding matrix in the spin s representation by UR, and normalized Haar measure on SO(3) or SU(2) by dR. We will always set ${\hslash }=1.$ Observables are in general always allowed to be normalized positive operator valued measures, with a typical letter F. For a self-adjoint operator A the spectral measure is an observable in this sense, denoted by EA. For a component of angular momentum, i.e., $A={\bf{e}}\cdot {\bf{L}}$ we write ${E}_{{\bf{e}}}$ for short. For the unit vectors ${{\bf{e}}}_{k}$ along the axes we further abbreviate this to Ek.
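
As a concrete illustration of this setting, here is a minimal numerical sketch (Python with NumPy; the helper name spin_operators is ours, not the paper's). It builds the matrices L1, L2, L3 in the spin-s representation from the ladder relation (2) and verifies the commutation relation and the Casimir operator.

```python
# Minimal sketch (Python/NumPy; helper names are illustrative, not the
# paper's): construct L1, L2, L3 in the spin-s representation and check
# equation (2) together with [L1, L2] = i L3 and L^2 = s(s+1).
import numpy as np

def spin_operators(s):
    """L1, L2, L3 as (2s+1)x(2s+1) matrices in the |m> basis, m = s, ..., -s."""
    m = np.arange(s, -s - 1, -1)
    lp = np.zeros((len(m), len(m)), dtype=complex)   # L+ = L1 + i L2
    for i in range(1, len(m)):
        # <m+1| L+ |m> = sqrt(s(s+1) - m(m+1)), cf. equation (2)
        lp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))
    l1 = (lp + lp.conj().T) / 2
    l2 = (lp - lp.conj().T) / 2j
    l3 = np.diag(m).astype(complex)
    return l1, l2, l3

s = 3 / 2
L1, L2, L3 = spin_operators(s)
assert np.allclose(L1 @ L2 - L2 @ L1, 1j * L3)              # [L1, L2] = i L3
assert np.allclose(L1 @ L1 + L2 @ L2 + L3 @ L3,
                   s * (s + 1) * np.eye(int(2 * s + 1)))    # Casimir
```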

For the variance of the probability distribution obtained by measuring F on a state (density operator) ρ we write

Equation (3)
${{\rm{\Delta }}}_{\rho }^{2}(F)=\mathop{\mathrm{min}}\limits_{\xi }\int {(x-\xi )}^{2}\,\mathrm{tr}\,\rho F({\rm{d}}x)$

This minimum is taken over a quadratic expression in ξ, and it is attained when $\xi =\int x\,\mathrm{tr}\,\rho F({\rm{d}}x)$ is the mean value of the distribution. The most familiar case is that of the spectral measure of an operator A, in which case we abbreviate the variance by ${{\rm{\Delta }}}_{\rho }^{2}(A).$ Then the second moment $\int {x}^{2}\,\mathrm{tr}\,\rho {E}_{A}({\rm{d}}x)=\mathrm{tr}\,\rho {A}^{2}$ can also be expressed in terms of A, and we get

Equation (4)
${{\rm{\Delta }}}_{\rho }^{2}(A)=\mathrm{tr}\,\rho {A}^{2}-{\left(\mathrm{tr}\,\rho A\right)}^{2}$

We say that a unit vector $| \phi \rangle \in {\mathcal{H}}$ is a maximal weight vector if for some direction ${\bf{e}}\in {{\mathbb{R}}}^{3}$ it satisfies ${\bf{e}}\cdot {\bf{L}}| \phi \rangle =s| \phi \rangle .$ This is the same as saying that, for some rotation $R\in $ SU(2), $| \phi \rangle ={U}_{R}| s\rangle $ up to a phase. For such a vector we call $\rho =| \phi \rangle \langle \phi | $ a spin coherent state. These states are candidates for states of minimal uncertainty.

1.2. Summary of main results

We now describe the structure of our paper and the main results.

Section 2: Preparation uncertainty. The basic object of study is the variance ${{\rm{\Delta }}}_{\rho }^{2}({\bf{e}}\cdot {\bf{L}})$ of the angular momentum in direction ${\bf{e}}$ as a function of the unit vector ${\bf{e}},$ especially properties which hold for an arbitrary state ρ. After clarifying some general features and explicitly solving the two cases $s\leqslant 1$ (section 2.2), we look at the traditional setting of just two components ${L}_{1},{L}_{2}.$ The set of uncertainty pairs $\left({{\rm{\Delta }}}_{\rho }^{2}({L}_{1}),{{\rm{\Delta }}}_{\rho }^{2}({L}_{2})\right)$ is studied, and the fact that not both variances can be small is found to be well expressed by a lower bound not on the product but on the sum of the variances. We compute numerically (and exactly up to $s=3/2$) the best constants in

Equation (5)
${{\rm{\Delta }}}_{\rho }^{2}({L}_{1})+{{\rm{\Delta }}}_{\rho }^{2}({L}_{2})\geqslant {c}_{2}(s)$

and find that they asymptotically behave like ${c}_{2}(s)\sim {s}^{2/3}$ (section 2.4). For three components the uncertainty region is also studied in some detail. A prominent feature is again given by a linear bound [10]

Equation (6)
${{\rm{\Delta }}}_{\rho }^{2}({L}_{1})+{{\rm{\Delta }}}_{\rho }^{2}({L}_{2})+{{\rm{\Delta }}}_{\rho }^{2}({L}_{3})\geqslant s$

which is very easy to prove (see section 2.1, (17)).

Turning to features of the whole function ${\bf{e}}\mapsto {{\rm{\Delta }}}_{\rho }^{2}({\bf{e}}\cdot {\bf{L}}),$ we show in section 2.5 that for any ρ there is at least one direction ${\bf{e}}$ such that ${{\rm{\Delta }}}_{\rho }^{2}({\bf{e}}\cdot {\bf{L}})\geqslant s/2,$ i.e.

Equation (7)
$\mathop{\mathrm{max}}\limits_{{\bf{e}}}\ {{\rm{\Delta }}}_{\rho }^{2}({\bf{e}}\cdot {\bf{L}})\geqslant \frac{s}{2}$

This bound is optimal, since it is saturated by spin coherent states. We generalize from the maximum (seen as the ${L}^{\infty }$-norm) to all Lp-norms (proposition 1).

For large s equation (6) suggests the scaling $\sim 1/s$ (section 2.7). Indeed, the triples $({{\rm{\Delta }}}_{\rho }^{2}({L}_{1})/s,{{\rm{\Delta }}}_{\rho }^{2}({L}_{2})/s,{{\rm{\Delta }}}_{\rho }^{2}({L}_{3})/s)$ converge as $s\to \infty $ (theorem 4). The lower bound on the limit set is obtained by a generalization of Robertson's method for proving (1) (section 2.6), which for finite s reads

Equation (8)
$4\,{{\rm{\Delta }}}_{\rho }^{2}({L}_{1})\,{{\rm{\Delta }}}_{\rho }^{2}({L}_{2})\geqslant s(s+1)-{{\rm{\Delta }}}_{\rho }^{2}({L}_{1})-{{\rm{\Delta }}}_{\rho }^{2}({L}_{2})-{{\rm{\Delta }}}_{\rho }^{2}({L}_{3})$

where the components are ordered so that ${{\rm{\Delta }}}_{\rho }^{2}({L}_{1})\geqslant {{\rm{\Delta }}}_{\rho }^{2}({L}_{2})\geqslant {{\rm{\Delta }}}_{\rho }^{2}({L}_{3}).$ The upper bound in theorem 4 is provided by a family of states suggested by the Holstein–Primakoff approximation.

Section 3.1: Vector model and moment problems. We revisit the so-called vector model of angular momentum, a classical model which is still found in some textbooks. We show that it can correctly portray the moments up to second order (i.e., means and variances) of the angular momentum observables, but fails on higher moments and, of course, on correlations.

Section 3.2: Entropic uncertainty relations. We discuss entropic uncertainty relations only very briefly. We point out that the criteria 'variance' and 'entropy' may disagree on which of two distributions is 'more sharply concentrated'. This effect is illustrated by the uncertainty diagrams for s = 1. We show also that the general Maassen–Uffink bound [19], while suboptimal for s = 1, becomes sharp for $s\to \infty ,$ and determine a family of states saturating it.

Section 4: Measurement uncertainty. We consider two measures for the deviation of an approximate observable from an ideal reference, called metric error and calibration error. We then discuss uncertainty relations for the joint measurement of all angular momentum components. The output of such an observable is an angular momentum three-vector ${\boldsymbol{\eta }},$ from which one can obtain a measurement of the ${\bf{e}}$-component (for any unit vector ${\bf{e}}$) simply by taking ${\bf{e}}\cdot {\boldsymbol{\eta }}$ as the output. Such a marginal observable can in turn be compared with the quantum observable ${\bf{e}}\cdot {\bf{L}}.$ The uncertainty relation in this case gives a lower bound on the error in the worst case with respect to ${\bf{e}}.$ Our main result (theorem 12) is a determination of the optimal bound, and an observable saturating it. It turns out that the optimal observable is covariant with respect to rotations, and this implies that it simultaneously minimizes the maximal metric error and the maximal calibration error. All the output vectors have the same length ${r}_{\mathrm{min}}(s),$ which depends in a non-trivial way on s but is close to s for large s.

2. Preparation uncertainty

In this section we consider the preparation uncertainty, i.e., a property of a given state ρ. For every unit vector ${\bf{e}}\in {{\mathbb{R}}}^{3}$ we can form the variance of the angular momentum component ${\bf{e}}\cdot {\bf{L}},$ and hence study the function

Equation (9)
$v({\bf{e}})={{\rm{\Delta }}}_{\rho }^{2}({\bf{e}}\cdot {\bf{L}})$

on the unit sphere. For the purposes of this section, this function summarizes all the uncertainty properties of the state ρ, and all results in this section are statements about properties of this function, which are valid for all ρ. To visualize the function v, we can use a three-dimensional radial plot, i.e., the surface containing all vectors $v({\bf{e}}){\bf{e}},$ as ${\bf{e}}$ runs over all unit vectors. A typical radial plot is shown in figure 1. Often we are also interested in the components with respect to some Cartesian reference frame. In this case the best visualization is an uncertainty diagram, which represents the possible pairs/triples etc of variances in the same state. In our case this will be the set of pairs $(v({{\bf{e}}}_{1}),v({{\bf{e}}}_{2})),$ or triples $(v({{\bf{e}}}_{1}),v({{\bf{e}}}_{2}),v({{\bf{e}}}_{3})).$ The diagrams for s = 1 are shown in figure 4. In this diagram it can be seen that the uncertainty region is not convex in general. Since we are only interested in lower bounds, we therefore always take the monotone closure of the uncertainty region, i.e., we also include with every point the whole quadrant/octant of points in which one or more of the coordinates increase. This is described in more detail in section 2.3.

Figure 1. The function $v({\bf{e}})$ from equation (10), where Λ is diagonal.
Figure 2. The orbit of a point under permutations of the coordinates, and its convex hull.
Figure 3. Pure state uncertainties for s = 1. Left panel: the parabola after (19) with its permutations. Adding to every point the hexagon generated by its permutation orbit produces the solid shown in the right panel; this is precisely the set of all variance triples for pure states. Its monotone closure is shown in figure 4.
Figure 4. The monotone closure of the uncertainty region of spin 1. Since this turns out to be convex, it is also equal to the lower convex hull (see text). Left panel: two orthogonal spin components. The light gray area belongs to the monotone closure, but these points cannot be realized as uncertainty pairs. The parabolas outline the shape (compare also figure 3); the orange lines correspond to coherent states. Right panel: the analogue for three spin components. Projecting this body onto one coordinate plane gives the shape shown in the left panel.

It turns out that after a rotation to suitable principal axes (which has already been carried out in figure 1), the function v depends only on three real parameters ${\mu }_{1},{\mu }_{2},{\mu }_{3}.$ To see this we introduce the 3 × 3-matrix ${\rm{\Lambda }}={\rm{\Lambda }}(\rho )$ by

Equation (10)
$v({\bf{e}})=\displaystyle \sum _{{jk}}{e}_{j}{e}_{k}\,{{\rm{\Lambda }}}_{{jk}}(\rho )={\bf{e}}\cdot {\rm{\Lambda }}(\rho )\,{\bf{e}}$

Equation (11)
${{\rm{\Lambda }}}_{{jk}}(\rho )=\tfrac{1}{2}\,\mathrm{tr}\,\rho \left({L}_{j}{L}_{k}+{L}_{k}{L}_{j}\right)-{\lambda }_{j}{\lambda }_{k}$

Equation (12)
${\lambda }_{j}=\mathrm{tr}\,\rho {L}_{j}$

Since the Lk transform as a vector operator (i.e., with respect to the spin-1 representation of SU(2)) we see that by choice of an appropriate coordinate basis in ${{\mathbb{R}}}^{3}$ we can diagonalize Λ, i.e., we can choose ${{\rm{\Lambda }}}_{{jk}}(\rho )={\mu }_{j}{\delta }_{{jk}}$ with ${\mu }_{1}\geqslant {\mu }_{2}\geqslant {\mu }_{3}\geqslant 0.$

The eigenvalues $({\mu }_{1},{\mu }_{2},{\mu }_{3})$ of any matrix ${\rm{\Lambda }}(\rho )$ are also a possible triple of variances, namely for a suitably rotated state. In fact, we can find the uncertainty triples for all rotated versions of ρ quite easily: When R is the rotation matrix taking the eigenbasis of Λ to the basis ${{\bf{e}}}_{1},{{\bf{e}}}_{2},{{\bf{e}}}_{3}$ under consideration, then

Equation (13)
$v({{\bf{e}}}_{j})=\displaystyle \sum _{k}{\left({R}_{{jk}}\right)}^{2}{\mu }_{k}$

Now the squared rotation matrix is doubly stochastic, so by Birkhoff's theorem it is a convex combination of permutation matrices. We therefore find the variance triple in basis ${\bf{e}}$ in the convex hull of the six points, arising from the triple of ${\mu }_{k}$ by permutation. These six points lie in a plane orthogonal to the vector $(1,1,1),$ so they form a hexagon (see figure 2), which degenerates into a triangle if two of the ${\mu }_{k}$ are equal. One can easily check that the full hexagon is attained by squared rotation matrices.
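
The following small sketch (Python/NumPy, purely illustrative) checks this argument numerically: the entrywise square of an orthogonal matrix is doubly stochastic, so the rotated variance triple of equation (13) lies in the permutation hexagon of $({\mu }_{1},{\mu }_{2},{\mu }_{3}).$

```python
# Sketch: the entrywise square of a rotation matrix is doubly stochastic,
# so the variance triple in a rotated frame is a convex combination of
# permutations of (mu1, mu2, mu3), cf. equation (13).
import numpy as np

rng = np.random.default_rng(1)
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # a random orthogonal matrix
B = R**2                                       # entrywise square

assert np.allclose(B.sum(axis=0), 1) and np.allclose(B.sum(axis=1), 1)

mu = np.array([0.7, 0.2, 0.1])                 # eigenvalues of Lambda(rho)
v = B @ mu                                     # variance triple, rotated frame
print(v, v.sum())                              # the sum of the mu_k is preserved
```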

2.1. Basic bounds

For an L3-eigenstate $\rho =| m\rangle \langle m| $ we find $\lambda =(0,0,m)$ and

Equation (14)
$v({\bf{e}})=\tfrac{1}{2}\left(s(s+1)-{m}^{2}\right)\left(1-{e}_{3}^{2}\right)$

which reinforces that, for eigenstates, the variances in all directions are smallest for the maximal weight m = s. Maximal variance is attained for an equal weight mixture $\rho =1/2\left(| s\rangle \langle s| +| -s\rangle \langle -s| \right),$ which has ${L}_{3}$-variance ${s}^{2}.$ Hence, for all ${\bf{e}}:$

Equation (15)
${{\rm{\Delta }}}_{\rho }^{2}({\bf{e}}\cdot {\bf{L}})\leqslant {s}^{2}$

The average $\bar{v({\bf{e}})}$ over ${\bf{e}}$ with respect to the surface measure on the sphere, or equivalently the average of $v(R{\bf{e}})$ over Haar-random rotations R, is readily computed from (10), since the average of ${e}_{j}{e}_{k}$ over the unit sphere is just ${\delta }_{{jk}}/3.$ Therefore, from (10),

Equation (16)
$\overline{v({\bf{e}})}=\tfrac{1}{3}\left(s(s+1)-| {\boldsymbol{\lambda }}{| }^{2}\right)\geqslant \tfrac{s}{3}$

In the same simple way we can get an inequality for the variances along the three coordinate directions of a Cartesian coordinate system:

Equation (17)
${{\rm{\Delta }}}_{\rho }^{2}({L}_{1})+{{\rm{\Delta }}}_{\rho }^{2}({L}_{2})+{{\rm{\Delta }}}_{\rho }^{2}({L}_{3})=s(s+1)-| {\boldsymbol{\lambda }}{| }^{2}\geqslant s$

In both cases equality holds precisely for $| {\boldsymbol{\lambda }}| =s,$ i.e., if ρ is an eigenstate of one of the operators ${\bf{e}}\cdot {\bf{L}}$ for the maximum eigenvalue m = s.
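
A quick numerical sanity check of the identity behind (16) and (17) (Python/NumPy; it assumes the spin_operators helper from the sketch in section 1.1):

```python
# Sketch: for a random pure state, sum_k Var(L_k) = s(s+1) - |lambda|^2 >= s.
import numpy as np

s = 2
L = spin_operators(s)                       # helper from the earlier sketch
rng = np.random.default_rng(2)
dim = int(2 * s + 1)
phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phi /= np.linalg.norm(phi)

lam = np.array([np.real(phi.conj() @ Lk @ phi) for Lk in L])
var = np.array([np.real(phi.conj() @ (Lk @ Lk) @ phi) for Lk in L]) - lam**2

assert np.isclose(var.sum(), s * (s + 1) - lam @ lam)
assert var.sum() >= s - 1e-12
```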

2.2. Special features for $s=1/2$ and s = 1

For $s=1/2,$ it happens that Lj and Lk (i.e., up to a factor the Pauli matrices) anticommute for different $j,k,$ so that

Equation (18)
${{\rm{\Lambda }}}_{{jk}}(\rho )=\tfrac{1}{4}{\delta }_{{jk}}-{\lambda }_{j}{\lambda }_{k}$

The eigenvalues are $({\mu }_{1},{\mu }_{2},{\mu }_{3})=1/4(1,1,1-4{| \lambda | }^{2}).$ Of course, pure states are characterized by $| {\boldsymbol{\lambda }}| =\frac{1}{2}.$ The uncertainties are $1/4-{\lambda }_{j}^{2},$ and so the uncertainty region is described by a triangle.

The case s = 1 is still special because the 3 + 6 operators Lk and $({L}_{j}{L}_{k}+{L}_{k}{L}_{j})/2$ form a basis of the operators on ${{\mathbb{C}}}^{3}.$ Therefore, ρ can be reconstructed from $({\boldsymbol{\lambda }},{\rm{\Lambda }})$ and, in particular, the set of pure ρ can be characterized in terms of conditions on the eigenvalues ${\mu }_{k}$ and ${\lambda }_{k}.$ In order to analyze these conditions, let us take the representation of the group SU(2) by real orthogonal matrices. Now consider a vector $\psi \in {{\mathbb{C}}}^{3},$ which we can split into $\psi ={\mathfrak{R}}e\,\psi +{\rm{i}}\;{\mathfrak{I}}m\,\psi $ with ${\mathfrak{R}}e\,\psi ,{\mathfrak{I}}m\,\psi \in {{\mathbb{R}}}^{3}.$ Note that the real continuous function $t\mapsto \left\langle{\mathfrak{R}}e\left({{\rm{e}}}^{{\rm{i}}t}\psi \right),{\mathfrak{I}}m\left({{\rm{e}}}^{{\rm{i}}t}\psi \right)\right\rangle$ changes sign between t = 0 and $t=\pi /2,$ so we can choose a complex phase for ψ to make ${\mathfrak{R}}e\,\psi $ and ${\mathfrak{I}}m\,\psi $ orthogonal. Moreover, we can apply a rotation, so that ${\mathfrak{R}}e\,\psi $ and ${\mathfrak{I}}m\,\psi $ lie along the first two coordinate axes. Hence, up to a rotation, we have

Equation (19)
$\psi =\mathrm{cos}(t)\,{{\bf{e}}}_{1}+{\rm{i}}\,\mathrm{sin}(t)\,{{\bf{e}}}_{2},\qquad \left({{\rm{\Delta }}}_{\psi }^{2}({L}_{1}),{{\rm{\Delta }}}_{\psi }^{2}({L}_{2}),{{\rm{\Delta }}}_{\psi }^{2}({L}_{3})\right)=\left(\tau ,\;1-\tau ,\;1-4\tau (1-\tau )\right)$

where $t\in [-\pi ,\pi ]$ or $\tau ={(\mathrm{sin}t)}^{2}\in [0,1].$ In the three-component diagram the curve parameterized by τ is a parabola lying in a diagonal plane. This parabola, and the two copies arising by coordinate permutation are shown in figure 3, as well as the body of uncertainty triples of all pure states, which arises by adding to each point on the parabola the hexagon formed by its permutation orbit. A paper cutout model of this solid is provided as a supplement.

2.3. General minimization method

Consider now, a little more generally, any collection of Hermitian operators ${A}_{1},\ldots ,{A}_{n}.$ We can then form, for any state ρ, the variance n-tuple $({{\rm{\Delta }}}_{\rho }^{2}({A}_{1}),\ldots ,{{\rm{\Delta }}}_{\rho }^{2}({A}_{n})),$ and ask which region Ω in ${{\mathbb{R}}}^{n}$ is filled, when ρ runs over the whole state space. We call Ω the uncertainty region of the operator tuple. Typically, this is not a convex set, because ${{\rm{\Delta }}}_{\rho }^{2}(A)$ contains a term quadratic in ρ, which consequently does not respect convex mixtures. Ω will be simply connected (as a continuous image of the state space), but beyond that there are few general facts. It can happen that starting from a point in the uncertainty region we can leave the region by increasing one of the coordinates, i.e., the region encodes upper bounds on variances as well as lower bounds. This is clearly not relevant to the theme of uncertainty relations, where we ask for universal lower bounds only. We can therefore consider the monotone closure of the uncertainty region, by including all points with larger uncertainties, i.e.

Equation (20)
${{\rm{\Omega }}}^{+}={\rm{\Omega }}+{{\mathbb{R}}}_{+}^{n}=\left\{y\in {{\mathbb{R}}}^{n}\;| \;\exists x\in {\rm{\Omega }}\ \forall \alpha :\ {y}_{\alpha }\geqslant {x}_{\alpha }\right\}$

This is still not necessarily a convex set. We will denote the convex hull of ${{\rm{\Omega }}}^{+}$ by ${{\rm{\Omega }}}^{\vee }$ and call it the lower convex hull of Ω (see figure 4). It is this set which has an efficient characterization. Indeed, as a closed convex set it is the intersection of all half spaces containing it, and the monotonicity condition restricts these half spaces to those whose normal vector w has all components non-negative. In other words

Equation (21)
${{\rm{\Omega }}}^{\vee }=\left\{x\in {{\mathbb{R}}}^{n}\;\Big| \;\displaystyle \sum _{\alpha }{w}_{\alpha }{x}_{\alpha }\geqslant m(w)\ \text{for all}\ w\ \text{with}\ {w}_{\alpha }\geqslant 0\right\}$

Equation (22)
$m(w)=\mathop{\mathrm{inf}}\limits_{\rho }\displaystyle \sum _{\alpha }{w}_{\alpha }{{\rm{\Delta }}}_{\rho }^{2}({A}_{\alpha })=\mathop{\mathrm{inf}}\limits_{\rho }\mathop{\mathrm{inf}}\limits_{a}\,\mathrm{tr}\,\rho \displaystyle \sum _{\alpha }{w}_{\alpha }{\left({A}_{\alpha }-{a}_{\alpha }{\mathbb{1}}\right)}^{2}$

In this double infimum we can exchange the order, leading to two kinds of operations: With fixed ρ the global minimum over the aα is obviously at ${a}_{\alpha }=\mathrm{tr}\rho {A}_{\alpha }.$ On the other hand, with fixed aα the global minimum over ρ is computed by finding the ground state of the positive semidefinite operator $H(a)={\displaystyle \sum }_{\alpha }{w}_{\alpha }{\left({A}_{\alpha }-{a}_{\alpha }{\mathbb{1}}\right)}^{2}.$ An efficient algorithm is therefore obtained by alternating between these two steps. The upper estimates on m obtained in this way are non-increasing and in practice converge quite well, and independently of the starting value. However, we do not have a theorem to this effect. An analytic consequence of this algorithm (independent of convergence) is that we can restrict the infimum to pure states, since this is sufficient to get the ground state energies.
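
A minimal sketch of this alternating ('see-saw') procedure, under the stated assumptions (Python/NumPy; the function name and defaults are ours, not the paper's):

```python
# See-saw sketch of the alternating algorithm from this subsection:
# for fixed weights w, alternate between the optimal offsets
# a_alpha = <phi|A_alpha|phi> and the ground state of
# H(a) = sum_alpha w_alpha (A_alpha - a_alpha)^2.
import numpy as np

def lower_bound(ops, w, iters=200, seed=0):
    """Upper estimate of m(w) = inf_rho sum_alpha w_alpha Var_rho(A_alpha)."""
    dim = ops[0].shape[0]
    rng = np.random.default_rng(seed)
    phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    phi /= np.linalg.norm(phi)
    for _ in range(iters):
        # step 1: optimal offsets for the current state
        a = [np.real(phi.conj() @ A @ phi) for A in ops]
        # step 2: ground state of H(a)
        H = sum(wa * (A - aa * np.eye(dim)) @ (A - aa * np.eye(dim))
                for wa, A, aa in zip(w, ops, a))
        vals, vecs = np.linalg.eigh(H)
        phi = vecs[:, 0]
    return vals[0]   # non-increasing sequence of upper estimates on m(w)

# Example (with spin_operators from the section 1.1 sketch):
# L1, L2, L3 = spin_operators(1)
# lower_bound((L1, L2), (1, 1))  ->  0.4375 = 7/16 = c2(1) in practice
```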

The algorithm is then run for a suitable set of tuples $({w}_{1},\ldots ,{w}_{n}),$ so that for each run one obtains a tangent plane to ${{\rm{\Omega }}}^{\vee },$ but also the state ρ and, with it, the tuple of variances in Ω. We illustrate the results in figure 4 for the case of spin 1, and the operator tuples $({L}_{1},{L}_{2})$ and $({L}_{1},{L}_{2},{L}_{3}),$ respectively. For low spin these diagrams can be determined analytically (see the next subsection). The most prominent feature of the two-component diagram is the symmetric linear bound, which depends on s and is determined in section 2.4.

2.4. The linear two-component bound

For every s, let ${c}_{2}(s)$ be the best constant in the inequality

Equation (23)
${{\rm{\Delta }}}_{\rho }^{2}({L}_{1})+{{\rm{\Delta }}}_{\rho }^{2}({L}_{2})\geqslant {c}_{2}(s)$

For $s=1/2,1$ it is readily computed from the eigenvalues of Λ given in section 2.2. For arbitrary s we can use a slightly simplified version of the variational principle (21). We have ${w}_{1}={w}_{3}=1,$ and can assume that ${a}_{1}=0$ by rotation invariance around the two-axis. Thus

Equation (24)
${c}_{2}(s)=\mathop{\mathrm{inf}}\limits_{\phi }\mathop{\mathrm{inf}}\limits_{a}\ \langle \phi | \,{L}_{1}^{2}+{\left({L}_{3}-a{\mathbb{1}}\right)}^{2}\,| \phi \rangle $

where the first infimum runs over all pure states (for fixed a, a ground state problem) and the second over real a (for fixed ϕ, the optimal a is the expectation value of L3). One notes that in this operator only matrix elements with even $m-m^{\prime} $ are non-zero, so the problem can be further reduced. For up to $s=3/2$ it effectively leads to two-dimensional ground state problems. In this way (resp. by using the results of section 2.2) we get

Equation (25)
${c}_{2}(1/2)=\tfrac{1}{4}$

Equation (26)
${c}_{2}(1)=\tfrac{7}{16}$

Equation (27)
${c}_{2}(3/2)\approx 0.6009$
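
These values can be reproduced with a small numerical cross-check of (24) (Python with NumPy/SciPy, reusing the spin_operators helper from the section 1.1 sketch):

```python
# Sketch: c2(s) as the minimum over a of the ground-state energy of
# L1^2 + (L3 - a)^2, cf. equation (24).
import numpy as np
from scipy.optimize import minimize_scalar

def c2(s):
    L1, L2, L3 = spin_operators(s)          # helper from section 1.1
    one = np.eye(int(2 * s + 1))
    def ground(a):                          # ground energy for fixed a
        H = L1 @ L1 + (L3 - a * one) @ (L3 - a * one)
        return np.linalg.eigvalsh(H)[0]
    return minimize_scalar(ground, bounds=(0, float(s)), method="bounded").fun

print(c2(1 / 2), c2(1))                     # 0.25 and 0.4375 = 7/16
```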

Note that the bound ${c}_{2}(1)$ was already obtained in [10]. It is readily seen numerically that ${c}_{2}(s)$ increases with s, but sub-linearly. This means that if we scale the diagram of ${{\rm{\Omega }}}^{\vee }$ (see figure 4, right) by a factor $1/s$ so that the bottom triangle described by (17) stays fixed, the two-component inequality excludes an asymptotically small prism around the axes. Figure 5 shows the asymptotic behavior of c2 in a log–log plot, which suggests that

Equation (28)

Figure 5. Left: log–log plot of the numerical calculations of the two-component bound ${c}_{2}(s)$ (black) together with the fit (28) (blue). Right: numerics and the fit for small s.

2.5. Power mean and maximal uncertainty

A natural way to characterize states with small variance is to look for the maximum of the variance function $v({\bf{e}})$ defined in (10). An uncertainty relation would then put a lower bound c(s) on this maximum. In other words we would like to prove the following statement: for every state ρ there is some direction ${\bf{e}}$ such that ${{\rm{\Delta }}}_{\rho }^{2}({\bf{e}}\cdot {\bf{L}})$ is at least c(s). By considering coherent states we can immediately see that $c(s)\leqslant s/2.$ The following proposition shows that coherent states in fact have minimal variance in terms of this criterion, so that equality holds.

Such a result can be seen as one end of a one-parameter family of criteria, of which (16) is the other end: we can judge the 'size' of the function v by its ${{\mathcal{L}}}^{p}$ norm, of which the maximum is the special case $p=\infty ,$ and the mean the case p = 1. We therefore formulate a proposition to cover all these cases.

Proposition 1. For every $s\in {\mathbb{N}}/2$ and every $p\in [1,\infty ]$ there is a constant $c(p,s)$ such that, for every density operator ρ in the spin s representation

Equation (29)
${\left(\int {\left|v({\bf{e}})\right|}^{p}\,\frac{{\rm{d}}{\bf{e}}}{4\pi }\right)}^{1/p}\geqslant c(p,s)$

with equality whenever ρ is a spin coherent state. For $p\lt \infty $ these are the only states with equality. For $p=\infty $ equality holds also for mixtures $\rho ={p}_{+}| +s\rangle \langle +s| +{p}_{-}| -s\rangle \langle -s| ,$ and rotations thereof, provided that ${p}_{+}{p}_{-}\leqslant 1/(8s).$

The constant is

Equation (30)
$c(p,s)=\frac{s}{2}{\left(\frac{\sqrt{\pi }\;{\rm{\Gamma }}(p+1)}{2\,{\rm{\Gamma }}\left(p+\frac{3}{2}\right)}\right)}^{1/p}$

with special values $c(1,s)=s/3,$ $c(2,s)=\frac{s}{2}\sqrt{8/15}\approx 0.37s,$ $c(\infty ,s)=s/2.$

Proof. Let ${\lambda }_{j}=\mathrm{tr}\,\rho {L}_{j}$ be the vector of expectation values, and consider the set of density operators ${\rho }^{\beta }$ arising from ρ by the rotation ${R}_{\beta }$ around the vector λ by the angle β. For each ${\rho }^{\beta }$ the variance function is ${v}_{\beta }({\bf{e}})=v({R}_{\beta }{\bf{e}}).$ By averaging over β we find a state $\bar{\rho },$ with variance function

Equation (31)
$\bar{v}({\bf{e}})={\displaystyle \int }_{0}^{2\pi }\frac{{\rm{d}}\beta }{2\pi }\,v\left({R}_{\beta }{\bf{e}}\right)$

where we used, crucially, that all ${\rho }^{\beta }$ and $\bar{\rho }$ have the same expectations ${\lambda }_{j}$. By the triangle inequality for the p-norm, we have ${| | \bar{v}| | }_{p}\leqslant {| | v| | }_{p}.$ Hence we can restrict the search for the ρ with minimal ${| | v| | }_{p}$ to those which are rotation invariant around some axis, say the three-axis.

Such a state can be jointly diagonalized with L3, and is hence of the form $\rho ={\displaystyle \sum }_{m}{p}_{m}| m\rangle \langle m| .$ Then $\lambda =(0,0,{\displaystyle \sum }_{m}{p}_{m}m),$ and ${\rm{\Lambda }}(\rho )$ is diagonal with

Equation (32)
${{\rm{\Lambda }}}_{11}={{\rm{\Lambda }}}_{22}=\tfrac{1}{2}\left(s(s+1)-\displaystyle \sum _{m}{p}_{m}{m}^{2}\right)\geqslant \tfrac{s}{2}$

Equation (33)
${{\rm{\Lambda }}}_{33}=\displaystyle \sum _{m}{p}_{m}{m}^{2}-{\left(\displaystyle \sum _{m}{p}_{m}m\right)}^{2}$

Equation (34)
$v({\bf{e}})={{\rm{\Lambda }}}_{11}\left(1-{e}_{3}^{2}\right)+{{\rm{\Lambda }}}_{33}\,{e}_{3}^{2}$

The last equation shows that the function v becomes pointwise smaller (and hence smaller in p-norm) if we decrease some ${{\rm{\Lambda }}}_{{ii}}.$ That is, we have to minimize both ${{\rm{\Lambda }}}_{11}$ and ${{\rm{\Lambda }}}_{33}.$ The minimum in (32) is attained precisely when ${p}_{m}\ne 0$ only for $m=\pm s.$ Then minimality in (33) forces ρ to be a spin coherent state. For $p=\infty $ the norm only sees the maximum, so the pointwise minimum need not be chosen, and we may allow $0\leqslant {{\rm{\Lambda }}}_{33}\leqslant {{\rm{\Lambda }}}_{11}$ without changing the maximum. The latter inequality translates to the condition given in proposition 1.

The concrete constants follow easily by integrating the $p\mathrm{th}$ power of (34) with ${{\rm{\Lambda }}}_{11}=s/2$ and ${{\rm{\Lambda }}}_{33}=0$ with respect to the normalized surface measure on the sphere, i.e.

Equation (35)
$c{(p,s)}^{p}={\left(\frac{s}{2}\right)}^{p}\,\frac{1}{2}{\displaystyle \int }_{-1}^{1}{\left(1-{t}^{2}\right)}^{p}\,{\rm{d}}t$
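
A short numerical verification of (30) and (35) (Python with SciPy; illustrative only):

```python
# Sketch: compare direct integration as in (35) with the closed form (30).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def c_numeric(p, s):
    integral, _ = quad(lambda t: (1 - t**2) ** p, -1, 1)
    return (s / 2) * (integral / 2) ** (1 / p)

def c_closed(p, s):
    return (s / 2) * (np.sqrt(np.pi) * gamma(p + 1)
                      / (2 * gamma(p + 1.5))) ** (1 / p)

for p in (1, 2, 5):
    print(c_numeric(p, 1), c_closed(p, 1))  # p=1: 1/3, p=2: sqrt(2/15) ~ 0.365
```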

2.6. Robertson's technique: a generalization

We have criticized the Robertson inequality (1) for not giving a state-independent bound. However, with only a little effort it can be used to derive such a bound. Indeed, abbreviating ${v}_{j}={{\rm{\Delta }}}_{\rho }^{2}({L}_{j})$ and ${\lambda }_{j}=\mathrm{tr}\,\rho {L}_{j},$ we can add the three inequalities of the form ${v}_{1}{v}_{2}\geqslant {\lambda }_{3}^{2}/4$ and use that ${\displaystyle \sum }_{j}({v}_{j}+{\lambda }_{j}^{2})=s(s+1)$ to obtain

Equation (36)
${v}_{1}{v}_{2}+{v}_{2}{v}_{3}+{v}_{3}{v}_{1}\geqslant \tfrac{1}{4}\left(s(s+1)-{v}_{1}-{v}_{2}-{v}_{3}\right)$

Clearly, this no longer allows ${v}_{1}={v}_{2}=0,$ since ${v}_{3}\leqslant {s}^{2}.$ The set of variance triples satisfying this is shown in figure 6. Comparison with figure 4 readily shows that this bound is not optimal. However, we can generalize Robertson's technique from two to three components rather than extend his two-component result in this trivial way. The basis of the technique is the observation that for any finite collection of operators Xj (not necessarily Hermitian or normal) the matrix ${m}_{{jk}}=\mathrm{tr}\,\rho {X}_{j}^{*}{X}_{k}$ is positive semi-definite, which is the same as saying that for any complex linear combination $X={\displaystyle \sum }_{j}{a}_{j}{X}_{j}$ the expectation of ${X}^{*}X$ must be non-negative. In order to get Robertson's inequality for ${L}_{1},{L}_{2}$ this idea is applied to the three operators ${\mathbb{1}},{L}_{1},{L}_{2}.$ In fact, this leads to Schrödinger's improvement of the inequality [27] which also contains the square of the covariance matrix element ${{\rm{\Lambda }}}_{12}{(\rho )}^{2}$ on the right-hand side.

Figure 6. Left: region bounded by the inequality (36). Right: plane of constant γ orthogonal to the $(1,1,1)$ direction.

We will apply the method to the four operators ${\mathbb{1}},{L}_{1},{L}_{2},{L}_{3}.$ In order to simplify the expressions, however, we will not look for variances and the off-diagonal elements of ${\rm{\Lambda }}(\rho ),$ but for inequalities involving the eigenvalues ${\mu }_{j}.$ As discussed at the beginning of this section, this will contain all the information needed. In other words, we will take the matrix ${\rm{\Lambda }}(\rho )$ as the diagonal matrix with entries ${\mu }_{1}\geqslant {\mu }_{2}\geqslant {\mu }_{3}.$ The matrix which then needs to be positive is

Equation (37)
$M=\left(\begin{array}{cccc}1&{\lambda }_{1}&{\lambda }_{2}&{\lambda }_{3}\\ {\lambda }_{1}&{\mu }_{1}+{\lambda }_{1}^{2}&{\lambda }_{1}{\lambda }_{2}+\frac{{\rm{i}}}{2}{\lambda }_{3}&{\lambda }_{1}{\lambda }_{3}-\frac{{\rm{i}}}{2}{\lambda }_{2}\\ {\lambda }_{2}&{\lambda }_{1}{\lambda }_{2}-\frac{{\rm{i}}}{2}{\lambda }_{3}&{\mu }_{2}+{\lambda }_{2}^{2}&{\lambda }_{2}{\lambda }_{3}+\frac{{\rm{i}}}{2}{\lambda }_{1}\\ {\lambda }_{3}&{\lambda }_{1}{\lambda }_{3}+\frac{{\rm{i}}}{2}{\lambda }_{2}&{\lambda }_{2}{\lambda }_{3}-\frac{{\rm{i}}}{2}{\lambda }_{1}&{\mu }_{3}+{\lambda }_{3}^{2}\end{array}\right)$

The positivity of this matrix is equivalent (see e.g. [13, theorem 7.2.5]) to the positivity of the principal minors, i.e., the determinants of the submatrices of the first k rows and columns for $k=1,2,3,4.$ The first three of these are 1, ${\mu }_{1}$, and ${\mu }_{1}{\mu }_{2}-{\lambda }_{3}^{2}/4.$ The positivity of the third one is Robertson's inequality (1). The only remaining condition for the positivity of M is $\mathrm{det}M\geqslant 0,$ which evaluates to

Equation (38)
$4{\mu }_{1}{\mu }_{2}{\mu }_{3}\geqslant {\mu }_{1}{\lambda }_{1}^{2}+{\mu }_{2}{\lambda }_{2}^{2}+{\mu }_{3}{\lambda }_{3}^{2}$

This will be combined with the normalization condition

Equation (39)
${\mu }_{1}+{\mu }_{2}+{\mu }_{3}+{\lambda }_{1}^{2}+{\lambda }_{2}^{2}+{\lambda }_{3}^{2}=s(s+1)$

The condition on the triples $({\mu }_{1},{\mu }_{2},{\mu }_{3})$ we have to evaluate is the existence of ${\lambda }_{i}$ satisfying both these relations. Since only the squares enter, let us set ${x}_{j}={\lambda }_{j}^{2}.$ Then (39) describes a triangle in the positive octant with equal intercept $s(s+1)-({\mu }_{1}+{\mu }_{2}+{\mu }_{3})$ with the axes. The inequality (38) describes a tetrahedron spanned by the origin and the axis intercepts ${x}_{1}^{0}=4{\mu }_{2}{\mu }_{3},$ and cyclic. Note that Robertson's inequality is automatically satisfied on this tetrahedron. Obviously the tetrahedron and the triangle intersect if and only if one of the axis intercepts of the tetrahedron reaches or lies above the triangle. Since we can take the eigenvalues ordered as ${\mu }_{1}\geqslant {\mu }_{2}\geqslant {\mu }_{3},$ this means

Equation (40)
$4{\mu }_{1}{\mu }_{2}\geqslant s(s+1)-({\mu }_{1}+{\mu }_{2}+{\mu }_{3})$

This is a bound on the eigenvalues of the Λ-matrix. By Birkhoff's theorem, the set of variance triples arising from such Λ also includes all convex combinations of permutations of the ${\mu }_{i}$ (see the beginning of section 2). In order to characterize the set of variance triples generated in this way we need the following lemma. In its formulation the variables $\sigma \in {S}_{3}$ run over the permutation group on three elements, and are applied to the components of a three-vector (see also figure 2).

Lemma 2. With the notation from above, the following sets K1 and K2 are equal

Equation (41)

Equation (42)

Proof. For the equality of K1 and K2 it is sufficient to show that H1 and H2 coincide for every γ. The restriction ${\displaystyle \sum }_{i}{v}_{i}={\displaystyle \sum }_{i}{\mu }_{i}=\gamma ,$ together with the three-fold symmetry of the problem, tells us that ${H}_{1}(\gamma )$ and ${H}_{2}(\gamma )$ are subsets of the triangle, whose corners lie on the axes at a distance γ from the origin. In this triangle the ordering of the vi and ${\mu }_{i}$ reduces h1 and h2 to the dashed subset marked in figure 6.

Now the first and last condition in the definition of h2 can be combined to obtain

Equation (43)

so we get

Equation (44)

Because ${v}_{1}\geqslant 0$ we have to choose the positive sign, which means that ${H}_{2}(\gamma )$ is the intersection of the triangle with the three halfspaces ${v}_{i}\leqslant c(\gamma ),$ whose boundaries are marked as orange lines in figure 6, i.e.

Equation (45)

${H}_{2}(\gamma )$ is clearly a convex polytope. The extremal points of ${H}_{2}(\gamma )$ have to saturate two of the defining inequalities. In the ordered triangle (${v}_{1}\geqslant {v}_{2}\geqslant {v}_{3}$) the only extreme point is given by $p(\gamma ):= (c(\gamma ),\gamma -c(\gamma ),0),$ and all others can be obtained by permutations. Hence H2 can be described as the hexagon ${H}_{2}(\gamma )=\mathrm{conv}({\bigcup }_{\sigma }\sigma (p(\gamma ))).$ On the one hand, by comparing the defining inequalities for h1 and h2, we can see that every triple ${\mu }_{i}\in {h}_{1}$ also belongs to h2. Including the permutations, and using that ${H}_{2}(\gamma )$ is convex, we get ${H}_{1}(\gamma )\subseteq {H}_{2}(\gamma ).$

On the other hand, the three-component of p is zero, so p also belongs to ${h}_{1}\subseteq {H}_{1}(\gamma ).$ Since the point p and its permutations are the extremal points of ${H}_{2}(\gamma ),$ and ${H}_{1}(\gamma )$ is convex, we have ${H}_{2}(\gamma )\subseteq {H}_{1}(\gamma ).$

Therefore we get the following statement:

Proposition 3. Let ${v}_{1}\geqslant {v}_{2}\geqslant {v}_{3}\geqslant 0$ be the ordered variances of the angular momentum components. Then the following holds:

Equation (46)
$4{v}_{1}{v}_{2}\geqslant s(s+1)-{v}_{1}-{v}_{2}-{v}_{3}$

As one can see in figure 7, the boundaries of the corresponding uncertainty region on the coordinate planes are given by permutations of the hyperbolic curve $4{v}_{1}{v}_{2}=s(s+1)-{v}_{1}-{v}_{2}.$ This uncertainty region is monotone closed and given by the convex hull of these hyperbolic curves.
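
The following sketch (Python/NumPy; spin_operators as in the section 1.1 sketch) tests proposition 3 on random pure states by diagonalizing ${\rm{\Lambda }}(\rho ):$

```python
# Sketch: sort the eigenvalues of Lambda(rho) for random pure states and
# check 4 v1 v2 >= s(s+1) - v1 - v2 - v3, cf. equations (40) and (46).
import numpy as np

s = 2
L = spin_operators(s)                       # helper from section 1.1
rng = np.random.default_rng(3)
dim = int(2 * s + 1)
for _ in range(1000):
    phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    phi /= np.linalg.norm(phi)
    lam = np.array([np.real(phi.conj() @ Lk @ phi) for Lk in L])
    Lam = np.array([[np.real(phi.conj() @ ((Lj @ Lk + Lk @ Lj) / 2) @ phi)
                     - lam[j] * lam[k]
                     for k, Lk in enumerate(L)]
                    for j, Lj in enumerate(L)])
    v = np.sort(np.linalg.eigvalsh(Lam))[::-1]      # v1 >= v2 >= v3
    assert 4 * v[0] * v[1] >= s * (s + 1) - v.sum() - 1e-9
```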

Figure 7. Region bounded by the generalized Robertson inequality. Left: hyperbolic curves on the faces for the eigenvalues. Right: uncertainty region given by the variances, with the base triangle formed by the spin coherent states.

2.7. Asymptotic case

Now we take a look at the behavior of the asymptotic uncertainty region for $s\to \infty .$ We already know that ${{\rm{\Delta }}}_{\rho }^{2}({L}_{1})+{{\rm{\Delta }}}_{\rho }^{2}({L}_{2})+{{\rm{\Delta }}}_{\rho }^{2}({L}_{3})\geqslant s$ and hence it is appropriate to scale the problem by $1/s,$ which fixes the sum of the variances in the lower base triangle to 1. We start with the asymptotic behavior of the generalized Robertson inequality derived in the previous section. In the scaled variables ${\nu }_{i}={v}_{i}/s,$ with the ordering ${\nu }_{1}\geqslant {\nu }_{2}\geqslant {\nu }_{3},$ this inequality reads

Equation (47)
$4{\nu }_{1}{\nu }_{2}\geqslant 1+\frac{1-({\nu }_{1}+{\nu }_{2}+{\nu }_{3})}{s}$

and as s goes to infinity the set of possible variances shrinks to

Equation (48)
$4{\nu }_{1}{\nu }_{2}\geqslant 1$

because ${\displaystyle \sum }_{i}{\nu }_{i}\geqslant 1.$ Hence the inequality (47) gets stronger for increasing s.

In this section we will show that this bound is attained by states which are constructed in the following way. Using the technique described in section 2.3, we look for the states ψ which minimize the expectation of the operator

Equation (49)
$H=\frac{1}{s}\displaystyle \sum _{j}{w}_{j}{\left({L}_{j}-{a}_{j}{\mathbb{1}}\right)}^{2}$

for a normal vector ${\bf{w}}.$ We do this using the Holstein–Primakoff transformation [11]:

Equation (50)
${L}_{+}=\sqrt{2s}\,\sqrt{{\mathbb{1}}-\frac{{a}^{*}a}{2s}}\;a,\qquad {L}_{-}=\sqrt{2s}\;{a}^{*}\sqrt{{\mathbb{1}}-\frac{{a}^{*}a}{2s}},\qquad {L}_{3}=s-{a}^{*}a$

Here ${a}^{*}$ and a are the creation and annihilation operators, so we have a representation of the angular momentum algebra in the oscillator basis. For large s and appropriate states, this transformation can be reduced to

Equation (51)
${L}_{+}\approx \sqrt{2s}\,a,\qquad {L}_{-}\approx \sqrt{2s}\,{a}^{*},\qquad {L}_{3}=s-{a}^{*}a$

Notice that in the Holstein–Primakoff basis, the spin coherent state $| s\rangle $ is transformed to the ground state ${| 0\rangle }_{\mathrm{HP}};$ hence the state ${| n\rangle }_{\mathrm{HP}}$ corresponds to $| s-n\rangle $ in the standard angular momentum L3 eigenbasis. Now we rewrite H using the above transformation and the relations for position, ${L}_{1}=\frac{1}{2}({L}_{+}+{L}_{-})=\frac{\sqrt{2s}}{2}(a+{a}^{*})=\sqrt{s}\,X,$ and momentum, ${L}_{2}=\frac{1}{2{\rm{i}}}({L}_{+}-{L}_{-})=\frac{\sqrt{2s}}{2{\rm{i}}}(a-{a}^{*})=\sqrt{s}\,P.$ We arrive at

Equation (52)
$H={w}_{1}{\left(X-\xi \right)}^{2}+{w}_{2}{\left(P-\eta \right)}^{2}+\frac{{w}_{3}}{s}{\left(s-{a}^{*}a-\zeta \right)}^{2}$

Here $\xi ,\eta $ and ζ denote the transformed expectation values. From section 2.1 we know that $| s\rangle $ has minimal uncertainty for $w\sim (1,1,1)$ and arbitrary s. Based on this observation we assume that we are close to the L3 spin coherent state. We thus have $s\gg \langle {a}^{*}a\rangle $ and ${\lambda }_{3}\approx s,$ hence ζ is linear in s. Furthermore, we can order the weights such that ${w}_{1}\leqslant {w}_{2}\leqslant {w}_{3}$ to minimize the expectation value. Taking the limit of large s, the operator converges to the harmonic oscillator

Equation (53)
${H}_{\infty }={w}_{1}{X}^{2}+{w}_{2}{P}^{2}$

Here we use that the minimal expectation of the harmonic oscillator is translation-invariant in phase space, so that we can choose ξ and η to be zero. The state which minimizes the expectation of this operator is simply the harmonic oscillator ground state $\psi (m,\omega ),$ with $m=\frac{1}{2{w}_{2}}$ and $\omega =\sqrt{4{w}_{1}{w}_{2}}.$ In the following these will be combined into the parameter $\alpha := m\omega =\sqrt{\frac{{w}_{1}}{{w}_{2}}}.$ For the comparison of this result with numerical calculations using the algorithm described above, we must express these ground states in the common basis ${| n\rangle }_{\mathrm{HP}},$ i.e., decompose $\psi (\alpha )$ in the eigenbasis of the harmonic oscillator with $\alpha =1.$ This transformation is given by

Equation (54)
${\psi }_{n}(\alpha )={}_{\mathrm{HP}}\langle n| \psi (\alpha )\rangle $

which is zero for odd n and can be solved for even n through

Equation (55)
${\psi }_{2k}(\alpha )={\left(\frac{2\sqrt{\alpha }}{1+\alpha }\right)}^{1/2}\frac{\sqrt{(2k)!}}{{2}^{k}\,k!}{\left(\frac{1-\alpha }{1+\alpha }\right)}^{k}$

The corresponding probability distribution is given by

Equation (56)
${p}_{n}={\left|{\psi }_{n}(\alpha )\right|}^{2}$

Because this is zero for odd n, we can set $n=2k$ and get

Equation (57)
${p}_{2k}=\frac{2\sqrt{\alpha }}{1+\alpha }\,\frac{(2k)!}{{4}^{k}{(k!)}^{2}}{\left(\frac{1-\alpha }{1+\alpha }\right)}^{2k}$

The above approximation does not necessarily yield the optimal states and it is not rigorously justified so far. As a first step, we compare the distribution pn with numerically determined ones for finite s. These tend to converge as shown in figure 8.

Figure 8. Comparison of the occupation numbers of $\psi (\alpha )$ with the numerical calculations.

Theorem 4. The lower bound of the asymptotic uncertainty region on a scale of $1/s$ is fully described by the generalized Robertson inequality (48) and is saturated by the states $\psi (\alpha ).$

Proof. First we will show that the approximation (51) is justified for $\psi (\alpha )$ and evaluate the corresponding asymptotic variances. Since the generalized Robertson inequality gets stronger with increasing s, and every extremal point of the corresponding boundary is attained by $\psi (\alpha ),$ this will prove the above statement. Moreover, by truncating the sequence ${\psi }_{n}(\alpha )$ at $n=2s+1$ and renormalizing, we get a sequence of spin-s states approximating $\psi (\alpha )$ well as s goes to infinity. With this in mind, we will prove the above statement in two steps:

(i) On the one hand we have to verify that ${\mathrm{lim}}_{s\to \infty }\;\frac{1}{\sqrt{s}}{L}_{+}| \psi (\alpha )\rangle =\sqrt{2}a| \psi (\alpha )\rangle $ and ${\mathrm{lim}}_{s\to \infty }\;\frac{1}{\sqrt{s}}{L}_{-}| \psi (\alpha )\rangle =\sqrt{2}{a}^{*}| \psi (\alpha )\rangle $ which is true if $\psi (\alpha )$ is in the domain of ${a}^{*}a.$ On the other hand we have to show that the term $\frac{{w}_{3}}{s}{(s-{a}^{*}a-\zeta )}^{2}$ from (52) will vanish for $\psi (\alpha )$ with $\zeta =\langle \psi (\alpha )| s-{a}^{*}a| \psi (\alpha )\rangle $ as s goes to infinity. Both requirements are fulfilled if the moments $\langle \psi (\alpha )| {a}^{*}a| \psi (\alpha )\rangle $ and $\langle \psi (\alpha )| {({a}^{*}a)}^{2}| \psi (\alpha )\rangle $ are finite. In the Holstein–Primakoff occupation basis ${| n\rangle }_{\mathrm{HP}},$ these moments are given by series of the form

Equation (58)
$\langle \psi (\alpha )| {\left({a}^{*}a\right)}^{m}| \psi (\alpha )\rangle =\displaystyle \sum _{k}{(2k)}^{m}\,{p}_{2k},\qquad m=1,2$

and can be computed as derivatives using the generating function

Equation (59)
$G(z)=\displaystyle \sum _{k}{p}_{2k}\,{z}^{k}=\frac{2\sqrt{\alpha }}{1+\alpha }{\left(1-z{\left(\frac{1-\alpha }{1+\alpha }\right)}^{2}\right)}^{-1/2}$

of the probability distribution p2k (57). By straightforward calculations we get

Equation (60)
$\langle \psi (\alpha )| {a}^{*}a| \psi (\alpha )\rangle =\frac{{(1-\alpha )}^{2}}{4\alpha }$

Equation (61)
$\langle \psi (\alpha )| {\left({a}^{*}a\right)}^{2}| \psi (\alpha )\rangle =3{\left(\frac{{(1-\alpha )}^{2}}{4\alpha }\right)}^{2}+2\,\frac{{(1-\alpha )}^{2}}{4\alpha }$

which are finite for $\alpha \gt 0.$

(ii) Now the asymptotic variances for $\psi (\alpha )$ can be determined. Since for $\psi (\alpha )$ the operators $\frac{1}{\sqrt{s}}{L}_{1},$ $\frac{1}{\sqrt{s}}{L}_{2}$ and $\frac{1}{s}{L}_{3}$ converge to X, P and the identity, respectively, we obtain

Equation (62)
$\mathop{\mathrm{lim}}\limits_{s\to \infty }\frac{1}{s}{{\rm{\Delta }}}_{\psi (\alpha )}^{2}({L}_{1})={{\rm{\Delta }}}^{2}(X)=\frac{1}{2\alpha }$

Equation (63)
$\mathop{\mathrm{lim}}\limits_{s\to \infty }\frac{1}{s}{{\rm{\Delta }}}_{\psi (\alpha )}^{2}({L}_{2})={{\rm{\Delta }}}^{2}(P)=\frac{\alpha }{2}$

Equation (64)
$\mathop{\mathrm{lim}}\limits_{s\to \infty }\frac{1}{s}{{\rm{\Delta }}}_{\psi (\alpha )}^{2}({L}_{3})=0$

This set of variance triples $\left(\frac{1}{2\alpha },\frac{\alpha }{2},0\right)$ saturates the asymptotic generalized Robertson bound (48). Moreover, these triples describe the extremal boundary curves (see the proof of lemma 2) of the associated uncertainty region.
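
As a numerical illustration of theorem 4 (Python with NumPy/SciPy; psi_alpha_spin is our name, the amplitudes follow equation (55), and the truncation at n = 2s follows the proof above), one can build the truncated states ψ(α) in the spin basis and watch the scaled variances approach $\left(\frac{1}{2\alpha },\frac{\alpha }{2},0\right)$:

```python
# Sketch: truncate the amplitudes psi_n(alpha) of equation (55) at n = 2s,
# renormalize, identify |n>_HP with |s - n>, and compute scaled variances.
import numpy as np
from scipy.special import gammaln

def psi_alpha_spin(s, alpha):
    """Truncated psi(alpha), written in the |m> basis, m = s, ..., -s."""
    dim = int(2 * s + 1)
    coeff = np.zeros(dim)
    t = (1 - alpha) / (1 + alpha)
    for n in range(0, dim, 2):              # odd n have zero amplitude
        k = n // 2
        logmag = (0.5 * (np.log(2 * np.sqrt(alpha) / (1 + alpha))
                         + gammaln(2 * k + 1))
                  - gammaln(k + 1) - k * np.log(2)
                  + k * np.log(abs(t) + 1e-300))
        coeff[n] = np.sign(t) ** k * np.exp(logmag)
    return coeff / np.linalg.norm(coeff)    # coeff[n] multiplies |s - n>

s, alpha = 40, 0.5
phi = psi_alpha_spin(s, alpha).astype(complex)
for Lk in spin_operators(s):                # helper from section 1.1
    mean = np.real(phi.conj() @ Lk @ phi)
    var = np.real(phi.conj() @ (Lk @ Lk) @ phi) - mean**2
    print(var / s)                          # ~ 1/(2*alpha), alpha/2, ~0
```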

3. Preparation uncertainty: special topics

3.1. The vector model and moment problems

This may be a good place to comment on the so-called vector model of angular momentum, as it was suggested by old quantum theory. It still seems to be quite popular in teaching, although theoreticians tend to deride it as ridiculously classical and obviously inconsistent. Indeed, its two-particle version gives manifestly false predictions even for spin-1/2, as witnessed by Bell's (CHSH) inequality. Since any local classical model fails this test, not much can be learned about angular momentum from this observation. Therefore we consider here only the one-particle version, and try to sort out how far it can be trusted.

The basic rationale of the vector model is shown in figure 9: angular momentum is thought of as a classical random variable taking values on a sphere of radius ${r}_{s}=\sqrt{s(s+1)}.$ For an eigenstate $| m\rangle $ the corresponding classical distribution is supposed to be concentrated at latitude m, and uniform with respect to rotations around the three-axis. The expectation value of this distribution is $(0,0,m).$ Moreover, its matrix of second moments is also diagonal, since the coordinate axes are clearly the inertial axes of a mass uniformly distributed on a circle of fixed latitude. One readily checks that all second moments are the same as for the corresponding quantum state. This can be generalized:

Figure 9. The well-known vector model of angular momentum for s = 2.

Proposition 5. For any quantum state ρ on ${\mathcal{H}}$ there is a classical probability distribution μ on the sphere of radius $\sqrt{s(s+1)}$ which has the same first and second moments as the angular momentum in ρ, i.e.

Equation (65)
$\mathrm{tr}\,\rho {L}_{j}=\int \mu ({\rm{d}}{\bf{x}})\,{x}_{j},\qquad \tfrac{1}{2}\mathrm{tr}\,\rho \left({L}_{j}{L}_{k}+{L}_{k}{L}_{j}\right)=\int \mu ({\rm{d}}{\bf{x}})\,{x}_{j}{x}_{k}$

For the proof we only need a characterization of the moments $({m}_{j},{M}_{{jk}})$ of probability measures on a sphere of radius r, which turns out to be quite simple. This in turn provides an immediate proof of proposition 5, since the quantum moments ${M}_{{jk}}={\mathfrak{R}}e\mathrm{tr}\rho {L}_{j}{L}_{k},$ ${m}_{j}=\mathrm{tr}\rho {L}_{j}$ obviously have the required properties with radius ${r}^{2}=s(s+1).$

Lemma 6. Let ${m}_{j}\in {\mathbb{R}}$ for $j=1,2,3,$ and let $({M}_{{jk}}{)}_{j,k=1}^{3}$ be a real symmetric 3 × 3-matrix. Then these numbers are the first and second moments of a probability distribution on the sphere of radius r if and only if $\mathrm{tr}M={r}^{2}$ and the covariance matrix ${M}_{{jk}}-{m}_{j}{m}_{k}$ is positive semi-definite.

Proof. Necessity is obvious, because covariance matrices are always positive semi-definite and the function $v\mapsto {\displaystyle \sum }_{j}{v}_{j}^{2}$ is constant (equal to ${r}^{2}$) on the sphere. For the converse consider the set K of moments (m, M) of probability measures on the sphere. This is a compact convex set, which we can think of as embedded into $3+6-1$ dimensional real space, because the real symmetric matrix M is specified by six parameters, and we have an additional linear constraint $\mathrm{tr}M={r}^{2}.$ By the separation theorems for compact convex sets the set K is therefore completely characterized by a collection of affine inequalities

Equation (66)
$f(m,M)=\mathrm{tr}(AM)+b\cdot m+\gamma \geqslant 0$

where A is real symmetric, $b\in {{\mathbb{R}}}^{3}$ with the dot indicating scalar product, and $\gamma \in {\mathbb{R}}.$ The functionals for which these inequalities have to be satisfied are precisely those for which the above inequality holds for all pure probability measures, i.e., for ${M}_{{jk}}={v}_{j}{v}_{k}$ mj = vj for some $v\in {{\mathbb{R}}}^{3}$ on the sphere. In this case we slightly abuse notation and write $f(m,M)=f(v).$

Not all inequalities are needed to characterize K, but only the extremal ones, which furnish a minimal subset from which all the others follow as linear combinations with positive scalar factors. In particular, we can assume that f is not strictly positive, so has a zero $f(u)=0,$ which then also has to be a minimum. The extremality condition gives $2{Au}-b=2\lambda u,$ where $\lambda \in {\mathbb{R}}$ is a Lagrange multiplier. This determines b, and from $f(u)=0$ we get γ, so that we can rewrite

Equation (67)
$f(v)=(v-u)\cdot A(v-u)+2\lambda \,u\cdot (v-u)$

Now since $u,v$ lie on a sphere of radius r, we can write $2u\cdot (v-u)={-(v-u)}^{2}+({v}^{2}-{u}^{2})={-(v-u)}^{2},$ so we can combine the two terms, and obtain again the form (67), with A modified by a multiple of the identity, and $\lambda =0.$

It remains to determine all real symmetric matrices A such that $(v-u)\cdot A(v-u)\geqslant 0,$ whenever $u,v$ lie on a sphere of radius r. Equivalently, $\xi \cdot A\xi \geqslant 0$ for all multiples of vectors of the form $v-u.$ But this set is dense in ${{\mathbb{R}}}^{3}.$ Hence the desired condition is just the positive semi-definiteness of A. The resulting inequality for (m, M) can be rewritten in terms of the covariance matrix ${V}_{{jk}}={M}_{{jk}}-{m}_{j}{m}_{k}$ as

Equation (68)
$\mathrm{tr}(AV)+(m-u)\cdot A(m-u)\geqslant 0$

Since the second term is anyhow positive, the positive semidefiniteness of V is sufficient for all these inequalities. This shows the sufficiency of the conditions stated in the lemma.
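
A Monte Carlo check of proposition 5 for an eigenstate $| m\rangle $ (Python/NumPy; illustrative): sampling the latitude-m ring on the sphere of radius $\sqrt{s(s+1)}$ reproduces the quantum first and second moments.

```python
# Sketch: the classical ring distribution at latitude m on the sphere of
# radius sqrt(s(s+1)) has the same first and second moments as |m><m|.
import numpy as np

s, m = 2, 1
r = np.sqrt(s * (s + 1))
rng = np.random.default_rng(4)
angles = rng.uniform(0, 2 * np.pi, size=200000)
ring = np.sqrt(r**2 - m**2)                  # radius of the latitude-m circle
x = np.stack([ring * np.cos(angles), ring * np.sin(angles),
              np.full_like(angles, float(m))])

print(x.mean(axis=1))                        # ~ (0, 0, m) = tr rho L_j
print(x @ x.T / x.shape[1])                  # ~ diag((s(s+1)-m^2)/2,
                                             #        (s(s+1)-m^2)/2, m^2)
```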

Let us make some remarks, which all fit into a fruitful analogy with the phase space case, i.e., the case of two canonical operators $P,Q,$ and the moment problems posed in the respective contexts.

  • (1)  
    The phase space analogue of proposition 5 is the statement that for any quantum state the first and second moments can also be realized by a classical probability distribution on phase space. Of course, not all classically allowed first and second moments can arise in this way: This is just the theme of preparation uncertainty relations.
  • (2)  
    The classical probability measure μ is not uniquely defined by ρ. For example, the density operator $\rho ={(2s+1)}^{-1}{\mathbb{1}}$ can either be represented by the uniform distribution on the sphere, or by an equal-weight mixture of the distributions with constant latitude m (in any direction). In the phase space case it is well-known that with the given, quantum-realizable moments one can always find a Gaussian state, which is defined as the distribution with the maximal entropy given those moments. The same idea also works for angular momentum, and it gives probability densities which are the exponential of a quadratic form in the variables. In contrast to the phase space case, when approaching eigenstates (any direction, any m) this entropy will go to $-\infty ,$ since for eigenstates only the singular measures depicted in figure 9 can be used.
  • (3)  
    Proposition 5 is certainly false if we include higher than second moments. For example, consider a pure qubit state with $m=+1/2.$ Without loss of generality we can choose the measure μ invariant under rotations around the three-axis. Since μ must be concentrated on $m=1/2,$ this uniquely fixes the measure μ, and hence the moments to all orders. Now consider a direction ${\bf{e}}$ which is at an angle strictly between 0 and $\pi /2$ to ${{\bf{e}}}_{3}.$ Then the quantum expectation of ${({\bf{e}}\cdot {\bf{L}})}^{3}={\bf{e}}\cdot {\bf{L}}/4$ is $({\bf{e}}\cdot {{\bf{e}}}_{3})/8,$ but the classical expectation of ${(x\cdot {\bf{e}})}^{3}$ is larger, reflecting the nonlinearity of the cube function.
  • (4)  
    The quantum analogue of the classical Hamburger moment problem would be to reconstruct a quantum state from the set of moments, i.e., the expectations of the monomials in the basic operators ($P,Q,$ or ${L}_{1},{L}_{2},{L}_{3}$). Commutation relations impose some constraints on these moments, so that in the end only monomials like ${L}_{1}^{{n}_{1}}{L}_{2}^{{n}_{2}}{L}_{3}^{{n}_{3}}$ need to be considered. Of course, the expectation values of such operators will generally be complex numbers. Can we do the reconstruction for arbitrary states in the spin-s representation? Indeed we can, and it is actually much easier than in the phase space case, since only finitely many moments suffice. The basic observation is that the moments fix all expectations on the von Neumann algebra ${\mathcal{A}}$ generated by the Li. Because the representation is irreducible, the commutant of this algebra consists of the multiples of the identity. Hence ${\mathcal{A}}$ must be the full matrix algebra, and the state is uniquely determined. That finitely many moments suffice is clear because $\mathrm{dim}\,{\mathcal{A}}\lt \infty .$
  • (5)  
    Noncommutative moment problems are plagued by 'operator ordering' issues. But in some sense we have already adopted a standard 'symmetrized' solution for operator ordering, namely to form moments only of the operators ${\bf{e}}\cdot {\bf{L}}$ for all fixed ${\bf{e}}.$ This is analogous in the phase space case to considering the moments of linear combinations of P and Q. Now, famously, the full distributions of all such combinations are correctly rendered by the Wigner distribution function, which is itself hardly ever positive [15]. The analogy to the angular momentum case is immediate. So what do we get if we accept 'quasi-probability distributions'? Can every state be represented like that? This is answered by the following proposition.

Proposition 7. Let ρ be a quantum state in the spin-s representation. Then there is a unique tempered distribution ${{\mathcal{W}}}_{\rho }$ on ${{\mathbb{R}}}^{3},$ which is formally real, has support in a ball of radius s and satisfies, for all ${\bf{e}}$ and $n\in {\mathbb{N}}$

Equation (69)
$\mathrm{tr}\,\rho \,{\left({\bf{e}}\cdot {\bf{L}}\right)}^{n}=\int {{\mathcal{W}}}_{\rho }({\bf{x}})\,{\left({\bf{e}}\cdot {\bf{x}}\right)}^{n}\,{\rm{d}}{\bf{x}}$

Proof. We can compute the Fourier transform of ${{\mathcal{W}}}_{\rho }$ directly from (69), by multiplying with ${({\rm{i}}k)}^{n}/n!$ and summing over n. This turns the left side into the Fourier integral over ${{\mathcal{W}}}_{\rho }$ allowing the sum to be evaluated also on the right-hand side:

Equation (70)
${\widehat{{\mathcal{W}}}}_{\rho }({\bf{k}})=\int {{\mathcal{W}}}_{\rho }({\bf{x}})\,{{\rm{e}}}^{{\rm{i}}{\bf{k}}\cdot {\bf{x}}}\,{\rm{d}}{\bf{x}}=\mathrm{tr}\,\rho \,{{\rm{e}}}^{{\rm{i}}{\bf{k}}\cdot {\bf{L}}}$

where ${\bf{k}}=k{\bf{e}}.$ Strictly speaking this computation should be regularized by multiplying with an arbitrary test function before summation, but this would lead to the same explicit representation of the Fourier transform ${\widehat{{\mathcal{W}}}}_{\rho }$ as a bounded ${{\mathcal{C}}}^{\infty }$-function. This shows that the desired tempered distribution ${{\mathcal{W}}}_{\rho }$ is essentially unique, and can be defined for every ρ. It is formally real, because ${\widehat{{\mathcal{W}}}}_{\rho }(-{\bf{k}})=\bar{{\widehat{{\mathcal{W}}}}_{\rho }({\bf{k}})}.$ For the claim about the support we invoke the distributional version of the Paley–Wiener–Schwartz theorem [12, theorem 7.3.1]. According to that theorem we need to show only that for real vectors ${\bf{k}},{\boldsymbol{\kappa }}$ the estimate

Equation (71)
$\left|{\widehat{{\mathcal{W}}}}_{\rho }({\bf{k}}+{\rm{i}}{\boldsymbol{\kappa }})\right|\leqslant C\,{{\rm{e}}}^{s| {\boldsymbol{\kappa }}| }$

holds for some constant C. Treating the sum in the exponential by the Trotter formula, which we may, because these are finite dimensional matrices, we get

Equation (72)
$\left|\mathrm{tr}\,\rho \,{{\rm{e}}}^{{\rm{i}}({\bf{k}}+{\rm{i}}{\boldsymbol{\kappa }})\cdot {\bf{L}}}\right|\leqslant \left\Vert {{\rm{e}}}^{-{\boldsymbol{\kappa }}\cdot {\bf{L}}}\right\Vert ={{\rm{e}}}^{s| {\boldsymbol{\kappa }}| }$

Clearly this implies the desired estimate.

This proposition is remarkable in comparison to proposition 5: if we insist on positivity but require only the first two moments to be correct, the vector model requires a sphere of radius $\sqrt{s(s+1)}.$ On the other hand, if we waive instead the positivity of the classical distribution, we are forced to use a ball of radius s.

One might ask at this point, in continuation of the above analogies to the phase space case, whether there are 'classical' states, for which the Wigner function ${{\mathcal{W}}}_{\rho }$ is a positive function. However, this is easily seen to be impossible. Indeed, when the marginal of a proper probability distribution has support on $\{-s,\ldots ,s\},$ the measure itself has to have support on a union of hyperplanes $\{{\boldsymbol{\eta }}\;| \;{\boldsymbol{\eta }}\cdot {\bf{e}}=m\}.$ But these families of hyperplanes, drawn for various ${\bf{e}},$ have empty intersection, contradicting the normalization of the measure.

The main use of Wigner functions on phase space is the visualization of quantum states. Unfortunately, the much more singular nature of ${{\mathcal{W}}}_{\rho }$ for angular momentum will prevent these Wigner functions from becoming similarly popular. This irregularity can be tamed by replacing on the right-hand side of (70)

Equation (73)

The corresponding distribution is then a sum of point measures sitting on a finite cubical grid [6]. This may actually be useful in quantum information, where it relates to a discrete phase space structure over the cyclic group of $d=2s+1$ elements. However, for angular momentum proper we find this breaking of rotational symmetry abhorrent.

3.2. Entropic uncertainty

In this section we will have a look at the entropic uncertainty relations. Given a measurement of a Hermitian operator $A={\displaystyle \sum }_{i}{a}_{i}{P}_{i},$ with eigenprojectors Pi, the probability of obtaining the $i\mathrm{th}$ measurement outcome will be denoted by ${\pi }_{i}(\rho )=\mathrm{tr}(\rho {P}_{i})$ and the associated probability distribution as $\pi (\rho )=\{{\pi }_{1}(\rho ),\cdots ,{\pi }_{d}(\rho )\}.$ Then the output entropy of A in the state ρ is defined as the Shannon entropy of $\pi (\rho )$

Equation (74)

which serves as an uncertainty measure. Note that we normalize the Shannon entropy by its maximal value $\mathrm{log}d$ so that all occurring entropies are bounded by 1. In contrast to the variance, the entropy of a probability distribution does not change by permuting or rescaling the measurement outcomes and so only depends on the choice of the Pi (up to permutations) and not on the eigenvalues ai. This implies that an entropic uncertainty relation, which constrains the output entropies of two (for simplicity non-degenerate) observables $A,B,$ only depends on the unitary operator U connecting the respective eigenbases. A well-known bound in this setting is the general Maassen–Uffink bound [19]

Equation (75)

Equation (76)

For angular momentum measurements in two orthogonal directions the connecting unitary operator is a rotation by $\pi /2$ around the third coordinate axis, taken in the spin-s representation of SO(3) on ${{\mathbb{C}}}^{2s+1}.$ For arbitrary angles the corresponding rotation matrices are known as Wigner-D matrices [1] and will also be used in section 4.5. It turns out that the Maassen–Uffink bound is in general not optimal, but describes the uncertainty region exactly in the limit $s\to \infty .$
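Before turning to small s, it may help to see these objects in the computer. The sketch below (Python, with spin_matrices as in the earlier sketch) computes the normalized Maassen–Uffink bound $-{\mathrm{log}}_{2s+1}({c}^{2})$ for L1 and L2 directly from the two eigenbases; the brute-force overlap maximization replaces any analytic formula for c.

```python
import numpy as np

def spin_matrices(s):                      # as in the earlier sketch
    d = int(round(2 * s)) + 1
    m = s - np.arange(d)
    Lp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1)
    return (Lp + Lp.T) / 2, (Lp - Lp.T) / 2j, np.diag(m)

def mu_bound(s):
    # normalized Maassen-Uffink bound -log_{2s+1}(c^2) for L1 versus L2,
    # with c the largest overlap between eigenvectors of L1 and of L2
    L1, L2, _ = spin_matrices(s)
    A = np.linalg.eigh(L1)[1]
    B = np.linalg.eigh(L2)[1]
    c = np.max(np.abs(A.conj().T @ B))
    return -2 * np.log(c) / np.log(2 * s + 1)

for s in (0.5, 1, 2, 5, 20, 80):
    print(s, round(mu_bound(s), 4))
```

The printed values drift slowly towards the limiting value 1/2, in accordance with proposition 8 below.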

For spin s = 1, the uncertainty region can still be reliably investigated by parameterizing the set of pure states in the L3 eigenbasis. Numerical evidence suggests that real-valued states and their permutations characterize the lower bound of the uncertainty region. The resulting uncertainty regions for two and three components are shown in figure 10, which should be compared directly to figures 4(left) and 3(right).


Figure 10. Entropic uncertainty regions for measurements of two and three orthogonal spin components (s = 1). The orange line in the left panel is the Maassen–Uffink bound (75), which is tight only in the case s = 1.


The marked lines in figure 10 correspond to states of the form $| \phi (t)\rangle := \left(\displaystyle \frac{\mathrm{cos}(t)}{\sqrt{2}},\mathrm{sin}(t),\displaystyle \frac{\mathrm{cos}(t)}{\sqrt{2}}\right),$ written in the L3-eigenbasis, and their permutations. They correspond exactly to the extremal curves found for variance uncertainty as shown in figure 3. Remarkably, however, the ordering of 'uncertainties' turns out to be different in the two cases. Consider the L2-eigenstates, which are shown in the left panels of both 2D figures 4 and 10 as the points on the horizontal axis. The respective L1-probability distributions are $(1/4,1/2,1/4)$ for $m=\pm 1$ and $(1/2,0,1/2)$ for m = 0. The former distribution has the larger entropy ($(3/2){\mathrm{log}}_{3}2\gt {\mathrm{log}}_{3}2$) but the smaller variance ($1/2\lt 1$).
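This inversion is elementary to verify; for completeness, here is a short check of the quoted numbers (Python):

```python
import numpy as np

def H_norm(p, d):
    # Shannon entropy of the distribution p, normalized by log d as in (74)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(d))

outcomes = np.array([-1.0, 0.0, 1.0])
for label, p in [("m = +-1:", np.array([0.25, 0.5, 0.25])),
                 ("m = 0:  ", np.array([0.5, 0.0, 0.5]))]:
    var = (p * outcomes ** 2).sum() - (p * outcomes).sum() ** 2
    print(label, "entropy", round(H_norm(p, 3), 4), " variance", var)
# entropy 0.9464, variance 0.5 for m = +-1; entropy 0.6309, variance 1.0 for m = 0
```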

For larger s this inversion no longer holds. Figure 11 shows this effect. Because we can exchange the roles of L1 and L2 by a unitary rotation, the uncertainty diagrams are symmetric with respect to the diagonal. Therefore the optimal linear bound must be of the form (75), with a suitable c. The entropy sums for the eigenstates $| m\rangle $ with minimal and maximal $| m| $ are shown in figure 11. For all half-integer s and for integer $s\gt 7$ the coherent state $| s\rangle $ produces not only the lowest variance, but also the lowest entropy. Figure 11 also shows the Maassen–Uffink bound, which has been computed by Sánchez–Ruiz [29]. It is attained for the overlap of two spin coherent states and is given by

Equation (77)

The case s = 1 seems to be the only one in which the bound is tight.


Figure 11. Entropic uncertainty sum of the coherent state $| s\rangle $ (blue squares) and the state $| 0\rangle $ (green triangles) in comparison to the Maassen–Uffink bound (orange circles), for integer spin (left) and half integer spin (right).


However, for large s, the bound is again optimal, as the following result shows.

Proposition 8. In the limit $s\to \infty $ the optimal lower bound on the entropic uncertainty region of L1 and L2 is given by the Maassen–Uffink inequality, which converges to

Equation (78)

Proof. As a first step we will compute the asymptotic behavior of the bound $-{\mathrm{log}}_{2s+1}[{c}^{2}],$ with c given in (77). Expanding the central binomial coefficient in factorials and using the Stirling approximation up to order ${\mathrm{log}}_{2s+1}(s)$ gives

Equation (79)

which proves the convergence of the Maassen–Uffink bound to the right-hand side of (78).

In order to show that this bound describes the asymptotic uncertainty region, we have to exhibit sequences of states saturating it for every point on the boundary curve. We first show that the endpoint $(0,1/2)$ is asymptotically attained by the L1-eigenstates $| s\rangle :$ the output entropy of $| s\rangle $ in the L1 basis is always zero, whereas the output entropy in the L2 basis can be evaluated as $H({L}_{2},| s\rangle )=H({L}_{1},{U}_{3}| s\rangle ),$ with ${U}_{3}={{\rm{e}}}^{-\frac{i\pi }{2}{L}_{3}}.$ Because $| s\rangle $ has maximal quantum number m, the probability amplitudes of ${U}_{3}| s\rangle $ are given by the last column of the Wigner-D matrix ${U}_{3}=D(0,\pi /2,0).$ By expanding the Wigner-D matrix in terms of Jacobi polynomials [1], one can verify that ${\pi }_{m}({U}_{3}| s\rangle )={| \langle m| {U}_{3}| s\rangle | }^{2}$ is a binomial distribution in m, symmetric on the domain $\{-s,\ldots ,s\}.$ The entropy of this distribution is $\frac{1}{2}{\mathrm{log}}_{2s+1}\left(2\pi e\displaystyle \frac{2s+1}{4}\right)+{\mathcal{O}}\left(\displaystyle \frac{1}{s}\right),$ which converges to $\frac{1}{2}$ as s goes to infinity, hence (78) can be saturated:

Equation (80)

Finally we have to construct states saturating every point of this bound. With $| s\rangle $ still denoting the highest-weight vector in the eigenbasis of L1, we define, for arbitrary s, a family of unit vectors $| {\psi }_{\alpha }\rangle $ as

Equation (81)

The L1-outcome probability distribution associated with this vector is

Equation (82)

because the two probability distributions have practically no overlap for large s, and hence also ${c}_{\alpha }\approx 1.$ For the same reason we can evaluate the entropy of $\pi ({\psi }_{\alpha })$ as a sum, obtaining

Equation (83)

where H2 is the binary entropy function. In the L2-basis the roles of the two terms in ${\psi }_{\alpha }$ are exchanged, and we get

Equation (84)

Hence the sequence ${\psi }_{\alpha }$ realizes the point $\frac{1}{2}({\mathrm{sin}}^{2}(\alpha ),{\mathrm{cos}}^{2}(\alpha ))$ on the boundary.
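Both ingredients of this proof can be checked numerically. The sketch below (Python, spin matrices as in the earlier sketches) confirms that the L2-outcome distribution of the L1-coherent state is binomial, and shows the slow decrease of its normalized entropy towards 1/2:

```python
import numpy as np
from scipy.special import comb

def spin_matrices(s):                      # as in the earlier sketches
    d = int(round(2 * s)) + 1
    m = s - np.arange(d)
    Lp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1)
    return (Lp + Lp.T) / 2, (Lp - Lp.T) / 2j, np.diag(m)

for s in (1, 5, 20, 80):
    L1, L2, _ = spin_matrices(s)
    w1, V1 = np.linalg.eigh(L1)
    psi = V1[:, np.argmax(w1)]             # coherent state |s> along axis 1
    w2, V2 = np.linalg.eigh(L2)
    p = np.abs(V2.conj().T @ psi) ** 2     # L2-outcome distribution
    binom = comb(2 * s, np.round(s + w2)) / 4.0 ** s
    Hn = -(p[p > 0] * np.log(p[p > 0])).sum() / np.log(2 * s + 1)
    print(s, "deviation from binomial:", float(np.max(np.abs(p - binom))),
          " normalized entropy:", round(Hn, 4))
```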

4. Measurement uncertainty

4.1. Introduction

As mentioned in the introduction, a measurement uncertainty relation is a quantitative bound on the accuracy with which two observables can be measured approximately on the same device. Already in Kennard's 1927 paper [17] it is clearly stated that in quantum mechanics the notion of a 'true value' loses its meaning, so that we should not think of 'measurement error' as the deviation of the observed value from a true value. What we can always do, however, is to compare the performance of two measuring devices, one of which is a (perhaps hypothetical) 'ideal' measurement and the other an approximate one. The only requirement is that these two measurements give outputs which lie in the same space X and whose distance is somehow defined. A good approximate measurement is then one which will give, on every input state, almost the same output distribution as the ideal one. This operational focus on the output distributions is also in keeping with the way one would detect a disturbance of the system. Consider how we discover that trying to detect through which of two slits the particles pass disturbs them: the interference pattern, i.e., the output distribution of the interferometer, is changed and the fringes are washed out.

Two related ways to build up a quantitative comparison of distributions, and thereby a quantitative approximation measure between observables, were introduced in the papers [2, 5] and applied to the standard situation of a position and a momentum operator. These two notions, called calibration error and metric error, will be described in the following subsections. Either way we get a natural figure of merit for an observable F jointly measuring two or more components of angular momentum. In fact, we will only treat the case where F jointly measures all components. By this we simply mean an observable whose output is not a single number but a vector ${\boldsymbol{\eta }}.$ From this, one derives a 'marginal measurement' ${F}^{{\bf{e}}}$ of the ${\bf{e}}$-component by post-processing, i.e., by taking the ${\bf{e}}$-component ${\bf{e}}\cdot {\boldsymbol{\eta }}$ of the output vector as the output of ${F}^{{\bf{e}}}.$ These marginals can then be compared with the standard projection valued measurement of the angular momentum component ${\bf{e}}\cdot {\bf{L}}.$ When $D(G,E)$ is the quantity chosen to characterize the error of an observable G with respect to the ideal reference E, then, in our special case,

Equation (85)

is the desired figure of merit, and this is the quantity which we will minimize. But first we have to be more explicit about the two choices for the error quantity $D(G,E).$ This will be done in the next two subsections.

4.2. Calibration error

The simpler of the two notions assumes that the 'ideal' observable is projection valued, so that we can produce states which have a very narrow distribution around one of its eigenvalues (or points in the continuous spectrum). In other words, we have some states available which come close to having a 'true value' in the sense that the ideal distribution is sharp around a known value. A good approximate measurement should then have an output distribution which is also well peaked around this value. Thus we only have to compare probability distributions to δ-function like distributions, i.e., point measures ${\delta }_{x}$ with $x\in X.$ This is straightforward, and we set, for any probability measure μ on the space X and $1\leqslant \alpha \lt \infty $

Equation (86)

where D under the integral is the given metric on X. This could be called the power-α deviation of μ from the point x. We are mostly interested in quadratic deviations, i.e., $\alpha =2.$ However, in this section we keep α general, which causes no extra difficulty, but makes clear which numbers '2' arise directly from the role of the averaging power α in (86) and similar equations.

We apply this now to the output distribution Fρ obtained by measuring the observable F on the input state ρ, and to its ideal counterpart Eρ. The $\varepsilon $-deviation or $\varepsilon $-calibration error of the observable F with respect to the ideal observable E is

Equation (87)

where the supremum is over all $x\in X$ and 'calibration states' ρ, which are sharply concentrated on x up to quality $\varepsilon .$ Note that as a function of $\varepsilon $ this expression is decreasing as $\varepsilon \to 0,$ because the supremum is taken over smaller and smaller sets. Therefore the limit exists, and we define the calibration error of F with respect to E by

Equation (88)

For observables E with discrete spectrum (like angular momentum components) we can also take $\varepsilon =0$ in (87), and directly get ${{\rm{\Delta }}}_{\alpha }^{c}(F,E)={{\rm{\Delta }}}_{\alpha }^{0}(F,E).$
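In this discrete situation the calibration error is a finite expression which is easily evaluated. The following sketch (Python) computes ${{\rm{\Delta }}}_{\alpha }^{0}(F,E)$ for a toy spin-1 example, where F reports the L3-value but occasionally outputs a neighbouring value; the kernel numbers are made up purely for illustration:

```python
import numpy as np

def calibration_error(povm, outcomes, m_vals, alpha=2):
    # Delta^0_alpha(F, E) for an ideal observable E with discrete,
    # non-degenerate spectrum: calibrate with the exact eigenstates |m>,
    # here taken as the standard basis vectors, cf. (86)-(88)
    worst = 0.0
    for j, m in enumerate(m_vals):
        dev = sum(np.real(Fy[j, j]) * abs(y - m) ** alpha
                  for y, Fy in zip(outcomes, povm))
        worst = max(worst, dev ** (1 / alpha))
    return worst

m_vals = np.array([1.0, 0.0, -1.0])
P = np.array([[0.8, 0.1, 0.0],             # row y, column m: P(y | m)
              [0.2, 0.8, 0.2],
              [0.0, 0.1, 0.8]])
povm = [np.diag(P[i]) for i in range(3)]   # F(y), diagonal in the L3 basis
print(calibration_error(povm, m_vals, m_vals))   # sqrt(0.2), about 0.447
```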

4.3. Metric error

A possible issue with the calibration error is that it describes the performance of F only on a very special subclass of states. On the one hand this makes it easier to determine it experimentally, but on the other hand we get no guarantee about the performance of the device on general inputs. Classically this problem does not arise, because broad distributions can be represented as mixtures of sharply peaked ones, and this allows us to give an estimate also on the similarity of output distributions for general inputs. The form of this estimate gives a good hint as to how to define the distance of probability distributions both of which are diffuse. Indeed, suppose ρ is an input state such that $\rho =\int \mu ({\rm{d}}x){\rho }_{x},$ where ${\rho }_{x}$ is an $\varepsilon $-calibration state at point x, and μ is an arbitrary probability measure. Then we can define a measure γ on X × X by $\gamma ({\rm{d}}x\;{\rm{d}}y)=\mu ({\rm{d}}x){F}_{{\rho }_{x}}({\rm{d}}y),$ which gives the probability of the joint event of having a 'true value' $x\in {\rm{d}}x$ and finding $y\in {\rm{d}}y.$ If one integrates out the x variable one gets the output distribution for ρ, because Fρ is linear in ρ, and if one integrates out y one gets μ, because each ${F}_{{\rho }_{x}}$ is normalized. To within $\varepsilon $ this is the output distribution Eρ, and with known calibration error we get the bound

Equation (89)

This suggests the following definitions. For two probability distributions μ and ν on X we define a coupling to be a measure γ on X × X whose first marginal is μ and whose second marginal is ν. The set of couplings will be denoted by ${\rm{\Gamma }}(\mu ,\nu ),$ and is always non-empty because it contains the product measure. We then define the Wasserstein α-distance of μ and ν as

Equation (90)

This is also called a transport distance, because of the following interpretation, first considered by Gaspard Monge in the 18th century in connection with the building of fortifications. We consider μ and ν as distributions of earth, and the task of a builder who wants to transform the distribution μ into the distribution ν. The workers are paid by the bucket and by the αth power of the distance travelled with each bucket (giving a bonus pay on long distances). The builder's plan is precisely the coupling γ, saying how many units are to be taken from x to y, and the integral is the total cost. The infimum is just the price of the optimal transport plan. The theory of such metrics is well developed, and we recommend the book of Villani [31] on the subject, but in the present context we only need some simple observations.
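On the real line, which is the case relevant for the marginals below, the optimal coupling is the monotone one matching the quantile functions of μ and ν; this is a standard fact for the convex cost $D{(x,y)}^{\alpha }=| x-y{| }^{\alpha }$ with $\alpha \geqslant 1.$ A sketch for discrete, normalized distributions (Python):

```python
import numpy as np

def wasserstein(xs, ps, ys, qs, alpha=2):
    # Wasserstein alpha-distance (90) on X = R via the monotone coupling:
    # sweep through both distributions in increasing order, always moving
    # as much mass as possible between the current pair of points;
    # assumes both weight vectors ps, qs sum to one
    xs, ps = np.asarray(xs, float), np.asarray(ps, float)
    ys, qs = np.asarray(ys, float), np.asarray(qs, float)
    ix, iy = np.argsort(xs), np.argsort(ys)
    xs, ps, ys, qs = xs[ix], ps[ix], ys[iy], qs[iy]
    cost, i, j, pi, qj = 0.0, 0, 0, ps[0], qs[0]
    while True:
        move = min(pi, qj)                 # mass moved from xs[i] to ys[j]
        cost += move * abs(xs[i] - ys[j]) ** alpha
        pi, qj = pi - move, qj - move
        if pi <= 1e-15:
            i += 1
            if i == len(xs):
                break
            pi = ps[i]
        if qj <= 1e-15:
            j += 1
            if j == len(ys):
                break
            qj = qs[j]
    return cost ** (1 / alpha)

# moving half of the mass by one unit costs (1/2 * 1**2)**(1/2)
print(wasserstein([0, 1], [0.5, 0.5], [0, 1], [1.0, 0.0]))   # 0.7071...
```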

With a metric between probability distributions we define the distance of two observables as the worst case distance of their output distributions:

Equation (91)

For the connection between this metric error and the calibration error introduced above, note first that when ν is the point measure ${\delta }_{x}$ and μ is arbitrary, the product is the only coupling, and the two definitions ${D}_{\alpha }(\mu ,{\delta }_{x})$ from equations (86) and (90) coincide. Therefore, if ${D}_{\alpha }({E}_{\rho },{\delta }_{x})\leqslant \varepsilon ,$ we have

Equation (92)

By taking the supremum (87) and letting $\varepsilon \to 0,$ we hence have

Equation (93)

Intuitively, this merely indicates that for calibration we test deviations only on the small subset of highly concentrated states. Then (89) is a partial converse: if ρ has a convex decomposition into $\varepsilon $-concentrated states, ${D}_{\alpha }({F}_{\rho },\mu )\leqslant {{\rm{\Delta }}}_{\alpha }^{\varepsilon }(F,E),$ and since ${D}_{\alpha }(\mu ,{E}_{\rho })\leqslant \varepsilon $ we get ${D}_{\alpha }({F}_{\rho },{E}_{\rho })\leqslant {{\rm{\Delta }}}_{\alpha }^{\varepsilon }(F,E)+\varepsilon .$ In the classical case such a decomposition always exists, so we have equality ${D}_{\alpha }(F,E)={{\rm{\Delta }}}_{\alpha }^{c}(F,E).$ In the quantum case, however, we not only have convex mixtures of sharply concentrated states but also coherent superpositions. Using these it is easy to build examples in which (93) is strict.

There is a second 'quasi-classical' setting, in which calibration and metric error coincide, and this will actually be used below. This is the case when F and E differ only by classical noise generated in the measuring apparatus. More formally, this is described by a transition probability kernel $P(x,{\rm{d}}y),$ which is for every x the probability measure in y describing the output of F, given that the ideal measurement E has produced the value x. We can think of this as classical probabilistic post-processing or noise. It is, of course, not necessary that F actually operates in two steps, but only that it could be simulated in this way, i.e., that the relation $F({\rm{d}}y)=\int E({\rm{d}}x)\ P(x,{\rm{d}}y)$ holds. This is enough to conclude ${{\rm{\Delta }}}_{\alpha }^{c}(F,E)={D}_{\alpha }(F,E),$ and to give a formula for both in terms of the size of the noise kernel P. In the following lemma the E-essential supremum of a measurable function f with respect to a measure E (denoted $E-{\mathrm{ess}\;\mathrm{sup}}_{x\in X}f(x)$) is the supremum of all λ such that the upper level set $\{x| f(x)\gt \lambda \}$ has non-zero E-measure. In our application E is the spectral measure of a component ${\bf{e}}\cdot {\bf{L}},$ so it is concentrated on the finite set $\{-s,\ldots ,s\}.$ The essential supremum is then simply the maximum of f over this set.

Lemma 9. Let E be a projection valued observable on a separable metric space $(X,D).$ Let F be an observable arising from E by post-processing with a transition probability kernel P. Then, for all α,

Equation (94)

Proof. Let I, II, III be the three terms in this equation. Then ${\rm{I}}\leqslant $ II is given by (93). To show II ≤ III, note that for any state ρ we get a coupling γ between Fρ and Eρ by $\gamma ({\rm{d}}x\;{\rm{d}}y)={E}_{\rho }({\rm{d}}x)P(x,{\rm{d}}y).$ Hence ${D}_{\alpha }{({F}_{\rho },{E}_{\rho })}^{\alpha }\leqslant \int {E}_{\rho }({\rm{d}}x)\int P(x,{\rm{d}}y)\ D{(x,y)}^{\alpha }.$

We introduce the function $f(x)={\left(\int P(x,{\rm{d}}y)\ D{(x,y)}^{\alpha }\right)}^{1/\alpha }$ and split the integral with respect to Eρ into an integral over ${X}_{\gt }=\{x| f(x)\gt t\}$ and an integral over its complement ${X}_{\leqslant },$ where $t\gt E-{\mathrm{ess}\;\mathrm{sup}}_{x}f(x).$ Then, by definition of the essential supremum, Eρ vanishes on ${X}_{\gt }$ and on ${X}_{\leqslant }$ the integrand is bounded by ${t}^{\alpha }.$ Hence ${D}_{\alpha }{({F}_{\rho },{E}_{\rho })}^{\alpha }\lt {t}^{\alpha }.$ Taking the supremum over ρ and the αth root we get ${D}_{\alpha }(F,E)\lt t$ for every $t\;\gt $ III, proving II ≤ III.

It remains to show that III ≤ I. This time we pick a $t\lt E-{\mathrm{ess}\;\mathrm{sup}}_{x}f(x).$ By definition, this means that $E\left(\{x| f(x)\gt t\}\right)\ne 0.$ Now let $\varepsilon \gt 0.$ Then because we have assumed X to be separable, it is covered by a countable collection of $\varepsilon $-balls ${B}_{\varepsilon }({x}_{i})=\{y\in X| D({x}_{i},y)\leqslant \varepsilon \},$ $i=1,2,\ldots .$ Hence due to countable additivity of E

Equation (95)

Hence for some $z={x}_{i}$ we have $E\left({B}_{\varepsilon }(z)\cap \{x| f(x)\gt t\}\right)\ne 0.$ Since E is projection valued, we find a state ρ such that the probability measure ${E}_{\rho }({\rm{d}}x)$ is concentrated on this set. In particular, ${D}_{\alpha }({E}_{\rho },{\delta }_{z})\leqslant \varepsilon .$ Moreover, ${m}^{\alpha }=\int {E}_{\rho }({dx})f{(x)}^{\alpha }\geqslant {t}^{\alpha }$ and

Equation (96)

where the second expression defines a function g. We interpret these quantities as ${L}^{\alpha }$-norms with respect to Eρ, i.e., $m={| | f| | }_{\alpha }$ and ${D}_{\alpha }({F}_{\rho },{\delta }_{z})={| | g| | }_{\alpha }.$ Then by the triangle inequality

Equation (97)

To get an upper bound on ${| | f-g| | }_{\alpha }$ note that the expression for ${| | f-g| | }_{\alpha }^{\alpha }$ is the integral over ${E}_{\rho }({\rm{d}}x)$ of

Equation (98)

Again we can read the outer parenthesis as a difference of norms, namely the ${L}^{\alpha }$-norms of the functions ${h}_{x}(y)=D(x,y)$ and ${h}_{z}(y)=D(z,y)$ with respect to integration by $P(x,{\rm{d}}y)$ where x is considered a fixed parameter. But by the triangle inequality for the metric D we have $| {h}_{x}-{h}_{z}| \leqslant D(x,z)$ independent of y. Since the transition kernel P is a probability measure with respect to dy, we find that (98) is bounded above by ${({| | {h}_{x}| | }_{\alpha }-{| | {h}_{z}| | }_{\alpha })}^{\alpha }\leqslant {| | {h}_{x}-{h}_{z}| | }_{\alpha }^{\alpha }\leqslant D{(x,z)}^{\alpha }.$ Hence in (97) we have

Equation (99)

because by construction Eρ has support in ${B}_{\varepsilon }(z).$ Combining the estimates we get ${D}_{\alpha }({F}_{\rho },{\delta }_{z})\geqslant t-\varepsilon .$ The supremum over all calibrating states can only increase the left-hand side, and on the right we use that the only condition on t was that $t\lt E-{\mathrm{ess}\;\mathrm{sup}}_{x}f(x),$ so that

Equation (100)

Now III $\;\leqslant \;{\rm{I}}$ follows in the limit $\varepsilon \to 0.$
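The content of this lemma can be illustrated with a toy noise kernel (the numbers are again made up). The E-essential supremum on the right-hand side of (94) is just a maximum over the eigenvalues m, and the coupling from the proof bounds the metric error of every output distribution by it:

```python
import numpy as np

m = np.array([-1.0, 0.0, 1.0])             # eigenvalues of L3 for s = 1
P = np.array([[0.80, 0.05, 0.0],           # row y, column m: P(y | m)
              [0.20, 0.90, 0.1],
              [0.00, 0.05, 0.9]])

alpha = 2
# noise size f(m) = (sum over y of P(y|m) |y - m|^alpha)^(1/alpha)
f = ((P * np.abs(m[:, None] - m[None, :]) ** alpha).sum(axis=0)) ** (1 / alpha)
print("f(m) =", f, "  ess sup =", f.max())   # the right-hand side of (94)

# the coupling gamma(dx dy) = E_rho(dx) P(x, dy) from the proof gives
# D_alpha(F_rho, E_rho)^alpha <= sum_m E_rho(m) f(m)^alpha <= (max_m f(m))^alpha
rng = np.random.default_rng(1)
for _ in range(3):
    e = rng.dirichlet(np.ones(3))          # a possible distribution E_rho
    print(round(((e * f ** alpha).sum()) ** (1 / alpha), 4), "<=", round(f.max(), 4))
```

Calibrating with an eigenstate belonging to the worst m attains the bound, which is the content of the step III $\leqslant $ I.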

In [5] a special case of this Lemma was used to show ${{\rm{\Delta }}}^{c}=D$ for the position and momentum marginals of a covariant phase space measurement. In that case the noise kernel P is even translation invariant, i.e., the output of the marginal observable can be simulated by just adding some state-independent noise to the output of the ideal position or momentum observable. Such translation invariance makes no sense in the case of angular momentum, since the range of the outputs $m\in \{-s,\ldots ,s\}$ of the ideal observable is bounded. This is why the above generalization was needed, in which the noise can depend on the ideal output value. The reason for the existence of a post-processing kernel, however, will be the same as in the phase space case: the covariance of the joint measurement. Roughly speaking this makes the marginal corresponding to ${\bf{e}}\cdot {\bf{L}}$ invariant under rotations around the ${\bf{e}}$-axis, which in an irreducible representation means that it must be a function of ${\bf{e}}\cdot {\bf{L}}.$ It is therefore crucial to argue that the optimal joint measurement is covariant, which will be done in the next section.

4.4. Covariant observables

Consider a general observable F with outcome space X. Suppose some group G acts on X, with the action written $(g,x)\mapsto {gx}$ as usual. Suppose that the group also acts as a symmetry group of the quantum system. That is, there is a representation $g\mapsto {U}_{g}$ of G by operators Ug, which are unitary or antiunitary, and satisfy the group law (possibly up to a phase factor). The observable F is then called covariant if ${U}_{g}^{*}F(S){U}_{g}=F({g}^{-1}S)$ for all $g\in G$ and every measurable set S. In other words, shifting the input state by Ug will result in the entire output distribution shifted by g. For our purposes it will be convenient to express this in terms of an action $F\mapsto {T}_{g}F$ of G on the set of observables:

Equation (101)

Then the covariant observables are precisely those for which ${T}_{g}F=F$ for all $g\in G.$

For angular momentum the group will be the rotation group with its action on the three-vectors ($X={{\mathbb{R}}}^{3}$). The representation U is then a representation only up to a factor ±1. Alternatively, we can take G to be the covering group $\mathrm{SU}(2).$ Since the covariant observables are exactly the same in both cases, this choice is completely equivalent.

Covariance is certainly a reasonable condition to impose on a 'good' observable, so it would make sense to study uncertainty relations just for these. However, there is no need for such an ad hoc restriction, because the minimum of uncertainty over all observables is anyway attained on a covariant one. The basic reason for this is that our figure of merit (85) does not single out a direction in space, so that it is invariant under the action Tg. We therefore only have to show that there is no symmetry breaking, i.e., the symmetric variational problem has a symmetric solution. This will be done in the following lemma.

Lemma 10. For any observable F with an outcome set $X={{\mathbb{R}}}^{3},$ and $1\leqslant \alpha \lt \infty $ let

Equation (102)

Equation (103)

Then

  • (1)  
    Both these functionals are invariant under the action T, and ${D}_{\mathrm{max}}{(F)}^{\alpha }$ and ${{\rm{\Delta }}}_{\mathrm{max}}{(F)}^{\alpha }$ are convex.
  • (2)  
    Each of the infima ${\mathrm{inf}}_{F}{D}_{\mathrm{max}}(F)$ and ${\mathrm{inf}}_{F}{{\rm{\Delta }}}_{\mathrm{max}}(F)$ is independent of whether it is taken over all observables or just the covariant ones.

Proof. By definition of ${F}^{{\bf{e}}}$ we have ${({T}_{R}F)}^{{\bf{e}}}={U}_{R}{F}^{{R}^{-1}{\bf{e}}}{U}_{R}^{*}.$ When ${E}^{{\bf{e}}}$ denotes the spectral measure of ${\bf{e}}\cdot {\bf{L}},$ the relation ${U}_{R}{\bf{L}}{U}_{R}^{*}={R}^{-1}{\bf{L}}$ similarly implies ${U}_{R}{E}^{{R}^{-1}{\bf{e}}}{U}_{R}^{*}={E}^{{\bf{e}}}.$ Moreover, due to the supremum over all states, ${D}_{\alpha }(F,E)$ does not change, if both observables are rotated with the same unitary. Hence

Equation (104)

Hence the supremum over ${\bf{e}}$ is unchanged and ${D}_{\mathrm{max}}({T}_{R}F)={D}_{\mathrm{max}}(F).$ For Δ note that we can carry out the limit $\varepsilon \to 0$ directly, because ${\bf{e}}\cdot {\bf{L}}$ has finite spectrum and states with ${D}_{\alpha }({E}_{\rho }^{{\bf{e}}},{\delta }_{x})$ small are norm-close to eigenstates with x = m. Hence

Equation (105)

If we now insert the definition of TRF, rewrite the maximum over ψ in terms of ${\psi }^{\prime }={U}_{R}^{*}\psi ,$ and substitute ${\bf{x}}^{\prime} ={R}^{-1}{\bf{x}}$ in the integral, we find ${{\rm{\Delta }}}_{\alpha }^{c}({({T}_{R}F)}^{{\bf{e}}},{\bf{e}}\cdot {\bf{L}})={{\rm{\Delta }}}_{\alpha }^{c}({F}^{{R}^{-1}{\bf{e}}},{E}^{{R}^{-1}{\bf{e}}}),$ and again ${{\rm{\Delta }}}_{\mathrm{max}}({T}_{R}F)={{\rm{\Delta }}}_{\mathrm{max}}(F).$

Convexity for Dα follows from the corresponding property of transport distances. Indeed let $\mu ={\displaystyle \sum }_{k}{\lambda }_{k}{\mu }_{k}$ and $\nu ={\displaystyle \sum }_{k}{\lambda }_{k}{\nu }_{k}$ be convex combinations of measures with the same weights ${\lambda }_{k},$ and let ${\gamma }_{k}$ be a coupling between ${\mu }_{k}$ and ${\nu }_{k}.$ Then $\gamma ={\displaystyle \sum }_{k}{\lambda }_{k}{\gamma }_{k}$ is a coupling of the convex combinations. Moreover

Equation (106)

Here the first inequality holds because Dα is defined as the infimum over couplings. Then if we take the infimum over each of the ${\gamma }_{k},$ we get ${D}_{\alpha }{(\mu ,\nu )}^{\alpha }\leqslant {\displaystyle \sum }_{k}{\lambda }_{k}{D}_{\alpha }{({\mu }_{k},{\nu }_{k})}^{\alpha }.$ In particular, $F\mapsto {D}_{\alpha }({F}_{\rho }^{{\bf{e}}},{E}_{\rho }^{{\bf{e}}})$ is convex. Since the pointwise supremum of convex functions is convex, ${D}_{\mathrm{max}},$ as the supremum with respect to ρ and ${\bf{e}},$ is convex. The same observation, applied to (105) with an additional supremum over ${\bf{e}},$ shows that ${{\rm{\Delta }}}_{\mathrm{max}}$ is likewise convex.

Given any observable F we now form its average $\bar{F}=\int {\rm{d}}R\ {T}_{R}F$ with respect to the normalized Haar measure dR. Then $\bar{F}$ is covariant and ${D}_{\mathrm{max}}\left(\bar{F}\right)\leqslant \int {\rm{d}}R\ {D}_{\mathrm{max}}({T}_{R}F)={D}_{\mathrm{max}}(F).$ Taking the infimum here, and using that $\bar{F}$ is covariant and that every covariant observable is such an average ($\bar{F}=F$), gives the second inequality in

Equation (107)

while the first follows trivially because the covariant observables are a subset. Hence the two infima are the same, and the same argument also applies to ${{\rm{\Delta }}}_{\mathrm{max}},$ proving the second claim.

We could also have included a statement that the infima in this Lemma are all attained. The argument for that is the compactness of the set of observables (in a suitable topology) and the lower semi-continuity of ${D}_{\mathrm{max}}$ and ${{\rm{\Delta }}}_{\mathrm{max}},$ which follows, like convexity, from the representation of these functionals as pointwise suprema of continuous functionals. However, since we will later explicitly exhibit minimizers anyhow, we skip the abstract arguments. We also remark that one of the main difficulties in the position/momentum case [5] does not arise here: in contrast to the group of phase space translations, the rotation group is compact, so the average is an integral and not an 'invariant mean', which has the potential of producing singular measures with some support at infinitely distant points.

The main importance of this Lemma is to make the variational problem much more tractable. For covariant observables we have a fairly explicit parameterization, which allows us to explicitly compute the minimizers. In contrast, for the seemingly easier case of joint measurement of just two components covariance gives only a very weak constraint, and we were not able to complete the minimization.

To develop the form of covariant observables, let us first consider the case when the output vectors have a fixed length r. A plausible value would be r = s, but we will leave this open. In this case X reduces to a sphere of radius r, which is a homogeneous space for the rotation group. We could thus apply the covariant version [28, 32] of the Naimark [21] dilation theorem to obtain a complete classification. But we do not need this machinery in this elementary case. Note first that $\mathrm{tr}F({\rm{d}}x)$ is an invariant measure on the sphere, and all probability densities $\mathrm{tr}\rho F({\rm{d}}x)$ are bounded by this measure (since ρ is bounded). Hence, for every ρ there is a bounded probability density with respect to the uniform measure. Since this depends linearly on ρ, it is given by a bounded operator depending on ${\bf{x}}.$ The ${\bf{x}}$-dependence is then completely resolved by covariance, and it is sufficient to know this density at one point, say the north pole $r{\boldsymbol{\nu }}.$ Moreover, by covariance this density must commute with the rotations around ${\boldsymbol{\nu }},$ and is hence a linear combination of the eigenstates $| n\rangle \langle n| $ with $n\in \{-s,\ldots ,s\}.$ The only choices to be made are hence the coefficients Fn of this linear combination. We write the resulting observable in terms of its integrals with an arbitrary function h on the sphere:

Equation (108)

Here the first integral is over Haar measure on the rotation group (or $\mathrm{SU}(2)$), whereas the second is expressed in polar coordinates $(\theta ,\phi )$ on the sphere, with ${{\bf{e}}}_{\theta ,\phi }$ the corresponding unit vector, and ${U}_{\theta ,\phi }$ some rotation taking the north pole ${\boldsymbol{\nu }}$ to ${{\bf{e}}}_{\theta ,\phi }.$ It does not matter which rotation we choose, because $| n\rangle \langle n| $ is invariant with respect to rotations around the three-axis. The two expressions are related by introducing Euler angles on the rotation group and integrating out the initial rotation around the three-axis. The normalization factor $(2s+1)$ is chosen so that the constraints on Fn are exactly ${F}_{n}\geqslant 0$ and ${\displaystyle \sum }_{n}{F}_{n}=1,$ i.e., the observable is represented as a convex combination of observables using only one fixed $| n\rangle \langle n| $ as the density. What changes when r is not fixed is simply that we get an additional integration over r, where the Fn may also depend on r. Effectively, we get a probability measure Fn(dr) on $\{-s,\ldots ,s\}\times {{\mathbb{R}}}_{+}$ and the second version of (108) just becomes

Equation (109)

The criterion for joint measurability does not depend on the full observable, but only on the marginals along the various directions ${\bf{e}}.$ It is one of the direct consequences of covariance, evident from the proof of lemma 10, that ${D}_{\alpha }({F}^{{\bf{e}}},{E}^{{\bf{e}}})$ and ${{\rm{\Delta }}}_{\alpha }^{c}({F}^{{\bf{e}}},{E}^{{\bf{e}}})$ do not depend on ${\bf{e}}.$ We will therefore only consider the case ${\bf{e}}={\boldsymbol{\nu }}$ in the following. In (109) this just means that we specialize to functions of the form $h({\bf{x}})={h}_{1}({\bf{x}}\cdot {\boldsymbol{\nu }})$ with ${h}_{1}\;:{\mathbb{R}}\to {\mathbb{R}}.$ Thus in the integrand we get ${h}_{1}(r{{\bf{e}}}_{\theta ,\phi }\cdot {\boldsymbol{\nu }})={h}_{1}(r\mathrm{cos}\theta ),$ which no longer depends on ϕ. We can therefore carry out the ϕ-integration. The resulting operator will commute with rotations around the three-axis, so we can express it as a linear combination of operators $| m\rangle \langle m| :$

Equation (110)

The first line establishes the connection with the premise of lemma 9: for covariant observables the marginals can be simulated by an exact measurement of m, with a post-processing kernel P. Therefore, for covariant F we have ${D}_{\mathrm{max}}(F)={{\rm{\Delta }}}_{\mathrm{max}}(F),$ and by lemma 10 the infimum of this quantity, say ${{\rm{\Delta }}}_{\mathrm{min}}(s),$ is the same whether it is taken over covariant observables or over all observables. We can therefore state the measurement uncertainty relations in the forms

Equation (111)

for all observables F, whether covariant or not. We will now compute ${{\rm{\Delta }}}_{\mathrm{min}}(s),$ and show that both minima are attained for a unique covariant observable.

4.5. Minimal uncertainty

While the above holds for arbitrary exponent α, we will now restrict to the standard variance case, i.e. $\alpha =2.$ So far, we have derived that the optimal observable F is covariant, leading to the parametrization (110). In particular, ${F}^{{\bf{e}}}$ arises from ${\bf{e}}\cdot {\bf{L}}$ by a transition probability kernel, so that metric and calibration error coincide. In the sequel, we will therefore only consider the calibration error, which is easier to evaluate. By covariance the calibration error ${{\rm{\Delta }}}_{2}^{c}({F}^{{\bf{e}}},{\bf{e}}\cdot {\bf{L}})$ is independent of ${\bf{e}},$ so we can choose ${\bf{e}}={{\bf{e}}}_{3}.$ Since for discrete-valued observables we can take $\varepsilon =0$ in (87), we get from (110) the basic figure of merit

Equation (112)

Before calculating the optimal case, we introduce the following lemma, which provides a more manageable expression $I(s,r,n,m)$ for the integral over θ, such that (112) reads

Equation (113)

Lemma 11. For $s\geqslant 1$ the integral $I(s,r,n,m)$ can be written as

Equation (114)

where

Equation (115)

and

Equation (116)

For $s=\displaystyle \frac{1}{2}$ we have

Equation (117)

Proof. Here we have to solve the integral

Equation (118)

where ${{\rm{d}}}_{{nm}}^{(s)}(\theta )=\langle n| {U}_{\theta }| m\rangle $ is the small Wigner d-matrix [1]. First we expand ${{\rm{d}}}_{{nm}}^{(s)}(\theta )$ in terms of the Jacobi polynomials:

Equation (119)

In the following we use a recurrence relation for the Jacobi polynomials. This three-term relation does not hold for $s=1/2,$ so that we have to treat this case separately: the integrals $I\left(\displaystyle \frac{1}{2},r,n,\pm \displaystyle \frac{1}{2}\right)$ can indeed be calculated directly from the above expression, and the results are given in the statement of the lemma. From now on we assume $s\geqslant 1.$

We can simplify some case distinctions by introducing $k=\mathrm{min}(s+m,s-m,s+n,s-n)$ and substituting the integers $s-m,m-n,m+n$ that arise according to table 1, which is possible due to the symmetries of the Wigner d-matrix [1]. Our expression then depends implicitly on $(s,m,n)$ through $(\mu ,\nu ,k):$

Equation (120)

Substituting $x=\mathrm{cos}\theta $ yields

Equation (121)

where

Equation (122)

This integral can be solved by expanding the factor ${({rx}-m)}^{2},$ using the Jacobi polynomial orthogonality relation [30]

Equation (123)

and the recurrence relation [30]

Equation (124)

Then the expressions in the lemma arise by simplifying the corresponding polynomials:

Equation (125)

We will use this lemma to simplify the maximization over m. Moreover, the integral over r and the sum over n can be seen as taking a convex combination of two-dimensional vectors $\left({A}_{s}(r,n),{B}_{s}(r,n)\right).$ Hence optimizing F can be analyzed geometrically in terms of the set of such pairs (see figure 13). This is solved in the next theorem, whose results are visualized in figure 12.


Figure 12. Optimal radii ${r}_{\mathrm{min}}(s)$ and measurement uncertainties ${{\rm{\Delta }}}_{\mathrm{min}}$ according to equation (126). The radii are scaled by s, showing that for $1\leqslant s\leqslant 5/2$ the outputs of the optimal observable are vectors of modulus $\gt s.$ For larger s, the output vectors are shorter than s, and, after a minimum around $s=27/2$ we have ${r}_{\mathrm{min}}(s)/s\to 1.$ In both panels, the functional expression valid for $s\gt 1$ is plotted in blue.


Figure 13. Left: example for s = 6, with the n = s curve marked in red. Right: intersection of ${K}_{{\bf{v}}}$ with ϕ in the s = 1 case.


Table 1.  Wigner-d matrix substitution.

        k        μ        ν
(I)     s + m    n - m    -n - m
(II)    s - m    m - n    n + m
(III)   s + n    m - n    -n - m
(IV)    s - n    n - m    n + m

Theorem 12. The minimal measurement uncertainty ${{\rm{\Delta }}}_{\mathrm{min}}(s)$ in the sense of (111) is attained at a unique covariant observable, for which Fn(dr) is a point measure at n = s and $r={r}_{\mathrm{min}},$ with

Equation (126)

Except for s = 1, the maximum over m in (113) is trivial for the optimal observable, i.e., the calibration error is the same for all calibration inputs m.

Proof. We consider first the case $s\geqslant 1,$ and reformulate the problem using that ${{\rm{\Delta }}}_{2}^{c}{({F}^{3},{L}_{3})}^{2}$ is a convex combination of the functions ${A}_{s}(r,n)$ and ${B}_{s}(r,n).$ Here we must find the best n as well as the best probability distribution Fn(dr) for the worst m. We denote the convex set of all possible combinations by ${\rm{\Omega }}\subset {{\mathbb{R}}}^{2},$ i.e.

Equation (127)

All information about a possible observable is now contained in $(a,b)\in {\rm{\Omega }}.$ Furthermore, a maximum over m is part of the definition of ${{\rm{\Delta }}}_{2}^{c}{({F}^{3},{L}_{3})}^{2},$ so we can rewrite it as a functional on Ω:

Equation (128)

The problem is now to minimize the functional $K(a,b).$ Since, for general n and s, Ω is hard to describe, we choose the following strategy, which is illustrated in the left panel of figure 13.

We will show that, for $s\gt 1,$ K takes its minimum at

Equation (129)

by constructing a line ϕ which separates the set Ω from the convex level set ${K}_{{\bf{v}}}:= \{(a,b)\in {{\mathbb{R}}}^{2}| K(a,b)\leqslant K({\bf{v}})\}.$

For this we will take the line ϕ to be the tangent of the curve $r\mapsto (A(r,s),B(r,s))$ at the point ${\bf{v}}.$ The normal ${\bf{u}}$ of ϕ is

Equation (130)

and ${\rm{\Phi }}:= \{{\bf{x}}\in {{\mathbb{R}}}^{2}| {\bf{x}}\cdot {\bf{u}}\geqslant {\bf{v}}\cdot {\bf{u}}\}$ is the half plane above ϕ.

Now we show that ${\rm{\Omega }}\subset {\rm{\Phi }},$ i.e.

Equation (131)

with equality iff n = s and $r={r}_{\mathrm{min}}(s).$ Note that the function ${g}_{s}(n,r)$ is quadratic in r, so for verifying (131) it is sufficient to show that ${\partial }_{r}^{2}\ {g}_{s}(n,r)\geqslant 0$ and ${g}_{s}(n,{r}_{n})\geqslant 0,$ where rn is the stationary point determined by ${\partial }_{r}\ {g}_{s}(n,{r}_{n})=0.$

Indeed we have

Equation (132)

Now for $s\geqslant 3$ the factor with the square root is positive, and since $| n| \lt s$ we can estimate the expression (132) as $\geqslant 2{s}^{2}-s\geqslant 0.$ For $1\leqslant s\lt 3,$ the minimum with respect to n is attained at n = 0, for which (132) can be evaluated explicitly and shown to be positive.

The stationary points rn can be straightforwardly computed as

Equation (133)

with

Equation (134)

The denominator of ${g}_{s}(n,{r}_{n})$ in (134) is again the quadratic coefficient in (131), i.e., the expression (132), which was already shown to be positive. Hence ${\rm{\Omega }}\subset {\rm{\Phi }}.$

Finally we have to certify that ${K}_{{\bf{v}}}\cap {\rm{\Phi }}=\{{\bf{v}}\}.$ Using the gradient of ϕ and comparing it to the linear boundaries of ${K}_{{\bf{v}}}$ we get the conditions

Equation (135)

which are true for $s\gt 1.$ This concludes the proof for $s\gt 1.$

For s = 1, this last step of the above proof fails. Indeed, the level set ${K}_{{\bf{v}}},$ which was determined by taking the point v on the horizontal axis that is also on the boundary of Ω, does intersect Ω, as can be seen in figure 13. We therefore have to take a level set of K for a slightly smaller value. Since the tangents of the level sets are all the same for $b\gt 0,$ we can readily find the level set which is tangent to Ω. This gives the optimal radius ${r}_{\mathrm{min}}(s=1)=5/4$ and the minimal uncertainty

Equation (136)

Analogously to the above arguments, one easily verifies that $\{{\bf{v}}\}={K}_{{\bf{v}}}\cap {\rm{\Omega }}.$

Finally, for $s=1/2$ one can draw the conclusion directly from the form of $I(1/2,r,n,m)$ given in lemma 11: this expression does not depend on m, and has a unique global minimum at ${r}_{\mathrm{min}}(1/2)=1/2$ and $n=+1/2$, where the optimal probability measure F must therefore be concentrated.

In all cases, the optimal value ${{\rm{\Delta }}}_{\mathrm{min}}(s)$ is computed by substituting the obtained optimal ${r}_{\mathrm{min}}(s)$ and n = s in (113).
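The optimization can also be double-checked by brute force, without lemma 11: build the integrand of (118) from the Wigner d-matrix elements and scan over r for n = s. A sketch (Python with NumPy/SciPy, spin matrices as in the earlier sketches):

```python
import numpy as np
from scipy.linalg import expm

def spin_matrices(s):                      # as in the earlier sketches
    d = int(round(2 * s)) + 1
    m = s - np.arange(d)
    Lp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1)
    return (Lp + Lp.T) / 2, (Lp - Lp.T) / 2j, np.diag(m)

s = 1.0
_, L2, _ = spin_matrices(s)
m_vals = s - np.arange(int(round(2 * s)) + 1)
thetas = np.linspace(0.0, np.pi, 2001)
dtheta = thetas[1] - thetas[0]
# |d^(s)_{s m}(theta)|^2 = |<s| exp(-i theta L2) |m>|^2; |s> is basis index 0
d2 = np.array([np.abs(expm(-1j * th * L2)[0, :]) ** 2 for th in thetas])

def worst_calibration_sq(r):
    # max over m of I(s, r, n = s, m), with I the integral (118)
    vals = []
    for j, m in enumerate(m_vals):
        integrand = np.sin(thetas) * d2[:, j] * (r * np.cos(thetas) - m) ** 2
        vals.append((2 * s + 1) / 2 * integrand.sum() * dtheta)
    return max(vals)

rs = np.linspace(0.8, 1.8, 1001)
errs = [worst_calibration_sq(r) for r in rs]
k = int(np.argmin(errs))
print("r_min ~", rs[k], "  Delta_min ~", np.sqrt(errs[k]))
# for s = 1 this reproduces the radius r_min = 5/4 found in the proof above
```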

5. Conclusions and outlook

Uncertainty relations can be built for any collection of observables. In this paper we provided some methods, which work in a general setting, but chiefly looked at angular momentum as one of the paradigmatic cases of non-commutativity in quantum mechanics.

The basic mathematical methods are well-developed for the case of preparation uncertainty, so that even in a general case the optimal tradeoff curves can be generated efficiently. We resorted to numerics quite often, since it turns out that the salient optimization problems can rarely be solved analytically for general s. One of the features one might hope to settle analytically in the future is the asymptotic estimate ${c}_{2}(s)\propto {s}^{2/3},$ which comes out with a precision that suggests an exact result.

Much is left to be done for entropic uncertainty. Here we gave only some basic comparisons to the variance case. It would be interesting to see whether the entropic relations can be refined to the point that they can be used to derive sharp variance inequalities as Hirschman did in the phase space case [9].

For measurement uncertainty the general situation is not so favourable, perhaps due to the much more recent introduction of the subject. At this point we know of no efficient way to derive sharp bounds for generic pairs of observables. Nevertheless, we were able to treat the case of a joint measurement of all components in arbitrary directions, because in this case rotational symmetry is not broken and leads to considerable simplification. One of these simplifications is the observation that the two basic error criteria, namely the metric error and the calibration error, lead to the same results. This was already familiar from the phase space case. However, a further simplification one might have expected from this analogy definitely does not hold: there seems to be no quantitative link between preparation and measurement uncertainty for angular momentum. Further research will show whether useful general connections between the two faces of the uncertainty coin can be established.

The limit $s\to \infty $ can be understood as a mean field limit [23], in which the spin-s representation is considered as $2s$ copies of a spin-$1/2$ system in a symmetric state. We can also see it as a classical limit ${\hslash }\to 0$ [33], in the sense that the angular momentum in physical units, i.e., ${\hslash }s,$ is fixed, and hence the dimensionless half-integer representation parameter s has to diverge. This offers a way to treat not just the uncertainty aspects of this limit, but also the limit of the whole theory of angular momentum.

Acknowledgments

The authors thank Ashley Milsted, Ciara Morgan and David Reeb for critically reading our manuscript. We also acknowledge the financial support from the ERC grant DQSIM, RTG1991 funded by the DFG and the collaborative research project Q.com-Q funded by the BMBF.

We acknowledge support by Deutsche Forschungsgemeinschaft and the Open Access Publishing Fund of Leibniz Universität Hannover.
