Review Article

Quantum non-Markovianity: characterization, quantification and detection

Ángel Rivas, Susana F Huelga and Martin B Plenio

Published 21 August 2014 © 2014 IOP Publishing Ltd
Citation: Ángel Rivas et al 2014 Rep. Prog. Phys. 77 094001. DOI: 10.1088/0034-4885/77/9/094001


Abstract

We present a comprehensive and up-to-date review of the concept of quantum non-Markovianity, a central theme in the theory of open quantum systems. We introduce the concept of a quantum Markovian process as a generalization of the classical definition of Markovianity via the so-called divisibility property and relate this notion to the intuitive idea that links non-Markovianity with the persistence of memory effects. A detailed comparison with other definitions presented in the literature is provided. We then discuss several existing proposals to quantify the degree of non-Markovianity of quantum dynamics and to witness non-Markovian behavior, the latter providing sufficient conditions to detect deviations from strict Markovianity. Finally, we conclude by enumerating some timely open problems in the field and provide an outlook on possible research directions.


1. Introduction

In recent years, renewed attention has been paid to the characterization of quantum non-Markovian processes. Different approaches have been followed and several methods have been proposed, which in some cases yield inequivalent conclusions. Given the considerable amount of literature that has built up on the subject, we believe that the time is right to summarize most of the existing results in a review article that clarifies both the underlying structure and the interconnections between the different approaches.

On the one hand, we are fully aware of the risk we take in writing a review of a quite active research field, with new results continuously arising during the writing of this work. We do hope, on the other hand, that this possible shortcoming will be well balanced by the potential usefulness of such a review in order to, hopefully, clarify some misconceptions and generate further interest in the field.

Essentially, the subject of quantum non-Markovianity addresses two main questions, namely:

  • 1.  
    What is a quantum Markovian process and hence what are non-Markovian processes? (A characterization problem)
  • 2.  
    If a given process deviates from Markovianity, by how much does it deviate? (A quantification problem)

In this work, we examine both questions in detail. More specifically, concerning the characterization problem, we adopt the so-called divisibility property as a definition of quantum Markovian processes. As this is not the only approach to non-Markovianity, in section 3.4 we introduce and discuss other proposed definitions and compare them to the divisibility approach. In this regard, we would like to stress that it is neither our intention, nor is the field at a stage that allows us, to decide on a definitive definition of quantum Markovian processes. It is our hope, however, that we will convince the reader that the strong analogy between the definition of non-Markovianity adopted in this work and the classical definition of Markov processes, and the ensuing good mathematical properties that will allow us to address the characterization problem in simple terms, represent a fruitful approach to the topic. Concerning the quantification problem, we discuss most of the quantifiers present in the literature, and we classify them into measures and witnesses of non-Markovianity, depending on whether they are able to detect all non-Markovian dynamics or just a subset. Given the large body of literature that explores the application of these methods to different physical realizations, we have opted for keeping the presentation mainly on the abstract level and providing a detailed list of references. However, we have also included some specific examples to illustrate fundamental concepts.

This work is organized as follows. In section 2, we recall the classical concept of the Markovian process and some of its main properties. This is crucial in order to understand why the divisibility property provides a good definition of quantum Markovianity. In section 3, we introduce the concept of the quantum Markovian process by establishing a step-by-step parallelism with the classical definition, and we explain in detail why these quantum processes can be considered as memoryless. Section 4 gives a detailed review of different measures of non-Markovianity and section 5 describes different approaches to constructing witnesses able to detect non-Markovian dynamics. Finally, section 6 is devoted to conclusions and to outlining some of the problems that remain open in this field and possible future research lines.

2. Markovianity in classical stochastic processes

In order to give a definition of a Markov process in the quantum regime, it is essential to understand the concept of Markov process in the classical setting. Thus, this section is devoted to revising the definition of classical Markov processes and to sketching the most interesting properties for our purposes without getting too concerned with mathematical rigor. More detailed explanations on the foundations of stochastic processes can be found in [16].

2.1. Definition and properties

Consider a random variable X defined on a classical probability space $(\Omega,\Sigma,\mathbb{P})$ , where Ω is a given set (the sample space), Σ (the possible events) is a σ-algebra of subsets of Ω, containing Ω itself, and the probability $\mathbb{P}:\Sigma\rightarrow [0,1]$ is a σ-additive function with the property that $\mathbb{P}(\Omega)=1$ , (cf [16]). In order to avoid further problems when considering conditional probabilities (see, for example, the Borel–Kolmogorov paradox [7]), we shall restrict attention from now on to discrete random variables, i.e. random variables which take values on a finite set denoted by $\mathcal{X}$ .

A classical stochastic process is a family of random variables $\{X(t),t\in I\subset\mathbb{R}\}$ . Roughly speaking, this is nothing but a random variable X depending on a parameter t which usually represents time. The story starts with the following definition.

Definition 2.1 (Markov process).  A stochastic process {X(t), t ∈ I} is a Markov process if the probability that the random variable X takes the value $x_n$ at any arbitrary time $t_n\in I$, given that it took the value $x_{n-1}$ at some previous time $t_{n-1}<t_n$, is uniquely determined and is not affected by the values of X at times earlier than $t_{n-1}$. This is formulated in terms of conditional probabilities as follows:

$\mathbb{P}(x_n,t_n|x_{n-1},t_{n-1};\ldots;x_0,t_0)=\mathbb{P}(x_n,t_n|x_{n-1},t_{n-1}),\qquad (1)$

and informally it is summarized by the statement that 'a Markov process does not have memory of the history of past values of X'. This kind of stochastic process is named after the Russian mathematician Andrey Markov.

From the previous definition (1), it is possible to work out further properties of Markov processes. For instance, it follows immediately from (1) that for a Markov process

$\mathbb{E}[X(t_n)|x_{n-1},t_{n-1};\ldots;x_0,t_0]=\mathbb{E}[X(t_n)|x_{n-1},t_{n-1}],\qquad (2)$

where $\mathbb{E}(x|y)=\sum_{x\in\mathcal{X}}x\mathbb{P}(x|y)$ denotes the so-called conditional expectation.

In addition, Markov processes satisfy another remarkable property. If we take the joint probability for any three consecutive times t3 > t2 > t1 and apply the definition of conditional probability twice, we obtain

$\mathbb{P}(x_3,t_3;x_2,t_2;x_1,t_1)=\mathbb{P}(x_3,t_3|x_2,t_2;x_1,t_1)\,\mathbb{P}(x_2,t_2|x_1,t_1)\,\mathbb{P}(x_1,t_1).\qquad (3)$

Since the Markov condition (1) implies that $\mathbb{P}(x_3,t_3|x_2,t_2;$ $x_1,t_1)=\mathbb{P}(x_3,t_3|x_2,t_2)$ , by taking the sum over x2 and dividing both sides by $\mathbb{P}(x_1,t_1)$ , we arrive at

$\mathbb{P}(x_3,t_3|x_1,t_1)=\sum_{x_2\in\mathcal{X}}\mathbb{P}(x_3,t_3|x_2,t_2)\,\mathbb{P}(x_2,t_2|x_1,t_1),\qquad (4)$

which is called the Chapman–Kolmogorov equation. Moreover, the next theorem gives an answer to the converse statement.

Theorem 2.1. A family of conditional probabilities $\mathbb{P}(x_n,t_n|x_{n-1},t_{n-1})$ with $t_n>t_{n-1}$ satisfying (4) can always be seen as the conditional probabilities of a Markov process {X(t), t ∈ I}.

Proof. The proof is by construction. Take some probabilities $\mathbb{P}(x_n,t_n)$ and define the two-point joint probabilities by

Then, set

Equation (5)

and construct higher joint probabilities by using expressions analogous to equation (3). This construction is always possible as it is compatible with (4), which is the presupposed condition satisfied by $\mathbb{P}(x_n,t_n|x_{n-1},t_{n-1})$ . □

2.2. Transition matrices

In this section, we shall focus on the evolution of one-point probabilities $\mathbb{P}(x,t)$ during a stochastic process. Thus, consider a linear map T that connects the probability of a random variable X, at different times t0 and t1:

$\mathbb{P}(x_1,t_1)=\sum_{x_0\in\mathcal{X}}T(x_1,t_1|x_0,t_0)\,\mathbb{P}(x_0,t_0).\qquad (6)$

Since $\sum_{x_1\in\mathcal{X}}\mathbb{P}(x_1,t_1)=1$ and $\mathbb{P}(x_1,t_1)\geq0$ for every $\mathbb{P}(x_0,t_0)$ , we conclude that

$\sum_{x_1\in\mathcal{X}}T(x_1,t_1|x_0,t_0)=1,\qquad (7)$

$T(x_1,t_1|x_0,t_0)\geq0.\qquad (8)$

Matrices T fulfilling these properties are called stochastic matrices.
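
As an illustration of equations (6)–(8), the following sketch (using NumPy; the specific matrix and probability vector are arbitrary examples, not taken from the text) builds a stochastic matrix, checks the two defining properties and propagates a one-point probability.

```python
import numpy as np

# A toy 2x2 stochastic matrix T(x1, t1 | x0, t0); columns sum to one
# in the convention of equation (6), p(t1) = T p(t0).
T = np.array([[0.9, 0.3],
              [0.1, 0.7]])

assert np.allclose(T.sum(axis=0), 1.0)   # normalization, equation (7)
assert np.all(T >= 0)                    # positivity, equation (8)

p0 = np.array([0.5, 0.5])                # one-point probability at t0
p1 = T @ p0                              # one-point probability at t1, equation (6)
print(p1, p1.sum())                      # still a normalized probability vector
```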

Consider t = t0 to be the initial time of some (not necessarily Markovian) stochastic process {X(t), t ∈ I}. From the definition of conditional probability,

$\mathbb{P}(x_2,t_2)=\sum_{x_0\in\mathcal{X}}\mathbb{P}(x_2,t_2|x_0,t_0)\,\mathbb{P}(x_0,t_0),\qquad (9)$

and therefore $T(x_2,t_2|x_0,t_0)=\mathbb{P}(x_2,t_2|x_0,t_0)$ for every $t_2$. This relation is not valid in general for $t_1>t_0$: $T(x_2,t_2|x_1,t_1)\neq\mathbb{P}(x_2,t_2|x_1,t_1)$. The reason is that $\mathbb{P}(x_2,t_2|x_1,t_1)$ is not fully defined for a general stochastic process; we need to know the value of X at previous time instants, as $\mathbb{P}(x_2,t_2|x_1,t_1;x_0,t_0)$ could be different from $\mathbb{P}(x_2,t_2|x_1,t_1;x'_0,t_0)$ for $x_0\neq x'_0$. However, that is not the case for Markov processes, which satisfy the following result.

Theorem 2.2. Consider a Markov process {X(t), t ∈ I}. Given any two time instants t1 and t2, we have

$T(x_2,t_2|x_1,t_1)=\mathbb{P}(x_2,t_2|x_1,t_1).\qquad (10)$

Proof. It follows from the fact that we can write $\mathbb{P}(x_2,t_2;x_1,t_1)=\mathbb{P}(x_2,t_2|x_1,t_1)\mathbb{P}(x_1,t_1)$ , as $\mathbb{P}(x_2,t_2|x_1,t_1)$ is well defined for any t1 and t2. □

From this theorem and the Chapman–Kolmogorov equation (4), we obtain the following corollary.

Corollary 2.1. Consider a Markov process {X(t), t ∈ I}; then, for any t3 ⩾ t2 ⩾ t1 ⩾ t0, the transition matrices satisfy the properties

$\sum_{x_2\in\mathcal{X}}T(x_2,t_2|x_1,t_1)=1,\qquad (11)$

$T(x_2,t_2|x_1,t_1)\geq0,\qquad (12)$

$T(x_3,t_3|x_1,t_1)=\sum_{x_2\in\mathcal{X}}T(x_3,t_3|x_2,t_2)\,T(x_2,t_2|x_1,t_1).\qquad (13)$

In summary, for a Markov process the transition matrices are the two-point conditional probabilities and satisfy the composition law of equation (13). Essentially, equation (13) states that the evolution from t1 to t3 can be written as the composition of the evolution from t1 to some intermediate time t2, and from this t2 to the final time t3.

In the case of non-Markovian processes, T(x2, t2|x1, t1) might not be well defined for t1 ≠ t0. Nevertheless, if the matrix $\mathbb{P}(x_1,t_1|x_0,t_0)$ is invertible for every t1, then T(x2, t2|x1, t1) can be written in terms of well-defined quantities. Since the evolution from t1 to t2 (if it exists) has to be the composition of the backward evolution to the initial time t0 and the forward evolution from t0 to t2, we can write

Equation (14)

In this case the composition law (13) is satisfied and equation (11) also holds. However, condition (12) may not be fulfilled, which prevents any interpretation of T(x2, t2|x1, t1) as a conditional probability and therefore manifests the non-Markovian character of such a stochastic process.

Definition 2.2 (Divisible process).  A stochastic process {X(t), t ∈ I} for which the associated transition matrices satisfy equations (11), (12) and (13) is called divisible.

There are divisible processes which are non-Markovian. As an example (see [4, 6]), consider a stochastic process {X(t), t ∈ I} with two possible outcomes $\mathcal{X}=\{0,1\}$ and just three discrete times I = {t1, t2, t3} (t3 > t2 > t1). Define the joint probabilities as

Equation (15)

By computing the marginal probabilities, we obtain $\mathbb{P}(x_3,t_3;x_2,t_2)=\mathbb{P}(x_2,t_2;x_1,t_1)= \mathbb{P}(x_3,t_3;x_1,t_1)=1/4$ , and then

Equation (16)

Therefore the process is non-Markovian as, for example, $\mathbb{P}(1,t_3|0,t_2;0,t_1)=1$ and $\mathbb{P}(1,t_3|0,t_2;1,t_1)=0$ . However the transition matrices can be written as

and similarly T(x2, t2|x1, t1) = T(x3, t3|x1, t1) = 1/2. Hence, the conditions (11), (12) and (13) are clearly fulfilled. Other examples of non-Markovian divisible processes can be found in [9–13].
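
The example can be checked numerically. The joint distribution used below is one concrete choice consistent with the properties quoted above (all two-time marginals equal to 1/4 and the value at t3 fixed deterministically by the values at t1 and t2); it is offered as an illustration and is not claimed to be the exact table of equation (15).

```python
import numpy as np
from itertools import product

# P(x3, x2, x1): equals 1/4 whenever x3 = NOT(x1 XOR x2), zero otherwise.
# This is one joint distribution reproducing the marginals quoted in the text.
P = np.zeros((2, 2, 2))
for x1, x2 in product([0, 1], repeat=2):
    x3 = 1 - (x1 ^ x2)
    P[x3, x2, x1] = 0.25

# Two-time marginals: every entry equals 1/4.
P32 = P.sum(axis=2)          # P(x3, t3; x2, t2)
P21 = P.sum(axis=0)          # P(x2, t2; x1, t1)
P31 = P.sum(axis=1)          # P(x3, t3; x1, t1)
print(P32, P21, P31)

# Three-time conditional probabilities reveal memory (non-Markovianity):
print(P[1, 0, 0] / P21[0, 0])   # P(1,t3 | 0,t2; 0,t1) = 1
print(P[1, 0, 1] / P21[0, 1])   # P(1,t3 | 0,t2; 1,t1) = 0

# Yet every transition matrix is the memoryless T = 1/2 for all entries,
# so conditions (11)-(13) hold and the process is divisible.
T32 = P32 / P32.sum(axis=0)     # T(x3,t3 | x2,t2)
print(T32)                      # all entries 0.5
```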

Despite the existence of non-Markovian divisible processes, we can establish the following key theorem.

Theorem 2.3. A family of transition matrices T(x2, t2|x1, t1) with t2 > t1 which satisfies equations (11), (12) and (13) can always be seen as the transition matrices of some underlying Markov process {X(t), t ∈ I}.

Proof. Since the matrices T(x2, t2|x1, t1) satisfy (11) and (12), they can be understood as conditional probabilities $\mathbb{P}(x_2,t_2|x_1,t_1)=T(x_2,t_2|x_1,t_1)$ , and since (13) is also satisfied, the process fulfils equation (4). Then the final statement follows from theorem 2.1. □

Thus, we conclude that:

Corollary 2.2. At the level of one-point probabilities, divisible and Markovian processes are equivalent. The complete hierarchy of time-conditional probabilities has to be known to make any distinctions.

2.3. Contractive property

There is another feature of Markov processes that will be useful in the quantum case. Consider a vector v(x), where x denotes its different components. Then its L1-norm is defined as

$\|v(x)\|_1:=\sum_{x}|v(x)|.\qquad (17)$

This norm is particularly useful in hypothesis testing problems. Namely, consider a random variable X which is distributed according to either probability p1(x) or probability p2(x). We know that, with probability q, X is distributed according to p1(x), and, with probability 1 − q, X is distributed according to p2(x). Our task consists of sampling X just once with the aim of inferring the correct probability distribution of X [p1(x) or p2(x)]. Then the minimum (averaged) probability of giving the wrong answer turns out to be

$\mathbb{P}_{\rm min}({\rm fail})=\frac{1}{2}\left(1-\|w(x)\|_1\right),\qquad (18)$

where w(x) := qp1(x) − (1 − q)p2(x). The proof of this result follows the same steps as in the quantum case (see section 3.3.1). Thus the L1-norm of the vector w(x) gives the capability to distinguish correctly between p1(x) and p2(x) in the two-distribution discrimination problem.

In particular, in the unbiased case q = 1/2, we have

$\mathbb{P}_{\rm min}({\rm fail})=\frac{1}{2}\left[1-\frac{1}{2}\|p_1(x)-p_2(x)\|_1\right],$

where the quantity $\frac{1}{2}\|p_1(x)-p_2(x)\|_1$ is known as the Kolmogorov distance, L1-distance or variational distance between p1(x) and p2(x).
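
A short numerical sketch of the discrimination problem behind equation (18) (NumPy assumed; the distributions and the bias q are arbitrary examples): the minimum error probability found by brute force over all decision sets coincides with $\frac{1}{2}(1-\|w(x)\|_1)$.

```python
import numpy as np
from itertools import chain, combinations

p1 = np.array([0.7, 0.2, 0.1])   # arbitrary example distributions
p2 = np.array([0.2, 0.3, 0.5])
q = 0.3                          # prior probability of p1

w = q * p1 - (1 - q) * p2
p_fail_formula = 0.5 * (1 - np.abs(w).sum())   # equation (18)

# Brute force: decide "p1" when the sample falls in A, "p2" otherwise.
outcomes = range(len(p1))
subsets = chain.from_iterable(combinations(outcomes, r) for r in range(len(p1) + 1))
p_fail_best = min(q * (1 - p1[list(A)].sum()) + (1 - q) * p2[list(A)].sum()
                  for A in subsets)

print(p_fail_formula, p_fail_best)   # the two numbers agree
```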

In the identification of non-divisible processes, the L1-norm also plays an important role.

Theorem 2.4. Let T(x2, t2|x1, t1) be the transition matrices of some stochastic process. Then such a process is divisible if and only if the L1-norm does not increase when T(x2, t2|x1, t1) is applied to every vector v(x), $x\in\mathcal{X}$ , for all t1 and t2,

$\left\|\sum_{x_1\in\mathcal{X}}T(x_2,t_2|x_1,t_1)\,v(x_1)\right\|_1\leq\|v(x)\|_1.\qquad (19)$

Proof. The 'only if' part follows from the properties (11) and (12):

Equation (20)

For the 'if' part, as we mentioned earlier, if T(x2, t2|x1, t1) does exist, it always satisfies equations (11) and (13). Taking a vector to be a probability distribution v(x) = p(x) ⩾ 0 for all $x\in\mathcal{X}$ , because of equation (11) we have

Equation (21)

Since, by hypothesis, equation (19) holds for any vector, we obtain the following chain of inequalities

Equation (22)

Therefore,

for any probability p(x1), which is only possible if $\sum_{x_1\in\mathcal{X}}T(x_2,t_2|x_1,t_1)p(x_1)\geq0$ . Then equation (12) has to be satisfied. □

Because of this theorem and equation (13), $\mathbb{P}_{\rm min}({\rm fail})$ increases monotonically with time for a divisible process. In this regard, if the random variable X undergoes a Markovian process, the best chance to rightly distinguish between the two possible distributions p1(x) and p2(x) is to sample X at time instants as close as possible to the initial time t0. However, that is not the case if X is subject to a non-divisible (and then non-Markovian) process. Then, in order to decrease the error probability, it could be better to wait until some time, say t1, where ||w(x1, t1)||1 = ||qp1(x1, t1) − (1 − q)p2(x1, t1)||1 increases again (without exceeding its initial value). The fact that the error probability may decrease for some time t1 after the initial time t0 can be understood as a trait of the underlying memory in the process. That is, the system retains some information about the probability of X at t0, which arises at a posterior time in the evolution.

In summary, classical Markovian processes are defined via multi-time conditional probabilities, as in equation (1). However, if the experimenter only has access to one-point probabilities, Markovian processes become equivalent to divisible processes. The latter are more easily characterized, as they only depend on properties of transition matrices and the L1-norm.

3. Markovianity in quantum processes

After the succinct review of classical Markovian processes in the previous section, here we shall try to adapt those concepts to the quantum case. By the adjective 'quantum' we mean that the system undergoing evolution is a quantum system. Our aim is to find a simple definition of a quantum Markovian process by keeping a close analogy to its classical counterpart. Since this is not straightforward, we comment first on some points that make a definition of quantum Markovianity difficult to formulate. For the sake of simplicity, in the following we shall consider finite dimensional quantum systems unless otherwise stated.

3.1. Problems of a straightforward definition

Since the quantum theory is a statistical theory, it seems meaningful to ask for some analogue to classical stochastic processes and particularly Markov processes. However, the quantum theory is based on non-commutative algebras, and this makes its analysis considerably more involved. Indeed, consider the classical definition of a Markov process, equation (1); to formulate a similar condition in the quantum realm we demand a way to obtain $\mathbb{P}(x_n,t_n|x_{n-1},t_{n-1};\ldots;x_0,t_0)$ for quantum systems. The problem arises because we can sample a classical random variable without affecting its posterior statistics; however, in order to 'sample' a quantum system, we need to perform measurements, and these measurements disturb the state of the system and affect the subsequent outcomes. Thus, $\mathbb{P}(x_n,t_n|x_{n-1},t_{n-1};\ldots;x_0,t_0)$ does not only depend on the dynamics but also on the measurement process, and a definition of quantum Markovianity in terms of it, even if possible, does not seem very appropriate. Actually, in such a case the Markovian character of a quantum dynamical system would depend on which measurement scheme is chosen to obtain $\mathbb{P}(x_n,t_n|x_{n-1},t_{n-1};\ldots;x_0,t_0)$ . This is very inconvenient as the definition of Markovianity should be independent of what is required to verify it.

3.2. Definition in terms of one-point probabilities: divisibility

Given the aforementioned problems in constructing $\mathbb{P}(x_n,t_n|x_{n-1},t_{n-1};\ldots;x_0,t_0)$ in the quantum case, a different approach focuses on the study of one-time probabilities $\mathbb{P}(x,t)$. For these, the classical definition of Markovianity reduces to the concept of divisibility (see definition 2.2), and a very nice property is that divisibility may be defined in the quantum case without any explicit mention of measurement processes. To define quantum Markovianity in terms of divisibility may seem to lose generality; nevertheless, theorem 2.3 and corollary 2.2 assert that this loss is innocuous, as divisibility and Markovianity are equivalent properties for one-time probabilities. These probabilities are the only ones that can be constructed in the quantum case while avoiding the difficulties associated with measurement disturbance.

Let us consider a system in a quantum state given by some (non-degenerate) density matrix ρ, where the spectral decomposition yields

$\rho=\sum_{x}p(x)\,|\psi(x)\rangle\langle\psi(x)|.\qquad (23)$

Here the eigenvalues p(x) form a classical probability distribution, which may be interpreted as the probabilities for the system to be in the corresponding eigenstate |ψ(x)〉,

Equation (24)

Consider now some time evolution of the quantum system such that the spectral decomposition of the initial state is preserved; $\rho(t_0)=\sum_x p(x,t_0)\,|\psi(x)\rangle\langle\psi(x)|$ is mapped to

$\rho(t_1)=\sum_{x}p(x,t_1)\,|\psi(x)\rangle\langle\psi(x)|\in\mathcal{S},\qquad (25)$

where $\mathcal{S}$ denotes the set of quantum states with the same eigenvectors as ρ(t0). Since this process can be seen as a classical stochastic process on the variable x, which labels the eigenstates |ψ(x)〉, we consider it to be divisible if the evolution of p(x, t) satisfies the classical definition of divisibility (definition 2.2). In such a case, there are transition matrices T(x1, t1|x0, t0), such that

$p(x_1,t_1)=\sum_{x_0\in\mathcal{X}}T(x_1,t_1|x_0,t_0)\,p(x_0,t_0),\qquad (26)$

fulfilling equations (11), (12) and (13). This equation (26) can be written in terms of density matrices as

$\rho(t_1)=\mathcal{E}_{(t_1,t_0)}[\rho(t_0)].\qquad (27)$

Here, $\mathcal{E}_{(t_1,t_0)}$ is a dynamical map that preserves the spectral decomposition of ρ(t0) and satisfies

Equation (28)

Furthermore, because of equations (11), (12) and (13), $\mathcal{E}_{(t_2,t_1)}$ preserves positivity and the trace of any state in $\mathcal{S}$ , and obeys the composition law

$\mathcal{E}_{(t_3,t_1)}=\mathcal{E}_{(t_3,t_2)}\mathcal{E}_{(t_2,t_1)},\qquad t_3\geq t_2\geq t_1\geq t_0.\qquad (29)$

On the other hand, since the maps $\{\mathcal{E}_{(t_2,t_1)},t_2\geq t_1\geq t_0\}$ are supposed to describe some quantum evolution, they are linear (there is no experimental evidence against this fact [15–17]). Thus, their action on another set $\mathcal{S}'$ of quantum states with spectral projectors different from those of $\mathcal{S}$ is physically well defined provided that the positivity of the states of $\mathcal{S}'$ is preserved (i.e. any density matrix in $\mathcal{S}'$ is transformed into another valid density matrix). Hence, for consistency, we formulate the following general definition of a P-divisible process.

Definition 3.1 (P-divisible process).  We say that a quantum system subject to some time evolution characterized by the family of trace-preserving linear maps $\{\mathcal{E}_{(t_2,t_1)},t_2\geq t_1\geq t_0\}$ is P-divisible if, for every t2 and t1, $\mathcal{E}_{(t_2,t_1)}$ is a positive map (preserving the positivity of any quantum state) and fulfils equation (29).

The reason to use the terminology 'P-divisible' (which stands for positive-divisible) instead of 'divisible' comes from the difference between positive and completely positive maps, which is essential in quantum mechanics. More explicitly, a linear map ϒ acting on a matrix space $\mathcal{M}$ is a positive map if, for $A\in\mathcal{M}$,

$A\geq0\ \Longrightarrow\ \Upsilon(A)\geq0,\qquad (30)$

i.e. ϒ transforms positive semidefinite matrices into positive semidefinite matrices. In addition, ϒ is said to be completely positive if, for any matrix space $\mathcal{M}'$ such that $\dim(\mathcal{M}')\geq\dim(\mathcal{M})$ and any $B\in\mathcal{M}\otimes\mathcal{M}'$,

$B\geq0\ \Longrightarrow\ [\Upsilon\otimes1\hspace{-4pt}1_{\mathcal{M}'}](B)\geq0,\qquad (31)$

These concepts are properly extended to the infinite-dimensional case [18].

Completely positive maps are much easier to characterize than maps that are merely positive [19, 20]; they admit the so-called Kraus representation, $\Upsilon(\cdot)=\sum_j K_j(\cdot)K_j^\dagger$, and it can be shown that if equation (31) is fulfilled with $\dim(\mathcal{M}')=\dim(\mathcal{M})^2$, it is also true for any $\mathcal{M}'$ such that $\dim(\mathcal{M}')\geq\dim(\mathcal{M})$.
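
The difference between positive and completely positive maps can be probed numerically through the Choi matrix: a map ϒ on d × d matrices is completely positive if and only if $[\Upsilon\otimes1\hspace{-4pt}1](|\Phi\rangle\langle\Phi|)$ is positive semidefinite. The sketch below (an illustration, not taken from the text) shows that the transpose map on a qubit is positive yet fails this test, while a depolarizing map passes it.

```python
import numpy as np

d = 2

def choi_matrix(channel, d):
    """Choi matrix [channel ⊗ 1](|Phi><Phi|) for the maximally entangled |Phi>."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E_ij = np.zeros((d, d), dtype=complex)
            E_ij[i, j] = 1.0
            C += np.kron(channel(E_ij), E_ij) / d
    return C

transpose_map = lambda A: A.T                                         # positive, not completely positive
depolarizing = lambda A: 0.5 * A + 0.5 * np.trace(A) * np.eye(d) / d  # completely positive

for name, ch in [("transpose", transpose_map), ("depolarizing", depolarizing)]:
    eigs = np.linalg.eigvalsh(choi_matrix(ch, d))
    print(name, "CP" if eigs.min() > -1e-12 else "not CP", eigs.round(3))
```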

It is well-known that the requirement of positivity alone for a dynamical map presents difficulties. Concretely, in order to keep the positivity of density matrices in the presence of entanglement with another extra system, we must impose complete positivity instead of positivity [14, 21–26]. Thus, we are now able to give a definition of a quantum Markovian process.

Definition 3.2 (Markovian quantum process).  We shall say that a quantum system subject to a time evolution given by some family of trace-preserving linear maps $\{\mathcal{E}_{(t_2,t_1)},t_2\geq t_1\geq t_0\}$ is Markovian (or divisible [27]) if, for every t2 and t1, $\mathcal{E}_{(t_2,t_1)}$ is a completely positive map and fulfils the composition law equation (29).

For the sake of comparison, the following table shows the clear parallelism between classical transition matrices and quantum evolution families in a Markovian process.

  | Classical | Quantum
Normalization | $\sum_{x_2\in\mathcal{X}}T(x_2,t_2|x_1,t_1)=1$ | $\mathcal{E}_{(t_2,t_1)}$ trace-preserving
Positivity | $T(x_2,t_2|x_1,t_1)\geq0$ | $\mathcal{E}_{(t_2,t_1)}$ completely positive
Composition law | $T(x_3,t_3|x_1,t_1)=\sum_{x_2\in\mathcal{X}}T(x_3,t_3|x_2,t_2)\,T(x_2,t_2|x_1,t_1)$ | $\mathcal{E}_{(t_3,t_1)}=\mathcal{E}_{(t_3,t_2)}\mathcal{E}_{(t_2,t_1)}$

Before we move on, it is worth summarizing the argument leading to the definition of Markovian quantum process, as it is the central concept of this work. Namely, since a direct definition from the classical condition equation (1) is problematic because of quantum measurement disturbance, we focus on one-time probabilities. For those, classical Markovian processes and divisible processes are equivalent, thus we straightforwardly formulate the divisibility condition for quantum dynamics preserving the spectral decomposition of a certain set of states $\mathcal{S}$ . Then the Markovian (or divisibility) condition for any quantum evolution follows by linearity when taking into account the completely positive requirement in the quantum evolution. We have sketched this reasoning in the scheme presented in figure 1.


Figure 1. Scheme of the arguments employed in the definition of a quantum Markovian process (see main text). The equality in the second box is a consequence of corollary 2.2.


Finally, we review a fundamental result regarding differentiable quantum Markovian processes (i.e. processes such that the limit $\lim_{\epsilon\downarrow0}[\mathcal{E}_{(t+\epsilon,t)}-1\hspace{-4pt}1]/\epsilon:=\mathcal{L}_t$ is well-defined). In this case, there is a mathematical result which is quite useful to characterize Markovian dynamics.

Theorem 3.1 (Gorini–Kossakowski–Sudarshan–Lindblad).  An operator $\mathcal{L}_t$ is the generator of a quantum Markov (or divisible) process if and only if it can be written in the form

$\mathcal{L}_t\rho=-{\rm i}[H(t),\rho]+\sum_k\gamma_k(t)\left[V_k(t)\rho V_k^{\dagger}(t)-\frac{1}{2}\left\{V_k^{\dagger}(t)V_k(t),\rho\right\}\right],\qquad (32)$

where H(t) and Vk(t) are time-dependent operators, with H(t) self-adjoint, and γk(t) ⩾ 0 for every k and time t.

This theorem is a consequence of the pioneering work by Kossakowski [28, 29] and co-workers [30], and independently by Lindblad [31], who analyzed the case of time-homogeneous equations, i.e. time-independent generators $\mathcal{L}_t\equiv\mathcal{L}$ . For a complete proof, including possible time-dependent $\mathcal{L}_t$ , see [26, 32].
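
A minimal numerical illustration of theorem 3.1 (assuming NumPy and SciPy are available; the Hamiltonian, jump operator and rate below are arbitrary choices): build a time-independent generator of the form (32) for a qubit in its matrix (vectorized) representation, exponentiate it and verify that the evolved state remains a valid density matrix.

```python
import numpy as np
from scipy.linalg import expm

# Qubit operators: sigma_z and the lowering operator sigma_- = |0><1|.
sz = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])

def gksl_matrix(H, jump_ops, rates):
    """Matrix of L(rho) = -i[H,rho] + sum_k g_k (V_k rho V_k^+ - {V_k^+ V_k, rho}/2)
    acting on column-stacked density matrices, using vec(A rho B) = (B^T kron A) vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for g, V in zip(rates, jump_ops):
        VdV = V.conj().T @ V
        L += g * (np.kron(V.conj(), V)
                  - 0.5 * np.kron(I, VdV) - 0.5 * np.kron(VdV.T, I))
    return L

L = gksl_matrix(H=0.5 * sz, jump_ops=[sm], rates=[0.4])   # decay at a constant rate
rho0 = 0.5 * np.ones((2, 2))                              # the state |+><+|
for t in [0.0, 1.0, 5.0]:
    rho_t = (expm(L * t) @ rho0.flatten(order='F')).reshape(2, 2, order='F')
    print(t, np.linalg.eigvalsh(rho_t).round(4), np.trace(rho_t).real.round(4))
# Eigenvalues stay non-negative and the trace stays 1: the semigroup exp(L t)
# generated by a GKSL form with non-negative rates is completely positive and trace preserving.
```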

3.3. Where is the memoryless property in quantum Markovian processes?

As mentioned before, the motivation behind definition 3.2 for quantum Markovian processes has been to keep a formal analogy with the classical case. However, it is not immediately apparent that the memoryless property present in the classical case is also present in the quantum domain. There are at least two ways to visualize this property, which is hidden in definition 3.2. As discussed below, one is based on the contractive properties of the completely positive maps and the other resorts to a collisional model of system-environment interactions.

3.3.1. Contractive property of a quantum Markovian process.

Similarly to the classical case (see section 2.3), the memoryless properties of quantum Markovian processes become quite clear in hypothesis testing problems [33, 34]. In the quantum case, we consider a system, with associated Hilbert space $\mathcal{H}$ , whose state is represented by the density matrix ρ1 with probability q, and ρ2 with probability (1 − q). We wish to determine which density matrix describes the true state of the quantum system by performing a measurement. If we consider some general positive operator valued measure (POVM) {Πx} (cf [14]), where $x\in\mathcal{X}$ is the set of possible outcomes, we may split this set into two complementary subsets. If the outcome of the measurement is inside some $A\subset\mathcal{X}$ , then we say that the state is ρ1. Conversely, if the result of the measurement belongs to the complementary set Ac such that $A\cup A^c=\mathcal{X}$ , we say that the state is ρ2. Let us group the results of this measurement in another POVM given by the pair $\{T,\mathbb{I}-T\}$ , with T = ∑xAΠx.

Thus, when the true state is ρ1 (which happens with probability q) we erroneously identify the state as ρ2 with probability

$\Tr[\rho_1(\mathbb{I}-T)].\qquad (33)$

On the other hand, when the true state is ρ2 (which happens with probability 1 − q), we erroneously identify the state as ρ1 with probability

$\Tr[\rho_2 T].\qquad (34)$

The problem in one-shot two-state discrimination is to examine the trade-off between the two error probabilities Tr[ρ2T] and $\Tr[\rho_1 (\mathbb{I}-T)]$ . Thus, consider the best choice of T that minimizes the total averaged error probability

$\mathbb{P}({\rm fail})=q\Tr[\rho_1(\mathbb{I}-T)]+(1-q)\Tr[\rho_2 T]=q-\Tr[T\Delta],\qquad (35)$

where Δ = qρ1 − (1 − q)ρ2 is a Hermitian operator, with trace Tr Δ = 2q − 1 vanishing in the unbiased case q = 1/2. Δ is sometimes called the Helstrom matrix [35]. We have the following result.

Theorem 3.2. With the best choice of T, the minimum total error probability in the one-shot two-state discrimination problem becomes

$\mathbb{P}_{\rm min}({\rm fail})=\frac{1}{2}\left(1-\|\Delta\|_1\right),\qquad (36)$

where $\|\Delta\|_1=\Tr\sqrt{\Delta^\dagger\Delta}$ is the trace norm of the Helstrom matrix Δ.

Thus, note that when q = 0 or q = 1, we immediately obtain zero probability of wrongly identifying the true state.

Proof. The proof follows the same steps as for the unbiased q = 1/2 case (see [14, 36]). The spectral decomposition allows us to write $\Delta=\Delta^{+}-\Delta^{-}$, with positive operators $\Delta^\pm=\pm\sum_k\lambda_k^\pm{| {\psi_k^{\pm}} \rangle}{\langle {\psi_k^{\pm}} |}$, where $\lambda_k^+$ are the positive eigenvalues of Δ and $\lambda_k^-$ are the negative ones. Then it is clear that for $0\leq T\leq \mathbb{I}$

Equation (37)

so that

Equation (38)

On the other hand, because $|\psi_j^\pm\rangle\langle\psi_j^\pm|$ are orthogonal projections (in other words as ||Δ||1 = ∑j|λj|), the trace norm of Δ is

Equation (39)

Since

Equation (40)

we have

Equation (41)

Using this relation in (38), we straightforwardly obtain the result (36). □

Thus the trace norm of Δ = qρ1 − (1 − q)ρ2 gives us the capability of distinguishing correctly between ρ1 and ρ2 in the one-shot two-state discrimination problem.
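
Theorem 3.2 can be verified numerically for a pair of qubit states (the states and the bias q below are arbitrary illustrations): the optimal T is the projector onto the positive part of Δ, and the resulting error probability equals $\frac{1}{2}(1-\|\Delta\|_1)$.

```python
import numpy as np

# Two arbitrary qubit states and a bias q.
rho1 = np.array([[0.8, 0.2], [0.2, 0.2]])
rho2 = np.array([[0.4, -0.1], [-0.1, 0.6]])
q = 0.35

Delta = q * rho1 - (1 - q) * rho2               # Helstrom matrix
evals, evecs = np.linalg.eigh(Delta)

trace_norm = np.abs(evals).sum()
p_fail_formula = 0.5 * (1 - trace_norm)          # equation (36)

# Optimal POVM element T: projector onto the positive part of Delta,
# which minimizes P(fail) = q - Tr[T Delta] (cf. equation (35)).
V_plus = evecs[:, evals > 0]
T = V_plus @ V_plus.conj().T
p_fail_direct = q - np.trace(T @ Delta).real

print(p_fail_formula, p_fail_direct)             # the two values coincide
```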

On the other hand, the following theorem connects trace-preserving and positive maps with the trace norm. It was first proven by Kossakowski in [28, 29], while Ruskai also analyzed the necessary condition in [37].

Theorem 3.3. A trace-preserving linear map $\mathcal{E}$ is positive if and only if for any Hermitian operator Δ acting on $\mathcal{H}$ ,

$\|\mathcal{E}(\Delta)\|_1\leq\|\Delta\|_1.\qquad (42)$

Proof. Assume that $\mathcal{E}$ is positive and trace preserving; then for every positive semidefinite Δ ⩾ 0 the trace norm is also preserved: $\|\mathcal{E}(\Delta)\|_1=\|\Delta\|_1$. Consider now Δ not necessarily positive semidefinite; then, by using the same decomposition as in the proof of theorem 3.2, $\Delta=\Delta^{+}-\Delta^{-}$, we have

Equation (43)

where the penultimate equality follows from the positivity of Δ±. Therefore, $\mathcal{E}$ fulfils equation (42).

Conversely, assume that $\mathcal{E}$ satisfies equation (42) and preserves the trace; then for a positive semidefinite Δ we have the next chain of inequalities:

hence $\|\mathcal{E}(\Delta)\|_1={\rm Tr}[\mathcal{E}(\Delta)]$ . Since ||Δ||1 = Tr(Δ) if and only if Δ ⩾ 0, we obtain that $\mathcal{E}(\Delta)\geq0$ . □

There is a clear parallelism between this theorem and theorem 2.4 for classical stochastic processes. As a result, quantum Markov processes are also characterized in the following way.

Theorem 3.4. A quantum evolution $\{\mathcal{E}_{(t_2,t_1)},t_2\geq t_1\geq t_0\}$ is Markovian if and only if for all t2 and t1, t2 ⩾ t1,

$\left\|[\mathcal{E}_{(t_2,t_1)}\otimes1\hspace{-4pt}1](\tilde{\Delta})\right\|_1\leq\|\tilde{\Delta}\|_1,\qquad (44)$

for any Hermitian operator $\tilde{\Delta}$ acting on $\mathcal{H}\otimes\mathcal{H}$ .

Proof. Since for a quantum Markovian process $\mathcal{E}_{(t_2,t_1)}$ is completely positive for any t2 ⩾ t1, the map $\mathcal{E}_{(t_2,t_1)}\otimes1\hspace{-4pt}1$ is positive, and the result follows from theorem 3.3. □

Therefore, similarly to the classical case, a quantum Markovian process increases monotonically the averaged probability $\mathbb{P}_{\rm min}({\rm fail})$ , equation (36), to give the wrong answer in the one-shot two-state discrimination problem. More concretely, consider a quantum system 'S' that evolves from t0 to the current time instant t1, through some dynamical map $\mathcal{E}_{(t,t_0)}$ . This system S was prepared at t0 in the state ρ1 with probability q and ρ2 with probability (1 − q), and we aim at guessing which state was prepared by performing a measurement on S at the present time t1. If the dynamics $\mathcal{E}_{(t,t_0)}$ is Markovian, the best we can do is to measure at the present time t1; however for non-Markovian processes it may be better to wait for some posterior time t2 > t1 where the trace norm of the Helstrom matrix $\Delta(t)=\mathcal{E}_{(t,t_0)}(\Delta)=q\rho_1(t)-(1-q)\rho_2(t)$ increases with respect to its value at time t1 (see illustration in figure 2).


Figure 2. Illustration of the quantum two-state discrimination problem. Under Markovian dynamics (blue line), the trace norm of the Helstrom matrix Δ(t) = qρ1(t) − (1 − q)ρ2(t) decreases monotonically from its initial value at t0. On the other hand, for a non-Markovian dynamics (red line) there exist revivals at some time instants, say t2, where the trace norm of Δ(t) is larger than the previously attained values, for example at t1. Thus, the memoryless property of a Markovian process implies that information about the initial state is progressively lost as time goes by. That is not the case for non-Markovian dynamics.


Moreover, the same result applies if we make measurements including a static (and arbitrary dimensional) ancillary system 'A', in such a way that the evolution of the enlarged system 'S + A' is given by $\mathcal{E}_{(t_2,t_1)}\otimes1\hspace{-4pt}1$ . That is not the case for a P-divisible dynamics where just the positivity of $\mathcal{E}_{(t_2,t_1)}$ is required instead of the complete positivity.

The fact that for a quantum non-Markovian process $\mathbb{P}_{\rm min}({\rm fail})$ can decrease for some time period may again be interpreted as a signature of the underlying memory in the process. The system seems to retain information about its initial state, which arises at some posterior time t2.
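
The monotonic loss of distinguishability depicted in figure 2 can be reproduced with a simple Markovian example. The sketch below uses an arbitrary qubit dephasing semigroup (not a model discussed in the text) and shows that the trace norm of the Helstrom matrix never increases.

```python
import numpy as np

def dephase(rho, t, gamma=0.5):
    """Qubit dephasing semigroup: off-diagonal elements decay as exp(-gamma t)."""
    out = rho.copy().astype(complex)
    out[0, 1] *= np.exp(-gamma * t)
    out[1, 0] *= np.exp(-gamma * t)
    return out

rho1 = np.array([[0.9, 0.3], [0.3, 0.1]])    # arbitrary initial states
rho2 = np.array([[0.5, -0.4], [-0.4, 0.5]])
q = 0.5

for t in [0.0, 0.5, 1.0, 2.0, 4.0]:
    Delta_t = q * dephase(rho1, t) - (1 - q) * dephase(rho2, t)
    print(t, np.abs(np.linalg.eigvalsh(Delta_t)).sum().round(4))
# The trace norm of the Helstrom matrix never increases: information about the
# initial preparation is progressively lost, as expected for a divisible evolution.
```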

3.3.2. Memoryless environment.

A different way to visualize the memoryless properties characteristic of a quantum Markovian process is in the context of system-environment interactions. Since for a closed quantum system the evolution is given by some two-parameter family of unitary operators U(t1, t0), which fulfil U(t2, t0) = U(t2, t1)U(t1, t0) (see for instance [38]), the evolution of a closed quantum system is trivially Markovian. However, the situation changes regarding the time evolution of open quantum systems. Despite the fact that the most studied models to describe such a dynamics result in Markovian master equations of the form of equation (32) [25, 39–41], it is well-known that the exact dynamics of an open quantum system is essentially non-Markovian [22, 24, 26, 42].

To illustrate in what sense Markovian dynamics is memoryless in this context, consider the collisional model in the formulation proposed in [43–45], which is depicted in figure 3. In this model, the interaction between system and environment is made up of a sequence of individual collisions at times t1, t2, ..., tn. Each collision produces a change in the state of the system ρS given by

$\rho_{\rm S}(t_{n+1})=\Tr_{\rm E}\left[U(t_{n+1},t_n)\left(\rho_{\rm S}(t_n)\otimes\rho_{\rm E}\right)U^{\dagger}(t_{n+1},t_n)\right]=\mathcal{E}_{(t_{n+1},t_n)}[\rho_{\rm S}(t_n)],\qquad (45)$

where ρE is the state of the environment assumed to be the same for every collision, and U(tn+1, tn) is a unitary operator describing the system-environment interaction. Moreover, $\mathcal{E}_{(t_{n+1},t_n)}(\cdot)=\sum_{ij} K_{ij}(\cdot)K_{ij}^\dagger$ is a completely positive map whose Kraus operators are given by $K_{ij}=\sqrt{p_j^{\rm E}}\langle\phi^i_{\rm E}|U(t_{n+1},t_n)|\phi^j_{\rm E}\rangle$ , for $\rho_{\rm E}=\sum_j p_j^{\rm E}|\phi^j_{\rm E}\rangle\langle\phi^j_{\rm E}|$ . The successive concatenations of these collisions lead to a quantum Markovian process. Indeed, if we write

Equation (46)

as

Equation (47)

we conclude that

Equation (48)

and since $\mathcal{E}$ are completely positive maps, the process is Markovian. In addition, if the limit maxn |tn+1 − tn| → 0 does exist, it is possible to obtain equations with the form of (32) for these models [44, 45].


Figure 3. Schematic action of a memoryless environment as described by a collisional model. At every time tn, the system interacts with the environmental state ρE via some unitary operation U. At the following time step tn+1, the system again finds the environment in the same state ρE, forgetting any correlation caused by the previous interaction at tn. Provided the limit maxn|tn+1 − tn| → 0 is well-defined, this process can be seen as a discrete version of quantum Markovian dynamics.


Notably, any Markovian dynamics can be seen as a collisional model like this. This is a consequence of the following theorem [14, 25, 26].

Theorem 3.5 (Stinespring [46]). A completely positive dynamics $\mathcal{E}(\rho_{\rm S})$ can be seen as the reduced dynamics of some unitary evolution acting on an extended state of the form $\rho_{\rm S}\otimes\rho_{\rm E}$, where $\rho_{\rm E}$ is the same, independent of $\rho_{\rm S}$.

Thus, since for Markovian evolutions $\mathcal{E}_{(t_2,t_1)}$ exists for all t2 ⩾ t1 and is completely positive, we may write it as

$\mathcal{E}_{(t_2,t_1)}(\rho_{\rm S})=\Tr_{\rm E}\left[U(t_2,t_1)\left(\rho_{\rm S}\otimes\rho_{\rm E}\right)U^{\dagger}(t_2,t_1)\right],\qquad (49)$

where U(t2, t1) may depend on t2 and t1, but ρE can be taken to be independent of time. Hence, Markovian evolutions may be thought to be made up of a sequence of memoryless collisions, where the environmental state is the same and the total state of system and environment is uncorrelated in every collision as if there were no previous interaction (figure 3). Note that this does not mean that we must impose the system and environment to be uncorrelated at any time to get Markovian evolutions [26]. Actually the total state of the system and environment may be highly correlated, even for dynamics leading to Markovian master equations [47]. Rather, the conclusion is that the obtained evolution may also be thought of as the result of memoryless system–environment infinitesimal collisions.

Interestingly, this kind of collisional model can be adapted to simulate non-Markovian dynamics by breaking the condition of uncorrelated collisions [48–52].
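
A minimal collision-model simulation in the spirit of figure 3 (the partial-swap interaction and all parameters are illustrative choices, not taken from the references): each collision couples the system to a fresh environment qubit in the same fixed state, so the reduced dynamics is a concatenation of one and the same completely positive map.

```python
import numpy as np

# Partial-swap collision unitary on system (qubit) + environment (qubit).
theta = 0.3
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
U = np.cos(theta) * np.eye(4) + 1j * np.sin(theta) * SWAP

rho_E = np.array([[1.0, 0.0], [0.0, 0.0]])        # fresh environment state |0><0|
rho_S = np.array([[0.2, 0.4], [0.4, 0.8]])        # arbitrary system state

def collide(rho_S):
    """One collision: couple to a fresh copy of rho_E, evolve, trace out E."""
    joint = U @ np.kron(rho_S, rho_E) @ U.conj().T
    # partial trace over the second (environment) qubit
    return joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

for n in range(6):
    print(n, np.diag(rho_S).real.round(4), abs(rho_S[0, 1]).round(4))
    rho_S = collide(rho_S)
# Populations relax toward those of rho_E and coherences decay monotonically:
# the environment carries no memory because each collision uses a fresh copy of rho_E.
```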

3.4. Comparison with other definitions of quantum Markovianity

Our approach, which is based on the divisibility property, is not the only approach to non-Markovianity and, indeed, alternative approaches are being pursued in the literature. Before moving on, it is therefore worthwhile to dedicate a brief section to presenting these alternative definitions of quantum Markovianity, commenting on these lines of research and referring the reader to the most relevant literature.

  • Semigroup definition. Historically, the absence of memory effects in quantum dynamics was commonly associated with the formulation of differential dynamical equations for ρ(t) with time-independent coefficients. In contrast, differential equations with time-dependent coefficients or integro-differential equations were linked to non-Markovian dynamics (see for instance [24, 53–65], and references therein). From this point of view, Markovian evolutions would be given only by quantum dynamical semigroups [25], i.e. families of trace preserving and completely positive maps, $\mathcal{E}_{\tau}$, fulfilling the condition
    $\mathcal{E}_{\tau_1}\mathcal{E}_{\tau_2}=\mathcal{E}_{\tau_1+\tau_2},\qquad \tau_1,\tau_2\geq0.\qquad (50)$
    It should be noted however, that this definition does not coincide with the definition adopted in this review and, in our view, suffers from certain drawbacks. The semigroup law equation (50) is just a particular case of the two-parameter composition law equation (29), which encompasses the case of time-inhomogeneous Markovian processes. In other words, this approach does not distinguish between non-Markovian and Markovian equations of motion with time-dependent coefficients. Moreover, this problem persists in the classical limit.
  • Algebraic definition. In the 1980s, a rigorous definition of quantum stochastic process was introduced by using the algebraic formulation of quantum mechanics [66, 67]. It is difficult to summarize those results in a few words, but we will try to sketch the main idea for those readers who are familiar with C*-algebras. In this context, a quantum stochastic process on a C*-algebra $\mathcal{A}$ with values in a C*-algebra $\mathcal{B}$ is defined by a family {jt}t⩾0 of *-homomorphisms $j_t:\mathcal{B}\rightarrow\mathcal{A}$. To define a Markov property, two ingredients are necessary. The first one is the following sub-algebra of $\mathcal{A}$,
    $\mathcal{A}_{t]}=\bigvee_{s\in[0,t]}j_s(\mathcal{B}),\qquad (51)$
    which is called a past filtration or a filtration [68]. Here the symbol ∨S denotes the C*-algebra generated by the subset S of $\mathcal{A}$. The second one is the introduction of the concept of conditional expectation on $\mathcal{A}$ [68–71], which is a generalization of the usual conditional expectation (see equation (2)) to non-commutative algebras. Mathematically, a conditional expectation of $\mathcal{A}$ on a sub-algebra $\mathcal{A}_0\subset\mathcal{A}$ is a linear map
    $\mathbb{E}[\,\cdot\,|\mathcal{A}_0]:\mathcal{A}\rightarrow\mathcal{A}_0\qquad (52)$
    satisfying the properties:
    • (i)  
      For $a\in\mathcal{A}$ , $\mathbb{E}[a|\mathcal{A}_0]\geq0$ whenever a ⩾ 0.
    • (ii)  
      $\mathbb{E}[\mathbb{I}|\mathcal{A}_0]=\mathbb{I}$ .
    • (iii)  
      For $a_0\in \mathcal{A}_0$ and $a\in \mathcal{A}$ , $\mathbb{E}[a_0a|\mathcal{A}_0]=a_0\mathbb{E}[a|\mathcal{A}_0]$ .
    • (iv)  
      For $a\in\mathcal{A}$ , $\mathbb{E}[a^\ast|\mathcal{A}_0]=(\mathbb{E}[a|\mathcal{A}_0])^\ast$ .
    Thus, the stochastic process {jt}t⩾0 is said to be Markovian if for all s, t ⩾ 0 and all $X\in\mathcal{A}_{0]}$ a condition analogous to equation (2) is fulfilled,
    Equation (53)
    We will not go into further details here. What is important for our purposes is that, on the one hand, Accardi, Frigerio and Lewis proved in their seminal paper [66] that this definition of a Markovian process implies our definition 3.2 (rewritten in the Heisenberg picture). On the other hand, the opposite problem, namely to prove that any Markovian evolution according to definition 3.2 is also Markovian according to (53), requires a technically complicated step known as the dilation problem (see [72] and references therein). That is quite closely related to what was explained informally in section 3.3.2, but we do not enter into detail here. Fortunately, under well-chosen and reasonable conditions (the boundedness of operators, fulfilment of Lipschitz conditions, etc.) [72–76], it is possible to prove that definition 3.2 also implies (53). Therefore, within the scope of this paper, i.e. finite dimensional systems, we can consider the algebraic definition of Markovianity to be essentially equivalent to the one given here in terms of the divisibility condition.
  • BLP definition. Recently, Breuer, Laine and Piilo (BLP) proposed a definition of non-Markovian dynamics in terms of the behavior of the trace distance [33, 77, 78]. Concretely, they state that a quantum evolution, given by some dynamical map $\mathcal{E}_{(t,t_0)}$ , is Markovian if the trace distance between any two initial states ρ1 and ρ2 decreases monotonically with time. This definition is a particular case of definition 3.2. As was explained in section 3.3.1, for any Markovian dynamics $\mathcal{E}_{(t,t_0)}$ and Hermitian operator $\tilde{\Delta}$ acting on $\mathcal{H}\otimes\mathcal{H}$ , $\|[\mathcal{E}_{(t,t_0)}\otimes1\hspace{-4pt}1](\tilde{\Delta})\|_1$ monotonically decreases with time, and so does $\|\mathcal{E}_{(t,t_0)}(\Delta)\|_1$ for any Hermitian Δ acting on $\mathcal{H}$ . Concretely, for $\Delta=\frac{1}{2}(\rho_1-\rho_2)$ , which corresponds to the unbiased case in the two-state discrimination problem q = 1/2, the property
    $\left\|\mathcal{E}_{(t_2,t_0)}(\Delta)\right\|_1\leq\left\|\mathcal{E}_{(t_1,t_0)}(\Delta)\right\|_1,\qquad t_2\geq t_1\geq t_0,\qquad (54)$
    reduces to the BLP definition; that is, for all ρ1 and ρ2,
    $\left\|\mathcal{E}_{(t_2,t_0)}(\rho_1-\rho_2)\right\|_1\leq\left\|\mathcal{E}_{(t_1,t_0)}(\rho_1-\rho_2)\right\|_1,\qquad t_2\geq t_1\geq t_0.\qquad (55)$
    However, the reverse implication fails to hold, i.e. not every dynamics fulfilling equation (55) satisfies theorem 3.4 (e.g. [34, 79, 80]). Thus, we believe that it is more appropriate to consider the BLP definition as a particular case that arises in the study of memory properties in unbiased two-state discrimination problems. Note that the apparent lack of memory in an unbiased case does not imply a general memoryless property; it only manifests in a general biased case (and taking into account possible ancillary systems). Nevertheless, from equation (55), it is possible to construct a very useful witness of non-Markovianity, as we will see in section 5.1.1.

Remarkably, the previous different definitions of quantum Markovianity satisfy a hierarchical relation with our definition 3.2 based on the divisibility condition. That is sketched in figure 4.


Figure 4. Relation between divisibility, semigroup, algebraic and BLP definitions of quantum Markovianity. The divisibility definition is essentially equivalent to the algebraic one (see main text). In addition, any dynamics that is Markovian according to the semigroup definition is also Markovian according to the divisibility definition, and hence Markovian according to the BLP definition. However, the converse implication does not hold.

  • Markovianity in microscopic derivations. When deriving evolution equations for open quantum systems from microscopic models, the adjective 'Markovian' is widely used to designate master equations obtained under the so-called 'Born–Markov' approximation. More concretely, if ρ(t) is the state of the open quantum system, the Born approximation truncates the perturbative expansion in the interaction Hamiltonian, $V=\sum_i A_i\otimes B_i$, at the first non-trivial order. This leads to some differential equation of the form [6, 24, 26, 40, 42, 81–83]:
    Equation (56)
    where HS stands for the free Hamiltonian of the open system, and
    Equation (57)
    Here, $C_{ij}(s)={\rm Tr}\left(B_j\,{\rm e}^{-{\rm i}H_{\rm B}s}B_i\,{\rm e}^{{\rm i}H_{\rm B}s}\rho_{\rm B}\right)$ are the correlation functions of the bath, which is in the state $\rho_{\rm B}$ and has the free Hamiltonian $H_{\rm B}$. Equation (56) is sometimes called the Bloch–Redfield equation (e.g. [83]). Now, if the correlation functions of the bath $C_{ij}(s)$ are narrow in comparison to the typical time scale of ρ(t) due to V, the upper limit in the integral of $\Omega_{ij}(t)$ can be safely extended to infinity. This constitutes what is sometimes called the 'Markov' approximation. Two comments are pertinent regarding the connection of these dynamical equations with Markovian processes as defined in this work. Firstly, despite the fact that the 'Born–Markov' approximation leads to master equations with time-independent coefficients, they do not always define a valid quantum dynamical semigroup [84, 85]. This is because they break complete positivity. Thus, these models should not be referred to as 'Markovian' in the strict sense, as a Markovian process must preserve the positivity of any state, or of any probability distribution in the classical limit. Secondly, if the 'Born–Markov' approximation is combined with the secular approximation (i.e. neglecting fast oscillating terms in the evolution equation), a valid quantum dynamical semigroup is obtained [24–26, 39, 41], and then the dynamics can certainly be called Markovian. However, the fact that the 'Born–Markov–secular' approximation generates Markovian dynamics should not be understood to mean that it is the only framework that can be used to obtain Markovian dynamics.

Further to this short summary of definitions for quantum non-Markovianity different from definition 3.2, the reader may also find proposals based on the behavior of multi-time correlation functions [86–88], initial-time-dependent generators [89–91], or properties of the asymptotic state [65]. See also [92–95] for a definition of non-Markovianity in the context of stochastic Schrödinger equations.

4. Measures of quantum non-Markovianity

After introducing the concept of quantum non-Markovianity in previous sections, we may ask about its quantification in terms of suitable measures and its detection in actual experiments. As we shall see, recently there have been several developments towards these goals, and we shall present them separately. Thus, the present section is devoted to the quantification problem whereas the detection of non-Markovian dynamics by witnesses is left to section 5.

In order to quantify non-Markovianity, the so-called measures of non-Markovianity are introduced. Basically, a measure of non-Markovianity is a function that assigns a number (positive or zero) to each dynamics, in such a way that the zero value is obtained if and only if the dynamics is Markovian. We will also use the name degree of non-Markovianity for a normalized measure of non-Markovianity, with values between 0 and 1, although other normalizations may eventually be taken.

4.1. Geometric measures

Consider a dynamical map $\mathcal{E}_{(t,t_0)}$ describing the evolution from some initial time t0. A first attempt to formulate a measure of non-Markovianity may be a distance-based approach. Here the measure of non-Markovianity is expressed as a distance between $\mathcal{E}_{(t,t_0)}$ and its closest Markovian dynamics (see figure 5). Specifically, let $\mathfrak{M}$ denote the set of all Markovian dynamics, and $\mathcal{D}(\mathcal{E}_1,\mathcal{E}_2)\in[0,1]$ be some (normalized) distance measure in the space of dynamical maps. We define the geometric non-Markovianity at time t as

$\mathcal{N}^{\rm geo}_t[\mathcal{E}_{(t,t_0)}]:=\inf_{\mathcal{E}^{\rm M}_{(t,t_0)}\in\mathfrak{M}}\mathcal{D}\left(\mathcal{E}_{(t,t_0)},\mathcal{E}^{\rm M}_{(t,t_0)}\right),\qquad (58)$

which is zero if and only if $\mathcal{E}_{(t,t_0)}$ belongs to the set of Markovian dynamics $\mathfrak{M}$ .


Figure 5. Illustration of the geometric measure of non-Markovianity. At each t, $\mathcal{N}^{\rm geo}_t[\mathcal{E}_{(t,t_0)}]$ measures the distance between the map $\mathcal{E}_{(t,t_0)}$ and the non-convex set of Markovian maps $\mathfrak{M}$ . For a time interval t ∈ I, $\mathscr{D}_{\rm NM (g)}^I$ in equation (59) is the maximum of every value of $\mathcal{N}^{\rm geo}_t[\mathcal{E}_{(t,t_0)}]$ for t ∈ I.


The geometric measure of non-Markovianity in some time interval I may be defined as the maximum value of the geometric non-Markovianity for t ∈ I,

$\mathscr{D}_{\rm NM (g)}^{I}:=\max_{t\in I}\mathcal{N}^{\rm geo}_t[\mathcal{E}_{(t,t_0)}].\qquad (59)$

This quantity lies between 0 and 1 and is positive if and only if the process is non-Markovian, therefore it is a degree of non-Markovianity.

Despite the conceptually clear meaning of $\mathscr{D}_{\rm NM (g)}^{I}$ , it suffers from an important drawback, as it is very hard to compute in practice because of the involved optimization process. In fact, note that the set of Markovian maps $\mathfrak{M}$ is non-convex [96], which makes the problem computationally intractable as the dimension of the system grows.

This approach was originally proposed by Wolf and collaborators [27, 96] to quantify the non-Markovianity of a quantum channel. A quantum channel is a completely positive and trace preserving map $\mathcal{R}$ acting on the set of quantum states. Then, $\mathcal{R}$ is said to be Markovian if it is the 'snapshot of some Markovian dynamics', i.e. there exists some Markovian dynamics specified by the family of maps $\{\mathcal{E}_{(t,t_0)}, t\geq t_0\}$ such that $\mathcal{R}=\mathcal{E}_{(t_1,t_0)}$ for some t1 ⩾ t0. Those authors put forward the aforementioned practical problems of the geometric measure of non-Markovianity and introduced an alternative measure (see also [97]). Later on, the problem to decide whether a quantum channel is Markovian was shown to be very hard in the complexity theory sense [98, 99]; however it has been analyzed for small-sized systems in [77] and [100]. For a review about quantum channels with memory, see [101].

4.2. Optimization of the Helstrom matrix norm

Another approach to quantify non-Markovianity is based on the result of theorem 3.4. Recall that the trace norm of a Helstrom matrix Δ = qρ1 − (1 − q)ρ2 is a measure of the capability to distinguish between the states ρ1 and ρ2 given the outcome of some POVM; see theorem 3.2. Thus, if the dynamics is such that for some t and $\epsilon>0$, $\|\Delta(t)\|_1<\|\Delta(t+\epsilon)\|_1$, the probability of distinguishing whether the system was in state ρ1 or ρ2 at time t0 is higher at $t+\epsilon$ than it was at time t. As commented in section 3.3.1, this phenomenon denotes the existence of memory effects in the dynamics, as an increase of information at time $t+\epsilon$ with respect to t suggests that the system is 'remembering' its original state at $t+\epsilon$. In fact, the intuitive understanding of the word 'memory' demands that a memoryless process cannot store information, so that the information about the initial state always decreases with time.

This revival of information at $t+\epsilon$ may be understood as a positive flow of information from the environment to the system. Thus, for purely Markovian dynamics the flow of information always goes from the system to the environment. However, as pointed out in [102, 103], this interpretation in terms of information flowing between the system and the environment may be problematic if taken strictly, because it is possible to obtain quantum non-Markovian dynamics with the form

$\rho(t)=\sum_i p_i\,U_i(t,t_0)\,\rho(t_0)\,U_i^{\dagger}(t,t_0).\qquad (60)$

This type of evolution can be generated simply by applying randomly the unitary evolutions Ui(t, t0) in accordance with the probabilities pi. It is a fact that these probabilities can be generated independently of the dynamics of ρ by some random (or pseudo-random) number generator.

On the other hand, since we cannot discard the presence of a decoupled and inert, arbitrary-dimensional, ancillary space 'A' (actually, it is enough to consider $\dim({\mathcal H}_{\rm A})=\dim {\mathcal H}$), we generally consider an enlarged Helstrom matrix $\tilde{\Delta}=q\rho_{1{\rm A}}-(1-q)\rho_{2{\rm A}}$, where $\Delta=\Tr_{\rm A}(\tilde{\Delta})$, and $\|\tilde{\Delta}(t)\|_1=\|[\mathcal{E}_{(t,t_0)}\otimes1\hspace{-4pt}1][\tilde{\Delta}(t_0)]\|_1$. Thus, since an increment of the information accounted for by $\|\tilde{\Delta}\|_1$ denotes non-Markovianity in the dynamics, we can take the amount of information gained to assess how non-Markovian the evolution is. Explicitly, we may write

Equation (61)

by adding up every increment of information in some interval I:

Equation (62)

Then by maximizing over the initial Helstrom matrix $\tilde{\Delta}$ (i.e. maximizing over ρ1A, ρ2A and the bias q), we define

Equation (63)

where the subindex 'H' stands for Helstrom, as a measure of non-Markovianity. In virtue of theorem 3.4, $\mathcal{N}_{\rm H}^I=0$ if and only if the process is Markovian in the interval I. The quantity $\mathcal{N}_{\rm H}^I$ can be normalized via exponential or rational functions, for instance $\mathscr{D}_{\rm NM (exp-H)}^I:=1-{\rm e}^{-\mathcal{N}_{\rm H}^I}$ or $\mathscr{D}_{\rm NM (rat-H)}^I:=\mathcal{N}_{\rm H}^I/(1+\mathcal{N}_{\rm H}^I)$ .

This proposal was first suggested in [34]. For the unbiased case q = 1/2 and without taking into account the possible presence of ancillary systems, it was previously formulated in [33]. As in the case of geometric measures, the main drawback of the quantity $\mathcal{N}_{\rm H}^I$ is the difficult optimization process, which makes this measure rather impractical. For the restricted case of [33], there has been some progress along this line [104, 105]; see also section 5.1.1 and references therein.
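
Without performing the optimization over ρ1A, ρ2A and q, the sum of increments entering equation (62) can still be evaluated for a fixed Helstrom matrix, which yields a lower bound on $\mathcal{N}_{\rm H}^I$. The sketch below uses a hypothetical qubit dephasing model with an oscillating decoherence function (chosen only for illustration) and adds up the positive increments of $\|\Delta(t)\|_1$ on a time grid.

```python
import numpy as np

def coherence_factor(t):
    """Hypothetical decoherence function with revivals (toy non-Markovian dephasing)."""
    return np.exp(-0.1 * t) * np.cos(t)

def helstrom_norm(t, q, rho1, rho2):
    """Trace norm of Delta(t) = q rho1(t) - (1-q) rho2(t) under the toy dephasing map."""
    f = coherence_factor(t)
    def evolve(rho):
        out = rho.astype(complex).copy()
        out[0, 1] *= f
        out[1, 0] *= f
        return out
    Delta = q * evolve(rho1) - (1 - q) * evolve(rho2)
    return np.abs(np.linalg.eigvalsh(Delta)).sum()

rho1 = np.array([[0.5, 0.5], [0.5, 0.5]])     # |+><+|
rho2 = np.array([[0.5, -0.5], [-0.5, 0.5]])   # |-><-|
q = 0.5

ts = np.linspace(0.0, 10.0, 2001)
norms = np.array([helstrom_norm(t, q, rho1, rho2) for t in ts])
increments = np.diff(norms)
N_lower_bound = increments[increments > 0].sum()   # sum of the revivals
print(N_lower_bound)    # strictly positive: the toy dephasing is non-Markovian
```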

4.3. The RHP measure

As pointed out, even though the two previous measures enjoy several nice geometric or informational interpretations, they are very difficult to compute in practice. A computationally simpler measure of non-Markovianity was introduced in [106] by Rivas, Huelga and Plenio. Given a family $\{\mathcal{E}_{(t,t_0)}, t\geq t_0\}$, the basic idea of this measure is to quantify how far the intermediate dynamics $\{\mathcal{E}_{(t,t_1)}, t\geq t_1\geq t_0\}$ departs from complete positivity for every time t1. To obtain these partitions, note that by time-continuity we have:

$\mathcal{E}_{(t,t_0)}=\mathcal{E}_{(t,t_1)}\mathcal{E}_{(t_1,t_0)},\qquad t\geq t_1\geq t_0.\qquad (64)$

After right-multiplication with the inverse of $\mathcal{E}_{(t_1,t_0)}$ on both sides, we obtain the desired partitions

$\mathcal{E}_{(t,t_1)}=\mathcal{E}_{(t,t_0)}\mathcal{E}_{(t_1,t_0)}^{-1},\qquad t\geq t_1\geq t_0.\qquad (65)$

If these maps are completely positive (CP) for all t1, the time evolution is Markovian (definition 3.2). For the moment, we shall assume that $\mathcal{E}^{-1}_{(t_1,t_0)}$ does exist (we will come back to this point later on), so that $\mathcal{E}_{(t,t_1)}$ is well defined by equation (65). For non-Markovian dynamics, there must be some t1 such that $\mathcal{E}_{(t,t_1)}$ is not completely positive. Therefore, by measuring how much the intermediate dynamics $\{\mathcal{E}_{(t,t_1)}, t\geq t_1\geq t_0\}$ departs from the set of completely positive maps, we are measuring to what extent the time evolution is non-Markovian. Note that $\mathcal{E}_{(t,t_1)}$ is always trace-preserving as it is a composition of two trace-preserving maps.

In order to quantify the degree of non-complete positiveness of the maps $\{\mathcal{E}_{(t,t_1)}, t\geq t_1\geq t_0\}$ , we resort to the Choi–Jamiołkowski isomorphism [19, 20]. Consider the maximally entangled state between two copies of our system $|\Phi\rangle=\frac{1}{\sqrt{d}}\sum_{n=0}^{d-1}|n\rangle |n\rangle$ (here d denotes the dimension); we associate the map $\mathcal{E}_{(t,t_1)}$ to a (Choi–Jamiołkowski) matrix constructed by the rule

Equation (66): $[\mathcal{E}_{(t,t_1)}\otimes1\hspace{-4pt}1](|\Phi\rangle\langle\Phi|).$

Choi's theorem asserts that $\mathcal{E}_{(t,t_1)}$ is completely positive if and only if the matrix of equation (66) is positive semidefinite. In addition, note that since $\mathcal{E}_{(t,t_1)}$ is trace-preserving, the trace norm of the matrix (66) provides a measure of the non-completely positive character of $\mathcal{E}_{(t,t_1)}$ . More concretely,

Equation (67): $\big\|[\mathcal{E}_{(t,t_1)}\otimes1\hspace{-4pt}1](|\Phi\rangle\langle\Phi|)\big\|_1=1$ if and only if $\mathcal{E}_{(t,t_1)}$ is completely positive, and $\big\|[\mathcal{E}_{(t,t_1)}\otimes1\hspace{-4pt}1](|\Phi\rangle\langle\Phi|)\big\|_1>1$ otherwise.

Following [106], we define a function g(t) via the right derivative of the trace norm as

Equation (68): $g(t):=\lim_{\epsilon\rightarrow0^+}\frac{\big\|[\mathcal{E}_{(t+\epsilon,t)}\otimes1\hspace{-4pt}1](|\Phi\rangle\langle\Phi|)\big\|_1-1}{\epsilon},$

so that g(t) > 0 for some t if and only if the evolution is non-Markovian. Therefore the total amount of non-Markovianity in an interval t ∈ I will be given by

Equation (69): $\mathcal{N}_{\rm RHP}^I:=\int_I g(t)\,{\rm d}t,$

where 'RHP' stands for Rivas, Huelga and Plenio [106].

The quantity $\mathcal{N}_{\rm RHP}^I$ may be normalized via exponential or rational methods, for instance $\mathscr{D}_{\rm NM (\exp-RHP)}^I:=1-{\rm e}^{-\mathcal{N}_{\rm RHP}^I}$ or $\mathscr{D}_{\rm NM ( rat-RHP)}^I:=\mathcal{N}_{\rm RHP}^I/(1+\mathcal{N}_{\rm RHP}^I)$ . However, these normalizations turn out to be not very discriminative. Another way to obtain a more useful normalized measure was proposed in [107]. We explain here a slightly modified method in terms of g(t).

Define the function

Equation (70): $\bar{g}(t):=\tanh[g(t)],$

where g(t) is given by equation (68). Therefore $1\geq\bar{g}(t)\geq0$ with $\bar{g}(t)=0$ for all t if and only if the evolution is Markovian. Then, for a bounded interval t ∈ I, we define its normalized degree of non-Markovianity as

Equation (71): $\mathscr{D}_{\rm NM (RHP)}^I:=\frac{\int_I\bar{g}(t)\,{\rm d}t}{\int_I\chi[\bar{g}(t)]\,{\rm d}t},$

where the indicator function χ(x) is defined as

Equation (72): $\chi(x):=\begin{cases}1, & x>0,\\ 0, & x\leq 0.\end{cases}$

Thus, the degree, equation (71), is basically the non-Markovianity accumulated for each t ∈ I, divided by the total length of the subintervals of I where the dynamics is non-Markovian. It is easy to prove that $\mathscr{D}_{\rm NM (RHP)}^I$ is normalized. Let In ⊂ I be the collection of subintervals such that $\bar{g}(t)>0$ for t ∈ In. If |In| denotes the length of the subinterval In, we have

Equation (73)

because of the bound $\bar{g}(t)\leq1$ .

It is worth mentioning several points that one should keep in mind when evaluating this measure of non-Markovianity.

  • (i)  
    Note that, in general [see an exception in point (iii) below], we need to know the complete dynamical map $\{\mathcal{E}_{(t,t_0)}, t\geq t_0\}$ to compute the function g(t). The standard way to obtain it is by resorting to process tomography. Thus, one considers the evolution, for different final times t, of a complete set of states spanning the space of system operators. Then the dynamical map is reconstructed by tomography of the evolved final states (see for instance [14]). This is the only experimental way to proceed. However, if we know the theoretical evolution, for example by means of some model, there is a trick which sometimes helps. In that case, we may consider directly the evolution of the basis {|i〉〈j|} for different final times t. We write the resulting matrix as a (column) vector by stacking the columns on top of one another. This process is sometimes called vectorization and denoted by 'vec' [108, 109]. As a result, the dynamical map $\mathcal{E}_{(t,t_0)}$ can be seen as a matrix E(t, t0) acting on states written as (column) vectors, and moreover in the basis of {|i〉〈j|} such a matrix is given by
    Equation (74)
    where the column vectors are vij(t) = vec[|i〉〈j|(t)]. These are the vectorizations of the matrix |i〉〈j|(t), which denotes the matrix obtained by evolving the basis element |i〉〈j| from t0 to t. Once E(t, t0) is known for some interval t ∈ I, we can compute the intermediate dynamics in I according to equation (65), ${\boldsymbol E}_{(t,t_1)}={\boldsymbol E}_{(t,t_0)}{\boldsymbol E}^{-1}_{(t_1,t_0)}$ , where ${\boldsymbol E}^{-1}_{(t_1,t_0)}$ is just the standard matrix inverse. Finally, g(t) can be computed in the following way: first, construct the matrix $U_{2\leftrightarrow 3}[{\boldsymbol E}_{(t+\epsilon,t)}\otimes\mathbb{I}]U_{2\leftrightarrow 3}$ where U2↔3 is the commutation (or 'swap') matrix between the 'second' and the 'third' subspace5; second, apply $U_{2\leftrightarrow 3}[{\boldsymbol E}_{(t+\epsilon,t)}\otimes\mathbb{I}]U_{2\leftrightarrow 3}$ to vec(|Φ〉〈Φ|); third, write the result as a matrix, i.e. 'devectorize'; fourth, compute the trace norm of that matrix, which corresponds to $\Vert[\mathcal{E}_{(t+\epsilon,t)}\otimes1\hspace{-4pt}1](|\Phi\rangle\langle\Phi|)\Vert_1$ ; and finally, evaluate the right limit of equation (68) (a numerical sketch of this recipe is given after example 4.1 below).
  • (ii)  
    It may happen that for some t1, the map $\mathcal{E}_{(t_1,t_0)}$ is not bijective, so that the intermediate map $\mathcal{E}_{(t,t_1)}$ given by equation (65) is ill-defined. There are several ways to deal with this problem. If the singularity at t1 is isolated, and we know the dynamics in some neighborhood of t1, one can evaluate the function g(t) in this neighboring region of t1. By taking the limit t → t1, we usually obtain a divergence, $\lim_{t\rightarrow t_1} g(t)\rightarrow\infty$ . However, since the hyperbolic tangent removes the divergence, $\lim_{t\rightarrow t_1} \bar{g}(t)=1$ , we can compute $\mathscr{D}_{\rm NM (RHP)}^I$ without further problems. Another way to remove the singularity may be to compute indirectly the inverse of $\mathcal{E}_{(t_1,t_0)}$ by finding the inverse of $\epsilon1\hspace{-4pt}1+\mathcal{E}_{(t_1,t_0)}$ , which exists for all but finitely many values of ε. Then, at the end of the computation of $\bar{g}(t)$ , we proceed by taking the limit ε → 0. Another, more sophisticated (and sometimes inequivalent) method has been proposed using the Moore–Penrose pseudoinverse [110]. See also [111, 112] for other considerations about singularities in dynamical maps.
  • (iii)  
    There are cases where we know the dynamics fulfils some linear differential equation,
    Equation (75): $\frac{{\rm d}\rho(t)}{{\rm d}t}=\mathcal{L}_t[\rho(t)]=-{\rm i}[H(t),\rho(t)]+\sum_k\gamma_k(t)\left[V_k(t)\rho(t) V_k^\dagger(t)-\frac{1}{2}\{V_k^\dagger(t)V_k(t),\rho(t)\}\right],$
    where the decay rates may be negative, γk(t) < 0 for some t, so that it can describe non-Markovian evolutions. Then, there is a very practical way to obtain the function g(t). Since for small enough ε we have [26]
    Equation (76)
    the function g(t) can be computed directly from the generator $\mathcal{L}_t$ :
    Equation (77): $g(t)=\lim_{\epsilon\rightarrow0^+}\frac{\big\|[1\hspace{-4pt}1+\epsilon(\mathcal{L}_t\otimes1\hspace{-4pt}1)](|\Phi\rangle\langle\Phi|)\big\|_1-1}{\epsilon}.$
  • (iv)  
    It is possible to extend the definition equation (71) to unbounded intervals, typically I = [t0, ∞). However this extension must be carefully handled. It can be understood as a limiting procedure of bounded intervals In, such that limn→∞ In = [t0, ∞), for example In = [t0, n). Very crucially, this limit has to be taken at the last step in the computation:
    Equation (78): $\mathscr{D}_{\rm NM (RHP)}^{[t_0,\infty)}=\lim_{n\rightarrow\infty}\mathscr{D}_{\rm NM (RHP)}^{I_n}.$

Example 4.1. Consider the following dynamical map of a two-dimensional quantum system (qubit), describing the evolution from t0 = 0 (without loss of generality),

Equation (79): $\mathcal{E}_{(t,0)}(\rho)=[1-p(t)]\,\rho+p(t)\,\sigma_z\rho\,\sigma_z,$

and σz is the Pauli matrix. This dynamics describes the process where the nondiagonal elements (coherences) of ρ change sign with probability p(t), while with probability 1 − p(t) the qubit remains in the same state ρ. Note that for p(t) = 1/2, the coherences vanish completely. Let us compute the function g(t). The first step is to obtain $\mathcal{E}_{(t+\epsilon,t)}$ via equation (65). As suggested in point (i) above, it is useful to employ the 'vec' operation to obtain the inverse. We have

Equation (80)

where we have used the property ${\rm vec}(ABC) = (C^{\rm t} \otimes A)\,{\rm vec}(B)$ (cf [108, 109]), and $\mathbb{I}_k$ stands for the k × k identity matrix. Therefore,

Equation (81)

where 'diag(a1, a2, ..., aN)' denotes the diagonal matrix with entries a1, a2, ..., aN. Hence,

Equation (82)

Now, as commented in point (i) above, we have

Equation (83)

In this case, $U_{2\leftrightarrow 3}=\mathbb{I}_2\otimes \left(\begin{array}{@{}llll@{}} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right) \otimes \mathbb{I}_2$ and $|\Phi\rangle=\frac{1}{\sqrt{2}}(1,0,0,1)^{\rm t}$ , so that after some straightforward algebra, equation (83) reads

Equation (84)

By 'devectorizing', i.e. writing this vector as the corresponding 4 × 4 matrix and computing the trace norm, we immediately obtain

Equation (85)

Finally, by expanding at first order p(t + ε) ≃ p(t) + p'(t)ε, the limit in equation (68) can be easily computed to arrive at

Equation (86): $g(t)=\max\left\{0,\frac{2p'(t)}{2p(t)-1}\right\}.$

Thus, given the function p(t) and some interval I, with this result one immediately calculates $\mathcal{N}_{\rm RHP}^I$ or $\mathscr{D}_{\rm NM (RHP)}^I$ .
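The recipe of point (i) above can be checked numerically for this example. The following Python sketch assumes the illustrative choice p(t) = 0.2(1 − cos t), for which the inverse in equation (65) exists at all sampled times, and estimates g(t) and $\mathcal{N}_{\rm RHP}^I$ by finite differences.

```python
import numpy as np

# Hedged numerical sketch of the recipe of point (i), applied to the map of
# example 4.1.  The function p(t) = 0.2*(1 - cos t) is an illustrative
# assumption; with it the coherence factor 1 - 2p(t) never vanishes.

sz = np.diag([1.0, -1.0])
I2, I4 = np.eye(2), np.eye(4)

def p(t):
    return 0.2 * (1 - np.cos(t))

def E_matrix(t):
    """Matrix of E_(t,0) acting on vec(rho) (column-stacking convention)."""
    return (1 - p(t)) * I4 + p(t) * np.kron(sz, sz)

# Swap (commutation) matrix U_P on C^2 x C^2 and U_{2<->3} on four factors.
UP = np.zeros((4, 4))
for a in range(2):
    for b in range(2):
        UP[2 * b + a, 2 * a + b] = 1.0
U23 = np.kron(np.kron(I2, UP), I2)

phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)          # maximally entangled |Phi>
vec_phi = np.outer(phi, phi).flatten(order='F')              # vec(|Phi><Phi|)

def g(t, eps=1e-6):
    """Right-derivative estimate of equation (68) via the vec recipe."""
    E_inter = E_matrix(t + eps) @ np.linalg.inv(E_matrix(t))  # equation (65)
    choi_vec = U23 @ np.kron(E_inter, I4) @ U23 @ vec_phi
    choi = choi_vec.reshape((4, 4), order='F')                # 'devectorize'
    trace_norm = np.sum(np.linalg.svd(choi, compute_uv=False))
    return max(0.0, (trace_norm - 1.0) / eps)                 # clip rounding noise

ts = np.linspace(0, 2 * np.pi, 2001)
gs = np.array([g(t) for t in ts])
N_RHP = np.sum(gs[:-1] * np.diff(ts))                         # equation (69)
print("N_RHP over [0, 2*pi] ~", round(N_RHP, 4))
# g(t) vanishes while |1 - 2p(t)| shrinks (t < pi) and is positive while it
# grows back (t > pi), signalling non-Markovianity there.
```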

As mentioned above, the measure of non-Markovianity $\mathcal{N}_{\rm RHP}$ , equation (69), was first introduced in [106]. The more discriminative degree, equation (71), is a variant based on the same ideas as [107], where the normalization problem is further analyzed. Examples where this measure is studied can be found in [79, 113–118] for qubits coupled to bosonic environments, in [119] for more general spin systems coupled to bosonic environments, in [120, 121] for qubits coupled to other d-level systems, in [122–124] for qubits interacting with composite environments, in [11, 34, 125] for classical stochastic dynamics and in [11] for the so-called semi-Markov quantum processes. In addition, the application of the Choi–Jamiołkowski criterion to study the complete positivity of intermediate dynamics for some specific examples is considered in [126] as well.

4.4. Decay rates measures

Since Markovian dynamics is characterized by generators with the form of equation (32), in [127] Hall, Cresser, Li and Andersson proposed a measure of non-Markovianity focused on properties of the generator. Let us consider some dynamical evolution given by its generator,

Equation (87)

In order to characterize its non-Markovianity, we may write $\mathcal{L}_t$ in an orthonormal basis $\{G_j\}_{j=0}^{d^2-1}$ with respect to the Hilbert–Schmidt product $\Tr(G_m^\dagger G_n)=\delta_{mn}$ . More specifically, in [127] it is proposed to use a self-adjoint basis with $G_0=\mathbb{I}/\sqrt{d}$ , so that $\{G_j\}_{j=1}^{d^2-1}$ can be taken to be the (normalized) generators of the $\mathfrak{su}(d)$ algebra. Thus, by expanding every operator of the dissipative part of the generator,

Equation (88)

Equation (89)

Introducing this in equation (87), one obtains

Equation (90)

where $\tilde{c}_{mn}(t)=\sum_{k,\ell}w_{\ell n}^\ast(t) c_{k\ell}(t) v_{km}(t)$ forms a Hermitian matrix, $\tilde{c}_{mn}(t)=\tilde{c}_{nm}^\ast(t)$ , because $\mathcal{L}_t$ preserves the Hermiticity of ρ. Therefore, this matrix is diagonalized via some unitary operation, $\tilde{c}_{mn}(t)=\sum_j u_{mj}(t)\gamma_j(t) u_{nj}^\ast(t)$ , and the generator can be rewritten in the form

Equation (91)

with Lj(t) = ∑mumj(t)Gm, keeping orthonormality $\Tr[L_i^\dagger(t)L_j(t)]=\delta_{ij}$ . Note that since the eigenvalues γj(t) are independent of the basis, this form is unique (up to degeneracy). Now, Hall, Cresser, Li and Andersson define some functions of the eigenvalues (canonical decay rates) γj(t),

Equation (92): $f_j(t):=\max\{-\gamma_j(t),0\}.$

Because of theorem 3.1, every fj(t) vanishes at all times if and only if the evolution is Markovian. Therefore, the functions fj(t) can be used to construct a measure of non-Markovianity. For example, defining $f(t):=\sum_{j=1}^{d^2-1} f_j(t)$ , for a (bounded) time interval I,

Equation (93)

is a measure of non-Markovianity. Actually, it can be proven [127] that $f(t)=\frac{d}{2}g(t)$ (see equation (68)), so this quantity is proportional to $\mathcal{N}_{\rm RHP}^I$ , equation (69),

Equation (94): $\int_I f(t)\,{\rm d}t=\frac{d}{2}\,\mathcal{N}_{\rm RHP}^I.$

Interestingly, this approach also suggests a discrete measure—by computing $F_j^I=\int_I f_j(t)\,\rmd t$ , a non-Markovianity index can be defined by the rule

Equation (95)

i.e. the number of non-zero $F_j^I$ s in the interval I.
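As a minimal numerical sketch of these definitions, the following Python fragment evaluates $F_j^I$ , the resulting non-Markovianity index, and the integral of f(t) for two illustrative canonical rates (the rate functions are assumptions chosen only for illustration).

```python
import numpy as np

# Hedged sketch: decay-rate quantities of section 4.4 for given canonical
# rates gamma_j(t).  The rate functions below are illustrative choices.

def f_j(gamma):                          # equation (92): f_j(t) = max{-gamma_j(t), 0}
    return np.maximum(-gamma, 0.0)

ts = np.linspace(0.0, 2 * np.pi, 4001)
rates = {
    "gamma_1": np.sin(ts),               # negative for t in (pi, 2*pi)
    "gamma_2": np.full_like(ts, 0.3),    # always positive: no contribution
}

F = {name: np.sum(f_j(g)[:-1] * np.diff(ts)) for name, g in rates.items()}  # F_j^I
index = sum(1 for v in F.values() if v > 1e-12)   # number of non-zero F_j^I
f_total = sum(f_j(g) for g in rates.values())     # f(t) = sum_j f_j(t)

print({k: round(v, 4) for k, v in F.items()}, "non-Markovianity index =", index)
print("integral of f over I =", round(np.sum(f_total[:-1] * np.diff(ts)), 4))
```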

Example 4.2. Consider the evolution of a qubit given by the following master equation:

Equation (96)

subject to the conditions $\int_{t_0}^t\gamma_{-}(s)\,\rmd s\geq0$ and $\int_{t_0}^t\gamma_{z}(s)\,\rmd s\geq0$ to ensure the complete positivity of the dynamical map $\mathcal{E}_{(t,t_0)}$ . Let us compute the functions g(t) and f(t). For the first one, we use the formula in terms of the generator $\mathcal{L}_t$ , equation (77). By computing the eigenvalues of $[1\hspace{-4pt}1+\epsilon(\mathcal{L}_t\otimes1\hspace{-4pt}1)] (|\Phi\rangle\langle\Phi|)$ and expanding each of them to first order in ε, we obtain

Equation (97)

Thus, the limit of equation (77) is readily computed:

Equation (98)

Now, in order to find the functions fj(t), equation (92), we have to write equation (96) in an orthonormal basis with respect to the Hilbert–Schmidt product. However, since $\sigma_\pm=\frac{1}{2}(\sigma_x\pm{\rm i}\sigma_y)$ and because of the orthogonality of the Pauli matrices, $\big\{\frac{1}{\sqrt{2}}\mathbb{I}_2,\sigma_-,\sigma_+,\frac{1}{\sqrt{2}}\sigma_z\big\}$ forms an orthonormal basis. Thus, the canonical decay rates are γ−(t) and 2γz(t). By noting that $\max\{-\gamma_j(t),0\}=\frac{1}{2}[|\gamma_j(t)|-\gamma_j(t)]$ , we obtain

Equation (99)

Therefore, g(t) = f(t), as expected in this case since d = 2. For Markovian evolution, γ−(t) ⩾ 0 and γz(t) ⩾ 0 for all t, and g(t) = f(t) = 0.

Other examples where this measure is applied can be found in [125, 127]. See also [128] for an experimental proposal to probe non-Markovianity by negative decay rates.

4.5. Hierarchical k-divisibility degrees

Recently, Chruściński and Maniscalco have proposed a hierarchical way to assess non-Markovianity [129]. Their approach, based on the concept of k-divisibility, is interesting as it provides a way to define some kind of maximally non-Markovian dynamics. Basically, a family of dynamical maps, $\{\mathcal{E}_{(t_2,t_1)},t_2\geq t_1\geq t_0\}$ , is k-divisible if $\mathcal{E}_{(t_2,t_1)}\otimes1\hspace{-4pt}1_k$ is a positive map for all t2 ⩾ t1 ⩾ t0 (here $1\hspace{-4pt}1_k$ denotes the identity map acting on the space of k × k matrices). Therefore, if the dimension of the quantum system is d, a k-divisible process with k ⩾ d is what in this work has been called a divisible or Markovian process (see definition 3.2). The 1-divisible processes are the P-divisible processes as introduced in definition 3.1, and the 0-divisible processes are processes where $\mathcal{E}_{(t_2,t_1)}$ is not a positive operator for some t1 and t2 ⩾ t1.

Moreover, analogously to theorems 3.3 and 3.4, we have that a process is k-divisible if and only if $\tilde{\sigma}_k(\tilde{\Delta},t):=\frac{\rmd}{\rmd t}\big\|[\mathcal{E}_{(t,t_0)}\otimes1\hspace{-4pt}1_k] \tilde{\Delta}\big\|_1\leq0$ for every Helstrom matrix $\tilde{\Delta}=q\rho_{1A}-(1-q)\rho_{2A}$ with an ancillary space of dimension k. In a similar fashion to equation (63), Chruściński and Maniscalco define a set of degrees to quantify the departure from k-divisibility for t ∈ I:

Equation (100): $\mathscr{D}_{{\rm ND}(k)}^I:=\sup_{\tilde{\Delta}}\frac{N_{k}^{+}(\tilde{\Delta},I)}{|N_{k}^{-}(\tilde{\Delta},I)|},$

where $N_{k}^{\pm}(\tilde{\Delta},I):=\int_{t\in I,\,\tilde{\sigma}_k\gtrless 0}\tilde{\sigma}_k(\tilde{\Delta},t)\,{\rm d}t$ , and the subindex 'ND' stands for non-divisibility.

Since $\mathcal{E}_{(t,t_0)}$ is completely positive for any final time t, it is easy to prove that $N_{k}^{+}(\tilde{\Delta},I)\leq |N_{k}^{-}(\tilde{\Delta},I)|$ [129], therefore $\mathscr{D}_{{\rm ND} (k)}^I\in [0,1]$ for all k. Moreover, as k increases, so does the dimension of the space in the optimization problem equation (100), and hence it is clear that

Equation (101): $\mathscr{D}_{{\rm ND} (1)}^I\leq\mathscr{D}_{{\rm ND} (2)}^I\leq\cdots\leq\mathscr{D}_{{\rm ND} (d)}^I.$

In this equation, only $\mathscr{D}_{{\rm ND} (d)}^I$ is a degree of non-Markovianity as defined in this work. The other quantities are zero for non-Markovian but k-divisible (k < d) dynamics. This hierarchy of degrees of non-divisibility is particularly useful to attempt a definition of maximally non-Markovian dynamics. Indeed, Chruściński and Maniscalco propose to give the name 'maximally non-Markovian dynamics' to those dynamics for which $\mathscr{D}_{{\rm ND} (1)}^I=1$ , and consequently $\mathscr{D}_{{\rm ND} (2)}^I=\mathscr{D}_{{\rm ND} (3)}^I=\ldots=\mathscr{D}_{{\rm ND} (d)}^I=1$ . A particular example of this kind of dynamics for a qubit is the one generated by the master equation $\frac{\rmd}{{\rm d}t}\rho=\gamma(t)(\sigma_z\rho\sigma_z-\rho)$ , for an interval I such that γ(t) is periodic in I. For instance, γ(t) = sin(t) or γ(t) = tan(t) in t ∈ [0, 2π]. Interestingly, for these two examples the Choi–Jamiołkowski degree, equation (71), provides different values. We obtain $\mathscr{D}_{\rm NM (RHP)}^I=0.758$ and $\mathscr{D}_{\rm NM (RHP)}^I=0.803$ , for γ(t) = sin(t) and γ(t) = tan(t), respectively.

5. Witnesses of quantum non-Markovianity

In this section, we review the different ways to detect non-Markovianity via witnesses. A witness of non-Markovianity is a quantity that vanishes for all Markovian dynamics (see also [32, 130, 131]), but may also vanish for some non-Markovian dynamics. Thus, when a witness of non-Markovianity gives a non-zero value, we are sure that the dynamics is non-Markovian.

In general, we can classify the witnesses of non-Markovianity that have been presented in the literature according to two guiding principles. There are witnesses based on monotonic quantities under completely positive maps, and witnesses based on monotonic quantities under local completely positive maps. In the following, we review several proposals in these two classes and illustrate their use with a simple example.

5.1. Witnesses based on monotonicity under completely positive maps

5.1.1. Trace distance and the BLP quantifier.

Consider the unbiased situation in the two-state discrimination problem, q = 1/2; then the Helstrom matrix reads Δ = (ρ1 − ρ2)/2, where we have neglected the possible presence of ancillary systems. Thus, the trace norm of Δ becomes the trace distance between the states ρ1 and ρ2:

Equation (102): $D(\rho_1,\rho_2):=\frac{1}{2}\|\rho_1-\rho_2\|_1=\|\Delta\|_1.$

Analogously to equations (61) and (62), we write,

Equation (103): $\sigma(\rho_1,\rho_2,t):=\frac{{\rm d}}{{\rm d}t}D[\rho_1(t),\rho_2(t)],$

and for some interval, I,

Equation (104): $\int_{t\in I,\,\sigma>0}\sigma(\rho_1,\rho_2,t)\,{\rm d}t.$

If this quantity is not zero for some pair of states ρ1 and ρ2, we are sure the dynamics is non-Markovian in I, as it is a lower bound to the non-Markovianity measure $\mathcal{N}_{\rm H}^I$ , equation (63). In particular, we may be interested in finding the largest value of equation (104) in the time interval (0, ∞). To this end, Breuer, Laine and Piilo [33, 77] define the quantifier

Equation (105): $\mathcal{N}_{\rm BLP}:=\max_{\rho_{1}(0),\rho_{2}(0)}\int_{\sigma>0}\sigma(\rho_1,\rho_2,t)\,{\rm d}t.$

Example 5.1. Consider the following master equation of a qubit system

Equation (106)

with $\int_{t_0}^t\gamma(s)ds\geq0$ for completely positive dynamics. This equation can be easily integrated—by writing $\rho(0)=\left(\begin{array}{@{}ll@{}} \rho_{11} & \rho_{12}\\ \rho_{21}& \rho_{22} \end{array}\right)$ , we obtain (t0 = 0 without loss of generality)

Equation (107)

Note that 0 ⩽ R(t) ⩽ 1. Let us compute the trace distance between two different initial states, for example $\rho_1=\frac{1}{2}\big(\scriptsize\begin{array}{@{}ll@{}} 1 & 1\\ 1 & 1 \end{array}\big)$ and $\rho_2=\frac{1}{2} \big(\scriptsize\begin{array}{@{}ll@{}} 1 & -1\\ -1 & 1 \end{array}\big)$ . Because of equation (107), we immediately obtain

Equation (108)

so that

Equation (109)

Therefore, if σ(ρ1, ρ2, t) > 0 for some t, then γ(t) < 0, and the dynamics is non-Markovian.
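A simple numerical rendering of this example is sketched below in Python. It assumes the dephasing generator $\gamma(t)(\sigma_z\rho\sigma_z-\rho)$ , as in the example of section 4.5, with the illustrative choice γ(t) = sin(t), for which the coherences decay by the factor exp[−2∫γ(s) ds].

```python
import numpy as np

# Hedged sketch: trace-distance (BLP) witness for qubit pure dephasing with
# the generator d(rho)/dt = gamma(t) (sz rho sz - rho), choosing gamma(t) = sin(t).
# Off-diagonal elements then decay exactly as exp(-2*Gamma(t)),
# with Gamma(t) = int_0^t gamma(s) ds = 1 - cos(t).

def trace_distance(r1, r2):
    ev = np.linalg.eigvalsh(r1 - r2)
    return 0.5 * np.sum(np.abs(ev))

def dephase(rho, Gamma):
    out = rho.copy().astype(complex)
    out[0, 1] *= np.exp(-2 * Gamma)
    out[1, 0] *= np.exp(-2 * Gamma)
    return out

plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)      # rho_1 = |+><+|
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)   # rho_2 = |-><-|

ts = np.linspace(0, 2 * np.pi, 1000)
D = np.array([trace_distance(dephase(plus, 1 - np.cos(t)),
                             dephase(minus, 1 - np.cos(t))) for t in ts])

# Accumulate the increments of D (cf. equation (104)); any positive value
# witnesses non-Markovianity.  Here D revives for t in (pi, 2*pi), where
# gamma(t) = sin(t) < 0.
sigma = np.diff(D) / np.diff(ts)
witness = np.sum(np.clip(sigma, 0, None) * np.diff(ts))
print("accumulated increase of the trace distance:", round(witness, 4))
```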

The use of the trace distance to witness non-Markovianity was originally proposed in [33] and further analyzed in [77]. Due to its simplicity and intuitive physical interpretation, it has been applied to detect non-Markovian features in the dynamics of qubit [79, 113–118, 132–152] and qutrit systems [153] coupled to bosonic environments, qubits coupled to other finite dimensional systems [120, 121, 154], and to composite [123, 124, 155–157] and chaotic [158–160] environments. It has also been employed to analyze memory-kernel master equations [161, 162], quantum semi-Markov processes [11], classical noise [11, 34, 125, 163–165], fermionic systems [166] and collisional models [167], and to study exciton–phonon dynamics in the energy transfer of photosynthetic complexes [168, 169]. Moreover, this witness has been implemented experimentally within a linear optics set-up [105, 170–172]. On the other hand, some connections have been found between the non-monotonic behavior of the trace distance and geometric phases [173], the Loschmidt echo [174, 175], the dynamical recovering of quantum coherence by applying local operations [176], and the appearance of system–environment correlations [177]. In this regard, this witness has also been proposed as a tool to detect the presence of initial system–environment correlations [140, 143, 145, 178–184].

While very efficient under certain conditions, the trace distance cannot witness some non-Markovian processes, for example those where the non-Markovianity is encoded just in the 'non-unital part' of the dynamics [185]. This part corresponds to the affine vector c(t, t0) when the dynamics is visualized in the Bloch space—see equation (145). Necessary and sufficient conditions for the trace distance to witness non-Markovianity can be found in [127].

As a matter of curiosity, the trace distance has been also adapted to measure the degree of non-Markovianity of musical compositions [186].

5.1.2. Fidelity.

The fidelity F(ρ1, ρ2) between two quantum states ρ1 and ρ2 is a generalization of the transition probability |〈ψ1|ψ2〉|2 between two pure states |ψ1〉 and |ψ2〉 to density matrices. Specifically, the fidelity is defined [187, 188] by the equation

Equation (110): $F(\rho_1,\rho_2):=\max_{|\Psi_1\rangle,|\Psi_2\rangle}|\langle\Psi_1|\Psi_2\rangle|^2.$

Here, |Ψ1〉 and |Ψ2〉 are two purifications of ρ1 = TrA(|Ψ1〉〈Ψ1|) and ρ2 = TrA(|Ψ2〉〈Ψ2|), where TrA denotes the partial trace on some ancillary system A, and the maximum is taken over all possible purifications (see [14] for a pedagogical introduction6). Uhlmann [187] solved the optimization problem, obtaining

Equation (111): $F(\rho_1,\rho_2)=\left[\Tr\sqrt{\sqrt{\rho_1}\,\rho_2\sqrt{\rho_1}}\right]^2.$

Among several properties, the fidelity is monotonic under completely positive maps $\mathcal{E}$ ,

Equation (112): $F[\mathcal{E}(\rho_1),\mathcal{E}(\rho_2)]\geq F(\rho_1,\rho_2),$

with equality if and only if the completely positive map is unitary, $\mathcal{E}(\cdot)=U(\cdot)U^\dagger$ [189]. Thus, the fidelity is monotonically increasing for Markovian evolutions, and therefore it may be used to witness non-Markovian behavior.
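A minimal Python sketch of the Uhlmann formula, equation (111), is given below; the states used for the check are two dephased qubit states with an illustrative coherence factor r (an assumption made only for the example).

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho1, rho2):
    """Uhlmann fidelity, equation (111): F = (Tr sqrt(sqrt(rho1) rho2 sqrt(rho1)))^2."""
    s = sqrtm(rho1)
    inner = sqrtm(s @ rho2 @ s)
    return float(np.real(np.trace(inner)) ** 2)

# Quick check with dephased |+> and |-> states (cf. example 5.2), using an
# illustrative dephasing factor r in (0, 1].
r = 0.5
rho1 = 0.5 * np.array([[1, r], [r, 1]])
rho2 = 0.5 * np.array([[1, -r], [-r, 1]])
print("F =", round(fidelity(rho1, rho2), 4))   # equals 1 - r**2 = 0.75 for these states
```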

Example 5.2. Consider again the simple model of pure dephasing, equations (106) and (107). Again, for the initial states $\rho_1=\frac{1}{2}\big(\scriptsize\begin{array}{@{}ll@{}} 1 & 1\\ 1 & 1 \end{array}\big)$ and $\rho_2=\frac{1}{2}\big(\scriptsize\begin{array}{@{}ll@{}} 1 & -1\\ -1 & 1 \end{array}\big)$ , a straightforward computation of the fidelity gives

Equation (113)

so that

Equation (114)

Hence, if the fidelity decreases at some time t, then γ(t) < 0 and the dynamics is non-Markovian.

In [190], the approach of [33, 77], originally proposed for the trace distance (see previous section), is reconsidered with the so-called Bures distance [191]:

Equation (115)

Since the authors of [190] aim at quantifying non-Markovianity in Gaussian states of harmonic oscillators, the use of the fidelity instead of the trace distance is more convenient, because a closed formula for the latter is still lacking for Gaussian states. Other examples can be found in [192–194]. Regarding Gaussian states, an alternative approach to witness non-Markovianity is suggested in [195].

A different witness in terms of fidelity was previously proposed in [196]; however that was only able to detect deviations from time-homogeneous Markov processes, i.e. quantum dynamical semigroups where $\mathcal{E}_{(t_2,t_1)}=\mathcal{E}_{(t_2-t_1)}$ for every t1 and t2. See [97] for another work exclusively focused on time-homogeneous dynamics.

5.1.3. Quantum relative entropies.

Another similar witness is constructed with the (von Neumann) relative entropy between two quantum states ρ1 and ρ2:

Equation (116): $S(\rho_1\|\rho_2):=\Tr(\rho_1\log\rho_1)-\Tr(\rho_1\log\rho_2).$

Although the relative entropy is neither symmetric, S(ρ1||ρ2) ≠ S(ρ2||ρ1), nor satisfies the triangle inequality, it is often interpreted as a distance measure because S(ρ1||ρ2) ⩾ 0, vanishing if and only if ρ1 = ρ2 (Klein's inequality [14]). Moreover, if the intersection of the kernel of ρ2 with the support of ρ1 is non-trivial, then S(ρ1||ρ2) becomes infinite.

Analogously to the Bures and the trace distance, the quantum relative entropy is monotonic under completely positive and trace-preserving maps $\mathcal{E}$ ,

Equation (117): $S[\mathcal{E}(\rho_1)\|\mathcal{E}(\rho_2)]\leq S(\rho_1\|\rho_2).$

The proof of this result was first given by Lindblad [197] for finite dimensional systems, and Uhlmann [198] extended it to the general case (see also [36] and [199]).

Therefore, the quantum relative entropy between any two states is monotonically decreasing with time in a Markovian process, and any increment of it at some time instant reveals the non-Markovian character of the dynamics.
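The following Python sketch evaluates equation (116) from the spectral decompositions of the two states, returning infinity when the support condition above is violated; the two test states are illustrative dephased qubit states.

```python
import numpy as np

def relative_entropy(rho1, rho2, tol=1e-12):
    """Quantum relative entropy, equation (116), in nats (natural logarithm)."""
    p, U = np.linalg.eigh(rho1)
    q, V = np.linalg.eigh(rho2)
    overlap = np.abs(U.conj().T @ V) ** 2              # |<u_i|v_j>|^2
    # Support condition: infinite if rho1 has weight on the kernel of rho2.
    if np.any((p > tol) & (overlap @ (q <= tol).astype(float) > tol)):
        return np.inf
    term1 = np.sum(p[p > tol] * np.log(p[p > tol]))    # Tr(rho1 log rho1)
    logq = np.log(np.where(q > tol, q, 1.0))
    term2 = np.sum(p[:, None] * overlap * logq[None, :])   # Tr(rho1 log rho2)
    return float(term1 - term2)

# Example: two illustrative dephased qubit states (cf. example 5.3).
r = 0.5
rho1 = 0.5 * np.array([[1, r], [r, 1]])
rho2 = 0.5 * np.array([[1, -r], [-r, 1]])
print(relative_entropy(rho1, rho2))
```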

Example 5.3. For the model of pure dephasing, equations (106) and (107), and the initial states $\rho_1=\frac{1}{2}\big(\scriptsize\begin{array}{@{}ll@{}} 1 & 1\\ 1 & 1 \end{array}\big)$ and $\rho_2=\frac{1}{2}\big(\scriptsize\begin{array}{@{}ll@{}} 1 & -1\\ -1 & 1 \end{array}\big)$ , the quantum relative entropy becomes

Equation (118)

so that its derivative is

Equation (119)

Since 0 ⩽ R(t) ⩽ 1, everything multiplying γ(t) in the above equation is negative. Hence, an increment in the quantum relative entropy at some t implies γ(t) < 0 and non-Markovianity.

The use of the quantum relative entropy to witness non-Markovianity was originally proposed in [77]. In [32], it is suggested to use more general relative entropies due to Renyi [200] and Tsallis [201] for the same task. In this regard, [202] enumerates several distances fulfilling the monotonicity condition. Additionally, [203] proposed to use the monotonicity of the relative entropy to detect the presence of initial system–environment correlations.

5.1.4. Quantum Fisher information.

Following [204, 205] (see also [82]), the quantum Fisher information can be defined as the infinitesimal Bures distance (115) between two quantum states. For simplicity, assume some one-parametric family of quantum states ρθ, then

Equation (120)

where $\mathcal{F}(\rho_\theta)$ is the so-called quantum Fisher information of the family ρθ. Equivalently, we write

Equation (121)

Thus, the quantum Fisher information of ρθ measures the sensitivity of the Bures distance when θ is varied. In turn, this can be interpreted as the information about θ that is contained in the family ρθ, in such a way that if ρθ does not depend on θ, $\mathcal{F}(\rho_\theta)=0$ . We will come back to this point later.

Additionally, the quantum Fisher information admits other different but equivalent definitions [35, 82, 204, 205]. For example [205], it can be defined as the maximum Fisher information of classical probabilities p(x|θ) = Tr(Πxρθ), where the optimization is made over all possible POVMs {Πx},

Equation (122): $\mathcal{F}(\rho_\theta)=\max_{\{\Pi_x\}}F[p(x|\theta)].$

Recall that the Fisher information of a probability distribution p(x|θ) is defined as

Equation (123): $F[p(x|\theta)]:=\int{\rm d}x\,p(x|\theta)\left[\frac{\partial\ln p(x|\theta)}{\partial\theta}\right]^2.$

Another definition is given in terms of the so-called symmetric logarithmic derivative operator L, which is defined via the implicit equation

Equation (124): $\frac{\partial\rho_\theta}{\partial\theta}=\frac{1}{2}(L\rho_\theta+\rho_\theta L),$

and depends on the particular form of ρθ, L = L(ρθ). The quantum Fisher information is given by the variance of this operator in the family ρθ [35],

Equation (125): $\mathcal{F}(\rho_\theta)=\Tr(\rho_\theta L^2).$

An explicit proof of the equivalence between equations (121) and (125) can be found in [191].
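For completeness, the quantum Fisher information can be evaluated numerically from the spectral decomposition of ρθ through the standard closed-form expression $\mathcal{F}=\sum_{i,j}2|\langle i|\partial_\theta\rho_\theta|j\rangle|^2/(\lambda_i+\lambda_j)$ (restricted to λi + λj > 0), which is equivalent to the SLD definition. The sketch below applies it to the phase-encoded qubit of example 5.4 with an illustrative dephasing factor r.

```python
import numpy as np

# Hedged sketch: quantum Fisher information from the spectral decomposition of
# rho_theta.  The family below is the phase-encoded qubit of example 5.4 with
# an illustrative dephasing factor r (an assumption for the example).

def qfi(rho, drho, tol=1e-12):
    lam, vecs = np.linalg.eigh(rho)
    d = vecs.conj().T @ drho @ vecs          # matrix elements <i| d_theta rho |j>
    F = 0.0
    for i in range(len(lam)):
        for j in range(len(lam)):
            if lam[i] + lam[j] > tol:
                F += 2 * abs(d[i, j]) ** 2 / (lam[i] + lam[j])
    return F

r, theta, dtheta = 0.8, 0.3, 1e-6
def rho_theta(th):
    return 0.5 * np.array([[1, r * np.exp(-1j * th)],
                           [r * np.exp(1j * th), 1]])

drho = (rho_theta(theta + dtheta) - rho_theta(theta - dtheta)) / (2 * dtheta)
print("QFI =", round(qfi(rho_theta(theta), drho), 6))   # equals r**2 = 0.64 here
```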

Going back to the problem of witnessing non-Markovianity, the quantum Fisher information is also monotonically decreasing under Markovian dynamics, as it cannot increase under completely positive maps. This can be shown directly from equation (121). Because of (112), the Bures distance (and its square) is monotonically decreasing under a completely positive map $\mathcal{E}$ , so that

Equation (126): $\mathcal{F}[\mathcal{E}(\rho_\theta)]\leq\mathcal{F}(\rho_\theta).$

The use of the quantum Fisher information to witness non-Markovianity is originally due to Lu et al in [206]. These authors provided a proof of the monotonicity of the Fisher information by using the definition equation (125), and introduced a flow of quantum Fisher information by

Equation (127): $\mathcal{I}_{\rm QFI}(t):=\frac{\partial\mathcal{F}[\rho_\theta(t)]}{\partial t}.$

Thus if $\mathcal{I}_{\rm QFI}(t)>0$ for some t, the time evolution is non-Markovian. Moreover, if the evolution is given by some master equation,

Equation (128)

the quantum Fisher information flow can be written as

Equation (129)

Equation (130)

Therefore, $\mathcal{I}_{\rm QFI}$ is non-positive if all γk(t) ⩾ 0, in accordance with theorem 3.1.

Example 5.4. Consider the family of states $\rho_\theta=\frac{1}{2}\big(\scriptsize\begin{array}{@{}ll@{}} 1 & {\rm e}^{-{\rm i}\theta} \\ {\rm e}^{{\rm i}\theta} & 1 \end{array}\big)$ , which is typically generated by applying the phase shift θ to the state ${| {\psi} \rangle}=\frac{1}{\sqrt{2}}(1,1)^{\rm t}$ . If ρθ is subject to pure dephasing, equations (106) and (107), we can compute the Fisher information directly by expanding the squared Bures distance between ρθ(t) and ρθ+δθ(t) up to the second order in δθ (equation (120)). After some algebra, we find

Equation (131)

Thus, the quantum Fisher information flow, equation (127), is

Equation (132)

and $\mathcal{I}_{\rm QFI}(t)>0$ for some t denotes γ(t) < 0 and non-Markovianity.

Other examples where the quantum Fisher information flow is computed can be found in the original reference [206] and in [174, 194, 207], where its possible relation with the Loschmidt echo was explored.

Notably, this witness of non-Markovianity may be relevant in the context of quantum parameter estimation. Specifically, the error (variance) of any (unbiased) estimation of the parameter θ is related to the quantum Fisher information through the quantum Cramer–Rao bound [35, 82, 204, 205]:

Equation (133): ${\rm Var}(\hat{\theta})\geq\frac{1}{\mathcal{F}(\rho_\theta)}.$

Thus, an increment in $\mathcal{F}(\rho_\theta)$ could be linked with an increment of information about the parameter θ. Nevertheless, note that the quantum Fisher information provides just a lower bound to the error on θ, and in fact there are cases where this bound is not achievable.

5.1.5. Capacity measures.

In [208] Bylicka et al have suggested the use of capacity measures to detect non-Markovianity. Specifically, given a completely positive and trace-preserving map $\mathcal{E}$ and some quantum state ρ, we introduce the mutual information between ρ and $\mathcal{E}(\rho)$ via

Equation (134): $I(\rho,\mathcal{E}):=S(\rho)+I_{\rm c}(\rho,\mathcal{E}).$

Here S(ρ) = −Tr(ρ log ρ) is the von Neumann entropy and $I_{\rm c}(\rho,\mathcal{E})$ is the so-called quantum coherent information, defined as [14],

Equation (135): $I_{\rm c}(\rho,\mathcal{E}):=S[\mathcal{E}(\rho)]-S\{[\mathcal{E}\otimes1\hspace{-4pt}1](|\Psi_\rho\rangle \langle\Psi_\rho|)\},$

where $|\Psi_\rho\rangle\in\mathcal{H}\otimes\mathcal{H}_{\rm A}$ is a purification of ρ = TrA(|Ψρ〉〈Ψρ|). Remarkably, the quantity $S\{[\mathcal{E}\otimes1\hspace{-4pt}1](|\Psi_\rho\rangle \langle\Psi_\rho|)\}$ does not depend on the particular choice of purification. The quantum coherent information is monotonic under completely positive maps7:

Equation (136): $I_{\rm c}(\rho,\mathcal{E}_2\circ\mathcal{E}_1)\leq I_{\rm c}(\rho,\mathcal{E}_1),$

and the same inequality is satisfied by $I(\rho,\mathcal{E})$ . Thus, in [208], the following two capacity measures are proposed to witness non-Markovianity:

Equation (137): $C_{\rm ea}(\mathcal{E}_{(t,t_0)}):=\max_{\rho}I(\rho,\mathcal{E}_{(t,t_0)}),$

Equation (138): $Q(\mathcal{E}_{(t,t_0)}):=\max_{\rho}I_{\rm c}(\rho,\mathcal{E}_{(t,t_0)}).$

The entanglement assisted capacity Cea sets a bound on the amount of classical information that can be transmitted along the dynamical process described by $\mathcal{E}_{(t,t_0)}$ when the sender at t0 and the receiver at t are allowed to share an unlimited amount of entanglement. Similarly, the capacity Q provides the limit to the rate at which quantum information can be reliably sent by the quantum process (for a single use).
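A possible numerical sketch of these quantities, for a channel specified by a set of Kraus operators, is given below; the Kraus set (a pure-dephasing channel with an illustrative parameter p) and the choice of the maximally mixed input are assumptions made only for illustration.

```python
import numpy as np
from scipy.linalg import sqrtm

# Hedged sketch: I(rho, E) and I_c(rho, E) of equations (134) and (135) for a
# channel given by Kraus operators, evaluated at the maximally mixed input.
# The Kraus set {sqrt(1-p) I, sqrt(p) sz} is an illustrative dephasing channel.

def entropy(rho, tol=1e-12):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > tol]
    return float(-np.sum(lam * np.log2(lam)))          # entropies in bits

def mutual_and_coherent_information(kraus, rho):
    d = rho.shape[0]
    out = sum(K @ rho @ K.conj().T for K in kraus)                   # E(rho)
    # Purification |Psi_rho> = (sqrt(rho) x I) sum_n |n>|n>  (system = first factor).
    psi = np.kron(sqrtm(rho), np.eye(d)) @ np.eye(d).flatten(order='F')
    joint0 = np.outer(psi, psi.conj())
    joint = sum(np.kron(K, np.eye(d)) @ joint0 @ np.kron(K, np.eye(d)).conj().T
                for K in kraus)                                      # (E x 1)(|Psi><Psi|)
    I_c = entropy(out) - entropy(joint)                              # equation (135)
    return entropy(rho) + I_c, I_c                                   # equation (134)

p = 0.25
sz = np.diag([1.0, -1.0])
kraus = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * sz]
I_mut, I_coh = mutual_and_coherent_information(kraus, np.eye(2) / 2)
print("I =", round(I_mut, 4), " I_c =", round(I_coh, 4))
```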

Example 5.5. Let us calculate the capacity measures for the pure dephasing model, equations (106) and (107). It is immediately possible to check that the dynamical map in this case is given by

Equation (139)

It can be shown [118, 209] that the maximum for both measures Cea and Q is reached for a maximally mixed state, $\rho=\mathbb{I}_2/2$ , and then the required purification has to be a maximally entangled state, e.g. $|\Psi_\rho\rangle=|\Phi\rangle=\frac{1}{\sqrt{2}}(1,0,0,1)^{\rm t}$ ,

Equation (140)

Thus a direct computation leads to

Equation (141)

Equation (142)

Therefore, both quantities have the same derivative

Equation (143)

which can be positive only for non-Markovian dynamics, γ(t) < 0.

Further examples of the use of these witnesses are found in [165] and [118, 210] for a qubit interacting with a random classical field and with a bosonic environment, respectively.

5.1.6. Bloch volume measure.

Another interesting way to witness non-Markovianity was suggested by Lorenzo et al in [211]. These authors expand the state ρ in the basis $\{G_j\}_{j=0}^{d^2-1}$ , where $G_0=\mathbb{I}/\sqrt{d}$ and $\{G_j\}_{j=1}^{d^2-1}$ are the (normalized) generators of the $\mathfrak{su}(d)$ algebra,

Equation (144)

Then, it is well-known that the action of a dynamical map can be seen as an affine transformation of the Bloch vector ${\boldsymbol r}=(r_1,\ldots,r_{d^2-1})^{\rm t}$ ,

Equation (145): ${\boldsymbol r}(t)={\boldsymbol M}_{(t,t_0)}\,{\boldsymbol r}(t_0)+{\boldsymbol c}_{(t,t_0)},$

where $[{\boldsymbol M}_{(t,t_0)}]_{ij}=\Tr[G_i\mathcal{E}_{(t,t_0)}(G_j)]$ and $[{\boldsymbol c}_{(t,t_0)}]_i=\Tr[G_i\mathcal{E}_{(t,t_0)}(\mathbb{I})]/d$ for i, j > 0.

It can be proven that, since $\mathcal{E}_{(t,t_0)}$ is a composition of completely positive maps, the absolute value of the determinant of M(t, t0) decreases monotonically with time [27]. Interestingly, |det[M(t, t0)]| describes the change in volume of the set of states accessible through the evolution [211], so that Markovian evolutions reduce (or leave invariant) the volume of accessible states. Thus, this witness enjoys a nice geometrical interpretation. However, similarly to the trace distance, since the volume of accessible states is independent of the affine vector c(t, t0), it is not sensitive to dynamics where the non-Markovianity is encoded in c(t, t0). More concretely, it can be shown that the volume of accessible states only detects non-Markovian dynamics such that Tr[M(t, t0)] > 0 [127].
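The affine representation can be constructed directly from the definitions below equation (145). The following Python sketch does so for an illustrative pure-dephasing qubit channel (an assumption chosen only for the example) and evaluates |det M|.

```python
import numpy as np

# Hedged sketch: build the affine (Bloch) representation of equation (145),
# M_ij = Tr[G_i E(G_j)] and c_i = Tr[G_i E(I)]/d, and evaluate |det M|.
# The map used here is the illustrative channel rho -> (1-p) rho + p sz rho sz.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
G = [m / np.sqrt(2) for m in (sx, sy, sz)]          # normalized su(2) generators

def channel(X, p):
    return (1 - p) * X + p * sz @ X @ sz

def affine_rep(p, d=2):
    M = np.array([[np.real(np.trace(Gi @ channel(Gj, p))) for Gj in G] for Gi in G])
    c = np.array([np.real(np.trace(Gi @ channel(np.eye(d, dtype=complex), p))) / d
                  for Gi in G])
    return M, c

for p in (0.1, 0.3):
    M, c = affine_rep(p)
    print(f"p={p}: |det M| = {abs(np.linalg.det(M)):.4f}, c = {np.round(c, 12)}")
# Here |det M| = (1-2p)^2; an increase of |det M(t,t0)| in time would witness
# non-Markovianity (e.g. if p(t) moved back towards 0 or 1).
```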

Example 5.6. For the pure dephasing model, equation (106), we take the (normalized) generators of $\mathfrak{su}(2)$ algebra and the identity as the basis, i.e. $\big\{\frac{1}{\sqrt{2}}\mathbb{I}_2, \frac{1}{\sqrt{2}}\sigma_x,\frac{1}{\sqrt{2}}\sigma_y, \frac{1}{\sqrt{2}}\sigma_z\big\}$ . From (139), it is immediately possible to obtain that for this model c(t,0) = 0 and

Equation (146)

so that,

Equation (147)

and we arrive at the same conclusions as with the quantum Fisher information in equation (131).

For other examples of the use of |det[M(t, t0)]| to track non-Markovianity, see the original reference [211] and [123, 212, 213].

5.2. Witnesses based on monotonicity under local completely positive maps

Witnesses of this second kind are typically correlation measures between the dynamical system and some ancilla A, chosen such that they do not increase under the action of local maps, $\mathcal{E}\otimes1\hspace{-4pt}1_{\rm A}$ . Let us analyze three of them.

5.2.1. Entanglement.

From an operational point of view, entanglement can be defined as those correlations between different quantum systems which cannot be generated by local operations and classical communication (LOCC) procedures [214]. Thus, entanglement turns out to be a resource to perform tasks which cannot be done just by LOCC.

The degree of entanglement of a quantum state may be assessed by the so-called entanglement measures. These must fulfill a set of axioms in order to account for the genuine properties present in the concept of entanglement [214–217]. One of these requirements is the monotonicity axiom, which asserts that the amount of entanglement cannot increase by the application of LOCC operations. Actually, the quantifiers of entanglement that fulfil this axiom but do not coincide with the entropy of entanglement for pure states are simply called entanglement monotones. Since local operations are a particular example of LOCC, if some entanglement measure (or monotone) increases under a local map $\mathcal{E}\otimes1\hspace{-4pt}1$ , $\mathcal{E}$ cannot be completely positive.

Thus, consider a system S evolving according to some dynamical map $\mathcal{E}_{(t,t_0)}$ . We will study the evolution of an entangled state ρSA between S and some static ancillary system A, $\rho_{\rm SA}(t)=[\mathcal{E}_{(t,t_0)}\otimes 1\hspace{-4pt}1](\rho_{\rm SA})$ . Then, an increment in the amount of entanglement of ρSA(t) witnesses non-Markovianity. More specifically, consider initially a maximally entangled state ρSA = |Φ〉〈Φ|, $|\Phi\rangle=\frac{1}{\sqrt{d}}\sum_{n=0}^{d-1}|n\rangle_{\rm S} |n\rangle_{\rm A}$ . Provided that E is an entanglement measure (or monotone), the positive quantity

Equation (148)

is different from zero only if $\mathcal{E}_{(t,t_0)}$ is non-Markovian in the interval (t0, t1). Here ΔE := E[ρSA(t1)] − E[ρSA(t0)].

The use of the entanglement to witness non-Markovianity was first proposed in [106], where the expression (148) was suggested. This proposal has been theoretically addressed for cases of qubits coupled to bosonic environments [113, 117, 134, 144, 218, 219], for a damped harmonic oscillator [106, 192, 193], and for random unitary dynamics and classical noise models [102, 220]. Experimentally, this witness has been analyzed in [170, 221].

In addition, a link between the generation of entanglement by non-Markovian dynamics and the destruction of accessible information [222] has been established in [223]. Also, there is a relation between the dynamical recovering of quantum coherence by applying local operations and entanglement generation—see [176]. Further connections between entanglement and non-Markovianity can be found in [142, 224].

5.2.2. Quantum mutual information

The total amount of correlations (both classical and quantum) as measured by the quantum mutual information is another witness of non-Markovianity [225]. This quantity is defined as

Equation (149): $I(\rho_{\rm SA}):=S(\rho_{\rm S})+S(\rho_{\rm A})-S(\rho_{\rm SA}),$

where S(ρ) := −Tr(ρ log ρ) is the von Neumann entropy and ρS,A = TrA,S(ρSA). This expression can be rewritten as a relative entropy,

Equation (150): $I(\rho_{\rm SA})=S(\rho_{\rm SA}\|\rho_{\rm S}\otimes\rho_{\rm A}).$

Thus, if we apply a local operation, we have

Equation (151): $I[(\mathcal{E}\otimes1\hspace{-4pt}1)\rho_{\rm SA}]=S[(\mathcal{E}\otimes1\hspace{-4pt}1)\rho_{\rm SA}\,\|\,\mathcal{E}(\rho_{\rm S})\otimes\rho_{\rm A}]\leq S(\rho_{\rm SA}\|\rho_{\rm S}\otimes\rho_{\rm A})=I(\rho_{\rm SA}),$

where we have used that $\Tr_{\rm A}[(\mathcal{E}\otimes1\hspace{-4pt}1)\rho_{\rm SA}]=\mathcal{E}(\rho_{\rm S})$ and $\Tr_{\rm S}[(\mathcal{E}\otimes1\hspace{-4pt}1)\rho_{\rm SA}]=\rho_{\rm A}$ , and the monotonicity of the relative entropy, equation (117). Hence, the quantum mutual information is monotonic under local trace-preserving completely positive maps. For references where quantum mutual information has been used to study non-Markovianity, see [117, 118, 123, 164, 226, 227].

5.2.3. Quantum discord.

Finally, the quantum discord [228–230] can also be used to detect non-Markovian dynamics. Recall that the quantum discord between two quantum systems is a non-symmetric measure of correlations, so that it is not the same to measure the quantum discord between S and A with respect to A as with respect to S. For our purposes, we consider the quantum discord as measured by the ancillary system:

Equation (152)

where the minimization is taken over all POVM $\{\Pi_j^{\rm A}\}$ on A, and $\rho_{S|\Pi_j^{\rm A}}$ is the system state after the outcome corresponding to $\Pi_j^{\rm A}$ has been detected,

Equation (153): $\rho_{S|\Pi_j^{\rm A}}=\frac{\Tr_{\rm A}[(1\hspace{-4pt}1\otimes\Pi_j^{\rm A})\rho_{\rm SA}]}{\Tr[(1\hspace{-4pt}1\otimes\Pi_j^{\rm A})\rho_{\rm SA}]}.$

The quantum discord, equation (152), is monotonic under local maps on the system $\mathcal{E}\otimes1\hspace{-4pt}1$ (see [230])

Equation (154)

However, note that this equation does not hold for local maps on the ancilla $1\hspace{-4pt}1\otimes\mathcal{E}$ [231, 232]. Therefore, as long as we are certain that the ancilla does not evolve, we may use quantum discord to probe non-Markovianity.

The possible usefulness of the concept of quantum discord to witness non-Markovianity was first discussed in [233]. Other relations between Markovianity and quantum discord can be found in [234–236]. Note, however, that the existence of quantitative connections between quantum discord and completely positive maps remains controversial [237–243].

Example 5.7. For the sake of illustration, let us analyze the behavior of entanglement, quantum mutual information and quantum discord in the pure dephasing model, equation (106). Consider initially a maximally entangled state ρSA = |Φ〉〈Φ|, so that $\rho_{\rm SA}(t)=[\mathcal{E}_{(t,t_0)}\otimes 1\hspace{-4pt}1](|\Phi\rangle\langle\Phi|)$ is given by equation (140).

As an entanglement monotone, we may take the logarithmic negativity [214, 244], arriving at

Equation (155)

where the superscript tA denotes the partial transpose with respect to the ancillary system. Immediately, we obtain

Equation (156)

so entanglement can increase only for non-Markovian evolution γ(t) < 0.

In this simple model, the quantum mutual information and quantum discord are found to be

Equation (157)

Equation (158)

where Cea and Q are given in equations (141) and (142). The equality follows from the choice of the maximally entangled state ρSA = |Φ〉〈Φ| as the initial state. This is a purification of the maximally mixed state, which is the one that solves the optimization problem in equations (137) and (138) for this model as commented in example 5.5. For the quantum mutual information, the equality is obvious. For the quantum discord, note that the state (140) belongs to the subclass known as 'X-states', for which the optimization problem in equation (152) can be efficiently solved [230, 245]. Concretely in this case, we take the measurement of the σz observable to obtain $\rho_{S|\Pi_{\pm1}^{\rm A}}=|z_{\pm}\rangle\langle z_{\pm} |$ , where σz|z±〉 = ±|z±〉, and so the von Neumann entropy of any system state after the measurement vanishes.
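The behavior described in this example can be reproduced numerically. The sketch below assumes that the evolved state ρSA(t) keeps the dephased Bell form with a coherence factor r ∈ (0, 1] (an illustrative parametrization of the decay of the off-diagonal elements) and evaluates the logarithmic negativity and the quantum mutual information.

```python
import numpy as np

# Hedged sketch for example 5.7: logarithmic negativity and quantum mutual
# information of a dephased maximally entangled state.  We assume rho_SA(t)
# keeps the form (1/2)(|00><00| + |11><11|) + (r/2)(|00><11| + |11><00|),
# with r in (0, 1] an illustrative coherence factor.

def entropy(rho, tol=1e-12):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > tol]
    return float(-np.sum(lam * np.log2(lam)))

def partial_transpose_A(rho):                    # transpose on the second qubit
    r4 = rho.reshape(2, 2, 2, 2)                 # indices (i_S, i_A, j_S, j_A)
    return r4.transpose(0, 3, 2, 1).reshape(4, 4)

def log_negativity(rho):
    ev = np.linalg.eigvalsh(partial_transpose_A(rho))
    return float(np.log2(np.sum(np.abs(ev))))

def mutual_information(rho):
    r4 = rho.reshape(2, 2, 2, 2)
    rS = np.trace(r4, axis1=1, axis2=3)          # partial trace over A
    rA = np.trace(r4, axis1=0, axis2=2)          # partial trace over S
    return entropy(rS) + entropy(rA) - entropy(rho)

for r in (1.0, 0.6, 0.2):
    rho = np.zeros((4, 4))
    rho[0, 0] = rho[3, 3] = 0.5
    rho[0, 3] = rho[3, 0] = r / 2
    print(f"r={r}: E_N = {log_negativity(rho):.4f}, I = {mutual_information(rho):.4f}")
# Both quantities decrease as r shrinks; a revival of r (possible only when
# gamma(t) < 0) would make them grow again, witnessing non-Markovianity.
```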

6. Conclusion and outlook

In this work we have reviewed the topic of quantum non-Markovianity in the light of recent developments regarding its characterization and quantification. Quantum Markovian processes have been defined by taking the divisibility approach, which allows us to circumvent the problem of constructing a hierarchy of probabilities in quantum mechanics. We have also discussed the emergence of memorylessness properties within this definition and compared the divisibility approach with other suggested ways to define Markovianity in the quantum realm. We have surveyed recently proposed measures and witnesses of non-Markovianity, explaining their foundations, as well as their motivation and interpretation. Each measure and witness of non-Markovianity has its pros and cons, and the ultimate question of which of them is preferable in practice strongly depends on the context.

We hope that this article can be useful for future research in open quantum systems and their implications for other areas such as quantum information or statistical mechanics. Despite the tremendous quantity of new results on the characterization and quantification of non-Markovianity in recent years, there are still several important open questions that remain to be addressed. We conclude this review by providing a non-exhaustive list and some possible research directions.

  • Classification of completely positive non-Markovian master equations. This is probably the most general open problem regarding non-Markovian evolutions. For instance, in equation (87) we have written a generic master equation without taking care of the completely positive character of the dynamics that it generates. When the evolution is non-Markovian, the structure of the generators which lead to completely positive dynamics is largely unknown, although a few partial results have been obtained [90, 246, 247]. The problem basically rests upon the difficult characterization of the generators of dynamical maps $\{\mathcal{E}_{(t_2,t_1)},t_2\geq t_1\geq t_0\}$ under the weak assumption of complete positivity just for the instants t2 ⩾ t1 = t0, and not for any t2 ⩾ t1 ⩾ t0 [26].
  • Computation of some measures of non-Markovianity. Despite their well-grounded physical motivation, it would be desirable to provide efficient ways to compute some of the proposed measures of non-Markovianity, for instance, the geometric degree of non-Markovianity, $\mathscr{D}_{\rm NM (g)}^I$ , equation (59). Similarly, the measure $\mathcal{N}_{\rm H}^I$ , equation (63), has been calculated just for unbiased problems or isolated cases without solving the complete optimization problem.
  • Performance of witnesses. Some questions may be posed regarding the performance of witnesses of non-Markovianity. For example, which kind of witnesses, be it the ones based on monotonicity or the ones based on local monotonicity, is more sensitive to non-Markovian dynamics? Moreover, which of them is more efficient to detect non-Markovian dynamics? A recent study with some partial results on this issue is [118].
  • Witnesses of non-Markovianity without resorting to full-state tomography. A question of practical interest is to formulate ways to probe non-Markovianity avoiding full-state (or process) tomography. For example, if we manage to find good enough lower and upper bounds to properties like trace distance [14] or entanglement [248] in terms of simple measurements, we would be able to detect its non-monotonic behavior without resorting to expensive tomographic procedures.
  • Relation between different measures of non-Markovianity. Another fundamental question is to elucidate whether different measures of non-Markovianity induce the same order. Probably the answer is negative, but more progress has to be made along this line.
  • Non-Markovianity as a resource theory. Related to the previous point is the possible formulation of a resource theory for non-Markovianity. Similarly to other resource theories [214–217, 249–255], we may wonder whether non-Markovianity can be seen as a resource to perform tasks that cannot be accomplished solely by Markovian evolutions. Then, an order relation follows, i.e. some dynamics $\mathcal{E}^{(1)}_{(t,t_0)}$ has a smaller amount of non-Markovianity than another dynamics $\mathcal{E}^{(2)}_{(t,t_0)}$ if $\mathcal{E}^{(1)}_{(t,t_0)}$ can be constructed from $\mathcal{E}^{(2)}_{(t,t_0)}$ and Markovian evolutions. This approach also allows us to introduce the notion of maximally non-Markovian evolutions, which would be the ones that cannot be generated in terms of other non-Markovian maps and Markovian evolutions. Would these maximally non-Markovian evolutions be the ones defined in section 4.5?
  • Non-Markovianity and other properties. It would be very relevant to find systems where the presence of non-Markovianity is associated with other notable phenomena. For instance, some quantitative relations between non-Markovian effects and criticality and phase transitions [175, 256–258], the Loschmidt echo [174, 175, 256], symmetry breaking [166], and the Zeno and anti-Zeno effects [150] have already been described.
  • Potential applications of non-Markovianity. As a final question, we may wonder 'what is a non-Markovian process good for in practice?'. There are already studies showing its usefulness to prepare steady entangled states [122], to enhance the achievable resolution in quantum metrology [259, 260], or to assist certain tasks in quantum information and computation [208, 261, 262]. However, more research in this direction is required for the formulation of quantitative results [117].

Definitely, this list can be extended with other open questions. However, we think the enumerated points are representative enough to hopefully stimulate the readers into addressing some of these problems and to shed further light on this remarkable phenomenon.

Acknowledgments

A R acknowledges J M R Parrondo for discussions about the classical definition of Markovianity, and L Accardi and K B Sinha for their detailed explanations about the algebraic definition of stochastic quantum processes. Moreover, it has been a pleasure to share ideas about quantum non-Markovianity with D Chruściński, A Kossakowski, M M Wolf, F Ticozzi, M Paternostro and M J W Hall. We acknowledge financial support from Spanish MINECO grant FIS2012-33152, CAM research consortium QUITEMAD S2009-ESP-1594, UCM-BS grant GICC-910758, EU STREP project PAPETS, the EU Integrating project SQIS, the ERC Synergy grant BioQ and an Alexander von Humboldt Professorship.

Footnotes

  • Andréi Andréyevich Márkov (Ryazan, 14 June 1856–Petrograd, 20 July 1922) was a Russian mathematician known for his contributions to number theory, analysis, and probability theory. For a reference about his life and work see [8].

  • A permutation or commutation matrix (cf. [108, 109]) is a matrix UP with the property that UP(A ⊗ B)UP = B ⊗ A. They are also called unitary swap matrices in quantum information literature. In order to compute them, we may use the relation UPvec(A) = vec(At). For the case of $U_{2\leftrightarrow 3}[{\boldsymbol E}_{(t+\epsilon,t)}\otimes\mathbb{I}]U_{2\leftrightarrow 3}$ , it is tacitly assumed that ${\boldsymbol E}_{(t+\epsilon,t)}\otimes\mathbb{I}$ acts in a tensor product of four spaces with the same dimension d, $\mathcal{H}_1\otimes\mathcal{H}_2\otimes \mathcal{H}_3\otimes \mathcal{H}_4$ . Then U2↔3 denotes the permutation matrix interchanging the second and third subspace, i.e. $U_{2\leftrightarrow 3}=\mathbb{I}_d\otimes U_{\rm P}\otimes\mathbb{I}_d$ .

  • Note that in [14] the fidelity is defined without the square, $F(\rho_1,\rho_2)=\max_{|\Psi_1\rangle,|\Psi_2\rangle}|\langle\Psi_1|\Psi_2\rangle|$ .

  • This is sometimes called the quantum data-processing inequality; for a proof see [14].
