Letter

A framework for the direct evaluation of large deviations in non-Markovian processes


Published 4 November 2016 © 2016 IOP Publishing Ltd
Citation: Massimo Cavallaro and Rosemary J Harris 2016 J. Phys. A: Math. Theor. 49 47LT02. DOI: 10.1088/1751-8113/49/47/47LT02


Abstract

We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and efficiently evaluate large deviation functions associated with time-extensive observables. This extends the 'cloning' procedure of Giardinà et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means.


1. Introduction

The theory of large deviations has been widely applied to equilibrium statistical mechanics [1]. Far from equilibrium, we can still deploy the same formalism, but targeting trajectories in space–time rather than static configurations [2]. Instead of asking for the probability of observing a configuration with a given energy, we require the probability of a trajectory of duration T with a value J of a time-extensive observable. For this purpose, the details of the time evolution take on a major role.

When the stochastic dynamics of a model system are Markovian, i.e., memoryless, we can specify the rules for its evolution in time by means of the constant rates ${\beta }_{{x}_{j},{x}_{i}}$ of transitions from configuration xi to configuration xj. The full set of rates encodes inter-event times with exponential waiting-time distributions, which indeed possess the memoryless property. However, to model real-world systems, such a simplified description may not be appropriate. In fact, non-exponential waiting times seem to be relevant in many contexts, see, e.g., [3–7].

Large deviation functionals have been computed in selected non-Markovian systems (e.g., assuming the so-called temporal additivity principle [8], or by defining hidden variables [9]) and have revealed structure hidden in the stationary state. Analytical progress is difficult and simulations are necessary to systematically explore more realistic models with memory. However, to our knowledge, numerical schemes able to efficiently probe large deviation functionals have been discussed in general only for memoryless systems [10–14]. In this letter, we fill this gap in the literature and provide a general numerical method to generate trajectories corresponding to arbitrarily rare values of J, based on the 'cloning' procedure of [10]. The text is organised as follows. In section 2, we set up the general formalism, while in section 3 we present the simulation scheme for non-Markovian systems and the numerical method to evaluate large deviation functionals. Section 4 deals with the special case of the semi-Markov process, where the formalism has a particularly lucid interpretation in terms of the generalised master equation (GME). In section 5 we test the method in some examples of increasing complexity, where the large deviation functions can be computed exactly. We conclude with a discussion in section 6.

2. Thermodynamics of trajectories

A trajectory or history of a stochastic process starting at time t0 and ending at time t, on a configuration space ${ \mathcal S }$, is defined as the sequence

Equation (1)

$w(t)=\{({x}_{0},{t}_{0}),({x}_{1},{t}_{1}),\ldots ,({x}_{n},{t}_{n})\}$

where ${x}_{0}\in { \mathcal S }$ is the initial configuration, ${t}_{0}\leqslant {t}_{1}\leqslant \ldots \,\leqslant \,t$, and ti (for $i=1,\ldots ,n$) denotes the instant at which the system jumps from configuration ${x}_{i-1}$ to xi. Specifically, we are interested in the probability density $\varrho [w(t)]$ that a trajectory $w(t)$ is observed. Hereafter, we will consider a discrete configuration space ${ \mathcal S }$, although most of the arguments presented remain valid in the continuous case. In general, we can separate $\varrho [w(t)]$ into a product over the contributions of each jump, multiplied by the probability ${P}_{{x}_{0}}({t}_{0})$ that the configuration at t0 is x0, i.e.

Equation (2)

$\varrho [w(t)]={\phi }_{{x}_{n}}[t-{t}_{n};w({t}_{n})]\,\prod _{i=1}^{n}{\psi }_{{x}_{i},{x}_{i-1}}[{t}_{i}-{t}_{i-1};w({t}_{i-1})]\,{P}_{{x}_{0}}({t}_{0})$

where the generic factor ${\psi }_{{x}_{n},{x}_{n-1}}[{t}_{n}-{t}_{n-1};w({t}_{n-1})]$ is the waiting-time probability density (WTD) that the transition from ${x}_{n-1}$ to xn occurs during the infinitesimal interval $[{t}_{n},{t}_{n}+{\rm{d}}t)$ given the history w(tn−1); it obeys the normalisation ${\sum }_{{x}_{n}}{\int }_{{t}_{n-1}}^{\infty }{\psi }_{{x}_{n},{x}_{n-1}}[{t}_{n}-{t}_{n-1};w({t}_{n-1})]{\rm{d}}{t}_{n}=1$. Also, ${\phi }_{{x}_{n}}[t-{t}_{n};w({t}_{n})]={\sum }_{{x}_{n+1}}{\int }_{t}^{\infty }{\psi }_{{x}_{n+1},{x}_{n}}[{t}_{n+1}-{t}_{n};w({t}_{n})]{\rm{d}}{t}_{n+1}$ is the survival probability of the configuration xn for the interval $[{t}_{n},t)$. The special case without dependence on the history before the last jump will be considered in section 4. The probability that the system has configuration $x\in { \mathcal S }$ at $t\gt {t}_{0}$ is

Equation (3)

We characterise a non-equilibrium system by means of time-extensive functionals $J[w(t)]$ of the trajectory $w(t)$, that can be written as the sum ${\sum }_{i=0}^{n-1}{\theta }_{{x}_{i+1},{x}_{i}}$ of elementary contributions corresponding to configuration changes. Alternatively it is possible to consider 'static' contributions ${\theta }_{{x}_{i-1}}({t}_{i}-{t}_{i-1})$. Functionals of these types may represent physical observables integrated over the observation time T = t − t0, e.g., the dynamical activity in a glassy system [15], the moles of metabolites produced in a biochemical pathway [16, 17], the customers served in a queuing network [18], the flow in an interacting particle system [19], or certain quantities in stochastic thermodynamics [20, 21]. Hereafter, we will refer to $J[w(t)]$ as the time-integrated current and to its empirical average $J[w(t)]/T=j(t)$ simply as current.
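As a concrete (and entirely illustrative) sketch of how such a functional is accumulated, the following snippet computes $J[w(t)]$ as a sum of elementary contributions ${\theta }_{{x}_{i+1},{x}_{i}}$ over the jumps of a stored trajectory; the two-state labels and θ values are hypothetical.

```python
def time_integrated_current(trajectory, theta):
    """trajectory: list of (time, configuration) pairs in temporal order;
    theta: dict mapping (x_from, x_to) to the elementary contribution
    of that jump."""
    J = 0.0
    for (_, x_prev), (_, x_next) in zip(trajectory, trajectory[1:]):
        J += theta[(x_prev, x_next)]
    return J

# Hypothetical two-state example: a jump 0 -> 1 carries +1 (a particle
# enters), a jump 1 -> 0 carries -1 (a particle leaves).
theta = {(0, 1): +1, (1, 0): -1}
w = [(0.0, 0), (0.7, 1), (1.9, 0), (2.4, 1)]
J = time_integrated_current(w, theta)
T = w[-1][0] - w[0][0]
print(J, J / T)   # time-integrated current J and empirical current j = J/T
```
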

The latter observable still fluctuates in time, although it is bound to converge to its ensemble average $\langle j\rangle $ in the limit $t\to \infty $ (with t0 fixed and finite). The consequent computational difficulty in obtaining the probability ${ \mathcal P }(j,t)$ of observing a given value of $j(t)$ can be alleviated by introducing the 'canonical' density ${{\rm{e}}}^{-{sJ}[w(t)]}\varrho [w(t)]$, the 'partition function'

Equation (4)

$Z(s,t)=\int {{\rm{e}}}^{-{sJ}[w(t)]}\,\varrho [w(t)]\,{ \mathcal D }w(t)$

and the scaled cumulant generating function (SCGF)

Equation (5)

where s is an intensive conjugate field. Assuming a large deviation principle, i.e.

Equation (6)

the rate function $\hat{e}(j)$ can be obtained by a Legendre–Fenchel transform

Equation (7)

when the SCGF is differentiable [22]. Equation (4) can be written as

Equation (8)

where

Equation (9)

${\tilde{\psi }}_{{x}_{n},{x}_{n-1}}[{t}_{n}-{t}_{n-1};w({t}_{n-1})]={{\rm{e}}}^{-s{\theta }_{{x}_{n},{x}_{n-1}}}\,{\psi }_{{x}_{n},{x}_{n-1}}[{t}_{n}-{t}_{n-1};w({t}_{n-1})]$

can be seen as the 'biased' WTD of a stochastic dynamics that does not conserve total probability. This immediately suggests that we can access the SCGF by computing the exponential rate of divergence of an ensemble of trajectories.
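Before introducing the cloning scheme, it is instructive to see why a naive estimate of the 'partition function' fails at long times. The sketch below (illustrative, not from the paper) reweights unbiased trajectories of a Poisson counting process of rate λ by ${{\rm{e}}}^{-{sJ}}$, where J counts the jumps in $[0,T]$, and compares $\mathrm{ln}\,Z(s,T)/T$ with the exact value $\lambda ({{\rm{e}}}^{-s}-1)$. The variance of this estimator grows exponentially with T, which is precisely what the cloning approach of section 3 is designed to mitigate.

```python
import math, random

random.seed(1)

# Naive reweighting estimate of Z(s, T) = <exp(-s J)> for a Poisson counting
# process of rate lam (J = number of jumps in [0, T]); the exact growth rate
# of Z is lam*(exp(-s) - 1). Parameter values are illustrative.
lam, T, s = 2.0, 5.0, 0.5
n_samples = 100_000

acc = 0.0
for _ in range(n_samples):
    J, t = 0, random.expovariate(lam)
    while t < T:
        J += 1
        t += random.expovariate(lam)
    acc += math.exp(-s * J)      # a posteriori weight of the trajectory

scgf_est = math.log(acc / n_samples) / T
scgf_exact = lam * (math.exp(-s) - 1.0)
print(scgf_est, scgf_exact)
```
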

3. A numerical approach

3.1. Non-Markovian stochastic simulation

The WTD can be expressed in terms of a time-dependent rate or hazard ${\beta }_{{x}_{n},{x}_{n-1}}[{t}_{n}-{t}_{n-1};w({t}_{n-1})]$, which is the probability density that there is a jump from ${x}_{n-1}$ to xn in $[{t}_{n},{t}_{n}+{\rm{d}}t)$, conditioned on having no transitions during the interval $[{t}_{n-1},{t}_{n})$,

Equation (10)

${\psi }_{{x}_{n},{x}_{n-1}}[\tau ;w({t}_{n-1})]={\beta }_{{x}_{n},{x}_{n-1}}[\tau ;w({t}_{n-1})]\,{\phi }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]$

For brevity we define $\tau ={t}_{n}-{t}_{n-1}$, which is the value of the age, i.e. the time elapsed since the last jump, at which the next jump takes place (see figure 1). Roughly speaking, the hazard ${\beta }_{{x}_{n},{x}_{n-1}}[\tau ;w({t}_{n-1})]$ is the likelihood of having an almost immediate transition from a state ${x}_{n-1}$ known to be of age τ, to a state xn, and, crucially, can also depend on the history $w({t}_{n-1})$. From equation (10), summing over ${x}_{n}\in { \mathcal S }$, and defining ${\beta }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]$ $=\,{\sum }_{{x}_{n}}{\beta }_{{x}_{n},{x}_{n-1}}[\tau ;w({t}_{n-1})]$ and ${\psi }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]$ $=\,{\sum }_{{x}_{n}}{\psi }_{{x}_{n},{x}_{n-1}}[\tau ;w({t}_{n-1})]$, we get

Equation (11)

${\psi }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]={\beta }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]\,{\phi }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]$

The age-dependent sum ${\beta }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]$ is also referred to as the escape rate from ${x}_{n-1}$ and, using equation (11), can be written as a logarithmic derivative

Equation (12)

${\beta }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]=-\frac{{\rm{d}}}{{\rm{d}}\tau }\,\mathrm{ln}\,{\phi }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]$

Integrating equation (12) with initial condition ${\phi }_{{x}_{n-1}}[0;w({t}_{n-1})]=1$ gives:

Equation (13)

${\phi }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]=\exp \left(-{\int }_{0}^{\tau }{\beta }_{{x}_{n-1}}[\tau ^{\prime} ;w({t}_{n-1})]\,{\rm{d}}\tau ^{\prime} \right)$

Equation (14)

${\psi }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]={\beta }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]\,\exp \left(-{\int }_{0}^{\tau }{\beta }_{{x}_{n-1}}[\tau ^{\prime} ;w({t}_{n-1})]\,{\rm{d}}\tau ^{\prime} \right)$

These let us cast equation (10) in the more convenient form

Equation (15)

${\psi }_{{x}_{n},{x}_{n-1}}[\tau ;w({t}_{n-1})]={p}_{{x}_{n},{x}_{n-1}}[\tau ;w({t}_{n-1})]\,{\psi }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]$

where

Equation (16)

${p}_{{x}_{n},{x}_{n-1}}[\tau ;w({t}_{n-1})]=\frac{{\beta }_{{x}_{n},{x}_{n-1}}[\tau ;w({t}_{n-1})]}{{\beta }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]}$

is the probability that the system jumps into the state xn, given that it escapes the state ${x}_{n-1}$ at age τ. It is important to notice that the normalisation conditions ${\sum }_{{x}_{n}}{p}_{{x}_{n},{x}_{n-1}}[\tau ;w({t}_{n-1})]=1$ and ${\int }_{0}^{\infty }{\psi }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]{\rm{d}}\tau =1$ are satisfied. Hence, we can sample a random waiting time τ, according to the density ${\psi }_{{x}_{n-1}}[\tau ;w({t}_{n-1})]$ and, after that, a random arrival configuration xn, according to the probability mass ${p}_{{x}_{n},{x}_{n-1}}[\tau ;w({t}_{n-1})]$. This suggests a standard Monte Carlo algorithm for the generation of a trajectory (1):

Figure 1.

Figure 1. Representation of a portion of a trajectory. The time elapsed since the last jump is called the age. The conditional probability of having a configuration xn at an instant ${t}_{n}\gt {t}_{n-1}$ depends on the age, as well as on events which happened during the history (e.g., the one marked by the red star).


  • (1)  
    Initialise the system to a configuration x0 and a time t0. Set a counter to n = 1.
  • (2)  
    Draw a value τ according to the density (11) and update the time to ${t}_{n}={t}_{n-1}+\tau $.
  • (3)  
    Update the system configuration to xn, with probability given by (16).
  • (4)  
    Update n to $n+1$ and repeat from (2) until tn reaches the desired simulation time.
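The loop below is a minimal sketch of steps (1)–(4) for a two-state semi-Markov example in which the waiting time in configuration x is Gamma distributed and the arrival configuration is then drawn from a probability mass (here age-independent and forcing alternation between the two states); all parameter values are hypothetical.

```python
import random

random.seed(0)

# Sketch of the simulation loop of section 3.1 (illustrative parameters).
shape = {0: 2.0, 1: 0.5}                  # Gamma shape k_x of each state
scale = {0: 1.0, 1: 2.0}                  # Gamma scale 1/lambda_x
p_next = {0: [(1, 1.0)], 1: [(0, 1.0)]}   # (arrival state, probability)

def draw_next(x):
    tau = random.gammavariate(shape[x], scale[x])   # step (2): waiting time
    r, acc = random.random(), 0.0
    for y, prob in p_next[x]:                       # step (3): arrival state
        acc += prob
        if r < acc:
            return tau, y
    return tau, y

t, x = 0.0, 0
trajectory = [(0.0, 0)]
while t < 100.0:                                    # step (4): iterate
    tau, x = draw_next(x)
    t += tau
    trajectory.append((t, x))

print(len(trajectory), trajectory[-1])
```
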

3.2. The cloning step

We now need to take into account the effect of the factor ${{\rm{e}}}^{-s{\theta }_{{x}_{n},{x}_{n-1}}}$ on the dynamics, which is to increment (if ${\theta }_{{x}_{n},{x}_{n-1}}\lt 0$) or decrement (if ${\theta }_{{x}_{n},{x}_{n-1}}\gt 0$) the 'weight' of a trajectory, within an ensemble. This can be implemented by means of the so-called 'cloning' method, which consists of assigning to each trajectory of the ensemble a population of identical clones proportional to its weight. As reviewed, e.g., in [23, 24], the idea is not new. It seems to have been born in the context of quantum physics and can be traced back to Enrico Fermi [25]. Notably, cloning was proposed as a general scheme for the evaluation of large deviation functionals of non-equilibrium Markov processes in [10] (with further continuous-time implementation in [11]) and has also been extensively applied within equilibrium statistical physics [26–28]. Here, we refine and extend this idea for the case of non-Markovian processes. One of the devices used in [10, 11] is to define modified transition probabilities—valid only under the Markovian assumption—and a modified cloning factor, encoding the contraction or expansion of the trajectory weight. In fact, it is implicit in the original work that the redefinition of such quantities is unnecessary; in some cases it may also be inconvenient, see section 4. An arguably more natural choice, especially for non-Markovian dynamics, is to focus on the WTDs. Specifically, equations (8) and (9) suggest the following procedure:

  • (1)  
    Set up an ensemble of N clones and initialise each with a given time t0, a random configuration x0, and a counter n = 0. Set a variable C to zero. For each clone, draw a time τ until the next jump from the density ${\psi }_{{x}_{0}}[\tau ;w({t}_{0})]$, and then choose the clone with the smallest value of t = t0 + τ.
  • (2)  
    For the chosen clone, update n to $n+1$ and then the configuration from ${x}_{n-1}$ to xn according to the probability mass ${p}_{{x}_{n},{x}_{n-1}}[\tau ;w(t-\tau )]$.
  • (3)  
    Generate a new waiting time τ for the updated clone according to ${\psi }_{{x}_{n-1}}[\tau ;w(t)]$ and increment its value of t to $t+\tau $.
  • (4)  
    Cloning step. Compute $y=\lfloor {{\rm{e}}}^{-s{\theta }_{{x}_{n},{x}_{n-1}}}+u\rfloor $, where u is drawn from a uniform distribution on $[0,1)$.
    • (1)  
      If y = 0, prune the current clone. Then replace it with another one, uniformly chosen among the remaining $N-1$.
    • (2)  
      If $y\gt 0$, produce y copies of the current clone. Then, prune a number y of elements, uniformly chosen among the existing N + y.
  • (5)  
    Increment C to $C+\mathrm{ln}[(N+{{\rm{e}}}^{-s{\theta }_{{x}_{n},{x}_{n-1}}}-1)/N]$. Choose the clone with the smallest t, and repeat from (2) until $t-{t}_{0}$ for the chosen clone reaches the desired simulation time T.

The SCGF is finally recovered as −C/T for large T. The net effect of step (4) is to maintain a constant population of samples whose mean current does not decay to $\langle j\rangle $.
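A minimal sketch of the cloning loop follows, for the simplest conceivable case: a one-state Poisson process of rate λ in which every jump carries θ = +1. The accumulated variable C then satisfies $C/T\approx \lambda ({{\rm{e}}}^{-s}-1)={\mathrm{lim}}_{T\to \infty }\mathrm{ln}\langle {{\rm{e}}}^{-{sJ}}\rangle /T$, which equals the SCGF up to the sign convention adopted in the text; the parameter values are illustrative.

```python
import math, random

random.seed(42)

# Minimal cloning sketch: each clone stores only the time of its next jump.
# At every event the clone with the smallest time jumps, is cloned/pruned
# with weight w = e^{-s}, and C accumulates ln[(N + w - 1)/N] as in step (5).
lam, s, N, T = 1.0, 0.8, 200, 30.0
clock = [random.expovariate(lam) for _ in range(N)]
C = 0.0

while True:
    i = min(range(N), key=clock.__getitem__)   # clone with the smallest t
    if clock[i] > T:
        break
    clock[i] += random.expovariate(lam)        # schedule its next jump
    w = math.exp(-s)                           # e^{-s*theta} with theta = +1
    y = int(w + random.random())               # floor(w + u)
    if y == 0:
        j = random.randrange(N - 1)            # prune, then copy a survivor
        j += (j >= i)
        clock[i] = clock[j]
    else:
        pool = clock + [clock[i]] * y          # y extra copies of clone i
        for _ in range(y):
            pool.pop(random.randrange(len(pool)))
        clock = pool                           # population back to size N
    C += math.log((N + w - 1.0) / N)

print(C / T, lam * (math.exp(-s) - 1.0))
```
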

4. Semi-Markov systems

A common framework for modelling systems with finite-range memory is the formalism of semi-Markov processes, also referred to as continuous-time random walks (CTRWs) in the physics literature. CTRWs were introduced to model transport on lattices [29, 30] and later used in many other contexts, e.g., to describe quantum dots [31], temporal networks [32], animal movements [33, 34], biochemical reactions [35], and single-molecule kinetics [36, 37]. In such systems, the probability of having a jump depends only on the departure state and on the age, meaning that the memory is lost after each jump and the dependence on the previous history is removed. Under this assumption, the probability density of observing a trajectory (1) is

Equation (17)

$\varrho [w(t)]={\phi }_{{x}_{n}}(t-{t}_{n})\left[\prod _{i=2}^{n}{\psi }_{{x}_{i},{x}_{i-1}}({t}_{i}-{t}_{i-1})\right]\psi {^{\prime} }_{{x}_{1},{x}_{0}}({t}_{1}-{t}_{0})\,{P}_{{x}_{0}}({t}_{0})$

where the primed WTD can differ from the others (see footnote 3). When ${p}_{{x}_{n},{x}_{n-1}}[\tau ;w({t}_{n-1})]$ has no dependence on τ (as well as, of course, no dependence on $w({t}_{n-1})$, due to the semi-Markov dynamics), we can write ${\psi }_{{x}_{n},{x}_{n-1}}(\tau )={p}_{{x}_{n},{x}_{n-1}}{\psi }_{{x}_{n-1}}(\tau )$ and the process is said to satisfy direction-time independence (DTI) (see footnote 4).

In a slight shift of notation we now use xi and xj as configuration labels. The probability ${P}_{{x}_{i}}(t)$ for the system to be in the configuration xi at time t, in a semi-Markov process, follows a convenient differential equation, which is referred to as the GME:

Equation (18)

$\frac{{\rm{d}}{P}_{{x}_{i}}(t)}{{\rm{d}}t}=\sum _{{x}_{j}}{\int }_{{t}_{0}}^{t}\left[{K}_{{x}_{i},{x}_{j}}(t-\tau ){P}_{{x}_{j}}(\tau )-{K}_{{x}_{j},{x}_{i}}(t-\tau ){P}_{{x}_{i}}(\tau )\right]{\rm{d}}\tau +{I}_{{x}_{i}}(t-{t}_{0})$

where ${K}_{{x}_{i},{x}_{j}}(t-\tau )$ is the memory kernel, taking into account the configuration at time $t-\tau $, and ${I}_{{x}_{i}}(t-{t}_{0})$ depends on the primed WTDs (see details in the online supplementary material). The memory kernel is defined through an equation similar to (10), but in the Laplace domain

Equation (19)

${\overline{K}}_{{x}_{i},{x}_{j}}(\nu )=\frac{{\overline{\psi }}_{{x}_{i},{x}_{j}}(\nu )}{{\overline{\phi }}_{{x}_{j}}(\nu )}$

with $\overline{f}(\nu )={\int }_{0}^{\infty }{{\rm{e}}}^{-\nu T}f(T){\rm{d}}T$.

The statistics of time-extensive variables in semi-Markov processes have been studied in [31, 38, 40] and compared to memoryless processes. In systems described by a standard master equation, one strategy is to analyse a process that obeys a modified rate equation, obtained by replacing the time-independent rates ${\beta }_{{x}_{i},{x}_{j}}$ with the products ${{\rm{e}}}^{-s{\theta }_{{x}_{i},{x}_{j}}}{\beta }_{{x}_{i},{x}_{j}}$, which are referred to as 'biased' rates. This is particularly simple when ${\theta }_{{x}_{i},{x}_{j}}$ can only take values $-1,0,1$, i.e., at each step, the total current varies, at most, by one unit. In semi-Markov systems it is possible to investigate the statistics of J in a similar, but more general, way. Instead of the standard master equation, we deploy the GME (18). The probability ${P}_{({x}_{i},J)}(t)$ of having a configuration xi with total current J at time t, under the constraint that the current can only grow or decrease by one unit at each jump, obeys the following GME:

Equation (20)

We now make the assumption that the memory kernels are independent of the total integrated current J (only depending on the current increment), i.e.

Equation (21)

where $c=-1,0,1$. The system is diagonalised with respect to the current subspace by means of the discrete Laplace transform

Equation (22)

${\tilde{P}}_{{x}_{i}}(s,t)=\sum _{J}{{\rm{e}}}^{-{sJ}}{P}_{({x}_{i},J)}(t)$

and is then equivalent to

Equation (23)

which can be represented in a more compact form as

Equation (24)

$\frac{{\rm{d}}}{{\rm{d}}t}|\tilde{P}(t)\rangle =\hat{{ \mathcal L }}(t)\,|\tilde{P}(t)\rangle $

where $\hat{{ \mathcal L }}(t)$ is a linear s-dependent integral operator and $| \tilde{P}(t)\rangle $ has components ${\tilde{P}}_{{x}_{i}}(s,t)$. The limit as $t\to \infty $ of $\mathrm{ln}\langle 1| \tilde{P}(t)\rangle /t$ (where $\langle 1| $ is a row vector with all entries equal to one) is the SCGF of J. Clearly, equation (24) does not conserve the product $\langle 1| \tilde{P}(t)\rangle $, except for s = 0 when ${\sum }_{{x}_{i}}{P}_{{x}_{i}}(t)=1$. The dynamics described by equation (23) is equivalent to the dynamics described by the GME (20), with the memory kernels, corresponding to jumps that contribute a unit c in the total current, multiplied by a factor ${{\rm{e}}}^{-{cs}}$. From linearity, it follows that the Laplace-transformed kernels are

Equation (25)

This confirms that the modified dynamics can be simulated biasing the WTDs ${\psi }_{{x}_{i},{x}_{j},c}(t)$, i.e., multiplying them by ${{\rm{e}}}^{-{cs}}$.

The Markovian case is recovered for ${K}_{{x}_{i},{x}_{j},c}(t)={\beta }_{{x}_{i},{x}_{j},c}\delta (t)$. Using this kernel, equations (23) and (24) can be written as

Equation (26)

$\frac{{\rm{d}}}{{\rm{d}}t}|\tilde{P}(t)\rangle =\tilde{G}\,|\tilde{P}(t)\rangle $

where $\tilde{G}$ is the s-modified stochastic generator of the Markov process with time independent rates ${\beta }_{{x}_{i},{x}_{j}}$ and components

Equation (27)

${\tilde{G}}_{{x}_{i},{x}_{j}}=\sum _{c}{{\rm{e}}}^{-{cs}}{\beta }_{{x}_{i},{x}_{j},c},\qquad {x}_{i}\ne {x}_{j}$

Equation (28)

${\tilde{G}}_{{x}_{i},{x}_{i}}=\sum _{c}{{\rm{e}}}^{-{cs}}{\beta }_{{x}_{i},{x}_{i},c}-{\beta }_{{x}_{i}}$

where ${\beta }_{{x}_{i}}={\sum }_{c,{x}_{j}\ne {x}_{i}}{\beta }_{{x}_{j},{x}_{i},c}$ is the rate of escape from xi (see footnote 5). This shows that biasing the rates is consistent with biasing the WTDs (see also some related discussions in [15]). However, from a numerical point of view, the latter choice remains convenient even for the Markovian case, as it avoids us having to define the modified transition probabilities of [11]. To see this, we consider the biased Markovian WTD

Equation (29)

${\tilde{\psi }}_{{x}_{i},{x}_{j},c}(\tau )={{\rm{e}}}^{-{cs}}\,{p}_{{x}_{i},{x}_{j},c}\,{\psi }_{{x}_{j}}(\tau )={{\rm{e}}}^{-{cs}}\,{\beta }_{{x}_{i},{x}_{j},c}\,{{\rm{e}}}^{-{\beta }_{{x}_{j}}\tau }$

which is the product of an exponential probability density ${\psi }_{{x}_{j}}(\tau )={\beta }_{{x}_{j}}\exp (-{\beta }_{{x}_{j}}\tau )$, a time-independent probability mass ${p}_{{x}_{i},{x}_{j},c}={\beta }_{{x}_{i},{x}_{j},c}/{\beta }_{{x}_{j}}$, and a simple cloning factor ${{\rm{e}}}^{-{cs}}$. These specify the two steps of the standard Doob–Gillespie algorithm for Markov processes [41], followed by a cloning step of weight ${{\rm{e}}}^{-{cs}}$. Another legitimate choice is to define ${\tilde{\beta }}_{{x}_{i},{x}_{j},c}={{\rm{e}}}^{-{cs}}{\beta }_{{x}_{i},{x}_{j},c}$, set ${\tilde{\beta }}_{{x}_{j}}={\sum }_{{x}_{i},c}{\tilde{\beta }}_{{x}_{i},{x}_{j},c}$, and write

Equation (30)

${\tilde{\psi }}_{{x}_{i},{x}_{j},c}(\tau )={\tilde{\beta }}_{{x}_{i},{x}_{j},c}\,{{\rm{e}}}^{-{\tilde{\beta }}_{{x}_{j}}\tau }\,{{\rm{e}}}^{\tau ({\tilde{\beta }}_{{x}_{j}}-{\beta }_{{x}_{j}})}$

With such an arrangement, we recognise the algorithm of [11], i.e., at each step, the configuration evolves according to a stochastic generator with rates ${\tilde{\beta }}_{{x}_{i},{x}_{j},c}$, and the ensemble is modified with the cloning factor $\exp [\tau ({\tilde{\beta }}_{{x}_{j}}-{\beta }_{{x}_{j}})]$. As the cloning factor here is exponential in time, during long intervals the relative number of new clones can be large. This can cause major finite-ensemble errors, which are shown to be important, e.g., in [9, 42]. Conversely, an implementation based on equation (29) seems to be one way to reduce (but not completely eliminate) such a problem.
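The equivalence between the two factorisations can be checked numerically. The snippet below (with hypothetical rates out of a configuration x0) verifies pointwise that the per-jump cloning factor ${{\rm{e}}}^{-{cs}}$ applied to the unbiased Markovian WTD agrees with the modified-rate form carrying the time-dependent factor $\exp [\tau ({\tilde{\beta }}_{{x}_{j}}-{\beta }_{{x}_{j}})]$.

```python
import math

# Pointwise check (hypothetical rates out of configuration 'x0') that the
# two writings of the biased Markovian WTD coincide: the per-jump factor
# e^{-c s} versus modified rates with the exponential-in-time factor.
s = 0.6
beta = {('x1', 'x0', +1): 0.8, ('x2', 'x0', 0): 0.3}
beta_j = sum(beta.values())                    # escape rate from x0
tbeta = {key: math.exp(-key[2] * s) * r for key, r in beta.items()}
tbeta_j = sum(tbeta.values())                  # modified escape rate

for tau in (0.1, 1.0, 3.7):
    for key, r in beta.items():
        lhs = math.exp(-key[2] * s) * r * math.exp(-beta_j * tau)
        rhs = tbeta[key] * math.exp(-tbeta_j * tau) \
              * math.exp(tau * (tbeta_j - beta_j))
        assert abs(lhs - rhs) < 1e-12
print("the two factorisations agree")
```
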

5. Examples

We now test our procedure against three non-Markovian models, whose exact large deviations are known from the literature or can be deduced from Markovian models.

5.1. Semi-Markov models for ion-channel gating with and without DTI

The current through an ion channel in a cellular membrane can be modelled with only two states, corresponding to the gate being singly occupied (x1) or empty (x0); an ion can enter or leave this channel via the left (L) or right (R) boundary and non-exponential waiting times lead to a complex behaviour [43]. Specifically, we denote by ${\psi }_{{x}_{1},{x}_{0},1}(\tau )$ (or ${\psi }_{{x}_{0},{x}_{1},-1}(\tau )$) the WTD for a particle entering (or leaving) through the boundary L, and by ${\psi }_{{x}_{1},{x}_{0},0}(\tau )$ (or ${\psi }_{{x}_{0},{x}_{1},0}(\tau )$) the corresponding density for the boundary R. The rightwards current is measured by a counter that increases (decreases) by one when a particle enters (leaves) the system through the boundary L. Its exact SCGF is obtained numerically in [38] as the leading pole of the time-Laplace transform of $Z(s,t)$, for the DTI-case ${\psi }_{{x}_{i},{x}_{j},c}(\tau )={p}_{{x}_{i},{x}_{j},c}{\psi }_{{x}_{j}}(\tau )$ with ${\sum }_{c=-1,0}{p}_{{x}_{0},{x}_{1},c}={\sum }_{c=0,1}{p}_{{x}_{1},{x}_{0},c}=1$, and the particular choice ${\psi }_{{x}_{j}}(\tau )=g(\tau ;{k}_{j},{\lambda }_{j})$, where

Equation (31)

$g(\tau ;k,\lambda )=\frac{{\lambda }^{k}{\tau }^{k-1}{{\rm{e}}}^{-\lambda \tau }}{{\rm{\Gamma }}(k)}$

with ${\rm{\Gamma }}(k)$ as the Gamma function; the Markovian case is recovered for k = 1. Notably, the cloning method of section 3 can be implemented for any WTD, as only a bias of ${{\rm{e}}}^{s}$, for ions leaving the channel leftwards, and a bias of ${{\rm{e}}}^{-s}$, for ions entering from left, are needed. Figure 2(a) shows that this method reproduces, within numerical accuracy, the solution given in [38].

Figure 2.

Figure 2. SCGF of current in ion channel. (a) DTI model with $({k}_{0},{\lambda }_{0},{k}_{1},{\lambda }_{1})=(0.1,0.01,1,1)$ and $({p}_{{x}_{1},{x}_{0},1},{p}_{{x}_{0},{x}_{1},0},{p}_{{x}_{0},{x}_{1},-1},{p}_{{x}_{1},{x}_{0},0})=(0.5,0.6,0.4,0.5);$ the cloning result is consistent with the solution given in [38]. (b) Non-DTI model with Markov representation and inverse scales $({\lambda }_{0}^{L},{\lambda }_{0}^{R},{\lambda }_{1}^{L},{\lambda }_{1}^{R})=(20,10,10,20/3);$ the cloning reproduces the leading eigenvalue of the Markovian s-modified generator. In both cases $N={10}^{3}$ and $T={10}^{3}$.


In a quest for more general classes of models to illustrate the power of our approach, we now relax the constraint of DTI and assume that each transition can be triggered independently by two mechanisms, corresponding to the two boundaries. We still assume that memory of the previous history is lost as soon as the system changes state, thus preserving the semi-Markov nature. At the instant when the gate is emptied, a particle attempts to enter the system from the left boundary after a waiting time ${T}_{0}^{L}$ with probability density ${{\boldsymbol{\psi }}}_{0}^{L}(\tau )$, while another particle attempts to arrive from the right boundary after a time ${T}_{0}^{R}$ distributed according to ${{\boldsymbol{\psi }}}_{0}^{R}(\tau )$. The waiting times ${T}_{1}^{L}$ and ${T}_{1}^{R}$, as well as the densities ${{\boldsymbol{\psi }}}_{1}^{L}(\tau )$ and ${{\boldsymbol{\psi }}}_{1}^{R}(\tau )$ are defined similarly. In order to have a right (left) jump during the interval $[\tau ,\tau +{\rm{d}}\tau )$, we also require that the left (right) mechanism remains silent until time τ. Consequently, the WTDs are

Equation (32)

${\psi }_{{x}_{1},{x}_{0},1}(\tau )={{\boldsymbol{\psi }}}_{0}^{L}(\tau )\,{{\boldsymbol{\varphi }}}_{0}^{R}(\tau ),\qquad {\psi }_{{x}_{1},{x}_{0},0}(\tau )={{\boldsymbol{\psi }}}_{0}^{R}(\tau )\,{{\boldsymbol{\varphi }}}_{0}^{L}(\tau )$

Equation (33)

${\psi }_{{x}_{0},{x}_{1},-1}(\tau )={{\boldsymbol{\psi }}}_{1}^{L}(\tau )\,{{\boldsymbol{\varphi }}}_{1}^{R}(\tau ),\qquad {\psi }_{{x}_{0},{x}_{1},0}(\tau )={{\boldsymbol{\psi }}}_{1}^{R}(\tau )\,{{\boldsymbol{\varphi }}}_{1}^{L}(\tau )$

where ${{\boldsymbol{\varphi }}}_{j}^{(\rho )}(\tau )={\int }_{\tau }^{\infty }{{\boldsymbol{\psi }}}_{j}^{(\rho )}(t){\rm{d}}t$ are survival probabilities, with ρ denoting the mechanism L or R. As a concrete choice, we again assign a Gamma probability distribution to the waiting time of each event

Equation (34)

${{\boldsymbol{\psi }}}_{j}^{(\rho )}(\tau )=g(\tau ;{k}_{j}^{(\rho )},{\lambda }_{j}^{(\rho )})$

so that the survival probabilities are

Equation (35)

${{\boldsymbol{\varphi }}}_{j}^{(\rho )}(\tau )=\frac{{\rm{\Gamma }}({k}_{j}^{(\rho )},{\lambda }_{j}^{(\rho )}\tau )}{{\rm{\Gamma }}({k}_{j}^{(\rho )})}$

where ${\rm{\Gamma }}(k,x)$ is the upper incomplete Gamma function. The time to the next jump, given that the system just reached state xj (i.e., its age is zero) is $\min \{{T}_{j}^{L},{T}_{j}^{R}\}$ and is associated to the total survival probability

Equation (36)

${\phi }_{{x}_{j}}(\tau )={{\boldsymbol{\varphi }}}_{j}^{L}(\tau )\,{{\boldsymbol{\varphi }}}_{j}^{R}(\tau )$

Once the transition time is known, either the left or right trigger is chosen, according to the age-dependent rates

Equation (37)

where $\gamma (k,x)$ is the lower incomplete Gamma function. The SCGF of the left current is computed by biasing the WTDs ${\psi }_{{x}_{1},{x}_{0},1}(\tau )$ and ${\psi }_{{x}_{0},{x}_{1},-1}(\tau )$ with ${{\rm{e}}}^{\mp s}$, respectively.
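In simulations of this non-DTI model there is no need to evaluate incomplete Gamma functions: drawing both mechanism times and keeping the smaller one realises the competition between the L and R triggers exactly. The sketch below (unbiased dynamics, shapes k = 2 and the inverse scales of figure 2(b)) estimates the mean current in this way.

```python
import random

random.seed(3)

# Unbiased simulation of the non-DTI two-state model: the waiting time in
# state j is min{T_j^L, T_j^R} with independent Gamma(k = 2) mechanism
# times, realising equations (32)-(36) by direct competition.
k = 2.0
lam = {(0, 'L'): 20.0, (0, 'R'): 10.0, (1, 'L'): 10.0, (1, 'R'): 20.0 / 3.0}

def leave(j):
    tL = random.gammavariate(k, 1.0 / lam[(j, 'L')])
    tR = random.gammavariate(k, 1.0 / lam[(j, 'R')])
    return (tL, 'L') if tL < tR else (tR, 'R')

t, x, J = 0.0, 0, 0
while t < 1e4:
    tau, mech = leave(x)
    t += tau
    if mech == 'L':            # only left-boundary events change the counter
        J += 1 if x == 0 else -1
    x = 1 - x

print(J / t)                   # empirical mean current
```
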

While the implementation of the method of section 3.2 remains straightforward for this model, a general solution for the exact SCGF is missing. We thus specialise to the case with ${k}_{0}^{R}={k}_{1}^{R}={k}_{0}^{L}={k}_{1}^{L}=2$, for which the Laplace transform of ${\psi }_{{x}_{i},{x}_{j}}(t)={\sum }_{c}{\psi }_{{x}_{i},{x}_{j},c}(t)$ is a rational function of ν, viz.

Equation (38)

where we defined for convenience ${\lambda }_{j}={\lambda }_{j}^{L}+{\lambda }_{j}^{R}$ and

Equation (39)

This can be thought of as resulting from a walker spending exponentially distributed times in three hidden stages, as in [44]. Hence, the two-state semi-Markov process with WTD (32) and (33) can be seen as a six-state Markov process, see online supplementary material for more details and an alternative formulation. The linearity of the Laplace transform permits the distinction of the left and right contributions in (38), hence it remains easy to bias the rates that correspond to a change in J and the SCGF can be exactly found as the leading eigenvalue of a modified Markovian stochastic generator (see again supplementary material). Figure 2(b) shows convincing agreement of our method with this exact approach.
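For reference, the exact route amounts to computing the leading eigenvalue of an s-modified Markovian generator. The toy snippet below does this for a hypothetical two-state Markov model (not the six-state representation above), in which the jump $0\to 1$ carries θ = +1 and the reverse jump carries θ = 0, and checks the result against the closed-form eigenvalue.

```python
import numpy as np

a, b, s = 1.0, 2.0, 0.7   # hypothetical rates (0 -> 1 and 1 -> 0) and field

# s-modified generator: the rate of the jump 0 -> 1 (theta = +1) is biased
# by e^{-s}; the jump 1 -> 0 (theta = 0) is left unbiased. Columns index the
# departure configuration; diagonals carry the unbiased escape rates.
G = np.array([[-a,              b],
              [a * np.exp(-s), -b]])

scgf = max(np.linalg.eigvals(G).real)   # leading eigenvalue
closed = 0.5 * (-(a + b)
                + np.sqrt((a + b) ** 2 - 4 * a * b * (1 - np.exp(-s))))
print(scgf, closed)
```

At s = 0 the generator is stochastic and the leading eigenvalue vanishes, as it should.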

5.2. Totally asymmetric exclusion process (TASEP) with history dependence

More general non-Markovian systems are those whose WTDs depend on events which occurred during the whole observation time. Systems in this class include the 'elephant' random walk [45] and its analogues, where the transition probabilities at time t depend on the history through the time-averaged current $j(t)$. We focus here on an interacting particle system with such current-dependent rates, namely the TASEP of [46].

Non-Markovian interacting particle systems can be described by assigning a trigger for jump attempts with WTD ${{\boldsymbol{\psi }}}_{i}[\tau ;w(t)]$ and a corresponding survival function ${{\boldsymbol{\varphi }}}_{i}[\tau ;w(t)]$ to each elementary event i that controls the particle dynamics. The probability density that the next transition is of type i and occurs in the time interval $[t+\tau ,t+\tau +{\rm{d}}t)$, given that, for each j, a time τj has elapsed since the last event of type j, is given by (see footnote 6)

Equation (40)

${\psi }_{i}[\tau ;w(t)|\{{\tau }_{j}\}]={{\boldsymbol{\psi }}}_{i}[{\tau }_{i}+\tau ;w(t)|{\tau }_{i}]\,\prod _{j\ne i}{{\boldsymbol{\varphi }}}_{j}[{\tau }_{j}+\tau ;w(t)|{\tau }_{j}]$

where ${{\boldsymbol{\psi }}}_{i}[{\tau }_{i}+\tau ;w(t)| {\tau }_{i}]$ $=\,{{\boldsymbol{\psi }}}_{i}[{\tau }_{i}+\tau ;w(t)]/{{\boldsymbol{\varphi }}}_{i}[{\tau }_{i};w(t)]$ and ${{\boldsymbol{\varphi }}}_{i}[{\tau }_{i}+\tau ;w(t)| {\tau }_{i}]$ $=\,{{\boldsymbol{\varphi }}}_{i}[{\tau }_{i}+\tau ;w(t)]/{{\boldsymbol{\varphi }}}_{i}[{\tau }_{i};w(t)]$. With exact expressions for these WTDs, we can implement the algorithms of section 3.

The TASEP consists of a one-dimensional lattice of length L, where each lattice site l, $1\leqslant l\leqslant L$, can be either empty (${\eta }_{l}=0$) or occupied by a particle (${\eta }_{l}=1$). Particles on a site $l\lt L$ are driven rightwards: they attempt a bulk jump to site $l+1$ with WTD ${{\boldsymbol{\psi }}}_{b}[\tau ;w(t)]$, the attempt being successful if ${\eta }_{l+1}=0$, as in [47–49]. With open boundaries, a particle that reaches the rightmost site L leaves the system with WTD ${{\boldsymbol{\psi }}}_{L}[\tau ;w(t)]$. Also, as soon as ${\eta }_{1}=0$, a further boundary mechanism turns on and particles arrive on the leftmost site with WTD ${{\boldsymbol{\psi }}}_{0}[\tau ;w(t)]$. The special choice ${{\boldsymbol{\psi }}}_{0}[\tau ;w(t)]=\alpha {{\rm{e}}}^{-\alpha \tau }$, ${{\boldsymbol{\psi }}}_{b}[\tau ;w(t)]=p{{\rm{e}}}^{-p\tau }$, and ${{\boldsymbol{\psi }}}_{L}[\tau ;w(t)]=\beta {{\rm{e}}}^{-\beta \tau }$ corresponds to the standard Markovian TASEP with constant left, bulk and right rates α, p, and β.

We now assume that only the left boundary has a non-exponential WTD, while the particle triggers have exponential WTDs with rate 1 for free particles in the bulk, and rate β for the particle on the rightmost site. Consequently the inter-event time density distribution, conditioned on a time τ0 having elapsed since the last arrival, is

Equation (41)

The probability mass distribution, conditioned on an age τ and elapsed time τ0 is

Equation (42)

Equation (43)

Equation (44)

where η0 and ηL encode the exclusion rules at the boundaries and $i=1,2,\ldots ,n$, with n the number of free particles in the bulk, which depends on the lattice configuration before the jump.

Let us impose now that the arrival rate α depends linearly on the input current $j(t)$, i.e., $\alpha (j)={\alpha }_{0}+{aj}$, which defines a time-dependent rate ${\beta }_{0}(t):= \alpha [j(t)]$. Similar functional dependence (but on the instantaneous output current) has been used to model ribosome recycling in protein translation [50, 51]. Generically, such rates describe a simple form of positive feedback (for $a\gt 0$), whose effect on the stationary state of the TASEP is to shrink the low-density phase [46, 51]. The current fluctuations are also altered; the rate function $\hat{e}(j)$ in this phase has already been computed, for our model, by means of the temporal additivity principle [8, 46], hence this provides a testing ground for the cloning method of section 3. The particle arrival mechanism starts when the leftmost site is emptied, at which point we reset the age to $\tau =0$. Denoting by q the current immediately after the last arrival, which occurred at $t-{\tau }_{0}$, the value $j(t+\tau )$ at age τ can be expressed as $q(t-{\tau }_{0})/(t+\tau )$, hence the trigger hazard is

Equation (45)

${\beta }_{0}(t+\tau )={\alpha }_{0}+\frac{{aq}(t-{\tau }_{0})}{t+\tau }$

where τ is the trigger age. Initial values of τ0 and $q$ are chosen to be 1 and 0, respectively. This allows us to derive the trigger survival probability and trigger WTD (see also [46]) which are, respectively

Equation (46)

${{\boldsymbol{\varphi }}}_{0}[\tau ;w(t)]={{\rm{e}}}^{-{\alpha }_{0}\tau }{\left(\frac{t}{t+\tau }\right)}^{{aq}(t-{\tau }_{0})}$

and ${{\boldsymbol{\psi }}}_{0}[\tau ;w(t)]={\beta }_{0}(t+\tau ){{\boldsymbol{\varphi }}}_{0}[\tau ;w(t)]$. Using these in equations (41) and (44) allows us to generate the trajectories, and the large deviation function can be evaluated by applying a bias ${{\rm{e}}}^{-s}$ to the arrival WTD and following the algorithm of section 3. The results are plotted in figure 3 and validated by the exact numerical calculation of [46] (which assumes the temporal additivity principle).
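Sampling the arrival trigger reduces to inverting the survival probability. In the sketch below, integrating the hazard ${\beta }_{0}(t+\tau ^{\prime} )={\alpha }_{0}+{aq}(t-{\tau }_{0})/(t+\tau ^{\prime} )$ over $[0,\tau ]$ gives the survival probability in closed form, and a waiting time is drawn by solving it equal to a uniform variate by bisection; the numerical values of t, τ0 and q are illustrative snapshots of a running simulation.

```python
import math, random

random.seed(5)

# Arrival-trigger sampling for the current-dependent TASEP. Integrating the
# hazard beta_0(t+tau') = alpha_0 + a*q*(t - tau_0)/(t + tau') over [0, tau]
# yields the survival probability below; a waiting time is then obtained by
# solving survival(tau) = u with bisection on the monotone survival function.
alpha0, a = 0.2, 0.1
t, tau0, q = 37.0, 2.5, 0.25   # illustrative snapshot of a simulation

def survival(tau):
    return math.exp(-alpha0 * tau) * (t / (t + tau)) ** (a * q * (t - tau0))

def draw_waiting_time(u):
    lo, hi = 0.0, 1.0
    while survival(hi) > u:          # bracket: survival decreases with tau
        hi *= 2.0
    for _ in range(80):              # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if survival(mid) > u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

u = 1.0 - random.random()            # uniform on (0, 1]
tau = draw_waiting_time(u)
print(tau, survival(tau))            # survival(tau) reproduces u
```
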

Figure 3.

Figure 3. (a) Cloning evaluation of the SCGF for the non-Markovian TASEP, with $({\alpha }_{0},a,\beta ,L)=(0.2,0.1,1,{10}^{3})$, using $T={10}^{3}$. Ensemble size is $N=5\times {10}^{3}$ ($N={10}^{4}$) for $s\gt -2$ ($s\lt -2$). The markers correspond to the direct evaluation of $e(s)$. Numerical errors are of the order of the symbol size, except for large negative s, where finite-ensemble effects still seem to play a role, as documented in [42]. The red line is obtained as ${\int }_{0}^{s}({\rm{d}}e(\sigma )/{\rm{d}}\sigma ){\rm{d}}\sigma $, according to the thermodynamic integration of [11]. (b) Comparison between the Legendre–Fenchel transform of the red line in (a) and the rate function of [46]. The dotted line is a numerical artefact due to the finite range of s in (a); the Legendre–Fenchel transform maps the whole linear branch of $e(s)$ to the value at ${j}^{* }$ and larger values of j are, in fact, not probed.


It is worth noting that, for large negative values of s, the SCGF displays a linear branch with slope ${j}^{* }$. If $e(s)$ remains linear with the same slope for $s\lt -4$, its Legendre–Fenchel transform will be defined only for $j\leqslant \,{j}^{* }$. This appears to be related to the dynamical phase transition seen in the Markovian TASEP, where large current fluctuations require correlations on the scale of the system size and the rate function, in the corresponding regime, diverges with L [52]. Indeed, space–time diagrams (not shown) of the density profile from the cloning simulations seem to suggest that the correlation length increases as s becomes more negative. It would be interesting to further reduce the finite-ensemble errors (as discussed in section 6) in order to probe larger negative values of s.
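This artefact can be reproduced with a toy SCGF. Assuming the convention $\hat{e}(j)={\sup }_{s}[-{sj}-e(s)]$ (to be checked against the sign choices of equation (7)), a hypothetical SCGF that is quadratic for $s\gt -{j}^{* }$ and exactly linear beyond that point transforms to the true rate function ${j}^{2}/2$ for $j\leqslant {j}^{* }$, while for $j\gt {j}^{* }$ the numerical supremum grows without bound as the s-grid is extended:

```python
import numpy as np

j_star = 2.0

def e(s):
    # Hypothetical SCGF: quadratic for s > -j_star, then continuing along
    # its tangent, i.e. exactly linear with slope -j_star for s <= -j_star.
    return np.where(s > -j_star, 0.5 * s**2, -j_star * s - 0.5 * j_star**2)

def lf_transform(j, s_min):
    # Legendre-Fenchel transform sup_s [-s*j - e(s)] on a finite s-grid.
    s = np.linspace(s_min, 5.0, 20001)
    return np.max(-s * j - e(s))

# Below j_star the grid transform converges to the true rate function j^2/2:
print(lf_transform(1.0, -10.0))
# Above j_star it keeps growing as the grid is extended towards s -> -inf,
# so a finite s-range only probes j <= j_star:
print(lf_transform(3.0, -10.0), lf_transform(3.0, -20.0))
```
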

6. Discussion

We have demonstrated that the 'cloning' algorithm for the evaluation of large deviations can be applied consistently for both Markovian and non-Markovian dynamics. In fact, the cloning/pruning of trajectories at each temporal step can be performed according to a very simple factor multiplying the WTDs, as in equation (9). Our analysis encompasses classes of systems with different memory dependence and exploits the similarities between their different formalisms. The efficacy of this approach is confirmed by numerical results for some of the rare non-Markovian models whose large deviation functions can be obtained exactly.

For general non-Markovian cases, the implementation of our procedure is not much harder than the exact simulation of the original trajectories. In Markov processes, the procedure is equivalent to those of [10, 11], where biased dynamics involving alternative rates or transition probabilities have been defined. We expect that, to minimise finite-ensemble effects, an optimal choice of modified WTDs and cloning factors exists for both non-Markovian and Markovian systems, along the lines of the feedback control of [14]. Further developments can thus be anticipated.

We also mention that the discrete-time case of [10] is interesting as the jumps and the cloning steps occur simultaneously for each ensemble element. This feature can be used to prevent a single clone replacing a macroscopic fraction of the ensemble, thus reducing finite-size effects (see online supplementary material). In continuous time an equivalent strategy is to mimic the discrete-time steps, as in, e.g., [53], so each trajectory evolves independently for a constant interval ${\rm{\Delta }}t;$ in this case, both the product of the cloning factors encountered during the interval and the time elapsed since the last jump must be stored. This permits the application of the cloning step to all clones simultaneously.

Large deviation functionals are often hard to obtain analytically, and such a difficulty is exacerbated in non-Markovian systems, which better describe real-world situations; we believe that the results of this work open up a promising avenue for numerical studies.

Acknowledgments

It is a pleasure to thank Raúl J Mondragón, Stefan Grosskinsky, Oscar Bandtlow, and Arturo Narros for many helpful discussions.

Footnotes

  • (3) A natural situation is when we observe a portion of a trajectory that started before t0. In this case, $\psi {^{\prime} }_{{x}_{1},{x}_{0}}({t}_{1}-{t}_{0})={\psi }_{{x}_{1},{x}_{0}}({t}_{1}-{t}_{-1})/{\phi }_{{x}_{0}}({t}_{0}-{t}_{-1})$, as it depends on the time ${t}_{-1}$ of the last jump before t0, being conditioned on the survival until t0. Also, in this case, the probability that x0 survives the time $t-{t}_{0}$ is $\phi {^{\prime} }_{{x}_{0}}(t-{t}_{0})={\phi }_{{x}_{0}}(t-{t}_{-1})/{\phi }_{{x}_{0}}({t}_{0}-{t}_{-1})$.

  • (4) As proved in [38], for finite configuration space, the DTI condition leads to the so-called Gallavotti–Cohen symmetry [39], which reduces to the time-reversal symmetry in the special case where the detailed balance condition is also satisfied [36, 37].

  • (5) Note that equation (28), and also the earlier (23), takes into account events that do not alter the configuration but modify the current statistics.

  • (6) In a slight abuse of notation, we continue to use the full w(t) as a parameter but explicitly show the conditioning on the τi.
