
Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension


Published 26 April 2018 © 2018 IOP Publishing Ltd and SISSA Medialab srl
Citation: Kanaya Malakar et al J. Stat. Mech. (2018) 043215. DOI: 10.1088/1742-5468/aab84f


Abstract

We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.


1. Introduction

Active particles are self-driven systems, where the dynamics has a dissipative and a stochastic part. Their dynamics violates the fluctuation–dissipation relation. Such systems naturally break detailed balance and have been widely used to understand various non-equilibrium phenomena which are driven at the level of the individual constituents, for example the motion of bacteria, the flocking of birds and vibrated granular matter [1–6]. Run-and-tumble particles (RTPs) and active Brownian particles (ABPs) are the simplest examples of such active particles and are known to exhibit interesting features such as non-Boltzmann distributions in the steady-state [7–17], clustering [18, 19], spontaneous segregation of mixtures of active and passive particles [20], ratchet effects [21] and motility-induced phase separation [22–25]. Recent studies show that, unlike equilibrium systems, these systems may not have an equation of state for the mechanical pressure [26–28].

The stochastic dynamics used to describe the motion of RTPs and other active particles has been studied earlier in the context of systems with colored noise, and some exact as well as approximate analytic results for steady-states and first-passage properties were obtained [29, 30]. The dynamics of active particles is related to the equilibrium properties of semi-flexible polymers, where many analytic results are known [31, 32]. There have been recent attempts to understand the time evolution of the probability distributions of active particles in unbounded geometries [33]. In confined geometries, RTPs and ABPs are known to accumulate near the boundaries of the domain [7]. The steady-state distributions of such active particles in confining potentials are non-Boltzmannian [30, 34–36] and can exhibit jammed states [18, 37]. More recently there have been a number of studies computing the steady-state distribution for both RTPs and ABPs in various confined geometries, but using approximate methods in most cases [38–41]. However, so far, the approach to the steady-state has not been studied in detail.

Given the rich behavior of RTPs, it is worthwhile to study them in the simplest possible setting where we can derive explicit results for basic dynamical observables. In this spirit, we investigate the dynamics of non-interacting RTPs with an additional Brownian diffusion term. We investigate the motion on: (i) the infinite line, (ii) a one-dimensional bounded domain with reflecting walls, and (iii) the semi-infinite line and the bounded domain with absorbing boundaries. The restriction to one dimension greatly simplifies the analysis, without sacrificing phenomenological richness.

We implement run and tumble motion by imposing a particle velocity $v$ that switches sign at a random Poisson rate. Naively, one might anticipate that this velocity switching merely renormalizes the diffusion coefficient. Such an interplay between advection and diffusion underlies, for example, the phenomenon of hydrodynamic dispersion [42–45]. Here, a diffusing tracer is passively carried by a flow field, such as Poiseuille flow in a pipe, and the combination of microscopic diffusion and convection leads to a greatly enhanced spread of the tracer in the longitudinal direction. A similar phenomenon arises for RTPs in the unbounded geometry in the long-time limit. However, there are surprising pre-asymptotic effects. For a wide range of parameters, the probability distribution evolves from unimodal to multimodal, before finally converging to a Gaussian in the long-time limit. We also compute the steady-state of an RTP inside a finite domain, and examine the approach to the steady-state. This approach is studied through the spectral structure of the relevant Fokker–Planck operator, a problem that turns out to be highly non-trivial. Finally we study first-passage properties of the RTP inside a semi-infinite domain, where we obtain exact analytic results for the first-passage distribution and exit time probabilities. We compare these results with the usual diffusive case and point out the qualitative differences.

This paper is organized as follows. In section 2, we define the model and discuss the relevant boundary conditions for the probability distributions in a finite interval. In section 3.1, we calculate the propagators for RTPs with superimposed diffusion in an unbounded domain and thereby derive the exact probability distribution. We study RTPs in a bounded domain and calculate their steady-state and time-dependent distributions in section 3.2. Finally we turn to first-passage properties of an RTP, where we calculate its survival probability on a semi-infinite one-dimensional domain with an absorbing boundary (section 4.1) and the exit probabilities and exit times in a finite interval (section 4.2). Throughout this work, we compare our exact results with numerical simulations of the Langevin equations for the RTPs and numerical solutions of the associated Fokker–Planck equation.

2. RTP model

We study a particle that moves on the one-dimensional line, with its motion described by the following stochastic equation

Equation (1): $\frac{{\rm d}x}{{\rm d}t} = v\,\sigma(t) + \sqrt{2D}\,\eta(t)$

where the random variable $\sigma(t)$ switches between $\pm 1$ at a Poisson rate γ, and $\eta(t)$ is Gaussian white noise with

Equation (2): $\langle \eta(t)\rangle = 0, \qquad \langle \eta(t)\,\eta(t')\rangle = \delta(t-t')$
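For concreteness, a minimal Euler–Maruyama sketch of equation (1) is given below (the function and parameter names, the time step and the sample size are ours, not from the paper); collecting the final positions of many independent trajectories gives a Monte Carlo estimate of the occupation probability discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def rtp_trajectory(t_max, dt=1e-3, v=1.0, gamma=1.0, D=0.5, x0=0.0):
    """Euler-Maruyama integration of dx/dt = v*sigma(t) + sqrt(2D)*eta(t),
    where sigma flips sign with Poisson rate gamma."""
    x, sigma = x0, rng.choice([-1, 1])           # symmetric initial velocity state
    for _ in range(int(t_max / dt)):
        x += v * sigma * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        if rng.random() < gamma * dt:            # tumble with probability gamma*dt
            sigma = -sigma
    return x

# a histogram of x(t_max) over many independent runs estimates P(x, t_max)
samples = np.array([rtp_trajectory(t_max=5.0) for _ in range(5_000)])
```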

Equation (1) can be reduced to a Markovian model if we specify the particle state by both its position x and its current velocity state ($\sigma = \pm 1$). It is convenient to define $P_+(x,t)$ and $P_-(x,t)$ as the probability densities for the particle to be at position x with velocity $+v$ and $-v$, respectively. These state probabilities evolve according to a generalized form of the telegrapher's equation

Equation (3a): $\partial_t P_{+} = D\,\partial_x^2 P_{+} - v\,\partial_x P_{+} - \gamma P_{+} + \gamma P_{-}, \qquad \partial_t P_{-} = D\,\partial_x^2 P_{-} + v\,\partial_x P_{-} - \gamma P_{-} + \gamma P_{+}$

This equation was perhaps first derived in the context of electromagnetic theory [46] and later appeared in several other contexts (see the review [47] and the references therein). The probability $P(x, t)$ to find the particle at position x at time t is the sum of the probabilities $P_{\pm}(x, t)$ of finding the particle in the two states, i.e. $P = P_+ + P_-$ . We choose $\gamma^{-1}$ as the unit of time and $v\gamma^{-1}$ as the unit of length to recast (3a) in the dimensionless form

Equation (3b): $\partial_t P_{\pm} = \mathcal{D}\,\partial_x^2 P_{\pm} \mp \partial_x P_{\pm} - P_{\pm} + P_{\mp}$

where the dimensionless diffusion constant $\mathcal{D}=D\gamma/v^2$ is the only parameter for the unbounded system.

For the finite interval $[-L, L]$, there is a second parameter: the dimensionless interval length $\ell = L\gamma/v$. When the particle is restricted to a finite domain $x\in[-\ell, \ell]$, we impose the boundary condition that when the particle hits the boundary, it stays stuck there until its internal state (±) changes, upon which it can move away from the boundary. Hence there is no particle current across these walls. From equations (3b), we identify the particle currents $J_{\pm}(x, t)$ at position x and time t as:

Equation (4): $J_{\pm}(x, t) = \pm P_{\pm}(x, t) - \mathcal{D}\,\partial_x P_{\pm}(x, t)$

The following four boundary conditions are obtained by demanding that these currents vanish at $x=\pm \ell$, that is,

Equation (5): $J_{+}(\pm\ell, t) = 0, \qquad J_{-}(\pm\ell, t) = 0$

3. The occupation probability $\boldsymbol{P(x, t)=P_+(x, t)+P_-(x, t)} $

We now determine the RTP occupation probability $P(x, t)$, namely, the probability that the particle is at position x at time t for: (a) the infinite line and (b) the finite interval $[-\ell, \ell]$. To compute $P(x, t)$, we need to solve the coupled Fokker–Planck equations (3b) for $P_{\pm}(x, t)$ with the appropriate boundary conditions.

3.1. Infinite domain: $\boldsymbol{\ell = \infty} $

It is useful to define the Fourier transforms $ \tilde{P}_\pm(k, t)=\int_{-\infty}^\infty P_\pm(x, t) {\rm e}^{i k x}~{{\rm d}x}$. Fourier transforming equation (3b) with respect to x, we obtain (in matrix form):

Equation (6): $\partial_t \begin{pmatrix}\tilde{P}_+ \\ \tilde{P}_-\end{pmatrix} = \mathbb{A}_k \begin{pmatrix}\tilde{P}_+ \\ \tilde{P}_-\end{pmatrix}$

where

$\mathbb{A}_k = \begin{pmatrix} {\rm i}k - 1 - \mathcal{D}k^2 & 1 \\ 1 & -{\rm i}k - 1 - \mathcal{D}k^2 \end{pmatrix}.$
Diagonalizing the matrix $\mathbb{A}_k$ for each k and solving the resulting linear equations gives

Equation (7)

where

with $\mathcal{N}_\pm =\sqrt{2}~(1 -k^2 \pm {\rm i}k \sqrt{1-k^2}){}^{1/2}$ and $\alpha_{\pm}=-(1 +\mathcal{D}~k^2) \pm \sqrt{1-k^2}$ .

Consider the natural initial condition in which the particle starts at x  =  0, with equal probability to be either in the  +  or the  −  state, i.e. $P_{\pm}(x, 0)=\delta(x)/2$. The Fourier transforms of the initial probabilities are then $\tilde{P}_{\pm}(k, 0)= 1/2$. Using this in (7) and simplifying, we find

Equation (8)

From (8), the Fourier transform of the total probability $\tilde{P}(k, t) = \tilde{P}_+(k, t)+\tilde{P}_-(k, t)$ is

Equation (9): $\tilde{P}(k, t) = {\rm e}^{-\mathcal{D}k^2 t}\,{\rm e}^{-t}\left[\cosh\!\left(t\sqrt{1-k^2}\right) + \frac{\sinh\!\left(t\sqrt{1-k^2}\right)}{\sqrt{1-k^2}}\right]$

We can alternatively derive this result as follows: the displacement of an RTP that starts at x  =  0 can be written formally by integrating the Langevin equation (1) to give $x(t) = \int_0^t \sigma(t')\, {\rm d}t' + \sqrt{2\mathcal{D}} \int_0^t \eta(t')\, {\rm d}t'\equiv A(t) +B(t)$. Since the random processes $A(t)$ and $B(t)$ are independent of each other, $\tilde{P}(k, t) = \left\langle {\rm e}^{{\rm i}kA(t)} \right\rangle \left\langle {\rm e}^{{\rm i}kB(t)} \right\rangle$. It is easy to see that the expression in (9) is indeed in this product form once one identifies $\left\langle {\rm e}^{{\rm i}kB(t)} \right\rangle = {\rm e}^{-\mathcal{D} k^2 t}$ for the Brownian motion $B(t)$. The process $A(t)$ is the motion of an RTP with $\mathcal{D}=0$, whose dynamics is described by the telegrapher's equation, for which $\left\langle {\rm e}^{{\rm i}kA(t)} \right\rangle$ can be computed explicitly (see e.g. [31, 32]). This immediately leads to the expression in (9).

Using the product structure of $\tilde{P}(k, t)$, we invert the Fourier transform in (9) to derive the probability $P(x, t)$ in the convolution form $P(x, t)= \int_{-\infty}^\infty g(x-y, t)\,h(y, t)\,{\rm d}y$, where $g(x, t)= \exp(-x^2/4\mathcal{D}t)/\sqrt{4 \pi \mathcal{D} t}$ is the inverse Fourier transform of $\left\langle {\rm e}^{{\rm i}kB(t)} \right\rangle = {\rm e}^{-\mathcal{D} k^2 t}$ and $h(x, t)$ is the inverse Fourier transform of $\left\langle {\rm e}^{{\rm i}kA(t)} \right\rangle$. Using the explicit expression of $h(x, t)$ from [31, 32], we obtain

Equation (10)

where $I_n$ is the nth-order modified Bessel function of the first kind and Θ is the Heaviside step function. Note that in the limit $x, t \to \infty$, $P(x, t)$ reduces to a simple Gaussian with diffusion constant $(\mathcal{D}+1/2)$. This can be easily seen from (9), where $\tilde{P}(k, t) \to \exp [- (\mathcal{D}+1/2)k^2 t ]$ as $t \to \infty$ and $k\to 0$.
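The probability $P(x,t)$ can also be evaluated numerically by inverting the characteristic function directly. The sketch below assumes the product form quoted above (as reconstructed in equation (9)) and a Fourier convention without a $2\pi$ in the forward transform; the grid parameters are ours.

```python
import numpy as np

def P_of_x(x, t, Dcal, kmax=40.0, nk=8001):
    """Numerical inverse Fourier transform of the product form
    Ptilde(k,t) = exp(-Dcal k^2 t) exp(-t) [cosh(w t) + sinh(w t)/w], w = sqrt(1-k^2)."""
    k = np.linspace(-kmax, kmax, nk)
    w = np.sqrt(1.0 - k**2 + 0j)                       # complex sqrt handles |k| > 1
    w = np.where(np.abs(w) < 1e-12, 1e-12 + 0j, w)     # avoid 0/0 at k = +/- 1
    ptilde = np.exp(-Dcal * k**2 * t - t) * (np.cosh(w * t) + np.sinh(w * t) / w)
    return np.trapz(ptilde * np.exp(-1j * k * x), k).real / (2.0 * np.pi)

# e.g. the central value P(0, t) for Dcal = 0.1 at an intermediate time
print(P_of_x(0.0, 2.0, Dcal=0.1))
```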

It is instructive to examine the spatial moments of the probability distribution. All odd moments are zero by symmetry. Formally, the even moments of the distribution are given by $\langle x^{2n}(t)\rangle = (-1)^n\, \partial_k^{2n}\big[\tilde{P}(k, t)/\tilde{P}(0, t)\big]_{k=0}$.

For the second moment, we find

Equation (11): $\langle x^2(t)\rangle = 2\mathcal{D}t + t - \tfrac{1}{2}\left(1-{\rm e}^{-2t}\right) = 2Dt + \frac{v^2}{\gamma}\,t - \frac{v^2}{2\gamma^2}\left(1-{\rm e}^{-2\gamma t}\right)$

where in the last simplification we have put in all the dimensional parameters. The above result has two non-trivial limiting cases. For $\gamma \neq 0$ and $t\to\infty$ , (11) reduces to

Equation (12): $\langle x^2(t)\rangle \simeq (2\mathcal{D}+1)\,t$, or $\left(2D + v^2/\gamma\right)t$ in dimensional units.

In the $t\to\infty$ limit, the finite switching rate γ leads to an enhancement of the microscopic diffusion coefficient in a manner that is reminiscent of hydrodynamic dispersion [42–45]. On the other hand, in the limit $\gamma\to 0$, we find

Equation (13): $\langle x^2(t)\rangle = 2Dt + v^2 t^2$

Thus the mean-square displacement crosses over from growing linearly with t to quadratically with t as $\gamma\to 0$ .
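A quick numerical check of this crossover can be made with the closed form reconstructed as equation (11); the helper below (names and defaults are ours) evaluates it in dimensional units.

```python
import numpy as np

def msd_rtp(t, D, v=1.0, gamma=1.0):
    """Mean-square displacement of the RTP, using the closed form reconstructed as eq. (11):
    <x^2> = 2 D t + (v^2/gamma) t - (v^2 / (2 gamma^2)) (1 - exp(-2 gamma t))."""
    return (2 * D * t + (v**2 / gamma) * t
            - (v**2 / (2 * gamma**2)) * (1 - np.exp(-2 * gamma * t)))

t = np.logspace(-2, 2, 9)
print(msd_rtp(t, D=0.0))   # for D = 0: ~ v^2 t^2 at short times, ~ (v^2/gamma) t at long times
```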

We can also compute higher-order derivatives of $\tilde{P}(k, t)$ from which higher moments of the displacement can be deduced. The fourth moment is

Equation (14)

The important feature of this last result is that as $t\to\infty$ , $\langle x^4\rangle/ 3 \langle x^2\rangle^2\to 1$ , which is just the relation between the fourth and second moments for a Gaussian distribution. The behavior of the higher moments also conforms to those of the Gaussian distribution as $t\to\infty$ .

In figure 1, we plot the temporal evolution of the occupation probability $P(x, t)$ for different values of the dimensionless diffusion coefficient $\mathcal{D}$. For $\mathcal{D}$ greater than a critical value $\mathcal{D}_c$, the probability distribution is unimodal for all times. However, for $\mathcal{D} < \mathcal{D}_c~(\approx 0.175)$, the occupation probability evolves from a unimodal distribution at short times, to a multimodal distribution at intermediate times, and finally back to a unimodal distribution at long times. This non-trivial behavior of $P(x, t)$ for small $\mathcal{D}$ arises from the competition between the stochastic flipping of particle states (at rate γ) and translational diffusion. For small $\mathcal{D}$, since $P_{\pm}(x, 0)=\delta(x)/2$, the RTPs in the  +  (−) states move to the right (left) in an almost ballistic manner. This splits the initial unimodal distribution into a bimodal distribution with two symmetric peaks (see figure 1). While these two peaks move ballistically in opposite directions, they also broaden because of the true diffusion term. As a result, at intermediate times, when the tails of these two separated peaks meet at the centre, particles again start to accumulate there (see figure 1(a)). Whether this produces a central peak before the two ballistically moving side peaks disappear depends on the relative strengths of γ and $\mathcal{D}$. Once developed, the central peak broadens continuously and, on time scales much longer than the flipping time $\gamma^{-1}$, the RTPs remix, leading to an effective diffusion constant as discussed above. As a result, the multimodal distribution at intermediate times merges into a unimodal distribution, which converges to a Gaussian as $t\to \infty$. On the other hand, for large $\mathcal{D}$, the split peaks of the two RTP states overlap to such an extent that the full distribution always remains unimodal. This behavior suggests that there exists a critical $\mathcal{D}$ at which the effects of translational diffusion and stochastic flipping balance each other.


Figure 1. Plot of the probability density $P(x, t)$ in (10) for three different values of the diffusion constant $\mathcal{D}$ .


To understand this transition, we plot $P(x=0, t)$ for various $\mathcal{D}$ values in figure 2(a), and notice that the occupation probability at x  =  0 is higher at short times for smaller $\mathcal{D}$ than for larger $\mathcal{D}$. It then crosses over to a lower value at intermediate times and finally becomes larger again at long times. Furthermore, to investigate the nature of the occupation probability at x  =  0, we plot the second derivative $\partial_x^2 P(x, t) \vert_{x=0}$. We find that for small $\mathcal{D}$, x  =  0 is a maximum of $P(x, t)$ at short times, becomes a minimum at intermediate times, and finally becomes a maximum again at long times. However, at the critical value $\mathcal{D}_{c}\approx 0.175$, these two crossover times merge, resulting in a unimodal distribution. For $\mathcal{D} \geqslant \mathcal{D}_c$, we find that $\partial_x^2 P(x, t) \vert_{x=0}$ is always negative and hence $P(x, t)$ is always unimodal.
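The boundary of the multimodal window can be estimated numerically from the sign of the curvature at the origin, $\partial_x^2 P(x,t)\vert_{x=0} = -(2\pi)^{-1}\int k^2\,\tilde P(k,t)\,{\rm d}k$. The sketch below again assumes the reconstructed form of $\tilde P(k,t)$; the grid parameters are ours.

```python
import numpy as np

def curvature_at_origin(t, Dcal, kmax=40.0, nk=8001):
    """d^2 P / dx^2 at x = 0, computed as -(1/2pi) * Int dk k^2 Ptilde(k, t).
    A positive value means x = 0 is a local minimum, i.e. P(x, t) is multimodal."""
    k = np.linspace(-kmax, kmax, nk)
    w = np.sqrt(1.0 - k**2 + 0j)
    w = np.where(np.abs(w) < 1e-12, 1e-12 + 0j, w)
    ptilde = np.exp(-Dcal * k**2 * t - t) * (np.cosh(w * t) + np.sinh(w * t) / w)
    return -np.trapz(k**2 * ptilde, k).real / (2.0 * np.pi)

# scanning t at fixed Dcal: sign changes of the curvature bracket the multimodal window
for t in np.linspace(0.5, 6.0, 12):
    print(round(t, 2), curvature_at_origin(t, Dcal=0.1))
```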


Figure 2. (a) The time-evolution of the occupation probability $P(x, t)$ at x  =  0 for three different values of the dimensionless diffusion coefficient $\mathcal{D}$. The occupation probability $P(0, t)$ is higher for smaller values of $\mathcal{D}$ at early times, is lower at intermediate times and is again higher at long times. (b) The nature of the extremum of $P(x, t)$ at x  =  0 changes from being a maximum at early times to a minimum at intermediate times, and finally to a maximum at long times. This non-trivial behavior of the central extremum exists only for $\mathcal{D} < \mathcal{D}_{c} \approx 0.175$. For $\mathcal{D} \geqslant \mathcal{D}_c$, x  =  0 is a maximum of $P(x, t)$ at all times.


In summary, we find that, in contrast to the Gaussian form for a Brownian particle, the probability distribution for an RTP can be multimodal depending on the value of the dimensionless diffusion coefficient $\mathcal{D}$ . This diversity in the probability distribution also occurs in other systems in which the motion of a diffusing particle is influenced by an interplay with a convection field that changes sign [48].

3.2. Bounded interval

We now treat an RTP in the interval $x \in [-\ell, \ell]$. In this case, the probability distribution reaches a steady-state in the long time limit. For $v=0$, the particle performs pure Brownian motion and reaches a spatially uniform steady-state at long times. On the other hand, if $\mathcal{D}=0$, the particle is subjected only to the dichotomous noise $\sigma(t)$. Here, the particle reaches a different steady-state in which, for any finite flipping rate, there is an accumulation of particles at the boundaries. However, for very large flipping rates, one regains a spatially uniform distribution with an effective diffusion coefficient $v^2/(2\gamma)$. In the case where both $v$ and $\mathcal{D}$ are nonzero, we anticipate a steady-state that is intermediate between these two extreme cases. We first solve for the probability distribution in the steady-state and then we turn to the more complicated time-dependent solution.

In the steady-state, the dimensionless Fokker–Planck equations (3b) reduce to

Equation (15): $0 = \mathcal{D}\,\partial_x^2 P_{\pm} \mp \partial_x P_{\pm} - P_{\pm} + P_{\mp}$

which we have to solve subject to the boundary conditions (5). The details of this calculation are given in appendix A. The final result for the probability distribution is:

Equation (16): $P(x) = \frac{\mu\left[\cosh(\mu x) + 2\mathcal{D}\cosh(\mu \ell)\right]}{2\left[\sinh(\mu \ell) + 2\mathcal{D}\,\mu\,\ell\,\cosh(\mu \ell)\right]}, \qquad \mu = \frac{\sqrt{1+2\mathcal{D}}}{\mathcal{D}}$

In figure 3, we compare (16) for the steady-state probability distribution $P(x)$ with results of simulations of the Langevin equation (1), and find good agreement. We observe that the probability is higher near the boundaries than at the center of the interval, in contrast to the uniform density which one would observe if there were no activity, i.e. $v=0$. Such accumulation of active particles near the boundaries of a confined domain is quite generic and has been observed in experimental systems such as motile rods [49] and bacterial suspensions [50].
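A simple way to generate such histograms is sketched below. The reflection step is our own minimal scheme for enforcing zero particle current at the walls; the paper does not specify the numerical implementation used for figure 3.

```python
import numpy as np

rng = np.random.default_rng(1)

def rtp_box_histogram(n_part=50_000, t_max=10.0, ell=1.0, Dcal=0.5, dt=1e-3):
    """Ensemble of RTPs in [-ell, ell] (dimensionless units, v = gamma = 1).
    Positions are reflected off the walls after every Euler step -- a simple
    scheme (ours) intended to enforce zero particle current at x = +/- ell."""
    x = np.zeros(n_part)
    sigma = rng.choice([-1.0, 1.0], size=n_part)
    for _ in range(int(t_max / dt)):
        x += sigma * dt + np.sqrt(2 * Dcal * dt) * rng.standard_normal(n_part)
        x = np.where(x > ell, 2 * ell - x, x)      # reflect off the right wall
        x = np.where(x < -ell, -2 * ell - x, x)    # reflect off the left wall
        flips = rng.random(n_part) < dt
        sigma = np.where(flips, -sigma, sigma)
    return x

# a histogram of the late-time positions approximates the steady state, eq. (16)
positions = rtp_box_histogram()
```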


Figure 3. Comparison of the steady-state probability density equation (16) with explicit Langevin simulations for various $\mathcal{D}$. The histogram of the numerical simulation was constructed, at t  =  5, using $10^6$ different realizations of the stochastic process.


In the limit $\mathcal{D} \to 0$ , the peaks near the boundaries become progressively sharper, and eventually become delta-function peaks. The full distribution is given by

Equation (17): $P(x) = \frac{1}{2\ell+1}\left[1 + \tfrac{1}{2}\,\delta(x-\ell) + \tfrac{1}{2}\,\delta(x+\ell)\right]$

We observe that the probability is uniform everywhere except for the delta function peaks at the boundaries. This $\mathcal{D} \to 0$ case has recently been considered in a similar context [40] and our method reproduces their results.

Let us now turn to the full time-dependent solution. We are interested in how the distribution $P(x, t)$ approaches the steady-state in the $t \to \infty$ limit. To this end, we have to solve the coupled time-dependent Fokker–Planck equations (3b) within the interval $x \in [-\ell, \ell]$, subject to the boundary conditions (5). For the time-dependent solution, we expand $P(x, t)$ in terms of the complete set of basis functions as

Equation (18): $P_{\pm}(x, t) = \sum_n a_n\,{\rm e}^{\lambda_n t}\,\phi_n^{\pm}(x)$

where $\lambda_n$ are the eigenvalues and the eigenfunctions $[\phi^+_n(x), \phi^-_n(x)]$ satisfy

Equation (19): $\mathcal{D}\,\partial_x^2 \phi_n^{\pm} \mp \partial_x \phi_n^{\pm} - \phi_n^{\pm} + \phi_n^{\mp} = \lambda_n\,\phi_n^{\pm}$

subject to the boundary conditions (5). The coefficients $a_n$ are given in terms of the left eigenvectors $\langle \chi_n|=[\chi_n^+(x), \chi_n^-(x)]$ as

Equation (20): $a_n = \langle \chi_n | P(t=0)\rangle = \int_{-\ell}^{\ell} {\rm d}x\left[\chi_n^{+}(x)\,P_{+}(x, 0) + \chi_n^{-}(x)\,P_{-}(x, 0)\right]$

The left eigenvectors can be obtained as solutions of the adjoint Fokker–Planck operator. It can be shown that this has the same form as that in equation (19), with the sign of the $\partial_x$ term changed, and with Neumann boundary conditions for both $\chi_n^+(x)$ and $\chi_n^-(x)$ .

The details of calculating the eigenstates $\phi_n^\pm(x)$ are given in appendix B. Here we compare the time-evolution of the probability density obtained from a numerical solution of the Fokker–Planck equations (3b) using the boundary conditions (5), with the spectral expansion given by (18). The ground state eigenvalue is $\lambda_0=0$, and the corresponding eigenstate (the steady-state) is known exactly and given by equation (16). In figure 4, we show the time-evolution of $P(x, t)$ obtained from a numerical solution of equation (3b). The unimodal to bimodal crossover discussed for the unbounded system can be seen in the figure. We also observe that our numerical solution converges to the exact steady-state. All the other eigenvalues $\lambda_n, ~ n>0$ have to be found numerically from the zeros of the determinant of the matrix ${M}$ in (B.6). The long-time relaxation of the system to the steady-state is determined by the eigenvalue with the largest non-zero real part, and the corresponding eigenfunction. As an illustrative example, we choose $\mathcal{D}=0.5$ and $\ell=1$, in which case the dominant eigenvalue is $\lambda_1 \approx -1.55684$, while the corresponding eigenfunction is given by (B.5).
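As a practical cross-check of such eigenvalues (our suggestion, not the method used in the paper), one can bypass the transcendental determinant condition by discretizing the Fokker–Planck operator with a finite-volume scheme that builds in the zero-flux boundary conditions, and diagonalizing the resulting matrix:

```python
import numpy as np

def fp_generator(ell=1.0, Dcal=0.5, N=400):
    """Finite-volume discretization (our sketch) of the Fokker-Planck equations (3b)
    on [-ell, ell] with the zero-flux boundary conditions (5).  Returns the 2N x 2N
    generator acting on the cell averages of (P_+, P_-)."""
    h = 2.0 * ell / N
    A = np.zeros((2 * N, 2 * N))
    for base, sign, other in ((0, +1.0, N), (N, -1.0, 0)):
        for i in range(N):
            A[base + i, base + i] += -1.0            # tumbling out of this species
            A[base + i, other + i] += 1.0            # tumbling in from the other one
        for i in range(N - 1):                       # interior face between cells i, i+1
            # face flux J = sign*(P_i + P_{i+1})/2 - Dcal*(P_{i+1} - P_i)/h
            for j, cj in ((i, sign / 2 + Dcal / h), (i + 1, sign / 2 - Dcal / h)):
                A[base + i, base + j] -= cj / h      # -dJ/dx: outflow from cell i
                A[base + i + 1, base + j] += cj / h  # inflow into cell i+1
    return A

evals = np.sort_complex(np.linalg.eigvals(fp_generator()))[::-1]
print(evals[:3])   # leading eigenvalue ~ 0 (steady state); the next should lie close to
                   # the value -1.556... quoted in the text for Dcal = 0.5, ell = 1
```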


Figure 4. Time evolution of $P(x, t)$ in an interval obtained from solving the Fokker–Planck equations with appropriate boundary conditions. The diffusion constant was set to $\mathcal{D}=0.1$ and the data corresponds to an initial condition $P_+(x, 0)=P_-(x, 0)=\delta(x)/2$. It can be seen that at late times, the distribution converges to the exact steady-state distribution (16).


Thus we know the functions $\phi_1^+(x)$ , $\phi_1^-(x)$ explicitly, and in terms of these, we expect at long times

Equation (21): $P(x, t) - P_{SS}(x) \approx a_1\,{\rm e}^{\lambda_1 t}\left[\phi_1^{+}(x) + \phi_1^{-}(x)\right]$

The parameter $a_1$ depends on initial conditions and can be obtained from the corresponding left eigenvector $\langle \chi_1 |$; we find $a_1=\langle \chi_1|P(t=0)\rangle \approx 0.7789$. In figure 5(a), we plot the evolution of $P(x, t)$ obtained from a direct numerical solution of equations (3b) and (5), starting from an initial condition $\delta(x-1/2)$. Figure 5(b) shows $P(x,t)-P_{SS}$ and compares it with the prediction from the first term in the spectral representation, equation (21). The agreement is very good. In general, we find that eigenvalues can have imaginary parts (in which case they come in complex conjugate pairs), so one can see oscillatory relaxation.


Figure 5. (a) Time evolution of $P(x, t)$ in an interval obtained from solving the Fokker–Planck equations with appropriate boundary conditions. The diffusion constant was set to $\mathcal{D}=0.5$ and the data corresponds to an initial condition $P_+(x, 0)=P_-(x, 0)=\delta(x-1/2)/2$. It can be seen that at long times, the distribution converges to the exact steady-state distribution. (b) Comparison of ${\rm e}^{-\lambda_1 t} [ P(x, t)-P_{SS}(x)]$ with the eigenstate $\phi(x)$ corresponding to the first excited state. The initial state here was chosen as $P_\pm(x, t=0)=\delta(x-1/2)/2$, the same as in (a), which has $a_1 \approx 0.7789$. The inset shows the unscaled data.


4. First-passage properties

In biological systems, we are often interested in the time required for a molecule diffusing in the interior of the cell to get adsorbed at the cell boundaries, as well as in the time required by a diffusing protein to find the correct binding site. Similarly, in chemical reactions, an important quantity is the time spent by a reactive agent before it reaches catalytic boundaries. Hence it is important to compute quantities such as first-passage distributions, survival probabilities, and exit time distributions for the RTP. First-passage and survival probabilities of stochastic processes have been widely studied in the past (for reviews see [51, 52]). In the context of an RTP in one dimension, with the position evolving via equation (1) but without the diffusion term, i.e. for D  =  0, first-passage properties have been studied before [47]. More recently, the mean first-passage time between two points in space was computed analytically for an RTP (again for D  =  0) [54]. For an RTP in one dimension, the mean first-passage time was also recently measured numerically [55]. In this section we study analytically the first-passage probability for an RTP on the semi-infinite line and the exit problem from a finite interval, in the presence of both the telegraphic and the diffusive noise, i.e. when both terms in equation (1) are present.

4.1. First-passage on the semi-infinite line

We are interested in the probability for an RTP, which starts from a point x on the semi-infinite line with velocity $\pm 1$, to arrive at the origin for the first time at time t. This quantity is directly related to the survival probability of the RTP in the same geometry in the presence of an absorbing boundary at x  =  0. Let $S_+(x,t)$ [$S_-(x,t)$] denote the probability that the RTP, starting initially at $x\geqslant 0$ with a positive [negative] velocity, survives being absorbed at the origin x  =  0 until time t, i.e. it does not cross the origin up to time t. Given the Langevin equation (1), it is convenient to write the backward Fokker–Planck equations for the evolution of $S_{\pm}(x, t)$, where the initial position x is treated as a variable [53]. Consider the evolution over the time window $[0, t+{\rm d}t]$ and break it into two sub-intervals $[0, {\rm d}t]$ and $[{\rm d}t, t+{\rm d}t]$. It follows from equation (1) that in a small time ${\rm d}t$ following t  =  0 (i.e. during the first interval $[0, {\rm d}t]$), the position of the particle evolves to a new position $x'= x+ v\sigma(0)\, {\rm d}t + \sqrt{2 {D}}\,\eta(0)\, {\rm d}t$, where $\sigma(0)$ and $\eta(0)$ are the initial noises. For the subsequent evolution in the time interval $[{\rm d}t, t+{\rm d}t]$, the new starting position is then $x'$. Thus the survival probability satisfies the evolution equations

Equation (22)

where $\langle\, \rangle$ denotes the average over $\eta(0)$. Expanding in a Taylor series for small ${\rm d}t$, using the properties of $\eta(0)$ and taking the ${\rm d}t\to 0$ limit, one directly arrives at a pair of backward equations which read, in dimensionless units,

Equation (23): $\partial_t S_{\pm} = \mathcal{D}\,\partial_x^2 S_{\pm} \pm \partial_x S_{\pm} - S_{\pm} + S_{\mp}$

These equations are valid for $x\geqslant 0$, with the initial conditions $S_{\pm}(x, 0)= 1$ for all x  >  0. In addition, we need to specify the boundary conditions. As the starting point $x\to \infty$, it is clear that $S_{\pm}(x\to \infty, t)=1$, since the particle will surely not cross the origin in a finite time. In contrast, the boundary condition at x  =  0 is subtle: it depends on whether ${{\mathcal D}}=0$ or ${{\mathcal D}}>0$. Consider first the case ${{\mathcal D}}=0$, i.e. in the absence of normal diffusion. In this case, if the particle starts at x  =  0 with a negative velocity, it will surely cross the origin in a finite time. Hence

Equation (24a): $S_{-}(x=0, t) = 0$

However, note that if the particle starts with a positive velocity, it can survive up to a finite time t, hence the boundary condition for $S_+(0,t)$ is unspecified. We will see below that the single boundary condition in equation (24a) is sufficient to make the solution of equation (23) unique. Under normal diffusion, it is well known that if a particle crosses the origin at some time, it recrosses it immediately infinitely often [52]. Hence, if the particle starts at the origin, no matter whether the initial velocity is positive or negative, it will surely cross zero within a short time ${\rm d}t$, provided ${{\mathcal D}}>0$. This follows from the fact that in the ${\rm d}t \to 0$ limit, the Brownian noise dominates over the drift term irrespective of its sign. Hence, in this case, we have the two boundary conditions

Equation (24b): $S_{+}(x=0, t) = 0, \qquad S_{-}(x=0, t) = 0$

We will see later that indeed for ${{\mathcal D}}>0$ , we will need both boundary conditions in equation (24b) to fix the solutions of equation (23) uniquely.

It is convenient to first take a Laplace transform, with respect to time t, of the pair of equation (23). Using the initial conditions $S_{\pm}(x, 0)=1$ , it is easy to see that the Laplace transforms satisfy

Equation (25): $\mathcal{D}\,\partial_x^2 \tilde{S}_{\pm} \pm \partial_x \tilde{S}_{\pm} - \tilde{S}_{\pm} + \tilde{S}_{\mp} = s\,\tilde{S}_{\pm} - 1$

where $\tilde{S}_{\pm}(x, s)= \int_0^{\infty} {\rm d}t\, {\rm e}^{-st}\, S_{\pm}(x, t)$ is the Laplace transform.

These equations can be made homogeneous by the shift: $\tilde{S}_{\pm}(x, s)= 1/s + U_{\pm}(x, s)$ , where $U_{\pm}$ satisfy

Equation (26): $\mathcal{D}\,\partial_x^2 U_{\pm} \pm \partial_x U_{\pm} - (1+s)\,U_{\pm} + U_{\mp} = 0$

Furthermore, by differentiating twice and eliminating one of the two functions, one can write closed equations for $U_+$ and $U_-$

Equation (27): $\left[\mathcal{D}\,\partial_x^2 + \partial_x - (1+s)\right]\left[\mathcal{D}\,\partial_x^2 - \partial_x - (1+s)\right]U_{\pm} = U_{\pm}$

Below, we first solve the simpler case ${{\mathcal D}}=0$ , followed by the more complex ${{\mathcal D}}>0$ case.

4.1.1. The case ${{\mathcal D}}=0$ .

This particular case has been considered earlier with space-dependent transition rates, where only the mean first-passage time was computed [56]. Here we are interested in the full first-passage time distribution, which we can obtain using the above backward Fokker–Planck equation approach. For ${{\mathcal D}}=0$, equations (27) become ordinary second-order differential equations with constant coefficients. Hence, we can try solutions of the form $U_{\pm}(x, s)\sim {\rm e}^{-\lambda x}$. Substituting this in either of equations (27), we find that λ satisfies the quadratic equation $(1+s+\lambda)(1+s-\lambda)=1$, which gives two roots: $\lambda(s)= \pm \sqrt{s^2+2s}$. Obviously, the negative root is not admissible, since the solution must remain finite as $x\to \infty$. Retaining only the positive root, the general solutions of equation (27) can be written as

Equation (28): $U_{+}(x, s) = B\,{\rm e}^{-\lambda(s)\,x}, \qquad U_{-}(x, s) = A\,{\rm e}^{-\lambda(s)\,x}$

The two unknown constants B and A are however related, as they must also satisfy the pair of first-order equation (26) (upon setting ${{\mathcal D}}=0$ ). This gives $B= A/[1+s+\lambda(s)]$ . Hence, finally, using $\tilde{S}_{\pm}(x, s)=1/s+ U_{\pm}(x, s)$ , we get

Equation (29): $\tilde{S}_{+}(x, s) = \frac{1}{s} + \frac{A\,{\rm e}^{-\lambda(s)\,x}}{1+s+\lambda(s)}, \qquad \tilde{S}_{-}(x, s) = \frac{1}{s} + A\,{\rm e}^{-\lambda(s)\,x}$

where $\lambda(s)=\sqrt{s^2+2s}$ . It remains to fix the only unknown constant A. This is done by using the boundary condition $\tilde{S}_{-}(0, s)= 0$ which fixes A  =  −1/s. Hence, we obtain the final solutions

Equation (30): $\tilde{S}_{+}(x, s) = \frac{1}{s}\left[1 - \frac{{\rm e}^{-\lambda(s)\,x}}{1+s+\lambda(s)}\right], \qquad \tilde{S}_{-}(x, s) = \frac{1}{s}\left[1 - {\rm e}^{-\lambda(s)\,x}\right]$

with $\lambda(s)=\sqrt{s^2+2s}$ .

The first-passage time distribution is simply related to the survival probability via

Equation (31): $f_{\pm}(x, t) = -\,\partial_t S_{\pm}(x, t)$

or in Laplace variables

Equation (32): $\tilde{f}_{+}(x, s) = 1 - s\,\tilde{S}_{+}(x, s) = \frac{{\rm e}^{-\lambda(s)\,x}}{1+s+\lambda(s)}, \qquad \tilde{f}_{-}(x, s) = 1 - s\,\tilde{S}_{-}(x, s) = {\rm e}^{-\lambda(s)\,x}$

where we recall $\lambda(s)=\sqrt{s^2+2s}$ .

It turns out that the Laplace transforms in equation (32) can be exactly inverted. Before doing so, it is useful to extract the long-time asymptotics directly from the Laplace transforms in equation (32), by considering the $s\to 0$ limit. A scaling limit then naturally emerges where $s\to 0$ , $x\to \infty$ but with the product $x\sqrt{s}$ fixed. This corresponds, in the time domain, to the scaling limit $t\to \infty$ , $x\to \infty$ , but keeping the ratio $x/\sqrt{t}$ fixed. In this limit, $\lambda(s)=\sqrt{s^2+2s}\to \sqrt{2s}$ as $s\to 0$ . Then, using the Laplace inversion

Equation (33): $\mathcal{L}^{-1}_{s\to t}\left[{\rm e}^{-a\sqrt{s}}\right] = \frac{a}{2\sqrt{\pi}\,t^{3/2}}\,{\rm e}^{-a^2/(4t)}$

we find that both $f_{\pm}(x, t)$ converge, in the scaling limit, to the Lévy–Smirnov form

Equation (34): $f_{\pm}(x, t) \simeq \frac{x}{\sqrt{4\pi D_0\,t^3}}\,{\rm e}^{-x^2/(4D_0 t)}, \qquad D_0 = \tfrac{1}{2}$

Now we recall that for a Brownian particle evolving via ${{\rm d}x}/{\rm d}t= \sqrt{2D_0}\, \eta(t)$, the first-passage probability $f(x, t)$ is given precisely [51] by the formula in equation (34). Hence, for our RTP that evolves via the telegraphic noise in equation (1) with ${{\mathcal D}}=0$, its first-passage probability in the scaling limit is equivalent to that of a normally diffusing particle with diffusion constant $D_0=1/2$. Indeed, this result is also consistent with our findings in equation (12), where we showed that, for ${{\mathcal D}}=0$, $\langle x^2(t)\rangle \to t$, which also corresponds to an effective normal diffusion at late times, with diffusion constant $D_0=1/2$.

To find the behavior of $f_{\pm}(x, t)$ for finite t, we need to invert the Laplace transforms in equation (32) exactly. Fortunately, this can be done using the following Laplace inversions

where $I_0(t)$ and $I_1(t)$ are modified Bessel functions and $\lambda(s)=\sqrt{s^2+2s}$. Taking the derivative with respect to x, we obtain

Equation (35)

These results match with those obtained by Orsingher [57] using a different approach.

In figure 6 we verify these results for the first-passage time distributions with simulations. For any given x, the large t limit of (35) can be taken by using the asymptotic behavior $I_1(z)\to e^z/\sqrt{2\pi z}$ as $z\to \infty$ . This yields, as $t\to\infty$ , for any x,

Equation (36): $f_{-}(x, t) \simeq \frac{x}{\sqrt{2\pi}\,t^{3/2}}, \qquad f_{+}(x, t) \simeq \frac{x+1}{\sqrt{2\pi}\,t^{3/2}}$

Figure 6. Comparison of the first-passage probability distributions $\gamma f_{\pm}(x\gamma/v, t\gamma)$ from the exact results in (35) with direct simulations of (1), with $D \equiv v^2 \mathcal{D}/\gamma=0, v=1, \gamma=1$ (pure active process). The starting point is taken to be x  =  5. The colored points correspond to simulation results while the black solid lines correspond to the exact result. Note that $\gamma f_{-}(x\gamma/v, t\gamma)$ has a δ-function peak at t  =  x corresponding to particles which reach the origin without any scattering. For comparison we also plot results for the pure diffusion case (with $D=1/2, v=0, \gamma=0$) and a mixed case. For the mixed case, the parameters are chosen as $D=1/4, v=1/2^{1/2}, \gamma=1$ so that the asymptotic effective diffusion constant is still $D+v^2/(2 \gamma)=1/2$.


While the tail of $f_-(x,t)$ behaves exactly as in the case of Brownian diffusion with a diffusion coefficient $D_0=1/2$ and a starting position x, the tail of $f_+(x,t)$ is equivalent to that of Brownian diffusion with a starting position x  +  1. The extra length $1 \, (=v/\gamma)$ is the average distance the RTP with a positive velocity moves before taking its first turn. In figure 6 we compare the asymptotic results of (36) with numerical simulations and find very good agreement. From (36), the large time behavior of the survival probabilities is given by

Equation (37): $S_{-}(x, t) \simeq \sqrt{\frac{2}{\pi t}}\;x, \qquad S_{+}(x, t) \simeq \sqrt{\frac{2}{\pi t}}\;(x+1)$

A particle with a positive starting velocity thus has a higher probability of survival than one with a negative starting velocity.
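For ${{\mathcal D}}=0$ the first-passage distribution is also easy to sample exactly, since the motion between tumbles is ballistic; the event-driven sketch below (ours) can be used to reproduce the comparison shown in figure 6.

```python
import numpy as np

rng = np.random.default_rng(2)

def first_passage_time(x0=5.0, sigma0=+1, t_max=1e4):
    """Event-driven first-passage simulation for the D = 0 RTP (v = gamma = 1):
    between tumbles the motion is ballistic, so crossings of the origin are exact."""
    x, sigma, t = x0, sigma0, 0.0
    while t < t_max:
        tau = rng.exponential(1.0)           # time until the next tumble
        if sigma < 0 and x - tau <= 0.0:     # origin reached before tumbling
            return t + x
        x += sigma * tau
        t += tau
        sigma = -sigma
    return np.inf                            # survived beyond t_max

times = np.array([first_passage_time(sigma0=-1) for _ in range(50_000)])
# the histogram of 'times' can be compared with f_-(x, t) in eq. (35);
# the weight at exactly t = x0 corresponds to the delta-function peak noted in figure 6
```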

4.1.2. The case ${{{\mathcal D}}\ne 0} $ .

As explained in the beginning of this subsection, the survival probabilities $S_{\pm}(x, t)$ satisfy the boundary conditions in equation (24b), i.e. in the Laplace domain, $\tilde{S}_{\pm}(x=0, s)=0$ . In this case, the shifted functions $U_{\pm}(x, s)$ each satisfy a fourth-order ordinary differential equation with constant coefficients, as seen from equation (27). Trying again a solution of the form: $U_{\pm}(x, s)\sim {\rm e}^{-\lambda x}$ , we find that λ has now 4 possible values that are the roots of the fourth-order polynomial

Equation (38): $\left[\mathcal{D}\lambda^2 - \lambda - (1+s)\right]\left[\mathcal{D}\lambda^2 + \lambda - (1+s)\right] = 1$

There are 4 solutions given by $\pm \lambda_1(s)$ and $\pm \lambda_2(s)$ where

Equation (39): $\lambda_{1, 2}(s) = \frac{1}{\sqrt{2}\,\mathcal{D}}\left[1 + 2\mathcal{D}(1+s) \mp \sqrt{1 + 4\mathcal{D}(1+s) + 4\mathcal{D}^2}\,\right]^{1/2}$

Evidently, $\lambda_1(s)<\lambda_2(s)$ . Again discarding the negative roots $-\lambda_1(s)$ and $-\lambda_2(s)$ (since the solution cannot diverge as $x\to \infty$ ), the general solutions of equation (27) can be written as

Equation (40): $U_{+}(x, s) = A_1\,{\rm e}^{-\lambda_1(s)\,x} + A_2\,{\rm e}^{-\lambda_2(s)\,x}, \qquad U_{-}(x, s) = B_1\,{\rm e}^{-\lambda_1(s)\,x} + B_2\,{\rm e}^{-\lambda_2(s)\,x}$

where $\lambda_{1, 2}(s)$ are given in equation (39). However, these solutions must also satisfy the individual second-order equations (26). This indicates that $A_1$, $A_2$ are related to $B_1$ and $B_2$. Indeed, substituting these solutions into equation (26) gives the following relations

Equation (41)

Note that we have used equation (38) to obtain the last two relations.

Hence, the solutions for the survival probabilities are given by

Equation (42): $\tilde{S}_{+}(x, s) = \frac{1}{s} + A_1\,{\rm e}^{-\lambda_1(s)\,x} + A_2\,{\rm e}^{-\lambda_2(s)\,x}, \qquad \tilde{S}_{-}(x, s) = \frac{1}{s} + B_1\,{\rm e}^{-\lambda_1(s)\,x} + B_2\,{\rm e}^{-\lambda_2(s)\,x}$

where $B_1$, $B_2$ are related to $A_1$ and $A_2$ via equation (41). We are still left with two unknown constants $A_1$ and $A_2$. To fix them, we use the two boundary conditions $\tilde{S}_{\pm}(x=0, s)=0$. This gives two linear equations for $A_1$ and $A_2$ whose solution is

Equation (43)

This then uniquely determines the solutions for the survival probabilities $\tilde{S}_{\pm}(x, s)$ . The corresponding first-passage probabilities are given by

Equation (44): $\tilde{f}_{+}(x, s) = -s\left[A_1\,{\rm e}^{-\lambda_1(s)\,x} + A_2\,{\rm e}^{-\lambda_2(s)\,x}\right], \qquad \tilde{f}_{-}(x, s) = -s\left[B_1\,{\rm e}^{-\lambda_1(s)\,x} + B_2\,{\rm e}^{-\lambda_2(s)\,x}\right]$

where the constants $A_1$, $A_2$, $B_1$ and $B_2$ are determined explicitly above and $\lambda_{1, 2}(s)$ are given in equation (39).

The first nontrivial check is the limit ${{\mathcal D}}\to 0$ . In this limit, it is easy to verify, from equation (39), that

Equation (45): $\lambda_1(s) \to \sqrt{s^2+2s}, \qquad \lambda_2(s) \to \frac{1}{\mathcal{D}} \to \infty$

In addition, one finds that as ${{\mathcal D}}\to 0$ ,

Equation (46)

and consequently

Equation (47)

We therefore recover the ${{\mathcal D}}=0$ results in equation (32).

We now turn to the long-time asymptotic solutions of equation (44) for arbitrary ${{\mathcal D}}$ . Hence we consider the $s\to 0$ limit, with finite ${{\mathcal D}}$ . In this limit, it is easy to check that, to leading order for small s

Equation (48): $\lambda_1(s) \simeq \sqrt{\frac{s}{\mathcal{D}+\frac{1}{2}}}, \qquad \lambda_2(s) \simeq \frac{\sqrt{1+2\mathcal{D}}}{\mathcal{D}}$

Similarly, one can check that to leading order for small s

and consequently

Substituting these results together in equation (44), we find that in the scaling limit ($s\to 0$ , $x\to \infty$ with the product $x\sqrt{s}$ fixed)

Equation (49): $\tilde{f}_{\pm}(x, s) \simeq {\rm e}^{-\sqrt{s/(\mathcal{D}+1/2)}\;x}$

Upon inverting the Laplace transform using equation (33), we obtain our final results in the scaling limit

Equation (50): $f_{\pm}(x, t) \simeq \frac{x}{\sqrt{4\pi D_1\,t^3}}\,{\rm e}^{-x^2/(4D_1 t)}, \qquad D_1 = \mathcal{D} + \tfrac{1}{2}$

This result is precisely the first-passage time density of an ordinary Brownian motion with diffusion constant $D_1= {{\mathcal D}} +1/2$. Note that $D_1\to D_0=1/2$ as ${{\mathcal D}}\to 0$. Moreover, this effective diffusion constant $D_1= {{\mathcal D}}+1/2$ is consistent with our result $\langle x^2(t)\rangle \to (2{{\mathcal D}}+1)\, t$ in equation (12).

Unfortunately, unlike in the ${{\mathcal D}}=0$ case, for nonzero ${{\mathcal D}}$ , we are not able to obtain the finite time result for $f_{\pm}(x, t)$ explicitly, due to the fact that the Laplace transforms are difficult to invert.
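They can, however, be inverted numerically. One standard option is the Gaver–Stehfest algorithm, sketched below; the routine only needs $\tilde f_\pm(x,s)$ evaluated at real s, which the user supplies from equation (44) (the function name f_tilde_plus in the usage comment is a placeholder of ours).

```python
from math import factorial, log

def stehfest_invert(F, t, N=14):
    """Gaver-Stehfest numerical Laplace inversion (a standard recipe; this sketch is ours):
    f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t), with F evaluated at real s."""
    M = N // 2
    V = []
    for k in range(1, N + 1):
        total = 0.0
        for j in range((k + 1) // 2, min(k, M) + 1):
            total += (j**M * factorial(2 * j)
                      / (factorial(M - j) * factorial(j) * factorial(j - 1)
                         * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + M) * total)
    a = log(2.0) / t
    return a * sum(Vk * F(k * a) for k, Vk in enumerate(V, start=1))

# usage: code the Laplace transform of eq. (44) as, say, f_tilde_plus(x, s); then
# f_plus_at_t = stehfest_invert(lambda s: f_tilde_plus(2.0, s), t=5.0)
```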

4.2. Exit probabilities and exit times in the finite interval

We now investigate an RTP on a finite interval and address two questions: (a) the probability for a particle that starts at x to eventually reach either of the boundaries, and (b) the mean time for the particle to exit the interval through either of the boundaries. Let $E_+(x)$ ($E_-(x)$) denote the exit probability, namely the probability that a particle starting at x with velocity  +1 (−1) exits through the boundary at $x=-\ell$. By comparison, the exit probability to $x=-\ell$ for isotropic diffusion, $E(x)$, is simply $\frac{1}{2}(1-\frac{x}{\ell})$; that is, the exit probability decreases linearly with the initial distance from the left edge.

It is easily seen that these hitting probabilities obey the backward equations [51]

Equation (51): $\mathcal{D}\,\partial_x^2 E_{\pm} \pm \partial_x E_{\pm} - E_{\pm} + E_{\mp} = 0$

subject to the appropriate boundary conditions, which are $E_\pm(-\ell)=1$ and $E_\pm(\ell)=0$. These boundary conditions fix the constants in $E_\pm$ and thus the problem is formally solved. The calculation is conceptually straightforward but tedious, and the details were carried out with Mathematica. The basic steps and the final expressions for the exit probabilities are given by (C.5) in appendix C.

Figure 7 shows the exit probabilities $E_\pm(x)$ for representative values of the dimensionless diffusion coefficient $\mathcal{D}$. As one expects, for $\mathcal{D}\gg 1$, the exit probabilities are close to the isotropic random-walk form $\frac{1}{2}(1-\frac{x}{\ell})$. However, for $\mathcal{D}\ll 1$, $E_+$ and $E_-$ become very distinct. Moreover, the exit probability $E_+$ decreases much more rapidly with x than $\frac{1}{2}(1-\frac{x}{\ell})$, while $E_-$ decreases much more slowly. Notice also that $\mathcal{E}(x)\equiv \frac{1}{2}[E_+(x)+E_-(x)]\ne \frac{1}{2}(1-\frac{x}{\ell})$. That is, the exit probability, averaged over the two velocity states, deviates significantly from the corresponding exit probability for unbiased diffusion.
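The exact expressions (C.5) are straightforward to check by Monte Carlo; the vectorized sketch below (ours, with an Euler discretization whose finite time step introduces a small bias) estimates the probability of exiting at $x=-\ell$.

```python
import numpy as np

rng = np.random.default_rng(3)

def exit_left_probability(x0, sigma0, ell=1.0, Dcal=0.25, dt=1e-3, n_runs=20_000):
    """Monte Carlo estimate (our sketch) of E_+(x0) or E_-(x0): the fraction of RTP
    trajectories started at x0 with velocity sigma0 that leave [-ell, ell] at x = -ell."""
    x = np.full(n_runs, float(x0))
    sigma = np.full(n_runs, float(sigma0))
    alive = np.ones(n_runs, dtype=bool)
    exited_left = np.zeros(n_runs, dtype=bool)
    while alive.any():
        n = int(alive.sum())
        x[alive] += sigma[alive] * dt + np.sqrt(2 * Dcal * dt) * rng.standard_normal(n)
        sigma[alive] = np.where(rng.random(n) < dt, -sigma[alive], sigma[alive])
        out_left, out_right = alive & (x <= -ell), alive & (x >= ell)
        exited_left |= out_left
        alive &= ~(out_left | out_right)
    return exited_left.mean()

# compare with the exact expressions (C.5), e.g. E_+(0.5) and E_-(0.5) for Dcal = 0.25
print(exit_left_probability(0.5, +1), exit_left_probability(0.5, -1))
```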


Figure 7. The exit probabilities $E_+(x)$ (black) and $E_-(x)$ (blue), and their average (orange), as a function of x on an interval of scaled length $\ell=1$ for various values of the scaled diffusion coefficient: (a) $\mathcal{D}=1.0$, (b) $\mathcal{D}=0.25$, and (c) $\mathcal{D}=0.05$.


Let us now turn to the exit times. Let $t_+(x)$ ($t_-(x)$) be the mean first-passage time (to either boundary) for a particle that starts at x in the  +  (−) state. Again using the formalism given in [51], it is easily seen that these exit times obey the backward equations

Equation (52): $\mathcal{D}\,\partial_x^2 t_{\pm} \pm \partial_x t_{\pm} - t_{\pm} + t_{\mp} = -1$

with boundary conditions $t_\pm(\pm\ell)=0$, which correspond to the particle being immediately absorbed if it starts at either end of the interval. The solutions to equation (52) are obtained using the same approach as that given for equation (51) (see appendix D). While the resulting expressions for $t_\pm$ in the finite interval are too long to be displayed, the form of the first-passage times is easily visualized (figure 8). For a small bias velocity the scaled diffusion constant $\mathcal{D}=D\gamma/v^2$ (see section 2) is large. In this limit the diffusive part of the dynamics dominates for both types of particles. As a result, $t_+\approx t_-$, and both $t_+$ and $t_-$ become very close to the exit time for isotropic diffusion, $t=(\ell^2-x^2)/(2\mathcal{D})$. On the other hand, if $v$ is increased, $\mathcal{D}$ decreases and the active contribution to the motion dominates. In this limit the exit times $t_+$ and $t_-$ strongly deviate from each other.


Figure 8. The unconditional exit times $t_+(x)$ (black) and $t_-(x)$ (orange) to either side of the interval as a function of x for an interval of scaled length $\ell=1$ for various values of the scaled diffusion coefficient: (a) $\mathcal{D}=1.0$, (b) $\mathcal{D}=0.5$, and (c) $\mathcal{D}=0.1$.


5. Summary

We studied a one-dimensional model of run-and-tumble particles in the presence of an additional diffusion term. On the infinite line we find that an initial localized distribution of particles evolves to a Gaussian distribution at long times, with the diffusion constant renormalized by the active particle speed and the tumble rate, while at intermediate times the density distribution can have a multimodal structure.

In a finite domain with reflecting walls, we found that the RTPs reach a steady-state with peaks in the density distributions at the boundaries, which is in agreement with earlier observations of particle accumulation at walls. We also studied the approach to the steady-state by examining the spectral structure of the corresponding Fokker–Planck operator. The eigenvalues of this operator appear as the zeros of a complicated determinant, and finding them is highly non-trivial, even numerically. We numerically evaluated the two eigenvalues with largest real parts. It is an interesting mathematical problem to find the full spectrum as well as the associated eigenvectors of the Fokker–Planck operator.

We also investigated the first-passage probability distribution of an RTP on the semi-infinite line and obtained an explicit closed form expression for the distribution in the limiting case of zero diffusion, and for its Laplace transform in the more challenging case of non-zero diffusion. In a finite domain, we obtained exact results for the exit probabilities and the mean exit times.

We believe that our results for non-interacting RTPs in one dimension will be informative for the study of models of other active particle systems in higher dimensions. Another possible extension of this work is to study active particles in external potentials and in the presence of mutual interactions. It will be interesting to verify some of our analytic observations in experimental systems such as vibrated granular systems and Janus particles [7].

Acknowledgment

KM acknowledges the S N Bhatt Memorial Excellence Fellowship Program 2016 at ICTS, and INSPIRE-SHE (awarded by DST, Government of India) for funding her research. VJ is supported by a post-doctoral fellowship in the Max Planck partner group at ICTS. AD, AK, SNM and SS acknowledge support from the Indo-French Centre for the Promotion of Advanced Research (IFCPAR) under Project No. 5604-2. AD, AK, SNM and SS also acknowledge the large deviation theory program at ICTS (code: ICTS/Prog-ldt/2017/8) during which many discussions were held. SNM wishes to thank U Basu, M R Evans, A Rosso, and G Schehr for useful discussions, and acknowledges a Simons Foundation grant from ICTS. KVK's research is supported by the Department of Biotechnology, India, through a Ramalingaswami reentry fellowship and by the Max Planck Society and the Department of Science and Technology, India, through a Max Planck Partner Group at ICTS-TIFR. SR acknowledges support from grants DMR16-08211 and DMR-1623243 from the National Science Foundation and from the ICTS for supporting his participation in the Bangalore school on statistical physics - VIII (code: ICTS/Prog-bssp/2017/06). He also thanks Uttam Bhat for many helpful discussions.

Appendix A. Steady-state probability distribution in the interval

We define $Q=P_+-P_-$ and use $P= P_{+}+P_{-}$ . With these definitions, (15) can be rewritten as:

Equation (A.1): $\mathcal{D}\,\partial_x^2 P - \partial_x Q = 0, \qquad \mathcal{D}\,\partial_x^2 Q - \partial_x P - 2Q = 0$

and the boundary conditions (5) now read

Equation (A.2): $\left[P - \mathcal{D}\,\partial_x Q\right]_{x=\pm\ell} = 0, \qquad \left[Q - \mathcal{D}\,\partial_x P\right]_{x=\pm\ell} = 0$

Integrating the first of (A.1), we get $\mathcal{D} \partial_{x}P-Q+C_1=0$, where $C_1$ is an integration constant. From the second boundary condition in (A.2), we get $C_1=0$. Hence $\mathcal{D} \partial_{x}P(x)=Q(x)$ for all $x \in [-\ell, \ell]$, and substituting this into the second of (A.1) leads to

Equation (A.3): $\mathcal{D}^2\,\partial_x^2 Q = (1+2\mathcal{D})\,Q$

This equation has the general solution

Equation (A.4): $Q(x) = a\,\sinh(\mu x) + b\,\cosh(\mu x), \qquad \mu = \frac{\sqrt{1+2\mathcal{D}}}{\mathcal{D}},$

and a and b are constants to be determined.

Once $Q(x)$ is known, $P(x)$ can be obtained by integrating $\mathcal{D} \partial_{x}P(x)=Q(x)$:

Equation (A.5): $P(x) = \frac{1}{\mathcal{D}\mu}\left[a\,\cosh(\mu x) + b\,\sinh(\mu x)\right] + C_2$

where $C_2$ is another integration constant. The three constants $a, b$ and $C_2$ can be obtained using the boundary conditions and the normalization condition. Substituting the solutions (A.4) and (A.5) into the first of (A.2) gives

Equation (A.6)

whose solution is

Equation (A.7)

Finally, we invoke the normalization condition $\int_{-\ell}^{\ell}P(x)\, {{\rm d}x}=1$ to obtain

Equation (A.8)

and

Equation (A.9)

The latter is (16) in the main text.

Appendix B. Time-dependent probability distribution in the interval

We now construct the eigenstates $\phi_n^\pm(x)$ . First we try a solution of the form

Equation (B.1): $\left[\phi^{+}(x), \phi^{-}(x)\right] = \left[\bar{r}_+, \bar{r}_-\right]{\rm e}^{\beta x}$

Inserting this form in (19), we get

Equation (B.2): $\begin{pmatrix} \mathcal{D}\beta^2 - \beta - 1 - \lambda & 1 \\ 1 & \mathcal{D}\beta^2 + \beta - 1 - \lambda \end{pmatrix}\begin{pmatrix}\bar{r}_+ \\ \bar{r}_-\end{pmatrix} = 0$

To get non-zero solutions for $\bar{r}_\pm$, we require the determinant of the matrix in the above equation to be zero: $(\mathcal{D}\beta^2 -\lambda-1)^2-(\beta^2+1)=0$, which gives β as a function of λ. This is a fourth-order equation in β whose solutions are

Equation (B.3): $\beta_{\sigma\tau} = \frac{\sigma}{\sqrt{2}\,\mathcal{D}}\left[1 + 2\mathcal{D}(1+\lambda) + \tau\sqrt{1 + 4\mathcal{D}(1+\lambda) + 4\mathcal{D}^2}\,\right]^{1/2}$

where $\sigma=\pm 1$ and $\tau=\pm1$ . Corresponding to the resulting four values of β, we get the four corresponding solutions

Equation (B.4): $\left[\phi^{+}_{\sigma\tau}(x), \phi^{-}_{\sigma\tau}(x)\right] = \left[1, \alpha_{\sigma\tau}\right]{\rm e}^{\beta_{\sigma\tau} x}$

where $\alpha_{\sigma \tau}= -{(\mathcal{D}\beta_{\sigma \tau}^2 -\beta_{\sigma \tau} -\lambda-1)}$ and $\sigma, \tau=\pm1$. We use these four states to construct the eigenstates $[\phi^+_n(x), \phi^-_n(x)]$ that satisfy the boundary conditions. Thus let

Equation (B.5): $\left[\phi^{+}_n(x), \phi^{-}_n(x)\right] = \sum_{\sigma, \tau = \pm 1} C_n^{\sigma\tau}\left[1, \alpha_{\sigma\tau}\right]{\rm e}^{\beta_{\sigma\tau} x}$

Substituting this solution into the required boundary conditions (5), and after some rearrangement, we get

Equation (B.6)

with $\nu_{\sigma \tau}^{r s}= {\rm e}^{r \beta_{\sigma \tau} \ell}(\mathcal{D}\beta_{\sigma \tau} +s)$, and $\sigma, \tau, r, s$ allowed to take the values $\pm 1$. To get non-zero solutions for $C^{\sigma\tau}_n$, we require $\det(M)=0$. This equation has both real and imaginary parts and both have to be set to zero. This is possible only at certain values (in general complex) of λ and these values then give us the required eigenvalue set $\{\lambda_n\}$, $n=0, 1, 2, \ldots$. We assume that the eigenvalues are ordered according to decreasing value of their real part. For each allowed $\lambda_n$ one can find the corresponding values of $\beta^{n}_{\sigma \tau}$ from (B.3). If the βs are non-degenerate then the associated eigenvector $(C^{++}_n, ~C^{+-}_n, ~C^{-+}_n, ~C^{--}_n)^T$ can be obtained from (B.6). This then determines the eigenstates completely, up to a normalization constant.
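A numerical sketch of this procedure is given below. It builds the four boundary-condition rows directly from the basis functions (B.4)–(B.5) and the no-flux conditions (5), using the branches $\beta_{\sigma\tau}$ from (B.3) as reconstructed above, and searches for complex zeros of the determinant; the root-finding strategy and variable names are ours.

```python
import numpy as np
from scipy.optimize import fsolve

Dcal, ell = 0.5, 1.0

def betas(lam):
    """Four roots beta_{sigma tau} of (D beta^2 - lam - 1)^2 = beta^2 + 1, cf. (B.3)."""
    disc = np.sqrt(1 + 4 * Dcal * (1 + lam) + 4 * Dcal**2 + 0j)
    b = np.sqrt((1 + 2 * Dcal * (1 + lam) + np.array([1, -1]) * disc) / (2 * Dcal**2))
    return np.concatenate([b, -b])

def det_M(lam):
    """Determinant of the 4x4 system obtained by imposing the no-flux conditions (5)
    on the combination (B.5); its zeros give the eigenvalues lambda_n."""
    b = betas(lam)
    alpha = -(Dcal * b**2 - b - lam - 1)
    rows = [(1 - Dcal * b) * np.exp(b * ell),           # J_+ = 0 at x = +ell
            (1 - Dcal * b) * np.exp(-b * ell),          # J_+ = 0 at x = -ell
            alpha * (1 + Dcal * b) * np.exp(b * ell),   # J_- = 0 at x = +ell
            alpha * (1 + Dcal * b) * np.exp(-b * ell)]  # J_- = 0 at x = -ell
    return np.linalg.det(np.array(rows))

def eigenvalue(guess):
    f = lambda p: [det_M(p[0] + 1j * p[1]).real, det_M(p[0] + 1j * p[1]).imag]
    re, im = fsolve(f, [guess.real, guess.imag])
    return re + 1j * im

print(eigenvalue(-1.5 + 0j))   # compare with lambda_1 ~ -1.55684 quoted in the main text
```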

We expect that there should be a real largest eigenvalue $\lambda_0=0$ corresponding to the steady-state and this was already determined, see (16). This solution can be recovered from our present approach but needs some extra care since for this case, $\beta_{\pm-}=0$ and $\beta_{\pm+}=\pm \sqrt{2\mathcal{D}+1}/\mathcal{D}$ . The two independent states corresponding to $\beta=0$ are given by

Equation (B.7)

Taking a linear combination and imposing the boundary conditions leads us to the solution given in equations (16) and (A.8).

Appendix C. Solution for E±(x) on the finite interval

To solve equations (51), we first define $S_e=E_++E_-$ and $\Delta_e=E_+-E_-$ to recast (51) as

Equation (C.1): $\mathcal{D}\,S_e'' + \Delta_e' = 0, \qquad \mathcal{D}\,\Delta_e'' + S_e' - 2\Delta_e = 0$

Differentiating the second of (C.1) and using the first to eliminate $S_e''$ gives $\Delta_e'''-\alpha^2\Delta_e'=0$, with

$\alpha^2 = \frac{1+2\mathcal{D}}{\mathcal{D}^2}.$
The solution for $\delta_e\equiv \Delta_e'$ is $\delta_e= A{\rm e}^{\alpha x}+B{\rm e}^{-\alpha x}$, where A and B are constants. Integrating once gives $\Delta_e$, and integrating $\mathcal{D}S_e''=-\Delta_e'$ gives $S_e$. The final result is

Equation (C.2)

where $C, E, F$ are constants. However, to satisfy the second of equation (C.1), we must have E  =  2C. Using this and finally solving for $E_\pm$ gives

Equation (C.3)

For exit via the left edge of the finite interval $[0, \ell]$, the appropriate boundary conditions are $E_\pm(0)=1$ and $E_\pm(\ell)=0$. Thus, from equation (C.3), we need to solve

Equation (C.4)

where we have introduced

Solving these four linear equations with Mathematica, substituting the coefficients $A, B, F$, and C into (C.3), and then performing some simplifications, we obtain the exit probabilities:

Equation (C.5)

Some representative graphs of $E_\pm(x)$ are given in figure 7.

Appendix D. Solution for t±(x) on the finite interval

To solve (52), we again define $S_t=t_++t_-$ , $\Delta_t=t_+-t_-$ to give

Equation (D.1): $\mathcal{D}\,S_t'' + \Delta_t' = -2, \qquad \mathcal{D}\,\Delta_t'' + S_t' - 2\Delta_t = 0$

We follow similar steps to those in appendix C to obtain

Equation (D.2)

where C and F are constants, and the additional terms compared to those in (C.2) stem from the additional inhomogeneous term in equation (D.1) compared to (C.1). The solutions for $t_\pm$ are

Equation (D.3)

For the exit times $t_\pm$ in a finite interval of length $\ell$, the boundary conditions are $t_\pm(0)=t_\pm(\ell)=0$. Applying these boundary conditions to (D.3) fixes the constants $A, B, C$ and F, from which the results shown in figure 8 are obtained, again using Mathematica.
