Abstract
We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.
1. Introduction
Active particles are self-driven systems whose dynamics has a dissipative and a stochastic part. Their dynamics violates the fluctuation–dissipation relation. These systems naturally break detailed balance and have been widely used to understand various non-equilibrium phenomena that are driven at the level of individual constituents, for example the motion of bacteria, the flocking of birds and vibrated granular matter [1–6]. Run-and-tumble particles (RTPs) and active Brownian particles (ABPs) are the simplest examples of such active particles and are known to exhibit interesting features such as non-Boltzmann distributions in the steady-state [7–17], clustering [18, 19], spontaneous segregation of mixtures of active and passive particles [20], ratchet effects [21] and motility-induced phase separation [22–25]. Recent studies show that, unlike equilibrium systems, these systems may not have an equation of state for the mechanical pressure [26–28].
The stochastic dynamics used to describe the motion of RTPs and other active particles has been studied earlier in the context of systems with colored noise, and some exact as well as approximate analytic results for steady-states and first-passage properties were obtained [29, 30]. The dynamics of active particles is related to the equilibrium properties of semi-flexible polymers, where many analytic results are known [31, 32]. There have been recent attempts to understand the time evolution of the probability distributions of active particles in unbounded geometries [33]. In confined geometries, RTPs and ABPs are known to accumulate near the boundaries of the domain [7]. The steady-state distributions of such active particles in confining potentials are non-Boltzmannian [30, 34–36] and can exhibit jammed states [18, 37]. More recently there have been a number of studies on computing the steady-state distribution for both RTPs and ABPs in various confined geometries, but using approximate methods in most cases [38–41]. However, so far, the approach to the steady-state has not been studied in detail.
Given the rich behavior of RTPs, it is worthwhile to study them in the simplest possible setting where we can derive explicit results for basic dynamical observables. In this spirit, we investigate the dynamics of non-interacting RTPs with an additional Brownian diffusion term. We investigate the motion on: (i) the infinite line, (ii) a one-dimensional bounded domain with reflecting walls, and (iii) the semi-infinite line and the bounded domain with absorbing boundaries. The restriction to one dimension greatly simplifies the analysis, without sacrificing phenomenological richness.
We implement run-and-tumble motion by imposing a particle velocity that switches sign at a random Poisson rate. Naively, one might anticipate that this velocity switching merely renormalizes the diffusion coefficient. Such an interplay between advection and diffusion underlies, for example, the phenomenon of hydrodynamic dispersion [42–45]. Here, a diffusing tracer is passively carried by a flow field, such as Poiseuille flow in a pipe, and the combination of microscopic diffusion and convection leads to a greatly enhanced spread of the tracer in the longitudinal direction. A similar phenomenon arises for RTPs in the unbounded geometry in the long-time limit. However, there are surprising pre-asymptotic effects. For a wide range of parameters, the probability distribution evolves from unimodal, to multimodal, before finally converging to a Gaussian in the long-time limit. We also compute the steady-state of an RTP inside a finite domain and examine the approach to this steady-state by studying the spectral structure of the relevant Fokker–Planck operator; we find that this problem is highly non-trivial. Finally, we study first-passage properties of the RTP inside a semi-infinite domain, where we obtain exact analytic results for the first-passage distribution and exit time probabilities. We compare these results with the usual diffusive case and point out the qualitative differences.
This paper is organized as follows. In section 2, we define the model and discuss the relevant boundary conditions for the probability distributions in a finite interval. In section 3.1, we calculate the propagators for RTPs with superimposed diffusion in an unbounded domain and thereby derive the exact probability distribution. We study RTPs in a bounded domain and calculate their steady-state and time-dependent distributions in section 3.2. Finally we turn to first-passage properties of an RTP where we calculate its survival probability in a semi-infinite one-dimensional domain with absorbing walls (section 4.1) and the exit times in this domain (section 4.2). Throughout this work, we compare our exact results with numerical simulations of the Langevin equations for the RTPs and numerical solutions of the associated Fokker–Planck equation.
2. RTP model
We study a particle that moves on the one-dimensional line, with its motion described by the following stochastic equation

ẋ(t) = v0 σ(t) + √(2D) ξ(t) ,    (1)

where the random variable σ(t) switches between ±1 at a Poisson rate γ, and ξ(t) is Gaussian white noise with

⟨ξ(t)⟩ = 0 ,  ⟨ξ(t)ξ(t′)⟩ = δ(t − t′) .    (2)
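The dynamics above can be checked directly by numerical integration. The following sketch is a minimal Euler–Maruyama integration of equation (1); it assumes the standard convention that σ(t) flips sign at rate γ, so that ⟨σ(t)σ(t′)⟩ = e^{−2γ|t−t′|}, and all parameter values are illustrative rather than taken from the text.

```python
import numpy as np

# Euler-Maruyama sketch of the RTP Langevin equation (1):
#   dx/dt = v0*sigma(t) + sqrt(2*D)*xi(t),  sigma = +/-1 tumbling at rate gamma.
# All parameter values are illustrative.
rng = np.random.default_rng(0)
v0, gamma, D = 1.0, 1.0, 0.5
dt, t_final, n_part = 0.01, 20.0, 5000
n_steps = int(t_final / dt)

x = np.zeros(n_part)                            # all particles start at the origin
sigma = rng.choice([-1.0, 1.0], size=n_part)    # equal probability for each state

for _ in range(n_steps):
    x += v0 * sigma * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_part)
    # each sigma flips with probability gamma*dt during one time step
    flip = rng.random(n_part) < gamma * dt
    sigma[flip] *= -1.0

msd = np.mean(x**2)
# With <sigma(t)sigma(t')> = exp(-2*gamma*|t-t'|), the exact second moment is
#   <x^2> = 2*D*t + (v0^2/gamma)*(t - (1 - exp(-2*gamma*t))/(2*gamma)).
msd_exact = 2 * D * t_final + (v0**2 / gamma) * (
    t_final - (1 - np.exp(-2 * gamma * t_final)) / (2 * gamma))
```

At long times the comparison value reduces to 2(D + v0²/(2γ))t, i.e. the renormalized diffusion constant discussed below.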
Equation (1) can be reduced to a Markovian model if we specify the particle state by both its position x and its current velocity v0σ (σ = ±1). It is convenient to define P+(x,t) and P−(x,t) as the probability densities for the particle to be at position x with velocity +v0 and −v0, respectively. These state probabilities evolve according to the generalized form of the telegrapher's equation

∂t P± = D ∂x² P± ∓ v0 ∂x P± − γ P± + γ P∓ .    (3a)
This equation was perhaps first derived in the context of electromagnetic theory [46] and was later derived in several other contexts (see the review [47] and the references therein). The probability of finding the particle at position x at time t is the sum of the probabilities of finding it in the two states, i.e. P(x,t) = P+(x,t) + P−(x,t). We choose γ⁻¹ as the unit of time and v0/γ as the unit of length to recast (3a) in the dimensionless form

∂t P± = D ∂x² P± ∓ ∂x P± − P± + P∓ ,    (3b)
where the dimensionless diffusion constant D (now measured in units of v0²/γ) is the only parameter for the unbounded system.
For the finite interval, there is a second parameter: the dimensionless interval length. When the particle is restricted to a finite domain, we impose the boundary condition that when the particle hits a wall, it stays stuck there until its internal state (±) changes, upon which it can move away from the wall. Hence there is no particle current across these walls. From equation (3b), we identify the particle currents at position x and time t as:

J±(x,t) = ±P±(x,t) − D ∂x P±(x,t) .    (4)
The following four boundary conditions are obtained by demanding that these currents vanish at the two walls, that is,
3. The occupation probability
We now determine the RTP occupation probability P(x,t), namely the probability that the particle is at position x at time t, for: (a) the infinite line and (b) the finite interval. To compute P(x,t), we need to solve the coupled Fokker–Planck equations (3b) for P±(x,t) with the appropriate boundary conditions.
3.1. Infinite domain
It is useful to define the Fourier transforms P̃±(k,t) = ∫ dx e^{ikx} P±(x,t). Fourier transforming equation (3b) with respect to x, we obtain (in matrix form):
where
Diagonalizing the matrix for each k and solving the resulting linear equations gives
where
with and .
Consider the natural initial condition in which the particle starts at x = 0, with equal probability to be in either the + or the − state. The Fourier transform of the initial probability is then P̃±(k,0) = 1/2. Using this in (7) and simplifying, we find
From (8), the Fourier transform of the total probability is
We can alternatively derive this result as follows. The displacement of an RTP that starts at x = 0 can be written formally by integrating the Langevin equation (1): it is the sum of a telegraphic contribution and a Brownian contribution. Since the random processes σ(t) and ξ(t) are independent of each other, the characteristic function of the total displacement factorizes into a product of the characteristic functions of the two contributions. It is easy to see that the expression in (9) is indeed of this product form, once one identifies the Gaussian factor with the Brownian motion. The remaining factor describes the motion of an RTP with D = 0, whose dynamics is governed by the telegrapher's equation and whose characteristic function can be computed explicitly (see e.g. [31, 32]). This immediately leads to the expression in (9).
Using this product structure, we invert the Fourier transform in (9) to express the probability P(x,t) in convolution form: the convolution of the inverse transform of the Gaussian factor with the inverse transform of the telegraphic factor. Using the explicit expression of the latter from [31, 32], we obtain
where In is the nth-order modified Bessel function of the first kind and Θ is the Heaviside step function. Note that in the limit t → ∞, P(x,t) reduces to a simple Gaussian with an effective diffusion constant. This can easily be seen from (9), where both factors become Gaussian in k as t → ∞.
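The telegraphic (D = 0) factor can be evaluated numerically from its Bessel-function form. The sketch below uses the classical Kac expression for the telegraph propagator in the dimensionless units used here (v0 = γ = 1); the prefactors are quoted from that classical result, taken here as an assumption, and are validated by checking the normalization and the second moment.

```python
import numpy as np
from scipy.special import i0, i1

# Bulk (continuous) part of the D = 0 telegraph propagator in dimensionless
# units (v0 = gamma = 1), following the classical Kac result:
#   P0(x,t) = (e^-t/2)[delta(x-t) + delta(x+t)]
#           + (e^-t/2)[I0(r) + t*I1(r)/r] for |x| < t,  r = sqrt(t^2 - x^2).
def telegraph_bulk(x, t):
    r = np.sqrt(np.clip(t**2 - x**2, 0.0, None))
    safe = np.where(r > 1e-12, r, 1.0)
    ratio = np.where(r > 1e-12, i1(safe) / safe, 0.5)   # I1(r)/r -> 1/2 at r = 0
    return 0.5 * np.exp(-t) * (i0(r) + t * ratio) * (np.abs(x) < t)

def trap(y, x):   # simple trapezoidal rule, numpy-version independent
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t = 3.0
xg = np.linspace(-t, t, 20001)
bulk = telegraph_bulk(xg, t)
mass = trap(bulk, xg) + np.exp(-t)                 # add the two delta peaks
m2 = trap(xg**2 * bulk, xg) + np.exp(-t) * t**2    # second moment
m2_exact = t - (1 - np.exp(-2 * t)) / 2            # from the sigma autocorrelation
```

The delta peaks at x = ±t carry the weight e^{−t} of trajectories that have not yet tumbled; the total mass and the second moment can both be checked against the exact expressions.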
It is instructive to examine the spatial moments of the probability distribution. All odd moments are zero by symmetry. Formally, the even moments of the distribution are given by
For the second moment, we find
where in the last simplification we have restored all the dimensional parameters. The above result has two non-trivial limiting cases. For times t ≫ γ⁻¹, (11) reduces to
In this limit, the finite switching rate γ leads to an enhancement of the microscopic diffusion coefficient in a manner reminiscent of hydrodynamic dispersion [42–45]. On the other hand, in the limit t → 0, we find
Thus the mean-square displacement crosses over from growing linearly with t at long times to growing quadratically with t at short times.
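The crossover can be checked numerically. The sketch below assumes the dimensionless form ⟨x²⟩(t) = 2Dt + t − (1 − e^{−2t})/2 implied by the two limits quoted above; the value of D is illustrative.

```python
import numpy as np

# Dimensionless second moment of the RTP displacement (assumed form,
# consistent with the short- and long-time limits quoted in the text):
def msd(t, D):
    return 2 * D * t + t - (1 - np.exp(-2 * t)) / 2

D = 0.25
t_small, t_large = 1e-4, 1e4
early, late = msd(t_small, D), msd(t_large, D)
# t -> 0: thermal diffusion plus ballistic active motion, <x^2> ~ 2*D*t + t^2
# t -> infinity: effective diffusion, <x^2> ~ 2*(D + 1/2)*t
```

The small-t expansion of t − (1 − e^{−2t})/2 starts at t², which is the ballistic active contribution; at large t the same term contributes an extra 1/2 to the effective diffusion constant.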
We can also compute higher-order derivatives of from which higher moments of the displacement can be deduced. The fourth moment is
The important feature of this last result is that as t → ∞, ⟨x⁴⟩ → 3⟨x²⟩², which is just the relation between the fourth and second moments of a Gaussian distribution. The behavior of the higher moments also conforms to that of a Gaussian distribution as t → ∞.
In figure 1, we plot the temporal evolution of this occupation probability for different values of the dimensionless diffusion coefficient. For values above a critical one, the probability distribution is unimodal at all times. However, below this critical value, the occupation probability evolves from a unimodal distribution at short times, to a multimodal distribution at intermediate times, and finally back to a unimodal distribution at long times. This non-trivial behavior at small diffusion coefficients arises from the competition between the stochastic flipping of particle states (at rate γ) and translational diffusion. For a small diffusion coefficient, the RTPs in the + (−) states move to the right (left) in an almost ballistic manner. This splits the initial unimodal distribution into a bimodal distribution with two symmetric peaks (see figure 1). While these two peaks move ballistically in opposite directions, they also broaden because of the true diffusion term. As a result, at intermediate times, when the tails of the two separated peaks meet at the centre, particles begin to accumulate there again (see figure 1(a)). Depending on the relative strengths of γ and the diffusion, this may produce a central peak before the two ballistically moving side peaks disappear. Once developed, the central peak broadens continuously and, on time scales much longer than the inverse flipping rate, the RTPs remix, leading to the effective diffusion constant discussed above. As a result, the multimodal distribution at intermediate times merges into a unimodal distribution which, as t → ∞, converges to a Gaussian. On the other hand, for large diffusion coefficients, the split peaks of the two RTP states overlap to such an extent that the full distribution always remains unimodal. This behavior suggests that there exists a critical diffusion coefficient at which the effects of translational diffusion and stochastic flipping balance each other.
To understand this transition, we plot P(0,t) for various values of the diffusion coefficient in figure 2(a), and notice that at short times the occupation probability at x = 0 is higher for smaller diffusion coefficients than for larger ones. It then crosses over to a lower value at intermediate times and finally becomes larger again at long times. Furthermore, to investigate the nature of the occupation probability at x = 0, we examine its second spatial derivative there. We find that, for small diffusion coefficients, x = 0 is a maximum of the distribution at short times, crosses over to a minimum at intermediate times, and finally becomes a maximum again at long times. At the critical value of the diffusion coefficient, these two crossover times merge, resulting in a unimodal distribution. Above the critical value, we find that the second derivative at x = 0 is always negative and hence the distribution is always unimodal.
In summary, we find that, in contrast to the Gaussian form for a Brownian particle, the probability distribution for an RTP can be multimodal, depending on the value of the dimensionless diffusion coefficient. This diversity in the probability distribution also occurs in other systems in which the motion of a diffusing particle is influenced by an interplay with a convection field that changes sign [48].
3.2. Bounded interval
We now treat an RTP in the finite interval. In this case, the probability distribution reaches a steady-state in the long-time limit. For v0 = 0, the particle performs pure Brownian motion and reaches a spatially uniform steady-state at long times. On the other hand, if D = 0, the particle is subjected only to the dichotomous noise σ(t). In this case the particle reaches a different steady-state in which, for any finite flipping rate, there is an accumulation of particles at the boundaries. However, for very large flipping rates, one regains a spatially uniform distribution with an effective diffusion coefficient. In the case where both D and v0 are nonzero, we anticipate a steady-state that is intermediate between these two extreme cases. We first solve for the probability distribution in the steady-state and then turn to the more complicated time-dependent solution.
In the steady-state, the dimensionless Fokker–Planck equations (3b) reduce to
which we have to solve subject to the boundary conditions (5). The details of this calculation are given in appendix A. The final result for the probability distribution is:
In figure 3, we compare (16) for the steady-state probability distribution with results of simulations of the Langevin equation (1), and find excellent agreement. We observe that the probability is higher near the boundaries than at the center of the interval, in contrast to the uniform density that one would observe in the absence of activity. Such accumulation of active particles near the boundaries of a confined domain is quite generic and has been observed in experimental systems such as motile rods [49] and bacterial suspensions [50].
In the limit D → 0, the peaks near the boundaries become progressively sharper, and eventually become delta-function peaks. The full distribution is given by
We observe that the probability is uniform everywhere except for the delta function peaks at the boundaries. This case has recently been considered in a similar context [40] and our method reproduces their results.
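The wall accumulation can be reproduced with a direct Monte Carlo sketch of the sticky-wall dynamics of section 2. The box [0, L] and all parameter values below are illustrative assumptions; a small dimensionless diffusion constant is chosen so that the wall peaks are pronounced.

```python
import numpy as np

# Monte Carlo sketch of an RTP in a box with clamping (zero-current) walls.
# Small dimensionless diffusion constant -> pronounced wall peaks.
# The domain [0, L] and all parameters are illustrative.
rng = np.random.default_rng(1)
v0, gamma, D, L = 1.0, 1.0, 0.02, 2.0
dt, n_steps, n_part = 0.005, 10000, 2000

x = np.full(n_part, L / 2)
sigma = rng.choice([-1.0, 1.0], size=n_part)
for _ in range(n_steps):
    x += v0 * sigma * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_part)
    np.clip(x, 0.0, L, out=x)          # particle sticks at the walls
    flip = rng.random(n_part) < gamma * dt
    sigma[flip] *= -1.0

# occupation of the outer 10% of the box versus a band of equal total
# width around the center
edge = np.mean((x < 0.1 * L) | (x > 0.9 * L))
center = np.mean(np.abs(x - L / 2) < 0.1 * L)
```

For small D the edge occupation clearly exceeds the central one, in qualitative agreement with figure 3; for large D both approach the uniform value.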
Let us now turn to the full time-dependent solution. We are interested in how the distribution approaches the steady-state in the long-time limit. To this end, we have to solve the coupled time-dependent Fokker–Planck equations (3b) within the interval, subject to the boundary conditions (5). For the time-dependent solution, we expand P±(x,t) in terms of a complete set of basis functions as
where λn are the eigenvalues and the corresponding eigenfunctions satisfy
subject to the boundary conditions (5). The coefficients an are given in terms of the left eigenvectors as
The left eigenvectors can be obtained as solutions of the adjoint Fokker–Planck operator. It can be shown that this operator has the same form as that in equation (19), but with the sign of the first-derivative (drift) term changed, and with Neumann boundary conditions for both components.
The details of calculating the eigenstates are given in appendix B. Here we compare the time-evolution of the probability density obtained from a numerical solution of the Fokker–Planck equations (3b) with the boundary conditions (5), against the spectral expansion given by (18). The ground-state eigenvalue is λ0 = 0, and the corresponding eigenstate (the steady-state) is known exactly and given by equation (16). In figure 4, we show the time-evolution of P(x,t) obtained from a numerical solution of equation (3b). The unimodal-to-bimodal crossover discussed for the unbounded system can also be seen in this figure. We also observe that our numerical solution converges to the exact steady-state. All the other eigenvalues have to be found numerically from the zeros of the determinant of the matrix in (B.6). The long-time relaxation of the system to the steady-state is determined by the eigenvalue with the largest non-zero real part, together with the corresponding eigenfunction. As an illustrative example, we choose a representative set of parameter values, for which the dominant eigenvalue can be computed numerically, while the corresponding eigenfunction is given by (B.5), with
Thus we know these functions explicitly, and in terms of them we expect, at long times,
The parameter a1 depends on the initial conditions and can be obtained from the corresponding left eigenvector. In figure 5(a), we plot the evolution of P(x,t) obtained from a direct numerical solution of equations (3b) and (5), starting from a specified initial condition. The plot in figure 5(b) shows P(x,t) − PSS and compares it with the prediction from the first term in the spectral representation, equation (21). The agreement is very good. In general, we find that eigenvalues can have imaginary parts (in which case they come in complex conjugate pairs), so one can observe oscillatory relaxation.
4. First-passage properties
In biological systems, we are often interested in the time required for a molecule diffusing in the interior of the cell to get adsorbed at the cell boundaries, as well as in the time required by a diffusing protein to find the correct binding sites. Similarly, in chemical reactions, an important quantity is the time spent by a reactive agent before it reaches catalytic boundaries. Hence it is important to compute quantities such as first-passage distributions, survival probabilities, and exit time distributions for the RTP. First-passage and survival probabilities of stochastic processes have been widely studied in the past (for reviews see [51, 52]). In the context of an RTP in one dimension, with the position evolving via equation (1) but without the diffusion term, i.e. for D = 0, first-passage properties have been studied before [47]. More recently, the mean first-passage time between two points in space was computed analytically for an RTP (again for D = 0) [54], and the mean first-passage time of an RTP in one dimension was measured numerically [55]. In this section we study analytically the first-passage probability of an RTP on the semi-infinite line and the exit problem from a finite interval, in the presence of both the telegraphic and the diffusive noise, i.e. when both terms in equation (1) are present.
4.1. First-passage on the semi-infinite line
We are interested in the probability that an RTP, which starts from a point x on the semi-infinite line with a given velocity, arrives at the origin for the first time at time t. This quantity is directly related to the survival probability of the RTP in the same geometry in the presence of an absorbing boundary at x = 0. Let S+(x,t) [S−(x,t)] denote the probability that the RTP, starting initially at x > 0 with a positive [negative] velocity, survives absorption at the origin x = 0 up to time t, i.e. it does not cross the origin up to time t. Given the Langevin equation (1), it is convenient to write the backward Fokker–Planck equations for the evolution of S±(x,t), where the initial position x is treated as a variable [53]. Consider the evolution over the time window [0, t + Δt] and break it into the two sub-intervals [0, Δt] and [Δt, t + Δt]. It follows from equation (1) that in the small time Δt following t = 0 (i.e. during the first interval), the position of the particle evolves to a new position x + Δx, where Δx is determined by the initial noises. For the subsequent evolution over the interval [Δt, t + Δt], the new starting position is then x + Δx. Thus the survival probability satisfies the evolution equations
where the angular brackets denote the average over Δx. Expanding in a Taylor series for small Δt, using the properties of the noises and taking the Δt → 0 limit, one directly arrives at a pair of backward equations which read, in dimensionless units,
These equations are valid for x > 0, with the initial conditions S±(x,0) = 1 for all x > 0. In addition, we need to specify the boundary conditions. As the starting point x → ∞, it is clear that S±(∞,t) = 1, since the particle will surely not cross the origin in a finite time. In contrast, the boundary condition at x = 0 is subtle: it depends on whether D = 0 or D > 0. Consider first the case D = 0, i.e. the absence of normal diffusion. In this case, if the particle starts at x = 0 with a negative velocity, it will surely cross the origin in a finite time. Hence
However, note that if the particle starts with a positive velocity, it can survive up to a finite time t; hence the boundary condition on S+(0,t) is unspecified. We will see below that the single boundary condition in equation (24a) is sufficient to make the solution of equation (23) unique. With normal diffusion, it is well known that if a particle crosses the origin at some time, it immediately recrosses it infinitely often [52]. Hence, if the particle starts at the origin, no matter whether the initial velocity is positive or negative, it will surely cross zero within a short time, provided D > 0. This follows from the fact that, over short times, the Brownian noise dominates over the drift term irrespective of its sign. Hence, in this case, we have the two boundary conditions
We will see later that, for D > 0, we indeed need both boundary conditions in equation (24b) to fix the solutions of equation (23) uniquely.
It is convenient to first take a Laplace transform of the pair of equations (23) with respect to time t. Using the initial conditions S±(x,0) = 1, it is easy to see that the Laplace transforms satisfy
where S̃±(x,s) = ∫₀^∞ e^{−st} S±(x,t) dt denotes the Laplace transform.
These equations can be made homogeneous by the shift U±(x,s) = S̃±(x,s) − 1/s, where the functions U± satisfy
Furthermore, by differentiating twice, one can write closed equations for U+ and U−
Below, we first solve the simpler case D = 0, followed by the more complex case D > 0.
4.1.1. The case D = 0.
This particular case has been considered earlier with space-dependent transition rates, where only the mean first-passage time was computed [56]. Here we are interested in the full first-passage time distribution, which we can obtain using the backward Fokker–Planck approach above. For D = 0, equations (27) are ordinary second-order differential equations with constant coefficients. Hence, we can try solutions of the form U± ∝ e^{−λx}. Substituting this in either of equations (27), we find that λ satisfies a quadratic equation with two roots. Obviously, the negative root is not admissible, since the solution must remain finite as x → ∞. Retaining only the positive root, the general solutions of equations (27) can be written as
The two unknown constants B and A are, however, related, as they must also satisfy the pair of first-order equations (26) (upon setting D = 0); this fixes B in terms of A. Hence, finally, using S̃± = U± + 1/s, we get
It remains to fix the only unknown constant A, which is done by using the boundary condition S̃−(0,s) = 0; this fixes A = −1/s. Hence, we obtain the final solutions
with .
The first-passage time distribution is simply related to the survival probability via
or in Laplace variables
where λ(s) is defined as above.
It turns out that the Laplace transforms in equation (32) can be inverted exactly. Before doing so, it is useful to extract the long-time asymptotics directly from equation (32) by considering the s → 0 limit. A scaling limit then naturally emerges where s → 0 and x → ∞, with the product √s·x kept fixed. This corresponds, in the time domain, to the scaling limit t → ∞, x → ∞, keeping the ratio x/√t fixed. Then, using the Laplace inversion
we find that both f± converge, in the scaling limit, to the Lévy–Smirnov distribution
Now we recall that for a Brownian particle with diffusion constant D0, the first-passage probability is given precisely [51] by the formula in equation (34). Hence, for our RTP that evolves via the telegraphic noise in equation (1) with D = 0, the first-passage probability in the scaling limit is equivalent to that of a normally diffusing particle with diffusion constant D0 = 1/2. Indeed, this result is also consistent with our findings in equation (12), which showed an effective normal diffusion at late times, with diffusion constant D0 = 1/2 when D = 0.
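The identification with equation (34) can be checked numerically: the Brownian first-passage density with diffusion constant D0, started at x > 0, has Laplace transform e^{−x√(s/D0)}. The sketch below verifies this by direct quadrature; the density form used is the standard Brownian result, with illustrative parameter values.

```python
import numpy as np

# Brownian first-passage density from x > 0 to the origin, diffusion D0:
#   f(t) = x / sqrt(4*pi*D0*t^3) * exp(-x^2/(4*D0*t)),
# whose Laplace transform is exp(-x*sqrt(s/D0)).
def f_brownian(t, x, D0):
    return x / np.sqrt(4 * np.pi * D0 * t**3) * np.exp(-x**2 / (4 * D0 * t))

x, D0, s = 1.0, 0.5, 0.7
t = np.linspace(1e-6, 60.0, 400001)
y = np.exp(-s * t) * f_brownian(t, x, D0)
# trapezoidal quadrature (the integrand is negligible beyond t = 60 for s = 0.7)
laplace_numeric = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))
laplace_exact = float(np.exp(-x * np.sqrt(s / D0)))
```

With D0 = 1/2 the exponent is x√(2s), which is exactly the small-s behavior of the D = 0 RTP result.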
To find the behavior of f± for finite t, we need to invert the Laplace transforms in equation (32) exactly. Fortunately, this can be done using the following Laplace inversions
where I0,1(t) are modified Bessel functions of the first kind. Taking the derivative with respect to x, we obtain
These results match with those obtained by Orsingher [57] using a different approach.
In figure 6 we verify these results for the first-passage time distributions against simulations. For any given x, the large-t limit of (35) can be obtained by using the asymptotic behavior In(t) ≃ e^t/√(2πt) as t → ∞. This yields, as t → ∞, for any fixed x,
While the tail of f−(x,t) behaves exactly as for Brownian diffusion with diffusion coefficient D0 = 1/2 and starting position x, the tail of f+(x,t) is equivalent to that of Brownian diffusion with starting position x + 1. The extra unit length is the average distance that an RTP with a positive velocity moves before taking its first turn. In figure 6 we compare the asymptotic results of (36) with numerical simulations and find very good agreement. From (36), the long-time behavior of the survival probabilities is given by
In comparison with a particle with a negative starting velocity, a particle with a positive starting velocity has a higher probability of survival.
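The D = 0 problem can also be simulated exactly by drawing exponential flight times, which gives an independent check of the asymptotic survival probabilities above. The observation time and sample size below are illustrative choices.

```python
import numpy as np

# Event-driven Monte Carlo for the D = 0 first-passage problem
# (dimensionless units v0 = gamma = 1): exponential flight times between
# tumbles, absorption the first time the trajectory reaches the origin.
rng = np.random.default_rng(2)

def survives(x0, sigma0, t_obs):
    x, sigma, t = x0, sigma0, 0.0
    while t < t_obs:
        tau = min(rng.exponential(1.0), t_obs - t)   # flight until next tumble
        if sigma < 0 and x <= tau:
            return False                             # hits the origin mid-flight
        x += sigma * tau
        t += tau
        sigma = -sigma
    return True

x0, t_obs, n_runs = 1.0, 200.0, 4000
s_plus = np.mean([survives(x0, +1.0, t_obs) for _ in range(n_runs)])
s_minus = np.mean([survives(x0, -1.0, t_obs) for _ in range(n_runs)])
# asymptotics quoted in the text: S-(x,t) ~ x*sqrt(2/(pi*t)),
#                                 S+(x,t) ~ (x+1)*sqrt(2/(pi*t))
tail = np.sqrt(2 / (np.pi * t_obs))
```

At t = 200 both estimates should already be close to the Brownian-like tails, with the extra unit length visible in the + case.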
4.1.2. The case D > 0.
As explained at the beginning of this subsection, for D > 0 the survival probabilities satisfy the boundary conditions in equation (24b), i.e. in the Laplace domain, S̃±(0,s) = 0. In this case, the shifted functions U± each satisfy a fourth-order ordinary differential equation with constant coefficients, as seen from equation (27). Trying again a solution of the form U± ∝ e^{−λx}, we find that λ now has four possible values, which are the roots of the fourth-order polynomial
There are four solutions, given by ±λ1 and ±λ2, where
Evidently, both λ1 and λ2 are positive. Again discarding the negative roots −λ1 and −λ2 (since the solution cannot diverge as x → ∞), the general solutions of equations (27) can be written as
where λ1,2 are given in equation (39). However, these solutions must also satisfy the individual second-order equations (26), which implies that A1, A2 are related to B1 and B2. Indeed, substituting these solutions into equation (26) gives the following relations
Note that we have used equation (38) to obtain the last two relations.
Hence, the solutions for the survival probabilities are given by
where B1, B2 are related to A1 and A2 via equation (41). We are still left with the two unknown constants A1 and A2. To fix them, we use the two boundary conditions S̃+(0,s) = S̃−(0,s) = 0. This gives two linear equations for A1 and A2, whose solution is
This then uniquely determines the solutions for the survival probabilities . The corresponding first-passage probabilities are given by
where the constants A1, A2, B1 and B2 are determined explicitly above, and λ1,2 are given in equation (39).
The first nontrivial check is the limit D → 0. In this limit, it is easy to verify from equation (39) that
In addition, one finds that as D → 0,
and consequently
We therefore recover the results in equation (32).
We now turn to the long-time asymptotics of equation (44) for arbitrary D. Hence we consider the s → 0 limit with D kept finite. In this limit, it is easy to check that, to leading order for small s,
Similarly, one can check that to leading order for small s
and consequently
Substituting these results together in equation (44), we find that in the scaling limit (, with the product fixed)
Upon inverting the Laplace transform using equation (33), we obtain our final results in the scaling limit
This result is precisely the first-passage time density of an ordinary Brownian motion with the effective diffusion constant D + 1/2. Note that this reduces to 1/2 as D → 0, recovering the result of the previous subsection. Moreover, this effective diffusion constant is consistent with our result in equation (12).
Unfortunately, unlike in the D = 0 case, for nonzero D we are not able to obtain explicit finite-time results for f±, because the Laplace transforms are difficult to invert.
4.2. Exit probabilities and exit times in the finite interval
We now investigate an RTP on a finite interval and address two questions: (a) the probability for the particle, starting at x, to eventually reach a given boundary, and (b) the mean time for the particle to exit the interval through either boundary. Let E+(x) (E−(x)) denote the exit probabilities, namely the probabilities that a particle starting at x with velocity +1 (−1) exits through the left boundary. By comparison, the corresponding exit probability for isotropic diffusion is simply linear in x; that is, it decreases linearly with the initial distance from the left edge.
It is easily seen that these hitting probabilities obey the backward equations [51]
subject to the appropriate boundary conditions at the two ends of the interval. These boundary conditions fix the constants in the general solution, and thus the problem is formally solved. The calculation is conceptually straightforward but tedious, and the details were carried out with Mathematica. The basic steps and the final expressions for the exit probabilities are given by (C.5) in appendix C.
Figure 7 shows the exit probabilities for representative values of the dimensionless diffusion coefficient. As one expects, for large values the exit probabilities are close to the isotropic random-walk form. However, for small values, E+ and E− become very different: E+ decreases much more rapidly with x than the isotropic result, while E− decreases much more slowly. Notice also that the exit probability averaged over the two velocity states deviates significantly from the corresponding exit probability for unbiased diffusion.
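The strong splitting between E+ and E− can be reproduced with a minimal event-driven Monte Carlo sketch of the D = 0 exit problem; the interval [0, L], the convention that "exit" means absorption at the left wall, and all parameter values are assumptions of this sketch.

```python
import numpy as np

# Event-driven Monte Carlo for the D = 0 exit problem on [0, L]
# (dimensionless units v0 = gamma = 1).  E+/- is estimated as the fraction
# of runs, started at x with velocity +/-1, that leave through the left wall.
rng = np.random.default_rng(3)

def exits_left(x0, sigma0, L):
    x, sigma = x0, sigma0
    while True:
        tau = rng.exponential(1.0)        # flight time until the next tumble
        if sigma < 0 and x <= tau:
            return True                   # reaches x = 0 during this flight
        if sigma > 0 and L - x <= tau:
            return False                  # reaches x = L first
        x += sigma * tau
        sigma = -sigma

L, x0, n_runs = 1.0, 0.3, 5000
e_plus = np.mean([exits_left(x0, +1.0, L) for _ in range(n_runs)])
e_minus = np.mean([exits_left(x0, -1.0, L) for _ in range(n_runs)])
```

A particle launched toward the left wall is far more likely to exit there than one launched away from it, which is the activity-induced splitting seen in figure 7.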
Let us now turn to the exit times. Let t+(x) (t−(x)) be the mean first-passage time (to either boundary) for a particle that starts at x in the + (−) state. Again using the formalism given in [51], it is easily seen that these exit times obey the backward equations
with boundary conditions corresponding to the particle being immediately absorbed if it starts at either end of the interval. The solutions to equation (52) are obtained using the same approach as for equation (51) (see appendix D). While the resulting expressions for the finite interval are too long to be displayed, the behavior of the first-passage times is easily visualized (figure 8). For small bias velocity, the scaled diffusion constant (see section 2) is large. In this limit the diffusive part of the dynamics dominates for both types of particles. As a result, t+ ≈ t−, and both become very close to the mean exit time for isotropic diffusion. On the other hand, as the bias velocity is increased, the scaled diffusion constant decreases and the active contribution to the motion dominates. In this limit the exit times t+ and t− strongly deviate from each other.
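The diffusion-dominated limit can be checked with a small simulation. The sketch below assumes the effective diffusion constant D + 1/2 (dimensionless units v0 = γ = 1) and the isotropic mean exit time x(L − x)/(2Deff); all parameter values are illustrative.

```python
import numpy as np

# Mean exit time from [0, L] in the diffusion-dominated regime (large D),
# compared with the isotropic-diffusion estimate x*(L-x)/(2*(D + 1/2)),
# where D + 1/2 is the (assumed) dimensionless effective diffusion constant.
rng = np.random.default_rng(4)
D, L, x0 = 5.0, 2.0, 0.6
dt, n_part = 2e-4, 3000

def mean_exit_time(sigma0):
    x = np.full(n_part, x0)
    sigma = np.full(n_part, sigma0)
    t_exit = np.zeros(n_part)
    alive = np.ones(n_part, dtype=bool)
    t = 0.0
    while alive.any():
        t += dt
        n = int(alive.sum())
        x[alive] += sigma[alive] * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n)
        flip = np.zeros(n_part, dtype=bool)
        flip[alive] = rng.random(n) < dt      # tumble rate gamma = 1
        sigma[flip] *= -1.0
        exited = alive & ((x <= 0.0) | (x >= L))
        t_exit[exited] = t
        alive &= ~exited
    return float(t_exit.mean())

t_plus = mean_exit_time(+1.0)
t_minus = mean_exit_time(-1.0)
t_diff = x0 * (L - x0) / (2 * (D + 0.5))      # isotropic-diffusion estimate
```

For large D the two exit times nearly coincide and track the isotropic estimate; repeating with small D makes them split strongly, as in figure 8.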
5. Summary
We studied a one-dimensional model of run-and-tumble particles in the presence of an additional diffusion term. On the infinite line we find that an initial localized distribution of particles evolves to a Gaussian distribution at long times, with the diffusion constant renormalized by the active particle speed and the tumble rate, while at intermediate times the density distribution can have a multimodal structure.
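This renormalization is easy to verify with a direct simulation. The sketch below assumes the convention in which the velocity sign flips at rate γ, so the long-time variance grows as 2(D + v²/2γ)t; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
D, v, gamma = 0.5, 1.0, 1.0
dt, T, Np = 0.02, 50.0, 5000

x = np.zeros(Np)                                 # all particles start at the origin
sigma = rng.choice([-1.0, 1.0], size=Np)         # initial +/- velocity states
for _ in range(int(T / dt)):
    # Euler step: active drift plus thermal noise
    x += sigma * v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(Np)
    flip = rng.random(Np) < gamma * dt           # tumble with probability gamma*dt
    sigma[flip] *= -1.0

D_eff = D + v**2 / (2.0 * gamma)                 # renormalized diffusion constant
ratio = x.var() / (2.0 * D_eff * T)              # close to 1 at long times
```

At t = 50 (many tumble times) the measured variance matches 2 D_eff t to within a few per cent, and a histogram of x at this time is visually Gaussian.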
In a finite domain with reflecting walls, we found that the RTPs reach a steady-state with peaks in the density distributions at the boundaries, which is in agreement with earlier observations of particle accumulation at walls. We also studied the approach to the steady-state by examining the spectral structure of the corresponding Fokker–Planck operator. The eigenvalues of this operator appear as the zeros of a complicated determinant, and finding them is highly non-trivial, even numerically. We numerically evaluated the two eigenvalues with the largest real parts. It is an interesting mathematical problem to find the full spectrum as well as the associated eigenvectors of the Fokker–Planck operator.
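The wall accumulation itself is simple to reproduce in a minimal simulation: reflecting walls fold the position back into the box without changing the internal velocity state, so a particle keeps pushing against the wall until it tumbles. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
D, v, gamma, L = 0.01, 1.0, 1.0, 1.0
dt, T, Np = 0.01, 20.0, 5000

x = np.full(Np, L / 2.0)                         # start in the middle of the box
sigma = rng.choice([-1.0, 1.0], size=Np)
for _ in range(int(T / dt)):
    x += sigma * v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(Np)
    # reflecting walls: fold the position back; the internal state is unchanged,
    # so wall-facing particles stay pressed against the wall until they tumble
    hit_left, hit_right = x < 0.0, x > L
    x[hit_left] *= -1.0
    x[hit_right] = 2.0 * L - x[hit_right]
    flip = rng.random(Np) < gamma * dt
    sigma[flip] *= -1.0

counts, _ = np.histogram(x, bins=20, range=(0.0, L))
```

The two edge bins end up far more populated than the bins in the bulk, in contrast to a Brownian particle, whose steady-state density in the box is flat.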
We also investigated the first-passage probability distribution of an RTP on the semi-infinite line and obtained explicit closed-form expressions for the distribution, both in the limiting case of zero diffusion and in the more challenging case of non-zero diffusion. In a finite domain, we obtained exact results for the exit probability and the mean exit time.
We believe that our results for non-interacting RTPs in one dimension will be informative for the study of models of other active particle systems in higher dimensions. Another possible extension of this work is to study active particles in external potentials and in the presence of mutual interactions. It will be interesting to verify some of our analytic observations in experimental systems such as vibrated granular systems and Janus particles [7].
Acknowledgment
KM acknowledges the S N Bhatt Memorial Excellence Fellowship Program 2016 at ICTS, and INSPIRE-SHE (awarded by DST, Government of India) for funding her research. VJ is supported by a post-doctoral fellowship in the Max Planck partner group at ICTS. AD, AK, SNM and SS acknowledge support from the Indo-French Centre for the Promotion of Advanced Research (IFCPAR) under Project No. 5604-2. AD, AK, SNM and SS also acknowledge the large deviation theory program at ICTS (code: ICTS/Prog-ldt/2017/8) during which many discussions were held. SNM wishes to thank U Basu, M R Evans, A Rosso, and G Schehr for useful discussions, and acknowledges a Simons Foundation grant from ICTS. KVK's research is supported by the Department of Biotechnology, India, through a Ramalingaswami re-entry fellowship and by the Max Planck Society and the Department of Science and Technology, India, through a Max Planck Partner Group at ICTS-TIFR. SR acknowledges support from grants DMR-1608211 and DMR-1623243 from the National Science Foundation and from the ICTS for supporting his participation in the Bangalore school on statistical physics - VIII (code: ICTS/Prog-bssp/2017/06). He also thanks Uttam Bhat for many helpful discussions.
Appendix A. Steady-state probability distribution in the interval
We define and use . With these definitions, (15) can be rewritten as:
and the boundary conditions (5) now read
Integrating the first of (A.1), we get where C1 is an integration constant. From the second boundary condition in (A.2), we get C1 = 0. Hence for all , and substituting this into the second of (A.1) leads to
This equation has the general solution
where a and b are constants to be determined.
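The structure of this solution can be verified symbolically. Writing S = P+ + P− and Q = P+ − P−, and assuming the tumbling convention in which the zero-flux first integral reads D S' − vQ = 0 and Q satisfies Q'' = λ²Q with λ² = (v² + 2γD)/D² (the labels S, Q, λ here are our own, standing in for the definitions above), a quick SymPy check confirms that the sinh/cosh pair is consistent with both relations:

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)
D, v, g, a, C = sp.symbols('D v gamma a C', positive=True)

lam = sp.sqrt(v**2 + 2 * g * D) / D                   # inverse boundary-layer width
Q = a * sp.sinh(lam * (x - L / 2))                    # odd about the mid-point
S = C + (v * a / (D * lam)) * sp.cosh(lam * (x - L / 2))

flux = sp.simplify(D * sp.diff(S, x) - v * Q)         # zero-flux first integral
odeQ = sp.simplify(D * sp.diff(Q, x, 2) - (v**2 / D + 2 * g) * Q)
# both expressions reduce identically to zero
```

Note that the cosh profile for the total density is maximal at the two walls, which is the analytical counterpart of the boundary accumulation discussed in the main text.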
Once is known, can be obtained by integrating :
where C2 is another integration constant. The three constants and C2 can be obtained using the boundary conditions and the normalization condition. Substituting the solutions (A.4) and (A.5) into the first of (A.2) gives
whose solution is
Finally, we invoke the normalization condition to obtain
and
The latter is (16) in the main text.
Appendix B. Time-dependent probability distribution in the interval
We now construct the eigenstates . First we try a solution of the form
Inserting this form in (19), we get
To get non-zero solutions for , we require the determinant of the matrix in the above equation to vanish: which determines β as a function of λ. This is a fourth-order equation in β whose solutions are
where and . Corresponding to the resulting four values of β, we get the four corresponding solutions
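Although the explicit roots are unwieldy, they are straightforward to evaluate numerically. The sketch below assumes the dispersion relation takes the form (Dβ² − γ − λ)² = v²β² + γ², which is a quadratic in u = β² (the function names are our own; λ may be complex):

```python
import numpy as np

def spatial_exponents(lam, D, v, gamma):
    """The four roots beta of (D b^2 - gamma - lam)^2 = v^2 b^2 + gamma^2,
    obtained by first solving the quadratic it defines in u = b^2."""
    coeffs = [D**2,
              -(2.0 * D * (gamma + lam) + v**2),
              (gamma + lam)**2 - gamma**2]
    u = np.roots(coeffs).astype(complex)          # two values of beta^2
    return np.concatenate([np.sqrt(u), -np.sqrt(u)])

def dispersion(b, lam, D, v, gamma):
    """Residual of the quartic; zero when b is a valid spatial exponent."""
    return (D * b**2 - gamma - lam)**2 - v**2 * b**2 - gamma**2
```

Each returned β satisfies the dispersion relation to machine precision; scanning λ over the complex plane and asking when a combination of the four modes also satisfies the boundary conditions then yields the discrete eigenvalues discussed below.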
where and . We use these four states to construct the eigenstates that satisfy the boundary conditions. Thus let
Substituting this solution into the required boundary conditions (5), and after some rearrangement, we get
with , and allowed to take values . To get non-zero solutions for , we require . Both the real and imaginary parts of this equation must vanish, which is possible only at certain (in general complex) values of λ; these values give us the required eigenvalue set , . We order the eigenvalues by decreasing real part. For each allowed , one can find the corresponding value of from (B.3). If the βs are non-degenerate, then the associated eigenvector can be obtained from (B.6). This determines the eigenstates completely, up to a normalization constant.
We expect the largest eigenvalue to be real, corresponding to the steady-state that was already determined, see (16). This solution can be recovered from the present approach, but requires some extra care since, in this case, and . The two independent states corresponding to are given by
Taking a linear combination and imposing the boundary conditions leads us to the solution given in equations (16) and (A.8).
Appendix C. Solution for E±(x) on the finite interval
To solve equations (51), we first define and to recast (51) as
Differentiating the second of (C.1) and using the first to eliminate gives , with
The solution for is , where A and B are constants. Integrating once gives , and integrating again gives Se. The final result is
where are constants. However, to satisfy the second of equation (C.1), we must have E = 2C. Using this and finally solving for gives
For exit via the left edge of the finite interval , the appropriate boundary conditions are and . Thus, from equation (C.3), we need to solve
where we have introduced
Solving these four linear equations with Mathematica, substituting the coefficients , and C into (C.3), and simplifying, the exit probabilities are:
Some representative graphs of are given in figure 7.
Appendix D. Solution for t±(x) on the finite interval
To solve (52), we again define , to give
We follow similar steps to those in appendix C to obtain
where C and F are constants; the additional terms relative to (C.2) stem from the inhomogeneous term in equation (D.1) that is absent from (C.1). The solutions for are
For the exit times in a finite interval of length , the boundary conditions are . Applying these boundary conditions to (D.3) fixes the constants and F, from which the results shown in figure 8 are obtained, again using Mathematica.