Letter · Open Access

Learning run-and-tumble chemotaxis with support vector machines

Rahul O. Ramakrishnan and Benjamin M. Friedrich

Published 12 May 2023 · Copyright © 2023 The author(s) · Editor's Choice
Citation: Rahul O. Ramakrishnan and Benjamin M. Friedrich 2023 EPL 142 47001
DOI: 10.1209/0295-5075/acd0d3

Abstract

To navigate in spatial fields of sensory cues, bacterial cells employ gradient sensing by temporal comparison for run-and-tumble chemotaxis. Sensing and motility noise imply trade-off choices between precision and accuracy. To gain insight into these trade-offs, we learn optimal chemotactic decision filters using supervised machine learning, applying support vector machines to a biologically motivated training dataset. We discuss how the optimal filter depends on the level of sensing and motility noise, and derive an empirical power law for the optimal measurement time $T_{\textrm{eff}}\sim D_{\textrm{rot}}^{-\alpha }$ with $\alpha =0.2, \ldots ,0.3$ as a function of the rotational diffusion coefficient Drot characterizing motility noise. A weak amount of motility noise slightly increases chemotactic performance.


Published by the EPLA under the terms of the Creative Commons Attribution 4.0 International License (CC-BY). Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Introduction

Motile biological cells can steer their swimming paths up a concentration gradient of chemoattractant molecules in a process termed chemotaxis [1–6]. At the microscopic scale of cells, sensing and motility noise severely limit the precision of chemotaxis [7–14]. Biological cells measure chemoattractant concentration by registering binding events of diffusing chemoattractant molecules to receptors on the cell surface. The number of molecules registered in a short time interval is thus a random variable, resulting in inevitable sensing noise. Brownian motion as well as active fluctuations of cell motility randomize the orientation of the cell, resulting in motility noise [15,16]. In a seminal paper, Berg and Purcell argued that the relative error in sensing a concentration c due to molecular shot noise scales as $\delta c/c \sim (r\,T)^{-1/2}$ , where $r\sim D c L$ is the detection rate of chemoattractant molecules, which depends on the diffusivity D of these molecules and the size L of the cell, while T is the measurement time [7]. The choice of optimal measurement time implies a trade-off between precision and accuracy: longer measurement times increase the precision of concentration measurements, as measurement noise can be effectively averaged out, but reduce the accuracy, as the true concentration dynamically changes in time [17]. For a concentration that fluctuates on a characteristic time-scale τ, the optimal choice of measurement time $T_{\textrm{eff}}\sim \sqrt {\tau /r}$ results in the modified scaling $\delta c/c \sim (r\tau )^{-1/4}$  [18]. An analogous scaling was later found for optimal gradient sensing by spatial comparison, where a stationary cell determines the direction of a spatial concentration gradient by comparing local concentration across its diameter [19].

To determine not only a concentration, but a spatial concentration gradient, biological cells use different strategies [3,20]. Possibly the most natural strategy, spatial comparison, compares concentrations across the diameter of the cell, and is employed, e.g., by crawling immune cells [4]. Intriguingly, many smaller cells such as the bacterium E. coli do not use spatial comparison, but instead determine concentration gradients by temporal comparison, i.e., by estimating the time derivative of the local concentration signal $c(\textbf{r}(t))$ , while the cell actively moves along a path $\textbf{r}(t)$ in a concentration gradient [1]. Generally, small and comparatively fast cells $(L\lesssim 10\,\mathrm{\mu } \textrm {m},\,v\gtrsim 10\,\mathrm{\mu } \textrm {m}/\textrm {s})$ employ temporal comparison for gradient sensing, while larger and slower cells often employ spatial comparison [20]. In fact, the signal-to-noise ratio of spatial comparison $(\textrm{SNR} = D L^{3} T\,|\nabla c|^{2}/c)$ depends strongly on agent size L. Moreover, the measurement time T is ultimately limited by the persistence time $1/D_{\textrm{rot}}$ , which is set by the rotational diffusion coefficient $D_{\textrm{rot}} \sim L^{-3}$ of Brownian motion for small cells [15]. For micrometer-sized bacteria, gradient sensing by spatial comparison would be simply too noisy. Instead, these cells employ a chemotaxis strategy known as run-and-tumble, where short episodes of approximately straight swimming (runs) are interrupted by random reorientation events (tumbles), during which the cell picks a new swimming direction at random [1]. By estimating the time derivative of the chemoattractant concentration signal during a run, and extending the duration of runs if this temporal change is positive, these cells realize a biased random walk with net motion up the concentration gradient. Several works investigated optimal gradient-sensing strategies in the presence of noise [21,22]. 
A key question concerns the optimal measurement time Teff for taking the binary decision whether to tumble or not.

Here, we employ supervised machine learning to determine kernels for optimal decision filters for run-and-tumble chemotaxis using support vector machines. Time-continuous decision filters are effectively represented by finite weight vectors that can be learned efficiently. The learned navigation strategy enables search agents to find a target with a chemotactic index close to that of an ideal run-and-tumble agent, demonstrating that machine learning can identify near-optimal functional behavior. We empirically show that the optimal measurement time follows a power law $T_{\textrm{eff}} \sim D_{\textrm{rot}}^{-\alpha }$ with exponent $\alpha =0.2,\ldots ,0.3$ .

Model

To describe a minimal chemotactic agent, we extend the popular model of an Active Brownian Particle (ABP) [23] by introducing tumbles as random re-orientation events, which are triggered by a decision rule that takes noisy concentration measurements of the ABP along its trajectory as input. Specifically, we consider a point-like agent moving along a trajectory $\textbf{r}(t)$ in two-dimensional space in a spatial concentration field $c(\textbf{x})$ of chemoattractant molecules released by a hidden target at the origin, see fig. 1(a). The diffusion of molecules to an absorbing surface, here the circumference of the agent, is well described by a Poisson point process of arriving molecules with rate $r(t) = \lambda \, c(\textbf{r}(t))$ (see footnote 1). Here, λ is a binding rate constant (which depends as $\lambda \sim D L$ on the diffusion coefficient D of the chemoattractant molecules, the size L of the agent, and its surface coverage with chemoattractant receptors [15]). We write $s(t) = \sum _{k} \delta (t-t_{k})$ for the train of stochastic binding events occurring at times tk ; s(t) represents an (inhomogeneous) Poisson point process with rate r(t): $\langle s(t) \rangle = r(t)$ .
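The binned binding statistics implied by this Poisson point process can be sketched numerically. The following minimal numpy example samples counts with rate $r(t) = \lambda\,c(\textbf{r}(t))$; the bin width, the concentration values along the path, and the helper name `binding_counts` are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

lam = 1e7    # binding rate constant lambda, in muM^-1 s^-1 (value from fig. 1)
dt = 0.01    # width of one temporal bin, in s (illustrative choice)

def binding_counts(c_along_path):
    """Sample binned counts of an inhomogeneous Poisson point process
    with rate r(t) = lam * c(r(t)); c_along_path gives the concentration
    (in muM) at the bin midpoints."""
    return rng.poisson(lam * np.asarray(c_along_path) * dt)

# concentration sampled along a run moving down-gradient
c_path = 1e-3 * np.exp(-np.linspace(0.0, 1.0, 100))  # muM
s = binding_counts(c_path)
```

Each entry of `s` is one noisy concentration measurement of the kind shown in fig. 1(c); the shot noise grows in relative terms as the concentration decreases.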


Fig. 1: Learning run-and-tumble chemotaxis. (a) Schematic of run-and-tumble chemotaxis as employed by swimming bacteria. The agent is subject to both motility noise, manifested in a non-zero rotational diffusion coefficient Drot, and sensing noise due to stochastic binding events of chemoattractant molecules to cell surface receptors. (b) Short training trajectories of duration 1 s pointing either up-gradient (green) or down-gradient (red) at the end of the trajectory (with uniformly distributed initial position and isotropic initial direction), used for training a machine-learning (ML) agent performing run-and-tumble chemotaxis. (c) Typical time series of noisy concentration measurements, corresponding to different initial radial distances R0 = 200 μm, 400 μm, respectively. (d) Example memory kernel $w(\tau )$ learned for this training data set using support vector machines. (e) Typical trajectories of run-and-tumble agents using the memory kernel from panel (d) for decision making. The majority of agents (green) successfully find the hidden target at the origin (magenta) of the disk-like search domain with absorbing boundary conditions at Rmax = 500 μm. Parameters: $c(R_{\textrm{target}}) = 10\,\mathrm{\mu }{\textrm {M}}$ , $c(R_{\textrm{max}}) = 0.1\,\textrm {nM}$ , thus, $1/k \approx 43\,\mathrm{\mu }{\textrm {m}}$ ; v0 = 20 μm s⁻¹, $D_{\textrm{rot}} = 0.2\,\textrm {s}^{-1}$ (≈ 2 times the measured rotational diffusion coefficient of E. coli to emphasize motility noise [26]), λ = 10⁷ μM⁻¹ s⁻¹.


For the sake of explicitness, we assume a radially symmetric concentration profile with exponential decay $c(\textbf{x}) \sim \exp (-k |\textbf{x}|)$ (see footnote 2). The agent performs runs interrupted by tumbling events that randomize its swimming direction; specifically, the swimming path $\textbf{r}(t)$ obeys $\dot {\textbf{r}} = v_{0}\,\textbf{t}$ , where v0 is a constant swimming speed and $\textbf{t}(t)$ the local tangent vector. The agent is subject to motility noise modeled as rotational diffusion with rotational diffusion coefficient Drot, i.e., the direction angle ψ characterizing the direction of the tangent $\textbf{t} = \cos \psi \,\textbf{e}_{x} + \sin \psi \,\textbf{e}_{y}$ undergoes Brownian motion as $\dot {\psi } = \sqrt {2D_{\textrm{rot}}}\,\xi (t)$ , where ξ (t) is Gaussian white noise with zero mean and unit variance $\langle \xi (t)\xi (t^{\prime }) \rangle = \delta (t-t^{\prime })$ . Thus, the agent moves like an Active Brownian Particle (ABP) [23] during runs. Tumbling events are assumed to be instantaneous and cause a randomization of swimming direction with uniformly distributed new swimming direction angle $\psi \in [0,2\pi )$ . After tumbling, agents exhibit a refractory period of Tmax before they may tumble again.
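A single run of this Active Brownian Particle can be integrated with a straightforward Euler-Maruyama scheme; the sketch below uses the parameter values of fig. 1, while the step size and the function name are our choices:

```python
import numpy as np

rng = np.random.default_rng(1)

v0 = 20.0       # swimming speed, in mum/s (fig. 1)
D_rot = 0.2     # rotational diffusion coefficient, in 1/s (fig. 1)
dt = 0.01       # integration time step, in s (illustrative)
n_steps = 100   # run duration 1 s

def simulate_run(x0, y0, psi0):
    """Euler-Maruyama integration of one run:
    psi_dot = sqrt(2 D_rot) xi(t),  r_dot = v0 (cos psi, sin psi)."""
    psi = psi0 + np.cumsum(np.sqrt(2.0 * D_rot * dt) * rng.standard_normal(n_steps))
    x = x0 + np.cumsum(v0 * np.cos(psi) * dt)
    y = y0 + np.cumsum(v0 * np.sin(psi) * dt)
    return x, y, psi

x, y, psi = simulate_run(0.0, 0.0, 0.0)
```

Since the speed is constant, the arc length of the run is exactly v0 Tmax = 20 μm, while rotational diffusion makes the net displacement shorter.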

Supervised learning of memory kernels using support vector machines

Run-and-tumble chemotaxis relies on dynamically adjusting the length of runs, by triggering tumbling events earlier or later, depending on whether the agent is moving down-gradient or up-gradient, respectively. For this, the Machine-Learning (ML) agent needs to decide whether the concentration signal measured during its current run is decreasing or increasing with time. This is a non-trivial decision task, as concentration measurements are corrupted by both sensing and motility noise. As a result, run-and-tumble chemotaxis is highly stochastic on time-scales of a single run, but, if tuned correctly, can give robust net motion up-gradient on longer time-scales.

Deciding whether a train of binding events s(t) represents a time-dependent concentration $c(\textbf{r}(t))$ that decreases or increases with time t can be efficiently addressed by a linear decision filter with memory kernel $w(\tau )$ and bias b,

$\int _{0}^{T}\!\textrm{d}\tau \, w(\tau )\, s(t-\tau ) + b \;\gtrless\; 0, \qquad\qquad (1)$

where the integral represents a weighted sum of the recent past of the input s(t) up to a maximal measurement time T. We use supervised machine learning to learn optimal memory kernels $w_{j}=w(t_{j})$ using an appropriate discretization of time, and systematically explore how these memory kernels depend on noise levels.

For efficient learning, we use training datasets consisting of a large number of short trajectories that were evenly distributed over the search domain to ensure unbiased learning, see fig. 1(b). Specifically, we constructed training datasets consisting of short trajectories $\textbf{r}(t)$ of duration Tmax = 1 s representing uninterrupted runs with rotational diffusion coefficient Drot. Start points of training trajectories were uniformly distributed in a disk-like search domain with isotropically distributed initial tangent directions. For each training trajectory, we simulated time-discrete realizations of the train of stochastic binding events s(t), see fig. 1(c). Specifically, we represent s(t) as a vector with components $s_{j}=\int _{t_{j-1}}^{t_{j}}\!\textrm{d}t\,s(t)$ , $j=1,\ldots ,m$ , which are determined as Poissonian random variables with mean and variance $\lambda \,c(\textbf{r}(t_{j}))\,\Delta t$ , while $t_{j}=j\Delta t$ with $\Delta t=T_{\textrm{max}}/m$ , where m is the number of temporal bins (here, m = 100).

Next, we implemented supervised learning by first assigning each realization $\{s_{j}\}$ an m-dimensional input vector u with components $u_{m+1-j} = s_{j}/\langle s \rangle $ , where we normalized by the mean count $\langle s \rangle = \sum _{j=1}^{m} s_{j}/m$ , see fig. 1(c). Second, we assign a class label $y=\pm 1$ , where y = 1 if the tangent vector $\textbf{t}(T_{\textrm{max}})$ at the end of the run is pointing up-gradient, and $y=-1$ otherwise. The rationale behind this is that it would be favorable for a ML agent to continue its run beyond $t>T_{\textrm{max}}$ along its currently favorable direction $\textbf{t}(T_{\textrm{max}})$ if the class label is $y=+1$ , but it would be more favorable to tumble if $y=-1$ . Our goal is now to find an m-dimensional weight vector w and scalar bias b, such that the predicted class label, given by the sign of $\textbf{w}^{T}\textbf{u} + b$ , matches the true label y as often as possible.
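The construction of an input vector and class label from one training run can be summarized in a few lines. In the sketch below, the function name and toy numbers are illustrative; up-gradient is taken as pointing toward the origin, consistent with the radially decaying concentration profile:

```python
import numpy as np

def make_training_pair(s, r_end, t_end):
    """Build one (input vector, class label) pair from a training run.
    s     : binned binding counts s_1..s_m along the run
    r_end : position at the end of the run
    t_end : unit tangent vector at the end of the run
    For c ~ exp(-k |x|), the up-gradient direction at r_end points
    from r_end toward the origin."""
    s = np.asarray(s, dtype=float)
    u = (s / s.mean())[::-1]        # u_{m+1-j} = s_j / <s>, most recent bin first
    up_gradient = -np.asarray(r_end) / np.linalg.norm(r_end)
    y = 1 if np.dot(t_end, up_gradient) > 0 else -1
    return u, y

# a run ending at (100, 0) while heading toward the origin: label +1
u, y = make_training_pair([3, 2, 2, 1], r_end=[100.0, 0.0], t_end=[-1.0, 0.0])
```

The time reversal in `u` places the most recent measurement first, so the learned weight vector can be read directly as a memory kernel over the delay τ.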

We used a training dataset consisting of $n=10^{4}$ short training trajectories, with corresponding input vectors $\textbf{u}_{i}$ and class labels yi for $i=1, \ldots , n$ .

To find the optimal weight vector w and bias b, we used LinearSVC to solve the optimization problem [24]

$\min _{\textbf{w},\,b}\; \tfrac{1}{2}\,\textbf{w}^{T}\textbf{w} \;+\; C \sum _{i=1}^{n} \max \!\left[0,\; 1 - y_{i}\left(\textbf{w}^{T}\textbf{u}_{i} + b\right)\right], \qquad\qquad (2)$

where C acts as an inverse regularization parameter (here C = 10). The learned weight vector w represents a time-discrete representation of a memory kernel $w(\tau )$ with $w(j\Delta t) = w_{j}$ , see fig. 1(d) for a typical example. Memory kernels for additional values of λ and Drot are shown in fig. S1 in the Supplementary Material (SM).
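In the paper, this optimization is solved by scikit-learn's LinearSVC; as a dependency-free illustration, the same soft-margin hinge-loss objective can also be minimized by plain subgradient descent. The learning rate, epoch count, and toy dataset below are our assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_linear_svm(U, y, C=10.0, lr=1e-4, epochs=500):
    """Subgradient descent on the soft-margin SVM objective
    (1/2) ||w||^2 + C * sum_i max(0, 1 - y_i (w . u_i + b))."""
    n, m = U.shape
    w, b = np.zeros(m), 0.0
    for _ in range(epochs):
        margins = y * (U @ w + b)
        viol = margins < 1.0                        # margin violators
        grad_w = w - C * (y[viol, None] * U[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy separable dataset: labels set by a hidden derivative-like weight vector
U = rng.standard_normal((400, 20))
w_true = np.r_[np.ones(10), -np.ones(10)]
y = np.sign(U @ w_true)
w, b = train_linear_svm(U, y)
accuracy = np.mean(np.sign(U @ w + b) == y)
```

For the actual training data, one would stack the $n = 10^{4}$ input vectors into the rows of `U`; LinearSVC solves the same problem with better-tuned optimization.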

On a technical note, we expect $w(\tau )$ to vary smoothly with τ. This could be achieved by using very large training datasets, or, more efficiently, by averaging the learned $w(\tau )$ over $10^{3}$ learning sessions, each using $10^{4}$ input vectors (economically constructed from 10 sets of $n=10^{4}$ short trajectories, for each of which 100 different realizations of noisy concentration measurements were simulated). Additional technical details on the learning of memory kernels are provided in the SM, sect. 1.

We tested the accuracy of the trained machine-learning (ML) agent on a different, analogously defined test dataset comprising $n=10^{4}$ short trajectories, see fig. S2 in the SM.

Learning from short trajectories is sufficient to build functional run-and-tumble navigation

Although the memory kernel $w(\tau )$ was learned on short trajectory segments only, agents can use this kernel perpetually in time to find a hidden target, see fig. 1(e). Specifically, these agents record local concentrations $c(\textbf{r}(t))$ while performing runs as ABP with rotational diffusion coefficient Drot, and execute a tumbling reorientation event, whenever $\int _{0}^{T_{\textrm{max}}}\!\textrm{d}\tau \, w(\tau ) s(t-\tau ) < 0$ , where the memory kernel $w(\tau )$ is applied to the most recent past of noisy concentration measurements up to a maximum delay of Tmax = 1 s.
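The tumbling decision is thus a discrete convolution of the learned kernel with the most recent binding counts, evaluated at every time step. A minimal sketch, in which the box-shaped kernel is an illustrative stand-in for a learned one:

```python
import numpy as np

def should_tumble(w, s_recent):
    """Apply the learned kernel to the most recent binding counts:
    tumble whenever sum_j w_j s(t - tau_j) < 0, the discrete analogue
    of int_0^Tmax dtau w(tau) s(t - tau) < 0.  s_recent[0] is the
    newest bin, matching the ordering of the training input vectors."""
    return float(np.dot(w, s_recent)) < 0.0

# stand-in derivative-like kernel: positive lobe for the recent past,
# negative lobe for the older past (cf. fig. 1(d))
m = 100
w = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])

rising  = np.linspace(2.0, 1.0, m)   # newest-first: concentration increasing
falling = rising[::-1]               # newest-first: concentration decreasing
```

An up-gradient run (rising concentration) yields a positive filter output and is continued; a down-gradient run triggers a tumble.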

Chemotactic performance

We can characterize the performance of chemotactic agents in terms of the distribution p(t) of first-passage times (FPT) to find a target at the origin, see fig. 2(a). As expected, increasing motility noise increases typical FPTs, and reduces the probability to find the target within the simulation time tsim = 200 s. We imposed absorbing boundary conditions both at the target boundary at radius Rtarget = 10 μm and at the domain boundary Rmax = 500 μm. For comparison, we consider an ideal agent that has perfect knowledge whether it swims up- or down-gradient at any point in time. Remarkably, the chemotactic performance of the ML agent subject to sensing noise (with λ = 10⁵ μM⁻¹ s⁻¹), albeit smaller than that of the ideal agent, is still comparatively high.


Fig. 2: Chemotactic performance in the presence of noise. (a) Distribution p(t) of first-passage times (FPT) to find a hidden target at the center of the search domain for a ML agent exposed to sensing noise (λ = 10⁵ μM⁻¹ s⁻¹); for comparison, we show FPTs also for an ideal run-and-tumble agent that always has perfect knowledge at every time point whether the current swimming direction points up-gradient or down-gradient. Inset: probability that agents successfully found the target within the simulation time tsim = 200 s. (b), (c): ensemble mean of the chemotactic index as well as mean residence time as a function of radial distance from the target. (d) Relative frequency of false-positive and false-negative decisions as a function of radial distance from the target. Ratio of false-positive tumbling events among all tumbling events (FP/(FP+TP)), as well as ratio of false-negative continuation of down-gradient run among all time steps without tumbling (FN/(FN+TN)). Parameters as in fig. 1.


The chemotactic index $\textrm{CI}(R) = \langle \textbf{t}\cdot \nabla c / | \nabla c | \rangle _{|\textbf{x}|=R}$ characterizes the ensemble-averaged net motion up-gradient at different radial distances R, see fig. 2(b). A value of 1 would correspond to agents that head straight to the target, whereas unbiased random walks with isotropically distributed swimming direction correspond to $\textrm{CI} = 0$ . In fig. 2(b), the CI decreases and even becomes negative close to the domain boundary, reflecting the net flux of agents to this absorbing boundary. The CI also decreases close to the target, presumably due to geometric nonlinearities of the radial concentration field, which are not sufficiently represented in the training. Results are comparable for ML agents and ideal agents, reaching maximal values of $\textrm{CI} \approx 0.6$ . We find $\textrm{CI}<1$ because run-and-tumble agents cannot head directly to the target, but execute a biased random walk at best. The observed values are typical for E. coli bacteria under optimal conditions [25]. Consistent with the mostly positive CI, we find that the mean residence time tres of agents as a function of the distance $|\textbf{x}|$ from the target is maximal in the vicinity of the target (and drops near the absorbing boundaries right at the target and the domain boundary, respectively), see fig. 2(c).
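The chemotactic index can be estimated from an ensemble of tangent vectors and positions. A sketch for the radially symmetric concentration profile used here, where the normalized gradient at every position points toward the origin (the function name is ours):

```python
import numpy as np

def chemotactic_index(tangents, positions):
    """CI = < t . grad(c) / |grad(c)| >: for the radially decaying
    concentration c ~ exp(-k |x|), the normalized gradient at position x
    is the unit vector pointing from x toward the origin."""
    t = np.asarray(tangents, dtype=float)
    x = np.asarray(positions, dtype=float)
    up_gradient = -x / np.linalg.norm(x, axis=1, keepdims=True)
    return float(np.mean(np.sum(t * up_gradient, axis=1)))

# two agents heading straight at the target give the maximal value CI = 1
ci_max = chemotactic_index([[-1.0, 0.0], [0.0, -1.0]],
                           [[100.0, 0.0], [0.0, 50.0]])
```

In practice, one restricts the average to agents at a given radial distance R to obtain CI(R) as in fig. 2(b).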

Binary decision making to tumble or not implies decision errors in the presence of noise: false-negative (FN) decisions to wrongly continue a down-gradient run, and false-positive (FP) decisions to wrongly tumble during an up-gradient run. For the training data, due to its symmetry, FN and FP errors have equal probability. But for the long trajectories of chemotactic agents, the positivity of their CI implies a class imbalance between up-gradient and down-gradient swimming. The result is strikingly different dependencies of FN and FP errors as a function of target distance, see fig. 2(d). The relative frequency of FP errors among all tumbling events is maximal far from the target, because chemoattractant concentrations are low there, implying a strong impact of sensing noise. The relative frequency of FN errors is high both close to the target and close to the domain boundary, where it even exceeds the chance level of 0.5. This can be understood from the fact that $\textrm{CI}<0$ close to the outer absorbing boundary, which implies that only few trajectories point up-gradient, i.e., the prevalence of true negative decisions (TN) is low. Similarly, the relative frequency of FN decisions increases again close to the target, because agents spend more time there, see the spatial distribution of residence times in fig. 2(c).

Optimal memory kernels depend on sensing and motility noise, revealing a novel power law for effective measurement time

To assess the impact of sensing and motility noise, we computed memory kernels $w(\tau )$ for different values of the binding rate constant λ and the rotational diffusion coefficient Drot, see fig. 3(a) (as well as fig. S3 in the SM for the corresponding test scores). All kernels display a characteristic shape comprising a lobe of positive values followed by a lobe with negative values, with total integral close to zero, ideally suited to compute a smoothed time derivative [22]. This characteristic shape of the kernel is markedly different from the optimal kernel $w(\tau ) \approx 1-2\tau /T_{\textrm{max}}$ with approximately linear shape that one would obtain in the limit of zero motility noise $D_{\textrm{rot}} = 0$ .


Fig. 3: Optimal memory kernels as a function of sensing and motility noise. (a) Typical memory kernels $w(\tau )$ for run-and-tumble decision making for different levels of sensing and motility noise. (b) We can define an effective measurement time Teff such that $w(T_{\textrm{eff}}) = 0.05\, w_{\textrm{min}}$ , where wmin is the minimum of the kernel. We observe an empirical power-law scaling of the effective measurement time Teff as a function of rotational diffusion coefficient Drot. To ensure a homogeneous level of sensing noise, the initial radial distance of training trajectories was set to R0 = 147 μm, corresponding to c0 = c(R0) = 0.4 μM. To accommodate $T_{\textrm{eff}}>1\,\textrm{s}$ , we used Tmax = 2 s for the duration of training trajectories.


We define an effective measurement time Teff by the simple condition $w(T_{\textrm{eff}}) = 0.05\, w_{\textrm{min}}$ , where wmin is the minimum of the kernel. Figure 3(b) shows that Teff follows an empirical power law with exponent α:

$T_{\textrm{eff}} \sim D_{\textrm{rot}}^{-\alpha }. \qquad\qquad (3)$

The exponent α depends weakly on λ and approaches $\alpha \approx 1/3$ for $\lambda \rightarrow \infty $ . This limit of zero sensing noise can be simulated directly by setting $s(t)=\lambda \,c(\textbf{r}(t))$ . A different definition of Teff, e.g., $w(T_{\textrm{eff}}) = 0.1\, w_{\textrm{min}}$ , gave similar results. Likewise, a larger initial radial distance R0 gave virtually the same power law, see fig. S4 in the SM.
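Both steps, reading off Teff from a learned kernel and extracting the exponent α, reduce to a few lines. In the sketch below, the kernel shape and the synthetic data are illustrative, and the threshold condition is one simple discrete reading of $w(T_{\textrm{eff}}) = 0.05\, w_{\textrm{min}}$:

```python
import numpy as np

def effective_measurement_time(tau, w, frac=0.05):
    """Largest tau at which the kernel still satisfies w(tau) <= frac * w_min
    (with w_min < 0): beyond this point, the negative lobe has decayed
    to (nearly) zero."""
    below = np.nonzero(w <= frac * w.min())[0]
    return tau[below[-1]]

def powerlaw_exponent(D_rot, T_eff):
    """Fit T_eff ~ D_rot^(-alpha) by a linear fit in log-log space."""
    slope, _ = np.polyfit(np.log(D_rot), np.log(T_eff), 1)
    return -slope

# illustrative kernel: positive lobe followed by a decaying negative lobe
tau = np.linspace(0.0, 0.9, 10)
w = np.array([1.0, 0.5, -0.2, -1.0, -0.5, -0.1, -0.04, -0.01, 0.0, 0.0])

# synthetic data following T_eff = D_rot^(-1/4) exactly
D = np.array([0.05, 0.1, 0.2, 0.4])
T = D ** -0.25
```

Applied to the kernels of fig. 3(a), this log-log fit yields the exponents $\alpha = 0.2,\ldots,0.3$ reported in fig. 3(b).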

The empirical power law reported in eq. (3) formalizes the trade-off choice between precision (requiring long Teff to average out sensing noise), and accuracy (requiring short Teff to reflect the current value of the rapidly changing swimming direction).

Weak motility noise can be beneficial

We quantified chemotactic performance as a function of both sensing and motility noise, see fig. 4. Remarkably, a small amount of motility noise can increase the chemotactic index, presumably because it facilitates course corrections for agents moving almost perpendicular to the gradient. This parallels the observation in fig. 2(a) that for ideal agents typical FPTs are shorter for Drot = 0.1 s−1 compared to the case $D_{\textrm{rot}} = 0$ .


Fig. 4: Weak motility noise can increase the chemotactic index. Time-averaged chemotactic index CI as a function of rotational diffusion coefficient Drot for different values of the binding rate constant λ. Parameters as in fig. 1 unless stated otherwise; uniformly distributed initial conditions.


Discussion

We used supervised machine learning, applying support vector machines to a suitable training dataset comprising short trajectories, to determine optimal memory kernels for run-and-tumble chemotaxis in the presence of sensing and motility noise. Chemotactic agents employing these learned kernels for binary decision making in continuous time can efficiently find hidden targets that release chemoattractant molecules. The chemotactic performance of these machine-learning agents is close to that of an ideal agent without sensing noise. The corresponding positive chemotactic index implies a class imbalance between swimming directions pointing up-gradient or down-gradient, which breaks a symmetry between false-positive and false-negative decisions. By computing memory kernels for sub-ensembles with homogeneous levels of sensing noise, we can dissect the dependence of an effective measurement time Teff of gradient sensing by temporal comparison on sensing and motility noise, which revealed an empirical power law $T_{\textrm{eff}} \sim D_{\textrm{rot}}^{-\alpha }$ with α in a tight range $\sim 0.2,\ldots ,0.3$ . In contrast, in a limit of high sensing noise, Strong et al. previously predicted the different power law $T_{\textrm{eff}} \sim D_{\textrm{rot}}^{-1}$  [21]. Our empirical power law for gradient sensing by temporal comparison is also different from the analytical result $T_{\textrm{eff}} \sim D_{\textrm{rot}}^{-1/2}$ previously obtained for tracking a time-varying concentration [18], or estimating a time-varying concentration gradient by spatial comparison [19]. Lastly, we showed that chemotactic performance becomes maximal at a small, but non-zero level of motility noise.

Previous experiments estimated the rotational diffusion coefficient of E. coli bacteria during runs as $D_{\textrm{rot}} \approx 0.1\,\textrm{s}^{-1}$  [26], which matches the predicted value for a Brownian particle of same size [21]. Although this value is close to the optimal rotational diffusion coefficient that maximizes the time-averaged chemotactic index in our simulations, it seems unlikely that bacteria would optimize Drot by adjusting their size.

Using a diffusion coefficient of chemoattractant D = 10³ μm²/s [27], we estimate a maximal binding rate constant λ = 10⁷ μM⁻¹ s⁻¹ for the idealized case that every molecule diffusing to the surface of a spherical bacterial cell of radius a = 1 μm becomes absorbed. Previous experiments showed that chemotactic performance is maximal at a concentration gradient ∇c ∼ 0.1 μM/μm [25], which corresponds to the gradient strength right at the target in our simulations. Our choice of radial concentration profile probes concentrations in a range of $c(R_{\textrm{target}}) = 10\,\mathrm{\mu }{\textrm {M}}$ to $c(R_{\textrm{max}}) = 0.1\,\textrm {nM}$ , spanning from a gradient strength close to the source that gave maximal chemotactic performance in previous experiments [26], to a concentration just below the presumed sensitivity threshold for bacterial chemotaxis [28].

Experimentally measured chemotactic response kernels to the chemoattractant aspartate have a shape very similar to the optimal memory kernels predicted here [29]; a similar result was previously obtained using a minmax-strategy for an ensemble of different concentration gradients [22].

While we assumed a total randomization of swimming direction after instantaneous tumbling events, random re-orientations are only partial for E. coli, and tumbling takes about 0.15 s. Biological cells performing chemotaxis exhibit sensory adaptation, i.e., they respond to relative changes of concentration [30]. In our case, the symmetry of the training problem implies symmetric memory kernels with total integral approximately zero; thus, decisions are insensitive to any constant offset of the concentration signal, see also [22]. In future work, one could envision more sophisticated navigation strategies, with agents that estimate local signal-to-noise ratios and adapt their decision kernels appropriately [31], with different costs for false-positive and false-negative decision errors.

Our work contributes to recent interest in employing machine learning to theoretically understand optimal navigation strategies in the presence of noise using various learning algorithms [31–35]. Insights from these studies are already being put to use to improve the navigation capabilities of artificial microswimmers [32].

Acknowledgments

We thank Julian Rode and all members of the Biological Algorithms group, as well as Marc Timme for stimulating discussions. ROR was supported by the Ministry of Science and Art of the Federal State of Saxony, Germany through grant 100400118 to Marc Timme and BMF, financed with tax funds on the basis of the budget adopted by the Saxon State Parliament (Forschungsprojektförderung Titelgruppe 70 des Sächsischen Staatsministerium für Wissenschaft und Kunst). BMF is supported by the DFG through FR3429/4-1 (Heisenberg grant), and under Germany's Excellence Strategy - EXC-2068 - 390729961. ROR and BMF acknowledge support through the Center for Advancing Electronics Dresden (cfaed).

Data availability statement: The data that support the findings of this study are openly available at the following URL/DOI: https://doi.org/10.5281/zendo.7540545

Footnotes

1. At typical chemoattractant concentrations, we can ignore the perturbation of the concentration field by the absorbing agent.

2. Such exponential decay is an excellent approximation to the solution $c(\textbf{x}) = -i\,J_{0}(i|\textbf{x}|\sqrt {\beta /D})-Y_{0}(-i|\textbf{x}|\sqrt {\beta /D})$ of the diffusion-degradation equation in two space dimensions $0 = D\nabla ^{2} c - \beta c$ , with point source at the origin in terms of modified Bessel functions.
