
Echoes in correlated neural systems

M Helias, T Tetzlaff and M Diesmann

Published 1 February 2013 © IOP Publishing and Deutsche Physikalische Gesellschaft
Citation: M Helias et al 2013 New J. Phys. 15 023002. DOI: 10.1088/1367-2630/15/2/023002


Abstract

Correlations are employed in modern physics to explain microscopic and macroscopic phenomena, like the fractional quantum Hall effect and the Mott insulator state in high temperature superconductors and ultracold atoms. Simultaneously probed neurons in the intact brain reveal correlations between their activity, an important measure to study information processing in the brain that also influences the macroscopic signals of neural activity, like the electroencephalogram (EEG). Networks of spiking neurons differ from most physical systems: the interaction between elements is directed, time delayed, mediated by short pulses and each neuron receives events from thousands of neurons. Even the stationary state of the network cannot be described by equilibrium statistical mechanics. Here we develop a quantitative theory of pairwise correlations in finite-sized random networks of spiking neurons. We derive explicit analytic expressions for the population-averaged cross correlation functions. Our theory explains why the intuitive mean field description fails, how the echo of single action potentials causes an apparent lag of inhibition with respect to excitation and how the size of the network can be scaled while maintaining its dynamical state. Finally, we derive a new criterion for the emergence of collective oscillations from the spectrum of the time-evolution propagator.


Content from this work may be used under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Correlations are an established feature of neural activity [1] and evidence is increasing that they can provide insights into information processing in the brain [2]. The temporal relationship between the activity of pairs of neurons is described by correlation functions. Early on, their shape was related to the direct coupling between neurons and to the common input shared by pairs of neurons. On the one hand, correlations may limit the signal-to-noise ratio of population rate signals [3]; on the other hand, they have been shown to increase the amount of information available to unbiased observers [4]. Furthermore, synchronous neural activity has been proposed to bind elementary representations into more complex objects [5], and experimental evidence for such a correlation code is provided by task-related modulation of synchrony in the primary visual cortex [6] and the motor cortex [7].

The small magnitude [8] of pairwise correlations in the asynchronous irregular state [9] of the cortex has recently been related to the balance between excitation and inhibition in local networks [10, 11] and inhibitory feedback was identified as a general mechanism of decorrelation [12]. However, a quantitative theory explaining the temporal shape of correlation functions in recurrently impulse-coupled networks of excitatory and inhibitory cells remained elusive.

Assuming random connectivity with identical numbers and strengths of incoming synapses per neuron, as illustrated in figure 1, mean field arguments [13, 14] suggest that the resulting activity of two arbitrarily selected neurons, and hence the power spectra of activities averaged over excitatory or inhibitory neurons, should be the same. Direct simulations, however, exhibit different power spectra for these sub-populations [15]. A similar argument holds for the covariance cff between the two neurons: if the covariance c between any pair of inputs is known, the covariance cff between their outgoing activity is fully determined [15–24]. By self-consistency, as both neurons belong to the same recurrent network, one concludes that cff = c. In particular, the covariance averaged over excitatory pairs should be identical to the corresponding average over inhibitory pairs, which is in contrast to direct simulation (figure 1(b)). In this work, we elucidate why this mean field argument for covariances fails and derive a self-consistency equation for pairwise covariances in recurrent random networks which explains the differences in the power spectra and covariances.


Figure 1. The self-consistency argument fails for covariances in a homogeneous recurrent random network. (a) Each neuron (black circles) receives input from the same number of randomly chosen excitatory (e) and inhibitory (i) neurons in the network, so the input statistics of all neurons is the same. The covariance c within the network determines the covariance between the inputs to a pair of neurons and hence the covariance cff of their outputs. Self-consistency seems to require cff = c = cee = cii. (b) Covariance functions averaged over pairs of excitatory (cee) and over pairs of inhibitory (cii) integrate-and-fire model neurons are different in a direct simulation. Other parameters are given in appendix E.


Theories for pairwise covariances have been derived for binary neuron models [11, 25] and for excitatory stochastic point process models [26]. However, the lack of either inhibition or delayed pulsed interaction limits the explanatory power of these models. A theory for networks of leaky integrate-and-fire (LIF) model neurons [27] is required, because this model has been shown to approximate well the properties of mammalian pyramidal neurons [28] and novel experimental techniques allow one to reliably assess the temporal structure of correlations in the cortex [29]. Moreover, the relative timing of action potentials is the basis for models of synaptic plasticity [30], underlying learning in biological neural networks. Analytical methods to treat population fluctuations in spiking networks are well advanced [31] and efficient hybrid analytical–numerical schemes exist to describe pairwise covariances [32]. Here we present an analytically solvable theory of pairwise covariances in random networks of spiking leaky integrate-and-fire model neurons with delayed pulsed interaction in the asynchronous irregular regime.

2. Results

We consider recurrent random networks of N excitatory and γN inhibitory leaky integrate-and-fire model neurons receiving pulsed input (spikes) from other neurons in the network. Each neuron has K = pN incoming excitatory synapses independently and randomly drawn from the pool of excitatory neurons, and γK = γpN inhibitory synapses (a homogeneous Erdős–Rényi random network with fixed in-degree). An impulse at time t arrives at the target neuron after the synaptic delay d and elicits a synaptic current Ii that decays with a time constant τs and causes a response in the membrane potential Vi (with a time constant τm) proportional to the synaptic efficacy J (excitatory) or −gJ (inhibitory), respectively. The coupled set of differential equations governing the subthreshold dynamics of a single neuron i is [33]

$\tau _{\mathrm {m}}\frac {\mathrm {d}V_{i}}{\mathrm {d}t}=-V_{i}+I_{i}(t),\qquad \tau _{\mathrm {s}}\frac {\mathrm {d}I_{i}}{\mathrm {d}t}=-I_{i}+\tau _{\mathrm {m}}\sum _{j}J_{ij}\,s_{j}(t-d),$ (1)

where the membrane resistance was absorbed into Jij. If Vi reaches the threshold Vθ at time point $t_{k}^{i}$ the neuron emits an action potential and the membrane potential is reset to Vr, where it is clamped for the refractory time τr. The spiking activity of neuron i is described by this sequence of action potentials, the spike train $s_{i}(t)=\sum _{k}\delta (t-t_{k}^{i})$ .
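
To make the model concrete, the following minimal sketch integrates the dynamics (1) with a forward-Euler scheme, including threshold, reset, refractoriness and the delayed pulse coupling. It is not the simulation code used for the figures (those use NEST with precise spike timing, appendix E); the external drive is approximated here by Gaussian white noise, and the working-point values mu_x, sig_x as well as the small network scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# neuron parameters as in appendix E; times in s, voltages in mV
dt = 1e-4                                     # 0.1 ms resolution
tau_m, tau_s, tau_r = 20e-3, 2e-3, 2e-3
V_th, V_r = 15.0, 0.0
N_e, gamma, p, g, J = 1600, 0.25, 0.1, 5.0, 0.2   # small demo network
N_i = int(gamma * N_e); N = N_e + N_i
K, K_i = int(p * N_e), int(gamma * p * N_e)
delay = int(1e-3 / dt)                        # synaptic delay d = 1 ms

# fixed in-degree Erdos-Renyi connectivity: pre[i] lists the inputs of i
pre = np.hstack([rng.integers(0, N_e, (N, K)),
                 rng.integers(N_e, N, (N, K_i))])
w = np.hstack([np.full(K, J), np.full(K_i, -g * J)])

V = rng.uniform(V_r, V_th, N)                 # membrane potentials
I = np.zeros(N)                               # synaptic currents
refr = np.zeros(N, dtype=int)                 # refractory counters
buf = np.zeros((delay, N))                    # ring buffer of emitted spikes
mu_x, sig_x = 10.0, 8.0                       # assumed external drive (mV)

n_spikes, steps = 0, 5000                     # 0.5 s of activity
for step in range(steps):
    slot = step % delay
    arriving = buf[slot].copy()               # spikes emitted d ago
    buf[slot] = 0.0
    # pulses enter the current equation of (1); each spike jumps I by tau_m*J/tau_s
    I += dt / tau_s * (-I) + (tau_m / tau_s) * (arriving[pre] * w).sum(axis=1)
    V += dt / tau_m * (-V + I + mu_x) \
         + sig_x * np.sqrt(2 * dt / tau_m) * rng.standard_normal(N)
    V[refr > 0] = V_r                         # clamp during refractory period
    refr = np.maximum(refr - 1, 0)
    fired = V >= V_th
    V[fired] = V_r
    refr[fired] = int(tau_r / dt)
    buf[slot][fired] = 1.0                    # will arrive after the delay d
    n_spikes += fired.sum()

print(f"mean rate ~ {n_spikes / (N * steps * dt):.1f} spikes/s")
```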

The activity of a given neuron i depends on the history of the other neurons' activities s(t) = (s1(t),...,sN(t))T in the network, so formally we can consider the spike train of neuron i as a functional of all other spike trains. The time-averaged covariance matrix expresses these interrelations and is defined as $\bar {\mathbf {c}}(\tau )=\langle \mathbf{s}(t+\tau )\mathbf {s}^{\mathrm {T}}(t)\rangle _{t}-\mathbf {r}\mathbf {r}^{\mathrm {T}}$ , where $\mathbf {r}=\langle \mathbf {s}\rangle _{t}$ is the vector of time-averaged firing rates. The diagonal contains the autocovariance functions (diagonal matrix a(τ)) which are dominated by a δ-peak at zero time lag and for τ ≠ 0 exhibit a continuous shape mostly determined by refractoriness, the inability of the neuron to fire spikes in short succession due to the voltage reset, as shown in figure 2(d). The off-diagonal elements contain the cross covariance functions c(τ) that originate from interactions. We therefore decompose the covariance matrix into $\bar {\mathbf {c}}=\mathbf {a}+\mathbf {c}$ . A basic property of covariance matrices is the symmetry $\bar {\mathbf {c}}(\tau )=\bar {\mathbf {c}}^{\mathrm {T}}(-\tau )$ , so we only need to consider τ > 0 and obtain the solution for τ < 0 by symmetry. Each spike at time t' can influence the other neurons at time t > t'. Formally we express this influence of the history up to time t on a particular neuron i in the network as si(t,{s(t')|t' < t}), which is a functional of all spike times until t. In the asynchronous state of the network [9] and for small synaptic amplitudes Jij a single spike of neuron j causes only a small perturbation of si. The other inputs to neuron i effectively act as additional noise. We may therefore perform a linear approximation of j's direct influence on neuron i and average over the realizations of the remaining inputs $\mathbf {s}\backslash s_{j}$ , as illustrated in figure 2(a). This linearization and the superposition principle lead to the convolution equation

$s_{i}(t)\simeq r_{i}+\sum _{j}\left (h_{ij}\ast \left [s_{j}-r_{j}\right ]\right )(t),$ (2)

where we define the linear response kernel $h_{ij}(t,t^{\prime })=\langle \frac {\delta s_{i}(t)}{\delta s_{j}(t^{\prime })} \rangle _{\mathbf {s}{\backslash } s_{j}}$ as the functional derivative of si(t) with respect to sj(t'), formally defined in appendix A. The kernel hij quantifies the effect of a single spike at time t' of neuron j on the expected density of spikes of neuron i at time t by a direct synaptic connection from neuron j to neuron i, earlier introduced as the 'synaptic transmission curve' [34]. This density vanishes for t < t' due to causality. For the stationary network state studied here, the kernel further only depends on the time difference τ = t − t'. In a consistent linear approximation there are no higher order terms, so the effects of two inputs sj and sk superimpose linearly. In general, the response of a cell is typically supra-linear in the number of synchronously arriving excitatory spikes [34–36]. If, however, the network state to be described is sufficiently asynchronous, as is the case here, a linear approximation is adequate. Using this linear expansion and the definition of the covariance matrix, the off-diagonal elements of the covariance matrix fulfil a linear convolution equation

$c_{ij}(\tau )=\left (h_{ij}\ast a_{j}\right )(\tau )+\sum _{k}\left (h_{ik}\ast c_{kj}\right )(\tau ),\qquad \tau >0.$ (3)

The equation is by construction valid for τ > 0 and needs to be solved simultaneously obeying the symmetry condition c(t) = c(−t)T. Up to linear order in the interaction, equation (3) is the correct replacement of the intuitive self-consistency argument sketched in figure 1.
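
In simulations, the covariance functions entering equation (3) are estimated from spike trains. Below is a minimal sketch of such an estimator, assuming binned spike counts and the definition $\bar {\mathbf {c}}(\tau )=\langle \mathbf{s}(t+\tau )\mathbf {s}^{\mathrm {T}}(t)\rangle _{t}-\mathbf {r}\mathbf {r}^{\mathrm {T}}$ from above; the surrogate spike trains here are independent Poisson processes, so the estimate fluctuates around zero, whereas network spike trains yield the structured c(τ) of figure 1(b).

```python
import numpy as np

def cross_cov(t_i, t_j, T, h=1e-4, max_lag=50):
    """Estimate c_ij(tau) = <s_i(t+tau) s_j(t)>_t - r_i r_j from two
    spike-time arrays, using bins of width h (spike densities in 1/s)."""
    bins = np.arange(0.0, T + h, h)
    x = np.histogram(t_i, bins)[0] / h
    y = np.histogram(t_j, bins)[0] / h
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    raw = np.array([np.dot(x[max(0, l):n + min(0, l)],
                           y[max(0, -l):n + min(0, -l)]) / (n - abs(l))
                    for l in lags])
    return lags * h, raw - x.mean() * y.mean()

rng = np.random.default_rng(0)
T, r = 100.0, 23.6                       # duration (s) and rate (spikes/s)
t_i = np.sort(rng.uniform(0.0, T, rng.poisson(r * T)))
t_j = np.sort(rng.uniform(0.0, T, rng.poisson(r * T)))
tau, c = cross_cov(t_i, t_j, T)
print(tau[50], c[50])                    # zero-lag bin, ~0 for independent trains
```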


Figure 2. Mapping the integrate-and-fire dynamics to a linear coupling kernel. (a) The kernel hij determines the transient effect of an incoming impulse at time point tj (black arrow) on the density si(t) of outgoing action potentials, averaged over realizations of the stochastic activity of the remaining inputs (indicated as red and blue triangles). (b) Response kernel (4) (green) compared to direct simulation for an impulse of amplitude J = 1 mV (black dots) and J = −1 mV (gray dots). The time constant τe = 4.07 ms is determined by a least squares fit to a single exponential. The background activity causes a mean μi = 15 mV and fluctuations σi = 10 mV. (c) Linear and quadratic dependence (A.3) of the integral response wij on Jij (dark gray curve) and linear term alone (light gray line). (d) Autocovariance function of the spike train with a δ peak at t = 0 and covariance trough due to refractoriness.


In order to relate the kernel hij to the leaky integrate-and-fire model, we employ earlier results based on Fokker–Planck theory [33]. For small synaptic amplitudes J ≪ Vθ − Vr and weak pairwise covariances the summed synaptic input $\tau _{\mathrm {m}}\sum _{j=1}^{N}J_{ij}s_{j}(t-d)$ can be approximated as a Gaussian white noise with mean $\mu _{i}=\tau _{\mathrm {m}}\sum _{j}J_{ij}r_{j}$ and variance $\sigma _{i}^{2}=\tau _{\mathrm {m}}\sum _{j}J_{ij}^{2}r_{j}$ for neurons firing with rates rj and Poisson statistics. For short synaptic time constants τs ≪ τm the stationary firing rate $r_{i}(\mu _{i},\sigma _{i}^{2})$ (A.1) depends on these two moments [33]. In a homogeneous random recurrent network the input to each neuron is statistically the same, so the stationary rate ri = r is identical for all neurons. It is determined by the self-consistent solution of $r(\mu _{i},\sigma _{i}^{2})$, taking into account the dependence of $\mu _{i}$ and $\sigma _{i}^{2}$ on the rate r itself [9].
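
The self-consistent rate can be obtained numerically. The sketch below implements the stationary rate formula (A.1) and iterates it to a fixed point; the external input moments mu_ext, var_ext and the connectivity values are illustrative assumptions, not the parameters used for the figures.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

tau_m, tau_s, tau_r = 20e-3, 2e-3, 2e-3   # s
V_th, V_r = 15.0, 0.0                     # mV
alpha = np.sqrt(2.0) * 1.4603545088       # sqrt(2)*|zeta(1/2)|, cf. (A.1)

def siegert_rate(mu, var):
    """Stationary LIF rate (A.1) for Gaussian white-noise input."""
    shift = 0.5 * alpha * np.sqrt(tau_s / tau_m)
    y_th = (V_th - mu) / np.sqrt(var) + shift
    y_r = (V_r - mu) / np.sqrt(var) + shift
    f = lambda u: np.exp(u * u) * (1.0 + erf(u))
    return 1.0 / (tau_r + tau_m * np.sqrt(np.pi) * quad(f, y_r, y_th)[0])

# mu_i and sigma_i^2 depend linearly on the network rate r itself
K, gamma, g, J = 1000, 0.25, 5.0, 0.1     # in-degree, ratios, weight (mV)
mu_ext, var_ext = 12.0, 60.0              # assumed external moments (mV, mV^2)
r = 10.0                                  # initial guess (spikes/s)
for _ in range(200):                      # damped fixed-point iteration
    mu = mu_ext + tau_m * K * J * (1.0 - gamma * g) * r
    var = var_ext + tau_m * K * J**2 * (1.0 + gamma * g**2) * r
    r += 0.5 * (siegert_rate(mu, var) - r)
print(f"self-consistent rate r = {r:.2f} spikes/s")
```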

The integral $w_{ij}=\int _{0}^{\infty }h_{ij}(t)\,\mathrm {d}t$ of the response kernel is equivalent to the dc susceptibility $w_{ij}=\frac {\partial r_{i}}{\partial r_{j}}$  [37] and has earlier been termed 'asynchronous gain' [34]. The approximation is second order in the synaptic amplitude, $w_{ij}=\alpha J_{ij}+\beta J_{ij}^{2}$. The first order term originates from the dependence of $\mu _{i}$ on rj, the second order term stems from the dependence of $\sigma _{i}^{2}$ on rj (A.3).

Throughout this work we choose the working point of the neurons in the network such that the firing of the cells is driven by strong fluctuations and the mean membrane potential is close to threshold, in order to have irregular firing. This is achieved by appropriate choices of the external driving Poisson sources, as described in appendix E. For the small networks considered here, it is realistic to assume that about 50% of the inputs to a neuron come from outside the local network [38]. Figure 2(b) shows the deflection of the firing rate from the baseline caused by an impulse in the input averaged over many repetitions. For sufficiently strong fluctuations σi, the quadratic term in Jij is negligible, as seen from figure 2(c). For smaller membrane potential fluctuations σi we expect the linear approximation to be less accurate.

The kernel shows exponential relaxation with an effective time constant τe that depends on the working point (μi,σi) and the parameters of the neuron, which is obtained by a least squares fit of a single exponential to the simulated response in figure 2(b) for one particular amplitude Jij. We therefore approximate the response hij(t) as

$h_{ij}(t)=w_{ij}\,\frac {1}{\tau _{\mathrm {e}}}\,\mathrm {e}^{-(t-d)/\tau _{\mathrm {e}}}\,\Theta (t-d),$ (4)

where d is the synaptic delay and Θ the Heaviside step function.
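
A short numerical check of the kernel (4) and of its Fourier representation used below; a minimal sketch with tau_e and d taken from figure 2(b) and figure 1.

```python
import numpy as np

tau_e, d = 4.07e-3, 1e-3                  # s

def h(t):                                 # delayed exponential, eq. (4), w_ij = 1
    return np.where(t >= d, np.exp(-(t - d) / tau_e) / tau_e, 0.0)

def H(omega):                             # its Fourier transform
    return np.exp(-1j * omega * d) / (1.0 + 1j * omega * tau_e)

t, dt = np.arange(0.0, 0.1, 1e-5), 1e-5
print(h(t).sum() * dt)                    # ~1: unit integral, the gain sits in w_ij
print(H(0.0))                             # (1+0j): consistent normalization
```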

In experiments covariance functions are typically averaged over statistically equivalent pairs of neurons. Such averages are important, because they determine the behavior of the network on the macroscopic scale of populations of neurons. We therefore aim at a corresponding effective theory. Assuming identical dynamics (1), all neurons have, to good approximation, the same autocovariance function a(t) and response kernel h(t). So each incoming excitatory impulse causes a response wh(t), an inhibitory impulse −gwh(t). We define the covariance function averaged over all pairs of excitatory neurons as $c_{\mathrm {e}\mathrm {e}}(\tau )=\frac {1}{N^{2}}\sum _{i,j\in \mathcal {E},i\neq j}c_{ij}(\tau )$ (setting $N(N-1)\simeq N^{2}$ for N ≫ 1), where $\mathcal {E}$ denotes the set of all excitatory neurons. The pairings cei, cie and cii are defined analogously. Inserting equation (3) into the average cee(τ), the first term proportional to the autocovariance a(t) only contributes if neuron j projects to neuron i. For fixed i, there are K such indices j, so the first term yields $\sum _{i,j\in \mathcal {E},i\neq j}h_{ij}\ast a_{j}=NKw\, h\ast a$ . The second sum $\sum _{i,j\in \mathcal {E},i\neq j}\sum _{k}h_{ik}\ast c_{kj}$ can be decomposed into a sum over all intermediate excitatory neurons $k\in \mathcal {E}$ and over all inhibitory neurons $k\in \mathcal {I}$ projecting to neuron i. Replacing the individual covariances by their population average, cee and cie, respectively, and considering the number of connections and their amplitude we obtain $wNK\, h\ast \left (c_{\mathrm {e}\mathrm {e}}-\gamma g\, c_{\mathrm {i}\mathrm {e}}\right )$ , with γ = Ni/Ne and g = wi/we the relative number of inhibitory neurons and the relative linearized inhibitory synaptic amplitude. Similar relations hold for the remaining three averages, so we arrive at a two-by-two convolution matrix equation for the pairwise-averaged covariances for τ > 0:

$\left (\begin{array}{cc}c_{\mathrm {ee}} & c_{\mathrm {ei}}\\ c_{\mathrm {ie}} & c_{\mathrm {ii}}\end{array}\right )(\tau )=\left [h\ast \left (\mathbf {M}\left (\begin{array}{cc}c_{\mathrm {ee}} & c_{\mathrm {ei}}\\ c_{\mathrm {ie}} & c_{\mathrm {ii}}\end{array}\right )+\mathbf {Q}\,a\right )\right ](\tau ),\qquad \mathbf {M}=Kw\left (\begin{array}{cc}1 & -\gamma g\\ 1 & -\gamma g\end{array}\right ),\quad \mathbf {Q}=\frac {Kw}{N}\left (\begin{array}{cc}1 & -g\\ 1 & -g\end{array}\right ).$ (5)

The convolution equation only holds for positive time lags τ. For negative time lags it is determined by the symmetry c(−τ) = cT(τ). The solution of this equation can be obtained by an extension of the method used in [26] employing Wiener–Hopf theory [39] to the cross spectrum $\mathbf {C}(\omega )=\int _{-\infty }^{\infty }\mathbf {c}(t)\mathrm {e}^{-\mathrm {i}\,\omega t}\, \mathrm {d}t$ in the frequency domain, as shown in appendix B (here capital letters denote the Fourier transform of the respective lower case letters). With the definition of the propagator $\mathbf {P}(\omega )=\left (\mathbf {1}-\mathbf {M} H(\omega )\right )^{-1}$ the cross spectrum takes the form $\mathbf {C}(\omega )=\mathbf {P}(\omega )\left [D_{+}(\omega )\mathbf {Q}+D_{+}(-\omega )\mathbf {Q}^{\mathrm {T}}-A(\omega )|H(\omega )|^{2}\mathbf {Q}\mathbf {M}^{\mathrm {T}}\right ]\mathbf {P}^{\mathrm {T}}(-\omega ),$ where we split the term $H(\omega )A(\omega )=D_{+}(\omega )+D_{-}(\omega )$ so that $d_{+}(\tau )$ and $d_{-}(\tau )$ vanish for times τ < 0 and τ > 0, respectively. For the averaged cross spectrum the matrix Q is defined in (5); the non-averaged cross spectrum can be recovered as a special case setting Q = M = W, because the convolution equations (3) and (5) have the same structure and symmetries. If all eigenvalues of MH(ω) have an absolute value smaller than unity, the propagator P(ω) can be expanded into a geometric series in which the nth term contains only interactions via n steps in the connectivity graph. This expansion has been used to obtain the contribution of different motifs to the integral of covariance functions [40] and their temporal shape [41].

In the following, we neglect the continuous part of the autocovariance function, setting a(t) = rδ(t), because the δ-peak is typically dominant. With this replacement we mainly neglect the trough around 0 due to the relative refractoriness. An estimate of the error can be obtained considering the respective weights of the delta peak (r) and the trough. From the relation $\int _{-\infty }^{\infty }a(t)\, \mathrm {d}t=r\,\mathrm {CV}^{2}$  [42, 43] the integral weight of the trough follows as $r(\mathrm {CV}^{2}-1)$, which is small for irregular spike trains with a coefficient of variation CV close to unity.

For an arbitrary causal kernel h it follows that $D_{+}(\omega )=rH(\omega )$, so the cross spectrum takes the form

$\mathbf {C}(\omega )=r\,U(\omega )\,\mathbf {Q}+r\,U(-\omega )\,\mathbf {Q}^{\mathrm {T}}+r\,V(\omega )\,\frac {K^{2}w^{2}(1+\gamma g^{2})}{N}\left (\begin{array}{cc}1 & 1\\ 1 & 1\end{array}\right ),\qquad U(\omega )=\frac {H(\omega )}{1-L\,H(\omega )},\quad V(\omega )=|U(\omega )|^{2},$ (6)

with the population feedback $L=Kw(1-\gamma g)$.

The limit ω → 0 corresponds to the time integral of the cross covariance function, approximating the count covariance for long time bins [2, 8]. With A(0) = r, the integral correlation coefficient averaged over neuron pairs fulfils the equation

$\frac {\mathbf {C}(0)}{A(0)}=\frac {\mathbf {Q}+\mathbf {Q}^{\mathrm {T}}}{1-L}+\frac {1}{(1-L)^{2}}\,\frac {K^{2}w^{2}(1+\gamma g^{2})}{N}\left (\begin{array}{cc}1 & 1\\ 1 & 1\end{array}\right ),$ (7)

which has previously been derived from noise-driven linear rate dynamics [12]. The quantity L plays a key role here: it determines the feedback magnitude of in-phase fluctuations of the excitatory and inhibitory populations. The stability of the average firing rate requires this feedback to be sufficiently small [9], i.e. L < 1, indicated by the pole in equation (7) at L = 1. Typically cortical networks are in the balanced regime [9, 13, 14], i.e. L < 0. For such inhibition-dominated networks, the denominator in equation (7) is larger than unity, indicating a suppression of covariances [12]. As shown in figure 3(a), the prediction (7) agrees well with the results of simulations of leaky integrate-and-fire networks for a broad range of network sizes N.
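
A few lines make the role of L in equation (7) explicit; the value of the linearized weight w below is an illustrative assumption.

```python
# Population feedback and the two suppression factors in eq. (7).
K, gamma, g, w, N = 1000, 0.25, 5.0, 2e-3, 10000   # w assumed (integral gain)
L = K * w * (1.0 - gamma * g)          # = -0.5 here: inhibition dominated
print("L =", L)
print("echo term weight     1/(1-L)   =", 1.0 / (1.0 - L))
print("shared-input weight  1/(1-L)^2 =", 1.0 / (1.0 - L) ** 2)
# holding L fixed while N grows (J ~ 1/N) leaves both weights unchanged and
# the overall magnitude scales as 1/N, cf. figure 3
```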


Figure 3. Shape invariance of covariances with network scale. Synapses scaled as J∝1/N with network size N to conserve the population feedback L = const. (a) Integral correlation coefficient averaged over different pairs of neurons and theory (7) confirming the ∝1/N dependence. (b) Rescaled covariance functions Ncei averaged over excitatory–inhibitory pairs of neurons for different network sizes (color-coded) and theory (14) (black).


Previous works have investigated neural networks in the thermodynamic limit N → ∞ [11–13], scaling the synaptic amplitudes $J\propto 1/\sqrt {N}$ in order to arrive at analytical results. Such a scaling increases the feedback on the network level $L\propto \sqrt {N}$ and therefore changes the collective network state. Equation (7) provides an alternative criterion to scale the synapses while keeping the dynamics comparable: the two terms in equation (7) depend differently on the feedback L. In order to maintain their ratio, we need to keep the population feedback L constant. The synaptic amplitude J (approximately ∝w (A.3)) hence needs to scale as J∝1/N. In addition, the response kernel h of each single neuron must remain unchanged, requiring the same working point, characterized by the mean μi and fluctuations σi in the input into each cell. A constant mean directly follows from L = const, but the variance due to local input from other neurons in the network decreases as 1/N. To compensate, we supply each neuron with an additional external uncorrelated balanced noise whose variance appropriately increases with N (as described in detail in appendix E). Figure 3(b) shows that the shape of the covariance functions is invariant over a large range of network sizes N; in particular the apparent time lag of inhibition behind excitation observed as the asymmetry in figure 3(b) does not vanish in the limit N → ∞. The magnitude of the covariance decreases as 1/N as expected from equation (7), because Kw = const.

Global properties of the network dynamics can be inferred by considering the spectrum of equation (6), i.e. the complex frequencies zk at which the expression has a pole due to the function U(z). These poles are resonant modes of the network, where the real part ℜ(zk) denotes the damping of the mode and the imaginary part ℑ(zk) is the oscillation frequency. A pole appears whenever zk is a single root of $U^{-1}(z_{k})=H^{-1}(-\mathrm {i}z_{k})-L=0$. With the Fourier representation $H(\omega )=\frac {\mathrm {e}^{-\mathrm {i}\omega d}}{1+\mathrm {i}\omega \tau _{\mathrm {e}}}$ of the response kernel (4), the poles correspond to the spectrum of the delay differential equation $\tau _{\mathrm {e}}\frac {\mathrm {d}y}{\mathrm {d}t}(t)=-y(t)+Ly(t-d)$ (cf [44]) which describes the evolution of the population-averaged activity. As shown in appendix C, the location of the poles can be expressed by the branches k of the Lambert W function, the solution of $W_{k}\,\mathrm {e}^{W_{k}}=x$ [45], as

$z_{k}=\frac {1}{d}W_{k}\!\left (\frac {Ld}{\tau _{\mathrm {e}}}\,\mathrm {e}^{d/\tau _{\mathrm {e}}}\right )-\frac {1}{\tau _{\mathrm {e}}}.$ (8)

The spectrum only depends on the population feedback L, the delay d and the effective time constant τe of the neural response kernel. This explains why keeping L constant while scaling the network in figure 3 yields shape-invariant covariance functions. A typical spectrum is shown in figure 4(a) as an inset, where each dot marks one of the poles zk (8). The two principal branches of Wk are the modes with the largest real part ℜ(zk), and hence with the least damping, dominating the network dynamics. The remaining branches appear as conjugate pairs and their real parts are more negative, corresponding to stronger damping. Investigating the location of the principal branches therefore enables us to classify the dynamics in the network. Their dependence on the delay is shown in figure 4(a) as a parametric plot in d. The point at which the two real principal solutions turn into a complex conjugate pair marks a transition from purely exponentially decaying dynamics to damped oscillations. This happens at a sufficiently strong negative coupling L or a sufficiently long delay d, precisely when the argument of Wk is smaller than $-\mathrm {e}^{-1}$ [45], leading to the condition

$\frac {Ld}{\tau _{\mathrm {e}}}\,\mathrm {e}^{d/\tau _{\mathrm {e}}}<-\mathrm {e}^{-1}.$ (9)

The gray cross marks this point in figure 4(a); the gray curve shows the corresponding relation of feedback and delay in the phase diagram of figure 4(b). In the region below the curve, the dominant mode of fluctuations in the network is thus damped oscillatory, whereas above the curve fluctuations are relaxing exponentially in time.
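
The spectrum (8) is directly computable with the Lambert W function available in SciPy; the parameter values in this sketch are illustrative.

```python
import numpy as np
from scipy.special import lambertw

tau_e, d, L = 4.07e-3, 1e-3, -5.0
x = (L * d / tau_e) * np.exp(d / tau_e)   # argument of W_k in eq. (8)

for k in (0, -1, 1, -2):                  # principal pair and next branch pair
    z = lambertw(x, k) / d - 1.0 / tau_e  # pole z_k, eq. (8)
    print(f"k={k:+d}: Re z = {z.real:9.1f} 1/s, "
          f"Im z/(2 pi) = {z.imag / (2 * np.pi):7.1f} Hz")

# damped oscillations once the argument drops below -1/e, condition (9)
print("damped oscillatory regime:", x < -np.exp(-1.0))
```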


Figure 4. Phase diagram determined by spectral analysis. Throughout all panels colors correspond to delays as given in a. (a) Each dot in the inset represents a pole zk (8) for delay d = 1 ms (two rightmost poles appear as one point). The two rightmost poles change with delay d. At d = 0.753 ms (gray St Andrews cross) (9) the poles become a conjugate pair, at d = 6.88 ms (black crosses) (11) both poles have a zero real part, causing oscillations (Hopf bifurcation). (b) Right: phase diagram spanned by τe/d and feedback L. Onset of oscillations below the black curve (11) (Hopf bifurcation, black crosses in a), damped oscillations below the gray curve (9) (gray cross in a). Left: oscillation frequency (11) at the Hopf bifurcation. (c) Averaged cross covariance between excitatory neurons and theory (14) (black). Simulated data averaged over 106 neuron pairs for 100 s. (d) Autocovariance of excitatory neurons (δ-peak not shown) averaged over 2500 neurons for 100 s.


For a sufficiently long delay d the principal poles may assume positive real values, leading to ongoing oscillations—a Hopf bifurcation. The condition under which this happens can be derived from $H^{-1}(\omega _{\mathrm {crit}})=L$, as detailed in appendix C. Equating the absolute values on both sides leads to the condition $\omega _{\mathrm {crit}}\tau _{\mathrm {e}}=\sqrt {L^{2}-1}$ : oscillations can only be sustained if the negative population feedback is sufficiently strong, L < −1. The oscillation frequency increases with the strength of the negative feedback. The condition for the phases leads to the critical delay required for the onset of oscillations (see appendix C for details)

$d_{\mathrm {crit}}=\frac {\tau _{\mathrm {e}}}{\sqrt {L^{2}-1}}\left (\pi -\arctan \sqrt {L^{2}-1}\right ).$ (10)

This relation is shown as the black curve in the phase diagram of figure 4(b). The oscillatory frequency on the bifurcation line, at the onset of oscillations, can be expressed as

$f_{\mathrm {crit}}=\frac {\omega _{\mathrm {crit}}}{2\pi }=\frac {\sqrt {L^{2}-1}}{2\pi \tau _{\mathrm {e}}}=\frac {\pi -\arctan \sqrt {L^{2}-1}}{2\pi \,d_{\mathrm {crit}}},$ (11)

which is shown in the left sub-panel of the phase diagram of figure 4(b). Consequently, the oscillation frequency fcrit at the onset is between $(4d_{\mathrm {crit}})^{-1}$ and $(2d_{\mathrm {crit}})^{-1}$, depending on the strength of the feedback L, approaching $f_{\mathrm {crit}}=(4d_{\mathrm {crit}})^{-1}$ with increasing negative feedback.
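
Equations (10) and (11) are easily evaluated; the short sketch below also verifies that fcrit stays between (4dcrit)⁻¹ and (2dcrit)⁻¹.

```python
import numpy as np

tau_e = 4.07e-3                                # s, as fitted in figure 2(b)

def d_crit(L):                                 # critical delay, eq. (10)
    s = np.sqrt(L * L - 1.0)                   # = omega_crit * tau_e
    return tau_e * (np.pi - np.arctan(s)) / s

def f_crit(L):                                 # onset frequency, eq. (11)
    return np.sqrt(L * L - 1.0) / (2.0 * np.pi * tau_e)

for L in (-1.5, -3.0, -10.0):
    dc, fc = d_crit(L), f_crit(L)
    print(f"L={L:6.1f}: d_crit={dc * 1e3:5.2f} ms, f_crit={fc:6.1f} Hz,"
          f" f_crit*4*d_crit={4 * dc * fc:4.2f}")   # -> 1 for strong feedback
```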

Changing the synaptic delay homogeneously for all synapses in the network allows us to observe the transition of the network from exponentially damped to oscillatory damped and finally to oscillatory dynamics. For a short delay of d = 0.5 ms the dynamics is dominated by the single real pole near −1/τe (brown dot in figure 4(a)) and the covariance function is exponentially decaying (figure 4(c)). Increasing the delay to d = 1 ms, the principal poles split into a complex conjugate pair as the delay crosses the gray curve in figure 4(b), so that side troughs become visible in the covariance function in figure 4(c). Further increasing the delay, the network approaches the point of oscillatory instability, where a Hopf bifurcation occurs, marked by black crosses in figure 4(a) and the black curve in figure 4(b). The damping of oscillations decreases as the system approaches the bifurcation (figure 4(c)). The structure of the autocovariance function of single spike trains (figure 4(d)) is dominated by the dip due to refractoriness of the neuron after reset. At pronounced network oscillations, the autocorrelation function shows a corresponding modulation. The neglect of the oscillating continuous part of the autocorrelation function in the theory apparently does not have a pronounced effect on the obtained cross correlation functions, as evidenced by the good agreement between direct simulation and theory for d = 5 ms. For weaker oscillations, e.g. at d = 3 ms, the coherence time of the oscillations is typically shorter than the width of the dip in the autocorrelation. The phase coherence of the oscillation is then shorter than the typical inter-spike-interval and single neuron spike trains are irregular. This state is known as synchronous irregular activity [9], where the activity of a single neuron is irregular, but collectively the neurons participate in a global oscillation.

If the network is not in the oscillatory state, all modes are damped in time, i.e. all poles zk of the function U(z) appearing in equation (6) lie in the left complex half-plane, ℜ(zk) < 0. Hence, the dynamics is stable and we can expect to obtain a unique solution for the covariance as an observable of this stable dynamics. We perform the Fourier transform to the time domain using the residue theorem $u(t)=\frac {1}{2\pi \mathrm {i}}\oint _{E_{t}}U(z)\, \mathrm {e}^{zt}\, \mathrm {d}z=\sum _{z_{k}\in E_{t}}\mathrm {Res}(U,z_{k})\, \mathrm {e}^{z_{k}t}$ , where the integration path Et proceeds along the imaginary axis from $-\mathrm {i}\infty $ to $\mathrm {i}\infty $ and is then closed at infinity in the left half-plane (for t > d) or the right half-plane (for 0 < t < d) to ensure convergence, resulting in (see appendix D for the detailed calculation)

$u(t)=\Theta (t-d)\sum _{k}\frac {1+z_{k}\tau _{\mathrm {e}}}{L\left (\tau _{\mathrm {e}}+d\,(1+z_{k}\tau _{\mathrm {e}})\right )}\,\mathrm {e}^{z_{k}t}.$ (12)

The back transform of $V(\omega){\stackrel{\mathrm{def}}{\,=\,}}|U(\omega)|^{2}$ proceeds along similar lines and results in

$v(t)=\sum _{k}\frac {1+z_{k}\tau _{\mathrm {e}}}{L\left (\tau _{\mathrm {e}}+d\,(1+z_{k}\tau _{\mathrm {e}})\right )}\,U(-z_{k})\,\mathrm {e}^{z_{k}|t|}.$ (13)

The population-averaged covariance functions in the time domain then follow from equation (6) for t > 0 as

$\mathbf {c}(t)=r\,u(t)\,\mathbf {Q}+r\,v(t)\,\frac {K^{2}w^{2}(1+\gamma g^{2})}{N}\left (\begin{array}{cc}1 & 1\\ 1 & 1\end{array}\right ),\qquad t>0,$ (14)

which is the central result of our work. Figure 5 shows the comparison of the theory (14) with the covariance functions obtained by direct simulation. The analytical expression reveals that the covariance functions are composed of two components: the first term in equation (14) has heterogeneous matrix elements and hence depends on the neuron types under consideration. Its origin is illustrated in figure 5(a): if one of the neurons emits a spike, as indicated, this impulse travels along its axon and reaches the target neurons after one synaptic delay d. Depending on the type of the source neuron, the impulse excites (synaptic amplitude J) or inhibits (−gJ) its targets. Its effect is therefore visible in the pairwise covariance function as a positive (blue) or negative (red) deflection, respectively, in figure 5(c). This deflection not only contains the direct synaptic effect, but also infinitely many reverberations of the network, seen formally in equation (12). This expression is not directly proportional to the kernel h(t), but is rather a series including the whole spectrum of the network. The shape of the spike echo consequently shows onsets of reverberations at integer multiples of the synaptic delay (figure 5(c)), being transmitted over multiple synaptic connections. The contribution of the second term in equation (14) follows the intuitive argument illustrated in figure 5(b). The incoming activity from the network to a pair of neurons is correlated. As the input statistics is the same for each neuron, this contribution is identical for any pair of neurons (green curve in figure 5(c)). The sum of both components results in the covariance functions shown in figures 5(d)–(f). The same analytical solution is shown for different delays in figure 4(c), showing good agreement with direct simulation. For the different sizes of simulated networks in figure 3(b) the analytical expression (14) explains why the spike echo does not become negligible in the thermodynamic limit N → ∞: for fixed population feedback L, both contributions in equation (14) scale as 1/N, so the relative contribution of the echo stays the same. This also explains the apparent paradox (see figure 1) that covariance functions in recurrent networks do not only depend on the input statistics; in addition, the spike feedback causes a reverberating echo. The power spectrum of population averages is dominated by pairwise covariances, explaining the different spectra observed in the excitatory and inhibitory population activity [15]. Scaling the network so as to keep the marginal statistics of single neurons constant, $J\propto w\propto 1/\sqrt {N}$  [11, 13], changes the spectrum (8), because the feedback increases as $L\propto \sqrt {N}$, which can ultimately lead to oscillations as shown in figure 4(c).
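
The following sketch assembles the covariance functions from the analytical result: poles via (8), residues as reconstructed in appendix D, the series (12) and (13) truncated at |k| ≤ 30 (cf. appendix D), and the composition (14). Parameter values are chosen for illustration, and the residue expressions follow the reconstruction in this text, so they should be checked against the original article.

```python
import numpy as np
from scipy.special import lambertw

tau_e, d = 4.07e-3, 3e-3
N, K, gamma, g, w, r = 10000, 1000, 0.25, 6.0, 5e-4, 23.6
L = K * w * (1.0 - gamma * g)                 # population feedback

x = (L * d / tau_e) * np.exp(d / tau_e)
z = np.array([lambertw(x, k) / d - 1.0 / tau_e for k in range(-30, 31)])
res = (1.0 + z * tau_e) / (L * (tau_e + d * (1.0 + z * tau_e)))  # Res(U, z_k)

def U(zz):
    return 1.0 / ((1.0 + zz * tau_e) * np.exp(zz * d) - L)

def u(t):                                     # echo kernel, eq. (12)
    s = (res[:, None] * np.exp(np.outer(z, t))).sum(axis=0).real
    return np.where(t >= d, s, 0.0)

def v(t):                                     # shared-input kernel, eq. (13)
    amp = res * U(-z)
    return (amp[:, None] * np.exp(np.outer(z, np.abs(t)))).sum(axis=0).real

Q = (K * w / N) * np.array([[1.0, -g], [1.0, -g]])
ones = np.ones((2, 2))
t = np.linspace(1e-4, 0.03, 300)              # t > 0; t < 0 by symmetry
c = (r * u(t) * Q[:, :, None]
     + r * K**2 * w**2 * (1.0 + gamma * g**2) / N * v(t) * ones[:, :, None])
print(c.shape, c[0, 0, 0])                    # c[0, 0] is c_ee(t), etc.
```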


Figure 5. Composition of covariance functions. (a) Network echo caused by a spike sent at time t = 0 sets in after one synaptic delay (d = 3 ms): red curve in (c), inhibitory spike; blue curve in (c), excitatory spike. (b) Correlated inputs from the network to a pair of neurons (black circles) cause covariance cff between their outputs (green curve in (c)). (d)–(f) Covariance functions averaged over pairs of neurons (black dots; (d) excitatory pairs, (e) excitatory–inhibitory pairs, (f) inhibitory pairs) and theory (14) (gray underlying curves). Insets show the two components from (c) that are added together.


3. Discussion

The present work qualitatively explains certain features of the correlation structure of simultaneously recorded synaptic currents of two cells in vivo. Novel experimental techniques are able to separate contributions of excitatory and inhibitory inputs [29]. We calculate such covariances in a random network and show that the covariance between synaptic impulses decomposes into a linear combination of the covariance of the spiking activity and the autocovariance functions (see the caption of figure 6). Each synaptic impulse has a certain time course, here modeled as a first-order low-pass filter with time constant τs = 2 ms (see (1)). The covariances between these filtered currents are shown in figure 6. Their temporal structure resembles those measured in the cortex in vivo [29, their figures 1(e) and (f)]: covariances between afferents of the same type are monophasic and positive, while the covariances between excitatory and inhibitory afferents are biphasic and mostly negative. The lag reported between inhibitory and excitatory activity [29, their figure 2(b)], which was also observed in binary random networks [11, 25], is explained by the echo of the spike contributing to the covariance function. In contrast to previous work, we take the delayed and pulsed synaptic interaction into account. Without delays and with binary neurons [11, 25] the echo appears as a time lag of inhibition with respect to excitation.


Figure 6. Covariance between synaptic currents of a pair of neurons in a recurrent random network in analogy to in vivo experiments [29]. Covariance of excitatory contributions (blue, $c_{I_{\mathrm {e}}I_{\mathrm {e}}}=q\ast (J^{2}pKa_{\mathrm {e}}+J^{2}K^{2}c_{\mathrm {ee}})$), analogously between inhibitory contributions (red), between excitatory and inhibitory contributions (green, $c_{I_{\mathrm {e}}I_{\mathrm {i}}}=-q\ast (gJ^{2}\gamma K^{2}c_{\mathrm {ei}})$), and $c_{I_{\mathrm {i}}I_{\mathrm {e}}}$ analogously (brown). Currents filter spiking input by an exponential kernel with time constant τs = 2 ms (1), leading to the filtering of the covariances by $q(t)=\frac {\tau _{\mathrm {m}}^{2}}{2\tau _{\mathrm {s}}}\,\mathrm {e}^{-|t|/\tau _{\mathrm {s}}}$ .


Measurements of membrane potential fluctuations during quiet wakefulness in the barrel cortex of mice [46] showed that correlations between inhibitory neurons are typically narrower than those between two excitatory neurons [46, their figures 4(A), 5(B), 5(C) and 5(E)]. These results qualitatively agree with our theory for covariances between the spiking activity, because fluctuations of the membrane potential are uniformly transferred to fluctuations of the instantaneous firing intensity. The direct measures of spiking activity [46, their figure 6] confirm the asymmetric correlation between excitation and inhibition. The low correlation between excitatory neurons reported in that study may partly be due to the unnormalized, firing rate-dependent measure and the low rates of excitatory neurons.

The qualitative features of the cross correlation functions, namely their different widths and their asymmetry for excitatory–inhibitory pairs, are generic and agree with experimental results. They are fully explained by the decomposition into an echo term and a term corresponding to the feed-forward transmission of correlation. This decomposition merely relies on the fact that the autocorrelation of a spike train has a delta peak and that a spike triggers a response in the target cell with positive or negative sign for excitatory and inhibitory connections, respectively. Hence, we expect that these features are robust and survive also for more realistic network models with many heterogeneous subpopulations. For weakly correlated fluctuations, if the input to each cell is sufficiently noisy, a linear approximation of the neuronal response provides a viable first order approximation also for nonlinear neuron dynamics, as shown here. We suspect that a deeper reason why such a linearization is possible is the inherent decorrelation [12] by negative feedback in networks in the inhibition dominated regime. The decorrelation keeps population fluctuations small and hence prevents strong excursions that would exceed the validity of the linear approximation.

Oscillations in the γ range (25–100 Hz) are ubiquitous in population measures of neural activity in humans, and have earlier been explained in networks of leaky integrate-and-fire model neurons [9] by the Hopf bifurcation induced by delayed negative feedback. For the regime of high noise we here uncover a simpler analytical condition for the onset (10) and frequency (11) of fast global oscillations. For lower noise, deviations of the nonlinear leaky integrate-and-fire dynamics from the linear theory presented here are expected.

A traditional motivation to scale the network size to infinity is to obtain an analytic solution for an otherwise difficult problem. Biologically realistic networks have a finite size. In the present work we have determined the correlation structure for such finite-sized networks analytically. We present the scaling of the correlation functions in figure 3 to relate our work to previous results that applied scaling arguments to obtain the correlation structure [11]. The latter work investigated networks of binary neurons without conduction delays and assumed a scaling of the synaptic amplitudes $J\propto 1/\sqrt {N}$ . Such a scaling increases the overall feedback strength $L\propto pNw\propto \sqrt {N}$ . This has two consequences: firstly, as seen from (7), the relative contributions of the spike echo and the feed forward term change with L and hence with network size. Therefore the shape of correlation functions depends on the network size. Secondly, the overall dynamic regime of the network is affected by the scaling. This can be seen from figure 4(b). For non-zero synaptic conduction delays, the network eventually becomes oscillatory if the network size exceeds a certain value. This happens precisely at the point where L crosses the bifurcation line shown in figure 4(b). We therefore propose an alternative scaling J∝1/N here for which we show that it preserves the shape of correlation functions and the overall network state. Only the magnitude of the cross correlation functions decreases ∝1/N. However, a caveat of this scaling is that while it preserves the global network properties, it affects the working point of each individual neuron, because the fluctuations due to the local synaptic input decrease $\propto 1/\sqrt {N}$ . We alleviated this shortcoming by supplying each neuron with additional, uncorrelated noise.

Our results are based on a simplified network model composed of two homogeneous (excitatory and inhibitory) subpopulations of leaky integrate-and-fire neurons. The real cortex, in contrast, is highly heterogeneous in several respects: Its layered structure with layer-specific neuron and connection properties (e.g. time constants, spike thresholds, synaptic weights, in-degrees) requires the distinction of more than two subpopulations. Even within each subpopulation and for each combination of subpopulations, the neuron and connection parameters are not constant but broadly distributed. Further, a plethora of cortical neurons, in particular various types of interneurons, exhibit a much richer dynamical repertoire (e.g. resonating behavior) than the leaky integrate-and-fire neuron model (regular spiking integrator). In principle, the mathematical framework presented in this paper can be extended to networks composed of n (n > 2) heterogeneous subpopulations of different neuron types. This would require accounting for subpopulation-specific working points (e.g. due to layer-specific firing rates; see [47]) and for the effect of parameter distributions [48, 49] and the single-neuron dynamics on the effective linearized subpopulation responses. This extended theory would result in an n-dimensional linear algebraic equation for the subpopulation-averaged cross spectra, similar to (6) where n = 2. For n > 2, this equation most likely needs to be solved numerically.

A fundamental assumption of the presented theory is that the network states show irregular single neuron dynamics. This requirement arises from the analytical description replacing spike trains by spike densities and a stochastic realization of spikes. Regular spike trains are outside the scope of such a description. Moreover, the approximation of the neuronal response to linear order is only a viable approach in sufficiently asynchronous network states with low correlations. States with stronger correlations, such as observed in convergent–divergent feed-forward structures [34, 50], require an explicit treatment of the nonlinear response [36].

From a physics viewpoint, neuronal networks unite several interesting properties. They do not reach thermodynamic equilibrium even in the stationary state, as detailed balance does not hold for all pairs of states of the system. In detailed balance, the rate of transition from one state to another is equal to the rate of the reverse transition. For a leaky integrate-and-fire neuron the state of the neuron is uniquely determined by its membrane voltage. In the stationary state neurons fire with a constant rate, so there is a continuous flux of the neurons' voltage from reset up to threshold. Imagining a discretization of the voltage axis we see that a pair of adjacent voltage-intervals between reset and threshold is more often traversed from lower to higher voltage than in the reverse direction, so obviously detailed balance does not hold even for a single neuron in the stationary state. Moreover, the interaction between pairs of neurons is directed, delayed, pulsed and depends on the flux of the sending neuron's state variable at threshold. In contrast, pairwise interactions frequently studied in physics, like the Coulomb interaction or exchange interaction, can be expressed by a pair potential and are thus symmetric (undirected), instantaneous and depend directly on the state variables (e.g. spatial coordinates or spins) of the pair of interacting particles. Non-equilibrium systems are at the heart of ubiquitous transport phenomena, like heat or electric conduction. Understanding fluctuations in such a system marks the starting point to infer macroscopic properties by the assertion of the fluctuation–dissipation theorem that connects microscopic fluctuations to macroscopic transport properties. Despite the non-equilibrium dynamics and the non-conservative pairwise interaction, in this paper we develop a simple analytical framework merely based on linear perturbation theory that explains the time-dependent covariance functions of the activity of pairs of integrate-and-fire model neurons in a recurrent random network. Formally our approach resembles the step from the kinetic Ising model near equilibrium to its non-equilibrium counterpart, the network of binary neurons [25]. A difference is the spiking interaction considered in our work, which led us to describe each neuron in terms of the flux over threshold (spike train) rather than by its state variables (membrane voltage and synaptic current). In this respect, we follow the established mean-field approach for spiking neuronal systems [9, 14]. However, while this mean field approach proceeds by assuming vanishing correlations to obtain the dynamics of the ensemble-averaged activity, we here derive and solve the self-consistency equation for the pairwise-averaged covariances of the microscopic system.

The typical time scale of covariance functions found here coincides with the time window of biological synaptic plasticity rules [30], so that non-trivial interactions of dynamics and structure are expected. It is our hope that the novel capability to resolve the temporal structure of covariances in spiking networks presented here proves useful as a formal framework to further advance the theory of these correlated non-equilibrium systems and in particular serves as a further stepping stone in the endeavor to understand how learning on the system level is implemented by the interplay of neuronal dynamics and synaptic plasticity.

Acknowledgments

We are grateful to Birgit Kriener, Sonja Grün and George Gerstein for discussions and exploratory simulations in fall 2005 that led to the observation of asymmetric covariance functions. This work is partially supported by the Helmholtz Association: HASB and portfolio theme SMHB, the Next-Generation Supercomputer Project of MEXT, EU grant 15879 (FACETS), EU grant 269921 (BrainScaleS). All network simulations were carried out with NEST (http://www.nest-initiative.org).

Appendix A.: Response kernel of the leaky integrate-and-fire model

The response kernel hij needs to be related to the dynamics of the neuron model (1). Here we present an approximation of this kernel which is sufficiently accurate to allow quantitative predictions, but yet simple enough to enable an analytical solution for the correlation structure. If the synaptic time constant is short, τs ≪ τm, the synaptic amplitude J can be thought of as the amplitude of the jump in the membrane potential V caused upon arrival of an incoming impulse. If correlations between incoming spike trains are sufficiently small, the first and second moments of the summed impulses $\tau _{\mathrm {m}}\sum _{j}J_{ij}s_{j}(t-d)$ are $\mu _{i}=\tau _{\mathrm {m}}\sum _{j}J_{ij}r_{j}$ and $\sigma _{i}^{2}=\tau _{\mathrm {m}}\sum _{j}J_{ij}^{2}r_{j}$ , respectively, if the inputs' statistics can be approximated by Poisson processes of rate rj each. For small J and a high total rate, the system of differential equations (1) is hence approximated by a stochastic differential equation driven by a unit-variance Gaussian white noise ξ,

$\tau _{\mathrm {m}}\frac {\mathrm {d}V_{i}}{\mathrm {d}t}=-V_{i}+I_{i}(t),\qquad \tau _{\mathrm {s}}\frac {\mathrm {d}I_{i}}{\mathrm {d}t}=-I_{i}+\mu _{i}+\sigma _{i}\sqrt {\tau _{\mathrm {m}}}\,\xi (t).$

The stationary firing rate in this limit is given by [33]

$r_{i}^{-1}=\tau _{\mathrm {r}}+\tau _{\mathrm {m}}\sqrt {\pi }\int _{y_{\mathrm {r}}}^{y_{\theta }}f(u)\,\mathrm {d}u,\qquad f(u)=\mathrm {e}^{u^{2}}\left (1+\operatorname {erf}(u)\right ),\qquad y_{\{\theta ,\mathrm {r}\}}=\frac {V_{\{\theta ,\mathrm {r}\}}-\mu _{i}}{\sigma _{i}}+\frac {\alpha }{2}\sqrt {\frac {\tau _{\mathrm {s}}}{\tau _{\mathrm {m}}}},\qquad \alpha =\sqrt {2}\,|\zeta (1/2)|,$ (A.1)

with Riemann's zeta function ζ. The rate ri is the density of action potentials over time. The response of the firing density of neuron i at time point t with respect to a point-like deflection of the afferent input sj at time point t' defines the response kernel as the functional derivative

$h_{ij}(t,t^{\prime })=\left \langle \frac {\delta s_{i}(t)}{\delta s_{j}(t^{\prime })}\right \rangle _{\mathbf {s}\backslash s_{j}}=w_{ij}\,h(t-t^{\prime }).$

Here we used homogeneity, namely the identical input statistics of each neuron i, leading to the same temporal shape h(t) independent of i, and stationarity, so that the kernel only depends on the time difference t − t'. We choose h(t) to have unit integral and define wij as the integral of the kernel. We determine the temporal integral of the kernel as

$w_{ij}=\int _{0}^{\infty }h_{ij}(t)\,\mathrm {d}t=\frac {\partial r_{i}}{\partial r_{j}}.$ (A.2)

The second equality holds because the integral of the impulse response equals the step response [51]. Further, a step in the density sj corresponds to a step of rj. Up to linear approximation the effect of the step in the rate rj on the rate ri can be expressed by the derivative considering the perturbation of the mean μi and the variance $\sigma _{i}^{2}$ upon change of rj. Using equation (A.1) we note the chain rule $\frac {\partial r_{i}}{\partial r_{j}}=-r_{i}^{2}\frac {\partial r_{i}^{-1}}{\partial r_{j}}$ . The latter derivative follows as $\frac {\partial r_{i}^{-1}}{\partial r_{j}}=\frac {\partial r_{i}^{-1}}{\partial y_{\theta }}\frac {\partial y_{\theta }}{\partial r_{j}}+\frac {\partial r_{i}^{-1}}{\partial y_{r}}\frac {\partial y_{r}}{\partial r_{j}}$ . The first derivative in both terms yields $\frac {\partial r_{i}^{-1}}{\partial y_{A}}=\sqrt {\pi }\tau _{\mathrm {m}} f(y_{A})$ with yA∈{yθ,yr}. The second derivative evaluates with $y_{A}=\frac {A-\mu _{i}}{\sigma _{i}}+\frac {\alpha }{2}\sqrt {\frac {\tau _{\mathrm {s}}}{\tau _{\mathrm {m}}}}$ to $\frac {\partial y_{A}}{\partial r_{j}}=-\frac {1}{\sigma _{i}}\tau _{\mathrm {m}} J_{ij}-\frac {A-\mu _{i}}{\sigma _{i}^{2}}\frac {\tau _{\mathrm {m}} J_{ij}^{2}}{2\sigma _{i}}=-\tau _{\mathrm {m}}\frac {J_{ij}}{\sigma _{i}}\left (1+\frac {A-\mu _{i}}{\sigma _{i}}\frac {J_{ij}}{2\sigma _{i}}\right )$ . So together we obtain

$w_{ij}=\sqrt {\pi }\,(\tau _{\mathrm {m}}r_{i})^{2}\left [\frac {f(y_{\theta })-f(y_{\mathrm {r}})}{\sigma _{i}}\,J_{ij}+\frac {f(y_{\theta })\frac {V_{\theta }-\mu _{i}}{\sigma _{i}}-f(y_{\mathrm {r}})\frac {V_{\mathrm {r}}-\mu _{i}}{\sigma _{i}}}{2\sigma _{i}^{2}}\,J_{ij}^{2}\right ]\equiv \alpha J_{ij}+\beta J_{ij}^{2}.$ (A.3)
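
A numerical cross-check of (A.2) and (A.3): the sketch below evaluates the dc susceptibility ∂ri/∂rj by a finite difference of the rate (A.1), perturbing μi and σi² through a single input of amplitude Jij. The working-point values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

tau_m, tau_s, tau_r = 20e-3, 2e-3, 2e-3
V_th, V_r = 15.0, 0.0
alpha = np.sqrt(2.0) * 1.4603545088        # sqrt(2)*|zeta(1/2)|

def rate(mu, var):                         # stationary rate (A.1)
    shift = 0.5 * alpha * np.sqrt(tau_s / tau_m)
    y_th = (V_th - mu) / np.sqrt(var) + shift
    y_r = (V_r - mu) / np.sqrt(var) + shift
    f = lambda u: np.exp(u * u) * (1.0 + erf(u))
    return 1.0 / (tau_r + tau_m * np.sqrt(np.pi) * quad(f, y_r, y_th)[0])

mu, var = 15.0, 100.0                      # working point: 15 mV, sigma = 10 mV
for J in (0.1, 0.5, 1.0):                  # synaptic amplitude J_ij in mV
    dr = 1.0                               # rate step of one input (spikes/s)
    w_ij = (rate(mu + tau_m * J * dr, var + tau_m * J**2 * dr)
            - rate(mu, var)) / dr
    print(f"J={J:4.1f} mV: w_ij = {w_ij:.2e}")
# near-linear growth with J for strong fluctuations, cf. figure 2(c) and (A.3)
```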

Appendix B.: Cross spectral matrix in the frequency domain

The autocovariance A(ω) in the frequency domain ($F(\omega )=\mathcal {F}[f](\omega )=\int _{-\infty }^{\infty }f(t)\,\mathrm {e}^{-\mathrm {i}\,\omega t}\, \mathrm {d}t$ ) has two different terms. The first term is a constant r due to the spiking with rate r resulting from the delta peak rδ(t) in the time domain. The second term is the continuous function ac(t), for example due to the refractoriness of the neuron. It further follows from a(t) = a(−t) that A(ω) = A(−ω). For τ > 0 the covariance matrix fulfils the linear convolution equation (5). As this equation only holds for the positive half of the time axis, we cannot just apply the Fourier transform to obtain the solution. For negative time lags τ < 0 the covariance matrix is determined by the symmetry c(τ) = cT(−τ). Here we closely follow [26] and employ the Wiener–Hopf theory [39] to derive an equation for the cross spectral matrix in the frequency domain that has the desired symmetry and solves (5) simultaneously. To this end we introduce the auxiliary matrix $\mathbf {b}(\tau )=\left (h\ast (\mathbf {M}\mathbf {c})\right )(\tau )+\mathbf {Q}(h\ast a)(\tau )-\mathbf {c}(\tau )$ for $-\infty <\tau <\infty $. Obviously, b(τ) = 0 for τ > 0. Since the defining equation for b holds on the whole time axis, we may apply the Fourier transform to obtain $\mathbf {B}(\omega )=H(\omega )\left (\mathbf {M}\mathbf {C}(\omega )+\mathbf {Q} A(\omega )\right )-\mathbf {C}(\omega )$ . Solving for C

$\mathbf {C}(\omega )=\left (\mathbf {1}-H(\omega )\mathbf {M}\right )^{-1}\left (H(\omega )A(\omega )\,\mathbf {Q}-\mathbf {B}(\omega )\right )$ (B.1)

and using the symmetry $\mathbf {C}(\omega )=\mathbf {C}^{\mathrm {T}}(-\omega )$ we obtain the equation

$\left (\mathbf {1}-H(\omega )\mathbf {M}\right )^{-1}\left (H(\omega )A(\omega )\,\mathbf {Q}-\mathbf {B}(\omega )\right )=\left (H(-\omega )A(-\omega )\,\mathbf {Q}^{\mathrm {T}}-\mathbf {B}^{\mathrm {T}}(-\omega )\right )\left (\mathbf {1}-H(-\omega )\mathbf {M}^{\mathrm {T}}\right )^{-1}.$

We observe that $\mathbf {Q}\mathbf {M}^{\mathrm {T}}=\mathbf {M}\mathbf {Q}^{\mathrm {T}}$ is symmetric and with A(ω) = A(−ω) the term proportional to $|H(\omega )|^{2}$ cancels on both sides, resulting in

$H(\omega )A(\omega )\,\mathbf {Q}-\mathbf {B}(\omega )+H(-\omega )\mathbf {B}(\omega )\mathbf {M}^{\mathrm {T}}=H(-\omega )A(\omega )\,\mathbf {Q}^{\mathrm {T}}-\mathbf {B}^{\mathrm {T}}(-\omega )+H(\omega )\mathbf {M}\mathbf {B}^{\mathrm {T}}(-\omega ).$ (B.2)

We next introduce $D(\omega )=H(\omega )A(\omega )$ which we split into $D(\omega )=D_{+}(\omega )+D_{-}(\omega )$, chosen such that $d_{+}(t)$ (in the time domain) vanishes for t < 0 and $d_{-}(t)$ vanishes for t > 0. Consequently the Fourier transforms of both terms may have poles in distinct complex half-planes: $D_{+}(\omega )$ may only have poles in the upper half-plane ℑ(ω) > 0 and the function vanishes according to $\lim _{|\omega |\to \infty ,\,\Im (\omega )<0}D_{+}(\omega )=0$, following from the definition of the Fourier integral. For $D_{-}(\omega )$ the half-planes are reversed. The analytical properties of H(ω) are thus similar to those of $D_{+}(\omega )$; those of B(ω) are similar to $D_{-}(\omega )$. We sort the terms in (B.2) such that the left hand side only contains terms that vanish at infinity in the lower half-plane ℑ(ω) < 0, the right hand side those that vanish at infinity in the upper half-plane ℑ(ω) > 0:

$D_{+}(\omega )\,\mathbf {Q}-D_{-}(-\omega )\,\mathbf {Q}^{\mathrm {T}}+\left (\mathbf {1}-H(\omega )\mathbf {M}\right )\mathbf {B}^{\mathrm {T}}(-\omega )=D_{+}(-\omega )\,\mathbf {Q}^{\mathrm {T}}-D_{-}(\omega )\,\mathbf {Q}+\mathbf {B}(\omega )\left (\mathbf {1}-H(-\omega )\mathbf {M}^{\mathrm {T}}\right ).$ (B.3)

The left hand side consequently is analytic for ℑ(ω) < 0, the right hand side is analytic for ℑ(ω) > 0, so (B.3) defines a function that is analytic on the whole complex plane and that vanishes at the border for |ω| → ∞. Hence by Liouville's theorem it is 0 and we can solve the right hand side of (B.3) for B,

$\mathbf {B}(\omega )=\left (D_{-}(\omega )\,\mathbf {Q}-D_{+}(-\omega )\,\mathbf {Q}^{\mathrm {T}}\right )\left (\mathbf {1}-H(-\omega )\mathbf {M}^{\mathrm {T}}\right )^{-1}.$

Inserted into (B.1), and with the definition $\mathbf {P}(\omega )=\left (\mathbf {1}-\mathbf {M} H(\omega )\right )^{-1}$, this yields

$\mathbf {C}(\omega )=\mathbf {P}(\omega )\left [D(\omega )\,\mathbf {Q}-\left (D_{-}(\omega )\,\mathbf {Q}-D_{+}(-\omega )\,\mathbf {Q}^{\mathrm {T}}\right )\mathbf {P}^{\mathrm {T}}(-\omega )\right ].$

The latter expression can be brought to the form

$\mathbf {C}(\omega )=\mathbf {P}(\omega )\left [D_{+}(\omega )\,\mathbf {Q}+D_{+}(-\omega )\,\mathbf {Q}^{\mathrm {T}}-A(\omega )|H(\omega )|^{2}\,\mathbf {Q}\mathbf {M}^{\mathrm {T}}\right ]\mathbf {P}^{\mathrm {T}}(-\omega ),$ (B.4)

which has the advantage that the first two terms have poles in distinct half-planes, ℑ(ω) > 0 and ℑ(ω) < 0, respectively. These two terms therefore contribute for positive and negative times, respectively, while the last term contributes for both positive and negative times.

Appendix C.: Spectrum of the propagator

With the Fourier representation $H(\omega )=\frac {\mathrm {e}^{-\mathrm {i}\,\omega d}}{1+\mathrm {i}\,\omega \tau _{\mathrm {e}}}$ of the delayed exponential kernel (4) the averaged cross spectrum (6) contains the two functions U(z) and $V(z)=|U(z)|^{2}$ defined on the complex frequency plane z = iω. These functions may exhibit poles. The function U has a pole zk whenever the denominator has a single root $H^{-1}(-\mathrm {i}z_{k})-L=0$, which amounts to the condition

$\left (1+z_{k}\tau _{\mathrm {e}}\right )\mathrm {e}^{z_{k}d}=L.$ (C.1)

These complex frequencies can be expressed by the Lambert W function, the solution of $W\mathrm {e}^{W}=x$ [45], by rewriting (C.1) as

$d\left (z_{k}+\frac {1}{\tau _{\mathrm {e}}}\right )\mathrm {e}^{d\left (z_{k}+\frac {1}{\tau _{\mathrm {e}}}\right )}=\frac {Ld}{\tau _{\mathrm {e}}}\,\mathrm {e}^{d/\tau _{\mathrm {e}}},$

leading to (8). The Lambert $W_{k}(x)$ function has infinitely many branches k [45]. The principal branch has two real solutions if $x>-\mathrm {e}^{-1}$. The remaining branches appear in conjugate pairs. For $x<-\mathrm {e}^{-1}$ the principal solutions turn into a complex conjugate pair. This happens at a sufficiently strong negative coupling L or long delays d,

$\frac {Ld}{\tau _{\mathrm {e}}}\,\mathrm {e}^{d/\tau _{\mathrm {e}}}<-\mathrm {e}^{-1}$

or

$L<-\frac {\tau _{\mathrm {e}}}{d}\,\mathrm {e}^{-\left (1+d/\tau _{\mathrm {e}}\right )}.$
The principal poles may assume positive real values, leading to oscillations. The condition under which this happens can be derived from (C.1). At the point of transition the pole can be written as z = iωcrit; it is a solution to $\left (1+\mathrm {i}\,\omega _{\mathrm {crit}}\tau _{\mathrm {e}}\right )\mathrm {e}^{\mathrm {i}\,\omega _{\mathrm {crit}}d}=L$. In order for this equation to be fulfilled, the absolute value and the phase must be identical on both sides. The equation for the absolute value requires $1+(\omega _{\mathrm {crit}}\tau _{\mathrm {e}})^{2}=L^{2}$. This means there are only oscillatory solutions if the magnitude of the feedback exceeds unity, L < −1. Since the poles come in conjugate pairs, we can assume w.l.o.g. that ωcrit > 0. The condition for the absolute value hence reads

$\omega _{\mathrm {crit}}=\frac {1}{\tau _{\mathrm {e}}}\sqrt {L^{2}-1}.$ (C.2)

This is the frequency of oscillation at the onset of the Hopf bifurcation. For strong feedback |L| ≫ 1 the frequency increases linearly with the magnitude of the feedback. The condition for the agreement of the phase angles reads $\angle (1+\mathrm {i}\,\omega _{\mathrm {crit}}\tau _{\mathrm {e}})+\omega _{\mathrm {crit}}d=0$, so $\frac {\Im (1+\mathrm {i}\,\omega _{\mathrm {crit}}\tau _{\mathrm {e}})}{\Re (1+\mathrm {i}\,\omega _{\mathrm {crit}}\tau _{\mathrm {e}})}=-\frac {\Im (\mathrm {e}^{\mathrm {i}\,\omega _{\mathrm {crit}}d})}{\Re (\mathrm {e}^{\mathrm {i}\,\omega _{\mathrm {crit}}d})}=\tan \omega _{\mathrm {crit}}d$ , which leads to $\tan \omega _{\mathrm {crit}}d=-\omega _{\mathrm {crit}}\tau _{\mathrm {e}}$. This equation has a solution in $\frac {\pi }{2}\leqslant \omega _{\mathrm {crit}}d\leqslant \pi $ . In the limit of vanishing delay d → 0 the frequency goes to infinity, as the solution converges to $\omega _{\mathrm {crit}}d=\frac {\pi }{2}$ . This corresponds to the frequency $f_{\mathrm {crit}}=\frac {1}{4d}$ . Inserting (C.2) leads to $\tan \left (\frac {d}{\tau _{\mathrm {e}}}\sqrt {L^{2}-1}\right )=-\sqrt {L^{2}-1}$ , which can be solved for the critical delay

$d_{\mathrm {crit}}=\frac {\tau _{\mathrm {e}}}{\sqrt {L^{2}-1}}\left (\pi -\arctan \sqrt {L^{2}-1}\right ),$ (C.3)

where we took care that the argument of the tangent is in $[\frac {\pi }{2},\pi ]$ . So with (C.2) and (C.3) the oscillatory frequency at the transition can be related to the synaptic delay as

$f_{\mathrm {crit}}=\frac {\omega _{\mathrm {crit}}}{2\pi }=\frac {\pi -\arctan \sqrt {L^{2}-1}}{2\pi \,d_{\mathrm {crit}}}\in \left [\frac {1}{4d_{\mathrm {crit}}},\frac {1}{2d_{\mathrm {crit}}}\right ].$

Appendix D.: Back transform by residue theorem

In the non-oscillatory state all poles zk (8) have a negative real part. The function $U(z)=\left ((1+z\tau _{\mathrm {e}})\,\mathrm {e}^{zd}-L\right )^{-1}$ in (6) then has all poles in the left complex half-plane, ℜ(zk) < 0 ∀ k. We perform the Fourier back transform

$u(t)=\frac {1}{2\pi \mathrm {i}}\oint _{E_{t}}U(z)\,\mathrm {e}^{zt}\,\mathrm {d}z,$ (D.1)

replacing the integration path by a closed contour Et following the imaginary axis from $-\mathrm {i}\infty $ to $\mathrm {i}\infty $. In order to ensure convergence of the integral, for t < d we need ℜ(z) > 0, so we close Et<d at infinity within the right half-plane, where the integrand vanishes. Since there are no poles in the right half-plane, for t < d the path Et<d does not enclose any poles, so u(t) = 0. For t ⩾ d the path Et⩾d must be closed in the left half-plane to ensure convergence of (D.1), so the residue theorem yields

$u(t)=\Theta (t-d)\sum _{k}\mathrm {Res}(U,z_{k})\,\mathrm {e}^{z_{k}t}.$ (D.2)

The residue can be calculated by linearizing the denominator of U(zk + z) around zk,

$U(z_{k}+z)\approx \frac {1}{\left (\tau _{\mathrm {e}}+d\,(1+z_{k}\tau _{\mathrm {e}})\right )\mathrm {e}^{z_{k}d}\,z},$

which yields

$\mathrm {Res}(U,z_{k})=\frac {1}{\left (\tau _{\mathrm {e}}+d\,(1+z_{k}\tau _{\mathrm {e}})\right )\mathrm {e}^{z_{k}d}}=\frac {1+z_{k}\tau _{\mathrm {e}}}{L\left (\tau _{\mathrm {e}}+d\,(1+z_{k}\tau _{\mathrm {e}})\right )},$
where we used (C.1) in the last step. The poles of V(z) = U(z)U(−z) are located in both half-planes, consequently v(t) is non-zero on the whole time axis. Here we only calculate v(t) for positive times t > 0, because it follows for negative times by the symmetry v(−t) = v(t). The path has to be closed in the left half-plane, where the poles zk have the residues

$\mathrm {Res}(V,z_{k})=\mathrm {Res}(U,z_{k})\,U(-z_{k}).$
So by applying (D.2) the functions u and v are

$u(t)=\Theta (t-d)\sum _{k}\frac {1+z_{k}\tau _{\mathrm {e}}}{L\left (\tau _{\mathrm {e}}+d\,(1+z_{k}\tau _{\mathrm {e}})\right )}\,\mathrm {e}^{z_{k}t},\qquad v(t)=\sum _{k}\frac {1+z_{k}\tau _{\mathrm {e}}}{L\left (\tau _{\mathrm {e}}+d\,(1+z_{k}\tau _{\mathrm {e}})\right )}\,U(-z_{k})\,\mathrm {e}^{z_{k}|t|}.$ (D.3)

The amplitude of the modes decreases with increasing |k|. For all figures in the paper we truncated the series after k = 30.

Appendix E.: The simulation parameters used in the figures

All network simulations were performed using NEST [52]. The parameters of the leaky integrate-and-fire neuron model (1) throughout this work are τm = 20 ms, τs = 2 ms, τr = 2 ms, Vθ = 15 mV, Vr = 0 mV. All simulations are performed with precise spike timing and time stepping of 0.1 ms [53]. Figures 1 and 3–6 of the main text all consider recurrent random networks of N excitatory and γN inhibitory leaky integrate-and-fire model neurons receiving input from randomly drawn neurons in the network and external excitatory and inhibitory Poisson input, so that the mean and fluctuations of the input are μi = 15 mV and σi = 10 mV, respectively. Unless stated explicitly, we use N(1 + γ) = 10 000 neurons except in figure 3 where the number of neurons is given in the legend. Each neuron has K = pN incoming excitatory synapses with synaptic amplitude J independently and randomly drawn from the pool of excitatory neurons, and γK = γpN inhibitory synapses with amplitude −gJ (a homogeneous Erdős–Rényi random network with fixed in-degree), realizing a connection probability p = 0.1. Cross covariance functions are measured throughout as the covariance between two disjoint populations of 1000 neurons each taken from the indicated populations in the network. Correlation functions are evaluated with a time resolution of 0.1 ms. In figure 1 we use a synaptic delay d = 1 ms.

In figure 3 we keep the feedback of the population rate constant, L = Kw(1 − γg) = const. When increasing the network size N, the synaptic amplitude J (which is proportional to w in the linear approximation) needs to scale as J = J0N0/N, where we chose J0 = 0.1 mV and N0 = 10 000 here. The variance caused by local input from the network then decreases with increasing network size ∝1/N, while the local mean is constant because L = const. Each cell receives in addition uncorrelated external balanced Poisson input, adjusted to keep the mean μi = 15 mV and the fluctuations σi = 10 mV constant. This is achieved by choosing the rates of the external excitatory (re,ext, amplitude Jext = 0.1 mV) and inhibitory (ri,ext, amplitude −gJext) inputs as

$r_{\mathrm {e,ext}}=\frac {\hat {\sigma }^{2}+g\,\hat {\mu }}{1+g},\qquad r_{\mathrm {i,ext}}=\frac {\hat {\sigma }^{2}-\hat {\mu }}{g(1+g)},\qquad \hat {\mu }=\frac {\mu _{i}-\mu _{\mathrm {loc}}}{\tau _{\mathrm {m}}J_{\mathrm {ext}}},\quad \hat {\sigma }^{2}=\frac {\sigma _{i}^{2}-\sigma _{\mathrm {loc}}^{2}}{\tau _{\mathrm {m}}J_{\mathrm {ext}}^{2}},$ (E.1)

where $\mu _{\mathrm {loc}}=\tau _{\mathrm {m}}rKJ(1-\gamma g)$ and $\sigma _{\mathrm {loc}}^{2}=\tau _{\mathrm {m}}rKJ^{2}(1+\gamma g^{2})$ are the mean and variance due to local input from other neurons of the network firing with rate r. From L = const it follows that also Kw = const, so that (7) predicts a scaling of the magnitude of the covariance functions in proportion to 1/N. Other network parameters are d = 3 ms and g = 5. The firing rate in the network is r = 23.6 Hz.
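
The scaling procedure can be sketched as follows, using the solved form of (E.1) reconstructed above (an assumption to be checked against the original article): as N grows with J ∝ 1/N, the external balanced rates increase such that μi = 15 mV and σi = 10 mV stay fixed.

```python
import numpy as np

tau_m, mu_t, sig_t = 20e-3, 15.0, 10.0       # target working point (mV)
J0, N0, p, gamma, g, J_ext, r = 0.1, 10000, 0.1, 0.25, 5.0, 0.1, 23.6

for N in (2000, 10000, 50000):
    J = J0 * N0 / N                          # J ~ 1/N keeps L constant
    K = p * N
    mu_loc = tau_m * r * K * J * (1.0 - gamma * g)
    var_loc = tau_m * r * K * J**2 * (1.0 + gamma * g**2)
    mu_hat = (mu_t - mu_loc) / (tau_m * J_ext)
    var_hat = (sig_t**2 - var_loc) / (tau_m * J_ext**2)
    r_e_ext = (var_hat + g * mu_hat) / (1.0 + g)      # eq. (E.1)
    r_i_ext = (var_hat - mu_hat) / (g * (1.0 + g))
    print(f"N={N:6d}: r_e,ext={r_e_ext:10.1f} 1/s, r_i,ext={r_i_ext:9.1f} 1/s")
```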

In figures 4 and 5 we use $N=10^{4}$, g = 6, J = 0.1 mV and the delay d as described in the captions; the remaining parameters are as in figure 3.

In figure 6 we use $N=10^{4}$, J = 0.1 mV, g = 5 and d = 2 ms and the remaining parameters as in figure 3. We obtain the filtered synaptic currents by filtering the spike trains with an exponential filter of time constant τs = 2 ms. This results in an effective filter for the cross covariances of $q(t)=\frac {\tau _{\mathrm {m}}^{2}}{2\tau _{\mathrm {s}}}\,\mathrm {e}^{-|t|/\tau _{\mathrm {s}}}$ . The different contributions shown are $c_{I_{\mathrm {e}}I_{\mathrm {e}}}=q\ast (J^{2}pKa_{\mathrm {e}}+J^{2}K^{2}c_{\mathrm {ee}})$ , $c_{I_{\mathrm {i}}I_{\mathrm {i}}}=q\ast (g^{2}J^{2}p\gamma Ka_{\mathrm {i}}+g^{2}J^{2}\gamma ^{2}K^{2}c_{\mathrm {ii}})$ , $c_{I_{\mathrm {e}}I_{\mathrm {i}}}=-q\ast (gJ^{2}\gamma K^{2}c_{\mathrm {ei}})$ and $c_{I_{\mathrm {i}}I_{\mathrm {e}}}=-q\ast (gJ^{2}\gamma K^{2}c_{\mathrm {ie}})$ , where ∗ denotes the convolution.

For figure 2, we simulate two populations of N = 1000 neurons each. Each neuron receives independent background activity from Poisson processes and in addition input from a common Poisson process with rate rc = 25 Hz, causing in population 1 a positive synaptic amplitude J and in population 2 a negative synaptic amplitude −J (J is given on the x-axis). The synaptic amplitude of the background inputs is Je = 0.1 mV for an excitatory impulse and Ji = −0.5 mV for an inhibitory impulse. The rates of the excitatory and inhibitory background inputs are chosen so that the first and second moments $\mu _{i}=\tau _{\mathrm {m}}(J_{\mathrm {e}}r_{\mathrm {e}}+J_{\mathrm {i}}r_{\mathrm {i}}+Jr_{\mathrm {c}})=15$ mV and $\sigma _{i}^{2}=\tau _{\mathrm {m}}(J_{\mathrm {e}}^{2}r_{\mathrm {e}}+J_{\mathrm {i}}^{2}r_{\mathrm {i}}+J^{2}r_{\mathrm {c}})=(10$ mV$)^{2}$ are independent of J. The spikes produced by each population are triggered on the arrival of an impulse in the common input and averaged over a duration of 10 s to obtain the impulse response.
