Letter

Data fusion via intrinsic dynamic variables: An application of data-driven Koopman spectral analysis


Published 26 February 2015. Copyright © EPLA, 2015.
Citation: Matthew O. Williams et al 2015 EPL 109 40007. DOI: 10.1209/0295-5075/109/40007


Abstract

We demonstrate that the Koopman eigenfunctions and eigenvalues define a set of intrinsic coordinates, which serve as a natural framework for fusing measurements obtained from heterogeneous collections of sensors in systems governed by nonlinear evolution laws. These measurements can be nonlinear, but must, in principle, be rich enough to allow the state to be reconstructed. We approximate the associated Koopman operator using extended dynamic mode decomposition, so the method only requires time series of data for each set of measurements, and a single set of "joint" measurements, which are known to correspond to the same underlying state. We apply this procedure to the FitzHugh-Nagumo PDE, and fuse measurements taken at a single point with principal-component measurements.


In many applications, our understanding of a system comes from sets of partial measurements (functions of the system state) rather than observations of the full state. Linking these heterogeneous partial measurements (different sensors, different measurement times) is the objective of a broad collection of techniques referred to as data fusion methods [1–3]. Since the system state can itself be a measurement, these methods also encompass traditional techniques for state estimation such as Kalman filters [4–6] and stochastic estimation methods [7–14]. One might subdivide these approaches into "dynamic" methods, which require models of the underlying evolution [4–6], and "static" methods [15], which do not require dynamical information but, in general, need more extensive measurements to be successful. Other, more recent, methods such as the gappy Proper Orthogonal Decomposition [16], nonlinear intrinsic variables [17], or compressed sensing-based methods [18] can be thought of as solving, in effect, the same problem, but make different assumptions about the nature of the underlying system and the type of measurements available. Though the exact details and assumptions differ, the overarching goal in data fusion is to develop a mapping from one set of measurements to another.

In this manuscript, we propose a method that generates such a mapping with the help of a set of intrinsic coordinates; these coordinates are based on the (computationally approximated) eigenfunctions of the Koopman operator [19–23]. To merge measurements from heterogeneous sources, the algorithm requires data in the form of time series from each source, and a set of "joint" measurements (i.e., measurements known to correspond to the same underlying state). In this manuscript, we assume that the mapping from the system state (or a quantity effectively equivalent to the state, such as the coefficients of the leading principal components [24]) to each set of measurements is invertible. However, this limitation can likely be overcome by "enriching" the measurement sets through the use of time delays [25–27]. The benefits of this approach are that it is naturally applicable to nonlinear systems and sensors, and that it minimizes the number of joint measurements required; in many systems, only a single pair of joint measurements is needed.

Problem formulation

Suppose we have two different sets of measurements, generated by two different sets of (heterogeneous) sensors observing the same fundamental behavior (the evolution of the state $\boldsymbol{x}(t)$). Let ${\boldsymbol{\tilde x}=\boldsymbol{\tilde g}(\boldsymbol{x})}$ denote a measurement of this state obtained with the first set of sensors, and $\boldsymbol{\hat x}=\boldsymbol{\hat g}(\boldsymbol{x})$ a measurement with the second set; "a measurement" is, in general, a vector-valued observable obtained at a single moment in time by one of our two collections of sensors. Here, $\boldsymbol{\tilde g}\!:\mathcal{M}\to\tilde{\mathcal{M}}$ and $\boldsymbol{\hat g}\!:\mathcal{M}\to\hat{\mathcal{M}}$ are the functions that map from the instantaneous system state ($\boldsymbol{x} \in\mathcal{M}$) to the corresponding instantaneous measurements. We record time series of such measurements from each of our sets of sensors, and divide each of the time series into sets of measurement pairs, $\{(\boldsymbol{\tilde x}_m,\boldsymbol{\tilde y}_m)\}_{m=1,\ldots,\tilde{M}}$ and $\{(\boldsymbol{\hat x}_{m},\boldsymbol{\hat y}_{m})\}_{m=1,\ldots,\hat{M}}$, where $\boldsymbol{\tilde x}_{m}$ (respectively, $\boldsymbol{\hat x}_{m}$) is the m-th measurement, and $\boldsymbol{\tilde y}_{m}$ (respectively, $\boldsymbol{\hat y}_{m}$) is its value after a single sampling interval. These time series can be obtained independently, and the total numbers of measurements, $\tilde{M}$ and $\hat{M}$, can differ. For simplicity, we assume the sampling interval, $\Delta t$, is the same for both data sets; this too can vary with only a slight modification to the algorithm. The only requirements we place on these data sets are that i) the data they contain are generated by the same system, ii) the mappings from the system state to either set of measurements formally have inverses in the region of interest, and iii) the sampling interval remains constant within a data set. The joint data set is denoted as $\left\{ (\boldsymbol{\tilde x}_{m},\boldsymbol{\hat x}_{m})\right\}_{m=1,\ldots,M_\text{joint}}$; the subscripts denote the m-th measurement in the joint data set, and $M_\text{joint}=1$ in this paper. This approach is applicable to an arbitrary number of different measurements; the restriction to two is solely for simplicity of presentation.
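
The bookkeeping in this step is minimal; a NumPy sketch (identifiers and array shapes are ours, and purely illustrative) might read:

```python
import numpy as np

def snapshot_pairs(series):
    """Split a uniformly sampled time series (rows are snapshots) into
    pairs (x_m, y_m), where y_m is x_m advanced by one sampling interval."""
    return series[:-1], series[1:]

# Hypothetical usage: independent recordings from the two sensor sets.
tilde_series = np.random.rand(1000, 3)           # e.g., three PCA coefficients
hat_series = np.random.rand(800, 2)              # e.g., pointwise (v, w) values
x_tilde, y_tilde = snapshot_pairs(tilde_series)  # M_tilde = 999 pairs
x_hat, y_hat = snapshot_pairs(hat_series)        # M_hat = 799 pairs
```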

The Koopman operator

The crux of our approach is the use of these time series data to computationally approximate the leading eigenfunctions of the Koopman operator [19–21,23,28,29], thus generating a mapping from measurement space to an intrinsic variable space. The Koopman operator is defined for a specific autonomous dynamical system, whose evolution law we denote as $\boldsymbol{x}_{n+1}=\boldsymbol{F}(\boldsymbol{x}_{n})$, where $\boldsymbol{x}\in\mathcal{M}\subseteq\mathbb{R}^{N}$ is the system state, $\boldsymbol{F}\!:\mathcal{M}\to\mathcal{M}$ is the evolution operator, and $n\in\mathbb{N}$ is the discrete time. The action of the Koopman operator is

$$(\mathcal{K}\psi)(\boldsymbol{x}) = (\psi\circ\boldsymbol{F})(\boldsymbol{x}), \qquad \mathcal{K}\varphi_{k} = \mu_{k}\varphi_{k}, \tag{1}$$

where $\psi\!:\mathcal{M}\to\mathbb{R}$ is a scalar observable. For brevity, we refer to $\varphi_k$ and $\mu_k$ as the k-th Koopman eigenfunction and eigenvalue, respectively. We also define $\lambda_{k}=\frac{\log(\mu_{k})}{\Delta t}$, the continuous-time eigenvalue approximated from its discrete-time counterpart $\mu_k$. Accompanying the eigenfunctions and eigenvalues are the Koopman modes, which are vectors in $\mathbb{C}^N$ (or spatial profiles if the dynamical system is a spatially extended one) that can be used to reconstruct (or predict) the full state when combined with the Koopman eigenfunctions and eigenvalues [30,31]. The Koopman eigenvalues, eigenfunctions, and modes have been used in many applications including the analysis of fluid flows [30,32–35], power systems [36–38], nonlinear stability [28], and state space parameterization [21]; here we exploit their properties for data fusion purposes.
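
The linear case makes these definitions concrete: for $\boldsymbol{x}_{n+1}=\boldsymbol{A}\boldsymbol{x}_{n}$, every left eigenvector of $\boldsymbol{A}$ yields a Koopman eigenfunction that is linear in the state. A minimal numerical check (the toy matrix is ours) is:

```python
import numpy as np

# For the linear map x -> A x, a left eigenvector w of A (w^T A = mu w^T)
# gives a Koopman eigenfunction phi(x) = w^T x with eigenvalue mu.
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])
mus, W = np.linalg.eig(A.T)   # columns of W are left eigenvectors of A
w, mu = W[:, 0], mus[0]

phi = lambda x: w @ x
x = np.random.rand(2)
assert np.isclose(phi(A @ x), mu * phi(x))   # (K phi)(x) = mu * phi(x)

dt = 2.0                 # sampling interval
lam = np.log(mu) / dt    # continuous-time eigenvalue lambda_k
```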

For this application, the ability of the Koopman eigenfunctions to generate a parameterization of state space is key. In the simple example that follows, we use the phase of an "oscillatory" eigenfunction, which has $|\mu_k|=1$, and the magnitude of a "decaying" eigenfunction, which has $|\mu_k| < 1$, as an intrinsic (quasi–action-angle) coordinate system for the slow manifold (i.e., the "weakest" stable manifold) of a limit cycle. While there are many data-driven methods for nonlinear manifold parameterization (see, e.g., refs. [24,39]), the benefit of this approach is that the resulting parameterization is, in principle, invariant to invertible transformations of the underlying system and, in that sense, constitutes an intrinsic set of coordinates for the system.

If it is possible to reconstruct the underlying system state from one snapshot of observation data, then $\boldsymbol{\tilde g}$ formally has an inverse, which we denote as $\boldsymbol{T}$ (i.e., ${{\boldsymbol{x}}=\boldsymbol{T}(\boldsymbol{\tilde x})}$). When this is not naturally the case, one can sometimes construct an "extended" $\boldsymbol{\tilde x}$ for which such a $\boldsymbol{T}$ does exist by combining measurements taken at the current and a finite number of previous times [5,25,27]. Then if $\varphi\!:\mathcal{M}\to\mathbb{C}$ is a Koopman eigenfunction with eigenvalue μ, the function $\tilde{\varphi}=\tilde{\alpha}^{-1}\,\varphi\circ\boldsymbol{T}$, where $\tilde{\varphi}\!:\tilde{\mathcal{M}}\to\mathbb{C}$, is formally an eigenfunction, with the same eigenvalue μ, of the Koopman operator defined for one set of sensors (rather than for the full state). The constant $\tilde{\alpha}\in\mathbb{C}$ highlights that this eigenfunction is only defined up to a constant. The evolution operator expressed in terms of $\boldsymbol{\tilde x}$ is $\boldsymbol{\tilde{F}}=\boldsymbol{\tilde g}\circ\boldsymbol{F}\circ\boldsymbol{T}$, and the action of the associated Koopman operator is $\tilde{\mathcal{K}}\psi=\psi\circ\boldsymbol{\tilde{F}}$. Then, $(\tilde{\mathcal{K}}\tilde{\varphi})(\boldsymbol{\tilde x})=(\tilde{\varphi}\circ\boldsymbol{\tilde{F}})(\boldsymbol{\tilde x})=\tilde{\alpha}^{-1}(\varphi\circ\boldsymbol{T}\circ\boldsymbol{\tilde g}\circ\boldsymbol{F}\circ\boldsymbol{T}\circ\boldsymbol{\tilde g})(\boldsymbol{x})=\tilde{\alpha}^{-1}(\varphi\circ\boldsymbol{F})(\boldsymbol{x})=\tilde{\alpha}^{-1}(\mathcal{K}\varphi)(\boldsymbol{x})=\mu\tilde{\alpha}^{-1}\varphi(\boldsymbol{x})=\mu\tilde{\varphi}(\boldsymbol{\tilde x})$, where we have assumed that $\tilde{\varphi}$ is still an element of some space of scalar observables. The same argument yields a $\hat{\varphi}$ and $\hat{\alpha}$ for the measurements represented by $\boldsymbol{\hat x}$. Finally, we define the ratio $\alpha = \hat{\alpha}/\tilde{\alpha}$, whose role we will explain shortly.

To approximate these quantities, we use extended dynamic mode decomposition (Extended DMD) [31], which is a recently developed data-driven method that approximates the Koopman "tuples" (i.e., triplets of related eigenvalues, eigenfunctions and modes). The inputs to the Extended DMD procedure are sets of snapshot pairs, $\{(\boldsymbol{x}_{m},\boldsymbol{y}_{m})\}_{m=1,\ldots,M}$ , and a set of dictionary elements that span a subspace of scalar observables, which we represent as the vector valued function ${{\boldsymbol{{\psi}}}({\boldsymbol{x}})=[\psi_{1}({\boldsymbol{x}}),\psi_{2}({\boldsymbol{x}}),\ldots,\psi_{K}({\boldsymbol{x}})]^{T}}$ , where $\psi_{k}\!\!\!:\mathcal{M}\to\mathbb{R}$ and ${\boldsymbol{{\psi}}}\!\!:\mathcal{M}\to\mathbb{R}^{K}$ . This procedure results in the matrix

$$\boldsymbol{K} = \boldsymbol{G}^{+}\boldsymbol{A}, \tag{2}$$

which is a finite-dimensional approximation of the Koopman operator, where ${\boldsymbol{{G}}}=\sum_{m=1}^{M}{\boldsymbol{{\psi}}}(\boldsymbol{x}_{m}){\boldsymbol{{\psi}}}(\boldsymbol{x}_{m})^{T}$ , ${{\boldsymbol{{A}}}=\sum_{m=1}^{M}{\boldsymbol{{\psi}}}(\boldsymbol{x}_{m}){\boldsymbol{{\psi}}}(\boldsymbol{y}_{m})^{T}}$ , and + denotes the pseudo-inverse. The k-th eigenvalue and eigenvector of ${\boldsymbol{{{K}}}}$ , which we denote as $\mu_k$ and $\boldsymbol{\xi}_k$ , produce an approximation of the k-th eigenvalue and eigenfunction of the Koopman operator [31]. We denote the approximate eigenfunction as $\varphi_k(\boldsymbol{x})={\boldsymbol{{\psi}}}^{T}(\boldsymbol{x})\boldsymbol{{\xi}}_{k} = \sum_{i=1}^K\psi_i(\boldsymbol{x})\boldsymbol{\xi}_k^{(i)}$ where $\boldsymbol{\xi}_k^{(i)}\in\mathbb{C}$ is the i-th element of $\boldsymbol{\xi}_k$ .
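
Concretely, with the dictionary evaluated row-wise on the snapshot sets, eq. (2) and the subsequent eigendecomposition amount to a few lines of linear algebra. The following NumPy sketch (function and variable names are ours; `psi` is assumed to map an $M\times d$ array of snapshots to an $M\times K$ array of dictionary values) illustrates the computation:

```python
import numpy as np

def extended_dmd(X, Y, psi):
    """Minimal Extended DMD sketch. X, Y: (M x d) snapshot pairs; psi maps
    an (M x d) array to an (M x K) array of dictionary evaluations."""
    PX, PY = psi(X), psi(Y)
    G = PX.T @ PX                    # G = sum_m psi(x_m) psi(x_m)^T
    A = PX.T @ PY                    # A = sum_m psi(x_m) psi(y_m)^T
    K = np.linalg.pinv(G) @ A        # eq. (2): K = G^+ A
    mu, Xi = np.linalg.eig(K)        # mu_k and xi_k (columns of Xi)
    order = np.argsort(-np.abs(mu))  # put the "leading" eigenvalues first
    mu, Xi = mu[order], Xi[:, order]
    varphi = lambda X: psi(X) @ Xi   # varphi_k(x) = psi(x)^T xi_k
    return mu, Xi, varphi
```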

The numerical procedure

Because ${\boldsymbol{x}}$ is, by assumption, unknown, we instead apply Extended DMD to the measurement data. The $\boldsymbol{\tilde x}$ data generates approximations of the set of $\tilde{\varphi}_{k}$ and $\tilde{\mu}_{k}$ , and the $\boldsymbol{\hat x}$ data generates approximations of the set of $\hat{\varphi}_{k}$ and $\hat{\mu}_{k}$ . To map between these separate sets of observations, note that

$$\tilde{\varphi}_{k}(\boldsymbol{\tilde x}_{m}) = \alpha_{k}\,\hat{\varphi}_{k}(\boldsymbol{\hat x}_{m}), \tag{3}$$

because $\varphi_{k}(\boldsymbol{x}_{m})=\tilde{\alpha}_{k}\tilde{\varphi}_{k}(\boldsymbol{\tilde x}_{m})=\hat{\alpha}_{k}\hat{\varphi}_{k}(\boldsymbol{\hat x}_{m})$ when $\tilde{\varphi}_{k}$ is the eigenfunction that "corresponds" to $\hat{\varphi}_{k}$ . To determine which eigenfunctions correspond to one another, we require that $\tilde{\mu}_{k}\approx\hat{\mu}_{k}$ . This is also a sanity check that Extended DMD is indeed producing a reasonable approximation of the Koopman operator; if no eigenvalues satisfy this constraint, then the approximation generated by Extended DMD is not accurate enough to be useful. Finally, to determine the $\alpha_{k}$ , we use the joint data set along with (3) to solve for $\alpha_{k}$ in a least squares sense and thus register the data.
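
A sketch of the matching and registration steps might read as follows (the function name and the eigenvalue-matching tolerance are our choices; with $M_\text{joint}=1$, the least-squares solve reduces to a single division):

```python
import numpy as np

def match_and_register(mu_t, mu_h, phi_t_joint, phi_h_joint, tol=1e-2):
    """Pair eigenfunctions whose eigenvalues (nearly) agree, then solve
    eq. (3), phi_tilde = alpha * phi_hat, on the joint data in a least
    squares sense. phi_*_joint: (M_joint x K) eigenfunction values."""
    pairs, alphas = [], []
    for k, mu in enumerate(mu_t):
        j = int(np.argmin(np.abs(mu_h - mu)))
        if np.abs(mu_h[j] - mu) > tol:
            continue                 # no credible correspondence: skip k
        a, *_ = np.linalg.lstsq(phi_h_joint[:, [j]],
                                phi_t_joint[:, k], rcond=None)
        pairs.append((k, j))
        alphas.append(a[0])
    return pairs, np.array(alphas)
```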

Taken together, the steps above produce an approximation of $\tilde{\varphi}_k$ given a measurement of $\boldsymbol{\hat x}$. The final step in the procedure is to obtain a mapping from the $\tilde{\varphi}_k$ to $\boldsymbol{\tilde x}$. One conceptually appealing way is to express $\boldsymbol{\tilde x}$ as the sum of its Koopman modes and eigenfunctions [23,30,31]. In this manuscript, we take a simpler approach and use interpolation routines for scattered data (see, e.g., refs. [40,41]) to approximate the inverse mapping, ${(\varphi_1(\boldsymbol{\tilde x}), \varphi_2(\boldsymbol{\tilde x}),\ldots) \mapsto \boldsymbol{\tilde x}}$. In particular, we use two-dimensional piecewise linear interpolation as implemented by the griddata command in SciPy [42], which constructs a piecewise linear interpolant based on a Delaunay triangulation of the data.
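
For instance, assuming the magnitude of the "decaying" eigenfunction and the phase of the "oscillatory" one are used as the two interpolation coordinates (as in the example that follows), the inverse mapping could be approximated by the sketch below; all names are ours:

```python
import numpy as np
from scipy.interpolate import griddata

def invert_embedding(train_coords, train_values, query_coords):
    """Map (approximate) eigenfunction coordinates back to measurements via
    piecewise linear interpolation on a Delaunay triangulation (griddata);
    each measurement component is interpolated independently."""
    return np.column_stack([
        griddata(train_coords, train_values[:, i], query_coords,
                 method="linear")
        for i in range(train_values.shape[1])])

# Hypothetical usage, given phi1 (decaying) and phi2 (oscillatory) evaluated
# on the training set, and `query` holding the coordinates of new points:
# coords = np.column_stack([np.abs(phi1), np.angle(phi2)])
# x_tilde_pred = invert_embedding(coords, x_tilde, query)
```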

Overall, the data fusion procedure is as follows:

1) Approximate the eigenfunctions and eigenvalues with the data $\{(\boldsymbol{\tilde x}_m,\boldsymbol{\tilde y}_m)\}_{m=1,\ldots,\tilde{M}}$ and $\{(\boldsymbol{\hat x}_{m},\boldsymbol{\hat y}_{m})\}_{m=1,\ldots,\hat{M}}$, and determine the number of eigenfunctions required to usefully parameterize state space.
2) Match the $\tilde{\varphi}_{k}$ with its corresponding $\hat{\varphi}_{k}$ by finding the closest pair of eigenvalues, $\tilde\mu_k$ and $\hat\mu_k$.
3) Use the joint data set, $\left\{(\boldsymbol{\tilde x}_{m},\boldsymbol{\hat x}_{m})\right\}_{m=1,\ldots,M_\text{joint}}$, to compute the $\alpha_{k}$ for each eigenfunction.
4) Given a new measurement, $\boldsymbol{\hat x}$, compute $\hat{\varphi}_{k}(\boldsymbol{\hat x})$ and use (3) to approximate $\tilde{\varphi}_{k}(\boldsymbol{\tilde x})$.
5) Finally, approximate $\boldsymbol{\tilde x}$ from the $\tilde{\varphi}_{k}(\boldsymbol{\tilde x})$ using an interpolation routine.

This method can be considered a hybrid of static and dynamic state estimation techniques: dynamic information is required to construct the mapping to and from the set of intrinsic variables, but is not used beyond that.

An illustrative example

To demonstrate the efficacy of this technique, we apply it to the FitzHugh-Nagumo PDE, a simple model for signal propagation in the axon of a neuron and a prototypical example of a reaction-diffusion system in one spatial dimension [43]. The governing equations are

$$\partial_{t}v = \partial_{x}^{2}v + v - v^{3} - w, \tag{4a}$$
$$\partial_{t}w = \delta\,\partial_{x}^{2}w + \epsilon\,(v - c_{1}w - c_{0}), \tag{4b}$$

where v is the activation field, w is the inhibition field, $c_{0}=-0.03$ , $c_{1}=2.0$ , $\delta=4.0$ , $\epsilon=0.017$ , for $x\in[0,20]$ with Neumann boundary conditions. These parameter values are chosen so that (4) has a spatio-temporal limit cycle. While the Koopman operator can be defined for (4), the large dimension of the state space (e.g., for a finite difference discretization) would make the necessary computations intractable. Instead, the simpler task of constructing this mapping on a low-dimensional "slow" manifold, where the dynamics are effectively two-dimensional, is undertaken.
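
A minimal method-of-lines sketch of such a simulation is given below; the grid resolution, initial condition, and choice of implicit integrator are our assumptions (consistent with eq. (4) and the stated parameters), not details taken from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

c0, c1, delta, eps = -0.03, 2.0, 4.0, 0.017
N = 200                            # grid points (our choice)
x = np.linspace(0.0, 20.0, N)
dx = x[1] - x[0]

def laplacian(u):
    """Second difference with Neumann (zero-flux) boundary conditions."""
    uxx = np.empty_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    uxx[0] = 2.0 * (u[1] - u[0]) / dx**2
    uxx[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return uxx

def rhs(t, vw):
    v, w = vw[:N], vw[N:]
    dv = laplacian(v) + v - v**3 - w                     # eq. (4a)
    dw = delta * laplacian(w) + eps * (v - c1 * w - c0)  # eq. (4b)
    return np.concatenate([dv, dw])

vw0 = np.concatenate([np.tanh(x - 10.0), np.zeros(N)])   # arbitrary start
sol = solve_ivp(rhs, (0.0, 2000.0), vw0, method="BDF",   # stiff diffusion
                t_eval=np.arange(0.0, 2000.0, 2.0))      # Delta t = 2
```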

We start by collecting a large, representative data set and performing principal-component analysis (PCA) [44] on it. For the first set of measurements, we project v and w onto the leading three principal components of the complete data set, so ${\boldsymbol{\tilde x}_n=[a_{1}(t_n),a_{2}(t_n),a_{3}(t_n)]}$, where $a_k$ is the coefficient of the k-th principal component evaluated at $t_n$. Together these modes capture over 95% of the energy in the data set [44], and serve as an accurate, low-dimensional representation of the full state. The other data come from pointwise measurements taken at x = 10 (i.e., $\boldsymbol{\hat x}(t)=[v(10,t),w(10,t)]$). Both sets of data are collected by generating 20 trajectories with a sampling interval of $\Delta t=2$, where each trajectory consists of $10^3$ snapshot pairs. Each trajectory is computed by perturbing the unstable fixed point associated with the limit cycle, evolving (4) for $\Delta T = 1000$ (roughly ten periods of oscillation) to allow fast transients to decay, and then recording the evolution of each measurement. The data sets are generated independently, so no snapshot in one data set corresponds exactly to any snapshot in the other, and are then "whitened" so that each measurement set has unit variance. The registration data set consists of a single measurement from the PCA data set and the corresponding pointwise values.
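
In code, the projection and whitening steps might look like the following sketch (the snapshot matrix here is a random stand-in for actual simulation data):

```python
import numpy as np

S = np.random.rand(20000, 400)      # stand-in: rows are stacked [v; w] snapshots
S0 = S - S.mean(axis=0)             # center the data before PCA
U, s, Vt = np.linalg.svd(S0, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)   # leading three modes: >95% (text)
a = S0 @ Vt[:3].T                   # x_tilde = [a1, a2, a3]
a = a / a.std(axis=0)               # "whiten" each coefficient to unit variance
```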

We approximate the space of observables with a finite-dimensional dictionary, whose elements we denote as $\tilde{\psi}_k$ or $\hat{\psi}_k$. In this application, $\tilde{\psi}_{k}$ and $\hat{\psi}_{k}$ are the k-th shape function of a moving least squares interpolant [40,41,45] with up to linear terms in each variable [46–48] and cubic spline weight functions [48]. Using a quad- or oct-tree to determine the node centers [49] resulted in a set of 1622 basis functions for the PCA data and 1802 basis functions for the pointwise data. Though nontrivial to implement, this set was chosen because it can exactly reproduce constant and linear functions, but other choices of basis functions are also viable [45,50–53].
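
Reproducing the moving least squares shape functions is beyond the scope of a short sketch; as one of the simpler viable alternatives alluded to above, a dictionary of Gaussian radial basis functions augmented with constant and linear terms could be built as follows (all names ours):

```python
import numpy as np

def rbf_dictionary(centers, sigma):
    """Dictionary of a constant, linear terms, and Gaussian radial basis
    functions centered at `centers` (an (n x d) array); a simpler stand-in
    for the moving least squares shape functions used in the text."""
    def psi(X):
        r2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.hstack([np.ones((X.shape[0], 1)), X,
                          np.exp(-r2 / (2.0 * sigma**2))])
    return psi

# Hypothetical usage: centers subsampled from the data themselves.
# psi_tilde = rbf_dictionary(x_tilde[::10], sigma=0.5)
```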

We applied the Extended DMD procedure to the dynamical data sets to obtain approximations of the Koopman eigenfunctions and eigenvalues. Figure 1 shows the PCA and point data colored by the magnitude of the Koopman eigenfunction with $\lambda_{1}\approx-8\times10^{-4}$ and by the phase of the Koopman eigenfunction with $\lambda_{2}\approx4.7i\times10^{-2}$. In particular, $\tilde{\lambda}_{1}=-7.26\times 10^{-4}$, $\hat{\lambda}_{1}=-8.57\times 10^{-4}$, $\tilde{\lambda}_{2}=0.0473i$, and $\hat{\lambda}_{2}=0.0473i$, where $\tilde{\lambda}_{k}$ is the k-th eigenvalue computed using the $\boldsymbol{\tilde x}$ measurements, and $\hat{\lambda}_{k}$ the eigenvalue obtained using the $\boldsymbol{\hat x}$ measurements. Despite the differences in the nature of the data, both relevant sets of Koopman eigenfunctions (i.e., either $\tilde\varphi_1$ and $\angle\tilde{\varphi}_2$, or $\alpha_1\hat{\varphi}_1$ and $\angle(\alpha_2\hat{\varphi}_2)$) generate (effectively) the same parameterization of the slow manifold.


Fig. 1: (Colour on-line) The data-driven parameterization generated using the Koopman eigenfunctions with eigenvalues near $\lambda_{1}=-8\times10^{-4}$ and $\lambda_{2}=4.7i\times10^{-2}$ for the magnitude and phase figures, respectively. Plots (a) and (c) show $\varphi_{1}$ , the value of the first eigenfunction, and plots (b) and (d) show $\angle\varphi_{2}$ , the "phase" of the second eigenfunction. These pairs of eigenfunctions can be used to parameterize the data; for the PCA case, the data effectively lie on a nonlinear, two-dimensional manifold embedded in $\mathbb{R}^{3}$ , and for the pointwise data, on a subset of $\mathbb{R}^{2}$ .


Figure 2 shows the three principal-component coefficients as functions of the intrinsic embedding defined by (numerical approximations of) two of the leading Koopman eigenfunctions computed using the PCA data set. The principal-component coefficients associated with new data points are determined by approximating these values using $\alpha_1\hat{\varphi}_1$ and $\angle(\alpha_2\hat{\varphi}_2)$. The single data point used to determine $\alpha_1$ and $\alpha_2$ is indicated by the black * in the figure. To approximate the principal-component coefficients given (an approximation of) the Koopman eigenfunction values, we construct a piecewise linear interpolant for each of the coefficients individually using the data points shown in the figure. In a more complex system or with "noisy" data, more sophisticated approaches to approximating this mapping may be required. However, as we will demonstrate shortly, even simple interpolation routines appear to be sufficient in our example.


Fig. 2: (Colour on-line) Colored scatter plots of the first three principal-component coefficients, a1, a2, and a3, as a function of two of the Koopman eigenfunctions, $\tilde{\varphi}_{1}$ and $\angle\tilde{\varphi}_{2}$ . Note that all three of these coefficients appear to be smooth functions of the Koopman eigenfunction values, and hence $\tilde{\varphi}_{1}$ and $\angle\tilde{\varphi}_{2}$ form an effective action-angle coordinate system for the nonlinear "slow manifold" in principal-component space. The black * denotes the point in the joint data set that was used to determine the $\alpha_k$ .


Figure 3 demonstrates the quality of the mapping obtained using our approach in combination with piecewise linear interpolation. In the figure, the black lines denote the true coefficient of each principal component, and the red dots indicate the predicted value. Note that this trajectory was not used to compute the Koopman eigenfunctions; furthermore, the fact that the data are a time series was not used in making this prediction, and each measurement of $\boldsymbol{\hat x}$ was considered individually. Qualitatively, our method captures both the oscillations and the amplitude envelopes associated with all three of the principal-component coefficients. However, there are small numerical errors in the reconstruction. Over the time window ${t\in[0, 400]}$, which is shown in fig. 3, the relative error in each of the principal components is $e_{1}=0.0405$, $e_{2}=0.0655$, and $e_{3}=0.0496$, where $e_{i}=\|a_{i}^{\text{(true)}}-a_{i}^{\text{(pred)}}\|/\|a_{i}^{(\text{true})}\|$ with $\|\cdot\|=(\int_{0}^{400}(\cdot)^{2}\; \text{d}t)^{1/2}$. In general, the accuracy of this approach is better for points nearer to the limit cycle, and the relative errors over the window $t\in[0, 4000]$, which contains points closer to the limit cycle, are only $e_{1}=0.0140$, $e_{2}=0.0295$, and $e_{3}=0.0234$, respectively.
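
For reference, this error metric can be evaluated with a few lines of NumPy (a sketch of our own, with the time integral approximated by the trapezoidal rule on the sampled trajectory):

```python
import numpy as np

def relative_error(a_true, a_pred, dt=2.0):
    """e_i = ||a_true - a_pred|| / ||a_true||, where ||.|| is the L2-in-time
    norm approximated by the trapezoidal rule on the samples."""
    norm = lambda f: np.sqrt(np.trapz(f**2, dx=dt))
    return norm(a_true - a_pred) / norm(a_true)
```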


Fig. 3: (Colour on-line) The principal-component coefficients reconstructed from the pointwise data (red) for a new trajectory that was not used in the Koopman eigenfunction computation. The actual values are indicated by the black lines. There is good agreement between the predicted and true principal-component values, and our method accurately captures the approach of the trajectory to the limit cycle. We reiterate that each principal-component coefficient was reconstructed independently.


In this example, we have not yet discussed the dimensionality of the data, but as stated previously, the number of measurements in each data set is critical for this approach to be justifiable mathematically. Our focus is on dynamics near the limit cycle, which in this problem are effectively two dimensional. However, the data lie on a two-dimensional nonlinear manifold, and PCA, which fits the data with a hyperplane, requires three principal components to represent it accurately. Therefore, the identified mapping is from $\mathbb{R}^2$ to a two-dimensional nonlinear manifold in $\mathbb{R}^3$, and not from $\mathbb{R}^2$ to $\mathbb{R}^3$.

Conclusions

We have presented a method for data fusion or state reconstruction that is suitable for nonlinear systems and exploits the existence of an intrinsic set of variables generated by the eigenfunctions of the Koopman operator. Rather than mapping directly between different sets of measurements, our method focuses on generating an independent mapping to and from the intrinsic variables for each heterogeneous set of measurements. In principle, this can be accomplished as long as a mapping from each set of measurements (or measurements and their time delays) to the system state exists; the benefit of this approach is that the majority of the required data can be obtained independently, and only a single "joint" pair of data is needed. The key to this method is a pair of properties that the eigenvalues and eigenfunctions of the Koopman operator are known to have: i) the invariance of the Koopman eigenvalues to invertible transformations of the measurements, and ii) the ability of the leading Koopman eigenfunctions to parameterize the relevant "state space". Both are necessary to perform data fusion; while the eigenfunctions themselves generate the needed parameterization, their associated eigenvalues ensure our coordinate system is consistently defined across the various sets of measurements we wish to fuse.

The Koopman operator has long been acknowledged as a powerful tool for analyzing systems governed by nonlinear evolution laws, but because it acts on observables rather than states, it has traditionally been difficult to approximate effectively. However, recently developed data-driven methods, such as Extended DMD, promise to enable techniques that rely upon properties of the Koopman operator to be used in practice. In particular, Extended DMD often produces useful, albeit not fully converged, approximations of the leading Koopman eigenvalues and eigenfunctions given a data set of snapshot pairs. While the errors in the eigenfunctions result in an imperfect mapping, our approximations were "accurate enough" for the task of data fusion or state reconstruction to be accomplished in a nonlinear system with minimal amounts of joint data.

Acknowledgments

The authors gratefully acknowledge support from the National Science Foundation (DMS-1204783), the AFOSR, and the ARO (ARO W911NF-11-1-0511).
