
Introduction to Dynamic Mode Decomposition

16 Nov 2021 - Categories: Python, Dynamic_Mode_Decomposition, and Data-Driven_Methods


Introduction

The rapid growth of abundant, high-resolution data sources and machine learning techniques has increased interest in data-driven approaches that can accurately describe modern dynamical systems in order to develop new insights and physical interpretations, improve optimization and design, implement active control, and generate future state predictions. Dynamical systems encompass a wide array of topics within science and engineering, such as electrical circuits, fluid turbulence, climatology, finance, ecology, neuroscience, epidemiology, and any system that evolves in time. In addition to big data and machine learning, Koopman operator theory has also augmented the general comprehension of dynamical systems by representing their nonlinearities in terms of an infinite-dimensional linear operator that acts on a Hilbert space of measurement functions of the system’s state. The ability to obtain linear representations of these complex nonlinear dynamical systems, coupled with rich data and modern machine learning techniques, has created great potential to improve our ability to predict and control them.

Dynamic mode decomposition (DMD) is a powerful data-driven technique for discovering dynamical systems from high-dimensional data. In essence, DMD is an algorithm that identifies the best-fit linear dynamical system (in a least-squares sense) that advances high-dimensional measurements forward in time. This is similar to the framework of the Koopman operator, which also advances the observation of a state at a given time to the next time step.

Part of the growing attraction of DMD is attributed to its equation-free nature. Although some dynamical systems can be modeled from an existing set of governing equations, many of the aforementioned dynamical systems have no known governing equations. Even in systems with known governing equations, it is difficult to uncover patterns that characterize how dominant behaviors evolve in time. Another attractive property of the DMD algorithm is that the spatiotemporal structures it generates allow for future state prediction and control.

In this post I will provide a brief mathematical overview of the notation and DMD architecture. This will be followed by a discussion of the DMD algorithm and its application to a generated dataset to demonstrate its simple numerical implementation. Lastly, I will briefly discuss some limitations of DMD and viable extensions of the algorithm.

Mathematical Overview

To motivate the application of the DMD algorithm on dynamic data, consider dynamical systems in the form:

\[\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t; \mathbf{\mu}),\]

where \(\mathbf{x} \in \mathbb{R}^n\) is the system state at time \(t\), \(\mathbf{f}(\cdot)\) describes the dynamics, and \(\mathbf{\mu}\) contains the system parameters. To illustrate this notation with a concrete dynamical system, consider the Lorenz equations

\[\frac{dx}{dt} = \sigma (y - x)\] \[\frac{dy}{dt} = x (\rho - z) - y\] \[\frac{dz}{dt} = xy - \beta z\]

In this context, \(\mathbf{x} = \left[\begin{array}{c} x \\ y \\ z\end{array} \right]\) and \(\mathbf{\mu} = \left[\begin{array}{c} \sigma \\ \rho \\ \beta\end{array} \right]\).
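As an aside for readers who want to simulate this system, a minimal sketch is given below. The parameter values \(\sigma = 10\), \(\rho = 28\), \(\beta = 8/3\) are the classic chaotic choice and are an assumption, as they are not specified in this post.

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, x, sigma=10.0, rho=28.0, beta=8.0/3.0):
    #f(x, t; mu) for the Lorenz system
    return [sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2]]

#Integrate from an arbitrary initial condition
sol = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0], max_step=0.01)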

For simplicity, the dynamic system considered in this example will not depend on parameters and thus \(\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t; \mathbf{\mu})\) is reduced to

\[\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t)\]

The discrete-time dynamical system is also considered

\[\mathbf{x}_{k+1}= \mathbf{F}(\mathbf{x}_k)\]

Discrete-time systems may also be induced from continuous-time dynamics if \(\mathbf{x}_k\) is sampled from \(\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t)\) discretely in time, so that \(\mathbf{x}_k = \mathbf{x}(k\Delta t)\). For an arbitrary time \(t\), a flow map \(\mathbf{F}_t\) is defined as

\[\mathbf{F}_t(\mathbf{x}(t_0)) = \mathbf{x}(t_0) + \displaystyle \int_{t_0}^{t_0+t} \mathbf{f}(\mathbf{x}(\tau))\,d\tau\]

The DMD procedure models \(\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t)\) by computing the spectral decomposition (eigendecomposition) of a matrix \(\mathbf{A}\) that defines a locally linear dynamical system

\[\frac{d\mathbf{x}}{dt} = \mathbf{Ax}\]

which advances the current state of the dynamic system into the future.

It is desirable to work with such systems because they are well understood and their solutions are readily available. Furthermore, linear dynamical systems are fully characterized by the eigenvalues and eigenvectors of the matrix \(\mathbf{A}\), and when expressed in the intrinsic eigenvector coordinates, the dynamics are decoupled. To demonstrate this, the solution of \(\frac{d\mathbf{x}}{dt} = \mathbf{Ax}\) is first derived. Linear systems of differential equations admit solutions of the form \(\mathbf{x}(t) = \exp(\mathbf{A}t)\mathbf{x}(0)\). To verify this, the derivative of this expression must be taken with respect to time, \(t\). Recall that \(\exp(x)\) is defined as \(1+x+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+\cdots+\frac{x^{n}}{n!}+\cdots\). Replacing \(x\) with \(\mathbf{A}t\) yields

\[\exp(\mathbf{A} t) = \mathbf{I} + \mathbf{A}t + \frac{(\mathbf{A}t)^2}{2!} + \frac{(\mathbf{A}t)^3}{3!} + \cdots + \frac{(\mathbf{A}t)^n}{n!} + \cdots\]

Taking the derivative with respect to \(t\) then yields

\[\frac{d}{dt}\exp(\mathbf{A} t) = \mathbf{A} + \mathbf{A}^{2}t + \frac{\mathbf{A}^{3}t^{2}}{2!} + \cdots + \frac{\mathbf{A}^{n}t^{n-1}}{(n-1)!} + \cdots\]

Factoring out the matrix \(\mathbf{A}\) from this expression recovers the definition of \(\exp(\mathbf{A} t)\), and thus the derivative simplifies to

\[\frac{d}{dt}\exp(\mathbf{A} t) = \mathbf{A}\exp(\mathbf{A}t)\]

Substituting these results back into \(\frac{d\mathbf{x}}{dt} = \mathbf{Ax}\), the following expression is obtained

\[\mathbf{A} \exp(\mathbf{A} t)\mathbf{x}(0) = \mathbf{A} \exp(\mathbf{A} t)\mathbf{x}(0)\]

thus verifying that the solution has the form \(\mathbf{x}(t) = \exp(\mathbf{A}t)\mathbf{x}(0)\).

Because the system is entirely characterized by the eigenvalues and eigenvectors of the matrix \(\mathbf{A}\), it is paramount that the spectral decomposition of this matrix is derived and the solution be expressed in terms of its eigenvalues and eigenvectors.

Consider the eigenvalue problem defined as

\[\mathbf{A\Xi} = \mathbf{\Xi\Lambda}\]

where \(\mathbf{\Xi}\) is a matrix whose columns are the eigenvectors of \(\mathbf{A}\) and \(\mathbf{\Lambda}\) is a diagonal matrix whose entries are the eigenvalues of \(\mathbf{A}\). In the case where \(\mathbf{A}\) has \(n\) distinct eigenvalues, the matrix \(\mathbf{\Xi}\) is composed of \(n\) linearly independent eigenvectors and thus \(\det(\mathbf{\Xi}) \neq 0\). The matrix \(\mathbf{A}\) can then be expressed entirely in terms of its spectral decomposition by right-multiplying by \(\mathbf{\Xi}^{-1}\). This yields

\[\mathbf{A} = \mathbf{\Xi\Lambda\Xi}^{-1}\]

Similarly, \(\mathbf{\Lambda}\) can be expressed as

\[\mathbf{\Lambda} = \mathbf{\Xi}^{-1}\mathbf{A\Xi}\]

Looking back at the Taylor expansion of \(\exp(\mathbf{A} t)\), if the spectral decomposition of the matrix \(\mathbf{A}\) is substituted, the following series expansion is obtained

\[\exp(\mathbf{A} t) = I + (\mathbf{\Xi \Lambda} \mathbf{\Xi}^{-1})t + \frac{(\mathbf{\Xi \Lambda} \mathbf{\Xi}^{-1})(\mathbf{\Xi \Lambda} \mathbf{\Xi}^{-1})t^{2}}{2!} + \frac{(\mathbf{\Xi \Lambda} \mathbf{\Xi}^{-1})(\mathbf{\Xi \Lambda} \mathbf{\Xi}^{-1})(\mathbf{\Xi \Lambda} \mathbf{\Xi}^{-1})t^{3}}{3!} + \cdots + \frac{(\mathbf{\Xi \Lambda} \mathbf{\Xi}^{-1})^{n}t^{n}}{n!}\]

Note that the term \(\mathbf{\Xi}^{-1} \mathbf{\Xi}\) appears when consecutive terms of the spectral decomposition are multiplied. This matrix multiplication equates to the identity matrix and can therefore be eliminated. The expansion above can thus be rewritten as

\[\exp(\mathbf{A} t) = \mathbf{\Xi}(\mathbf{I} + \mathbf{\Lambda}t + \frac{(\mathbf{\Lambda}t)^2}{2!} + \frac{(\mathbf{\Lambda}t)^3}{3!} + \cdots + \frac{(\mathbf{\Lambda}t)^n}{n!})\mathbf{\Xi}^{-1}\]

once \(\mathbf{\Xi}\) and \(\mathbf{\Xi}^{-1}\) are factored out. The parenthesized series is exactly the definition of \(\exp(\mathbf{\Lambda} t)\), so the expression further simplifies to

\[\exp(\mathbf{A} t) = \mathbf{\Xi} \exp(\mathbf{\Lambda} t) \mathbf{\Xi}^{-1}\]

Given this fact, one could also express the solution \(\mathbf{x}(t) = \exp(\mathbf{A}t)\mathbf{x}(0)\) as

\[\mathbf{x}(t) = \mathbf{\Xi} \exp(\mathbf{\Lambda} t) \mathbf{\Xi}^{-1}\mathbf{x}(0)\]

This is an extremely useful result because each eigenvector is characterized by a particular oscillation frequency and a growth or decay rate determined by its corresponding eigenvalue. Recall that eigenvalues are generally complex, so a particular eigenvalue can be written as \(\lambda = a + bi\). The real component determines the growth or decay of the corresponding mode, and the imaginary component describes its oscillatory behavior. The example outlined in this post will demonstrate how the oscillatory behavior of the given data is correctly recovered.
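As a quick numerical sanity check of this identity, a random \(4 \times 4\) matrix (a stand-in for \(\mathbf{A}\), chosen purely for illustration) can be used to compare the spectral form of the solution against the matrix exponential:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x0 = rng.standard_normal(4)
t = 0.7

#Eigenvalues (Lambda) and eigenvectors (Xi) of A
Lam, Xi = np.linalg.eig(A)
#x(t) = Xi exp(Lambda t) Xi^{-1} x(0)
x_spec = Xi @ np.diag(np.exp(Lam * t)) @ np.linalg.solve(Xi, x0)
#Reference solution x(t) = exp(At) x(0)
x_expm = expm(A * t) @ x0
print(np.allclose(x_spec, x_expm))  #True, up to round-off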

Another valuable property observed when expressing this solution in terms of the spectral decomposition is that the dynamical system becomes decoupled. To observe this, a new variable is defined as \(\mathbf{z} = \mathbf{\Xi}^{-1} \mathbf{x}\). The matrix product yields a transformation into the intrinsic eigenvector coordinates. To demonstrate this property, the derivative of \(\mathbf{z}\) with respect to \(t\) is computed

\[\frac{d}{dt}(\mathbf{z} )= \frac{d}{dt}(\mathbf{\Xi}^{-1}\mathbf{x}) \rightarrow \frac{d\mathbf{z}}{dt} = \mathbf{\Xi}^{-1}\frac{d\mathbf{x}}{dt}\]

The definition of a linear system can then be substituted for \(\frac{d\mathbf{x}}{dt}\)

\[\frac{d\mathbf{z}}{dt} = \mathbf{\Xi}^{-1}\mathbf{Ax}\]

The variable \(\mathbf{z}\) can also be rewritten as

\[\mathbf{x} = \mathbf{\Xi z}\]

\(\frac{d\mathbf{z}}{dt}\) can then be expressed as

\[\frac{d\mathbf{z}}{dt} = \mathbf{\Xi}^{-1}\mathbf{A\Xi z}\]

The right-hand side is exactly the definition of \(\mathbf{\Lambda}\) derived earlier, and thus

\[\frac{d\mathbf{z}}{dt} = \mathbf{\Lambda}\mathbf{z}\]

The right-hand side of this expression is a matrix multiplication involving a diagonal matrix. This can be visualized as

\[\begin{bmatrix} \lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n\\ \end{bmatrix} \left[\begin{array}{c} z_1 \\ z_2 \\ \vdots \\ z_n\end{array} \right]\]

This evaluates to

\[\left[\begin{array}{c} \lambda_1 z_1 \\ \lambda_2 z_2 \\ \vdots \\ \lambda_n z_n\end{array} \right]\]
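Each coordinate therefore evolves independently of the others; component-wise, the solution of \(\frac{d\mathbf{z}}{dt} = \mathbf{\Lambda z}\) is simply

\[z_i(t) = e^{\lambda_i t} z_i(0), \qquad i = 1, \ldots, n\]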

This expression demonstrates how the coupled system \(\frac{d\mathbf{x}}{dt} = \mathbf{Ax}\) is transformed into an equivalent uncoupled system via the spectral decomposition. Summarizing what has been covered to this point, the dynamic system of interest was defined as

\[\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t)\]

Complex dynamics of such form are often approximated as linear systems of the form of

\[\frac{d\mathbf{x}}{dt} = \mathbf{Ax}\]

due to the analytical techniques available for generating solutions to such systems. These solutions can then be used to fully characterize the system of interest and develop future state predictions. These properties motivated the spectral decomposition of the matrix \(\mathbf{A}\). It is at this point that a serious complication arises.

In practice, the matrix \(\mathbf{A}\) poses serious computational issues. Consider a dynamical system in which the state vector \(\mathbf{x} \in \mathbb{R}^n\) contains millions of degrees of freedom (\(n \gg 10^{6}\)). This is not an uncommon situation within complex dynamical systems described by modern data sources, which can capture data at high resolution. This renders the explicit representation of \(\mathbf{A}\) intractable because it contains \(n^{2}\) entries. For example, if the state vector contains \(1{,}000{,}000\) entries, the matrix \(\mathbf{A}\) would contain \(10^{12}\) (1 trillion) entries!

The DMD algorithm’s objective is to compute the spectral decomposition in a numerically efficient manner. This task is accomplished by leveraging dimensionality reduction to approximate the matrix \(\mathbf{A}\) and thus avoid any explicit computation and representation of this matrix. To apply the DMD algorithm, data is first arranged into two data matrices.

\[\mathbf{X} = \begin{bmatrix} | & | & &|\\ \mathbf{x}(t_1) & \mathbf{x}(t_2) & \cdots & \mathbf{x}(t_{m-1})\\ | & | & &| \end{bmatrix}\] \[\mathbf{X'} = \begin{bmatrix} | & | & &|\\ \mathbf{x}(t_2) & \mathbf{x}(t_3) & \cdots & \mathbf{x}(t_m)\\ | & | & &| \end{bmatrix}\]

The matrix \(\mathbf{A}\) relates these two matrices in time as follows

\[\mathbf{X}^{'} \approx \mathbf{A}\mathbf{X}\]

Given this relation, the matrix \(\mathbf{A}\) can then be defined as

\[\mathbf{A} = \mathbf{X}^{'}\mathbf{X}^{\dagger}\]

where \(\dagger\) is the Moore-Penrose pseudoinverse. This solution minimizes the error

\[\begin{Vmatrix} \mathbf{X}^{'} - \mathbf{A}\mathbf{X} \end{Vmatrix}_F\]

where \(\begin{Vmatrix}\cdot\end{Vmatrix}_F\) is the Frobenius norm and is given by

\[\begin{Vmatrix}\mathbf{X}\end{Vmatrix}_F = \sqrt{\sum_{j=1}^{n}\sum_{k=1}^{m}\lvert x_{jk}\rvert^{2}}\]

Minimizing the Frobenius norm of the error therefore corresponds to a least-squares best fit. A toy demonstration of this relationship is sketched below, after which the exact DMD algorithm is outlined.
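The check below uses a hypothetical \(3 \times 3\) system with noise-free snapshots, so the recovery of the system matrix is exact:

import numpy as np

rng = np.random.default_rng(1)
A_true = rng.standard_normal((3, 3))  #hypothetical small system matrix
X = rng.standard_normal((3, 50))      #snapshot matrix
X_prime = A_true @ X                  #time-shifted snapshots

#A = X' X^dagger, the least-squares best-fit linear operator
A_fit = X_prime @ np.linalg.pinv(X)
print(np.allclose(A_fit, A_true))     #True for noise-free, full-rank data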

The DMD Algorithm

Step 1:

The first step of the DMD algorithm is to compute the singular value decomposition (SVD) of the data matrix \(\mathbf{X}\):

\[\mathbf{X} = \mathbf{U \Sigma V}^{*}\]

In practice, only a low-rank approximation is required, so the SVD is truncated to the leading \(r\) singular values and vectors:

\[\mathbf{X} \approx \mathbf{\tilde{U} \tilde{\Sigma} \tilde{V}}^*\]

where \(\mathbf{\tilde{U}} \in \mathbb{C}^{n \times r}\), \(\mathbf{\tilde{\Sigma}} \in \mathbb{C}^{r \times r}\), \(\mathbf{\tilde{V}} \in \mathbb{C}^{(m-1) \times r}\), and \(^{*}\) denotes the conjugate transpose. A useful property of the full matrices \(\mathbf{U}\) and \(\mathbf{V}\) is that they are unitary, so \(\mathbf{U}\mathbf{U}^{*} = \mathbf{U}^{*}\mathbf{U} = \mathbf{I}\) and \(\mathbf{V}\mathbf{V}^{*} = \mathbf{V}^{*}\mathbf{V}=\mathbf{I}\); for the truncated matrices, only \(\mathbf{\tilde{U}}^{*}\mathbf{\tilde{U}} = \mathbf{I}\) and \(\mathbf{\tilde{V}}^{*}\mathbf{\tilde{V}} = \mathbf{I}\) hold. This property will become extremely useful in the following steps.

Step 2:

Compute \(\mathbf{\tilde{A}}\):

Recall that \(\mathbf{A} = \mathbf{X}^{'}\mathbf{X}^{\dagger}\). Substituting the truncated SVD of \(\mathbf{X}\) in this equation yields:

\[\mathbf{A} = \mathbf{X}^{'}(\mathbf{\tilde{U} \tilde{\Sigma} \tilde{V}}^{*})^{\dagger} = \mathbf{X}^{'} \mathbf{\tilde{V} \tilde{\Sigma}}^{-1} \mathbf{\tilde{U}}^{*}\]

Because explicitly representing the matrix \(\mathbf{A}\) is computationally prohibitive, it is instead projected onto the POD modes \(\mathbf{\tilde{U}}\):

\[\mathbf{\tilde{A}} = \mathbf{\tilde{U}}^{*} \mathbf{A} \mathbf{\tilde{U}}\]

Expressing the projection in terms of the SVD, and using \(\mathbf{\tilde{U}}^{*}\mathbf{\tilde{U}} = \mathbf{I}\), yields

\[\mathbf{\tilde{A}} = \mathbf{\tilde{U}}^{*} (\mathbf{X}^{'} \mathbf{\tilde{V} \tilde{\Sigma}}^{-1} \mathbf{\tilde{U}}^{*}) \mathbf{\tilde{U}} = \mathbf{\tilde{U}}^{*} \mathbf{X}^{'} \mathbf{\tilde{V} \tilde{\Sigma}}^{-1}\]

Step 3:

Compute the spectral decomposition of \(\mathbf{\tilde{A}}\)

\[\mathbf{\tilde{A}W = W \Lambda}\]

Step 4:

Reconstruct the high-dimensional DMD modes \(\Phi\) using the eigenvectors of the reduced matrix \(\mathbf{\tilde{A}}\) and the time-shifted matrix \(\mathbf{X}^{'}\)

\[\Phi = \mathbf{X}^{'} \mathbf{\tilde{V} \tilde{\Sigma}}^{-1} \mathbf{W}\]

Interestingly, although the low-dimensional eigenvectors \(\mathbf{W}\) were used to compute the high-dimensional DMD modes, it can be shown that the reconstructed DMD modes are in fact eigenvectors of the high-dimensional matrix \(\mathbf{A}\). Observe the following product

\[\mathbf{A} \mathbf{\Phi} = (\mathbf{X}^{'}\mathbf{\tilde{V}}\mathbf{\tilde{\Sigma}}^{-1} \mathbf{\tilde{U}}^{*}) (\mathbf{X}^{'} \mathbf{\tilde{V} \tilde{\Sigma}}^{-1} \mathbf{W})\]

Looking at this expression, the definition of \(\mathbf{\tilde{A}}\) lies within it, and thus \(\mathbf{\tilde{A}}\) can be substituted for \(\mathbf{\tilde{U}}^{*} \mathbf{X^{'} \tilde{V} \tilde{\Sigma}^{-1}}\)

\[\mathbf{A} \mathbf{\Phi} = \mathbf{X'}\mathbf{\tilde{V}}\mathbf{\tilde{\Sigma}}^{-1} \mathbf{\tilde{A}} \mathbf{W}\]

Using \(\mathbf{\tilde{A}W = W \Lambda}\),

\[\mathbf{A} \mathbf{\Phi} = \mathbf{X'}\mathbf{\tilde{V}}\mathbf{\tilde{\Sigma}}^{-1} \mathbf{W} \mathbf{\Lambda}\]

Lastly, the definition of a DMD mode can be substituted in

\[\mathbf{A} \mathbf{\Phi} = \mathbf{\Phi \Lambda}\]

We thus arrive at the conclusion that the DMD modes determined by the low-dimensional eigenvectors of \(\mathbf{\tilde{A}}\) are in fact eigenvectors of the high-dimensional matrix \(\mathbf{A}\).

Now that the eigenvectors and eigenvalues of the matrix \(\mathbf{A}\) have been calculated, they can be utilized to make future state predictions. Recall the solution derived above for linear systems. Substituting the DMD modes and eigenvalues into the solution yields

\[\mathbf{x}(t) = \mathbf{\Phi} \exp(\mathbf{\Lambda} t) \mathbf{\Phi}^{\dagger}\mathbf{x}(0)\]

The product \(\mathbf{\Phi}^{\dagger}\mathbf{x}(0)\) (the pseudoinverse is used because \(\mathbf{\Phi}\) is generally not square) can be defined as a vector \(\mathbf{b}\), which contains the initial conditions transformed into the eigenvector basis. The solution is then

\[\mathbf{x}(t) = \mathbf{\Phi} \exp(\mathbf{\Lambda} t) \mathbf{b}\]

The eigenvalues in \(\mathbf{\Lambda}\) were computed for a discrete-time system, so before the state can be estimated at an arbitrary time they must be transformed to their continuous-time counterparts. The transformation is as follows

\[\mathbf{\Omega} = \frac{\log{\mathbf{\Lambda}}}{\Delta t}\]

This allows the system state to be estimated at an arbitrary time \(t\) via \(\mathbf{x}(t) = \mathbf{\Phi} \exp(\mathbf{\Omega} t) \mathbf{b}\). Now that the algorithm has been outlined, it will be demonstrated on a simple example to highlight its equation-free nature and simple implementation.
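Before moving to the worked example, the four steps can be collected into a compact sketch. This is only an illustration; the function name and interface are my own and not taken from any library:

import numpy as np

def dmd(X1, X2, r, dt):
    #Step 1: truncated SVD of X1
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r, :]
    #Step 2: project A onto the POD modes
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / S)
    #Step 3: spectral decomposition of the reduced matrix
    Lam, W = np.linalg.eig(A_tilde)
    #Step 4: reconstruct the high-dimensional DMD modes
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / S) @ W
    #Continuous-time eigenvalues for predictions at arbitrary t
    Omega = np.log(Lam) / dt
    return Phi, Lam, Omega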

Python Code Overview

In this example, two mixed spatiotemporal signals are combined and then the DMD algorithm is applied to decompose the mixed signal into its constituents.

The two signals are defined by

\[f(x,t) = f_1(x,t) + f_2(x,t) = \operatorname{sech}(x+3)\exp(i2.3t) + 2\operatorname{sech}(x) \tanh(x) \exp(i2.8t)\]

First, import the necessary libraries

import numpy as np
#These libraries are used for creating the plots
import plotly.graph_objects as go
from plotly.subplots import make_subplots

Next, the two functions are defined

#Define the functions 
def f1(xx, tt):
    y_1 = (1. / np.cosh(xx + 3)) * np.exp(2.3j * tt)
    return y_1

def f2(xx, tt):
    y_2 = (2. / np.cosh(xx)) * np.tanh(xx) * np.exp(2.8j * tt)
    return y_2

Create the space and time discretizations and generate the data for analysis

#Define time and space discretizations
xi = np.linspace(-10, 10, 400)
t = np.linspace(0, 4*np.pi, 200)
dt = t[1] - t[0]
xx, tt = np.meshgrid(xi, t)
X = f1(xx, tt) + f2(xx, tt)

The individual spatiotemporal signals \(f_1(x,t)\) and \(f_2(x,t)\) are displayed below. The two frequencies present in the functions are \(\omega_1\) = 2.3 and \(\omega_2\) = 2.8. The interactive display demonstrates how each function’s spatial structure oscillates at a different frequency.

The individual signals are summed and the resulting data is displayed in the third column in the figure below. Notice the distortion in the amplitude of the summed functions due to the different oscillation frequencies of the individual functions.

Now that the data is generated, the DMD algorithm can be applied. First, the data matrices \(\mathbf{X}_1\) and \(\mathbf{X}_2\) are created

X_1 = X.T[:, :-1]
X_2 = X.T[:, 1:]

Next, the SVD of the matrix \(\mathbf{X}_1\) is computed

U, Sigma, V = np.linalg.svd(X_1)
#np.linalg.svd returns the conjugate transpose of V; conjugating leaves the
#transpose of V, so V.T in the steps below recovers V itself
V = V.conj()

Plotting the first ten singular values shows that the first two capture over 99% of the system’s energy, so the SVD can be truncated to rank \(r = 2\).

U, Sigma, V = U[:, :2], Sigma[:2], V[:2, :]

\(\mathbf{\tilde{A}}\) and its spectral decomposition are computed as follows

#The conjugate transpose of U is required because the data is complex
A_tilde = np.linalg.multi_dot([U.conj().T, X_2, V.T, np.diag(np.reciprocal(Sigma))])
Lambda, W = np.linalg.eig(A_tilde)

Lastly, the high-dimensional DMD modes are reconstructed

Phi = np.linalg.multi_dot([X_2, V.T, np.diag(np.reciprocal(Sigma)), W])

The following code block calculates \(\mathbf{b}\)

#b is the initial condition x(0) (the first snapshot) in the DMD mode basis
b, residuals, rank, sigma = np.linalg.lstsq(Phi, X_1[:, 0], rcond=None)

The last step that is required is to transform the eigenvalues to their continuous-time form. This is done by taking the natural logarithm of each eigenvalue and then dividing it by \(\Delta t\)

Omega = np.log(Lambda)/dt

Notice that the imaginary components of Omega recover the underlying frequencies of the two signals

\[2\operatorname{sech}(x) \tanh(x) \exp(i2.8t), \quad \operatorname{sech}(x+3)\exp(i2.3t)\]
[-8.54346131e-15+2.8j -6.84026324e-15+2.3j]

If the DMD modes are plotted on a graph, they correspond to the two different spatial signals generated by \(f_1(x,t), f_2(x,t)\)

Although no knowledge of the underlying functions was introduced into the DMD algorithm, it successfully managed to extract the underlying coherent spatiotemporal features that characterize the system from data alone. This is exactly why the DMD algorithm has garnered large interest within modern data-driven approaches. Its implementation scales efficiently as it exploits the SVD to reduce dimensionality, its equation-free nature renders it highly flexible, and it is easy to implement. Another reason why the DMD algorithm is a great tool to analyze dynamic systems is that it allows for future state predictions.

To make future predictions of the system, the dynamics of the system are first constructed by evaluating \(\exp({\mathbf{\Omega} t}) \mathbf{b}\)

#Time vector spanning the training data
t_exp = np.arange(X.T.shape[1]) * dt
#Evaluate exp(Omega*t) for each mode and scale by the amplitudes b
temp = np.repeat(Omega.reshape(-1,1), t_exp.size, axis=1)
dynamics = np.exp(temp * t_exp) * b.reshape(2, -1)

To demonstrate the DMD algorithm’s accuracy, the original data is first reconstructed from the time dynamics and the DMD modes.

X_dmd = Phi @ dynamics

The accuracy of the reconstructed data is visualized by evaluating the difference between the two datasets

The differences between the true data and the reconstructed data are essentially \(0\); they are on the order of \(10^{-15}\), i.e., machine precision.

Future state predictions can then be evaluated arbitrarily far ahead. The following plot displays the last state of the system and the predictions for 200 time steps into the future.
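Such a forecast can be generated by extending the time vector past the training window and reusing the variables defined above; the 200-step horizon here matches the plot but is otherwise arbitrary:

#Extend the time vector 200 steps past the training data
t_future = np.arange(X.T.shape[1] + 200) * dt
dynamics_future = np.exp(np.outer(Omega, t_future)) * b.reshape(2, -1)
X_future = Phi @ dynamics_future  #columns beyond the data are predictions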

Now that the implementation of the DMD algorithm has been demonstrated, the following section will cover some of its extensions, applications, and limitations.

DMD Applications and Limitations

Because of the DMD algorithm’s simple implementation and equation-free nature, it is highly applicable in a variety of scenarios outside of its original inception within fluid dynamics. DMD has been applied to datasets from epidemiological systems, neuroscience, video processing, robotics, finance, and even plasma physics. Within epidemiology specifically, the DMD modes and eigenvalues give insight into the spatial propagation of a disease and its temporal variations, respectively. This application also led to an important extension of the DMD algorithm that incorporates the effect of actuation, as these systems are generally not representative of unforced dynamics. This range of applications is only expected to grow.

Although the DMD algorithm has produced great results in a variety of applications, it does possess limitations that must be understood, as they can have serious implications for the implementation of the algorithm and its results. Because the SVD is at the core of the DMD algorithm, DMD has difficulty capturing translational and rotational invariances in a dataset. Typically, these invariances drastically increase the rank of the SVD far above the true rank of the underlying dynamics. This undermines the dimensionality reduction at the heart of DMD and thus diminishes one of its major strengths. Similarly, transients and intermittent phenomena are not accurately captured by DMD. Extensions of DMD have been developed to resolve these weaknesses and will be explored in other posts on this page.