Learning Koopman eigenfunctions for prediction and control: the transient case

arXiv:1810.08733v1 [math.OC] 20 Oct 2018

Milan Korda¹, Igor Mezić¹

October 23, 2018

Abstract

This work presents a data-driven framework for learning eigenfunctions of the Koopman operator geared toward prediction and control. The method relies on the richness of the spectrum of the Koopman operator in the transient, off-attractor, regime to construct a large number of eigenfunctions such that the state (or any other observable quantity of interest) is in the span of these eigenfunctions and hence predictable in a linear fashion. Once a predictor for the uncontrolled part of the system is obtained in this way, the incorporation of control is done through a multi-step prediction error minimization, carried out by a simple linear least-squares regression. The predictor so obtained is in the form of a linear controlled dynamical system and can be readily applied within the Koopman model predictive control framework of [11] to control nonlinear dynamical systems using linear model predictive control tools. The method is entirely data-driven and based purely on convex optimization, with no reliance on neural networks or other non-convex machine learning tools. The novel eigenfunction construction method is also analyzed theoretically, proving rigorously that the family of eigenfunctions obtained is rich enough to span the space of all continuous functions. In addition, the method is extended to construct generalized eigenfunctions that also give rise to Koopman invariant subspaces and hence can be used for linear prediction. Detailed numerical examples demonstrate the approach, both for prediction and feedback control.

Keywords: Koopman operator, eigenfunctions, model predictive control, data-driven methods

1  Introduction

The Koopman operator framework is becoming an increasingly popular tool for data-driven analysis of dynamical systems. In this framework, the nonlinear system is represented by an infinite-dimensional linear operator, thereby allowing for spectral analysis of the nonlinear system akin to the classical spectral theory of linear systems or Fourier analysis. The theoretical foundations of this approach were laid out by Koopman in [10], but it was not until the early 2000's that the practical potential of these methods was realized in [19] and [17]. The framework became especially popular with the realization that the Dynamic mode decomposition (DMD) algorithm [26] developed in fluid mechanics constructs an approximation of the Koopman operator, thereby allowing for theoretical analysis and extensions of the algorithm (e.g., [31, 2, 12]). This has spurred an array of applications in fluid mechanics [25], power grids [24], neurodynamics [5], energy efficiency [7], or molecular physics [33], to name just a few.

Besides descriptive analysis of nonlinear systems, the Koopman operator approach was also utilized to develop systematic frameworks for control [11] (with earlier attempts in, e.g., [23, 30]), state estimation [27, 28] and system identification [16] of nonlinear systems. All these works rely crucially on the concept of embedding (or lifting) of the original state-space to a higher-dimensional space where the dynamics can be accurately predicted by a linear system. In order for such prediction to be accurate over an extended time period, the embedding mapping must span an invariant subspace of the Koopman operator, i.e., the embedding mapping must consist of the (generalized) eigenfunctions of the Koopman operator (or linear combinations thereof). It is therefore of paramount importance to construct accurate approximations of the Koopman eigenfunctions. The leading data-driven algorithms are based either on the Dynamic mode decomposition (e.g., [26, 31]) or on the Generalized Laplace averages (GLA) algorithm [20]. The DMD-type methods can be seen as finite-section operator approximation methods, which do not exploit the particular Koopman operator structure and enjoy only weak spectral convergence guarantees [12].

¹Milan Korda and Igor Mezić are with the University of California, Santa Barbara; [email protected], [email protected]
On the other hand, the GLA method does exploit the Koopman operator structure and ergodic theory and comes with spectral convergence guarantees, but suffers from numerical instabilities for eigenvalues that do not lie on the unit circle (discrete time) or the imaginary axis (continuous time). Among the plethora of more recently introduced variations of the (extended) dynamic mode decomposition algorithm, let us mention the variational approach [32], the sparsity-based method [9] or the neural-networks-based method [29].

In this work, we propose a new algorithm for the construction of Koopman eigenfunctions from data. The method is geared toward transient, off-attractor, dynamics where the spectrum of the Koopman operator is extremely rich. In particular, provided that a non-recurrent surface exists in the state-space, any complex number is an eigenvalue of the Koopman operator with an associated continuous (or even smooth if so desired) eigenfunction, defined everywhere except for singularities and attractors. What is more, the associated eigenspace is infinite-dimensional, parametrized by functions defined on the boundary of the non-recurrent surface. We leverage this richness to obtain a large number of eigenfunctions in order to ensure that the observable quantity of interest (e.g., the state itself) lies within the span of the eigenfunctions (and hence within an invariant subspace of the Koopman operator) and is therefore predictable in a linear fashion. The requirements that the embedding mapping span an invariant subspace and that the quantity of interest belong to this subspace are crucial for practical applications: they imply both a linear time evolution in the embedding space as well as the possibility to reconstruct the quantity of interest in a linear fashion.
On the other hand, having only a nonlinear reconstruction mapping from the embedding space may lead to comparatively low-dimensional embeddings but does not buy us much practically, since in that case we essentially replace one nonlinear problem with another. In addition to eigenfunctions, the proposed method can be extended to construct generalized eigenfunctions that also give rise to Koopman invariant subspaces and can hence be used for linear prediction; this further enriches the class of embedding mappings constructible using the proposed method.

On an algorithmic level, given a set of initial conditions lying on distinct trajectories, a set of complex numbers (the eigenvalues) and a set of continuous functions, the proposed method constructs eigenfunctions by simply "flowing" the values of the continuous functions forward in time according to the eigenfunction equation, starting from the values of the continuous functions defined on the set of initial conditions. Provided the trajectories are non-periodic, this consistently and uniquely defines the eigenfunctions on the entire data set. These eigenfunctions are then extended to the entire state-space by interpolation or approximation. We prove that such an extension is possible (i.e., there exist continuous eigenfunctions taking the computed values on the data set) provided that there is a non-recurrent surface passing through the initial conditions of the trajectories, and we prove that such a surface always exists provided the flow is rectifiable in the considered time interval. Under the same assumption, we also prove that the eigenfunctions constructed in this way span the space of all continuous functions in the limit as the number of boundary function-eigenvalue pairs tends to infinity. This implies that in the limit any continuous observable can be arbitrarily accurately approximated by linear combinations of the eigenfunctions, a crucial requirement for practical applications. The minimized objective in the learning procedure is simply the projection error of the observables of interest on the span of the eigenfunctions. This is all that is required to construct the linear predictors in the uncontrolled setting.

In the controlled setting, we follow a two-step procedure. First, we construct a predictor for the uncontrolled part of the system (i.e., with the control being zero or any other fixed value). Next, using a second data set generated with control, we minimize a multi-step prediction error in order to obtain the input matrix for the linear predictor. Crucially, the multi-step error minimization boils down to a simple linear least-squares problem; this is due to the fact that the dynamics and output matrices are already identified. This is a distinctive feature of the approach, compared to (E)DMD-based methods (e.g., [12, 23]) where only a one-step prediction error can be minimized in a convex fashion. The predictors obtained in this way are then applied within the Koopman model predictive control (Koopman MPC) framework of [11], which we briefly review in this work. However, the eigenfunction and linear predictor construction methods are completely general and immediately applicable, for example, in the state estimation setting [27, 28].

It is well known in the Koopman operator community that the spectrum of the Koopman operator is very rich in the space of continuous functions; see, e.g., [15, Theorem 3.0.2]. In particular, the fact that, away from singularities, eigenfunctions corresponding to arbitrary eigenvalues can be constructed was noticed in [18], where these were termed open eigenfunctions; they were subsequently used in [4] to find conjugacies between dynamical systems. This work is, to the best of our knowledge, the first one to exploit the richness of the spectrum for prediction and control using linear predictors and to provide a theoretical analysis of the set of eigenfunctions obtained in this way. On the other hand, the spectrum of the Koopman operator "on attractor", in a post-transient regime, is much more structured and can be analyzed numerically in great level of detail (see, e.g., [13, 8]).
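The multi-step least-squares step described above can be sketched numerically: with the dynamics matrix A fixed, the k-step prediction is linear in the entries of the input matrix B, so minimizing the summed prediction error over a horizon is an ordinary least-squares problem. The dimensions, the matrix A, and the synthetic data below are illustrative assumptions, not taken from the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, K = 3, 1, 20                        # lifted dim, input dim, horizon (assumed)
A = np.diag([0.9, 0.7, 0.5])              # dynamics matrix, already identified
B_true = rng.normal(size=(N, m))          # "ground truth" input matrix to recover
u = rng.normal(size=(K, m))               # control input sequence

z = np.zeros((K + 1, N))                  # lifted trajectory z_{k+1} = A z_k + B u_k
for k in range(K):
    z[k + 1] = A @ z[k] + B_true @ u[k]

# Multi-step error: z_k - A^k z_0 = sum_{j<k} A^(k-1-j) B u_j is linear in
# vec(B); stack one block row per step k and solve a single least-squares problem.
rows, rhs = [], []
for k in range(1, K + 1):
    Mk = sum(np.kron(u[j].reshape(1, -1), np.linalg.matrix_power(A, k - 1 - j))
             for j in range(k))           # maps vec(B) to the k-step input response
    rows.append(Mk)
    rhs.append(z[k] - np.linalg.matrix_power(A, k) @ z[0])
vecB = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)[0]
B_est = vecB.reshape(N, m, order="F")     # undo column-major vectorization
assert np.allclose(B_est, B_true)
```

The key point is that no nonlinear optimization is ever needed: because A and C are fixed beforehand, the horizon-wide prediction depends linearly on B.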

Notation  The set of real numbers is denoted by R, the set of complex numbers by C, and N = {0, 1, . . .} denotes the set of natural numbers. The space of continuous functions defined on a set X is denoted by C(X). The Moore-Penrose pseudoinverse of a matrix A ∈ C^{n×n} is denoted by A†, the transpose by A^⊤ and the conjugate (Hermitian) transpose by A^H. The identity matrix will be denoted by I. The symbol diag(·, . . . , ·) denotes a (block-)diagonal matrix composed of the arguments.

2  Koopman operator

We first develop our framework for uncontrolled dynamical systems and generalize it to controlled systems in Section 5. Consider therefore the nonlinear dynamical system

    ẋ = f(x)    (1)

with the state x ∈ X ⊂ R^n and f Lipschitz continuous on X. The flow of this dynamical system is denoted by S_t(x), i.e.,

    (d/dt) S_t(x) = f(S_t(x))    (2)

for all x ∈ X and all t ≥ 0. The Koopman operator semigroup (K_t)_{t≥0} is defined by

    K_t g = g ∘ S_t

for all g ∈ C(X), where ∘ denotes function composition and C(X) the space of all continuous functions on X. Since the flow of a dynamical system with a Lipschitz vector field is also Lipschitz, it follows that K_t : C(X) → C(X), i.e., each element of the Koopman semigroup maps continuous functions to continuous functions. Crucially for us, each K_t is a linear operator. With a slight abuse of language, from here on we will refer to the Koopman operator semigroup simply as the Koopman operator.

Eigenfunctions  An eigenfunction of the Koopman operator associated to an eigenvalue λ ∈ C is any function φ ∈ C(X) satisfying

    (K_t φ)(x) = e^{λt} φ(x),    (3)

which is equivalent to

    φ(S_t(x)) = e^{λt} φ(x).    (4)

Therefore, any such eigenfunction defines a coordinate evolving linearly along the flow of (1) and satisfying the linear ordinary differential equation (ODE)

    (d/dt) φ(S_t(x)) = λ φ(S_t(x)).    (5)
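As a quick sanity check of the eigenfunction relation (4), the following sketch uses a hypothetical scalar system whose flow is known in closed form; the system and the test points are illustrative assumptions, not examples from this paper.

```python
import numpy as np

# Hypothetical 1-D system x' = -x, whose flow is S_t(x) = x * exp(-t).
# For this system phi(x) = x^k is a Koopman eigenfunction with eigenvalue
# lambda = -k, since phi(S_t(x)) = (x e^{-t})^k = e^{-kt} x^k = e^{lambda t} phi(x).

def flow(x, t):
    return x * np.exp(-t)

def check_eigenfunction(phi, lam, x0, t):
    lhs = phi(flow(x0, t))            # (K_t phi)(x0)
    rhs = np.exp(lam * t) * phi(x0)   # e^{lambda t} phi(x0)
    return np.isclose(lhs, rhs)

assert check_eigenfunction(lambda x: x,    -1.0, x0=0.7, t=2.3)
assert check_eigenfunction(lambda x: x**2, -2.0, x0=0.7, t=2.3)
```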

2.1  Linear predictors from eigenfunctions

Since the eigenfunctions define linear coordinates, they can be readily used to construct linear predictors for the nonlinear dynamical system (1). The goal is to predict the evolution of a quantity of interest ξ(x) (often referred to as an "observable" or an "output" of the system) along the trajectories of (1). The function ξ : R^n → R^{n_ξ} often represents the state itself, i.e., ξ(x) = x, or an output of the system (e.g., the attitude of a vehicle or the kinetic energy of a fluid), or the cost function to be minimized within an optimal control problem, or a nonlinear constraint on the state of the system (see Section 5.1 for concrete examples). The distinctive feature of this work is the requirement that the predictor constructed be a linear dynamical system. This facilitates the use of linear tools for state estimation and control, thereby greatly simplifying the design procedure as well as drastically reducing computational and deployment costs (see [27] for applications of this idea to state estimation and [11] for model predictive control).

Let φ₁, . . . , φ_N be eigenfunctions of the Koopman operator with the associated (not necessarily distinct) eigenvalues λ₁, . . . , λ_N. Then we can construct a linear predictor of the form

    ż = Az,        (6a)
    z₀ = φ(x₀),    (6b)
    ŷ = Cz,        (6c)

where

    A = diag(λ₁, . . . , λ_N),    φ = (φ₁, . . . , φ_N)^⊤,    (7)

and where ŷ is the prediction of ξ(x). To be more precise, the prediction of ξ(x(t)) = ξ(S_t(x₀)) is given by

    ξ(x(t)) ≈ ŷ(t) = C e^{At} z₀ = C diag(e^{λ₁t}, . . . , e^{λ_N t}) z₀.

The matrix C is chosen such that the error of the projection of ξ onto span{φ₁, . . . , φ_N} is minimized, i.e., C solves the optimization problem

    min_{C ∈ C^{n_ξ×N}} ‖ξ − Cφ‖,    (8)

where ‖·‖ is a norm on the space of continuous functions (e.g., the sup-norm or the L² norm).
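The predictor (6) can be sketched in a few lines: fit C by least squares on sample points (a discretization of (8) in the L² norm) and then predict through the diagonal exponential. The system, eigenfunctions, observable and sample set below are illustrative assumptions.

```python
import numpy as np

# Illustrative system x' = -x with eigenpairs (lambda_i, phi_i) known in
# closed form; xi is an observable lying in span{phi_1, phi_2}.
lams = np.array([-1.0, -2.0])                    # eigenvalues lambda_1, lambda_2
phis = [lambda x: x, lambda x: x**2]             # eigenfunctions of x' = -x
xi = lambda x: 3.0 * x + 0.5 * x**2              # quantity of interest

xs = np.linspace(-1.0, 1.0, 50)                  # sample points for (8)
Phi = np.column_stack([p(xs) for p in phis])     # column j holds phi_j(x_i)
C = np.linalg.lstsq(Phi, xi(xs), rcond=None)[0]  # sampled min ||xi - C phi||

# Predict xi(x(t)) for x(t) = x0 exp(-t) via y(t) = C diag(e^{lam_i t}) z0:
x0, t = 0.8, 1.5
z0 = np.array([p(x0) for p in phis])             # z0 = phi(x0), eq. (6b)
y_hat = C @ (np.exp(lams * t) * z0)
assert np.isclose(y_hat, xi(x0 * np.exp(-t)))    # exact, since xi is in the span
```

Because ξ lies exactly in the span of the eigenfunctions here, the linear predictor is error-free for all t, illustrating the point made in the "Prediction error" paragraph below.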

Prediction error  Since φ₁, . . . , φ_N are eigenfunctions of the Koopman operator, the prediction of the evolution of the eigenfunctions along the trajectory of (1) is error-free, i.e., z(t) = φ(x(t)). Therefore, the sole source of the prediction error

    ‖ξ(x(t)) − ŷ(t)‖    (9)

is the error in the projection of ξ onto span{φ₁, . . . , φ_N}, quantified by Eq. (8). In particular, if ξ ∈ span{φ₁, . . . , φ_N} (with the inclusion understood componentwise), we have ‖ξ(x(t)) − ŷ(t)‖ = 0 for all t ≥ 0. This observation will be crucial for defining a meaningful objective function when learning the eigenfunctions.

3  Non-recurrent sets and eigenfunctions

In this section we show how non-recurrent sets naturally give rise to eigenfunctions. Let a time T > 0 be given. A set Γ ⊂ X is called non-recurrent if

    x ∈ Γ  ⟹  S_t(x) ∉ Γ for all t ∈ (0, T].

Given any function g ∈ C(Γ) and any λ ∈ C, we can construct an eigenfunction of the Koopman operator by simply solving the defining ODE (5) "initial condition by initial condition" for all initial conditions x₀ ∈ Γ. Mathematically, we define for all x₀ ∈ Γ

    φ_{λ,g}(S_t(x₀)) = e^{λt} g(x₀),    (10)

which defines the eigenfunction on the entire image X_T of Γ by the flow S_t(·) for t ∈ [0, T]. Written explicitly, this image is

    X_T = ⋃_{t∈[0,T]} S_t(Γ) = ⋃_{t∈[0,T]} {S_t(x₀) | x₀ ∈ Γ}.

See Figure 1 for an illustration. To get an explicit expression for φ_{λ,g}(x) we flow backward in time until we hit the non-recurrent set Γ, obtaining

    φ_{λ,g}(x) = e^{−λτ(x)} g(S_{τ(x)}(x))    (11)

for all x ∈ X_T, where

    τ(x) = inf_{t∈R} {t | S_t(x) ∈ Γ}

is the first time that the trajectory of (1) hits Γ starting from x. By construction, for x ∈ X_T we have τ(x) ∈ [−T, 0]. The results of this discussion are summarized in the following theorem:
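The construction (10) is straightforward to tabulate on trajectory data: each sample S_t(x₀) simply receives the value e^{λt} g(x₀). The sketch below uses a hypothetical system with a closed-form flow; the choices of Γ, g and λ are illustrative assumptions.

```python
import numpy as np

# Illustrative system x' = -x with flow S_t(x) = x exp(-t); Gamma is a
# finite set of initial conditions on distinct trajectories.
lam = -0.5 + 2.0j                  # any complex lambda works off-attractor
g = lambda x0: x0 + 1.0            # boundary function g in C(Gamma)

x0s = np.array([1.0, 1.5, 2.0])    # Gamma: initial conditions
ts = np.linspace(0.0, 1.0, 20)     # sampling times in [0, T]

# Tabulate (state, eigenfunction value) pairs along each trajectory, eq. (10).
states = np.array([[x0 * np.exp(-t) for t in ts] for x0 in x0s])
phi_vals = np.array([[np.exp(lam * t) * g(x0) for t in ts] for x0 in x0s])

# The tabulated values satisfy the eigenfunction relation (4) along the flow:
# phi(S_s(x)) = e^{lam s} phi(x) for sample pairs (t, t+s) on one trajectory.
s = ts[5] - ts[2]
assert np.allclose(phi_vals[:, 5], np.exp(lam * s) * phi_vals[:, 2])
```

In the data-driven setting of Section 4, these tabulated values are then extended off the sampled trajectories by interpolation or approximation.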


Figure 1: Construction of eigenfunctions using a non-recurrent set Γ and a continuous function g defined on Γ.

Theorem 1  Let Γ be a non-recurrent set, g ∈ C(Γ) and λ ∈ C. Then φ_{λ,g} defined by (11) is an eigenfunction of the Koopman operator on X_T. In particular, φ_{λ,g} satisfies (4) and (5) for all x ∈ X_T and all t such that S_t(x) ∈ X_T. In addition, if g is Lipschitz continuous, then also

    ∇φ_{λ,g} · f = λ φ_{λ,g}    (12)

almost everywhere in X_T (and everywhere in X_T if g is differentiable).

Proof: The result follows by construction. Since Γ is non-recurrent, the definition (10) is consistent for all t ∈ [0, T] and equivalent to (11). Since S₀(x₀) = x₀, we have φ_{λ,g}(x₀) = g(x₀) for all x₀ ∈ Γ and hence Eq. (10) is equivalent to the defining Eq. (4) of the Koopman eigenfunctions. To prove (12), observe that g Lipschitz implies that φ_{λ,g} is Lipschitz, and the result follows from (5) by the chain rule and Rademacher's theorem, which asserts almost-everywhere differentiability of Lipschitz functions. □

Several remarks are in order.

Richness  We emphasize that this construction works for an arbitrary λ ∈ C and an arbitrary function g continuous¹ on Γ. Therefore, there are uncountably many eigenfunctions that can be generated in this way, and in this work we exploit this to construct a sufficiently rich collection of eigenfunctions such that the projection error (8) is minimized. The richness of the class of eigenfunctions is analyzed theoretically in Section 3.2 and used practically in Section 4 for data-driven learning of eigenfunctions.

Time direction  The same construction can be carried out backward in time, or forward and backward in time, as long as Γ is non-recurrent for the time interval considered. In this work we focus on the forward-in-time construction, which naturally lends itself to data-driven applications where typically only forward-in-time data is available.

¹In this work we restrict our attention to functions g continuous on Γ, but in principle discontinuous functions of a suitable regularity class could be used as well.

History This construction is very closely related to the concept of open eigenfunctions introduced in [18], which were subsequently used in [4] to find conjugacies between dynamical systems. This work is, to the best of our knowledge, the first one to use such construction for prediction and control using linear predictors.

3.1  Non-recurrent set vs non-recurrent surface

It is useful to think of the non-recurrent set Γ as an (n−1)-dimensional surface so that X_T is full dimensional. Such a surface can be, for example, any level set of a Koopman eigenfunction with a non-zero real part (e.g., an isostable) or a level set of a Lyapunov function. However, these level sets can be hard to obtain in practice; fortunately, their knowledge is not required. The reason for this is that the set Γ can be a finite discrete set, in which case X_T is simply the collection of all trajectories with initial conditions in Γ; since trajectories are one-dimensional, any randomly generated finite (or countable) discrete set will be non-recurrent with probability one. This is a key feature of our construction that will be utilized in Section 4 for a data-driven learning of the eigenfunctions.

A natural question arises: can one find a non-recurrent surface passing through a given finite discrete non-recurrent set Γ? The answer is positive, provided that the points in Γ do not lie on the same trajectory and the flow can be rectified:

Lemma 1  Let Γ = {x¹, . . . , x^M} be a finite set of points in X and let X′ be a full-dimensional compact set containing Γ on which the flow of (1) can be rectified, i.e., there exists a diffeomorphism h : Y′ → X′ through which (1) is conjugate to

    ẏ = (0, . . . , 0, 1)    (13)

with Y′ ⊂ R^n convex. Assume that no two points in the set Γ lie on the same trajectory of (1). Then there exists an (n−1)-dimensional surface Γ̂ ⊃ Γ, closed in the standard topology of R^n, such that x ∈ Γ̂ implies S_t(x) ∉ Γ̂ for any t > 0 satisfying S_{t′}(x) ∈ X′ for all t′ ∈ [0, t].

Proof: Let y^j = h^{−1}(x^j), j = 1, . . . , M, and let Γ_Y = {y¹, . . . , y^M} = h^{−1}(Γ). The goal is to construct an (n−1)-dimensional surface Γ̂_Y, closed in R^n, passing through the points {y¹, . . . , y^M}, i.e., Γ̂_Y ⊃ Γ_Y, and satisfying

    y ∈ Γ̂_Y  ⟹  Ŝ_t(y) ∉ Γ̂_Y    (14)

for any t > 0 satisfying Ŝ_{t′}(y) ∈ Y′ for all t′ ∈ [0, t], where Ŝ_{t′}(y) denotes the flow of (13). Once Γ̂_Y is constructed, the required surface Γ̂ is obtained as Γ̂ = h(Γ̂_Y).

Given the nature of the rectified dynamics (13), the condition (14) will be satisfied if Γ̂_Y is a graph of a Lipschitz continuous function γ : R^{n−1} → R such that

    Γ̂_Y = {(y₁, . . . , y_n) ∈ Y′ | y_n = γ(y₁, . . . , y_{n−1})}.

We shall construct such a function γ. Denote by ȳ^j = (y₁^j, . . . , y_{n−1}^j) the first n−1 components of each point y^j ∈ R^n. The nature of the rectified dynamics (13), the convexity of Y′ and the fact that the x^j's (and hence the y^j's) do not lie on the same trajectory imply that the ȳ^j's are distinct. Therefore, the pairs (ȳ^j, y_n^j) ∈ R^{n−1} × R, j = 1, . . . , M, can be interpolated with a Lipschitz continuous function γ : R^{n−1} → R. One such example of γ is

    γ(y₁, . . . , y_{n−1}) = max_{j∈{1,...,M}} { (1 − ‖(y₁, . . . , y_{n−1}) − ȳ^j‖) y_n^j },

where we assume that y_n^j ≥ 0 (which can be achieved without loss of generality by translating the y_n coordinate, since Y′ is compact) and ‖·‖ is any norm on R^{n−1}. Another example is a multivariate polynomial interpolant of degree d, which always exists for any d satisfying C(n−1+d, d) ≥ M. Since both h and γ are Lipschitz continuous, the surface Γ̂ is (n−1)-dimensional and is closed in the standard topology of R^n. □
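The explicit max-formula for γ is easy to sketch. The data points below are illustrative; note that the formula reproduces the prescribed values y_n^j when the projections ȳ^j are well separated (pairwise distance at least 1 in the chosen norm) and y_n^j ≥ 0.

```python
import numpy as np

# Sketch of the Lipschitz interpolant gamma from the proof of Lemma 1,
# here with n - 1 = 1 and the Euclidean norm; the data are assumptions.
ybar = np.array([[0.0], [2.0], [4.0]])   # first n-1 coordinates of y^j
yn = np.array([0.5, 0.0, 1.2])           # last coordinates y_n^j (all >= 0)

def gamma(y):
    # gamma(y) = max_j (1 - ||y - ybar^j||) * y_n^j
    return np.max((1.0 - np.linalg.norm(y - ybar, axis=1)) * yn)

for yb, v in zip(ybar, yn):
    assert np.isclose(gamma(yb), v)      # the graph passes through the data
```

The graph of such a γ is then mapped through h to obtain the non-recurrent surface Γ̂ in the original coordinates.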

3.2  Span of the eigenfunctions

A crucial question arises: can one approximate an arbitrary continuous function by a linear combination of the eigenfunctions constructed using the approach described in Section 3 by selecting more and more boundary functions g and eigenvalues λ? Crucially for our application, if this is the case, we can make the projection error (8), and thereby also the prediction error (9), arbitrarily small by enlarging the set of eigenfunctions φ. If this is the case, does one have to enlarge the set of eigenvalues or does it suffice to increase only the number of boundary functions g? In this section we give a precise answer to these questions.

Before we do so, we set up some notation. Given any set Λ ⊂ C, we define

    mesh(Λ) = { ∑_{k=1}^{p} α_k λ_k | λ_k ∈ Λ, α_k ∈ N, p ∈ N }.    (15)

A basic result in Koopman operator theory asserts that if Λ is a set of eigenvalues of the Koopman operator, then so is mesh(Λ). Now, given Λ ⊂ C and G ⊂ C(Γ), we define

    Φ_{Λ,G} = {φ_{λ,g} | λ ∈ Λ, g ∈ G},    (16)

where φ_{λ,g} is given by (11). In words, Φ_{Λ,G} is the set of all eigenfunctions arising from all combinations of boundary functions in G and eigenvalues in Λ using the procedure described in Section 3. Now we are ready to state the main result of this section:

Theorem 2  Let Γ be a non-recurrent set, closed in the standard topology of R^n, and let the vector field f be rectifiable in X_T, i.e., the dynamics (1) is conjugate to

    ẏ = (0, . . . , 0, 1)    (17)

through a homeomorphism h : Y_T → X_T. Let Λ₀ ⊂ C be an arbitrary² set of complex numbers such that at least one has a non-zero real part and Λ₀ = Λ̄₀ (i.e., Λ₀ is closed under complex conjugation). Set Λ = mesh(Λ₀) and let G = {g_i}_{i=1}^{∞} denote an arbitrary set of functions whose span is dense in C(Γ). Then the span of Φ_{Λ,G} is dense in C(X_T), i.e., for every ξ ∈ C(X_T) and every ε > 0 there exist eigenfunctions φ₁, . . . , φ_N ∈ Φ_{Λ,G} and coefficients c_i ∈ C such that

    sup_{x∈X_T} | ξ(x) − ∑_{i=1}^{N} c_i φ_i(x) | < ε.

²In the extreme case, the set Λ₀ can consist of a single non-zero real number or a single conjugate pair with non-zero real part.

Proof: Step 1. First, we observe that it suffices to prove the density of

Φ_Λ = { φ_{λ,g} | λ ∈ Λ, g ∈ C(Γ) }

in C(X_T). To see this, assume Φ_Λ is dense in C(X_T) and consider any function ξ ∈ C(X_T) and ε > 0. Then there exists φ_{λ,g} ∈ Φ_Λ such that

sup_{x∈X_T} |φ_{λ,g}(x) − ξ(x)| < ε.

Since span{G} is dense in C(Γ), there exists a function g̃ ∈ span{G} such that

sup_{x∈Γ} |g(x) − g̃(x)| < ε · min{1, |e^{λT}|}.

In addition, because Eq. (11) defining φ_{λ,g} is linear in g for any fixed λ, it follows that φ_{λ,g̃} ∈ span{Φ_{Λ,G}}. Therefore it suffices to bound the error between ξ and φ_{λ,g̃}. We have

sup_{x∈X_T} |φ_{λ,g̃}(x) − ξ(x)| ≤ sup_{x∈X_T} |φ_{λ,g}(x) − ξ(x)| + sup_{x∈X_T} |φ_{λ,g}(x) − φ_{λ,g̃}(x)|
  ≤ ε + sup_{x∈X_T} | e^{−λτ(x)} g(S_{τ(x)}(x)) − e^{−λτ(x)} g̃(S_{τ(x)}(x)) |
  ≤ ε + sup_{x∈X_T} |e^{−λτ(x)}| · sup_{x∈Γ} |g(x) − g̃(x)|
  ≤ ε + max{1, |e^{−λT}|} · ε · min{1, |e^{λT}|} ≤ 2ε,

where we used the facts that τ(x) ∈ [0, T], S_{τ(x)}(x) ∈ Γ and max{1, |e^{−λT}|} · min{1, |e^{λT}|} = 1.

Step 2. Next we prove that if a set of eigenfunctions of the rectified system (17) is dense in C(Y_T), then the associated conjugate eigenfunctions are dense in C(X_T). Let Φ̂ be a set of eigenfunctions of the Koopman operator associated to (17) and let Φ̂ be dense in C(Y_T). The associated conjugate eigenfunctions of (1) are

Φ = { φ̂ ∘ h^{−1} | φ̂ ∈ Φ̂ }.

Let a function ξ ∈ C(X_T) be given and let ε > 0 be arbitrary. Denote ξ̂ = ξ ∘ h and observe that ξ̂ ∈ C(Y_T). By density of Φ̂ in C(Y_T) there exists φ̂ ∈ Φ̂ such that

‖ξ̂ − φ̂‖_{C(Y_T)} = sup_{y∈Y_T} |ξ̂ − φ̂| < ε.

But since h is bijective, we have

ε > sup_{y∈Y_T} |ξ̂ − φ̂| = sup_{x∈X_T} |ξ̂ ∘ h^{−1} − φ̂ ∘ h^{−1}| = sup_{x∈X_T} |ξ − φ|

with φ ∈ Φ as desired.

Step 3. We shall prove that the set of eigenfunctions

Φ̂_Λ = { φ̂_{λ,ĝ} | λ ∈ Λ, ĝ ∈ C(Γ_Y) }

is dense in C(Y_T). Fix λ ∈ Λ with Re(λ) ≠ 0. Observe that Γ_Y := h^{−1}(Γ) is a non-recurrent set for the rectified dynamics (17). Since Γ_Y is non-recurrent, for each y ∈ Y_T there exists a unique time τ̂(y) ∈ [−T, 0] such that Ŝ_{τ̂(y)}(y) ∈ Γ_Y, where Ŝ_t denotes the flow of (17). By the nature of the dynamics (17), Ŝ_{τ̂(y)}(y) does not depend on y_n, i.e., Ŝ_{τ̂(y)}(y) = γ(y_1, . . . , y_{n−1}) for some continuous function γ : Y_T' → Γ_Y, where Y_T' is the projection of Y_T onto the first n − 1 components of y. With this observation, we have

Γ_Y = { (y_1, . . . , y_n) | y_n = γ_n(y_1, . . . , y_{n−1}) },

where γ_n is the nth component of the vector-valued function γ, and

τ̂(y) = γ_n(y_1, . . . , y_{n−1}) − y_n.

Therefore, given any ĝ ∈ C(Γ_Y), we have

φ̂_{λ,ĝ}(y) = e^{−λτ̂(y)} ĝ(Ŝ_{τ̂(y)}(y)) = e^{λy_n} e^{−λγ_n(y_1,...,y_{n−1})} ĝ(γ(y_1, . . . , y_{n−1})).

Observe also that the function γ is invertible, with inverse γ^{−1}(y) = (y_1, . . . , y_{n−1}) for all y ∈ Γ_Y. Define now

ĝ(y) = [ e^{λγ_n(y_1,...,y_{n−1})} g̃(y_1, . . . , y_{n−1}) ] ∘ γ^{−1}.

With this choice of ĝ, we obtain

φ̂_{λ,ĝ}(y) = e^{λy_n} g̃(y_1, . . . , y_{n−1}).   (18)

Now we distinguish two cases: (i) λ real (and nonzero) and (ii) λ complex (with non-zero real part). For the former case we observe that span{e^{kλy_n}}_{k=0}^∞ is a subalgebra of C(R) which contains the constant function and separates points (since λ ≠ 0). Therefore it follows from the Stone–Weierstrass theorem that this set of functions is dense in C(Y_T''), where Y_T'' is the projection of Y_T on y_n (a compact subset of R). Since g̃ is an arbitrary continuous function and since Φ̂_Λ contains all functions of the form (18) with λ replaced by kλ (because for λ real we have kλ ∈ Λ for all k ∈ N by (15) and the fact that Λ = mesh(Λ_0)), we conclude that Φ̂_Λ is dense in C(Y_T' × Y_T''), because a tensor product of dense subalgebras of continuous functions is dense in the associated product space (a direct consequence of the Stone–Weierstrass theorem). Notice that Y_T ⊂ Y_T' × Y_T'' with Y_T closed. To see the closedness of Y_T, observe that Y_T = h^{−1}(X_T) with h continuous and X_T closed; to see the closedness of X_T, consider a sequence x_n ∈ X_T converging to some x ∈ R^n. Then there exist (t_n, x_n^0) ∈ [0, T] × Γ such that x_n = S_{t_n}(x_n^0); by compactness of [0, T] × Γ, there exists a subsequence (t_{n_i}, x_{n_i}^0) → (t_∞, x_∞^0) ∈ [0, T] × Γ. By continuity of S_t, we have x = S_{t_∞}(x_∞^0) ∈ X_T, proving closedness of X_T. The density of Φ̂_Λ in C(Y_T) then

follows because any function continuous on the closed set Y_T can be continuously extended to Y_T' × Y_T'' (by the Tietze extension theorem).

For λ complex with non-zero real part, the result follows by observing that the set

{ ∑_{k=0}^N c_k e^{kλy_n} + ∑_{k=1}^N c̄_k e^{kλ̄y_n} | c_k ∈ C, N ∈ N }

is a subalgebra of C(R) containing the constant function and separating points, and by proceeding as in the previous case (noticing that kλ ∈ Λ and kλ̄ ∈ Λ since Λ_0 = Λ̄_0 by assumption). Indeed, we have

∑_{k=0}^N c_k e^{kλy_n} + ∑_{k=1}^N c̄_k e^{kλ̄y_n} = ∑_{k=−N}^N c_k e^{kλy_n}

with c_{−k} = c̄_k, from which the subalgebra property is apparent.

Step 4. The proof is finished by observing that the set of eigenfunctions conjugate to Φ̂_Λ is precisely Φ_Λ, which follows from the fact that h is a homeomorphism (i.e., a continuous bijection with a continuous inverse). □

Remark 1 (Rectifiability) It is a well-known fact that for the dynamics (1) to be rectifiable on a certain domain, this domain should not contain singularities (see, e.g., [3, Chapter 2, Corollary 12]). Note, however, that the assumption of rectifiability in Theorem 2 is sufficient but not necessary for the conclusions of the theorem to hold. An analogous result can be proven, for example, in the basin of attraction of a stable equilibrium, where the flow is not rectifiable, and we conjecture that a similar result holds in a neighborhood of more general singularities. The only fundamental obstruction is recurrence, which restricts the set of eigenvalues of the Koopman operator and hence also the richness of the class of its eigenfunctions.

Selection of λ's and g's An interesting question arises regarding the optimal selection of the eigenvalues λ ∈ Λ and boundary functions g ∈ G ensuring that the projection error (8) converges to zero as fast as possible. The optimal selection of either is a difficult and largely open problem; what is clear is that the optimal choice depends both on the dynamics (1) and on the function ξ to be approximated. To see this, consider the linear system ẋ = ax, x ∈ [0, 1], and the non-recurrent set Γ = {1}. In this case any function on Γ is constant, so the set of boundary functions G can be chosen to consist of the constant function equal to one. It then follows from (11) that, given any complex number λ ∈ C, the associated eigenfunction is φ_λ(x) := φ_{λ,1}(x) = x^{λ/a}. Given an observable ξ, the optimal choice of the set of eigenvalues Λ ⊂ C is the one minimizing the projection error (8), which in this case translates to making

min_{(c_λ ∈ C)_{λ∈Λ}} ‖ ξ − ∑_{λ∈Λ} c_λ x^{λ/a} ‖   (19)

as small as possible with the choice of Λ. Clearly, the optimal choice of Λ (in terms of the number of eigenfunctions required to achieve a given projection error) depends on ξ. For example, for ξ = x, the optimal choice is Λ = {a}, leading to a zero projection error with

only one eigenfunction. For other observables, however, the choice Λ = {a} need not be optimal. For example, for ξ = x^b, b ∈ R, the optimal choice leading to zero projection error with only one eigenfunction is Λ = {a·b}. The statement of Theorem 2 then translates to the statement that the projection error (19) is zero for Λ = {k·λ_0 | k ∈ N_0} with λ_0 < 0 and any continuous observable ξ. The price to pay for this level of generality is the asymptotic nature of the result, requiring the cardinality of Λ (and hence the number of eigenfunctions) to go to infinity. From a practical perspective of control and estimation, the number of eigenfunctions required to achieve a given projection error is of secondary importance (see Section 5.1.1). From a theoretical perspective, this interesting question remains open and appears to be related to the Prony method [22].
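The closed form φ_λ(x) = x^{λ/a} makes this example easy to check numerically. A small sketch (the values of a, λ, b and the test point x are arbitrary illustrative choices) verifying the eigenfunction property φ_λ(S_t(x)) = e^{λt} φ_λ(x) and the exact one-eigenfunction representation of ξ = x^b with Λ = {a·b}:

```python
import numpy as np

a = -0.5
flow = lambda t, x: np.exp(a * t) * x   # S_t(x) for x' = a x
phi = lambda lam, x: x ** (lam / a)     # phi_lam = phi_{lam,1} from (11), Gamma = {1}

lam, t, x = -1.5, 0.7, 0.3
lhs = phi(lam, flow(t, x))              # phi_lam(S_t(x))
rhs = np.exp(lam * t) * phi(lam, x)     # e^{lam t} phi_lam(x)
print(np.isclose(lhs, rhs))             # True: Koopman eigenfunction property

b = 2.0                                 # observable xi(x) = x^b
print(np.isclose(phi(a * b, x), x ** b))  # True: Lambda = {a*b} gives zero error
```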

3.3  Generalized eigenfunctions

This section describes how generalized eigenfunctions can be constructed via a simple modification of the proposed method. Importantly for this work, generalized eigenfunctions also give rise to Koopman invariant subspaces and therefore can be readily used for linear prediction. Given a complex number λ and functions g_1, . . . , g_{n_λ} ∈ C(Γ), consider the n_λ × n_λ Jordan block

J_λ = [ λ  1        ]
      [     ⋱   1  ]
      [         λ  ]

and define

( ψ_{λ,g_1}(S_t(x_0)), . . . , ψ_{λ,g_{n_λ}}(S_t(x_0)) )^⊤ = e^{J_λ t} ( g_1(x_0), . . . , g_{n_λ}(x_0) )^⊤   (20)

for all x_0 ∈ Γ, or equivalently

( ψ_{λ,g_1}(x), . . . , ψ_{λ,g_{n_λ}}(x) )^⊤ = e^{−J_λ τ(x)} ( g_1(S_{τ(x)}(x)), . . . , g_{n_λ}(S_{τ(x)}(x)) )^⊤   (21)

for all x ∈ X_T. Define also

ψ = ( ψ_{λ,g_1}, . . . , ψ_{λ,g_{n_λ}} )^⊤.

With this notation, we have the following theorem:

Theorem 3 Let Γ be a non-recurrent set, g_i ∈ C(Γ), i = 1, . . . , n_λ, and λ ∈ C. Then the subspace span{ψ_{λ,g_1}, . . . , ψ_{λ,g_{n_λ}}} ⊂ C(X_T) is invariant under the action of the Koopman semigroup K_t. Moreover,

ψ(S_t(x)) = e^{J_λ t} ψ(x)   (22)

and

(d/dt) ψ(S_t(x)) = J_λ ψ(S_t(x))   (23)

for any x ∈ X_T and any t ∈ [0, T] such that S_{t'}(x) ∈ X_T for all t' ∈ [0, t].

Proof: Let ξ ∈ span{ψ_{λ,g_1}, . . . , ψ_{λ,g_{n_λ}}}. Then ξ = c^⊤ψ for some c ∈ C^{n_λ}. Given x ∈ X_T, we have

ψ(S_t(x)) = e^{−J_λ(τ(x)−t)} ( g_1(S_{τ(x)−t}(S_t(x))), . . . , g_{n_λ}(S_{τ(x)−t}(S_t(x))) )^⊤
          = e^{J_λ t} e^{−J_λ τ(x)} ( g_1(S_{τ(x)}(x)), . . . , g_{n_λ}(S_{τ(x)}(x)) )^⊤ = e^{J_λ t} ψ(x),

which is (22). Therefore K_t ξ = c^⊤ ψ ∘ S_t = c^⊤ e^{J_λ t} ψ ∈ span{ψ_{λ,g_1}, . . . , ψ_{λ,g_{n_λ}}}, as desired. Eq. (23) follows immediately from (22). □
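Eq. (22) can be checked numerically on a scalar linear system ẋ = ax with a 2 × 2 Jordan block, for which e^{J_λ s} = e^{λs} [[1, s], [0, 1]] in closed form. The constants a, λ, the boundary values g and the test point below are arbitrary illustrative choices, not from the paper:

```python
import numpy as np

a, lam = 0.5, -1.0
flow = lambda t, x: np.exp(a * t) * x    # S_t(x) for x' = a x
tau = lambda x: -np.log(x) / a           # time with S_{tau(x)}(x) in Gamma = {1}

def expJ(s):                             # e^{J_lam s} for the 2x2 Jordan block J_lam
    return np.exp(lam * s) * np.array([[1.0, s], [0.0, 1.0]])

g = np.array([2.0, 3.0])                 # boundary values g_1, g_2 on Gamma

def psi(x):                              # generalized eigenfunctions as in Eq. (21)
    return expJ(-tau(x)) @ g

t, x = 0.4, 1.5
print(np.allclose(psi(flow(t, x)), expJ(t) @ psi(x)))  # True: Eq. (22)
```

The check works because tau(S_t(x)) = tau(x) − t and e^{J_λ s_1} e^{J_λ s_2} = e^{J_λ (s_1 + s_2)}, exactly the semigroup argument used in the proof.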

Beyond Jordan blocks The proof of Theorem 3 reveals that there was nothing special about using a Jordan block in (20). Indeed, the entire construction works with an arbitrary matrix A in place of J_λ. However, nothing is gained by using an arbitrary matrix A, since the span of the generalized eigenfunctions constructed using A is identical to that obtained with the Jordan normal form of A, which is just the direct sum of the spans associated to the individual Jordan blocks.

4  Learning eigenfunctions from data

Now we use the construction of Section 3 to learn eigenfunctions from data. In particular, we leverage the freedom in choosing the eigenvalues λ as well as the boundary functions g to learn a rich set of eigenfunctions such that the projection error (8) (and thereby also the prediction error (9)) is minimized. Assume we have available data in the form of M_t distinct, equidistantly sampled trajectories with M_s + 1 samples each, where M_s = T/T_s with T_s the sampling interval (what follows generalizes straightforwardly to non-equidistantly sampled trajectories of unequal length). That is, the data is of the form

D = { (x_k^j)_{k=0}^{M_s} }_{j=1}^{M_t},   (24)

where the superscript indexes the trajectories and the subscript the discrete time within the trajectory, i.e., x_k^j = S_{kT_s}(x_0^j), where x_0^j is the initial condition of the jth trajectory. The non-recurrent set Γ is simply defined as

Γ = { x_0^1, . . . , x_0^{M_t} }.

We also assume that we have chosen a set of complex numbers (i.e., the eigenvalues)

Λ = { λ_1, . . . , λ_{N_Λ} }.

[Figure: illustration of the data setup with M_t = 4 trajectories and M_s = 3 — the samples x_k^j, the non-recurrent set Γ = {x_0^1, x_0^2, x_0^3, x_0^4}, the region X_T covered by the four trajectories, and the data set D = {(x_k^j)_{k=0}^3}_{j=1}^4.]
sha1_base64="vdsb0r5Hl4JiZScOA6Gsc3xX6/s=">AAAD8HicdZPfatRAFManG//UWLW99ia4CF4tiTfqnSDdXShCLW5b2A1lZnKyHXb+hJlJdQkBr731HbwS38dH8C2cbGK7SdOBwMf5fjPznQyHZJwZG4Z/dgbevfsPHu4+8h/v+U+ePtvfOzUq1xRmVHGlzwk2wJmEmWWWw3mmAQvC4YysPlT+2RVow5T8bNcZxAIvJUsZxdaVji/2h+Eo3KzgtogaMUTNujgY/F0kiuYCpKUcGzOPwszGBdaWUQ6lv8gNZJiu8BLmTkoswMTFJmcZvHSVJEiVdp+0waa6vaPAwpi1II4U2F6arlcV+7x5btO3ccFklluQtL4ozXlgVVA1HSRMA7V87QSmmrmsAb3EGlPrfk3rpOSKZaZJ/bWO7bdiXNckfKFKCCyTYnFYFosqlRbFgqTBYVm2fbeJlPMoLjZS6YLwHMpgGPWAugVqSO7gli1urDQYO9EA8g5etPiPrhdpcR8r6gT/G+oByA1ASA9ATA0QxZPqPRXvgU5IeX3ESdc8oo1JMS+Ouu5025123fG2O+66k2134tz28xKlVha7BnzfzUbUnYTbYvZ69G4UfgrRLnqOXqBXKEJv0Hs0RcdohihK0Hf0w/vm/fR+1SM02Glm6QC1lvf7HypvW28= sha1_base64="+tW0lyFQm3FNZ1amjx4NqJCA6/g=">AAAECXicdZPdbtMwFIC9hp9RBnSIO24sKiSuqoQb4A4Jra00IY1pZZWaqrIdt7Xqn8h2thUrz8Ett/AOXCEegkfgLXCbMJosWIp0dL4v9rGPDk45MzYMf+21glu379zdv9e+f/Dg4aPO4cFHozJN6IgorvQYI0M5k3RkmeV0nGqKBOb0HK/ebfj5BdWGKXlm1ymdCrSQbM4Isj416zyJt3s4TZMcxktk4Xh2Nut0w164XfBmEJVBF5TrZHbY+h0nimSCSks4MmYShamdOqQtI5zm7TgzNEVkhRZ04kOJBDVTtz06h899JoFzpf0nLdxmd/9wSBizFtibAtmlqbNNsolNMjt/PXVMppmlkhQHzTMOrYKbt4AJ05RYvvYBIpr5WiFZIo2I9S9W2Sm5YKkpq74qym5XyrjOSXpJlBBIJi4+yl28qUoLF+M5PMrzKvc/4XwSTZ0ru4B5RnPYjRpEXRG37Wr0FhWvrzQ1dqAplf/xRcV/7+8iLWpyRVHB3ws1CPifgHGDgE0hYMWTTT8Vb5BOcX69xWkdHpMSEsTdcZ0Od+mwTvu7tF+ng1068LTaXqzUyiJ/gXbbD0dUH4Wbwehl700v/BCCffAUPAMvQARegbdgCE7ACBDwCXwBX8G34HPwPfhRTFFrrxynx6Cygp9/AEqLZNQ= 
sha1_base64="mFd5IHeDPuxzlqXNUM5WNymKoZc=">AAAEFHicdZPLbhMxFEDdDI8SXimwY2MRIbGKZrop7CpVTSJVSKVqaKRMFNkeJ7Hix8j2tARrvoMtW/gHVogtez6Bv8BJhpKZDpYsXd1zbF/7yjjlzNgw/LXTCG7dvnN3917z/oOHjx639p68NyrThA6I4koPMTKUM0kHlllOh6mmSGBOL/DiaMUvLqk2TMlzu0zpWKCZZFNGkPWpSetZvN7DaZrkMJ4jC4eT80mrHXbC9YA3g6gI2qAYp5O9xu84USQTVFrCkTGjKEzt2CFtGeE0b8aZoSkiCzSjIx9KJKgZu/XROXzpMwmcKu2ntHCd3V7hkDBmKbA3BbJzU2WrZB0bZXb6euyYTDNLJdkcNM04tAqu3gImTFNi+dIHiGjma4VkjjQi1r9YaafkkqWmqPrDpuxmqYzrnKRXRAmBZOLi49zFq6q0cDGewuM8L3O/COejaOxc0QXMM5rDdlQj6pK4bletNyt5XaWpsT1NqfyPL0r+W38XaVGdKzYV/L1QjYD/CRjXCNhsBKx4suqn4jXSGc6vtzirwhNSQIK4O6nS/jbtV2l3m3artLdNe56W24uVWljkL9Bs+s8RVb/CzWCw33nTCd+F7cP94pfsgufgBXgFInAADkEfnIIBIOAj+Ay+gK/Bp+Bb8D34sVEbO8Wap6A0gp9/ALb6ZkE=

ˆT X

Figure 2: Illustration of the non-recurrent set Γ and the non-recurrent surface Γ̂. Note that the non-recurrent surface Γ̂ does not need to be known explicitly for learning the eigenfunctions. Only sampled trajectories D with initial conditions belonging to distinct trajectories are required (see Lemma 1). Note also that even though the existence of the non-recurrent surface is assured by Lemma 1, this surface can be highly irregular (e.g., oscillatory) depending on the interplay between the dynamics and the locations of the initial conditions.

as well as a set of continuous basis functions (or a “dictionary”) G = {g_1, …, g_{N_G}} defining the values of the eigenfunctions on the non-recurrent set Γ. The functions in G will be referred to as boundary functions. Now we can construct N_Λ · N_G eigenfunctions using the developments of Section 3. Given any λ ∈ Λ and g ∈ G and setting φ_{λ,g}(x_0^j) := g(x_0^j), j = 1, …, M_t, Eq. (10) uniquely defines the values of φ_{λ,g} on the entire data set. Specifically, we have

    φ_{λ,g}(x_k^j) = e^{λ k T_s} g(x_0^j)    (25)

for all k ∈ {0, …, M_s} and all j ∈ {1, …, M_t}. According to Lemma 1 (provided its assumptions hold), there exists an entire non-recurrent surface Γ̂ passing through the initial conditions of the trajectories in the data set D. Even though this surface is unknown to us, its existence implies that the eigenfunctions computed through (25) on D are in fact samples of continuous eigenfunctions defined on

    X̂_T = ⋃_{t ∈ [0,T]} S_t(Γ̂);    (26)

see Figure 2 for an illustration. As a result, the eigenfunctions φ_{λ,g} can be learned on the entire set X̂_T (or possibly an even larger region) via interpolation or approximation. Specifically, given a set of basis functions β = [β_1, …, β_{N_β}]^⊤ with β_i ∈ C(X), we can solve the interpolation problems

    minimize_{c ∈ ℂ^{N_β}}   δ_1 ‖c‖_1 + ‖c‖_2²
    subject to   c^⊤ β(x_k^j) = φ_{λ,g}(x_k^j),   k ∈ {0, …, M_s},  j ∈ {1, …, M_t}    (27)

for each λ ∈ Λ and each g ∈ G. Alternatively, we can solve the approximation problems

    minimize_{c ∈ ℂ^{N_β}}   Σ_{k=0}^{M_s} Σ_{j=1}^{M_t} | c^⊤ β(x_k^j) − φ_{λ,g}(x_k^j) |² + δ_1 ‖c‖_1 + δ_2 ‖c‖_2²    (28)

for each λ ∈ Λ and each g ∈ G. In both problems the classical ℓ_1 and ℓ_2 regularizations are optional, for promoting sparsity of the resulting approximation and preventing overfitting; the numbers δ_1 ≥ 0, δ_2 ≥ 0 are the corresponding regularization parameters. The resulting approximation to the eigenfunction φ_{λ,g}, denoted by φ̂_{λ,g}, is given by

    φ̂_{λ,g}(x) = c_{λ,g}^⊤ β(x),    (29)

where c_{λ,g} is the solution to (27) or (28) for a given λ ∈ Λ and g ∈ G. Note that the approximation φ̂_{λ,g}(x) is defined on the entire state space X; if the interpolation method (27) is used then the approximation is exact on the data set D, and one expects it to be accurate on X_T and possibly also on X̂_T, provided that the non-recurrent surface Γ̂ (if it exists) and the functions in G give rise to eigenfunctions well approximable (or learnable) by functions from the set of basis functions β. The eigenfunction learning procedure is summarized in Algorithm 1.

Algorithm 1 Eigenfunction learning
Require: Data D = {(x_k^j)_{k=0}^{M_s}}_{j=1}^{M_t}, boundary functions G = {g_1, …, g_{N_G}}, complex numbers Λ = {λ_1, …, λ_{N_Λ}}, basis functions β = [β_1, …, β_{N_β}]^⊤, sampling time T_s.
1: for g ∈ G, λ ∈ Λ do
2:   for j ∈ {1, …, M_t}, k ∈ {0, …, M_s} do
3:     φ_{λ,g}(x_k^j) := e^{λ k T_s} g(x_0^j)
4:   Solve (27) or (28) to get c_{λ,g}
5:   Set φ̂_{λ,g} := c_{λ,g}^⊤ β
Output: {φ̂_{λ,g}}_{λ∈Λ, g∈G}
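The loop structure of Algorithm 1 is simple enough to sketch numerically. The following is a minimal NumPy sketch, assuming trajectory data stored as an array of shape (Mt, Ms+1, n), and using the ridge-regularized least-squares form of (28) without the ℓ1 term (so that the explicit solution discussed in Section 4.3 applies). The function and variable names are illustrative, not from the paper.

```python
import numpy as np

def learn_eigenfunctions(X, lambdas, boundary_fns, basis, Ts, delta2=1e-8):
    """Sketch of Algorithm 1 (ridge-regularized form of (28), no l1 term).

    X            : array (Mt, Ms+1, n) -- Mt trajectories, Ms+1 samples each
    lambdas      : list of (complex) eigenvalues Lambda
    boundary_fns : list of callables g : R^n -> C (boundary functions G)
    basis        : callable x -> R^{Nbeta} (the dictionary beta)
    Returns a dict mapping (lambda index, g index) -> callable phi_hat.
    """
    Mt, Ms1, n = X.shape
    # Rows of the regression matrix W: beta(x_k^j) for all samples
    W = np.array([basis(x) for traj in X for x in traj])        # (Mt*Ms1, Nbeta)
    Nbeta = W.shape[1]
    WtW = W.conj().T @ W
    eigfuns = {}
    for li, lam in enumerate(lambdas):
        # e^{lambda k Ts} factors along each trajectory, cf. Eq. (25)
        growth = np.exp(lam * Ts * np.arange(Ms1))              # (Ms1,)
        for gi, g in enumerate(boundary_fns):
            g0 = np.array([g(traj[0]) for traj in X])           # g(x_0^j)
            w = (g0[:, None] * growth[None, :]).ravel()         # phi_{lam,g} samples
            c = np.linalg.solve(WtW + delta2 * np.eye(Nbeta), W.conj().T @ w)
            eigfuns[(li, gi)] = (lambda x, c=c: c @ basis(x))   # cf. Eq. (29)
    return eigfuns
```

For a linear test system the construction is exact: with the identity dictionary and linear boundary functions, the learned eigenfunctions reduce to linear forms in the state.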

Choosing initial conditions As long as the initial conditions in Γ lie on distinct trajectories that are non-periodic over the simulated time interval, the set Γ is non-recurrent as required by our approach. This is achieved with probability one if, for example, the initial conditions are sampled uniformly at random over X (assuming the cardinality of X is infinite) and the dynamics is non-periodic or the simulation time is chosen such that the trajectories are non-periodic over the simulated time interval. In practice, one will typically choose the initial conditions such that the trajectories sufficiently cover a subset of the state space of interest (e.g., the safe region of operation of a vehicle). In addition, it is advantageous (but not necessary) to sample the initial conditions from a sufficiently regular surface (e.g., a ball or ellipsoid) approximating a non-recurrent surface in order to ensure that the resulting eigenfunctions are well behaved (e.g., in terms of the Lipschitz constant) and hence easily interpolable / approximable.


Generalized eigenfunctions from data  Algorithm 1 can be readily extended to the case of generalized eigenfunctions as described in Section 3.3. Step 3 of this algorithm is replaced by

    [ ψ_{λ,g_1}(x_k^j) ; … ; ψ_{λ,g_{n_λ}}(x_k^j) ] = e^{J_λ k T_s} [ g_1(x_0^j) ; … ; g_{n_λ}(x_0^j) ],

where, as in Section 3.3, J_λ is a Jordan block of size n_λ associated to an eigenvalue λ and g_1, …, g_{n_λ} are continuous boundary functions. Step 4 of Algorithm 1 (interpolation / approximation) is then performed on each ψ_{λ,g_i} separately. Note that with a Jordan block of size one, the entire procedure reduces to the case of eigenfunctions.

We note that there are no restrictions on the pairings of Jordan blocks (of arbitrary size) and continuous boundary functions, thereby providing additional freedom for constructing very rich invariant subspaces of the Koopman operator.
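The modified Step 3 only requires the matrix exponential of a Jordan block, which has the closed form e^{J_λ t} = e^{λt} Σ_i (t^i/i!) N^i with N the nilpotent shift. A NumPy-only sketch (illustrative names, assuming the boundary-function values at x_0^j are given as a vector):

```python
import numpy as np
from math import factorial

def jordan_expm(lam, n_lambda, t):
    """e^{J t} for the Jordan block J = lam*I + N, via the closed form
    e^{lam t} * sum_i (t^i / i!) N^i (the sum terminates at i = n_lambda - 1)."""
    N = np.diag(np.ones(n_lambda - 1), k=1) if n_lambda > 1 else np.zeros((1, 1))
    out = np.zeros((n_lambda, n_lambda), dtype=complex)
    Npow = np.eye(n_lambda)
    for i in range(n_lambda):
        out += (t ** i / factorial(i)) * Npow
        Npow = Npow @ N
    return np.exp(lam * t) * out

def generalized_samples(g_at_x0, lam, Ts, Ms):
    """Modified Step 3 of Algorithm 1 along one trajectory: row k holds
    [psi_{lam,g_1}(x_k^j), ..., psi_{lam,g_{n_lam}}(x_k^j)]."""
    n_lambda = len(g_at_x0)
    return np.stack([jordan_expm(lam, n_lambda, k * Ts) @ g_at_x0
                     for k in range(Ms + 1)])
```

With n_lambda = 1 this reduces exactly to the eigenfunction sampling e^{λkT_s} g(x_0^j) of Eq. (25).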

4.1  Obtaining matrices A and C

Let

    φ̂ = [φ̂_1, …, φ̂_N]^⊤

denote a concatenation (in an arbitrary order) of the N = N_Λ · N_G eigenfunction approximations corresponding to the eigenvalues Λ and boundary functions G. With this notation, the matrix C (6) is recovered by (approximately) solving (8) with φ replaced by φ̂. This problem is typically not solvable analytically in high dimensions, since it requires a multivariate integration or uniform bounding of a continuous function (depending on the norm used in (8)). Therefore, we directly present a sample-based approximation. If the L² norm is used in (8), we solve the optimization problem

    minimize_{C ∈ ℂ^{n_ξ × N}}   Σ_{i=1}^{M̄} ‖ ξ(x̄_i) − C φ̂(x̄_i) ‖_2².    (30)

For the sup-norm, we solve

    minimize_{C ∈ ℂ^{n_ξ × N}}   max_{i ∈ {1,…,M̄}} ‖ ξ(x̄_i) − C φ̂(x̄_i) ‖_∞.    (31)

The samples {x̄_i}_{i=1}^{M̄} can either coincide with the samples {x_k^j}_{j,k} used for learning of the eigenfunctions or they can be generated anew (e.g., to emphasize certain regions of the state space where accurate projection (and hence prediction) is required). See Section 4.3 for a discussion of computational aspects of solving these two problems.

The matrix A is diagonal as in Eq. (7), with the same ordering of the eigenvalues as that of φ̂. Note that for each λ ∈ Λ there are N_G approximate eigenfunctions in φ̂; therefore, each eigenvalue in Λ appears N_G times on the diagonal of A.

4.2  Exploiting algebraic structure

It follows immediately from the definition of the Koopman operator eigenfunction (4) that products and powers of eigenfunctions are also eigenfunctions. In particular, given eigenfunctions φ_1, …, φ_{N_0} with the associated eigenvalues λ_1, …, λ_{N_0}, the function

    φ = φ_1^{p_1} · … · φ_{N_0}^{p_{N_0}}    (32)

is a Koopman eigenfunction with the associated eigenvalue λ = p_1 λ_1 + … + p_{N_0} λ_{N_0}. This holds for any nonnegative real or integer powers p_1, …, p_{N_0}. This algebraic structure can be exploited to generate additional eigenfunction approximations from those obtained using Algorithm 1, at very little additional computational cost. In particular, one can construct only a handful of eigenfunction approximations using Algorithm 1, e.g., with Λ being a single real eigenvalue or a single complex conjugate pair and the set G consisting of the linear coordinate functions x_i, i = 1, …, n. This initial set of eigenfunctions can then be used to generate a very large number of additional eigenfunction approximations using (32), in order to ensure that the projection error (8) is small. When queried at a previously unseen state (e.g., during feedback control), only the eigenfunction approximations φ̂_1, …, φ̂_{N_0} have to be computed using interpolation or approximation (which can be costly if the number of basis functions β is large in step 5 of Algorithm 1), whereas the remaining eigenfunction approximations are obtained by simply taking powers and products according to (32).

4.3  Computational aspects

The main computational burden of the proposed method is the solution of the interpolation or approximation problems (27) and (28). Both are convex optimization problems that can be reliably solved using generic packages such as MOSEK or Gurobi. For very large problem instances, specialized packages for ℓ_1 / ℓ_2 regularized least-squares problems may need to be deployed (see, e.g., [34, 21]). We note that for each pair (λ, g) the coefficients β(x_k^j) remain the same, which can be exploited to drastically speed up the solution. For problems without ℓ_1 regularization, we have an explicit solution: c_{λ,g} = W†w for (27) and c_{λ,g} = (W^⊤W + δ_2 I)^{−1} W^⊤ w for (28), where

    W = [ β(x_0^1) … β(x_{M_s}^1)  β(x_0^2) … β(x_{M_s}^2)  …  β(x_0^{M_t}) … β(x_{M_s}^{M_t}) ]^⊤

and

    w = [ φ_{λ,g}(x_0^1) … φ_{λ,g}(x_{M_s}^1)  φ_{λ,g}(x_0^2) … φ_{λ,g}(x_{M_s}^2)  …  φ_{λ,g}(x_0^{M_t}) … φ_{λ,g}(x_{M_s}^{M_t}) ]^⊤.

The projection problems (30) and (31) are both convex optimization problems that can be easily solved using generic convex optimization packages (e.g., MOSEK or Gurobi). The use of such tools is necessary for the sup-norm projection problem (31). However, for the least-squares projection problem (30), linear algebra suffices, with the analytical solution being

    C = [ ξ(x̄_1), …, ξ(x̄_{M̄}) ] [ φ̂(x̄_1), …, φ̂(x̄_{M̄}) ]†.
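The two explicit solutions above are a pseudoinverse and a least-squares solve; a minimal NumPy sketch (illustrative function names and data shapes):

```python
import numpy as np

def solve_coefficients(W, w):
    """Minimum-norm interpolation solution c = W^+ w of (27), no l1 term.

    W : array (Mt*(Ms+1), Nbeta) -- rows beta(x_k^j)
    w : array (Mt*(Ms+1),)       -- eigenfunction samples phi_{lam,g}(x_k^j)
    """
    return np.linalg.pinv(W) @ w

def solve_C(Xi, Phi):
    """Analytical least-squares solution of (30):
    C = [xi(xbar_1) ... xi(xbar_M)] [phi_hat(xbar_1) ... phi_hat(xbar_M)]^+.

    Xi  : array (n_xi, Mbar) -- columns xi(xbar_i)
    Phi : array (N, Mbar)    -- columns phi_hat(xbar_i)
    """
    return Xi @ np.linalg.pinv(Phi)
```

When the data are consistent (the outputs lie exactly in the span of the lifted samples), both solves recover the underlying linear map exactly.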

5  Linear predictors for controlled systems

In this section we describe how to build linear predictors for controlled systems. Assume a nonlinear controlled system of the form

    ẋ = f_c(x, u)    (33)

with the state x ∈ X ⊂ ℝⁿ and control input u ∈ U ⊂ ℝᵐ. As in [11], the goal is to construct a predictor in the form of a controlled linear dynamical system

    ż = Az + Bu,    (34a)
    z_0 = φ̂(x_0),    (34b)
    ŷ = Cz.    (34c)

Whereas [11] uses a one-step procedure (essentially a generalization of the extended dynamic mode decomposition (EDMD) to controlled systems), here we follow a two-step procedure: we first construct eigenfunctions for the uncontrolled system (1) with f(x) := f_c(x, ū), where ū ∈ U is a fixed value which we assume without loss of generality to be zero (one can always shift the origin of U so that this holds). We assume that we have two data sets available. The first one is an uncontrolled data set D with the same structure as in Section 4. The second data set, D_c, is with control, in the form of M_{t,c} equidistantly sampled trajectories with M_{s,c} + 1 samples each, i.e.,

    D_c = { ( (x_k^j)_{k=0}^{M_{s,c}}, (u_k^j)_{k=0}^{M_{s,c}−1} ) }_{j=1}^{M_{t,c}},    (35)

where x_{k+1}^j = S_{T_s}(x_k^j, u_k^j) and S_t(x, u) denotes the solution to (33) at time t starting from x, with the control input held constant and equal to u on [0, t]. We note that both the number of trajectories and the trajectory length may differ for the controlled and uncontrolled data sets.

Step 1 – φ̂, A, C  In the first step of the procedure we construct approximate eigenfunctions φ̂ of (1) (with f(x) = f_c(x, 0)) using the procedure described in Section 4, obtaining also the matrices A and C.

Step 2 – matrix B  In order to obtain the matrix B we perform a regression on the controlled data set (35). The quantity to be minimized is a multi-step prediction error. Crucially, this multi-step error can be minimized in a convex fashion; this is due to the fact that the matrices A and C are already known and fixed at this step and the predicted output ŷ of (34) depends affinely on B. This is in stark contrast to EDMD-type methods, where only the one-step-ahead prediction error can be minimized in a convex fashion. In order to keep the expressions simple we assume that the time interval over which we want to minimize the prediction error coincides with the length of the trajectories in our data set (everything generalizes straightforwardly to shorter prediction times). The problem to be solved therefore is

    minimize_{B_d ∈ ℝ^{N×m}}   Σ_{j=1}^{M_{t,c}} Σ_{k=1}^{M_{s,c}} ‖ ξ(x_k^j) − ŷ_k(x_0^j) ‖_2²,    (36)

where

    ŷ_k(x_0^j) = C A_d^k z_0^j + Σ_{i=0}^{k−1} C A_d^{k−i−1} B_d u_i^j

is the output ŷ of (34) at time kT_s starting from the (known) initial condition z_0^j = φ̂(x_0^j). The discretized matrices A_d (known) and B_d (to be determined) are related to A and B by

    A_d = e^{A T_s},   B_d = ( ∫_0^{T_s} e^{−As} ds ) B.    (37)

We note that in the above expression the matrix multiplying B is invertible for any T_s > 0 and therefore B can be uniquely recovered from the knowledge of B_d. Using vectorization, the output ŷ_k(x_0^j) can be rewritten as

    ŷ_k(x_0^j) = C A_d^k z_0^j + Σ_{i=0}^{k−1} ( (u_i^j)^⊤ ⊗ (C A_d^{k−i−1}) ) vec(B_d),    (38)

where vec(·) denotes the (column-major) vectorization of a matrix and ⊗ the Kronecker product. Since A_d, C, z_0^j and ξ(x_k^j) are all known, plugging (38) into the least-squares problem (36) leads to the minimization problem

    minimize_{b ∈ ℝ^{mN}}   ‖ Θb − θ ‖_2²,    (39)

where

    Θ = [ Θ_1^⊤  Θ_2^⊤  …  Θ_{M_t}^⊤ ]^⊤,   θ = [ θ_1^⊤  θ_2^⊤  …  θ_{M_t}^⊤ ]^⊤

with

    Θ_j = [ (u_0^j)^⊤ ⊗ C ;
            (u_1^j)^⊤ ⊗ C + (u_0^j)^⊤ ⊗ (C A_d) ;
            … ;
            Σ_{i=0}^{M_s−1} (u_i^j)^⊤ ⊗ (C A_d^{M_s−i−1}) ],

    θ_j = [ ξ(x_1^j) − C A_d z_0^j ;
            ξ(x_2^j) − C A_d² z_0^j ;
            … ;
            ξ(x_{M_s}^j) − C A_d^{M_s} z_0^j ].

The matrix B_d is then given by

    B_d = vec^{−1}(Θ†θ),    (40)

where Θ†θ is an optimal solution to (39). Since A = diag(λ_1, …, λ_N), the matrix B is obtained as

    B = ( ∫_0^{T_s} e^{−As} ds )^{−1} B_d = diag( λ_1 / (1 − e^{−λ_1 T_s}), …, λ_N / (1 − e^{−λ_N T_s}) ) B_d.    (41)
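The assembly of Θ and θ and the recovery of B_d via (39)-(40) can be sketched directly in NumPy. The sketch below uses column-major vectorization (NumPy order 'F') to match the Kronecker identity in (38); array shapes and names are illustrative.

```python
import numpy as np

def fit_Bd(Ad, C, Z0, U, Y):
    """Multi-step regression for B_d, cf. Eqs. (38)-(40).

    Ad : (N, N)          discrete state matrix (known)
    C  : (n_xi, N)       output matrix (known)
    Z0 : (Mt, N)         lifted initial conditions z_0^j = phi_hat(x_0^j)
    U  : (Mt, Ms, m)     inputs u_i^j
    Y  : (Mt, Ms, n_xi)  outputs xi(x_k^j) for k = 1..Ms
    """
    Mt, Ms, m = U.shape
    n_xi, N = C.shape
    Theta_rows, theta_rows = [], []
    for j in range(Mt):
        for k in range(1, Ms + 1):
            # Row block: sum_i (u_i^j)^T kron (C Ad^{k-i-1}), cf. Eq. (38)
            blk = sum(np.kron(U[j, i], C @ np.linalg.matrix_power(Ad, k - i - 1))
                      for i in range(k))
            Theta_rows.append(blk)
            # Right-hand side: xi(x_k^j) - C Ad^k z_0^j
            theta_rows.append(Y[j, k - 1] - C @ np.linalg.matrix_power(Ad, k) @ Z0[j])
    Theta = np.vstack(Theta_rows)
    theta = np.concatenate(theta_rows)
    b, *_ = np.linalg.lstsq(Theta, theta, rcond=None)
    return b.reshape((N, m), order="F")   # column-major vec^{-1}, Eq. (40)
```

On data generated exactly by z_{k+1} = A_d z_k + B_d u_k, y_k = C z_k, the regression recovers B_d exactly (the least-squares residual is zero).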

5.1  Koopman model predictive control

In this section we briefly describe how the linear predictor (34) can be used within a linear model predictive control (MPC) scheme to control nonlinear dynamical systems. This method was originally developed in [11] and this section closely follows that work; the reader is referred therein for additional details, as well as to [14, 1] for applications in power grid and fluid flow control.

An MPC controller solves at each step of a closed-loop operation an optimization problem where a given cost function is minimized over a finite prediction horizon with respect to the predicted control inputs and predicted outputs of the dynamical system. For nonlinear systems, this is almost always a nonconvex optimization problem due to the equality constraint in the form of the nonlinear dynamics. In the Koopman MPC framework, on the other hand, we solve the convex quadratic optimization problem (QP)

    minimize_{u_i, z_i, ŷ_i}   ŷ_{N_p}^⊤ Q_{N_p} ŷ_{N_p} + q_{N_p}^⊤ ŷ_{N_p} + Σ_{i=0}^{N_p−1} ( ŷ_i^⊤ Q_i ŷ_i + u_i^⊤ R_i u_i + q_i^⊤ ŷ_i + r_i^⊤ u_i )
    subject to   z_{i+1} = A_d z_i + B_d u_i,   i = 0, …, N_p − 1
                 E_i ŷ_i + F_i u_i ≤ b_i,   i = 0, …, N_p − 1
                 E_{N_p} ŷ_{N_p} ≤ b_{N_p}
                 ŷ_i = C z_i
    parameter    z_0 = φ̂(x_current),    (42)

where the cost matrices Q_i ∈ ℝ^{n_ξ × n_ξ} and R_i ∈ ℝ^{m×m} are positive semidefinite and N_p is the prediction horizon. The optimization problem is parametrized by x_current ∈ ℝⁿ, which is the current state measured during the closed-loop operation. The control input applied to the system is the first element of the control sequence optimal in (42). Notice that in (42) we use directly the discretized predictor matrices A_d and B_d, where A_d = diag(e^{λ_1 T_s}, …, e^{λ_N T_s}) and B_d is given by (40), with T_s being the sampling interval. See Algorithm 2 for a summary of the Koopman MPC in this sampled-data setting.

Handling nonlinearities  Crucially, all nonlinearities in x are subsumed in the output mapping ξ and therefore predicted in a linear fashion through (34) (or its discretized equivalent). For example, assume we wish to minimize the predicted cost

    J_nonlin = J_quad + l_{N_p}(x_{N_p}) + Σ_{i=0}^{N_p−1} l(x_i),    (43)

subject to the stage and terminal constraints

    c(x_i) + D u ≤ 0,   i = 0, …, N_p − 1,    (44a)
    c_{N_p}(x_i) ≤ 0,   i = 0, …, N_p − 1,    (44b)

where

    J_quad = x_{N_p}^⊤ Q_{N_p} x_{N_p} + q_{N_p}^⊤ x_{N_p} + Σ_{i=0}^{N_p−1} ( x_i^⊤ Q x_i + u_i^⊤ R u_i + q^⊤ x_i + r^⊤ u_i )

is convex quadratic and c : ℝⁿ → ℝ^{n_c}, c_{N_p} : ℝⁿ → ℝ^{n_{c_p}}, l : ℝⁿ → ℝ and l_{N_p} : ℝⁿ → ℝ are nonlinear functions. The mapping ξ is then set to

    ξ(x) = [ x ; l(x) ; l_{N_p}(x) ; c(x) ; c_{N_p}(x) ].

Replacing ξ by ŷ in (43), the objective function J_nonlin translates to a convex quadratic in (u_0, …, u_{N_p−1}) and (ŷ_0, …, ŷ_{N_p}); similarly, the stage and terminal constraints (44) translate to affine (and hence convex) inequality constraints on (u_0, …, u_{N_p−1}) and (ŷ_0, …, ŷ_{N_p}). Note that polytopic constraints on the control inputs can be encoded by selecting certain components of the vector function c(x_i) equal to constant functions. For example, box constraints on u of the form u ∈ [u_min, u_max] with u_min ∈ ℝᵐ and u_max ∈ ℝᵐ are encoded by selecting

    c(x) = [ −u_max ; u_min ; c̃(x) ],   D = [ I ; −I ; D̃ ],

where c̃(·) and D̃ model additional state-input constraints.

No free lunch  At this stage it should be emphasized that since ŷ is only an approximation of the true output ξ, the convex QP (42) is only an approximation of the nonlinear MPC problem with (43) as objective and (33) and (44) as constraints. This is unavoidable at this level of generality, since such nonlinear MPC problems are typically NP-hard whereas the convex QP (42) is solvable in polynomial time. Nevertheless, as long as the prediction ŷ is accurate, we also expect the solution of the linear MPC problem (42) to be close to the optimal solution of the nonlinear MPC problem, thereby resulting in near-optimal closed-loop performance.

5.1.1  Dense form Koopman MPC

Importantly for real-world deployment, in problem (42) the possibly high-dimensional variables z_i and ŷ_i can be solved for in terms of the variable u := [u_0^⊤, …, u_{N_p−1}^⊤]^⊤,

obtaining

    minimize_{u ∈ ℝ^{mN_p}}   u^⊤ H_1 u + h^⊤ u + z_0^⊤ H_2 u
    subject to   L u + M z_0 ≤ d
    parameter    z_0 = φ̂(x_current)    (45)

for some matrices H_1, H_2, L, M and vectors h, d (explicit expressions in terms of the data of (42) are given in the Appendix). Notice that once the product z_0^⊤ H_2 is evaluated, the cost of solving the optimization problem (45) is independent of the number of eigenfunctions N used. This is essential for practical applications, since N can be large in order to ensure a small prediction error (9). The optimization problem (45) is a convex QP that can be solved by any of the generic packages for convex optimization (e.g., MOSEK or Gurobi), but also using highly tailored tools exploiting the specifics of the MPC formulation. In this work, we relied on the qpOASES package [6], which uses a homotopy-based active-set method that is particularly suitable for dense-form MPC problems and effectively utilizes warm starting to reduce the closed-loop computation time.

The closed-loop operation of the Koopman MPC is summarized in Algorithm 2. Here we assume sampled-data operation, where the control input is computed every T_s seconds and held constant between the sampling times. We note, however, that the mapping x_current ↦ u*_0, where u*_0 is the first component of the optimal solution u* = [u*_0, …, u*_{N_p−1}]^⊤ to the problem (45), defines a feedback controller that can be evaluated at an arbitrary state x ∈ ℝⁿ at an arbitrary time.

Algorithm 2 Koopman MPC – closed-loop operation
1: for k = 0, 1, … do
2:   Set x_current = x(kT_s) (current state of (33))
3:   Compute z_0 = φ̂(x_current)
4:   Solve (45) to get an optimal solution u* = [u*_0, …, u*_{N_p−1}]^⊤
5:   Apply u*_0 to the system (33) for t ∈ [kT_s, (k + 1)T_s)

6  Numerical examples

In the numerical examples we investigate the performance of the predictors on the Van der Pol oscillator and the damped Duffing oscillator. The two dynamical systems exhibit very different behavior: the former has a stable limit cycle, whereas the latter has two stable equilibria and an unstable equilibrium. Interestingly, but in line with the theory, we observe very good performance of the predictors constructed for both systems, away from the limit cycle and singularities. On the Duffing system, we also investigate feedback control using the Koopman MPC, managing both the transition between the two stable equilibria as well as the stabilization of the unstable one, in a purely data-driven and convex-optimization-based fashion.

6.1  Van der Pol oscillator

Figure 3: Van der Pol oscillator – Prediction for (N_Λ, N_G) = (10, 20), a randomly chosen initial condition and square wave forcing.

[Figure 4 panels: left – mean error 7.7 %, standard deviation 8.6 %; right – mean error 12.3 %, standard deviation 4.1 %.]

Figure 4: Van der Pol oscillator – Spatial distribution of the prediction error (controlled). The trajectories used for learning of the eigenfunctions are depicted in grey. The error for each of the 500 initial conditions from the test set is encoded by the size of the blue marker. The initial conditions of the trajectories were sampled from a circle of radius 0.2 (left pane) and 0.05 (right pane), both depicted in dashed black; neither circle is a non-recurrent surface for the dynamics (which is not required by the method).

In the first example, we consider the classical Van der Pol oscillator with forcing

    ẋ_1 = 2x_2,
    ẋ_2 = −0.8x_1 + 2x_2 − 10x_1²x_2 + u.

We investigate the performance of the proposed predictors, both in the controlled and uncontrolled (i.e., u = 0) settings. First, we construct the eigenfunction approximations as described in Section 4. We generate a set of M_t = 100 three-second-long trajectories sampled with a sampling period T_s = 0.01 s (i.e., M_s = 300). The initial conditions of the trajectories are sampled uniformly over a circle of radius 0.2. For the set G of boundary functions on

Table 1: Van der Pol oscillator – Prediction error. The total number of eigenfunctions for each combination (N_Λ, N_G) is N = N_Λ · N_G.

(N_Λ, N_G)                  (10, 20)   (6, 20)   (10, 10)   (10, 5)   (10, 3)
Mean RMSE [uncontrolled]      5.0 %     12.1 %     9.6 %     24.9 %    61.5 %
Mean RMSE [controlled]        7.7 %     13.2 %    12.2 %     28.4 %    60.1 %

the non-recurrent set Γ we chose the thin-plate spline radial basis functions³ with centers selected randomly from a uniform distribution over [−1, 1]². For the set of eigenvalues Λ, we chose Λ = mesh_{d_λ}((1/T_s) log Λ_DMD), where Λ_DMD are the two eigenvalues obtained by applying the dynamic mode decomposition algorithm to the data set and

    mesh_{d_λ}(Λ) = { Σ_{k=1}^p α_k λ_k  |  λ_k ∈ Λ, α_k ∈ ℕ, p ∈ ℕ, Σ_{k=1}^p α_k ≤ d_λ }.    (46)
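The set mesh_{d_λ}(Λ) collects all integer conic combinations of the base eigenvalues with total multiplicity at most d_λ; enumerating it is a small combinatorial loop. A sketch (hypothetical helper; the empty combination is included so that the count matches the binomial formula for N_Λ below, and duplicates are merged by rounding):

```python
from itertools import combinations_with_replacement

def mesh(base_lams, d_lambda):
    """mesh_{d_lambda}(Lambda): all sums sum_k alpha_k * lambda_k with
    alpha_k in N and sum_k alpha_k <= d_lambda, cf. Eq. (46), deduplicated."""
    out = set()
    for p in range(0, d_lambda + 1):       # p = 0 gives the zero eigenvalue
        for combo in combinations_with_replacement(base_lams, p):
            s = sum(combo)
            out.add(complex(round(s.real, 12), round(s.imag, 12)))
    return sorted(out, key=lambda z: (z.real, z.imag))
```

When the sums are all distinct, the cardinality equals binom(|Λ| + d_λ, d_λ), the count used for N_Λ in the text; collisions (repeated sums) reduce it.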

The number of eigenvalues obtained in this way is

    N_Λ = binom(|Λ_DMD| + d_λ, d_λ),

where |Λ_DMD| = 2 is the cardinality of the set Λ_DMD. The N = N_Λ · N_G eigenfunctions are computed on the data set using (25), and linear interpolation is used to define them on the entire state space. The function ξ was chosen to be the identity map (i.e., ξ(x) = x); therefore the output ŷ of (6) and (34) predicts the state of the system. The matrix C is computed using (30) with x̄_i being the data used to construct the eigenfunctions plus random noise uniformly distributed over [−0.05, 0.05]². This fully defines the linear predictor (6) in the uncontrolled setting. To get the B matrix in the controlled setting we generate a second data set with forcing. The initial conditions are the same as in the uncontrolled setting; the forcing is a piecewise constant signal taking a random uniformly distributed value in [−1, 1] in each sampling interval; the length of each trajectory is two seconds. The matrix B and its discrete counterpart B_d are then obtained using (40) and (41).

Table 1 reports the prediction error over a one-second time interval as a function of N_Λ and N_G. The error is reported as the root mean square error (RMSE)

    RMSE = 100 · sqrt( Σ_k ‖ x_pred(kT_s) − x_true(kT_s) ‖_2² ) / sqrt( Σ_k ‖ x_true(kT_s) ‖_2² )    (47)

averaged over 500 randomly chosen initial conditions in the interior of the limit cycle. In the controlled setting, the forcing is a square wave with magnitude one and period 0.3 s. Figure 3 shows the true and predicted trajectories for the randomly chosen initial condition x_0 = [−0.2021, −0.2217]^⊤.

³ A thin plate spline radial basis function with center at x_0 is defined by ‖x − x_0‖² log(‖x − x_0‖).

Next, we investigate the spatial distribution of the prediction error as a function of the initial condition. We report results for two sets of data – the original set as described above and a second set with initial conditions starting on a circle of radius 0.05 centered around the origin and trajectory length of five seconds. Figure 4 reports the results (for brevity we depict only the results with control, the uncontrolled ones being very similar). For the first data set, the prediction error is very small except for the region inside the ball of radius 0.2 where no data was available. For the second data set, this error is largely eliminated, resulting in fewer outliers in the error distribution and hence smaller standard deviation; the mean error, however, is somewhat larger. We note that when the initial conditions within the 0.2 ball are excluded from the first data set, the standard deviation drops to 5.4 % and the mean error to 5.8 %.

6.2  Damped Duffing oscillator

Figure 5: Duffing oscillator – Prediction for (N_Λ, N_G) = (10, 20), a randomly chosen initial condition and square wave forcing.

As our second example we consider the damped Duffing oscillator with forcing

    ẋ_1 = x_2,
    ẋ_2 = −0.5x_2 − x_1(4x_1² − 1) + 0.5u.

The uncontrolled system (with u = 0) has two stable equilibria at (−0.5, 0) and (0.5, 0) as well as an unstable equilibrium at the origin.

Prediction  First we investigate the performance of the proposed predictors on this system. The data generation process is very similar to the previous example. In order to construct the eigenfunctions as described in Section 4 we generate M_t = 100 eight-second-long trajectories sampled with a sampling period T_s = 0.01 (i.e., M_s = 800). The initial conditions are chosen randomly from a uniform distribution on the unit circle. We note that the unit circle is not a non-recurrent surface for this system. The eigenvalues Λ are chosen to be Λ = mesh_{d_λ}((1/T_s) log Λ_DMD), where Λ_DMD are the eigenvalues obtained from applying the DMD algorithm to the generated data set and mesh_{d_λ}(·) is defined in (46). The set of boundary

[Figure 6 panels: 100 trajectories – mean error 7.1 %, standard deviation 6.8 %; 50 trajectories – mean error 6.8 %, standard deviation 12.2 %.]

Figure 6: Damped Duffing oscillator – Spatial distribution of the prediction error (controlled). The trajectories used for learning of the eigenfunctions are depicted in grey. The error for each of the 500 initial conditions from the test set is encoded by the size of the blue marker. The initial conditions of the trajectories were sampled uniformly from the unit circle (dashed black); note that the unit circle is not a non-recurrent surface for the dynamics and this property is not required by the method.

functions G consists of thin-plate spline radial basis functions with centers uniformly distributed in [−1, 1]². This defines the eigenfunctions on the data set through (25) and we extend them to the entire state space by linear interpolation. The function ξ was chosen to be the identity map (i.e., ξ(x) = x); therefore the output ŷ of (6) and (34) predicts the state of the system. The matrix C is computed using (30) with x̄ᵢ being the data used to construct the eigenfunctions plus random noise uniformly distributed over [−0.05, 0.05]². To obtain the matrix B in the controlled setting we generate data with forcing. The initial conditions are the same as in the uncontrolled setting; the forcing is a piecewise constant signal taking random values uniformly distributed in [−1, 1] in each sampling interval; the length of each trajectory is two seconds. The matrix B and its discrete counterpart B_d are then obtained using (40) and (41).

Figure 5 shows a prediction for a randomly chosen initial condition in the controlled setting, with the forcing being a square wave with period 0.3 s and unit magnitude. Table 2 reports the RMSE (47) averaged over 500 initial conditions chosen randomly inside the unit circle for different values of (NΛ , NG ), both in the controlled and the uncontrolled setting. Interestingly, the mean error is somewhat smaller in the controlled setting. This could be due either to statistical error (the initial conditions were drawn anew in each case) or to the control input forcing some of the trajectories to remain within well-sampled regions, thereby reducing the prediction error slightly. Figure 6 then shows the spatial distribution of the prediction error over a one-second prediction time interval, where we compare the predictors constructed from 100 trajectories and from 50 trajectories.
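The data-generation and regression steps described above can be sketched end to end. The following is a loose, self-contained illustration, not the authors' implementation: the Duffing trajectories and the DMD-based eigenvalue candidates follow the text, but the lifting `phi` below is a generic polynomial stand-in for the eigenfunction construction (25), used only to show the least-squares fit of C as in (30).

```python
# Hedged sketch of three steps from the text: (i) simulate trajectories of the
# Duffing oscillator from the unit circle, (ii) obtain candidate continuous-time
# eigenvalues as (1/Ts) log(Lambda_DMD) from plain least-squares DMD,
# (iii) fit the output matrix C by regressing the states on lifted observables.
# The lifting "phi" is a generic stand-in, NOT the construction (25).
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, x, u=0.0):
    # x1' = x2,  x2' = -0.5 x2 - x1 (4 x1^2 - 1) + 0.5 u
    return [x[1], -0.5 * x[1] - x[0] * (4 * x[0]**2 - 1) + 0.5 * u]

Ts, Ms, Mt = 0.01, 800, 20            # Mt reduced from 100 here for speed
rng = np.random.default_rng(0)
trajs = []
for th in rng.uniform(0.0, 2.0 * np.pi, Mt):   # initial conditions on unit circle
    t_eval = np.arange(Ms + 1) * Ts
    sol = solve_ivp(duffing, (0.0, Ms * Ts), [np.cos(th), np.sin(th)],
                    t_eval=t_eval, rtol=1e-8)
    trajs.append(sol.y)

# (ii) DMD eigenvalues from one-step snapshot pairs (X, Y).
X = np.hstack([S[:, :-1] for S in trajs])
Y = np.hstack([S[:, 1:] for S in trajs])
lam_dmd = np.linalg.eigvals(Y @ np.linalg.pinv(X))
lam_cont = np.log(lam_dmd.astype(complex)) / Ts   # (1/Ts) log Lambda_DMD

# (iii) Least-squares fit of C: minimize ||X - C Phi||_F, i.e. C = X Phi^+.
def phi(x):                            # illustrative polynomial lifting
    x1, x2 = x
    return np.array([x1, x2, x1**2, x1 * x2, x2**2, x1**3])

Phi = np.column_stack([phi(X[:, k]) for k in range(X.shape[1])])
C = np.linalg.lstsq(Phi.T, X.T, rcond=None)[0].T   # shape (2, 6)
```

Because the first two components of `phi` are the state itself, the regression recovers C as the coordinate projection onto those components; with the actual eigenfunctions of (25), C instead encodes the state in the eigenfunction basis.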
We observe that the mean RMSE prediction error is virtually the same, whereas the standard deviation almost doubles with 50 trajectories, corresponding to outliers in under-sampled regions.

Feedback control

Next, we apply the Koopman MPC developed in Section 5.1 to control the system with (NΛ , NG ) = (10, 20). The goal is to track a piecewise constant reference

Table 2: Damped Duffing oscillator – Prediction error. The total number of eigenfunctions for each combination (NΛ , NG ) is N = NΛ · NG .

(NΛ , NG )                 (10, 30)   (10, 20)   (6, 20)   (10, 10)   (10, 5)   (10, 3)
Mean RMSE [uncontrolled]    6.9 %      8.9 %     17.4 %     19.9 %    38.8 %    56.2 %
Mean RMSE [controlled]      4.6 %      6.7 %     15.8 %     15.7 %    35.6 %    53.5 %

[Figure 7 panels omitted (axis residue): a phase-space plot and time histories over t ∈ [0, 15].]
Figure 7: Duffing oscillator – feedback control using Koopman MPC.

signal, where we move from one stable equilibrium to the other, (0.5, 0) ↦ (−0.5, 0), continue to the unstable saddle point at the origin and finish at (0.25, 0), which is not an equilibrium point of the uncontrolled system but is stabilizable for the controlled one. The matrices Q and R in (42) were chosen as Q = diag(1, 0.1) and R = 0, and we imposed the constraint u ∈ [−1, 1] on the control input. The prediction horizon was set to one second, i.e., Np = 1.0/Ts = 100. The results are depicted in Figure 7; the tracking goal was achieved as desired. During the closed-loop operation, the MPC problem (45) was solved using the qpOASES solver [6]. The average computation time per time step was 0.023 s, the bulk of which was spent on evaluating the eigenfunction mapping φ̂ (Step 3 of Algorithm 2), whereas the solution of the quadratic program (45) took on average only 5 · 10⁻⁴ s. The evaluation of φ̂ could be significantly sped up by a more sophisticated interpolation implementation. We emphasize that the entire design was purely data driven and based only on linear model predictive

control, thereby allowing for a straightforward deployment in real-world applications.
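The closed-loop step described above (evaluate φ̂ at the current state, solve the box-constrained quadratic program (45), apply the first input) can be mimicked with generic tools. The sketch below is a hedged stand-in: the Hessian `H` and linear term `f` are random placeholders for H₁ and the state-dependent linear cost, and SciPy's bound-constrained L-BFGS-B solver replaces qpOASES purely for self-containment.

```python
# Toy stand-in for one receding-horizon step of (45): minimize a convex
# quadratic 0.5 u'Hu + f'u over the stacked input subject to u in [-1, 1],
# then apply only the first input. H and f are random placeholders, NOT the
# matrices of the paper; a dedicated QP solver (e.g. qpOASES) would be used
# in practice.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
Np, m = 20, 1                           # horizon and input dimension (toy sizes)
G = rng.standard_normal((Np * m, Np * m))
H = G.T @ G + np.eye(Np * m)            # positive-definite stand-in for H1
f = rng.standard_normal(Np * m)         # stand-in for the linear cost term

obj = lambda u: 0.5 * u @ H @ u + f @ u
grad = lambda u: H @ u + f
res = minimize(obj, np.zeros(Np * m), jac=grad, method="L-BFGS-B",
               bounds=[(-1.0, 1.0)] * (Np * m))   # enforce u in [-1, 1]
u_star = res.x
u_applied = u_star[:m]                  # apply first input, then re-solve next step
```

Re-solving this program at every sampling instant with the current lifted state updating the linear term is exactly the receding-horizon pattern of Algorithm 2.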

7

Conclusion

This work presented a systematic framework for data-driven learning of Koopman eigenfunctions in transient, off-attractor, regions of the state space. The method is geared toward prediction and control using linear predictors, allowing for feedback control and state estimation of nonlinear dynamical systems using established tools for linear systems that rely solely on convex optimization or simple linear algebra. The proposed method exploits the richness of the spectrum of the Koopman operator away from the attractor to construct a large number of eigenfunctions in order to minimize the projection error of the state (or any other observable of interest) on the span of the eigenfunctions. The proposed method is purely data-driven and very simple, relying only on linear algebra and/or convex optimization, with computational complexity comparable to DMD-type methods.

Future work will investigate the possibility of enhancing the proposed method with non-convex machine learning tools (e.g., deep neural networks) in order to learn the boundary functions {g1 , . . . , gNG } in such a way that the resulting eigenfunctions are easily approximable (or learnable) within the set of basis functions {β1 , . . . , βNβ } used for interpolation. A particularly appealing learning scheme in this regard is the concurrent learning of the functions {g1 , . . . , gNG } and {β1 , . . . , βNβ }, each parameterized by the weights and biases of a neural network. Along the same lines, one should also investigate an optimal choice of the eigenvalues used in the method such that the prediction error is minimized with as few eigenfunctions as possible. Finally, an extension of the present method to the case where recurrences are present should also be explored.

Appendix

The matrices in (45) are given in terms of the data of (42) by

\[
H_1 = R + B^\top Q B, \quad H_2 = 2 A^\top Q B, \quad h = q^\top B + r^\top,
\]
\[
L = F + E B, \quad M = E A, \quad d = [b_0^\top, \ldots, b_{N_p}^\top]^\top,
\]

where

\[
A = \begin{bmatrix} I \\ C A_d \\ C A_d^2 \\ \vdots \\ C A_d^{N_p} \end{bmatrix}, \quad
B = \begin{bmatrix}
0 & 0 & \cdots & 0 \\
C B_d & 0 & \cdots & 0 \\
C A_d B_d & C B_d & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
C A_d^{N_p - 1} B_d & \cdots & C A_d B_d & C B_d
\end{bmatrix}, \quad
F = \begin{bmatrix}
F_0 & 0 & \cdots & 0 \\
0 & F_1 & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & F_{N_p - 1} \\
0 & 0 & \cdots & 0
\end{bmatrix},
\]

and

\[
Q = \mathrm{diag}(Q_0, \ldots, Q_{N_p}), \quad R = \mathrm{diag}(R_0, \ldots, R_{N_p - 1}), \quad E = \mathrm{diag}(E_0, \ldots, E_{N_p}),
\]
\[
q = [q_0, \ldots, q_{N_p}], \quad r = [r_0, \ldots, r_{N_p - 1}].
\]
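The block lower-triangular matrix B above, whose (i, j) block for i > j is C A_d^{i−j−1} B_d with a zero first block row, can be assembled mechanically. The sketch below is an illustrative NumPy implementation under assumed shapes (A_d ∈ R^{N×N}, B_d ∈ R^{N×m}, C ∈ R^{n×N}), not code from the paper.

```python
# Illustrative assembly of the stacked prediction matrix B from the lifted
# discrete-time model (A_d, B_d, C). Block row 0 is zero; block (i, j) for
# i > j equals C A_d^(i-j-1) B_d. Shapes are assumptions, not from the paper.
import numpy as np

def prediction_matrix_B(Ad, Bd, C, Np):
    n, m = C.shape[0], Bd.shape[1]
    blocks = []                        # blocks[k] = C Ad^k Bd
    P = Bd.copy()
    for _ in range(Np):
        blocks.append(C @ P)
        P = Ad @ P
    B = np.zeros(((Np + 1) * n, Np * m))
    for i in range(1, Np + 1):         # block row 0 stays zero
        for j in range(i):
            B[i * n:(i + 1) * n, j * m:(j + 1) * m] = blocks[i - 1 - j]
    return B

# Scalar check with Ad = 2, Bd = C = 1, Np = 3:
B_demo = prediction_matrix_B(np.array([[2.0]]), np.array([[1.0]]),
                             np.array([[1.0]]), 3)
# → [[0, 0, 0], [1, 0, 0], [2, 1, 0], [4, 2, 1]]
```

Precomputing the blocks C A_d^k B_d once and filling the Toeplitz structure avoids recomputing matrix powers for every block entry.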

8

Acknowledgments

This research was supported by the ARO-MURI grant W911NF17-1-0306.

References

[1] H. Arbabi, M. Korda, and I. Mezić. A data-driven Koopman model predictive control framework for nonlinear flows. arXiv preprint arXiv:1804.05291, 2018.
[2] H. Arbabi and I. Mezić. Ergodic theory, dynamic mode decomposition, and computation of spectral properties of the Koopman operator. SIAM Journal on Applied Dynamical Systems, 16(4):2096–2126, 2017.
[3] V. I. Arnold. Ordinary Differential Equations. Springer-Verlag, third edition, 1984.
[4] E. M. Bollt, Q. Li, F. Dietrich, and I. Kevrekidis. On matching, and even rectifying, dynamical systems through Koopman operator eigenfunctions. SIAM Journal on Applied Dynamical Systems, 17(2):1925–1960, 2018.
[5] B. W. Brunton, L. A. Johnson, J. G. Ojemann, and J. N. Kutz. Extracting spatial–temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. Journal of Neuroscience Methods, 258:1–15, 2016.
[6] H. J. Ferreau, C. Kirches, A. Potschka, H. G. Bock, and M. Diehl. qpOASES: A parametric active-set algorithm for quadratic programming. Mathematical Programming Computation, 6(4):327–363, 2014.
[7] M. Georgescu and I. Mezić. Building energy modeling: A systematic approach to zoning and model reduction using Koopman mode analysis. Energy and Buildings, 86:794–802, 2015.
[8] N. Govindarajan, R. Mohr, S. Chandrasekaran, and I. Mezić. On the approximation of Koopman spectra for measure preserving transformations. arXiv preprint arXiv:1803.03920, 2018.
[9] E. Kaiser, J. N. Kutz, and S. L. Brunton. Data-driven discovery of Koopman eigenfunctions for control. arXiv preprint arXiv:1707.01146, 2017.
[10] B. O. Koopman. Hamiltonian systems and transformation in Hilbert space. Proceedings of the National Academy of Sciences of the United States of America, 17(5):315, 1931.
[11] M. Korda and I. Mezić. Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control. Automatica, 93:149–160, 2018.
[12] M. Korda and I. Mezić. On convergence of extended dynamic mode decomposition to the Koopman operator. Journal of Nonlinear Science, 28(2):687–710, 2018.
[13] M. Korda, M. Putinar, and I. Mezić. Data-driven spectral analysis of the Koopman operator. Applied and Computational Harmonic Analysis, 2018.
[14] M. Korda, Y. Susuki, and I. Mezić. Power grid transient stabilization using Koopman model predictive control. arXiv preprint arXiv:1803.10744, 2018.
[15] K. Küster, R. Derndinger, and R. Nagel. The Koopman Linearization of Dynamical Systems. Diplomarbeit, Arbeitsbereich Funktionalanalysis, Mathematisches Institut, Eberhard-Karls-Universität Tübingen, März 2015.
[16] A. Mauroy and J. Goncalves. Linear identification of nonlinear systems: A lifting technique based on the Koopman operator. arXiv preprint arXiv:1605.04457, 2016.
[17] I. Mezić. Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dynamics, 41(1-3):309–325, 2005.
[18] I. Mezić. Koopman operator spectrum and data analysis. arXiv preprint arXiv:1702.07597, 2017.
[19] I. Mezić and A. Banaszuk. Comparison of systems with complex behavior. Physica D: Nonlinear Phenomena, 197(1):101–133, 2004.
[20] R. Mohr and I. Mezić. Construction of eigenfunctions for scalar-type operators via Laplace averages with connections to the Koopman operator. arXiv preprint arXiv:1403.6559, 2014.
[21] N. Parikh and S. Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):127–239, 2014.
[22] G. Plonka and M. Tasche. Prony methods for recovery of structured functions. GAMM-Mitteilungen, 37(2):239–258, 2014.
[23] J. L. Proctor, S. L. Brunton, and J. N. Kutz. Dynamic mode decomposition with control. SIAM Journal on Applied Dynamical Systems, 15(1):142–161, 2016.
[24] F. Raak, Y. Susuki, I. Mezić, and T. Hikihara. On Koopman and dynamic mode decompositions for application to dynamic data with low spatial dimension. In IEEE 55th Conference on Decision and Control (CDC), pages 6485–6491, 2016.
[25] C. W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D. Henningson. Spectral analysis of nonlinear flows. Journal of Fluid Mechanics, 641(1):115–127, 2009.
[26] P. J. Schmid. Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656:5–28, 2010.
[27] A. Surana and A. Banaszuk. Linear observer synthesis for nonlinear systems using Koopman operator framework. In IFAC Symposium on Nonlinear Control Systems (NOLCOS), 2016.
[28] A. Surana, M. O. Williams, M. Morari, and A. Banaszuk. Koopman operator framework for constrained state estimation. In IEEE 56th Annual Conference on Decision and Control (CDC), pages 94–101, 2017.
[29] N. Takeishi, Y. Kawahara, and T. Yairi. Learning Koopman invariant subspaces for dynamic mode decomposition. In Advances in Neural Information Processing Systems, pages 1130–1140, 2017.
[30] M. O. Williams, M. S. Hemati, S. T. M. Dawson, I. G. Kevrekidis, and C. W. Rowley. Extending data-driven Koopman analysis to actuated systems. In IFAC Symposium on Nonlinear Control Systems (NOLCOS), 2016.
[31] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley. A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition. Journal of Nonlinear Science, 25(6):1307–1346, 2015.
[32] H. Wu and F. Noé. Variational approach for learning Markov processes from time series data. arXiv preprint arXiv:1707.04659, 2017.
[33] H. Wu, F. Nüske, F. Paul, S. Klus, P. Koltai, and F. Noé. Variational Koopman models: Slow collective variables and molecular kinetics from short off-equilibrium simulations. The Journal of Chemical Physics, 146(15):154104, 2017.
[34] A. Y. Yang, Z. Zhou, A. G. Balasubramanian, S. S. Sastry, and Y. Ma. Fast ℓ1-minimization algorithms for robust face recognition. IEEE Transactions on Image Processing, 22(8):3234–3246, 2013.