Seminar on Theoretical Machine Learning
Topic: Prediction and Control of Linear Dynamical Systems
Date: Thursday, January 25
Time/Room: 12:15pm - 1:45pm, White-Levy
Linear dynamical systems (LDSs) are a class of time-series models widely used in robotics, finance, engineering, and meteorology. I will present our "spectral filtering" approach to the identification and control of discrete-time LDSs with multi-dimensional inputs, outputs, and a latent state. I'll cover a series of results, which are joint work with Elad Hazan, Karan Singh, Sanjeev Arora, Holden Lee, and Yi Zhang.

We first consider the task of system identification: given input and output time series (linear disturbances and observations of the hidden state), find the matrices which describe the evolution of the system. This is a classic non-convex problem, typically tackled with heuristics like gradient descent ("backpropagation through time") or the EM algorithm. We present an improper learning approach: carefully overparameterize the system identification problem in exchange for convexity of the loss function. In the online formulation of system identification, this results in a simple, efficient, and practical algorithm for low-regret prediction (i.e. asymptotically vanishing MSE). At the core of this method is a convex relaxation technique, which may be of independent interest: convolutions of the input time series with the eigenvectors of a certain Hankel matrix.

Next, we extend our prediction result to the harder problem of control: given black-box access to an unknown LDS, select a series of inputs which stabilize the system around a desired state. Linear control is a widely considered primitive in robotics and engineering, and serves as a minimally stylized non-trivial problem in reinforcement learning. To this end, we give an approach for provably controlling a certain class of LDSs, which simultaneously matches the best time and sample complexity of other provable methods.
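As a rough illustration of the filter construction described above, here is a minimal NumPy sketch of one reading of the method: build a Hankel matrix with entries of the form 2/((i+j)^3 - (i+j)) (the form used in the spectral filtering line of work), take its top eigenvectors as convolutional filters, and convolve them with the input history to produce features for a convex (linear) predictor. All function names, the eigenvalue-scaling exponent, and the dimensions are illustrative assumptions, not the authors' code.

```python
import numpy as np

def hankel_matrix(T):
    """T x T Hankel matrix with Z[i, j] = 2 / ((i+j)^3 - (i+j)), 1-indexed.
    (Entry form assumed from the spectral filtering literature.)"""
    idx = np.arange(1, T + 1)
    s = idx[:, None] + idx[None, :]          # s = i + j, ranging over 2..2T
    return 2.0 / (s**3 - s)

def spectral_filters(T, k):
    """Top-k eigenvectors of Z_T, scaled by eigenvalue^(1/4).
    (The 1/4 scaling is an assumption taken from the papers' filter definition.)"""
    eigvals, eigvecs = np.linalg.eigh(hankel_matrix(T))
    order = np.argsort(eigvals)[::-1][:k]    # eigh returns ascending order
    return eigvals[order] ** 0.25 * eigvecs[:, order]   # shape (T, k)

def filter_features(X, filters):
    """Convolve each filter with the recent input history.

    X       : input series, shape (N, n)
    filters : spectral filters, shape (T, k)
    returns : features, shape (N, k, n), where feats[t, j] is the inner
              product of filter j with the last T inputs (newest first,
              zero-padded at the start of the series).
    """
    T, k = filters.shape
    feats = np.zeros((X.shape[0], k, X.shape[1]))
    for t in range(X.shape[0]):
        hist = X[max(0, t - T + 1): t + 1][::-1]   # newest input first
        feats[t] = filters[:len(hist)].T @ hist
    return feats
```

The point of the construction is that the prediction at each step becomes linear in these features, so the online learning problem is convex even though the underlying system matrices are unknown; the eigenvalues of this Hankel matrix decay rapidly, so a small number of filters k suffices.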