Model-Free Linear Quadratic Control via Reduction to Expert Prediction


Yasin Abbasi-Yadkori (Adobe Research)        Nevena Lazić (Google Brain)        Csaba Szepesvári (DeepMind)

Abstract

Model-free approaches for reinforcement learning (RL) and continuous control find policies based only on past states and rewards, without fitting a model of the system dynamics. They are appealing as they are general purpose and easy to implement; however, they also come with fewer theoretical guarantees than model-based RL. In this work, we present a new model-free algorithm for controlling linear quadratic (LQ) systems, and show that its regret scales as O(T^{2/3+ξ}) for any small ξ > 0 if the time horizon satisfies T > C^{1/ξ} for a constant C. The algorithm is based on a reduction of control of Markov decision processes to an expert prediction problem. In practice, it corresponds to a variant of policy iteration with forced exploration, where the policy in each phase is greedy with respect to the average of all previous value functions. This is the first model-free algorithm for adaptive control of LQ systems that provably achieves sublinear regret and has a polynomial computation cost. Empirically, our algorithm dramatically outperforms standard policy iteration, but performs worse than a model-based approach.

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS) 2019, Naha, Okinawa, Japan. PMLR: Volume 89. Copyright 2019 by the author(s).

1    INTRODUCTION

Reinforcement learning (RL) algorithms have recently shown impressive performance in many challenging decision making problems, including game playing and various robotic tasks. Model-based RL approaches estimate a model of the transition dynamics and rely on the model to plan future actions using approximate dynamic programming. Model-free approaches aim to find an optimal policy without explicitly modeling the system transitions; they either estimate state-action value functions or directly optimize a parameterized policy based only on interactions with the environment. Model-free RL is appealing for a number of reasons: 1) it is an "end-to-end" approach, directly optimizing the cost function of interest, 2) it avoids the difficulty of modeling and robust planning, and 3) it is easy to implement. However, model-free algorithms also come with fewer theoretical guarantees than their model-based counterparts, which presents a considerable obstacle in deploying them in real-world physical systems with safety concerns and the potential for expensive failures.

In this work, we propose a model-free algorithm for controlling linear quadratic (LQ) systems with theoretical guarantees. LQ control is one of the most studied problems in control theory (Bertsekas, 1995), and it is also widely used in practice. Its simple formulation and tractability given known dynamics make it an appealing benchmark for studying RL algorithms with continuous states and actions. A common way to analyze the performance of sequential decision making algorithms is to use the notion of regret: the difference between the total cost incurred and the cost of the best policy in hindsight (Cesa-Bianchi and Lugosi, 2006, Hazan, 2016, Shalev-Shwartz, 2012). We show that our model-free LQ control algorithm enjoys an O(T^{2/3+ξ}) regret bound. Note that existing regret bounds for LQ systems are only available for model-based approaches.

Our algorithm is a modified version of policy iteration with exploration similar to ε-greedy, but performed at a fixed schedule. Standard policy iteration estimates the value of the current policy in each round, and sets the next policy to be greedy with respect to the most recent value function. By contrast, we use a policy that is greedy with respect to the average of all past value functions in each round. The form of this update is a direct consequence of a reduction of the control of Markov decision processes (MDPs) to expert prediction problems (Even-Dar et al., 2009). In this reduction, each prediction loss corresponds to the value function of the most recent policy, and the next policy is the output of the expert algorithm. The structure of the LQ control problem allows for an easy implementation of this idea: since the value function is quadratic, the average of all previous value functions is also quadratic.
One major challenge in this work is the finite-time analysis of the value function estimation error. Existing finite-sample results either consider bounded functions or discounted problems, and are not applicable in our setting. Our analysis relies on the contractiveness of stable policies, as well as the fact that our algorithm takes exploratory actions. Another challenge is showing boundedness of the value functions in our iterative scheme, especially considering that the state and action spaces are unbounded. We are able to do so by showing that the policies produced by our algorithm are stable assuming a sufficiently small estimation error.

Our main contribution is a model-free algorithm for adaptive control of linear quadratic systems with strong theoretical guarantees. This is the first such algorithm that provably achieves sublinear regret and has a polynomial computation cost. The only other computationally efficient algorithm with sublinear regret is the model-based approach of Dean et al. (2018) (which appeared in parallel to this work). Previous works have either been restricted to one-dimensional LQ problems (Abeille and Lazaric, 2017), or have considered the problem in a Bayesian setting (Ouyang et al., 2017). In addition to theoretical guarantees, we demonstrate empirically that our algorithm leads to significantly more stable policies than standard policy iteration.

1.1    Related work

Model-based adaptive control of linear quadratic systems has been studied extensively in the control literature. Open-loop strategies identify the system in a dedicated exploration phase. Classical asymptotic results in linear system identification are covered in (Ljung and Söderström, 1983); an overview of frequency-domain system identification methods is available in (Chen and Gu, 2000), while identification of auto-regressive time series models is covered in (Box et al., 2015). Non-asymptotic results are limited, and existing studies often require additional stability assumptions on the system (Helmicki et al., 1991, Hardt et al., 2016, Tu et al., 2017). Dean et al. (2017) relate the finite-sample identification error to the smallest eigenvalue of the controllability Gramian.

Closed-loop model-based strategies update the model online while trying to control the system, and are more akin to standard RL. Fiechter (1997) and Szita (2007) study model-based algorithms with PAC-bound guarantees for discounted LQ problems. Asymptotically efficient algorithms are shown in (Lai and Wei, 1982, 1987, Chen and Guo, 1987, Campi and Kumar, 1998, Bittanti and Campi, 2006). Multiple approaches (Campi and Kumar, 1998, Bittanti and Campi, 2006, Abbasi-Yadkori and Szepesvári, 2011, Ibrahimi et al., 2012) have relied on the optimism in the face of uncertainty principle. Abbasi-Yadkori and Szepesvári (2011) show an O(√T) finite-time regret bound for an optimistic algorithm that selects the dynamics with the lowest attainable cost from a confidence set; however, this strategy is somewhat impractical as finding the lowest-cost dynamics is computationally intractable. Abbasi-Yadkori and Szepesvári (2015), Abeille and Lazaric (2017), and Ouyang et al. (2017) demonstrate similar regret bounds in the Bayesian and one-dimensional settings using Thompson sampling. Dean et al. (2018) show an O(T^{2/3}) regret bound using robust control synthesis.

Fewer theoretical results exist for model-free LQ control. The LQ value function can be expressed as a linear function of known features, and is hence amenable to least squares estimation methods. Least squares temporal difference (LSTD) learning has been extensively studied in reinforcement learning, with asymptotic convergence shown by Tsitsiklis and Van Roy (1997), Tsitsiklis and Roy (1999), Yu and Bertsekas (2009), and finite-sample analyses given in Antos et al. (2008), Farahmand et al. (2016), Lazaric et al. (2012), Liu et al. (2015, 2012). Most of these methods assume bounded features and rewards, and hence do not apply to the LQ setting. For LQ control, Bradtke et al. (1994) show asymptotic convergence of Q-learning to the optimum under persistently exciting inputs, and Tu and Recht (2017) analyze the finite sample complexity of LSTD for discounted LQ problems. Here we adapt the work of Tu and Recht (2017) to analyze the finite sample estimation error in the average-cost setting. Among other model-free LQ methods, Fazel et al. (2018) analyze policy gradient for deterministic dynamics, and Arora et al. (2018) formulate optimal control as a convex program by relying on a spectral filtering technique for representing linear dynamical systems in a linear basis.

Relevant model-free methods for finite state-action MDPs include the Delayed Q-learning algorithm of Strehl et al. (2006), which is based on the optimism principle and has a PAC bound in the discounted setting. Osband et al. (2017) propose exploration by randomizing value function parameters, an algorithm that is applicable to large state problems. However, the performance guarantees are only shown for finite-state problems.

Our approach is based on a reduction of MDP control to an expert prediction problem. The reduction was first proposed by Even-Dar et al. (2009) for the online control of finite-state MDPs with changing cost functions. This approach has since been extended to finite MDPs with known dynamics and bandit feedback (Neu et al., 2014), LQ tracking with known dynamics (Abbasi-Yadkori et al., 2014), and linearly solvable MDPs (Neu and Gómez, 2017).
2    PRELIMINARIES

We model the interaction between the agent (i.e. the learning algorithm) and the environment as a Markov decision process (MDP). An MDP is a tuple ⟨X, A, c, P⟩, where X ⊂ ℝ^n is the state space, A ⊂ ℝ^d is the action space, c : X × A → ℝ is a cost function, and P : X × A → X is the transition probability distribution that maps each state-action pair to a distribution over states X. At each discrete time step t ∈ ℕ, the agent receives the state of the environment x_t ∈ X, chooses an action a_t ∈ A based on x_t and past observations, and suffers a cost c_t = c(x_t, a_t). The environment then transitions to the next state according to x_{t+1} ∼ P(x_t, a_t). We assume that the agent does not know P, but does know c. A policy is a mapping π : X → A from the current state to an action, or a distribution over actions. Following a policy means that in any round, upon receiving state x, the action a is chosen according to π(x). Let μ_π(x) be the stationary state distribution under policy π, and let λ_π = E_{μ_π}[c(x, π(x))] be the average cost of policy π:
\[
\lambda_\pi := \lim_{T \to +\infty} \mathbb{E}_\pi\Big[ \frac{1}{T} \sum_{t=1}^{T} c(x_t, a_t) \Big],
\]
which does not depend on the initial state in the problems that we consider in this paper. The corresponding bias function, also called the value function in this paper, associated with a stationary policy π is given by
\[
V_\pi(x) := \lim_{T \to +\infty} \mathbb{E}_\pi\Big[ \sum_{t=1}^{T} \big(c(x_t, a_t) - \lambda_\pi\big) \,\Big|\, x_1 = x \Big].
\]
The average cost λ_π and value V_π satisfy the following evaluation equation for any state x ∈ X,
\[
V_\pi(x) = c(x, \pi(x)) - \lambda_\pi + \mathbb{E}_{x' \sim P(\cdot \mid x, \pi(x))}\big[V_\pi(x')\big].
\]
Let x_t^π be the state at time step t when policy π is followed. The objective of the agent is to have small regret, defined as
\[
\mathrm{Regret}_T = \sum_{t=1}^{T} c(x_t, a_t) - \min_{\pi} \sum_{t=1}^{T} c\big(x_t^\pi, \pi(x_t^\pi)\big).
\]

2.1    Linear quadratic control

In a linear quadratic control problem, the state transition dynamics and the cost function are given by
\[
x_{t+1} = A x_t + B a_t + w_{t+1}, \qquad c_t = x_t^\top M x_t + a_t^\top N a_t.
\]
The state space is X = ℝ^n and the action space is A = ℝ^d. We assume the initial state is zero, x_1 = 0. A and B are unknown dynamics matrices of appropriate dimensions, assumed to be controllable¹. M and N are known positive definite cost matrices. Vectors w_{t+1} correspond to system noise; similarly to previous work, we assume that the w_t are drawn i.i.d. from a known Gaussian distribution N(0, W).

¹ The linear system is controllable if the matrix (B  AB  ···  A^{n−1}B) has full column rank.

In the infinite horizon setting, it is well known that the optimal policy π_*(x) corresponding to the lowest average cost λ_π is given by constant linear state feedback, π_*(x) = −K_* x. When following any linear feedback policy π(x) = −Kx, the system states evolve as x_{t+1} = (A − BK)x_t + w_{t+1}. A linear policy is called stable if ρ(A − BK) < 1, where ρ(·) denotes the spectral radius of a matrix. It is well known that the value function V_π and the state-action value function Q_π of any stable linear policy π(x) = −Kx are quadratic functions (see e.g. Abbasi-Yadkori et al. (2014)):
\[
Q_\pi(x, a) = \begin{pmatrix} x^\top & a^\top \end{pmatrix} G_\pi \begin{pmatrix} x \\ a \end{pmatrix}, \qquad
V_\pi(x) = x^\top H_\pi x = x^\top \begin{pmatrix} I & -K^\top \end{pmatrix} G_\pi \begin{pmatrix} I \\ -K \end{pmatrix} x,
\]
where H_π ⪰ 0 and G_π ⪰ 0. We call H_π the value matrix of policy π. The matrix G_π is the unique solution of the equation
\[
G = \begin{pmatrix} A^\top \\ B^\top \end{pmatrix} \begin{pmatrix} I & -K^\top \end{pmatrix} G \begin{pmatrix} I \\ -K \end{pmatrix} \begin{pmatrix} A & B \end{pmatrix} + \begin{pmatrix} M & 0 \\ 0 & N \end{pmatrix}.
\]
The greedy policy with respect to Q_π is given by
\[
\pi'(x) = \operatorname*{argmin}_a Q_\pi(x, a) = -G_{\pi,22}^{-1} G_{\pi,21}\, x = -K' x.
\]
Here, G_{π,ij} for i, j ∈ {1, 2} refers to the (i, j) block of the matrix G_π, where the block structure is based on the state and action dimensions. The average expected cost of following a linear policy is λ_π = tr(H_π W). The stationary state distribution of a stable linear policy is μ_π(x) = N(x | 0, Σ), where Σ is the unique solution of the Lyapunov equation
\[
\Sigma = (A - BK)\, \Sigma\, (A - BK)^\top + W.
\]
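The quadratic structure above is easy to make concrete. The following sketch is our illustration only (it is not part of the paper's algorithm, since it uses the true A and B, which the learner never knows): given a stable gain K, it solves the Lyapunov-type equations for H_π and the stationary covariance Σ, forms G_π, and reads off the greedy gain G_{π,22}^{-1} G_{π,21} and the average cost tr(H_π W). The function name and signature are ours.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    def evaluate_linear_policy(A, B, M, N, W, K):
        """Evaluate the stable linear policy pi(x) = -K x when A, B are known.

        Illustrative sketch only: MFLQ never uses A and B directly.
        Returns the value matrix H, the state-action matrix G, the greedy
        improvement gain, the average cost, and the stationary covariance.
        """
        L = A - B @ K                       # closed-loop matrix, assumed stable
        assert np.max(np.abs(np.linalg.eigvals(L))) < 1.0
        # H solves H = (M + K^T N K) + L^T H L, a discrete Lyapunov equation.
        # solve_discrete_lyapunov(a, q) returns X with a X a^T - X + q = 0,
        # so pass a = L^T.
        H = solve_discrete_lyapunov(L.T, M + K.T @ N @ K)
        n, d = A.shape[0], B.shape[1]
        AB = np.hstack([A, B])
        G = np.block([[M, np.zeros((n, d))],
                      [np.zeros((d, n)), N]]) + AB.T @ H @ AB
        K_greedy = np.linalg.solve(G[n:, n:], G[n:, :n])   # pi'(x) = -K_greedy x
        avg_cost = np.trace(H @ W)                         # lambda_pi = tr(H W)
        Sigma = solve_discrete_lyapunov(L, W)              # Sigma = L Sigma L^T + W
        return H, G, K_greedy, avg_cost, Sigma

Iterating this evaluation and replacing K with the greedy gain is classical policy iteration with known dynamics; MFLQ replaces the exact H and G with the estimates of Section 4.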
3    MODEL-FREE LQ CONTROL

Our model-free linear quadratic control algorithm (MFLQ) is shown in Algorithm 1, where V1 and V2 indicate different versions. At a high level, MFLQ is a variant of policy iteration with a deterministic exploration schedule. We assume that an initial stable suboptimal policy π_1(x) = −K_1 x is given. During phase i, we first execute policy π_i for a fixed number of rounds, and compute a value function estimate V̂_i. We then estimate Q_i from V̂_i and a dataset Z = {(x_t, a_t, x_{t+1})} which includes exploratory actions. We set π_{i+1} to the greedy policy with respect to the average of all previous estimates Q̂_1, ..., Q̂_i. This step is different from standard policy iteration (which only considers the most recent value estimate), and is a consequence of using the FOLLOW-THE-LEADER expert algorithm (Cesa-Bianchi and Lugosi, 2006).

The dataset Z is generated by executing the policy and taking a random action every T_s steps. In MFLQ V1, we generate Z at the beginning and reuse it in all phases, while in V2 we generate a new dataset Z in each phase following the execution of the policy. While MFLQ as described stores long trajectories in each phase, this requirement can be removed by updating the parameters of V_i and Q_i in an online fashion. However, in the case of MFLQ V1, we need to store the dataset Z throughout (since it gets reused), so this variant is more memory demanding.

Algorithm 1  MFLQ

  MFLQ(stable policy π_1, trajectory length T, initial state x_0, exploration covariance Σ_a):
    V1:  S = T^{1/3−ξ} − 1,  T_s = const,  T_v = T^{2/3+ξ}
    V1:  Z = COLLECTDATA(π_1, T_v, T_s, Σ_a)
    V2:  S = T^{1/4},  T_s = T^{1/4−ξ},  T_v = 0.5 T^{3/4}
    for i = 1, 2, ..., S do
      Execute π_i for T_v rounds and compute V̂_i using (2)
      V2:  Z = COLLECTDATA(π_i, T_v, T_s, Σ_a)
      Compute Q̂_i from Z and V̂_i using (7)
      π_{i+1}(x) = argmin_a Σ_{j=1}^{i} Q̂_j(x, a) = −K_{i+1} x
    end for

  COLLECTDATA(policy π, trajectory length τ, exploration period s, covariance Σ_a):
    Z = {}
    for k = 1, 2, ..., ⌊τ/s⌋ do
      Execute the policy π for s − 1 rounds and let x be the final state
      Sample a ∼ N(0, Σ_a), observe the next state x_+, add (x, a, x_+) to Z
    end for
    return Z
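To make the follow-the-leader update explicit, here is a schematic sketch of the MFLQ V2 outer loop (our illustration, not the authors' code). The helpers rollout, collect_data, estimate_V and estimate_Q stand in for Algorithm 1's COLLECTDATA and the estimators of Equations (2) and (7), and are assumptions of this sketch.

    import numpy as np

    def mflq_v2(env, K1, T, Sigma_a, rollout, collect_data, estimate_V, estimate_Q, xi=0.0):
        """Schematic outer loop of MFLQ V2 (illustration only).

        K1 is an initial stable gain, pi_1(x) = -K1 x; env is a simulator of
        the unknown LQ system; the helper functions implement value and
        state-action value estimation as in Sections 4.1 and 4.2.
        """
        d, n = K1.shape
        S = int(T ** 0.25)                   # number of phases
        Ts = max(1, int(T ** (0.25 - xi)))   # exploration period
        Tv = int(0.5 * T ** 0.75)            # rounds per phase
        K = K1
        G_sum = np.zeros((n + d, n + d))
        for i in range(S):
            traj = rollout(env, K, Tv)                         # execute pi_i for Tv rounds
            H_hat = estimate_V(traj)                           # LSTD estimate of H_i, Eq. (2)
            Z, costs = collect_data(env, K, Tv, Ts, Sigma_a)   # exploratory tuples (x, a, x+)
            G_sum += estimate_Q(Z, costs, H_hat)               # estimate of G_i, Eq. (7)
            # Greedy with respect to the AVERAGE of all past Q-estimates:
            # a sum of quadratics is quadratic, so the argmin is linear in x.
            K = np.linalg.solve(G_sum[n:, n:], G_sum[n:, :n])  # pi_{i+1}(x) = -K x
        return K

Averaging the Ĝ_j (equivalently, summing them) keeps the argmin a linear policy, which is what makes the follow-the-leader update computationally cheap; standard policy iteration is recovered by replacing the running sum with the most recent estimate.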
Assume the initial policy is stable and let C_1 be the norm of its value matrix. Our main results are the following two theorems.

Theorem 3.1. For any δ, ξ > 0, appropriate constants C and C̄, and T > C̄^{1/ξ}, the regret of the MFLQ V1 algorithm is bounded as
\[
\mathrm{Regret}_T \le C\, T^{2/3+\xi} \log T,
\]
where C and C̄ scale as poly(n, d, ‖H_1‖, log(1/δ)).

Theorem 3.2. For any δ, ξ > 0, appropriate constants C and C̄, and T > C̄^{1/ξ}, the regret of the MFLQ V2 algorithm is bounded as
\[
\mathrm{Regret}_T \le C\, T^{3/4+\xi} \log T,
\]
where C and C̄ scale as poly(n, d, ‖H_1‖, log(1/δ)).

To prove the above theorems, we rely on the following regret decomposition. The regret of an algorithm with respect to a fixed policy π can be written as
\[
\mathrm{Regret}_T = \sum_{t=1}^{T} c_t - \sum_{t=1}^{T} c_t^\pi = \alpha_T + \beta_T + \gamma_T,
\]
\[
\alpha_T = \sum_{t=1}^{T} \big(c_t - \lambda_{\pi(t)}\big), \qquad
\beta_T = \sum_{t=1}^{T} \big(\lambda_{\pi(t)} - \lambda_\pi\big), \qquad
\gamma_T = \sum_{t=1}^{T} \big(\lambda_\pi - c_t^\pi\big),
\]
where π(t) is the policy played by the algorithm at time t. The terms α_T and γ_T represent the difference between the instantaneous and average cost of a policy, and can be bounded using mixing properties of policies and MDPs. To bound β_T, we first show that (see, e.g., Even-Dar et al. (2009))
\[
\lambda_{\pi(t)} - \lambda_\pi = \mathbb{E}_{x \sim \mu_\pi}\big(Q_{\pi(t)}(x, \pi(t)(x)) - Q_{\pi(t)}(x, \pi(x))\big).
\]
Let Q̂_i be an estimate of Q_i, computed from data at the end of phase i. We can write
\[
\begin{aligned}
Q_i(x, \pi_i(x)) - Q_i(x, \pi(x)) = {}& \hat Q_i(x, \pi_i(x)) - \hat Q_i(x, \pi(x)) \\
& + Q_i(x, \pi_i(x)) - \hat Q_i(x, \pi_i(x)) \\
& + \hat Q_i(x, \pi(x)) - Q_i(x, \pi(x)).
\end{aligned} \tag{1}
\]
Since we feed the expert in state x with Q̂_i(x, ·) at the end of each phase, the first term on the right-hand side can be bounded by the regret bound of the expert algorithm. The remaining terms correspond to the estimation errors. We will show that in the case of linear quadratic control, the value function parameters can be estimated with small error. Given sufficiently small estimation errors, we show that all policies remain stable, and hence all value functions remain bounded. Given the boundedness and the quadratic form of the value functions, we use existing regret bounds for the FTL strategy to finish the proof.

4    VALUE FUNCTION ESTIMATION

4.1    State value function

In this section, we study least squares temporal difference (LSTD) estimates of the value matrix H_π. In order to simplify notation, we will drop π subscripts in this section. In steady state, with a_t = π(x_t), we have the following:
\[
\begin{aligned}
V(x_t) &= c(x_t, a_t) + \mathbb{E}\big(V(x_{t+1}) \mid x_t, a_t\big) - \lambda, \\
x_t^\top H x_t &= c(x_t, a_t) + \mathbb{E}\big(x_{t+1}^\top H x_{t+1} \mid x_t, a_t\big) - \mathrm{tr}(W H).
\end{aligned}
\]
Let vec(A) denote the vectorized version of a symmetric matrix A, such that vec(A_1)^⊤ vec(A_2) = tr(A_1 A_2), and let φ(x) = vec(xx^⊤). We will use the shorthand notation φ_t = φ(x_t) and c_t = c(x_t, a_t). The vectorized version of the Bellman equation is
\[
\phi_t^\top \mathrm{vec}(H) = c_t + \big(\mathbb{E}(\phi_{t+1} \mid x_t, \pi(x_t)) - \mathrm{vec}(W)\big)^\top \mathrm{vec}(H).
\]
By multiplying both sides with φ_t and taking expectations with respect to the steady-state distribution,
\[
\mathbb{E}\big(\phi_t\, (\phi_t - \phi_{t+1} + \mathrm{vec}(W))^\top\big)\, \mathrm{vec}(H) = \mathbb{E}(\phi_t c_t).
\]
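As a quick sanity check of the vectorized representation (our illustration, not from the paper): with φ(x) = vec(xx^⊤), the identity φ(x)^⊤ vec(H) = x^⊤ H x holds, which is what makes the value function linear in the features.

    import numpy as np

    def phi(x):
        """Feature map phi(x) = vec(x x^T), flattened in a fixed order."""
        return np.outer(x, x).ravel()

    rng = np.random.default_rng(0)
    x = rng.standard_normal(3)
    H = rng.standard_normal((3, 3))
    H = (H + H.T) / 2                 # a symmetric value matrix
    # vec(A1)^T vec(A2) = tr(A1 A2) for symmetric matrices, hence:
    assert np.isclose(phi(x) @ H.ravel(), x @ H @ x)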
We estimate H from data generated by following the policy for τ rounds. Let Φ be a τ × n² matrix whose rows are the vectors φ_1, ..., φ_τ, and similarly let Φ_+ be a matrix whose rows are φ_2, ..., φ_{τ+1}. Let Φ_W be a τ × n² matrix whose every row is vec(W). Let c = [c_1, ..., c_τ]^⊤. The LSTD estimator of H is given by (see e.g. Tsitsiklis and Roy (1999), Yu and Bertsekas (2009)):
\[
\mathrm{vec}(\hat H) = \big(\Phi^\top (\Phi - \Phi_+ + \Phi_W)\big)^{\dagger}\, \Phi^\top c, \tag{2}
\]
where (·)^† denotes the pseudo-inverse. Given that H ⪰ M, we project our estimate onto the constraint Ĥ ⪰ M. Note that this step can only decrease the estimation error, since an orthogonal projection onto a closed convex set is contractive.

Remark. Since the average cost tr(WH) cannot be computed from the value function parameters H alone, assuming a known noise covariance W seems necessary. However, if W is unknown in practice, we can use the following estimator instead, which relies on the empirical average cost c̄ = (1/τ) Σ_{t=1}^τ c_t:
\[
\mathrm{vec}(\tilde H_\tau) = \big(\Phi^\top (\Phi - \Phi_+)\big)^{\dagger}\, \Phi^\top (c - \bar c\, \mathbf{1}). \tag{3}
\]

Lemma 4.1. Let Σ_t be the state covariance at time step t, and let Σ_π be the steady-state covariance of the policy. Let Σ_V = Σ_π + Σ_0 and let L = A − BK. With probability at least 1 − δ, we have
\[
\|H - \hat H\|_F = \frac{\|H\|^2}{\lambda_{\min}(M)^2}\; O\Big( \tau^{-1/2}\, \|W \hat H\|_F\, \mathrm{tr}(\Sigma_V)\, \|\Sigma_V^{1/2}\|\, \mathrm{polylog}(\tau, 1/\delta) \Big). \tag{4}
\]

The proof is similar to the analysis of LSTD for the discounted LQ setting by Tu and Recht (2017); however, instead of relying on the contractive property of the discounted Bellman operator, we use the contractiveness of stable policies.

Proof. Let Φ̄_+ be a τ × n² matrix whose rows correspond to the vectors E[φ_2 | φ_1, π], ..., E[φ_{τ+1} | φ_τ, π]. Let P = Φ(Φ^⊤Φ)^{−1}Φ^⊤ be the orthogonal projector onto Φ. The value function estimate ĥ = vec(Ĥ) and the true value function parameters h = vec(H) satisfy the following:
\[
\Phi \hat h = P\big(c + (\Phi_+ - \Phi_W)\hat h\big), \qquad
\Phi h = c + (\bar\Phi_+ - \Phi_W)\, h .
\]
Subtracting the two previous equations, and adding and subtracting P\bar\Phi_+ \hat h, we have:
\[
P(\Phi - \bar\Phi_+ + \Phi_W)(h - \hat h) = P(\Phi_+ - \bar\Phi_+)\hat h .
\]
In Appendix A.1, we show that the left-hand side can equivalently be written as
\[
P(\Phi - \bar\Phi_+ + \Phi_W)(h - \hat h) = \Phi (I - L \otimes L)^\top \mathrm{vec}(H - \hat H).
\]
Using ‖Φv‖ ≥ √(λ_min(Φ^⊤Φ)) ‖v‖ on the left-hand side, and ‖Pv‖ ≤ ‖Φ^⊤v‖ / √(λ_min(Φ^⊤Φ)) on the right-hand side,
\[
\big\|(I - L \otimes L)^\top (h - \hat h)\big\| \le \frac{\big\|\Phi^\top (\Phi_+ - \bar\Phi_+)\hat h\big\|}{\lambda_{\min}(\Phi^\top \Phi)}. \tag{5}
\]
Lemma 4.4 of Tu and Recht (2017) shows that for a sufficiently long trajectory, λ_min(Φ^⊤Φ) = O(τ λ_min²(Σ_π)). Lemma 4.8 of Tu and Recht (2017) (adapted to the average-cost setting with noise covariance W) shows that for a sufficiently long trajectory, with probability at least 1 − δ,
\[
\big\|\Phi^\top (\Phi_+ - \bar\Phi_+)\hat h\big\| \le O\Big( \sqrt{\tau}\, \mathrm{tr}(\Sigma_V)\, \|W \hat H\|_F\, \|\Sigma_V^{1/2}\|\, \mathrm{polylog}(\tau, 1/\delta) \Big). \tag{6}
\]
Here, Σ_V = Σ_0 + Σ_π is a simple upper bound on the state covariances Σ_t, obtained as follows:
\[
\Sigma_t = L \Sigma_{t-1} L^\top + W
 = L^t \Sigma_0 (L^t)^\top + \sum_{k=0}^{t-1} L^k W (L^k)^\top
 \preceq \Sigma_0 + \sum_{k=0}^{\infty} L^k W (L^k)^\top = \Sigma_0 + \Sigma_\pi .
\]
In Appendix A.1, we show that ‖(I − L ⊗ L)^⊤(h − ĥ)‖ is lower-bounded by λ_min(M)² ‖H‖^{−2} ‖H − Ĥ‖_F. The result follows from applying these bounds to (5) and rearranging terms.

While the error bound depends on Ĥ, for large τ we have
\[
\|\hat H\|_F - \|H\|_F \le \|H - \hat H\|_F = c\, \|\hat H\|_F
\]
for some c < 1. Therefore ‖Ĥ‖_F ≤ (1 − c)^{−1} ‖H‖_F.
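A compact sketch of the estimators (2) and (3) follows (ours, for illustration; the function names are not from the paper). The projection onto the constraint set {Ĥ ⪰ M} is realized here by clipping the eigenvalues of the symmetrized Ĥ − M, one way to compute the orthogonal projection mentioned above.

    import numpy as np

    def phi(x):
        return np.outer(x, x).ravel()

    def project_above(H_hat, M):
        """Frobenius projection of (the symmetric part of) H_hat onto {H : H >= M}."""
        S = (H_hat + H_hat.T) / 2 - M
        w, V = np.linalg.eigh(S)
        return M + (V * np.clip(w, 0.0, None)) @ V.T

    def lstd_value_matrix(xs, costs, M, W=None):
        """LSTD estimate of the value matrix H from one trajectory, Eqs. (2)/(3).

        xs    -- states x_1, ..., x_{tau+1} (array of shape (tau+1, n))
        costs -- costs c_1, ..., c_tau along the trajectory
        W     -- noise covariance if known (Eq. (2)); if None, the empirical
                 average cost is subtracted instead (Eq. (3)).
        """
        Phi = np.stack([phi(x) for x in xs[:-1]])
        Phi_plus = np.stack([phi(x) for x in xs[1:]])
        c = np.asarray(costs, dtype=float)
        if W is not None:
            lhs = Phi.T @ (Phi - Phi_plus + np.tile(W.ravel(), (len(c), 1)))
            rhs = Phi.T @ c
        else:
            lhs = Phi.T @ (Phi - Phi_plus)
            rhs = Phi.T @ (c - c.mean())
        h = np.linalg.pinv(lhs) @ rhs
        n = xs.shape[1]
        return project_above(h.reshape(n, n), M)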
4.2    State-action value function

Let z^⊤ = (x^⊤, a^⊤), and let ψ = vec(zz^⊤). The state-action value function corresponds to the cost of deviating from the policy, and satisfies
\[
\begin{aligned}
Q(x, a) &= z^\top G z = c(x, a) + \mathbb{E}\big(x_+^\top H x_+ \mid z\big) - \mathrm{tr}(H W), \\
\psi^\top \mathrm{vec}(G) &= c(x, a) + \big(\mathbb{E}(\phi_+ \mid z) - \mathrm{vec}(W)\big)^\top \mathrm{vec}(H).
\end{aligned}
\]
We estimate G based on the above equation, using the value function estimate Ĥ of the previous section in place of H and randomly sampled actions. Let Ψ be a τ × (n + d)² matrix whose rows are the vectors ψ_1, ..., ψ_τ, and let c = [c_1, ..., c_τ]^⊤ be the vector of corresponding costs. Let Φ_+ be the τ × n² matrix containing the next-state features after each random action, and let Φ̄_+ be its expectation. We estimate G as follows:
\[
\mathrm{vec}(\hat G) = (\Psi^\top \Psi)^{-1}\, \Psi^\top \big(c + (\Phi_+ - \Phi_W)\hat h\big), \tag{7}
\]
and additionally project the estimate onto the constraint Ĝ ⪰ diag(M, N), the block-diagonal cost matrix.

To gather appropriate data, we iteratively (1) execute the policy π for T_s iterations in order to get sufficiently close to the steady-state distribution,² (2) sample a random action a ∼ N(0, Σ_a), (3) observe the cost c and next state x_+, and (4) add the tuple (x, a, x_+) to our dataset Z. We collect τ = 0.5 T^{1/2+ξ} such tuples in each phase of MFLQ V2, and τ = T^{2/3+ξ} such tuples in the first phase of MFLQ V1.

² Note that stable linear systems mix exponentially fast; see Tu and Recht (2017) for details.

Lemma 4.2. Let Σ_{G,π} = A Σ_π A^⊤ + B Σ_a B^⊤. With probability at least 1 − δ_1, we have
\[
\|G - \hat G\|_F = O\Big( \big( \mathrm{tr}(\Sigma_{G,\pi})\, \|H - \hat H\| + \tau^{-1/2}\, \|W \hat H\|_F\, \mathrm{tr}(\Sigma_\pi + \Sigma_a)\, \|\Sigma_{G,\pi}\|_F \big) \times \mathrm{polylog}(n^2, \delta_1^{-1}, \tau) \Big). \tag{8}
\]
The proof is given in Appendix A.2 and is similar to that of Lemma 4.1. One difference is that we now require a lower bound on λ_min(Ψ^⊤Ψ), where Ψ is a function of both states and actions. Since actions may lie in a subspace of A when following a linear policy, we estimate G using only the exploratory dataset Z. Another difference is that we rely on Ĥ, so the error includes a ‖H − Ĥ‖ term.

Let Σ_max be an upper bound on the state covariance matrices in the p.s.d. sense, and let C_H be an upper bound on ‖H_i‖ (see Appendix C.3.1 and Lemma 5.1 for concrete values of these bounds). By a union bound, with probability at least 1 − S(δ + δ_1), the estimation error in any phase i of MFLQ with τ_v steps of value estimation and τ_q random actions is bounded as
\[
\|G_i - \hat G_i\|_F \le \varepsilon_1 := C_h\, \tau_v^{-1/2}\, \mathrm{tr}(\Sigma_{G,\max})\, \frac{C_H^3\, \|W\|_F\, \mathrm{tr}(\Sigma_{\max})}{\lambda_{\min}(M)}\, \|\Sigma_{\max}^{1/2}\|
 \; + \; C_g\, \tau_q^{-1/2}\, C_H\, \|W\|_F\, \big(\mathrm{tr}(\Sigma_{\max}) + \mathrm{tr}(\Sigma_a)\big)\, \|\Sigma_{G,\max}\|_F \tag{9}
\]
for appropriate constants C_h and C_g.
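A sketch of the estimator (7) is given below (ours, for illustration): it is fed the exploratory tuples of Z together with the Ĥ estimate of Section 4.1, and the projection onto Ĝ ⪰ diag(M, N) is again realized by eigenvalue clipping.

    import numpy as np

    def estimate_Q_matrix(Z, costs, H_hat, M, N, W):
        """Estimate the state-action value matrix G from exploratory data, Eq. (7).

        Z     -- list of tuples (x, a, x_plus) gathered by COLLECTDATA
        costs -- observed costs c(x, a), one per tuple
        H_hat -- value matrix estimate from Section 4.1
        """
        n, d = M.shape[0], N.shape[0]
        Psi = np.stack([np.outer(np.concatenate([x, a]),
                                 np.concatenate([x, a])).ravel() for (x, a, _) in Z])
        Phi_plus = np.stack([np.outer(xp, xp).ravel() for (_, _, xp) in Z])
        # Regression target: c + (phi(x_+) - vec(W))^T vec(H_hat)
        target = np.asarray(costs, dtype=float) + (Phi_plus - W.ravel()) @ H_hat.ravel()
        # Psi^T Psi is assumed invertible thanks to the randomly sampled actions.
        g = np.linalg.solve(Psi.T @ Psi, Psi.T @ target)
        G_hat = g.reshape(n + d, n + d)
        # Project onto G >= diag(M, N) by clipping eigenvalues of the symmetric part.
        MN = np.block([[M, np.zeros((n, d))], [np.zeros((d, n)), N]])
        S = (G_hat + G_hat.T) / 2 - MN
        w, V = np.linalg.eigh(S)
        return MN + (V * np.clip(w, 0.0, None)) @ V.T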
5    ANALYSIS OF MFLQ

In this section, we first show that given sufficiently small estimation errors, all policies produced by the MFLQ algorithm remain stable. Consequently, the value matrices, states, and actions remain bounded. We then bound the terms α_T, β_T, and γ_T to show the main result. For simplicity, we will assume that M ⪰ I and N ⪰ I for the rest of this section; we can always rescale M and N so that this holds true without loss of generality. We analyze MFLQ V2; the analysis of MFLQ V1 is similar and obtained by a different choice of constants.

By assumption, K_1 is bounded and stable. By the arguments in Section 4, the estimation error in the first phase can be made small for sufficiently long phases. In particular, we assume that the estimation error in each phase is bounded by ε_1 as in Equation (9) and that ε_1 satisfies
\[
\varepsilon_1 < \big(12\, C_1 (\sqrt{n} + C_K \sqrt{d})^2\, S\big)^{-1}. \tag{10}
\]
Here C_1 > 1 is an upper bound on ‖H_1‖, and C_K = 2(3 C_1 ‖B‖ ‖A‖ + 1). Since we take S² T^ξ random actions in the first phase, the error factor S^{−1} is valid as long as T > C̄^{1/ξ} for a constant C̄ that can be derived from (9) and (10). We prove the following lemma in Appendix B.1.

Lemma 5.1. Let {K_i}_{i=2}^S be the sequence of policies produced by the MFLQ algorithm. For all i ∈ [S], ‖H_i‖ ≤ C_i < C_H := 3C_1, K_i is stable, ‖K_i‖ ≤ C_K, and for all k ∈ ℕ,
\[
\|(A - B K_i)^k\| \le \sqrt{C_{i-1}}\, \big(1 - (6 C_1)^{-1}\big)^{k/2} \le \sqrt{C_H}\, \big(1 - (2 C_H)^{-1}\big)^{k/2}.
\]

To prove the lemma, we first show that the value matrices H_j and the closed-loop matrices L_j = A − B K_j satisfy
\[
x^\top L_{j+1}^\top H_j L_{j+1}\, x + \varepsilon_2 \le x^\top H_j x, \tag{11}
\]
where
\[
\varepsilon_2 = x^\top (M + K_{j+1}^\top N K_{j+1})\, x - \varepsilon_1\, \mathbf{1}^\top \big(|X_{K_j}| + |X_{K_{j+1}}|\big)\mathbf{1},
\qquad
X_K = \begin{pmatrix} I \\ -K \end{pmatrix} x x^\top \begin{pmatrix} I & -K^\top \end{pmatrix},
\]
and |X_K| is the matrix obtained from X_K by taking the absolute value of each entry. If the estimation error ε_1 is small enough so that ε_2 > 0 for any unit-norm x and all policies, then H_j ⪰ L_{j+1}^⊤ H_j L_{j+1} and K_{j+1} is stable by a Lyapunov theorem. Since K_1 is stable and H_1 is bounded, all policies remain stable. If the estimation errors are bounded as in (10), we can show that the policies and value matrices are bounded as in Lemma 5.1.
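Both stability criteria used in this argument are easy to test numerically. The following helpers are our illustration only (they require A and B, so they are diagnostic tools rather than part of the algorithm): the spectral-radius test ρ(A − BK) < 1, and the discrete Lyapunov condition behind (11).

    import numpy as np

    def is_stable(A, B, K, tol=1e-9):
        """pi(x) = -K x is stable iff the spectral radius of A - BK is below 1."""
        return np.max(np.abs(np.linalg.eigvals(A - B @ K))) < 1.0 - tol

    def lyapunov_certificate(H, A, B, K_next, tol=1e-9):
        """Check H - L^T H L >= 0 with L = A - B K_next (cf. Eq. (11)).

        If H is positive definite and the inequality holds strictly, the
        discrete Lyapunov theorem certifies that K_next is stable; here the
        non-strict version is tested up to a tolerance.
        """
        L = A - B @ K_next
        Delta = H - L.T @ H @ L
        return np.min(np.linalg.eigvalsh((Delta + Delta.T) / 2)) >= -tol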
Let E denote the event that all errors are bounded as in Equation (10). We bound the terms α_T, β_T, and γ_T for MFLQ V2 next.

Lemma 5.2. Under the event E, for appropriate constants C′, D′, and D″,
\[
\beta_T \le C' T^{3/4} \log(T/\delta), \qquad \alpha_T \le D' T^{3/4+\xi}, \qquad \gamma_T \le D'' T^{1/2}.
\]

The proof is given in Appendix C. For β_T, we rely on the decomposition shown in Equation (1). Because we execute S policies, each for τ = T/S rounds (where S = T^{1/3−ξ} for MFLQ V1 and S = T^{1/4} for MFLQ V2),
\[
\begin{aligned}
\beta_T &= \sum_{i=1}^{S} \tau\, \mathbb{E}\big(Q_i(x, \pi_i(x)) - Q_i(x, \pi(x))\big) \\
&= \tau \sum_{i=1}^{S} \Big( \mathbb{E}\big(\hat Q_i(x, \pi_i(x)) - \hat Q_i(x, \pi(x))\big)
 + \mathbb{E}\big(Q_i(x, \pi_i(x)) - \hat Q_i(x, \pi_i(x))\big)
 + \mathbb{E}\big(\hat Q_i(x, \pi(x)) - Q_i(x, \pi(x))\big) \Big),
\end{aligned}
\]
where the expectations are with respect to x ∼ μ_π. We bound the first term using the FTL regret bound of Cesa-Bianchi and Lugosi (2006) (Theorem 3.1), by showing that the theorem conditions hold for the loss function f_i(K) = E_{x∼μ_π}(Q̂_i(x, −Kx)). We bound the second and third terms (corresponding to estimation errors) using Lemma 4.2. This results in the following bound on β_T for constants C′ and C″:
\[
\beta_T \le (T/S)\, C' (1 + \log S) + C'' \sqrt{S T} \log T .
\]
To bound γ_T = Σ_{t=1}^T (λ_π − c(x_t, π(x_t))), we first decompose the cost terms as follows. Let Σ_π be the steady-state covariance, and let Σ_t be the covariance of x_t. Let D_t = Σ_t^{1/2}(M + K^⊤ N K)Σ_t^{1/2} and λ_t = tr(D_t). We have
\[
\begin{aligned}
\lambda_\pi - c(x_t, \pi(x_t)) &= \lambda_\pi - \lambda_t + \lambda_t - c(x_t^\pi, \pi(x_t^\pi)) \\
&= \mathrm{tr}\big((\Sigma_\pi - \Sigma_t)(M + K^\top N K)\big) + \mathrm{tr}(D_t) - u_t^\top D_t u_t,
\end{aligned}
\]
where u_t ∼ N(0, I_n). We show that the second term Σ_{t=1}^T (tr(D_t) − u_t^⊤ D_t u_t) scales as √T with high probability using the Hanson-Wright inequality. The first term can be bounded by tr(H_π) tr(Σ_π) as follows. Note that
\[
\Sigma_\pi - \Sigma_t = L(\Sigma_\pi - \Sigma_{t-1})L^\top = L^t (\Sigma_\pi - \Sigma_0)(L^t)^\top .
\]
Hence we have
\[
\begin{aligned}
\sum_{t=0}^{T} \mathrm{tr}\big((M + K^\top N K)(\Sigma_\pi - \Sigma_t)\big)
&= \sum_{t=0}^{T} \mathrm{tr}\big((M + K^\top N K)\, L^t (\Sigma_\pi - \Sigma_0)(L^t)^\top\big) \\
&\le \mathrm{tr}(\Sigma_\pi - \Sigma_0)\, \mathrm{tr}\Big( \sum_{t=0}^{\infty} (L^t)^\top (M + K^\top N K)\, L^t \Big)
 = \mathrm{tr}(\Sigma_\pi - \Sigma_0)\, \mathrm{tr}(H_\pi).
\end{aligned} \tag{12}
\]
The bound on α_T = Σ_{t=1}^T (c(x_t, a_t) − λ_{π(t)}) is similar; however, in addition to bounding the cost of following a policy, we need to account for the cost of random actions, and for the changes in state covariance due to random actions and policy switches.

Theorem 3.2 is a consequence of Lemma 5.2. The proof of Theorem 3.1 is similar and is obtained by a different choice of constants.

6    EXPERIMENTS

We evaluate our algorithm on two LQ problem instances: (1) the system studied in Dean et al. (2017) and Tu and Recht (2017), and (2) the power system studied in Lewis et al. (2012), Example 11.5-1, with noise W = I. We start all experiments from an all-zero initial state x_0 = 0, and set the initial stable policy K_1 to the optimal controller for a system with a modified cost M′ = 200M. For simplicity we set ξ = 0 and T_s = 10 for MFLQ V1. We set the exploration covariance to Σ_a = I for (1) and Σ_a = 10I for (2).

In addition to the described algorithms, we also evaluate MFLQ V3, an algorithm identical to MFLQ V2 except that the generated datasets Z include all data, not just random actions. We compare MFLQ to the following:

• Least squares policy iteration (LSPI), where the policy π_i in phase i is greedy with respect to the most recent value function estimate Q̂_{i−1}. We use the same estimation procedure as for MFLQ.

• A version of RLSVI (Osband et al., 2017) where we randomize the value function parameters rather than taking random actions. In particular, we update the mean μ_Q and covariance Σ_Q of a TD estimate of G after each step, and switch to a policy greedy w.r.t. a parameter sample Ĝ ∼ N(μ_Q, 0.2 Σ_Q) every T^{1/2} steps. We project the sample onto the constraint G ⪰ diag(M, N).

• A model-based approach which estimates the dynamics parameters (Â, B̂) using ordinary least squares (a schematic sketch is given after this list). The policy at the end of each phase is produced by treating the estimate as the true parameters (this approach is called certainty equivalence in optimal control). We use the same strategy as in the model-free case, i.e. we execute the policy for some number of iterations, followed by running the policy and taking random actions.
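For concreteness, a sketch of the certainty-equivalence baseline from the last bullet (our illustration, not the authors' implementation): ordinary least squares on the observed transitions, followed by solving the discrete algebraic Riccati equation for the estimated system.

    import numpy as np
    from scipy.linalg import solve_discrete_are

    def fit_dynamics_ols(transitions):
        """OLS estimate of (A, B) from triples (x, a, x_next)."""
        X = np.stack([np.concatenate([x, a]) for (x, a, _) in transitions])
        Y = np.stack([x_next for (_, _, x_next) in transitions])
        Theta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # Theta = [A B]^T
        n = transitions[0][0].shape[0]
        AB = Theta.T
        return AB[:, :n], AB[:, n:]

    def certainty_equivalent_gain(A_hat, B_hat, M, N):
        """Treat the estimates as the truth and solve the Riccati equation."""
        P = solve_discrete_are(A_hat, B_hat, M, N)
        # Optimal gain for x_{t+1} = A x + B a with cost x^T M x + a^T N a:
        return np.linalg.solve(N + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)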
Model-Free Linear Quadratic Control via Reduction to Expert Prediction

Figure 1: Top row: experimental evaluation on the dynamics of Dean et al. (2017). Bottom row: experimental evaluation
on Lewis et al. (2012), Example 11.5.1.
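Here a policy $u = Kx$ is counted as stable when the closed-loop matrix $A + BK$ has spectral radius below one. A minimal version of such a check, assuming the evaluator has access to the true $(A, B)$ (a sketch, not necessarily the exact test used to produce Figure 1):

```python
import numpy as np

def is_stabilizing(A, B, K):
    """True iff the closed-loop matrix A + B K has spectral radius strictly below 1,
    i.e. the policy u = K x keeps the state process bounded in expectation."""
    return np.max(np.abs(np.linalg.eigvals(A + B @ K))) < 1.0

# Fraction of independent runs whose gains are stabilizing in all phases:
# np.mean([all(is_stabilizing(A, B, K) for K in run_gains) for run_gains in runs])
```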

Figure 1 (left) shows the results as a function of trajectory length. MFLQ V3 is the most stable among the model-free algorithms, with performance comparable to the model-based approach.

We evaluate solution cost by running each algorithm until we obtain 100 stable trajectories (if possible), where each trajectory is of length 50,000. We compute both the average cost incurred during each phase $i$ and the true expected cost of each policy $\pi_i$. The average cost at the end of each phase is shown in Figure 1 (center and right). Overall, MFLQ V2 and MFLQ V3 achieve lower costs than MFLQ V1, and the performance of MFLQ V1 and LSPI is comparable. The lowest cost is achieved by the model-based approach. These results are consistent with the empirical findings of Tu and Recht (2017), where model-based approaches outperform discounted LSTDQ.

7   DISCUSSION

The simple formulation and wide practical applicability of LQ control make it an idealized benchmark for studying RL algorithms for continuous-valued states and actions. In this work, we have presented MFLQ, an algorithm for model-free control of LQ systems with an $O(T^{2/3+\xi})$ regret bound. Empirically, MFLQ considerably improves the performance of standard policy iteration in terms of both solution stability and cost, although it is still not cost-competitive with model-based methods.

Our algorithm is based on a reduction of control of MDPs to an expert prediction problem. In the case of LQ control, the problem structure allows for an efficient implementation and strong theoretical guarantees for a policy iteration algorithm with exploration similar to ε-greedy (but performed at a fixed schedule). While ε-greedy is known to be suboptimal in unstructured multi-armed bandit problems (Langford and Zhang, 2007), it has been shown to achieve near-optimal performance in problems with special structure (Abbasi-Yadkori, 2009, Rusmevichientong and Tsitsiklis, 2010, Bastani and Bayati, 2015), and it is worth considering whether it applies to other structured control problems. However, the same approach might not generalize to other domains. For example, Boltzmann exploration may be more appropriate for MDPs with finite states and actions. We leave this issue, as well as the application of ε-greedy exploration to other structured control problems, to future work.
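For completeness, the true expected cost of a linear policy reported above can be computed in closed form rather than by simulation: the stationary state covariance solves a discrete Lyapunov equation, and the average cost is the trace identity used in the analysis. The sketch below is illustrative only; it assumes i.i.d. process noise with covariance $W$ and a stable closed loop (the noise notation and helper name are assumptions, not restated from the paper).

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def true_average_cost(A, B, K, M, N, W):
    """Steady-state average cost of the linear policy u = K x, assuming A + B K is
    stable and i.i.d. process noise with covariance W: the stationary covariance
    solves Sigma = (A + B K) Sigma (A + B K)^T + W, and the average cost is
    tr(Sigma (M + K^T N K))."""
    L = A + B @ K
    Sigma = solve_discrete_lyapunov(L, W)
    return float(np.trace(Sigma @ (M + K.T @ N @ K)))
```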

References

Y. Abbasi-Yadkori, P. Bartlett, and V. Kanade. Tracking adversarial targets. In International Conference on Machine Learning (ICML), 2014.
Yasin Abbasi-Yadkori. Forced-exploration based algorithms for playing in bandits with large action sets. Master's thesis, University of Alberta, 2009.
Yasin Abbasi-Yadkori and Csaba Szepesvári. Regret bounds for the adaptive control of linear quadratic systems. In COLT, 2011.
Yasin Abbasi-Yadkori and Csaba Szepesvári. Bayesian optimal control of smoothly parameterized systems. In Uncertainty in Artificial Intelligence (UAI), pages 1–11, 2015.
Marc Abeille and Alessandro Lazaric. Thompson sampling for linear-quadratic control problems. In AISTATS, 2017.
András Antos, Csaba Szepesvári, and Rémi Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 71(1):89–129, 2008.
Sanjeev Arora, Elad Hazan, Holden Lee, Karan Singh, Cyril Zhang, and Yi Zhang. Towards provable control for unknown linear dynamical systems. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=BygpQlbA-.
Hamsa Bastani and Mohsen Bayati. Online decision-making with high-dimensional covariates. SSRN, 2015.
Dimitri P. Bertsekas. Dynamic programming and optimal control, volume 1. Athena Scientific, Belmont, MA, 1995.
S. Bittanti and M. C. Campi. Adaptive control of linear time invariant systems: The bet on the best principle. Communications in Information and Systems, 6(4):299–320, 2006.
George E. P. Box, Gwilym M. Jenkins, Gregory C. Reinsel, and Greta M. Ljung. Time series analysis: forecasting and control. John Wiley & Sons, 2015.
Steven J. Bradtke, B. Erik Ydstie, and Andrew G. Barto. Adaptive linear quadratic control using policy iteration. In American Control Conference, 1994, volume 3, pages 3475–3479. IEEE, 1994.
M. C. Campi and P. R. Kumar. Adaptive linear quadratic Gaussian control: the cost-biased approach revisited. SIAM Journal on Control and Optimization, 36(6):1890–1907, 1998.
Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
H. Chen and L. Guo. Optimal adaptive control and consistent parameter estimates for ARMAX model with quadratic cost. SIAM Journal on Control and Optimization, 25(4):845–867, 1987.
Jie Chen and Guoxiang Gu. Control-oriented system identification: an H∞ approach, volume 19. Wiley-Interscience, 2000.
Sarah Dean, Horia Mania, Nikolai Matni, Benjamin Recht, and Stephen Tu. On the sample complexity of the linear quadratic regulator. arXiv preprint arXiv:1710.01688, 2017.
Sarah Dean, Horia Mania, Nikolai Matni, Benjamin Recht, and Stephen Tu. Regret bounds for robust adaptive control of the linear quadratic regulator. arXiv preprint arXiv:1805.09388, 2018.
Eyal Even-Dar, Sham M. Kakade, and Yishay Mansour. Online Markov decision processes. Mathematics of Operations Research, 34(3):726–736, 2009.
Amir-massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesvári, and Shie Mannor. Regularized policy iteration with nonparametric function spaces. The Journal of Machine Learning Research, 17(1):4809–4874, 2016.
Maryam Fazel, Rong Ge, Sham M. Kakade, and Mehran Mesbahi. Global convergence of policy gradient methods for linearized control problems. arXiv preprint arXiv:1801.05039, 2018.
C. Fiechter. PAC adaptive control of linear systems. In COLT, 1997.
Moritz Hardt, Tengyu Ma, and Benjamin Recht. Gradient descent learns linear dynamical systems. arXiv preprint arXiv:1609.05191, 2016.
Elad Hazan. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4):157–325, 2016.
Arthur J. Helmicki, Clas A. Jacobson, and Carl N. Nett. Control oriented system identification: a worst-case/deterministic approach in H∞. IEEE Transactions on Automatic Control, 36(10):1163–1176, 1991.
Morteza Ibrahimi, Adel Javanmard, and Benjamin Van Roy. Efficient reinforcement learning for high dimensional linear quadratic systems. In Advances in Neural Information Processing Systems 25, pages 2636–2644. Curran Associates, Inc., 2012.
T. L. Lai and C. Z. Wei. Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems. The Annals of Statistics, 10(1):154–166, 1982.
T. L. Lai and C. Z. Wei. Asymptotically efficient self-tuning regulators. SIAM Journal on Control and Optimization, 25:466–481, 1987.
John Langford and Tong Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In Advances in Neural Information Processing Systems (NIPS), 2007.
Alessandro Lazaric, Mohammad Ghavamzadeh, and Rémi Munos. Finite-sample analysis of least-squares policy iteration. Journal of Machine Learning Research, 13(Oct):3041–3074, 2012.
Frank L. Lewis, Draguna Vrabie, and Vassilis L. Syrmos. Optimal control. John Wiley & Sons, 2012.
Bo Liu, Sridhar Mahadevan, and Ji Liu. Regularized off-policy TD-learning. In Advances in Neural Information Processing Systems, pages 836–844, 2012.
Bo Liu, Ji Liu, Mohammad Ghavamzadeh, Sridhar Mahadevan, and Marek Petrik. Finite-sample analysis of proximal gradient TD algorithms. In UAI, pages 504–513. Citeseer, 2015.
Lennart Ljung and Torsten Söderström. Theory and practice of recursive identification, volume 5. JSTOR, 1983.
G. Neu and V. Gómez. Fast rates for online learning in linearly solvable Markov decision processes. In 30th Annual Conference on Learning Theory (COLT), 2017.
G. Neu, A. György, Cs. Szepesvári, and A. Antos. Online Markov decision processes under bandit feedback. IEEE Transactions on Automatic Control, 59:676–691, 2014.
Ian Osband, Daniel J. Russo, Zheng Wen, and Benjamin Van Roy. Deep exploration via randomized value functions. arXiv preprint arXiv:1703.07608, 2017.
Yi Ouyang, Mukul Gagrani, and Rahul Jain. Learning-based control of unknown linear systems with Thompson sampling. arXiv preprint arXiv:1709.04047, 2017.
Paat Rusmevichientong and John N. Tsitsiklis. Linearly parameterized bandits. Mathematics of Operations Research, 35(2):395–411, 2010.
Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012.
A. L. Strehl, L. Li, E. Wiewiora, J. Langford, and M. L. Littman. PAC model-free reinforcement learning. In ICML, 2006.
István Szita. Rewarding Excursions: Extending Reinforcement Learning to Complex Domains. PhD thesis, Eötvös Loránd University, 2007.
John N. Tsitsiklis and Benjamin Van Roy. Average cost temporal-difference learning. Automatica, 35:1799–1808, 1999.
John N. Tsitsiklis and Benjamin Van Roy. Analysis of temporal-difference learning with function approximation. In Advances in Neural Information Processing Systems, pages 1075–1081, 1997.
S. Tu, R. Boczar, A. Packard, and B. Recht. Non-asymptotic analysis of robust control from coarse-grained identification. arXiv preprint, 2017.
Stephen Tu and Benjamin Recht. Least-squares temporal difference learning for the linear quadratic regulator. arXiv preprint arXiv:1712.08642, 2017.
Huizhen Yu and Dimitri P. Bertsekas. Convergence results for some temporal difference methods based on least squares. IEEE Transactions on Automatic Control, 54(7):1515–1531, 2009.
Fuzhen Zhang and Qingling Zhang. Eigenvalue inequalities for matrix product. IEEE Transactions on Automatic Control, 51(9):1506–1509, 2006.