Unsolved Problems in Mathematical Systems and Control Theory
Edited by Vincent D. Blondel and Alexandre Megretski
PRINCETON UNIVERSITY PRESS, PRINCETON AND OXFORD

Copyright © 2004 by Princeton University Press
Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540, USA
In the United Kingdom: Princeton University Press, 3 Market Place, Woodstock, Oxfordshire OX20 1SY, UK
All rights reserved

Library of Congress Cataloging-in-Publication Data
Unsolved problems in mathematical systems and control theory / edited by Vincent D. Blondel, Alexandre Megretski.
p. cm.
Includes bibliographical references.
ISBN 0-691-11748-9 (cl : alk. paper)
1. System analysis. 2. Control theory. I. Blondel, Vincent. II. Megretski, Alexandre.
QA402.U535 2004 003—dc22 2003064802

The publisher would like to acknowledge the editors of this volume for providing the camera-ready copy from which this book was printed.
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1

I have yet to see any problem, however complicated, which, when you looked at it in the right way, did not become still more complicated.
— Poul Anderson

Contents

Preface
Associate Editors
Website

PART 1. LINEAR SYSTEMS

Problem 1.1. Stability and composition of transfer functions
Guillermo Fernández-Anaya, Juan Carlos Martínez-García

Problem 1.2. The realization problem for Herglotz-Nevanlinna functions
Seppo Hassi, Henk de Snoo, Eduard Tsekanovskiĭ

Problem 1.3. Does any analytic contractive operator function on the polydisk have a dissipative scattering nD realization?
Dmitry S. Kalyuzhniy-Verbovetzky

Problem 1.4. Partial disturbance decoupling with stability
Juan Carlos Martínez-García, Michel Malabre, Vladimir Kučera

Problem 1.5. Is Monopoli's model reference adaptive controller correct?
A. S. Morse

Problem 1.6. Model reduction of delay systems
Jonathan R. Partington

Problem 1.7. Schur extremal problems
Lev Sakhnovich

Problem 1.8.
The elusive iff test for time-controllability of behaviors
Amol J. Sasane
Problem 1.9. A Farkas lemma for behavioral inequalities
A. A. (Tonny) ten Dam, J. W. (Hans) Nieuwenhuis

Problem 1.10. Regular feedback implementability of linear differential behaviors
H. L. Trentelman

Problem 1.11. Riccati stability
Erik I. Verriest

Problem 1.12. State and first order representations
Jan C. Willems

Problem 1.13. Projection of state space realizations
Antoine Vandendorpe, Paul Van Dooren

PART 2. STOCHASTIC SYSTEMS

Problem 2.1. On error of estimation and minimum of cost for wide band noise driven systems
Agamirza E. Bashirov

Problem 2.2. On the stability of random matrices
Giuseppe C. Calafiore, Fabrizio Dabbene

Problem 2.3. Aspects of Fisher geometry for stochastic linear systems
Bernard Hanzon, Ralf Peeters

Problem 2.4. On the convergence of normal forms for analytic control systems
Wei Kang, Arthur J. Krener

PART 3. NONLINEAR SYSTEMS

Problem 3.1. Minimum time control of the Kepler equation
Jean-Baptiste Caillau, Joseph Gergaud, Joseph Noailles

Problem 3.2. Linearization of linearly controllable systems
R. Devanathan

Problem 3.3. Bases for Lie algebras and a continuous CBH formula
Matthias Kawski
Problem 3.4. An extended gradient conjecture
Luis Carlos Martins Jr., Geraldo Nunes Silva

Problem 3.5. Optimal transaction costs from a Stackelberg perspective
Geert Jan Olsder

Problem 3.6. Does cheap control solve a singular nonlinear quadratic problem?
Yuri V. Orlov

Problem 3.7. Delta-Sigma modulator synthesis
Anders Rantzer

Problem 3.8. Determining of various asymptotics of solutions of nonlinear time-optimal problems via right ideals in the moment algebra
G. M. Sklyar, S. Yu. Ignatovich

Problem 3.9. Dynamics of principal and minor component flows
U. Helmke, S. Yoshizawa, R. Evans, J. H. Manton, and I. M. Y. Mareels

PART 4. DISCRETE EVENT, HYBRID SYSTEMS

Problem 4.1. L2-induced gains of switched linear systems
João P. Hespanha

Problem 4.2. The state partitioning problem of quantized systems
Jan Lunze

Problem 4.3. Feedback control in flowshops
S. P. Sethi and Q. Zhang

Problem 4.4. Decentralized control with communication between controllers
Jan H. van Schuppen

PART 5. DISTRIBUTED PARAMETER SYSTEMS

Problem 5.1. Infinite dimensional backstepping for nonlinear parabolic PDEs
Andras Balogh, Miroslav Krstic

Problem 5.2. The dynamical Lame system with boundary control: on the structure of reachable sets
M. I. Belishev

Problem 5.3. Null-controllability of the heat equation in unbounded domains
Sorin Micu, Enrique Zuazua

Problem 5.4. Is the conservative wave equation regular?
George Weiss

Problem 5.5. Exact controllability of the semilinear wave equation
Xu Zhang, Enrique Zuazua

Problem 5.6. Some control problems in electromagnetics and fluid dynamics
Lorella Fatone, Maria Cristina Recchioni, Francesco Zirilli

PART 6. STABILITY, STABILIZATION

Problem 6.1. Copositive Lyapunov functions
M. K. Çamlıbel, J. M. Schumacher

Problem 6.2. The strong stabilization problem for linear time-varying systems
Avraham Feintuch

Problem 6.3.
Robustness of transient behavior
Diederich Hinrichsen, Elmar Plischke, Fabian Wirth

Problem 6.4. Lie algebras and stability of switched nonlinear systems
Daniel Liberzon

Problem 6.5. Robust stability test for interval fractional order linear systems
Ivo Petráš, YangQuan Chen, Blas M. Vinagre

Problem 6.6. Delay-independent and delay-dependent Aizerman problem
Vladimir Răsvan

Problem 6.7. Open problems in control of linear discrete multidimensional systems
Li Xu, Zhiping Lin, JiangQian Ying, Osami Saito, Yoshihisa Anazawa

Problem 6.8. An open problem in adaptive nonlinear control theory
Leonid S. Zhiteckij

Problem 6.9. Generalized Lyapunov theory and its omega-transformable regions
Sheng-Guo Wang

Problem 6.10. Smooth Lyapunov characterization of measurement to error stability
Brian P. Ingalls, Eduardo D. Sontag

PART 7. CONTROLLABILITY, OBSERVABILITY

Problem 7.1. Time for local controllability of a 1-D tank containing a fluid modeled by the shallow water equations
Jean-Michel Coron

Problem 7.2. A Hautus test for infinite-dimensional systems
Birgit Jacob, Hans Zwart

Problem 7.3. Three problems in the field of observability
Philippe Jouan

Problem 7.4. Control of the KdV equation
Lionel Rosier

PART 8. ROBUSTNESS, ROBUST CONTROL

Problem 8.1. H∞-norm approximation
A. C. Antoulas, A. Astolfi

Problem 8.2. Noniterative computation of optimal value in H∞ control
Ben M. Chen

Problem 8.3. Determining the least upper bound on the achievable delay margin
Daniel E. Davison, Daniel E. Miller

Problem 8.4. Stable controller coefficient perturbation in floating point implementation
Jun Wu, Sheng Chen

PART 9. IDENTIFICATION, SIGNAL PROCESSING

Problem 9.1. A conjecture on Lyapunov equations and principal angles in subspace identification
Katrien De Cock, Bart De Moor

Problem 9.2.
Stability of a nonlinear adaptive system for filtering and parameter estimation
Masoud Karimi-Ghartemani, Alireza K. Ziarani

PART 10. ALGORITHMS, COMPUTATION

Problem 10.1. Root-clustering for multivariate polynomials and robust stability analysis
Pierre-Alexandre Bliman

Problem 10.2. When is a pair of matrices stable?
Vincent D. Blondel, Jacques Theys, John N. Tsitsiklis

Problem 10.3. Freeness of multiplicative matrix semigroups
Vincent D. Blondel, Julien Cassaigne, Juhani Karhumäki

Problem 10.4. Vector-valued quadratic forms in control theory
Francesco Bullo, Jorge Cortés, Andrew D. Lewis, Sonia Martínez

Problem 10.5. Nilpotent bases of distributions
Henry G. Hermes, Matthias Kawski

Problem 10.6. What is the characteristic polynomial of a signal flow graph?
Andrew D. Lewis

Problem 10.7. Open problems in randomized µ analysis
Onur Toker

Preface
Five years ago, a first volume of open problems in Mathematical Systems and Control Theory appeared.¹ Some of the 53 problems that were published in this volume attracted considerable attention in the research community. The book in front of you contains a new collection of 63 open problems. The contents of both volumes show the evolution of the field in the half decade since the publication of the first volume. One noticeable feature is the shift toward a wider class of questions and more emphasis on issues driven by physical modeling.

Early versions of some of the problems in this book were presented at the Open Problem sessions of the Oberwolfach Tagung on Regelungstheorie, on February 27, 2002, and of the Conference on Mathematical Theory of Networks and Systems (MTNS) in Notre Dame, Indiana, on August 12, 2002. The editors thank the organizers of these meetings for their willingness to give the problems this welcome exposure.

Since the appearance of the first volume, open problems have continued to meet with great interest in the mathematical community. Undoubtedly, the most spectacular event in this arena was the announcement by the Clay Mathematics Institute² of the Millennium Prize Problems, whose solution will be rewarded by one million U.S. dollars each. Modesty and modesty of means have prevented the editors of the present volume from offering similar rewards toward the solution of the problems in this book. However, we trust that, notwithstanding this absence of a financial incentive, the intellectual challenge will stimulate many readers to attack the problems.

The editors thank in the first place the researchers who have submitted the problems. We are also very thankful to Princeton University Press, and in particular Vickie Kearn, for their willingness to publish this volume.
The full text of the problems, together with comments, additions, and solutions, will be posted on the book website at Princeton University Press (link available from http://pup.princeton.edu/math/) and on http://www.inma.ucl.ac.be/∼blondel/op/. Readers are encouraged to submit contributions by following the instructions given on these websites.

The editors, Louvain-la-Neuve, March 15, 2003.

¹ Vincent D. Blondel, Eduardo D. Sontag, M. Vidyasagar, and Jan C. Willems, Open Problems in Mathematical Systems and Control Theory, Springer Verlag, 1998.
² See http://www.claymath.org.

Associate Editors
Roger Brockett, Harvard University, USA
Jean-Michel Coron, University of Paris (Orsay), France
Roland Hildebrand, University of Louvain (Louvain-la-Neuve), Belgium
Miroslav Krstic, University of California (San Diego), USA
Anders Rantzer, Lund Institute of Technology, Sweden
Joachim Rosenthal, University of Notre Dame, USA
Eduardo Sontag, Rutgers University, USA
M. Vidyasagar, Tata Consultancy Services, India
Jan Willems, University of Leuven, Belgium

Website
The full text of the problems presented in this book, together with comments, additions, and solutions, is freely available in electronic format from the book website at Princeton University Press, http://pup.princeton.edu/math/, and from an editor website, http://www.inma.ucl.ac.be/∼blondel/op/. Readers are encouraged to submit contributions by following the instructions given on these websites.

PART 1. LINEAR SYSTEMS

Problem 1.1
Stability and composition of transfer functions

G. Fernández-Anaya
Departamento de Ciencias Básicas
Universidad Iberoamericana
Lomas de Santa Fe, 01210 México D.F.
México
[email protected]

J. C. Martínez-García
Departamento de Control Automático
CINVESTAV-IPN
A.P. 14740, 07300 México D.F.
México
[email protected]

1 INTRODUCTION

As far as frequency-described continuous linear time-invariant systems are concerned, the control-oriented properties (such as stability) that result from substituting the complex Laplace variable s by a rational transfer function have been little studied by the Automatic Control community. However, some interesting results have recently been published. Concerning the study of the so-called uniform systems, i.e., LTI systems consisting of identical components and amplifiers, a general criterion for robust stability was established in [8] for rational functions of the form D(f(s)), where D(s) is a polynomial and f(s) is a rational transfer function. By applying this criterion, [8] obtained a generalization of the celebrated Kharitonov theorem [7], as well as some robust stability criteria under H∞ uncertainty. The results given in [8] are based on the so-called H-domains.¹

As far as robust stability of polynomial families is concerned, some Kharitonov-like results [7] are given in [9] (for a particular class of polynomials), when interpreting substitutions as nonlinearly correlated perturbations on the coefficients. More recently, in [1], some results for proper and stable real rational SISO functions and coprime factorizations were proved by making substitutions with α(s) = (as + b)/(cs + d), where a, b, c, and d are strictly positive real numbers with ad − bc ≠ 0. But these results are limited to bilinear transforms, which are very restrictive.

¹ The H-domain of a function f(s) is defined to be the set of points h in the complex plane for which the function f(s) − h has no zeros in the open right-half complex plane.
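The bilinear-substitution results from [1] are easy to probe numerically. The sketch below (Python with numpy; the helpers `ppow` and `compose` are ours, not from the cited papers) composes a stable G0(s) = 1/(s² + 2s + 2) with the bilinear H(s) = (2s + 1)/(s + 3), where a, b, c, d > 0 and ad − bc = 5 ≠ 0, and checks that the poles of G0(H(s)) remain in the open left half-plane. A single example proves nothing, of course; it only illustrates the kind of stability preservation under substitution discussed above.

```python
import numpy as np

def ppow(p, k):
    """k-th power of a polynomial given by coefficients, highest degree first."""
    r = np.array([1.0])
    for _ in range(k):
        r = np.polymul(r, p)
    return r

def compose(num0, den0, hnum, hden):
    """Numerator/denominator coefficients of G0(H(s)) for a proper G0 = num0/den0.

    num0 is zero-padded to the degree of den0 so that both P(H) and Q(H)
    share the common factor hden**deg(den0), which then cancels."""
    num0 = np.concatenate([np.zeros(len(den0) - len(num0)), num0])
    m = len(den0) - 1

    def subst(p):
        acc = np.zeros(1)
        for j, c in enumerate(p):  # coefficient c multiplies s**(m - j)
            acc = np.polyadd(acc, c * np.polymul(ppow(hnum, m - j), ppow(hden, j)))
        return acc

    return subst(num0), subst(den0)

# G0(s) = 1/(s^2 + 2s + 2): poles at -1 +/- i, hence stable
num0, den0 = [1.0], [1.0, 2.0, 2.0]
# Bilinear H(s) = (2s + 1)/(s + 3): a, b, c, d > 0 and ad - bc = 5 != 0
hnum, hden = [2.0, 1.0], [1.0, 3.0]

cnum, cden = compose(num0, den0, hnum, hden)
poles = np.roots(cden)
print("poles of G0(H(s)):", poles)
assert all(p.real < 0 for p in poles)  # composition stays stable here
```

By hand, the composed denominator is (2s+1)² + 2(2s+1)(s+3) + 2(s+3)² = 10s² + 30s + 25, with roots −1.5 ± 0.5i, agreeing with the computation.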
The preservation of properties linked to control problems (such as weighted nominal performance and robust stability) for Single-Input Single-Output systems, when the Laplace variable in the transfer functions associated to the control problems is substituted by a strictly positive real function of zero relative degree, is studied in [4]. Some results concerning the preservation of control-oriented properties in Multi-Input Multi-Output systems are given in [5], while [6] deals with the preservation of solvability conditions in algebraic Riccati equations linked to robust control problems. Following our interest in substitutions, we propose in section 2 three interesting problems. The motivations for the proposed problems are presented in section 3.

2 DESCRIPTION OF THE PROBLEMS

In this section we propose three closely related problems. The first one concerns the characterization of a transfer function as a composition of transfer functions. The second problem is a modified version of the first: the characterization of a transfer function as the result of substituting the Laplace variable in a transfer function by a strictly positive real transfer function of zero relative degree. The third problem is in fact a conjecture concerning the preservation of stability when each coefficient of a given stable polynomial is replaced by the value of a polynomial with nonnegative coefficients evaluated at that coefficient.

Problem 1: Let a Single-Input Single-Output (SISO) transfer function G(s) be given. Find transfer functions G0(s) and H(s) such that:
1. G(s) = G0(H(s));
2. H(s) preserves proper stable transfer functions under substitution of the variable s by H(s); and
3. the degree of the denominator of H(s) is the maximum possible with properties 1 and 2.

Problem 2: Let a SISO transfer function G(s) be given.
Find a transfer function G0(s) and a Strictly Positive Real transfer function of zero relative degree (SPR0), say H(s), such that:
1. G(s) = G0(H(s)); and
2. the degree of the denominator of H(s) is the maximum possible with property 1.

Problem 3 (Conjecture): Given any stable polynomial

    a_n s^n + a_{n-1} s^{n-1} + · · · + a_1 s + a_0

and given any polynomial q(s) with nonnegative coefficients, the polynomial

    q(a_n) s^n + q(a_{n-1}) s^{n-1} + · · · + q(a_1) s + q(a_0)

is stable (see [3]).

3 MOTIVATIONS

Consider the closed-loop control scheme:

    y(s) = G(s) u(s) + d(s),    u(s) = K(s) (r(s) − y(s)),

where G(s) denotes the SISO plant, K(s) denotes a stabilizing controller, u(s) denotes the control input, y(s) denotes the output, d(s) denotes the disturbance, and r(s) denotes the reference input. We shall denote the closed-loop transfer function from r(s) to y(s) as Fr(G(s), K(s)) and the closed-loop transfer function from d(s) to y(s) as Fd(G(s), K(s)).

• Consider the closed-loop system Fr(G(s), K(s)), and suppose that the plant G(s) results from a particular substitution of the Laplace variable s in a transfer function G0(s) by a transfer function H(s), i.e., G(s) = G0(H(s)). It has been proved that a controller K0(s) that stabilizes the closed-loop system Fr(G0(s), K0(s)) is such that K0(H(s)) stabilizes Fr(G(s), K0(H(s))) (see [2] and [8]). Thus, the simplification of procedures for the synthesis of stabilizing controllers (profiting from transfer function compositions) justifies problem 1.

• As far as problem 2 is concerned, consider the synthesis of a controller K(s) stabilizing the closed-loop transfer function Fd(G(s), K(s)) and such that ‖Fd(G(s), K(s))‖∞ < γ for a fixed given γ > 0. If we know that G(s) = G0(H(s)), with H(s) an SPR0 transfer function, the solution of problem 2 would give rise to the following procedure:
1. Find a controller K0(s) that stabilizes the closed-loop transfer function Fd(G0(s), K0(s)) and is such that ‖Fd(G0(s), K0(s))‖∞ < γ.
2. The composed controller K(s) = K0(H(s)) then stabilizes the closed-loop system Fd(G(s), K(s)) with ‖Fd(G(s), K(s))‖∞ < γ (see [2], [4], and [5]).

It is clear that condition 3 in the first problem, and condition 2 in the second problem, can be relaxed to the following condition: the degree of the denominator of H(s) is as high as possible with the appropriate conditions. With this new condition, the open problems are a bit less difficult.

• Finally, problem 3 can be interpreted in terms of robustness under positive polynomial perturbations of the coefficients of a stable transfer function.
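The conjecture in Problem 3 also invites numerical experimentation. The sketch below (Python with numpy; illustrative only, since a random search can at best fail to find a counterexample) draws random Hurwitz polynomials by placing roots in the open left half-plane, applies a fixed q with nonnegative coefficients to each coefficient, and counts how many of the transformed polynomials fail to be stable.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_stable_poly(deg):
    """Coefficients (highest degree first) of a monic polynomial whose
    roots all lie in Re s < 0 (a mix of negative reals and LHP conjugate pairs)."""
    roots = []
    while len(roots) < deg:
        if deg - len(roots) >= 2 and rng.random() < 0.5:
            r = complex(-rng.uniform(0.1, 3.0), rng.uniform(0.1, 3.0))
            roots += [r, r.conjugate()]  # complex pair in the left half-plane
        else:
            roots.append(-rng.uniform(0.1, 3.0))
    return np.poly(roots).real  # real, positive coefficients

def q(x):
    """A fixed polynomial with nonnegative coefficients, applied coefficient-wise."""
    return 2.0 * x**2 + x + 0.5

violations = 0
for _ in range(200):
    p = random_stable_poly(int(rng.integers(2, 6)))
    transformed = q(p)  # q(a_n) s^n + ... + q(a_0)
    if np.roots(transformed).real.max() >= 0:
        violations += 1
print("counterexamples found among 200 random trials:", violations)
```

We deliberately avoid claiming the printed count: no counterexample to the conjecture is known, but the problem is open, and this experiment is no substitute for a proof.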
BIBLIOGRAPHY

[1] G. Fernández, S. Muñoz, R. A. Sánchez, and W. W. Mayol, "Simultaneous stabilization using evolutionary strategies," Int. J. Contr., vol. 68, no. 6, pp. 1417-1435, 1997.
[2] G. Fernández, "Preservation of SPR functions and stabilization by substitutions in SISO plants," IEEE Transactions on Automatic Control, vol. 44, no. 11, pp. 2171-2174, 1999.
[3] G. Fernández and J. Alvarez, "On the preservation of stability in families of polynomials via substitutions," Int. J. of Robust and Nonlinear Control, vol. 10, no. 8, pp. 671-685, 2000.
[4] G. Fernández, J. C. Martínez-García, and V. Kučera, "H∞ robustness properties preservation in SISO systems when applying SPR substitutions," submitted to the International Journal of Automatic Control.
[5] G. Fernández and J. C. Martínez-García, "MIMO systems properties preservation under SPR substitutions," International Symposium on the Mathematical Theory of Networks and Systems (MTNS'2002), University of Notre Dame, USA, August 12-16, 2002.
[6] G. Fernández, J. C. Martínez-García, and D. Aguilar-George, "Preservation of solvability conditions in Riccati equations when applying SPR0 substitutions," submitted to IEEE Transactions on Automatic Control, 2002.
[7] V. L. Kharitonov, "Asymptotic stability of families of systems of linear differential equations," Differential'nye Uravneniya, vol. 14, pp. 2086-2088, 1978.
[8] B. T. Polyak and Ya. Z. Tsypkin, "Stability and robust stability of uniform systems," Automation and Remote Contr., vol. 57, pp. 1606-1617, 1996.
[9] L. Wang, "Robust stability of a class of polynomial families under nonlinearly correlated perturbations," Systems and Control Letters, vol. 30, pp. 25-30, 1997.

Problem 1.2
The realization problem for Herglotz-Nevanlinna functions

Seppo Hassi
Department of Mathematics and Statistics
University of Vaasa
P.O. Box 700, 65101 Vaasa
Finland
[email protected]

Henk de Snoo
Department of Mathematics
University of Groningen
P.O. Box 800, 9700 AV Groningen
The Netherlands
[email protected]

Eduard Tsekanovskiĭ
Department of Mathematics
Niagara University, NY 14109
USA
[email protected]

1 MOTIVATION AND HISTORY OF THE PROBLEM

Roughly speaking, realization theory concerns itself with identifying a given holomorphic function as the transfer function of a system or as its linear fractional transformation. Linear, conservative, time-invariant systems whose main operator is bounded have been investigated thoroughly. However, many realizations in different areas of mathematics, including system theory, electrical engineering, and scattering theory, involve unbounded main operators, and a complete theory is still lacking. The aim of the present proposal is to outline the necessary steps needed to obtain a general realization theory along the lines of M. S. Brodskiĭ and M. S. Livšic [8], [9], [16], who have considered systems with a bounded main operator.

An operator-valued function V(z) acting on a Hilbert space E belongs to the Herglotz-Nevanlinna class N if it is holomorphic outside R, symmetric, i.e., V(z)* = V(z̄), and satisfies (Im z)(Im V(z)) ≥ 0. Here and in the following it is assumed that the Hilbert space E is finite-dimensional. Each Herglotz-Nevanlinna function V(z) has an integral representation of the form

    V(z) = Q + Lz + ∫_R ( 1/(t − z) − t/(1 + t²) ) dΣ(t),    (1)

where Q = Q*, L ≥ 0, and Σ(t) is a nondecreasing matrix-function on R with ∫_R dΣ(t)/(t² + 1) < ∞. Conversely, each function of the form (1) belongs to the class N. Of special importance (cf. [15]) are the class S of Stieltjes functions

    V(z) = γ + ∫_0^∞ dΣ(t)/(t − z),    (2)

where γ ≥ 0 and ∫_0^∞ dΣ(t)/(t + 1) < ∞, and the class S⁻¹ of inverse Stieltjes functions

    V(z) = α + βz + ∫_0^∞ ( 1/(t − z) − 1/t ) dΣ(t),    (3)

where α ≤ 0, β ≥ 0, and ∫_0^∞ dΣ(t)/(t² + 1) < ∞.

2 SPECIAL REALIZATION PROBLEMS

One way to characterize Herglotz-Nevanlinna functions is to identify them as (linear fractional transformations of) transfer functions:

    V(z) = i[W(z) + I]⁻¹[W(z) − I]J,    (4)

where J = J* = J⁻¹ and W(z) is the transfer function of some generalized linear, stationary, conservative dynamical system (cf. [1], [3]). The approach based on the use of Brodskiĭ-Livšic operator colligations Θ leads to a simultaneous representation of the functions W(z) and V(z) in the form

    W_Θ(z) = I − 2iK*(T − zI)⁻¹KJ,    (5)
    V_Θ(z) = K*(T_R − zI)⁻¹K,    (6)

where T_R stands for the real part of T. The definitions and main results associated with Brodskiĭ-Livšic type operator colligations in the realization of Herglotz-Nevanlinna functions are as follows, cf. [8], [9], [16]. Let T ∈ [H], i.e., T is a bounded linear mapping in a Hilbert space H, and assume that Im T = (T − T*)/2i is represented as Im T = KJK*, where K ∈ [E, H], and J ∈ [E] is self-adjoint and unitary. Then the array

    Θ = ( T, K, J; H, E )    (7)

defines a Brodskiĭ-Livšic operator colligation, and the function W_Θ(z) given by (5) is the transfer function of Θ. In the case of the directing operator J = I the system (7) is called a scattering system, in which case the main operator T of the system Θ is dissipative: Im T ≥ 0. In system theory W_Θ(z) is interpreted as the transfer function of the conservative system (i.e., Im T = KJK*) of the form (T − zI)x = KJφ₋ and φ₊ = φ₋ − 2iK*x, where φ₋ ∈ E is an input vector, φ₊ ∈ E is an output vector, and x is a state space vector in H, so that φ₊ = W_Θ(z)φ₋. The system is said to be minimal if the main operator T of Θ is completely non-selfadjoint (i.e., there are no nontrivial invariant subspaces on which T induces self-adjoint operators), cf. [8], [16]. A classical result due to Brodskiĭ and Livšic [9] states that the compactly supported Herglotz-Nevanlinna functions of the form ∫_a^b dΣ(t)/(t − z) correspond to minimal systems Θ of the form (7) via (4), with W(z) = W_Θ(z) given by (5) and V(z) = V_Θ(z) given by (6).

Next consider a linear, stationary, conservative dynamical system Θ of the form

    Θ = ( A, KJ; H₊ ⊂ H ⊂ H₋, E ).    (8)

Here A ∈ [H₊, H₋], where H₊ ⊂ H ⊂ H₋ is a rigged Hilbert space, A ⊃ T ⊃ A, A* ⊃ T* ⊃ A, A is a Hermitian operator in H, T is a non-Hermitian operator in H, K ∈ [E, H₋], J = J* = J⁻¹, and Im A = KJK*. In this case Θ is said to be a Brodskiĭ-Livšic rigged operator colligation. The transfer function of Θ in (8) and its linear fractional transform are given by

    W_Θ(z) = I − 2iK*(A − zI)⁻¹KJ,    V_Θ(z) = K*(A_R − zI)⁻¹K.    (9)

The functions V(z) in (1) that can be realized in the form (4), (9) with a transfer function of a system Θ as in (8) have been characterized in [2], [5], [6], [7], [18]. For the significance of rigged Hilbert spaces in system theory, see [14], [16]. Systems (7) and (8) naturally appear in electrical engineering and scattering theory [16].

3 GENERAL REALIZATION PROBLEMS

In the particular case of Stieltjes functions or of inverse Stieltjes functions, general realization results along the lines of [5], [6], [7] remain to be worked out in detail, cf. [4], [10]. The systems (7) and (8) are not general enough for the realization of general Herglotz-Nevanlinna functions in (1) without any conditions on Q = Q* and L ≥ 0. However, a generalization of the Brodskiĭ-Livšic operator colligation (7) leads to analogous realization results for Herglotz-Nevanlinna functions V(z) of the form (1) whose spectral function is compactly supported: such functions V(z) admit a realization via (4) with

    W(z) = W_Θ(z) = I − 2iK*(M − zF)⁻¹KJ,    V(z) = V_Θ(z) = K*(M_R − zF)⁻¹K,    (10)

where M = M_R + iKJK*, M_R ∈ [H] is the real part of M, F is a finite-dimensional orthogonal projector, and Θ is a generalized Brodskiĭ-Livšic operator colligation of the form

    Θ = ( M, F, K, J; H, E ),    (11)

see [11], [12], [13].
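The interplay of formulas (4)-(6) can be sanity-checked numerically. The sketch below (Python with numpy; a toy colligation of our own construction, not an example from the literature) takes dim H = 4 with a one-dimensional E, so J = 1 and T = T_R + iKJK*, computes W(z) by (5) and V(z) by (6), and verifies that V agrees with the linear fractional transform (4) of W; it also checks that Im V(z) > 0 for Im z > 0, as a Herglotz-Nevanlinna function requires.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4  # dim H; the space E is one-dimensional, so J is the scalar 1

# Build a colligation: T = T_R + i K J K*, with Hermitian "real part" T_R
K = rng.standard_normal((n, 1)) + 1j * rng.standard_normal((n, 1))
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
TR = (X + X.conj().T) / 2
J = 1.0
T = TR + 1j * (K @ K.conj().T) * J  # then Im T = (T - T*)/2i = K J K*

z = 0.3 + 0.7j  # any non-real point
I = np.eye(n)

W = 1.0 - 2j * (K.conj().T @ np.linalg.solve(T - z * I, K)).item() * J   # (5)
V = (K.conj().T @ np.linalg.solve(TR - z * I, K)).item()                 # (6)

# The linear fractional transform (4), scalar case: V = i (W + 1)^{-1} (W - 1) J
assert abs(1j * (W - 1.0) / (W + 1.0) * J - V) < 1e-8
assert V.imag > 0  # Herglotz-Nevanlinna: Im z > 0 implies Im V(z) > 0
print("V(z) =", V)
```

The scalar identity used here follows from a Sherman-Morrison argument applied to (T_R − zI)⁻¹ = (T − zI − iKK*)⁻¹; the numeric check above merely confirms it at a sample point.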
The basic open problems are:

1. Determine the class of linear, conservative, time-invariant dynamical systems (a new type of operator colligations) such that an arbitrary matrix-valued Herglotz-Nevanlinna function V(z) acting on E can be realized as a linear fractional transformation (4) of the matrix-valued transfer function W_Θ(z) of some minimal system Θ from this class.

2. Find criteria for a given matrix-valued Stieltjes or inverse Stieltjes function acting on E to be realized as a linear fractional transformation of the matrix-valued transfer function of a minimal Brodskiĭ-Livšic type system Θ in (8) with: (i) an accretive operator A, (ii) an α-sectorial operator A, or (iii) an extremal operator A (accretive but not α-sectorial).

3. The same problem for the (compactly supported) matrix-valued Stieltjes or inverse Stieltjes functions and the generalized Brodskiĭ-Livšic systems of the form (11) with main operator M and finite-dimensional orthogonal projector F.

There is a close connection to the so-called regular impedance conservative systems (where the coefficient of the derivative is invertible) that were recently considered in [17] (see also [19]). It is shown there that any function D(s) with nonnegative real part in the open right half-plane and for which D(s)/s → 0 as s → ∞ has a realization with such an impedance conservative system.

BIBLIOGRAPHY

[1] D. Alpay, A. Dijksma, J. Rovnyak, and H. S. V. de Snoo, "Schur functions, operator colligations, and reproducing kernel Pontryagin spaces," Oper. Theory Adv. Appl., 96, Birkhäuser Verlag, Basel, 1997.
[2] Yu. M. Arlinskiĭ, "On the inverse problem of the theory of characteristic functions of unbounded operator colligations," Dopovidi Akad. Nauk Ukrain. RSR, 2 (1976), 105-109 (Russian).
[3] D. Z. Arov, "Passive linear steady-state dynamical systems," Sibirsk. Mat. Zh., 20, no. 2 (1979), 211-228, 457 (Russian) [English transl.: Siberian Math. J., 20, no. 2 (1979), 149-162].
[4] S. V. Belyi, S. Hassi, H. S.
V. de Snoo, and E. R. Tsekanovskiĭ, "On the realization of inverse Stieltjes functions," Proceedings of the 15th International Symposium on Mathematical Theory of Networks and Systems, eds. D. Gillian and J. Rosenthal, University of Notre Dame, South Bend, Indiana, USA, 2002, http://www.nd.edu/∼mtns/papers/20160 6.pdf
[5] S. V. Belyi and E. R. Tsekanovskiĭ, "Realization and factorization problems for J-contractive operator-valued functions in half-plane and systems with unbounded operators," Systems and Networks: Mathematical Theory and Applications, Akademie Verlag, 2 (1994), 621-624.
[6] S. V. Belyi and E. R. Tsekanovskiĭ, "Realization theorems for operator-valued R-functions," Oper. Theory Adv. Appl., 98 (1997), 55-91.
[7] S. V. Belyi and E. R. Tsekanovskiĭ, "On classes of realizable operator-valued R-functions," Oper. Theory Adv. Appl., 115 (2000), 85-112.
[8] M. S. Brodskiĭ, "Triangular and Jordan representations of linear operators," Moscow, Nauka, 1969 (Russian) [English transl.: Vol. 32 of Transl. Math. Monographs, Amer. Math. Soc., 1971].
[9] M. S. Brodskiĭ and M. S. Livšic, "Spectral analysis of non-selfadjoint operators and intermediate systems," Uspekhi Mat. Nauk, 13, no. 1 (79) (1958), 3-85 (Russian) [English transl.: Amer. Math. Soc. Transl., (2) 13 (1960), 265-346].
[10] I. Dovshenko and E. R. Tsekanovskiĭ, "Classes of Stieltjes operator-functions and their conservative realizations," Dokl. Akad. Nauk SSSR, 311, no. 1 (1990), 18-22.
[11] S. Hassi, H. S. V. de Snoo, and E. R. Tsekanovskiĭ, "An addendum to the multiplication and factorization theorems of Brodskiĭ-Livšic-Potapov," Appl. Anal., 77 (2001), 125-133.
[12] S. Hassi, H. S. V. de Snoo, and E. R. Tsekanovskiĭ, "On commutative and noncommutative representations of matrix-valued Herglotz-Nevanlinna functions," Appl. Anal., 77 (2001), 135-147.
[13] S. Hassi, H. S. V. de Snoo, and E. R. Tsekanovskiĭ, "Realizations of Herglotz-Nevanlinna functions via F-systems," Oper. Theory: Adv.
Appl., 132 (2002), 183-198.
[14] J. W. Helton, "Systems with infinite-dimensional state space: the Hilbert space approach," Proc. IEEE, 64 (1976), no. 1, 145-160.
[15] I. S. Kac and M. G. Kreĭn, "The R-functions: Analytic functions mapping the upper half-plane into itself," Supplement I to the Russian edition of F. V. Atkinson, Discrete and Continuous Boundary Problems, Moscow, 1974 [English transl.: Amer. Math. Soc. Transl., (2) 103 (1974), 1-18].
[16] M. S. Livšic, "Operators, Oscillations, Waves," Moscow, Nauka, 1966 (Russian) [English transl.: Vol. 34 of Transl. Math. Monographs, Amer. Math. Soc., 1973].
[17] O. J. Staffans, "Passive and conservative infinite-dimensional impedance and scattering systems (from a personal point of view)," Proceedings of the 15th International Symposium on Mathematical Theory of Networks and Systems, eds. D. Gillian and J. Rosenthal, University of Notre Dame, South Bend, Indiana, USA, 2002, plenary talk, http://www.nd.edu/∼mtns
[18] E. R. Tsekanovskiĭ and Yu. L. Shmul'yan, "The theory of bi-extensions of operators in rigged Hilbert spaces: Unbounded operator colligations and characteristic functions," Uspekhi Mat. Nauk, 32 (1977), 69-124 (Russian) [English transl.: Russian Math. Surv., 32 (1977), 73-131].
[19] G. Weiss, "Transfer functions of regular linear systems. Part I: Characterizations of regularity," Trans. Amer. Math. Soc., 342 (1994), 827-854.

Problem 1.3
Does any analytic contractive operator function on the polydisk have a dissipative scattering nD realization?

Dmitry S. Kalyuzhniy-Verbovetzky
Department of Mathematics
The Weizmann Institute of Science
Rehovot 76100
Israel
[email protected]

1 DESCRIPTION OF THE PROBLEM

Let X, U, Y be finite-dimensional or infinite-dimensional separable Hilbert spaces. Consider nD linear systems of the form

    α:  x(t) = Σ_{k=1}^n ( A_k x(t − e_k) + B_k u(t − e_k) ),
        y(t) = Σ_{k=1}^n ( C_k x(t − e_k) + D_k u(t − e_k) ),
        (t ∈ Z^n : Σ_{k=1}^n t_k > 0),    (1)

where e_k := (0, ..., 0, 1, 0, ..., 0) ∈ Z^n (here the unit is in the k-th place); for all t ∈ Z^n such that Σ_{k=1}^n t_k ≥ 0 one has x(t) ∈ X (the state space), u(t) ∈ U (the input space), y(t) ∈ Y (the output space); and A_k, B_k, C_k, D_k are bounded linear operators, i.e., A_k ∈ L(X), B_k ∈ L(U, X), C_k ∈ L(X, Y), D_k ∈ L(U, Y) for all k ∈ {1, ..., n}. We use the notation α = (n; A, B, C, D; X, U, Y) for such a system (here A := (A_1, ..., A_n), etc.). For T ∈ L(H_1, H_2)^n and z ∈ C^n denote zT := Σ_{k=1}^n z_k T_k. Then the transfer function of α is

    θ_α(z) = zD + zC (I_X − zA)⁻¹ zB.

Clearly, θ_α is analytic in some neighbourhood of z = 0 in C^n. Let

    G_k := ( A_k  B_k ; C_k  D_k ) ∈ L(X ⊕ U, X ⊕ Y),  k = 1, ..., n.

We call α = (n; A, B, C, D; X, U, Y) a dissipative scattering nD system (see [5, 6]) if for any ζ ∈ T^n (the unit torus) ζG is a contractive operator, i.e., ‖ζG‖ ≤ 1. It is known [5] that the transfer function of a dissipative scattering nD system α = (n; A, B, C, D; X, U, Y) belongs to the subclass B_n^0(U, Y) of the class B_n(U, Y) of all analytic contractive L(U, Y)-valued functions on the open unit polydisk D^n, the subclass being singled out by the condition that its functions vanish at z = 0. The question of whether the converse is true was implicitly asked in [5] and still has not been answered. Thus, we pose the following problem.
Problem: Either prove that an arbitrary θ ∈ B_n^0(U, Y) can be realized as the transfer function of a dissipative scattering nD system of the form (1) with the input space U and the output space Y, or give an example of a function θ ∈ B_n^0(U, Y) (for some n ∈ N, and some finite-dimensional or infinite-dimensional separable Hilbert spaces U, Y) that has no such realization.

2 MOTIVATION AND HISTORY OF THE PROBLEM

For n = 1 the theory of dissipative (or passive, in other terminology) scattering linear systems is well developed (see, e.g., [2, 3]) and related to various problems of physics (in particular, scattering theory), stochastic processes, control theory, operator theory, and 1D complex analysis. It is well known (essentially, due to [8]) that the class of transfer functions of dissipative scattering 1D systems of the form (1) with the input space U and the output space Y coincides with B_1^0(U, Y). Moreover, this class of transfer functions remains the same when one restricts to the important special case of conservative scattering 1D systems, for which the system block matrix G is unitary, i.e., G*G = I_{X⊕U} and GG* = I_{X⊕Y}. Let us note that in the case n = 1 a system (1) can be rewritten in an equivalent form (without a unit delay in the output signal y) that is the standard form of a linear system; then the transfer function does not necessarily vanish at z = 0, and the class of transfer functions turns into the Schur class S(U, Y) = B_1(U, Y). The classes B_1^0(U, Y) and B_1(U, Y) are canonically isomorphic due to the relation B_1^0(U, Y) = zB_1(U, Y).

In [1] an important subclass S_n(U, Y) of B_n(U, Y) was introduced. This subclass consists of the analytic L(U, Y)-valued functions on D^n, say θ(z) = Σ_{t∈Z_+^n} θ_t z^t (here Z_+^n = {t ∈ Z^n : t_k ≥ 0, k = 1, ..., n} and z^t := Π_{k=1}^n z_k^{t_k} for z ∈ D^n, t ∈ Z_+^n), such that for any n-tuple T = (T_1, ..., T_n) of commuting contractions on some common separable Hilbert space H and any positive r < 1 one has ‖θ(rT)‖ ≤ 1, where θ(rT) = Σ_{t∈Z_+^n} θ_t ⊗ (rT)^t ∈ L(U ⊗ H, Y ⊗ H), and (rT)^t := Π_{k=1}^n (rT_k)^{t_k}. For n = 1 and n = 2 one has S_n(U, Y) = B_n(U, Y). However, for any n > 2 and any nonzero spaces U and Y the class S_n(U, Y) is a proper subclass of B_n(U, Y). J. Agler in [1] constructed a representation of an arbitrary function from S_n(U, Y), which in a system-theoretical language was interpreted in [4] as follows: the class S_n(U, Y)
coincides with the class of transfer functions of nD systems of Roesser type with the input space U and the output space Y, with a certain conservativity condition imposed. The analogous result is valid for conservative systems of the form (1). A system α = (n; A, B, C, D; X, U, Y) is called a conservative scattering nD system if for any ζ ∈ T^n the operator ζG is unitary. Clearly, a conservative scattering system is a special case of a dissipative one. By [5], the class of transfer functions of conservative scattering nD systems coincides with the subclass S_n^0(U, Y) of S_n(U, Y), which is singled out from the latter by the condition that its functions vanish at z = 0. Since for n = 1 and n = 2 one has S_n^0(U, Y) = B_n^0(U, Y), this gives the whole class of transfer functions of dissipative scattering nD systems of the form (1), and the solution to the problem formulated above, for these two cases.

In [6] the dilation theory for nD systems of the form (1) was developed. It was proven that α = (n; A, B, C, D; X, U, Y) has a conservative dilation if and only if the corresponding linear function L_G(z) := zG belongs to S_n^0(X ⊕ U, X ⊕ Y). Systems that satisfy this criterion are called n-dissipative scattering ones. In the cases n = 1 and n = 2 the subclass of n-dissipative scattering systems coincides with the whole class of dissipative ones, and in the case n > 2 this subclass is proper. Since the transfer functions of a system and of its dilation coincide, the class of transfer functions of n-dissipative scattering systems with the input space U and the output space Y is S_n^0(U, Y). According to [7], for any n > 2 there exist p ∈ N, m ∈ N, operators D_k ∈ L(C^p) and commuting contractions T_k ∈ L(C^m), k = 1, ..., n, such that

max_{ζ∈T^n} ‖ Σ_{k=1}^n ζ_k D_k ‖ = 1 < ‖ Σ_{k=1}^n T_k ⊗ D_k ‖.

The system α = (n; 0, 0, 0, D; {0}, C^p, C^p) is then a dissipative scattering one, but not an n-dissipative one. Its transfer function θ_α(z) = L_G(z) = zD belongs to B_n^0(C^p, C^p) \ S_n^0(C^p, C^p). Since for functions in B_n^0(U, Y) \ S_n^0(U, Y) the realization technique elaborated in [1] and developed in [4] and [5] is not applicable, our problem is of current interest.

BIBLIOGRAPHY

[1] J. Agler, "On the representation of certain holomorphic functions defined on a polydisc," Topics in Operator Theory: Ernst D. Hellinger Memorial Volume (L. de Branges, I. Gohberg, and J. Rovnyak, eds.), Oper. Theory Adv. Appl. 48, pp. 47–66, 1990.
[2] D. Z. Arov, "Passive linear steady-state dynamic systems," Sibirsk. Mat. Zh. 20(2), pp. 211–228, 1979 (Russian).
[3] J. A. Ball and N. Cohen, "De Branges-Rovnyak operator models and systems theory: A survey," Topics in Matrix and Operator Theory (H. Bart, I. Gohberg, and M. A. Kaashoek, eds.), Oper. Theory Adv. Appl. 50, pp. 93–136, 1991.
[4] J. A. Ball and T. Trent, "Unitary colligations, reproducing kernel Hilbert spaces, and Nevanlinna-Pick interpolation in several variables," J. Funct. Anal. 157, pp. 1–61, 1998.
[5] D. S. Kalyuzhniy, "Multiparametric dissipative linear stationary dynamical scattering systems: Discrete case," J. Operator Theory 43(2), pp. 427–460, 2000.
[6] D. S. Kalyuzhniy, "Multiparametric dissipative linear stationary dynamical scattering systems: Discrete case, II: Existence of conservative dilations," Integr. Eq. Oper. Th. 36(1), pp. 107–120, 2000.
[7] D. S. Kalyuzhniy, "On the von Neumann inequality for linear matrix functions of several variables," Mat. Zametki 64(2), pp. 218–223, 1998 (Russian); translated in Math. Notes 64(2), pp. 186–189, 1998.
[8] B. Sz.-Nagy and C. Foiaş, Harmonic Analysis of Operators on Hilbert Space, North-Holland, Amsterdam, 1970.

Problem 1.4
Partial disturbance decoupling with stability

J. C. Martínez-García
Programa de Investigación en Matemáticas Aplicadas y Computación
Instituto Mexicano del Petróleo
Eje Central Lázaro Cárdenas No. 152
Col. San Bartolo Atepehuacan, 07730 México D.F.
México
[email protected]

M. Malabre
Institut de Recherche en Communications et Cybernétique de Nantes
CNRS & (Ecole Centrale-Université-Ecole des Mines) de Nantes
1 rue de la Noë, F-44321 Nantes Cedex 03
France
[email protected]

V. Kučera
Faculty of Electrical Engineering
Czech Technical University in Prague
Technická 2, 166 27 Prague 6
Czech Republic
[email protected]

1 DESCRIPTION OF THE PROBLEM

Consider a linear time-invariant system (A, B, C, E) described by:

σx(t) = Ax(t) + Bu(t) + Ed(t),
z(t) = Cx(t),    (1)

where σ denotes either the derivative or the shift operator, depending on the continuous-time or discrete-time context; x(t) ∈ X ≃ R^n denotes the state, u(t) ∈ U ≃ R^m the control input, z(t) ∈ Z ≃ R^m the output, and d(t) ∈ D ≃ R^p the disturbance. A : X → X, B : U → X, C : X → Z, and E : D → X denote linear maps represented by real constant matrices.

Let a system (A, B, C, E) and an integer k ≥ 1 be given. Find necessary and sufficient conditions for the existence of a static state feedback control law u(t) = Fx(t) + Gd(t), where F : X → U and G : D → U are linear maps, that zeroes the first k Markov parameters of T_zd, the transfer function between the disturbance and the controlled output, while ensuring internal stability, i.e.:

• C(A + BF)^i (BG + E) ≡ 0 for i ∈ {0, 1, ..., k − 1}, and
• σ(A + BF) ⊆ C_g,

where σ(A + BF) stands for the spectrum of A + BF and C_g stands for the (good) stable part of the complex plane, e.g., the open left-half complex plane (continuous-time case) or the open unit disk (discrete-time case).
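To make the two requirements concrete, here is a small numerical sketch (the system data and gains are our own illustrative choices, not data from the chapter) that tests a candidate pair (F, G) against both conditions in the discrete-time case:

```python
import numpy as np

# Illustrative check of the two PDDPS requirements for candidate gains F, G.
def pddps_check(A, B, C, E, F, G, k, disc=True):
    Acl = A + B @ F
    M = B @ G + E
    # First k Markov parameters of T_zd must vanish: C (A+BF)^i (BG+E) = 0.
    markov_ok = all(
        np.allclose(C @ np.linalg.matrix_power(Acl, i) @ M, 0, atol=1e-10)
        for i in range(k))
    eig = np.linalg.eigvals(Acl)
    stable = np.all(np.abs(eig) < 1) if disc else np.all(eig.real < 0)
    return markov_ok, bool(stable)

# Toy data in which the input can cancel the disturbance exactly (BG + E = 0).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
E = np.array([[0.0], [1.0]])
F = np.array([[-0.02, -0.3]])   # closed-loop eigenvalues -0.1, -0.2: inside the unit disk
G = np.array([[-1.0]])          # makes BG + E = 0
print(pddps_check(A, B, C, E, F, G, k=2))  # (True, True)
```

Here G = −1 makes BG + E = 0, so every Markov parameter vanishes trivially; the open problem concerns conditions for zeroing only the first k of them when exact cancellation is impossible.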
2 MOTIVATION

The literature contains many contributions related to disturbance rejection or attenuation. The early attempts were devoted to canceling the effect of the disturbance on the controlled output, i.e., ensuring T_zd ≡ 0. This problem is usually referred to as the disturbance decoupling problem with internal stability, denoted DDPS (see [11], [1]). The solvability conditions for DDPS can be expressed as matching of infinite and unstable (invariant) zeros of certain systems (see, for instance, [8]), namely those of (A, B, C), i.e., (1) with d(t) ≡ 0, and those of (A, [B E], C), i.e., (1) with d(t) considered as a control input. However, the rigid solvability conditions for DDPS are hardly met in practical cases. This is why alternative design procedures have been considered, such as almost disturbance decoupling (see [10]) and optimal disturbance attenuation, i.e., minimization of a norm of T_zd (see, for instance, [12]).

The partial version of the problem, as defined in Section 1, offers another alternative to the rigid design of DDPS. The partial disturbance decoupling problem (PDDP) amounts to zeroing the first, say k, Markov parameters of T_zd. It was initially introduced in [2] and later revisited in [5] (without stability), [6, 7] (with dynamic state feedback and stability), [4] (with static state feedback and stability; sufficient solvability conditions for the single-input single-output case), and [3] (with dynamic measurement feedback, stability, and an H∞-norm bound). When no stability constraint is imposed, the solvability conditions of PDDP involve only a subset of the infinite structure of (A, B, C) and (A, [B E], C), namely the orders that are less than or equal to k − 1 (see details in [5]). For PDDPS (i.e., PDDP with internal stability), the role played by the finite invariant zeros must be clarified to obtain the necessary and sufficient conditions that we are looking for, and so solve the open problem.
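The role of the Markov parameters can be seen directly in the discrete-time case: the impulse response of the closed loop from d to z is the sequence C(A+BF)^i(BG+E), i ≥ 0 (shifted by the one-step delay), so PDDP forces the first k samples of the disturbance response to vanish. A short sketch with illustrative data of our own:

```python
import numpy as np

# Discrete-time check: the impulse response of T_zd equals the Markov
# parameter sequence C (A+BF)^i (BG+E), i = 0, 1, 2, ...
A = np.array([[0.0, 1.0], [0.0, 0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
E = np.array([[1.0], [0.0]])
F = np.array([[0.0, -0.3]])
G = np.array([[0.0]])
Acl, M = A + B @ F, B @ G + E

# Simulate x(t+1) = Acl x(t) + M d(t), z = C x, with a unit pulse d(0) = 1.
x = np.zeros((2, 1))
z = []
for t in range(5):
    d = 1.0 if t == 0 else 0.0
    z.append((C @ x).item())
    x = Acl @ x + M * d
markov = [(C @ np.linalg.matrix_power(Acl, i) @ M).item() for i in range(4)]
print(z[1:], markov)  # z(i+1) equals the i-th Markov parameter
```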
Several extensions of this problem are also important:

• solve PDDPS while reducing the H∞ norm of T_zd;
• consider static measurement feedback in place of static state feedback.

BIBLIOGRAPHY

[1] G. Basile and G. Marro, Controlled and Conditioned Invariants in Linear System Theory, Prentice-Hall, 1992.
[2] E. Emre and L. M. Silverman, "Partial model matching of linear systems," IEEE Trans. Automat. Contr., vol. AC-25, no. 2, pp. 280–281, 1980.
[3] V. Eldem, H. Özbay, H. Selbuz, and K. Özcaldiran, "Partial disturbance rejection with internal stability and H∞ norm bound," SIAM Journal on Control and Optimization, vol. 36, no. 1, pp. 180–192, 1998.
[4] F. N. Koumboulis and V. Kučera, "Partial model matching via static feedback (The multivariable case)," IEEE Trans. Automat. Contr., vol. AC-44, no. 2, pp. 386–392, 1999.
[5] M. Malabre and J. C. Martínez-García, "The partial disturbance rejection or partial model matching: Geometric and structural solutions," IEEE Trans. Automat. Contr., vol. AC-40, no. 2, pp. 356–360, 1995.
[6] V. Kučera, J. C. Martínez-García, and M. Malabre, "Partial model matching: Parametrization of solutions," Automatica, vol. 33, no. 5, pp. 975–977, 1997.
[7] J. C. Martínez-García, M. Malabre, and V. Kučera, "The partial model matching problem with stability," Systems and Control Letters, no. 24, pp. 61–74, 1994.
[8] J. C. Martínez-García, M. Malabre, J.-M. Dion, and C. Commault, "Condensed structural solutions to the disturbance rejection and decoupling problems with stability," International Journal of Control, vol. 72, no. 15, pp. 1392–1401, 1999.
[9] A. Saberi, P. Sannuti, A. A. Stoorvogel, and B. M. Chen, H2 Optimal Control, Prentice-Hall, 1995.
[10] J. C. Willems, "Almost invariant subspaces: An approach to high gain feedback design - Part I: Almost controlled invariant subspaces," IEEE Trans. Automat. Contr., vol. AC-26, no. 1, pp. 235–252, 1981.
[11] W. M. Wonham, Linear Multivariable Control: A Geometric Approach, 3rd ed., Springer-Verlag, New York, 1985.
[12] K. Zhou, J. C. Doyle, and K. Glover, Robust and Optimal Control, Prentice-Hall, Upper Saddle River, NJ, 1995.

Problem 1.5
Is Monopoli's model reference adaptive controller correct?

A. S. Morse¹
Center for Computational Vision and Control
Department of Electrical Engineering
Yale University, New Haven, CT 06520
USA

1 INTRODUCTION

In 1974 R. V. Monopoli published a paper [1] in which he posed the now classical model reference adaptive control problem, proposed a solution, and presented arguments intended to establish the solution's correctness. Subsequent research [2] revealed a flaw in his proof, which placed in doubt the correctness of the solution he proposed. Although provably correct solutions to the model reference adaptive control problem now exist (see [3] and the references therein), the problem of deciding whether or not Monopoli's original proposed solution is in fact correct remains unsolved. The aim of this note is to review the formulation of the classical model reference adaptive control problem, to describe Monopoli's proposed solution, and to outline what is known at present about its correctness.

2 THE CLASSICAL MODEL REFERENCE ADAPTIVE CONTROL PROBLEM

The classical model reference adaptive control problem is to develop a dynamical controller capable of causing the output y of an imprecisely modeled SISO process P to approach and track the output y_ref of a prespecified reference model M_ref with input r. The underlying assumption is that the process model is known only to the extent that it is one of the members of a prespecified class M. In the classical problem M is taken to be the set of
¹This research was supported by DARPA under its SEC program and by the NSF.

all SISO controllable, observable linear systems with strictly proper transfer functions of the form g β(s)/α(s), where g is a nonzero constant called the high-frequency gain and α(s) and β(s) are monic, coprime polynomials. All g have the same sign and each transfer function is minimum phase (i.e., each β(s) is stable). All transfer functions are required to have the same relative degree n̄ (i.e., deg α(s) − deg β(s) = n̄) and each must have a McMillan degree not exceeding some prespecified integer n (i.e., deg α(s) ≤ n). In the sequel we are going to discuss a simplified version of the problem in which all g = 1 and the reference model transfer function is of the form 1/(s + λ)^n̄, where λ is a positive number. Thus M_ref is a system of the form

ẏ_ref = −λ y_ref + c̄ x_ref + d̄ r,
ẋ_ref = Ā x_ref + b̄ r,    (1)

where {Ā, b̄, c̄, d̄} is a controllable, observable realization of 1/(s + λ)^(n̄−1).

3 MONOPOLI'S PROPOSED SOLUTION

Monopoli's proposed solution is based on a special representation of P that involves picking any n-dimensional, single-input, controllable pair (A, b) with A stable. It is possible to prove [1, 4] that the assumption that the process P admits a model in M implies the existence of a vector p* ∈ R^{2n} and initial conditions z(0) and x̄(0) such that u and y exactly satisfy

ż = [ A 0 ; 0 A ] z + [ b ; 0 ] y + [ 0 ; b ] u,
ẋ̄ = Ā x̄ + b̄ (u − z′p*),
ẏ = −λ y + c̄ x̄ + d̄ (u − z′p*).

Monopoli combined this model with that of M_ref to obtain the direct control model reference parameterization

ż = [ A 0 ; 0 A ] z + [ b ; 0 ] y + [ 0 ; b ] u,    (2)
ẋ = Ā x + b̄ (u − z′p* − r),    (3)
ė_T = −λ e_T + c̄ x + d̄ (u − z′p* − r).    (4)

Here e_T is the tracking error

e_T ≜ y − y_ref,    (5)

and x ≜ x̄ − x_ref. Note that it is possible to generate an asymptotically correct estimate ẑ of z using a copy of (2) with ẑ replacing z. To keep the exposition simple, we are going to ignore the exponentially decaying estimation error ẑ − z and assume that z can be measured directly. To solve the MRAC problem, Monopoli proposed a control law of the form

u = z′p + r,    (6)

where p is a suitably defined estimate of p*. Motivation for this particular choice stems from the fact that if one knew p* and were thus able to use the control u = z′p* + r instead of (6), then this would cause e_T to tend to zero exponentially fast and tracking would therefore be achieved.

Monopoli proposed to generate p using two subsystems that we will refer to here as a "multi-estimator" and a "tuner," respectively. A multi-estimator E(p) is a parameter-varying linear system with parameter p, whose inputs are u, y, and r and whose output is an estimate ê of e_T that would be asymptotically correct were p held fixed at p*. It turns out that there are two different but very similar types of multi-estimators that have the requisite properties. While Monopoli focused on just one, we will describe both since each is relevant to the present discussion. Both multi-estimators contain (2) as a subsystem.

Version 1

There are two versions of the adaptive controller that are relevant to the problem at hand. In this section we describe the multi-estimator and tuner that, together with reference model (1) and control law (6), comprise the first version.

Multi-Estimator 1

The form of the first multi-estimator E_1(p) is suggested by the readily verifiable fact that if H_1 and w_1 are n̄ × 2n and n̄ × 1 signal matrices generated by the equations

Ḣ_1 = Ā H_1 + b̄ z′   and   ẇ_1 = Ā w_1 + b̄ (u − r),    (7)

respectively, then w_1 − H_1 p* is a solution to (3). In other words, x = w_1 − H_1 p* + ε, where ε is an initial-condition-dependent time function decaying to zero as fast as e^{Āt}.
Again, for simplicity, we shall ignore ε. This means that (4) can be rewritten as

ė_T = −λ e_T − (c̄ H_1 + d̄ z′) p* + c̄ w_1 + d̄ (u − r).

Thus a natural way to generate an estimate e_1 of e_T is by means of the equation

ė_1 = −λ e_1 − (c̄ H_1 + d̄ z′) p + c̄ w_1 + d̄ (u − r).    (8)

From this it clearly follows that the multi-estimator E_1(p) defined by (2), (7), and (8) has the required property of delivering an asymptotically correct estimate e_1 of e_T if p is fixed at p*.

Tuner 1

From (8) and the differential equation for e_T directly above it, it can be seen that the estimation error²

ē_1 ≜ e_1 − e_T    (9)

satisfies the error equation

ē̇_1 = −λ ē_1 + φ_1′ (p − p*),    (10)

where

φ_1′ = −(c̄ H_1 + d̄ z′).    (11)

Prompted by this, Monopoli proposed to tune p using the pseudo-gradient tuner

ṗ = −φ_1 ē_1.    (12)

The motivation for considering this particular tuning law will become clear shortly, if it is not already.

What is known about Version 1?

The overall model reference adaptive controller proposed by Monopoli thus consists of the reference model (1), the control law (6), the multi-estimator (2), (7), (8), the output estimation error (9), and the tuner (11), (12). The open problem is to prove that this controller either solves the model reference adaptive control problem or that it does not. Much is known that is relevant to the problem. In the first place, note that (1), (2) together with (5)-(11) define a parameter-varying linear system Σ_1(p) with input r, state (y_ref, x_ref, z, H_1, w_1, e_1, ē_1), and output ē_1. A consequence of the assumption that every system in M is minimum phase is that Σ_1(p) is detectable through ē_1 for every fixed value of p [5].
Meanwhile the form of (10) enables one to show, by direct calculation, that the rate of change of the partial Lyapunov function

V ≜ ē_1² + ‖p − p*‖²

along a solution to (12) and the equations defining Σ_1(p) satisfies

V̇ = −2λ ē_1² ≤ 0.    (13)

From this it is evident that V is a bounded, monotone nonincreasing function, and consequently that ē_1 and p are bounded wherever they exist. Using this and the fact that Σ_1(p) is a linear parameter-varying system, it can be concluded that solutions exist globally and that ē_1 and p are bounded on [0, ∞). By integrating (13) it can also be concluded that ē_1 has a finite L²[0, ∞) norm and that ē_1² + ‖p − p*‖² tends to a finite limit as t → ∞. Were it possible to deduce from these properties that p tended to a limit p̄, then it would be possible to establish correctness of the overall adaptive controller using the detectability of Σ_1(p̄).
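The Lyapunov argument can be reproduced numerically on a toy error system in which the regressor is an arbitrary bounded signal of our own choosing (in the controller itself φ_1 is generated by (7) and (11); this is not that signal):

```python
import numpy as np

# Toy forward-Euler simulation of the version-1 error equations (10)-(12):
#   ebar' = -lam*ebar + phi(t)'(p - pstar),   p' = -phi(t)*ebar,
# with an illustrative bounded regressor phi(t).
lam, dt, steps = 1.0, 1e-3, 100_000
pstar = np.array([1.0, -2.0])
p = np.zeros(2)
ebar = 1.0
V = lambda e, q: e**2 + np.dot(q - pstar, q - pstar)
V0 = V(ebar, p)
for i in range(steps):
    t = i * dt
    phi = np.array([np.sin(t), np.cos(0.7 * t)])  # arbitrary bounded signal
    de = -lam * ebar + phi @ (p - pstar)
    dp = -phi * ebar
    ebar, p = ebar + dt * de, p + dt * dp
# V = ebar^2 + |p - pstar|^2 stays below its initial value, cf. (13),
# and ebar has decayed by the end of the run.
assert V(ebar, p) <= V0
assert abs(ebar) < 0.5
```

The regressor here happens to be rich enough that p also drifts toward p*; the open problem is precisely that boundedness of V alone does not guarantee such parameter convergence for the signals the closed loop actually generates.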
²Monopoli called ē_1 an augmented error.

There are two very special cases for which correctness has been established. The first is when the process models in M all have relative degree 1, that is, when n̄ = 1. See the references cited in [3] for more on this special case. The second special case is when p* is taken to be of the form q*k, where k is a known vector and q* is a scalar; in this case p ≜ qk, where q is a scalar parameter tuned by the equation q̇ = −k′φ_1 ē_1 [6].

Version 2

In the sequel we describe the multi-estimator and tuner that, together with reference model (1) and control law (6), comprise the second version of the adaptive controller relevant to the problem at hand.

Multi-Estimator 2

The second multi-estimator E_2(p), which is relevant to the problem under consideration, is similar to E_1(p) but has the slight advantage of leading to a tuner that is somewhat easier to analyze. To describe E_2(p), we need first to define the matrices

Ā_2 ≜ [ Ā 0 ; c̄ −λ ]   and   b̄_2 ≜ [ b̄ ; d̄ ].

The form of E_2(p) is motivated by the readily verifiable fact that if H_2 and w_2 are (n̄+1) × 2n and (n̄+1) × 1 signal matrices generated by the equations

Ḣ_2 = Ā_2 H_2 + b̄_2 z′   and   ẇ_2 = Ā_2 w_2 + b̄_2 (u − r),    (14)

then w_2 − H_2 p* is a solution to (3)-(4). In other words, [x ; e_T] = w_2 − H_2 p* + ε, where ε is an initial-condition-dependent time function decaying to zero as fast as e^{Ā_2 t}. Again, for simplicity, we shall ignore ε. This means that

e_T = c̄_2 w_2 − c̄_2 H_2 p*,

where c̄_2 = [0 ⋯ 0 1]. Thus, in this case, a natural way to generate an estimate e_2 of e_T is by means of the equation

e_2 = c̄_2 w_2 − c̄_2 H_2 p.    (15)

It is clear that the multi-estimator E_2(p) defined by (2), (14), and (15) has the required property of delivering an asymptotically correct estimate e_2 of e_T if p is fixed at p*.

Tuner 2

Note that in this case the estimation error

ē_2 ≜ e_2 − e_T    (16)

satisfies the error equation

ē_2 = φ_2′ (p − p*),    (17)

where

φ_2′ = −c̄_2 H_2.    (18)

Equation (17) suggests that one consider a pseudo-gradient tuner of the form

ṗ = −φ_2 ē_2.    (19)

What is known about Version 2?

The overall model reference adaptive controller in this case thus consists of the reference model (1), the control law (6), the multi-estimator (2), (14), (15), the output estimation error (16), and the tuner (18), (19). The open problem here is to prove that this version of the controller either solves the model reference adaptive control problem or that it does not. Much is known about the problem. In the first place, (1), (2) together with (5), (6), (14)-(18) define a parameter-varying linear system Σ_2(p) with input r, state (y_ref, x_ref, z, H_2, w_2), and output ē_2. A consequence of the assumption that every system in M is minimum phase is that this Σ_2(p) is detectable through ē_2 for every fixed value of p [5]. Meanwhile the form of (17) enables one to show by direct calculation that the rate of change of the partial Lyapunov function

V ≜ ‖p − p*‖²

along a solution to (19) and the equations defining Σ_2(p) satisfies

V̇ = −2 ē_2² ≤ 0.    (20)

It is evident that V is a bounded, monotone nonincreasing function and consequently that p is bounded wherever it exists. From this and the fact that Σ_2(p) is a linear parameter-varying system, it can be concluded that solutions exist globally and that p is bounded on [0, ∞). By integrating (20) it can also be concluded that ē_2 has a finite L²[0, ∞) norm and that ‖p − p*‖² tends to a finite limit as t → ∞. Were it possible to deduce from these properties that p tended to a limit p̄, then it would be possible to establish correctness using the detectability of Σ_2(p̄). There is one very special case for which correctness has been established [6].
This is when p* is taken to be of the form q*k, where k is a known vector and q* is a scalar; in this case p ≜ qk, where q is a scalar parameter tuned by the equation q̇ = −k′φ_2 ē_2. The underlying reason why things go through in this special case is that the fact that ‖p − p*‖², and consequently |q − q*|, tends to a finite limit means that q tends to a finite limit as well.

4 THE ESSENCE OF THE PROBLEM

In this section we transcribe a stripped-down version of the problem that retains all the essential features that need to be overcome in order to decide whether or not Monopoli's controller is correct. We do this only for version 2 of the problem and only for the case when r = 0 and n̄ = 1. Thus, in this case, we can take Ā_2 = −λ and b̄_2 = 1. Assuming the reference model is initialized at 0, dropping the subscript 2 throughout, and writing φ for −H′, the system to be analyzed reduces to

ż = [ A 0 ; 0 A ] z + [ b ; 0 ] (w + φ′p*) + [ 0 ; b ] p′z,    (21)
φ̇ = −λφ − z,    (22)
ẇ = −λw + p′z,    (23)
e = φ′(p − p*),    (24)
ṗ = −φe.    (25)

To recap: p* is unknown and constant, but is such that the linear parameter-varying system Σ(p) defined by (21)-(24) is detectable through e for each fixed value of p. Solutions to the system (21)-(25) exist globally. The parameter vector p and the integral square of e are bounded on [0, ∞), and ‖p − p*‖ tends to a finite limit as t → ∞. The open problem here is to show, for every initialization of (21)-(25), that the state of Σ(p) tends to 0, or that it does not.

BIBLIOGRAPHY

[1] R. V. Monopoli, "Model reference adaptive control with an augmented error," IEEE Transactions on Automatic Control, pp. 474–484, October 1974.
[2] A. Feuer, B. R. Barmish, and A. S. Morse, "An unstable system associated with model reference adaptive control," IEEE Transactions on Automatic Control, 23:499–500, 1978.
[3] A. S. Morse, "Overcoming the obstacle of high relative degree," European Journal of Control, 2(1):29–35, 1996.
[4] K. J. Åström and B. Wittenmark, "On self-tuning regulators," Automatica, 9:185–199, 1973.
[5] A. S. Morse, "Towards a unified theory of parameter adaptive control - Part 2: Certainty equivalence and implicit tuning," IEEE Transactions on Automatic Control, 37(1):15–29, January 1992.
[6] A. Feuer, Adaptive Control of Single Input Single Output Linear Systems, Ph.D. thesis, Yale University, 1978.

Problem 1.6
Model reduction of delay systems

Jonathan R. Partington
School of Mathematics
University of Leeds
Leeds LS2 9JT
U.K.
[email protected]

1 DESCRIPTION OF THE PROBLEM

Our concern here is with stable single-input single-output delay systems, and we shall restrict to the case when the system has a transfer function of the form G(s) = e^{−sT} R(s), with T > 0 and R rational, stable, and strictly proper, thus bounded and analytic on the right half-plane C₊. It is a fundamental problem in robust control design to approximate such systems by finite-dimensional systems. Thus, for a fixed natural number n, we wish to find a rational approximant G_n(s) of degree at most n that makes the approximation error ‖G − G_n‖ small, where ‖·‖ denotes an appropriate norm. See [9] for some recent work on this subject.

Commonly used norms on a linear time-invariant system with impulse response g ∈ L¹(0, ∞) and transfer function G ∈ H^∞(C₊) are the H^∞ norm ‖G‖_∞ = sup_{Re s > 0} |G(s)|, the L^p norms ‖g‖_p = ( ∫₀^∞ |g(t)|^p dt )^{1/p} (1 ≤ p < ∞), and the Hankel norm ‖Γ‖, where Γ : L²(0, ∞) → L²(0, ∞) is the Hankel operator defined by

(Γu)(t) = ∫₀^∞ g(t + τ) u(τ) dτ.

These norms are related by

‖Γ‖ ≤ ‖G‖_∞ ≤ ‖g‖₁ ≤ 2n ‖Γ‖,

where the last inequality holds for systems of degree at most n.

Two particular approximation techniques for finite-dimensional systems are well established in the literature [14], and they can also be used for some infinite-dimensional systems [5]:

• truncated balanced realizations, or, equivalently, output normal realizations [11, 13, 5];
• optimal Hankel-norm approximants [1, 4, 5].

As we explain in the next section, these techniques are known to produce H^∞-convergent sequences of approximants for many classes of delay systems (systems of nuclear type). We are thus led to pose the following question: Do the sequences of reduced-order models produced by truncated balanced realizations and optimal Hankel-norm approximations converge for all stable delay systems?

2 MOTIVATION AND HISTORY OF THE PROBLEM

Balanced realizations were introduced in [11], and many properties of truncations of such realizations were given in [13]. An H^∞ error bound for the reduced-order system produced by truncating a balanced realization was given for finite-dimensional systems in [3, 4], and extended to infinite-dimensional systems in [5]. This commonly used bound is expressed in terms of the sequence (σ_k)_{k=1}^∞ of singular values of the Hankel operator Γ corresponding to the original system G; in our case Γ is compact, and so σ_k → 0. Provided that g ∈ L¹ ∩ L² and Γ is nuclear (i.e., Σ_{k=1}^∞ σ_k < ∞) with distinct singular values, the inequality

‖G − G_n^b‖_∞ ≤ 2(σ_{n+1} + σ_{n+2} + ⋯)

holds for the degree-n balanced truncation G_n^b of G. The elementary lower bound ‖G − G_n‖ ≥ σ_{n+1} holds for any degree-n approximation to G.

Another numerically convenient approximation method is the optimal Hankel-norm technique [1, 4, 5], which involves finding a best rank-n Hankel approximation Γ_n^H to Γ, in the Hankel norm, so that ‖Γ − Γ_n^H‖ = σ_{n+1}. In this case the bound

‖G − G_n^H − D₀‖_∞ ≤ σ_{n+1} + σ_{n+2} + ⋯

is available for the corresponding transfer function G_n^H with a suitable constant D₀. Again, we require the nuclearity of Γ for this to be meaningful.

3 AVAILABLE RESULTS

In the case of a delay system G(s) = e^{−sT} R(s) as specified above, it is known that the Hankel singular values σ_k are asymptotic to |A| (T/(πk))^r, where r is the relative degree of R and s^r R(s) tends to the finite nonzero limit A as s → ∞. Hence Γ is nuclear if and only if the relative degree of R is at least 2 (equivalently, if and only if g is continuous). We refer to [6, 7] for these and more precise results.

Even for a very simple non-nuclear system such as G(s) = e^{−sT}/(s + 1), for which kσ_k → T/π, no theoretical upper bound is known for the H^∞ errors in the rational approximants produced by truncated balanced realizations and optimal Hankel-norm approximation, although numerical evidence suggests that they should still tend to zero.

A related question is to find the best error bounds in L¹ approximation of a delay system. For example, a smoothing technique gives an L¹ approximation error O((ln n)/n) for systems of relative degree r = 1 (see [8]), and it is possible that the optimal Hankel norm might yield a similar rate of convergence. (A lower bound of C/n for some constant C > 0 follows easily from the above discussion.)

One approach that may be useful in these analyses is to exploit Bonsall's theorem that a Hankel integral operator Γ is bounded if and only if it is uniformly bounded on the set of all normalized L² functions whose Laplace transforms are rational of degree one [2, 12]. An explicit constant in Bonsall's theorem is not known, and would be of great interest in its own right. Another approach that may be relevant is that of Megretski [10], who introduces maximal real part norms. Their interest stems from the inequality ‖G‖_∞ ≥ ‖Re G‖_∞ ≥ ‖Γ‖/2.

BIBLIOGRAPHY

[1] V. M. Adamjan, D. Z. Arov, and M. G.
Kre˘ “Analytic properties of ın, Schmidt pairs for a Hankel operator and the generalized Schur–Takagi problem,” Math. USSR Sbornik, 15:31–73, 1971. [2] F. F. Bonsall, “Boundedness of Hankel matrices”, J. London Math. Soc. (2), 29(2):289–300, 1984. [3] D. Enns, Model Reduction for Control System Design, Ph.D. dissertation, Stanford University, 1984. [4] K. Glover, “All optimal Hankelnorm approximations of linear multivariable systems and their L∞ error bounds, Internat. J. Control, 39(6):1115–1193, 1984. 32 PROBLEM 1.6 [5] K. Glover, R. F. Curtain, and J. R. Partington, “Realisation and approximation of linear inﬁnitedimensional systems with error bounds,” SIAM J. Control Optim., 26(4):863–898, 1988. [6] K. Glover, J. Lam, and J. R. Partington, “Rational approximation of a class of inﬁnitedimensional systems. I. Singular values of Hankel operators,” Math. Control Signals Systems, 3(4):325–344, 1990. [7] K. Glover, J. Lam, and J. R. Partington,“Rational approximation of a class of inﬁnitedimensional systems. II. Optimal convergence rates of L∞ approximants,” Math. Control Signals Systems, 4(3):233–246, 1991. [8] K. Glover and J. R. Partington, “Bounds on the achievable accuracy in model reduction,” In: Modelling, Robustness and Sensitivity Reduction in Control Systems (Groningen, 1986), pp. 95–118. Springer, Berlin, 1987. [9] P. M. M¨kil¨ and J. R. Partington, “Shift operator induced approxiaa mations of delay systems,” SIAM J. Control Optim., 37(6):1897–1912, 1999. [10] A. Megretski, “Model order reduction using maximal real part norms,” Presented at CDC 2000, Sydney, 2000. http://web.mit.edu/ameg/www/images/lund.ps. [11] B. C. Moore, “Principal component analysis in linear systems: controllability, observability, and model reduction,” IEEE Trans. Automat. Control, 26(1):17–32, 1981. [12] J. R. Partington and G. Weiss, “Admissible observation operators for the rightshift semigroup,” Math. Control Signals Systems, 13(3):179– 192, 2000. [13] L. Pernebo and L. M. 
Silverman, “Model reduction via balanced state space representations,” IEEE Trans. Automat. Control, 27(2):382–387, 1982.

[14] K. Zhou, J. C. Doyle, and K. Glover, Robust and Optimal Control, Prentice Hall, Upper Saddle River, NJ, 1996.

Problem 1.7
Schur extremal problems
Lev Sakhnovich
Courant Institute of Mathematical Science
New York, NY 11223 USA
[email protected]

1 DESCRIPTION OF THE PROBLEM

In this paper we consider the well-known Schur problem whose solutions satisfy, in addition, the extremal condition

w*(z) w(z) ≤ ρ_min^2,  |z| < 1,   (1)

where w(z) and ρ_min are m×m matrices and ρ_min > 0. Here the matrix ρ_min is defined by a certain minimal-rank condition (see Definition 1). We remark that the extremal Schur problem is a particular case; the general case is considered in the book [1] and the paper [2]. Our approach to the extremal problems does not coincide with the superoptimal approach [3], [4]. In the paper [2] we compare our approach to the extremal problems with the superoptimal approach. Interpolation has found great applications in control theory [5], [6].

Schur Extremal Problem: The m×m matrices a_0, a_1, . . . , a_n are given. Describe the set of m×m matrix functions w(z) holomorphic in the disk |z| < 1, satisfying the relation

w(z) = a_0 + a_1 z + · · · + a_n z^n + · · ·   (2)

and inequality (1).

A necessary condition for the solvability of the Schur extremal problem is the inequality
R_min^2 − S ≥ 0,   (3)

where the (n+1)m × (n+1)m matrices S and R_min are defined by the relations

S = C_n C_n*,   R_min = diag[ρ_min, ρ_min, . . . , ρ_min],   (4)

C_n = [ a_0     0        . . .   0
        a_1     a_0      . . .   0
        . . .   . . .    . . .   . . .
        a_n     a_{n−1}  . . .   a_0 ].   (5)

Definition 1: We shall call the matrix ρ = ρ_min > 0 minimal if the following two requirements are fulfilled:

1. The inequality

R_min^2 − S ≥ 0   (6)

holds.

2. If the m×m matrix ρ > 0 is such that

R^2 − S ≥ 0,   (7)

then

rank(R_min^2 − S) ≤ rank(R^2 − S),   (8)

where R = diag[ρ, ρ, . . . , ρ].

Remark 1: The existence of ρ_min follows directly from Definition 1.

Question 1: Is ρ_min unique?

Remark 2: If m = 1, then ρ_min is unique and ρ_min^2 = λ_max, where λ_max is the largest eigenvalue of the matrix S.

Remark 3: Under some assumptions, the uniqueness of ρ_min is proved in the case m > 1, n = 1 (see [2], [7]). If ρ_min is known, then the corresponding w_min(ξ) is a rational matrix function. This generalizes the well-known fact for the scalar case (see [7]).

Question 2: How to find ρ_min?

In order to describe some results in this direction, we write the matrix S = C_n C_n* in the block form

S = [ S_11  S_12
      S_21  S_22 ],   (9)

where S_22 is an m×m matrix.

Proposition 1: [1] If ρ = q > 0 satisfies inequality (7) and the relation

q^2 = S_22 + S_12* (Q^2 − S_11)^{−1} S_12,   (10)

where Q = diag[q, q, . . . , q], then ρ_min = q.

We shall apply the method of successive approximation when studying equation (10). We put q_0^2 = S_22 and q_{k+1}^2 = S_22 + S_12* (Q_k^2 − S_11)^{−1} S_12, where k ≥ 0 and Q_k = diag[q_k, q_k, . . . , q_k]. We suppose that

Q_0^2 − S_11 > 0.   (11)

Theorem 1: [1] The sequence q_0^2, q_2^2, q_4^2, . . . monotonically increases and has the limit m_1. The sequence q_1^2, q_3^2, q_5^2, . . . monotonically decreases and has the limit m_2. The inequality m_1 ≤ m_2 holds. If m_1 = m_2, then ρ_min^2 = q^2.

Question 3: Suppose relation (11) holds. Is there a case when m_1 ≠ m_2? The answer is “no” if n = 1 (see [2], [8]).
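For the scalar case of Remark 2, the quantities above are directly computable. The sketch below (an illustration with made-up coefficients, assuming numpy is available; the function name is ours) builds the lower-triangular Toeplitz matrix C_n of (5), takes ρ_min² = λ_max(S) with S = C_n C_n*, and checks that R_min² − S is positive semidefinite and singular, as conditions (6) and (8) of Definition 1 require.

```python
import numpy as np

def rho_min_scalar(a):
    """Scalar (m = 1) Schur extremal problem of Remark 2:
    rho_min^2 = lambda_max(S), where S = C_n C_n* and C_n is the
    lower-triangular Toeplitz matrix (5) built from a0, ..., an."""
    n1 = len(a)
    C = np.zeros((n1, n1), dtype=complex)
    for i in range(n1):
        for j in range(i + 1):
            C[i, j] = a[i - j]          # constant along subdiagonals
    S = C @ C.conj().T
    return np.linalg.eigvalsh(S)[-1].real, S   # eigvalsh sorts ascending

a = [1.0, 0.5, 0.25]                    # illustrative Taylor coefficients
rho2, S = rho_min_scalar(a)

# R_min^2 - S must be positive semidefinite (6); at rho^2 = lambda_max
# its smallest eigenvalue is 0, so the rank in (8) is minimal.
eigs = np.linalg.eigvalsh(rho2 * np.eye(len(a)) - S)
print(rho2, eigs.min())
```

Any ρ² larger than λ_max(S) would also satisfy (6), but would make R² − S nonsingular, which is why the minimal-rank requirement (8) singles out λ_max.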
Remark 4: In the book [1] we give an example in which ρ_min is constructed in explicit form.

BIBLIOGRAPHY

[1] L. Sakhnovich, Interpolation Theory and Applications, Kluwer Acad. Publ., 1997.

[2] J. Helton and L. Sakhnovich, Extremal Problems of Interpolation Theory (forthcoming).

[3] N. J. Young, “The Nevanlinna–Pick problem for matrix-valued functions,” J. Operator Theory, 15, pp. 239–265, 1986.

[4] V. V. Peller and N. J. Young, “Superoptimal analytic approximations of matrix functions,” Journal of Functional Analysis, 120, pp. 300–343, 1994.

[5] M. Green and D. Limebeer, Linear Robust Control, Prentice-Hall, 1995.

[6] Lecture Notes in Control and Information Sciences, vol. 135, Springer-Verlag, 1989.

[7] N. I. Akhiezer, “On a minimum in function theory,” Operator Theory, Adv. and Appl., 95, pp. 19–35, 1997.

[8] A. Ferrante and B. C. Levy, “Hermitian solutions of the equation X = Q + N X^{−1} N*,” Linear Algebra and its Applications, 247, pp. 359–373, 1996.

Problem 1.8
The elusive iff test for time-controllability of behaviours
Amol Sasane
Faculty of Mathematical Sciences
University of Twente
7500 AE Enschede
The Netherlands
[email protected]

1 DESCRIPTION OF THE PROBLEM

Problem: Let R ∈ C[η_1, . . . , η_m, ξ]^{g×w} and let B be the behavior given by the kernel representation corresponding to R. Find an algebraic test on R characterizing the time-controllability of B.

In the above, we assume B to comprise only smooth trajectories, that is,

B = { w ∈ C^∞(R^{m+1}, C^w) | D_R w = 0 },

where D_R : C^∞(R^{m+1}, C^w) → C^∞(R^{m+1}, C^g) is the differential map that acts as follows: if R = [r_{ik}]_{g×w} and w = (w_1, . . . , w_w), then the i-th component of D_R w is

Σ_{k=1}^{w} r_{ik}(∂/∂x_1, . . . , ∂/∂x_m, ∂/∂t) w_k,   i = 1, . . . , g.
Time-controllability is a property of the behavior, defined as follows. The behavior B is said to be time-controllable if for any w_1 and w_2 in B, there exist a w ∈ B and a τ ≥ 0 such that

w(•, t) = w_1(•, t) for all t ≤ 0  and  w(•, t) = w_2(•, t − τ) for all t ≥ τ.

2 MOTIVATION AND HISTORY OF THE PROBLEM

The behavioral theory for systems described by a set of linear constant-coefficient partial differential equations has been a challenging and fruitful area of research for quite some time (see, for instance, Pillai and Shankar [5], Oberst [3], and Wood et al. [4]). An excellent elementary introduction to the behavioral theory in the 1-D case (corresponding to systems described by a set of linear constant-coefficient ordinary differential equations) can be found in Polderman and Willems [6]. In [5], [3], and [4], the behaviours arising from systems of partial differential equations are studied in a general setting in which the time-axis does not play a distinguished role in the formulation of the definitions pertinent to control theory. Since in the study of systems with “dynamics” it is useful to give special importance to time in defining system-theoretic concepts, recent attempts have been made in this direction (see, for example, Cotroneo and Sasane [2], Sasane et al. [7], and Çamlıbel and Sasane [1]). The formulation of definitions with special emphasis on the time-axis is straightforward, since they can be seen quite easily as extensions of the pertinent definitions in the 1-D case. However, the algebraic characterization of the properties of the behavior, such as time-controllability, turns out to be quite involved.
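For comparison, the classical 1-D test recalled in Section 3 below (rank R(λ) constant over C) is effectively computable: it amounts to checking that the gcd of the r×r minors of R, where r is the generic rank, is a nonzero constant. A sketch with illustrative matrices (sympy assumed available; the helper name is ours):

```python
from functools import reduce
from itertools import combinations
import sympy as sp

xi = sp.symbols('xi')

def is_time_controllable_1d(R):
    """1-D behavioral test: ker R(d/dt) is time-controllable iff
    rank R(lambda) is constant over C, i.e. iff the gcd of all r x r
    minors of R (r = generic rank) has no root, hence is a nonzero
    constant."""
    r = R.rank()  # generic rank, over the field of rational functions
    minors = [sp.expand(R[list(rows), list(cols)].det())
              for rows in combinations(range(R.rows), r)
              for cols in combinations(range(R.cols), r)]
    g = reduce(sp.gcd, minors)
    return g != 0 and sp.degree(g, xi) == 0

# Rank drops at lambda = -1, a common root of both entries: not controllable.
R1 = sp.Matrix([[xi + 1, xi**2 - 1]])
# The 1 x 1 minors have no common root: controllable.
R2 = sp.Matrix([[xi + 1, xi + 2]])
print(is_time_controllable_1d(R1), is_time_controllable_1d(R2))
```

It is precisely this kind of finite gcd/rank computation that one would like an analogue of for the (m+1)-variable matrices R above.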
Although the traditional treatment of distributed parameter systems (in which one views them as an ordinary differential equation with an infinite-dimensional Hilbert space as the state-space) is quite successful, the study of the present problem will have its advantages, since it would give a test that is algebraic in nature (and hence computationally easy) for a property of the sets of trajectories, namely time-controllability. Another motivation for considering this problem is that the problem of patching up solutions of partial differential equations is also an interesting question from a purely mathematical point of view.

3 AVAILABLE RESULTS

In the 1-D case, it is well-known (see, for example, theorem 5.2.5 on page 154 of [6]) that time-controllability is equivalent to the following condition: there exists an r_0 ∈ N ∪ {0} such that for all λ ∈ C, rank(R(λ)) = r_0. This condition is in turn equivalent to the torsion freeness of the C[ξ]-module C[ξ]^w / C[ξ]^g R.

Let us consider the following statements:

A1. The C(η_1, . . . , η_m)[ξ]-module C(η_1, . . . , η_m)[ξ]^w / C(η_1, . . . , η_m)[ξ]^g R is torsion free.

A2. There exist a χ ∈ C[η_1, . . . , η_m, ξ]^w \ C(η_1, . . . , η_m)[ξ]^g R and a nonzero p ∈ C[η_1, . . . , η_m, ξ] such that p · χ ∈ C(η_1, . . . , η_m)[ξ]^g R and deg(p) = deg(π(p)), where π denotes the homomorphism p(ξ, η_1, . . . , η_m) → p(ξ, 0, . . . , 0) from C[ξ, η_1, . . . , η_m] to C[ξ].

In [2], [7], and [1], the following implications were proved:

¬A2  ⇒  B is time-controllable  ⇒  A1.

Although it is tempting to conjecture that the condition A1 might be the iff test for time-controllability, the diffusion equation reveals the precariousness of hazarding such a guess. In [1] it was shown that the diffusion equation is time-controllable with respect to¹ the space W defined below.
Before defining the set W, we recall the definition of the (small) Gevrey class of order 2, denoted by γ^(2)(R): γ^(2)(R) is the set of all ϕ ∈ C^∞(R, C) such that for every compact set K and every ε > 0 there exists a constant C_ε such that for every k ∈ N,

|ϕ^(k)(t)| ≤ C_ε ε^k (k!)^2 for all t ∈ K.

W is then defined to be the set of all w ∈ B such that w(0, •) ∈ γ^(2)(R). Furthermore, it was also shown in [1] that the control could then be implemented by the two-point control input functions acting at the point x = 0: u_1(t) = w(0, t) and u_2(t) = (∂w/∂x)(0, t) for all t ∈ R. The subset W of C^∞(R^2, C) functions comprises a large class of solutions of the diffusion equation. In fact, an interesting open problem is the problem of constructing a trajectory in the behavior that is not in the class W. Also, whether the whole behavior (and not just the trajectories in W) of the diffusion equation is time-controllable or not is an open question. The answers to these questions would either strengthen or discard the conjecture that the behavior corresponding to p ∈ C[η_1, . . . , η_m, ξ] is time-controllable iff p ∈ C[η_1, . . . , η_m], which would eventually help in settling the question of the equivalence of A1 and time-controllability.

BIBLIOGRAPHY

[1] M. K. Çamlıbel and A. J. Sasane, “Approximate time-controllability versus time-controllability,” submitted to the 15th MTNS, U.S.A., June 2002.

[2] T. Cotroneo and A. J. Sasane, “Conditions for time-controllability of behaviours,” International Journal of Control, 75, pp. 61–67, 2002.

[3] U. Oberst, “Multidimensional constant linear systems,” Acta Appl. Math., 20, pp. 1–175, 1990.
¹That is, for any two trajectories in W ∩ B, there exists a concatenating trajectory in W ∩ B.

[4] D. H. Owens, E. Rogers, and J. Wood, “Controllable and autonomous nD linear systems,” Multidimensional Systems and Signal Processing, 10, pp. 33–69, 1999.

[5] H. K. Pillai and S. Shankar, “A behavioural approach to the control of distributed systems,” SIAM Journal on Control and Optimization, 37, pp. 388–408, 1998.

[6] J. W. Polderman and J. C. Willems, Introduction to Mathematical Systems Theory, Springer-Verlag, 1998.

[7] A. J. Sasane, E. G. F. Thomas, and J. C. Willems, “Time-autonomy versus time-controllability,” accepted for publication in Systems and Control Letters, 2001.

Problem 1.9
A Farkas lemma for behavioral inequalities
A. A. (Tonny) ten Dam
Information, Communication and Technology Division
National Aerospace Laboratory NLR
P.O. Box 90502, 1006 BM Amsterdam
The Netherlands
[email protected]

J. W. (Hans) Nieuwenhuis
Faculty of Economics
University of Groningen
Postbus 800, 9700 AV Groningen
The Netherlands
[email protected]

1 DESCRIPTION OF THE PROBLEM

Within the systems and control community there has always been an interest in minimality issues. In this chapter, we conjecture a Farkas lemma for behavioral inequalities that, when true, will allow the study of minimality and elimination issues for behavioral systems described by inequalities.

Let R^{n×m}[s, s^{−1}] denote the (n × m) polynomial matrices with real coefficients and positive and negative powers of the indeterminate s. Let R_+^{n×m}[s, s^{−1}] denote the set of matrices in R^{n×m}[s, s^{−1}] with nonnegative coefficients only. In this chapter we consider discrete-time systems with time-axis Z. Let σ denote the (backward) shift operator, and let R(σ, σ^{−1}) denote polynomial operators in the shift. Of interest is the relation between two polynomial matrices R(s, s^{−1}) and R′(s, s^{−1}) when they satisfy

R(σ, σ^{−1})w ≥ 0  ⇒  R′(σ, σ^{−1})w ≥ 0.   (1)

Based on the static case, one may expect that such a relation should be the extension of Farkas's lemma to the behavioral case. This leads to the raison d'être of this chapter.

Conjecture: Let R ∈ R^{g×q}[s, s^{−1}] and R′ ∈ R^{g′×q}[s, s^{−1}]. Then {R(σ, σ^{−1})w ≥ 0 ⇒ R′(σ, σ^{−1})w ≥ 0} if and only if there exists a polynomial matrix H ∈ R_+^{g′×g}[s, s^{−1}] such that R′(s, s^{−1}) = H(s, s^{−1}) R(s, s^{−1}).

In order to prove this conjecture, one could try to extend the original proof given by Farkas in [4]. However, this proof explicitly uses the fact that every scalar that is unequal to zero is invertible. Such a general statement does not hold for elements of R^{g×q}[s, s^{−1}]. The most promising approach for the dynamic case seems to be the use of mathematical tools such as the Hahn–Banach separation theorem (see, for instance, [5]). The basic mathematical preliminaries read as follows. Denote E := (R^q)^Z with the topology of pointwise convergence.
The dual of E, denoted by E*, consists of all R^q-valued sequences that have compact support. Let R ∈ R^{g×q}[s, s^{−1}] and let B = {w ∈ E | R(σ, σ^{−1})w ≥ 0}. The polar cone of B, denoted by B^#, is given by B^# = {w* ∈ E* | for all w ∈ B : Σ_{t∈Z} w*(t)^T w(t) ≥ 0}. We would like to establish that B^# = {w* ∈ E* | ∃α ∈ E*, α ≥ 0, such that w* = R^T(σ^{−1}, σ)α}, but we have so far not been able to prove or disprove these statements. These statements, together with the fact that {B_1 ⊆ B_2} implies {B_2^# ⊆ B_1^#}, are believed to be useful in a proof of the conjecture.

2 MOTIVATION AND HISTORY OF THE PROBLEM

In the early nineties, the first author started to investigate minimality issues for so-called behavioral inequality systems, i.e., systems whose behavior B allows a description B = {w ∈ (R^q)^Z | R(σ, σ^{−1})w ≥ 0}. Examples can be found in [2]. The first publication that we are aware of that deals with this class of systems is [1], and the conjecture mentioned above can already be found in that paper. As the problem proved hard to solve, a number of investigations were carried out in the context of linear static inequalities, where the problem of minimal representations of systems containing both equalities and inequalities was solved [2]. The conjecture, however, withstood our efforts, and it became a part of the Ph.D. thesis of the first author [2]. As the study is placed in the context of behaviors, the Farkas lemma for behavioral inequalities is also discussed in the Willems Festschrift [3] (chapter 16). Until the Farkas lemma for behavioral inequalities has been proven, issues like minimal representations, elimination of latent variables, etcetera, cannot be solved in their full generality. It is our belief that the Farkas lemma for behavioral inequalities, as conjectured here, will be a cornerstone for further investigations in a theory of behavioral inequalities.
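In the static case, the conjectured factorization R′ = HR with H entrywise nonnegative can be computed row by row as a nonnegative least-squares problem. The toy check below (illustrative matrices of our choosing; scipy assumed available) recovers such an H for a pair where the implication Rw ≥ 0 ⇒ R′w ≥ 0 clearly holds:

```python
import numpy as np
from scipy.optimize import nnls

# Static Farkas lemma: {w : Rw >= 0} is contained in {w : R'w >= 0}
# iff R' = H R with H entrywise nonnegative. Each row h_i of H solves
# the nonnegative least-squares problem min ||R^T h - r'_i||, h >= 0.
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])        # Rw >= 0 is the positive orthant
Rp = np.array([[1.0, 1.0],
               [2.0, 3.0]])       # both rows are >= 0 on that orthant

H = np.zeros((Rp.shape[0], R.shape[0]))
residuals = []
for i in range(Rp.shape[0]):
    h, res = nnls(R.T, Rp[i])     # solve h R = r'_i with h >= 0
    H[i] = h
    residuals.append(res)

print(H, max(residuals))          # exact factorization: residuals ~ 0
```

A zero residual with h ≥ 0 certifies the factorization; a nonzero residual for some row would show that no such H exists. The open question above is precisely whether this picture survives in the Laurent-polynomial setting.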
3 AVAILABLE RESULTS

For the static case, the conjecture is nothing else than the famous Farkas lemma for linear inequalities. For the dynamic case, the conjecture holds true in a special case.

Proposition: Let R ∈ R^{g×q}[s, s^{−1}] be a full row rank polynomial matrix and let R′ ∈ R^{g′×q}[s, s^{−1}]. Then {R(σ, σ^{−1})w ≥ 0 ⇒ R′(σ, σ^{−1})w ≥ 0} if and only if there exists a unique polynomial matrix H ∈ R_+^{g′×g}[s, s^{−1}] such that R′(s, s^{−1}) = H(s, s^{−1}) R(s, s^{−1}).

The proof of this proposition can be found in [2] (proposition 4.5.12).

4 A RELATED CONJECTURE

It is of interest to present a related conjecture, whose resolution is closely linked to the Farkas lemma for behavioral inequalities. Recall from [6] that a matrix U ∈ R^{g×g}[s, s^{−1}] is said to be unimodular if it has an inverse U^{−1} ∈ R^{g×g}[s, s^{−1}]. We will call a matrix H ∈ R_+^{g×g}[s, s^{−1}] posimodular if it is unimodular and H^{−1} ∈ R_+^{g×g}[s, s^{−1}]. Omitting the formal definitions, we will call a representation minimal if the number of equations used to describe the behavior is minimal.

Conjecture: Let {w ∈ (R^q)^Z | R_1(σ, σ^{−1})w = 0 and R_2(σ, σ^{−1})w ≥ 0} and {w ∈ (R^q)^Z | R′_1(σ, σ^{−1})w = 0 and R′_2(σ, σ^{−1})w ≥ 0} both be minimal representations. They represent the same behavior if and only if there are polynomial matrices U(s, s^{−1}), H(s, s^{−1}), and S(s, s^{−1}) such that

[ R′_1(s, s^{−1}) ]   [ U(s, s^{−1})   0            ] [ R_1(s, s^{−1}) ]
[ R′_2(s, s^{−1}) ] = [ S(s, s^{−1})   H(s, s^{−1}) ] [ R_2(s, s^{−1}) ]   (2)

with U unimodular, H posimodular, and no conditions on S.

We remark that this conjecture holds true for static inequalities, and for that case it is given as proposition 3.4.5 in [2].

BIBLIOGRAPHY

[1] A. A. ten Dam, “Representations of dynamical systems described by behavioural inequalities,” In: Proceedings European Control Conference ECC'93, vol. 3, pp. 1780–1783, June 28–July 1, Groningen, The Netherlands, 1993.

[2] A. A. ten Dam, Unilaterally Constrained Dynamical Systems, Ph.D. Dissertation, University of Groningen, The Netherlands, 1997.
URL: http://www.ub.rug.nl/eldoc/dis/science/a.a.ten.dam/

[3] A. A. ten Dam and J. W. Nieuwenhuis, “On behavioural inequalities,” In: The Mathematics of Systems and Control: From Intelligent Control to Behavioral Systems, Groningen, 1999, pp. 165–176.

[4] J. Farkas, “Die algebraische Grundlage der Anwendungen des mechanischen Princips von Fourier,” Mathematische und naturwissenschaftliche Berichte aus Ungarn, 16, pp. 154–157, 1899. (Translation of: Gy. Farkas, “A Fourier-féle mechanikai elv alkalmazásának algebrai alapja,” Mathematikai és Természettudományi Értesítő, 16, pp. 361–364, 1898.)

[5] W. Rudin, Functional Analysis, Tata McGraw-Hill Publishing Company Ltd., 1973.

[6] J. C. Willems, “Paradigms and puzzles in the theory of dynamical systems,” IEEE Transactions on Automatic Control, vol. 36, no. 3, pp. 259–294, 1991.

Problem 1.10
Regular feedback implementability of linear differential behaviors
H. L. Trentelman
Mathematics Institute
University of Groningen
P.O. Box 800, 9700 AV Groningen
The Netherlands
[email protected]

1 INTRODUCTION

In this short paper, we want to discuss an open problem that appears in the context of interconnection of systems in a behavioral framework. Given a system behavior, playing the role of plant to be controlled, the problem is to characterize all system behaviors that can be achieved by interconnecting the plant behavior with a controller behavior, where the interconnection should be a regular feedback interconnection. More specifically, we will deal with linear time-invariant differential systems, i.e., dynamical systems Σ given as a triple {R, R^w, B}, where R is the time-axis, and where B, called the behavior of the system Σ, is equal to the set of all solutions w : R → R^w of a set of higher order, linear, constant-coefficient differential equations. More precisely,

B = { w ∈ C^∞(R, R^w) | R(d/dt)w = 0 },

for some polynomial matrix R ∈ R^{•×w}[ξ]. The set of all such systems Σ is denoted by L^w. Often, we simply refer to a system by talking about its behavior, and we write B ∈ L^w instead of Σ ∈ L^w. Behaviors B ∈ L^w can hence be described by differential equations of the form R(d/dt)w = 0, typically with the number of rows of R strictly less than its number of columns. Mathematically, R(d/dt)w = 0 is then an underdetermined system of equations. This results in the fact that some of the components of w = (w_1, w_2, . . . , w_w) are unconstrained. This number of unconstrained components is an integer “invariant” associated with B; it is called the input cardinality of B, denoted by m(B), its number of free, “input,” variables. The remaining number of variables, w − m(B), is called the output cardinality of B and is denoted by p(B). Finally, a third integer invariant associated with a system behavior B ∈ L^w is its McMillan degree.
It can be shown that (modulo permutation of the components of the external variable w) any B ∈ L^w can be represented by a state space representation of the form dx/dt = Ax + Bu, y = Cx + Du, w = (u, y). Here, A, B, C, and D are constant matrices with real components. The minimal number of components of the state variable x needed in such an input/state/output representation of B is called the McMillan degree of B, and is denoted by n(B).

Suppose now that Σ_1 = {R, R^{w_1} × R^{w_2}, B_1} ∈ L^{w_1+w_2} and Σ_2 = {R, R^{w_2} × R^{w_3}, B_2} ∈ L^{w_2+w_3} are linear differential systems with the common factor R^{w_2} in the signal space. The manifest variable of Σ_1 is (w_1, w_2) and that of Σ_2 is (w_2, w_3). The variable w_2 is shared by the systems, and it is through this variable, called the interconnection variable, that we can interconnect the systems. We define the interconnection of Σ_1 and Σ_2 through w_2 as the system

Σ_1 ∧_{w_2} Σ_2 := {R, R^{w_1} × R^{w_2} × R^{w_3}, B_1 ∧_{w_2} B_2},

with interconnection behavior

B_1 ∧_{w_2} B_2 := { (w_1, w_2, w_3) | (w_1, w_2) ∈ B_1 and (w_2, w_3) ∈ B_2 }.

The interconnection Σ_1 ∧_{w_2} Σ_2 is called a regular interconnection if the output cardinalities of B_1 and B_2 add up to that of B_1 ∧_{w_2} B_2:

p(B_1 ∧_{w_2} B_2) = p(B_1) + p(B_2).

It is called a regular feedback interconnection if, in addition, the sum of the McMillan degrees of B_1 and B_2 is equal to the McMillan degree of the interconnection:

n(B_1 ∧_{w_2} B_2) = n(B_1) + n(B_2).

It can be proven that the interconnection of Σ_1 and Σ_2 is a regular feedback interconnection if, possibly after permutation of components within w_1, w_2, and w_3, there exists a componentwise partition of w_2 into w_2 = (u, y_1, y_2), of w_1 into w_1 = (v_1, z_1), and of w_3 into w_3 = (v_2, z_2) such that the following four conditions hold:

1. in the system Σ_1, (v_1, y_2, u) is input and (z_1, y_1) is output, and the transfer matrix from (v_1, y_2, u) to (z_1, y_1) is proper;

2.
in the system Σ_2, (v_2, y_1, u) is input and (z_2, y_2) is output, and the transfer matrix from (v_2, y_1, u) to (z_2, y_2) is proper;

3. in the system Σ_1 ∧_{w_2} Σ_2, (v_1, v_2, u) is input and (z_1, z_2, y_1, y_2) is output, and the transfer matrix from (v_1, v_2, u) to (z_1, z_2, y_1, y_2) is proper;

4. if we introduce new “perturbation signals” e_1 and e_2 and, instead of y_1 and y_2, we apply the inputs y_1 + e_2 and y_2 + e_1 to Σ_2 and Σ_1 respectively, then the transfer matrix from (v_1, v_2, u, e_1, e_2) to (z_1, z_2, y_1, y_2) is proper.

The first three of these conditions state that, in the interconnection of Σ_1 and Σ_2, along the terminals of the interconnected system one can identify a signal flow that is compatible with the signal flow diagram of a feedback configuration with proper transfer matrices. The fourth condition states that this feedback interconnection is “well-posed.” The equivalence of the property of being a regular feedback interconnection with these four conditions was studied for the “full interconnection case” in [8] and [2].

2 STATEMENT OF THE PROBLEM

Suppose P_full ∈ L^{w+c} is a system (the plant) with two types of external variables, namely c and w. The first of these, c, is the interconnection variable through which it can be interconnected to a second system C ∈ L^c (the controller) with external variable c. The external variable c is shared by P_full and C. The remaining variable w is the variable through which P_full interacts with the rest of its environment. After interconnecting plant and controller through the shared variable c, we obtain the full controlled behavior P_full ∧_c C ∈ L^{w+c}. The manifest controlled behavior K ∈ L^w is obtained by projecting all trajectories (w, c) ∈ P_full ∧_c C on their first coordinate:

K := { w | there exists c such that (w, c) ∈ P_full ∧_c C }.   (1)

If this holds, then we say that C implements K. If, for a given K ∈ L^w, there exists C ∈ L^c such that C implements K, then we call K implementable.
If, in addition, the interconnection of P_full and C is regular, we call K regularly implementable. Finally, if the interconnection of P_full and C is a regular feedback interconnection, we call K implementable by regular feedback. This now brings us to the statement of our problem: the problem is to characterize, for a given P_full ∈ L^{w+c}, the set of all behaviors K ∈ L^w that are implementable by regular feedback. In other words:

Problem statement: Let P_full ∈ L^{w+c} be given. Let K ∈ L^w. Find necessary and sufficient conditions on K under which there exists C ∈ L^c such that

1. C implements K [meaning that (1) holds],

2. p(P_full ∧_c C) = p(P_full) + p(C),

3. n(P_full ∧_c C) = n(P_full) + n(C).

Effectively, a characterization of all such behaviors K ∈ L^w gives a characterization of the “limits of performance” of the given plant under regular feedback control.

3 BACKGROUND

Our open problem is to find conditions for a given K ∈ L^w to be implementable by regular feedback. An obvious necessary condition for this is that K is implementable, i.e., it can be achieved by interconnecting the plant with a controller by (just any) interconnection through the interconnection variable c. Necessary and sufficient conditions for implementability have been obtained in [7]. These conditions are formulated in terms of two behaviors derived from the full plant behavior P_full:

P := { w | there exists c such that (w, c) ∈ P_full }  and  N := { w | (w, 0) ∈ P_full }.

P and N are both in L^w, and are called the manifest plant behavior and the hidden behavior associated with the full plant behavior P_full, respectively. In [7] it has been shown that K ∈ L^w is implementable if and only if
N ⊆ K ⊆ P,   (2)

i.e., K contains N and is contained in P. This elegant characterization of the set of implementable behaviors still holds true if, instead of (ordinary) linear differential system behaviors, we deal with nD linear system behaviors, which are system behaviors that can be represented by partial differential equations of the form

R(∂/∂x_1, ∂/∂x_2, . . . , ∂/∂x_n) w(x_1, x_2, . . . , x_n) = 0,

with R(ξ_1, ξ_2, . . . , ξ_n) a polynomial matrix in n indeterminates. Recently, in [6] a variation of condition (2) was shown to be sufficient for implementability of system behaviors in a more general (including nonlinear) context.

For a system behavior K ∈ L^w to be implementable by regular feedback, another necessary condition is of course that K is regularly implementable, i.e., it can be achieved by interconnecting the plant with a controller by regular interconnection through the interconnection variable c. Also for regular implementability, necessary and sufficient conditions can already be found in the literature. In [1] it has been shown that a given K ∈ L^w is regularly implementable if and only if, in addition to condition (2), the following condition holds:

K + P_cont = P.   (3)

Condition (3) states that the sum of K and the controllable part of P is equal to P. The controllable part P_cont of the behavior P is defined as the largest controllable subbehavior of P, which is the unique behavior P_cont with the properties that 1) P_cont ⊆ P, and 2) P′ controllable and P′ ⊆ P implies P′ ⊆ P_cont. Clearly, if the manifest plant behavior P is controllable, then P = P_cont, so condition (3) automatically holds. In this case, implementability and regular implementability are equivalent properties. For the special case N = 0 (which is equivalent to the “full interconnection case”), conditions (2) and (3) for regular implementability in the context of nD system behaviors can also be found in [4].
In the same context, results on regular implementability can also be found in [9]. We finally note that, again for the full interconnection case, the open problem stated in this paper has recently been studied in [3], using a somewhat different notion of linear system behavior, in discrete time. Up to now, however, the general problem has remained unsolved.

BIBLIOGRAPHY

[1] M. N. Belur and H. L. Trentelman, “Stabilization, pole placement and regular implementability,” IEEE Transactions on Automatic Control, May 2002.

[2] M. Kuijper, “Why do stabilizing controllers stabilize?” Automatica, vol. 31, pp. 621–625, 1995.

[3] V. Lomadze, On Interconnections and Control, manuscript, 2001.

[4] P. Rocha and J. Wood, “Trajectory control and interconnection of nD systems,” SIAM Journal on Contr. and Opt., vol. 40, no. 1, pp. 107–134, 2001.

[5] J. W. Polderman and J. C. Willems, Introduction to Mathematical Systems Theory: A Behavioral Approach, Springer-Verlag, 1997.

[6] A. J. van der Schaft, Achievable Behavior of General Systems, manuscript, submitted for publication, 2002.

[7] J. C. Willems and H. L. Trentelman, “Synthesis of dissipative systems using quadratic differential forms, Part 1,” IEEE Transactions on Automatic Control, vol. 47, no. 1, pp. 53–69, 2002.

[8] J. C. Willems, “On interconnections, control and feedback,” IEEE Transactions on Automatic Control, vol. 42, pp. 326–337, 1997.

[9] E. Zerz and V. Lomadze, “A constructive solution to interconnection and decomposition problems with multidimensional behaviors,” SIAM Journal on Contr. and Opt., vol. 40, no. 4, pp. 1072–1086, 2001.

Problem 1.11
Riccati stability
Erik I. Verriest¹
School of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, GA 30332-0250, USA
[email protected]

1 DESCRIPTION OF THE PROBLEM

Given two n × n real matrices A and B, consider the matrix Riccati equation

A^T P + P A + Q + P B Q^{−1} B^T P + R = 0.   (1)

Can one characterize the pairs (A, B) for which the above equation has a solution with positive definite symmetric matrices P, Q, and R? In [8] a pair (A, B) was defined to be Riccati stable if a triple of positive definite matrices P, Q, R exists such that (1) holds. The problem may be stated equivalently as an LMI: can one characterize, without invoking additional matrices, all pairs (A, B) for which there exist positive definite matrices P and Q such that

[ A^T P + P A + Q    P B ]
[ B^T P             −Q   ] < 0.   (2)

2 MOTIVATION AND HISTORY OF THE PROBLEM

Equation (1) plays an important role in the stability analysis of linear time-invariant delay-differential systems. It is known [9] that the autonomous system

ẋ(t) = A x(t) + B x(t − τ)   (3)

is asymptotically stable for all values of τ ≥ 0 if the pair (A, B) is Riccati stable. Note that since (3) has to be stable for τ = 0 and τ → ∞, the matrices A + B and A have to be Hurwitz stable, i.e., have their spectra in the open left half plane. Recall also that a matrix C is Schur–Cohn stable if its spectrum lies in the open unit disk. If B = 0, thus reducing the problem to a finite-dimensional time-invariant system, the Riccati equation reduces to the ubiquitous Lyapunov equation

A^T P + P A + S = 0,   (4)

where we have set Q + R = S. It is well known that a positive definite pair (P, S) exists if and only if A is Hurwitz; this condition is necessary and sufficient. The above result and its equivalent LMI formulation initiated a whole set of extensions: for multiple delays, distributed delays, and time-variant systems (with time-variant delays) [3, 5].

¹Support by the NSF-CNRS collaborative grant INT-9818312 is gratefully acknowledged.
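For a concrete pair, Riccati stability can be certified by exhibiting one feasible pair (P, Q) in the strict form of (1)–(2). The sketch below (an illustrative example of ours, not from the text; numpy assumed available) checks the certificate P = Q = I for one pair, together with the spectral necessary conditions recalled in Section 3 (A Hurwitz, A^{−1}B Schur–Cohn):

```python
import numpy as np

# Candidate pair and certificate: with P = Q = I, Riccati stability
# requires M = A^T P + P A + Q + P B Q^{-1} B^T P < 0 (then R = -M > 0
# solves equation (1)).
A = np.array([[-3.0,  1.0],
              [ 0.0, -2.0]])
B = np.array([[ 0.5,  0.0],
              [ 0.2,  0.4]])
P = np.eye(2)
Q = np.eye(2)

M = A.T @ P + P @ A + Q + P @ B @ np.linalg.inv(Q) @ B.T @ P
riccati_stable = np.linalg.eigvalsh(M).max() < 0   # M is symmetric here

# Necessary spectral conditions (cf. Theorem 1 in Section 3):
# A Hurwitz and A^{-1} B Schur-Cohn.
hurwitz = np.linalg.eigvals(A).real.max() < 0
schur_cohn = np.abs(np.linalg.eigvals(np.linalg.solve(A, B))).max() < 1

print(riccati_stable, hurwitz, schur_cohn)
```

Such a feasibility check verifies one candidate (P, Q); the open problem is the converse direction, i.e., an explicit description of all pairs (A, B) for which some feasible (P, Q) exists.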
In addition, all the above variants can further be extended to include parameter variations (robust stability) and noise (stochastic stability). Also, other types of functional differential equations (scale delay) lead to such conditions [7]. The main idea in deriving these results is the use of Lyapunov–Krasovskii theory with appropriate Lyapunov functionals. Equation (1) appears also in H∞ control theory and in game theory.

3 AVAILABLE RESULTS

In [8], where Riccati stability was called “d-stability,” referring to “delay,” the following connections with spectral properties of A and B were obtained.

Theorem 1: If there exists a triple of symmetric positive definite matrices P, Q, and R satisfying (1), then A is Hurwitz and A^{−1}B is Schur–Cohn.

There is no complete converse of this theorem; however, two partial converses are easily proven:

Theorem 2: If the matrix product A^{−1}B is Schur–Cohn, then there exists an orthogonal matrix Θ such that ΘA is Hurwitz and the pair (ΘA, ΘB) is Riccati stable.

Theorem 3: If the matrix A is Hurwitz, then there exists a matrix B such that A^{−1}B is Schur–Cohn and (A, B) is Riccati stable.

In addition, the following scaling properties are shown in [8]:

Lemma 1: If (A, B) is Riccati stable, then (αA, αB) is Riccati stable for all α > 0.

Lemma 2: If (A, B) is Riccati stable, then (SAS^{−1}, SBS^{−1}) is Riccati stable for all nonsingular S.

Lemma 3: If (A, B) is Riccati stable and B has full rank, then (A^T, B^T) is Riccati stable. The full rank condition on B can be relaxed; Lemma 3 is a duality result.

In [8] a detailed construction was given for a subset of Riccati-stable pairs for the case n = 2. It leads to an (over)parameterization, but the construction readily extends to arbitrary dimensions by using

Theorem 4: Assume that the pairs {(A_i, B_i) | i = 1, . . . , N} are Riccati stable for the same P matrix, i.e., there exist Q_i, R_i > 0, i = 1, . . . , N, such that

A_i^T P + P A_i + Q_i + P B_i Q_i^{−1} B_i^T P + R_i = 0.
i Then all pairs in the positive cone generated by the above pairs are Riccatistable. i.e., ∀αi ≥ 0, but not all zero, the pair ( i αi Ai , i αi Bi ) is Riccatistable. The invariance of Riccatistability under similarity (lemma 2) ensures that if (A, B ) is Riccatistable, one can transform the system to one for which the new P matrix, i.e., S −T P S −1 is the identity. Thus motivated, we provide a simpliﬁed form: Given B , denote by AB the set of matrices A for which (A, B ) is Riccati stable, with P = I , i.e., AB = { A  ∃Q = Q > 0, s.t. A + A + Q + BQ−1 B < 0 }. Hence, a necessary condition for A ∈ AB is that its symmetric part As satisﬁes 1 As < − ( Q + BQ−1 B ) , 2 for some Q > 0. If for each B the set AB can be determined, the proposed problem will be solved. The following special case is proven: Theorem 5: If B is in the realdiagonal form, B = Blockdiag {Λ+ , 0, −Λ− , B1 , . . . , Bc } where Λ+ = diag{λ1 , . . . , λp } are the positive real eigenvalues, −Λ− = diag{−λp+1 , . . . , −λp+m } are the negative real ones, and the Bk ’s are 2 × 2 σk ωk blocks Bk = , associated with the complex eigenvalues σk ± iωk , −ωk σk then the set AB is characterized by the set of all matrices, A, whose symmetric part satisﬁes As < −2 Blockdiag {Λ+ , 0, Λ− , σ1 I2 . . . , σc I2 }. Proof: In this block diagonal form, it is clear that it suﬃces to choose the same blockdiagonal structure for Q and the problem decouples. For real λ2 eigenvalues in the sets Λ+ and −Λ− , observe that q + qk ≥ 2λk  and equality is obtained for q = λk . Likewise for a zero eigenvalue, the corresponding q may be taken inﬁnitesimally small. For complex conjugate eigenvalue pairs, observe that q1 q q q2 + σk −ωk ωk σk q1 q q q2
−1 σk −ωk ωk σk ≥ 2σk  1 0 0 1 . 52 Equality is achieved with
2 PROBLEM 1.11 q1 q σ  ω = , if σ  ≥ ω  and, q q2 ω σ  ρ ρ switching to polar form , with q1 =  cos φ (1 + cos χ), q2 =  cos φ (1 − cos χ), and q = ρ tan2 φ − cos2 χ , where ρ and φ are respectively the modulus and cos φ the argument of the complex eigenvalue σ +iω , and χ arbitrary with  cos χ <  sin φ if σ  < ω . In the latter case, the solution was obtained by direct optimization of the minimal eigenvalue of the matrix Q + Bk Q−1 Bk over all positive deﬁnite matrices Q. Hence Q+BQ−1 B ≥ 0 if B is singular, and Q+ BQ−1 B ≥ 2zI , where z = min ({λk ; k = 1 . . . p + m} {σ , = 1, . . . , c}) if B has full rank, from which the theorem follows. Equations related to (1) are also discussed in [1,2,4,6]. BIBLIOGRAPHY [1] W. N. Anderson, Jr., T. D. Morley and G. E. Trapp, “Positive solutions to X = A − B ∗ X −1 B ∗ ,” LAA, 134, pp. 5362, 1990. [2] R. Datko, “Solutions of the operator equation A∗ K + KA + KRK = −W ,” In:Semigroups of Operators: Theory and Applications, A.V. Balakrishnan, Ed., Birkh¨user, 2000. a [3] L. Dugard and E. I. Verriest, Stability and Control of TimeDelay Systems, SpringerVerlag, LNCIS, vol. 228, 1998. [4] J. Engwerda, “On the existence of a positive deﬁnite solution to the matrix equation X + AT X −1 A = I ,” LAA, 194, pp. 91108, 1993. [5] V. B. Kolmanovskii, S.I. Niculescu and K. Gu, “Delay eﬀects on stability: A survey,” Proc. 38th IEEE Conf. Dec. Control, Phoenix, AZ, pp. 19931998, 199). [6] A. C. M. Ran and M. C. B. Reurings, “On the nonlinear matrix equation X + A∗ F(X )A = Q: solutions and perturbation theory,” Linear Algebra and its Applications, vol. 346, pp. 1526, 2002. [7] E.I. Verriest, “Robust stability, adjoints, and LQ control of scaledelay systems,” Proc. 38th Conf. Dec. Control, Phoenix, AZ, pp. 209214, 1999. [8] E.I. Verriest, “Robust stability and stabilization: from linear to nonlinear,” Proceedings of the 2nd IFAC Workshop on Linear Time Delay Systems, Ancona, Italy, pp. 184195, September 2000. RICCATI STABILITY 53 [9] E.I. 
Verriest and A. F. Ivanov, "Robust stabilization of systems with delayed feedback," Proceedings of the Second International Symposium on Implicit and Robust Systems, Warszawa, Poland, pp. 190-193, July 1991.

Problem 1.12
State and first order representations

Jan C. Willems
Department of Electrical Engineering - SCD (SISTA)
University of Leuven
Kasteelpark Arenberg 10
B-3001 Leuven-Heverlee
Belgium
[email protected]

1 DESCRIPTION OF THE PROBLEM

We conjecture that the solution set of a system of linear constant coefficient PDEs is Markovian if and only if it is the solution set of a system of first order PDEs. An analogous conjecture regarding state systems is also made.

Notation

First, we introduce our notation for the solution sets of linear PDEs in the n real independent variables x = (x_1, …, x_n). Let D_n denote, as usual, the set of real distributions on R^n, and L^w_n the linear subspaces of (D_n)^w consisting of the solutions of a system of linear constant coefficient PDEs in the w real-valued dependent variables w = col(w_1, …, w_w). More precisely, each element B ∈ L^w_n is defined by a polynomial matrix R ∈ R^{•×w}[ξ_1, ξ_2, …, ξ_n], with w columns but any number of rows, such that

    B = { w ∈ (D_n)^w | R(∂/∂x_1, ∂/∂x_2, …, ∂/∂x_n) w = 0 }.

We refer to elements of L^w_n as linear differential nD systems. The above PDE is called a kernel representation of B ∈ L^w_n. Note that each B ∈ L^w_n has many kernel representations. For an in-depth study of L^w_n, see [1] and [2].
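As a concrete toy instance (our own example, not from the text), take n = 2 and w = 1 with the polynomial matrix R(ξ_1, ξ_2) = ξ_1 + ξ_2. The corresponding kernel representation is the transport equation, and every smooth function of x_1 - x_2 lies in the behavior B:

```python
import sympy as sp

# Toy instance (our own example) of a kernel representation: n = 2, w = 1,
# R(xi1, xi2) = xi1 + xi2, i.e., the transport equation (d/dx1 + d/dx2) w = 0.
x1, x2 = sp.symbols("x1 x2")
w = sp.exp(-(x1 - x2) ** 2)                 # one C-infinity trajectory in B
residual = sp.diff(w, x1) + sp.diff(w, x2)  # R(d/dx1, d/dx2) applied to w
print(sp.simplify(residual) == 0)
```

Since R here is first order, this particular behavior is (per the conjecture below, and provably in this simple case) Markovian.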
(This research is supported by the Belgian Federal Government under the DWTC program Interuniversity Attraction Poles, Phase V, 2002-2006, Dynamical Systems and Control: Computation, Identification and Modelling.)

Next, we introduce a class of special three-way partitions of R^n. Denote by P the following set of partitions of R^n:

    [(S_-, S_0, S_+) ∈ P] :⇔ [(S_-, S_0, S_+ are disjoint subsets of R^n) ∧ (S_- ∪ S_0 ∪ S_+ = R^n) ∧ (S_- and S_+ are open, and S_0 is closed)].

Finally, we define concatenation of maps on R^n. Let f_-, f_+ : R^n → F, and let π = (S_-, S_0, S_+) ∈ P. Define the map f_- ∧_π f_+ : R^n → F, called the concatenation of (f_-, f_+) along π, by

    (f_- ∧_π f_+)(x) := f_-(x) for x ∈ S_-,  and  f_+(x) for x ∈ S_0 ∪ S_+.

Markovian systems

Define B ∈ L^w_n to be Markovian :⇔

    [(w_-, w_+ ∈ B ∩ C^∞(R^n, R^w)) ∧ (π = (S_-, S_0, S_+) ∈ P) ∧ (w_-|_{S_0} = w_+|_{S_0})] ⇒ [w_- ∧_π w_+ ∈ B].

Think of S_- as the "past," S_0 as the "present," and S_+ as the "future." Markovian means that if two solutions of the PDE agree on the present, then their pasts and futures are compatible, in the sense that the past (and present) of one, concatenated with the (present and) future of the other, is also a solution. In the language of probability: the past and the future are independent given the present. We come to our first conjecture:

B ∈ L^w_n is Markovian if and only if it has a kernel representation that is first order.

Thus, it is conjectured that a Markovian system admits a kernel representation of the form

    R_0 w + R_1 ∂w/∂x_1 + R_2 ∂w/∂x_2 + ⋯ + R_n ∂w/∂x_n = 0.

Oberst [2] has proven that there is a one-to-one relation between L^w_n and the submodules of R^w[ξ_1, ξ_2, …, ξ_n], each B ∈ L^w_n being identifiable with its set of annihilators

    N_B := { n ∈ R^w[ξ_1, ξ_2, …, ξ_n] | n^T(∂/∂x_1, ∂/∂x_2, …, ∂/∂x_n) B = 0 }.

Markovianity is hence conjectured to correspond exactly to those B ∈ L^w_n for which the submodule N_B has a set of first order generators.
State systems

In this section we consider systems with two kinds of variables: w real-valued manifest variables, w = col(w_1, …, w_w), and z real-valued state variables, z = col(z_1, …, z_z). Their joint behavior is again assumed to be the solution set of a system of linear constant coefficient PDEs. Thus we consider behaviors in L^{w+z}_n, whence each element B ∈ L^{w+z}_n is described in terms of two polynomial matrices (R, M) ∈ R^{•×(w+z)}[ξ_1, ξ_2, …, ξ_n] by

    B = { (w, z) ∈ (D_n)^w × (D_n)^z | R(∂/∂x_1, …, ∂/∂x_n) w + M(∂/∂x_1, …, ∂/∂x_n) z = 0 }.

Define B ∈ L^{w+z}_n to be a state system with state z :⇔

    [((w_-, z_-), (w_+, z_+) ∈ B ∩ C^∞(R^n, R^{w+z})) ∧ (π = (S_-, S_0, S_+) ∈ P) ∧ (z_-|_{S_0} = z_+|_{S_0})] ⇒ [(w_-, z_-) ∧_π (w_+, z_+) ∈ B].

Think again of S_- as the "past," S_0 as the "present," and S_+ as the "future." State means that if the state components of two solutions agree on the present, then their pasts and futures are compatible, in the sense that the past of one solution (involving both the manifest and the state variables), concatenated with the present and future of the other solution, is also a solution. In the language of probability: the present state "splits" the past and the present plus future of the manifest and the state trajectory combined. We come to our second conjecture:

B ∈ L^{w+z}_n is a state system if and only if it has a kernel representation that is first order in the state variables z and zeroth order in the manifest variables w.

I.e., it is conjectured that a state system admits a kernel representation of the form

    R_0 w + M_0 z + M_1 ∂z/∂x_1 + M_2 ∂z/∂x_2 + ⋯ + M_n ∂z/∂x_n = 0.

2 MOTIVATION AND HISTORY OF THE PROBLEM

These open problems aim at understanding state and state construction for nD systems. Maxwell's equations constitute an example of a Markovian system. The diffusion equation and the wave equation are non-examples.
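In the 1-D (ODE) case, where the conjectures are known to hold, Markovianity is easy to visualize numerically. The toy sketch below (our own illustration, not from the text) takes the behavior of dw/dt + w = 0: two smooth solutions that agree at the present t = 0 are necessarily the same exponential, so their concatenation along π = ((-∞, 0), {0}, (0, ∞)) again solves the ODE:

```python
import numpy as np

# Toy illustration (our own): the 1-D behavior B = {w : dw/dt + w = 0} is
# Markovian.  If w-(0) = w+(0), then w- and w+ are the same exponential
# c*exp(-t), so the concatenation w- ∧π w+ is again in B.
t_minus = np.linspace(-1.0, 0.0, 201)
t_plus = np.linspace(0.0, 1.0, 201)
c = 2.0                              # common value at the "present" t = 0
w_minus = c * np.exp(-t_minus)       # solution used on S- = (-inf, 0)
w_plus = c * np.exp(-t_plus)         # solution used on S0 ∪ S+ = [0, inf)

t = np.concatenate([t_minus[:-1], t_plus])
w = np.concatenate([w_minus[:-1], w_plus])   # the concatenation w- ∧π w+
residual = np.gradient(w, t) + w             # should vanish along all of t
print(np.max(np.abs(residual[1:-1])) < 1e-2)
```

For a second order ODE such as the wave or diffusion analogue, agreement of w alone on the present does not force the derivatives to match, which is exactly why those equations are non-examples of Markovianity in the sense above.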
3 AVAILABLE RESULTS

It is straightforward to prove the "if" part of both conjectures. The conjectures are true for n = 1, i.e., in the ODE case; see [3].

BIBLIOGRAPHY

[1] H. K. Pillai and S. Shankar, "A behavioral approach to control of distributed systems," SIAM Journal on Control and Optimization, vol. 37, pp. 388-408, 1999.

[2] U. Oberst, "Multidimensional constant linear systems," Acta Applicandae Mathematicae, vol. 20, pp. 1-175, 1990.

[3] P. Rapisarda and J. C. Willems, "State maps for linear systems," SIAM Journal on Control and Optimization, vol. 35, pp. 1053-1091, 1997.

Problem 1.13
Projection of state space realizations

Antoine Vandendorpe and Paul Van Dooren
Department of Mathematical Engineering
Université catholique de Louvain
B-1348 Louvain-la-Neuve
Belgium

1 DESCRIPTION OF THE PROBLEM

We consider two m × p strictly proper transfer functions

    T(s) = C (sI_n - A)^{-1} B,    T̂(s) = Ĉ (sI_k - Â)^{-1} B̂,    (1)

of respective McMillan degrees n and k < n. We want to characterize the set of projecting matrices Z, V ∈ C^{n×k} such that

    Ĉ = CV,   Â = Z^T A V,   B̂ = Z^T B,   Z^T V = I_k.    (2)

Given only T(s), we are interested in characterizing the set of all transfer functions T̂(s) that can be obtained via the projection equations (1), (2). Here is our first conjecture.

Conjecture 1. Any minimal state space realization of T̂(s) can be obtained by a projection from any minimal state space realization of T(s) if

    (m + p)/2 ≤ n - k.    (3)

In the case that condition (3) is not satisfied, we give a second, more detailed conjecture in section 3 in terms of the zero structure of T(s) - T̂(s). A justification of Conjecture 1 is that it actually holds for SISO systems. Indeed, the following result was shown in [4]:

Theorem. Let T(s) = C (sI_n - A)^{-1} B and T̂(s) = Ĉ (sI_k - Â)^{-1} B̂ be arbitrary strictly proper SISO transfer functions of McMillan degrees n and k < n, respectively. Then any minimal state space realization of T̂(s) can be constructed via projection of any minimal state space realization of T(s) using equations (2).

2 MOTIVATION AND HISTORY OF THE PROBLEM

Equation (2) arises naturally in the general framework of model reduction of large scale linear systems [1]. In this context we are given a transfer function T(s) of McMillan degree n, which we want to approximate by a transfer function T̂(s) of smaller McMillan degree k, in order to solve a simpler analysis or design problem.
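To see equations (2) at work, the sketch below (our own illustration; the sizes, matrices, and shift points are assumptions, not from the text) builds Z and V from shifted solves (σI_n - A)^{-1}B and (γI_n - A^T)^{-1}C^T, enforces the bi-orthogonality Z^T V = I_k, and checks that the projected transfer function T̂(s) interpolates T(s) at the chosen points:

```python
import numpy as np

# Sketch (our own toy construction; sizes, matrices, and shifts are assumed):
# build projecting matrices Z, V with Z^T V = I_k, apply the projection
# equations (2), and check the resulting interpolation property.
rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

def T(s, A, B, C):
    """Evaluate the (here SISO) transfer function C (sI - A)^{-1} B at s."""
    return (C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B))[0, 0]

sigmas, gammas = [1.0, 2.0], [3.0, 4.0]  # shift points, assumed not eigenvalues
V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, B).ravel() for s in sigmas])
W = np.column_stack([np.linalg.solve(g * np.eye(n) - A.T, C.T).ravel() for g in gammas])
Z = W @ np.linalg.inv(W.T @ V).T         # enforce bi-orthogonality Z^T V = I_k

Ahat, Bhat, Chat = Z.T @ A @ V, Z.T @ B, C @ V   # the projected realization (2)
print(np.allclose(Z.T @ V, np.eye(k)))
print(all(abs(T(p, A, B, C) - T(p, Ahat, Bhat, Chat)) < 1e-6
          for p in sigmas + gammas))
```

The reduced model matches T(s) at the four shift points; this is the tangential-interpolation mechanism discussed next, where for MIMO systems the directions x_i and y_i enter as well.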
Classical model reduction techniques include modal approximation (where the dominant poles of the original transfer function are copied in the reduced order transfer function), balanced truncation, and optimal Hankel norm approximation (related to the controllability and observability Gramians of the transfer function [10]). These methods either provide a global error bound between the original and reduced-order system and/or guarantee stability of the reduced order system. Unfortunately, their exact calculation involves O(n³) floating point operations, even for systems with sparse model matrices {A, B, C}, which becomes untractable for a very large state dimension n. Only the images of the projecting matrices Z and V are important, since choosing other bases satisfying the biorthogonality condition (2) amounts to a state-space transformation of the realization of T̂(s).

A more recent approach involves generalized Krylov spaces ([3]), which are defined as the images of the generalized Krylov matrices

    [ (σI_n - A)^{-1}B  ⋯  (σI_n - A)^{-k}B ] X    (4)

and

    [ (γI_n - A^T)^{-1}C^T  ⋯  (γI_n - A^T)^{-ℓ}C^T ] Y,    (5)

where X and Y are the block lower triangular Toeplitz matrices

    X = [ x_0 ; ⋮ ⋱ ; x_{k-1} ⋯ x_0 ],   Y = [ y_0 ; ⋮ ⋱ ; y_{ℓ-1} ⋯ y_0 ].

These are related to the respective right and left tangential interpolation conditions

    [T(s) - T̂(s)] x(s) = O(s - σ)^k,   x(s) = Σ_{i=0}^{k-1} x_i (s - σ)^i,    (6)

and

    y^T(s) [T(s) - T̂(s)] = O(s - γ)^ℓ,   y(s) = Σ_{i=0}^{ℓ-1} y_i (s - γ)^i.    (7)

In the most general form, one imposes such conditions at several points σ_i and γ_j, as well as bitangential conditions (see [2], [5] for more details). The calculation of Krylov spaces and the solution of the corresponding tangential interpolation problem typically exploit the sparsity or the structure of the model matrices (A, B, C) of the original system and are therefore efficient for large scale dynamical systems with such structure. Their drawbacks are that the resulting reduced order systems have no guaranteed error bound and that stability is not necessarily preserved. The conjecture and open problem is that these methods are in fact quite universal (i.e., they contain the classical methods as special cases) and can be formulated in terms of Sylvester equations and generalized eigenvalue problems. Tangential interpolation would then be a unifying procedure to construct reduced-order transfer functions in which only the interpolation points and tangential conditions need to be specified.

3 OUR CONJECTURE

The error transfer function E(s) := T(s) - T̂(s) is realized by the pencil

    M - Ns = [ A 0 B ; 0 Â B̂ ; C -Ĉ 0 ] - s [ I_n 0 0 ; 0 I_k 0 ; 0 0 0 ].    (8)

The transmission zeros of the system matrix (i.e., the system zeros of its minimal part) can be chosen as interpolation points between T(s) and T̂(s), since at these points the rank of E(s) drops below its normal rank. Therefore, one can impose interpolation conditions of the type (6), (7) for appropriate choices of x(s) and y(s) and generalized eigenvalues σ and γ of (8). Our conjecture tries to give necessary and sufficient conditions for this in terms of the system zero matrix.

Conjecture 2.
A minimal state space realization of the strictly proper transfer function T̂(s) of McMillan degree k can be obtained by projection from a minimal state space realization of the strictly proper transfer function T(s) of McMillan degree n > k if and only if there exist two regular pencils, M_r - sN_r and M_l - sN_l, such that the matrices L, L̂, R, R̂, Q_l, and Q_r of the equations

    [ A - sI_n, 0, B ; 0, Â - sI_k, B̂ ; C, -Ĉ, 0 ] [ R N_r ; R̂ N_r ; Q_r ] = [ R ; R̂ ; 0 ] (M_r - sN_r),    (9)

    [ A^T - sI_n, 0, C^T ; 0, Â^T - sI_k, -Ĉ^T ; B^T, B̂^T, 0 ] [ L N_l ; -L̂ N_l ; Q_l ] = [ L ; -L̂ ; 0 ] (M_l - sN_l),    (10)

satisfy the following conditions:

1. [ N_l^T L^T, -N_l^T L̂^T, Q_l^T ] (M - Ns) [ R N_r ; R̂ N_r ; Q_r ] = 0,

2. dim Im(R̂ N_r) = dim Im(L̂ N_l) = k.

Moreover, such matrices always exist provided 2k ≤ 2n - m - p.

The conditions given by our conjecture are at least sufficient. Indeed, from equations (9) and (10) and the regularity assumption of M_r - sN_r and M_l - sN_l, it follows that

    C R N_r = Ĉ R̂ N_r,   N_l^T L^T B = N_l^T L̂^T B̂,    (11)

    N_l^T L^T A R N_r = N_l^T L̂^T Â R̂ N_r.    (12)

Finally, conditions 1 and 2 imply that the matrices R̂ N_r and L̂ N_l are right invertible. Defining Z, V ∈ C^{n×k} by

    Z = L N_l (L̂ N_l)^{-r},   V = R N_r (R̂ N_r)^{-r},    (13)

where ^{-r} denotes a right inverse, we can easily verify equations (1) and (2). Another justification is that (by looking carefully at the proof of the theorem above) Conjecture 2 is true in the SISO case.

We now present the link with the Krylov techniques. Equations (9) and (10) give us the Sylvester equations

    A R N_r - R M_r + B Q_r = 0,   A^T L N_l - L M_l + C^T Q_l = 0.    (14)

Then, from condition 1,

    N_l^T L^T R N_r = N_l^T L̂^T R̂ N_r.

These Sylvester equations correspond to generalized left and right eigenspaces of the system zero matrix (8). More precisely, Im(R N_r) and Im(L N_l) can be expressed as generalized Krylov spaces of the form (4) and (5). The choice of the matrices M_l, N_l, M_r, N_r, Q_l, and Q_r corresponds respectively to left and right tangential interpolation conditions at the eigenvalues σ_i of (M_r - sN_r) and γ_j of (M_l - sN_l) that are satisfied between T(s) and T̂(s) (see [5]). These eigenspaces correspond to disjoint parts of the spectrum of M - Ns such that the product N_l^T L^T R N_r = N_l^T L̂^T R̂ N_r is invertible (see [5] for more details). In other words, our conjecture is that any projected reduced-order transfer function can be obtained by imposing some interpolation conditions or some modal approximation conditions with respect to the original transfer function. Moreover, a solution always exists provided 2k ≤ 2n - m - p (i.e., for all T̂(s) of sufficiently small degree k). If this turns out to be true, we could hope to find the interpolation conditions that yield, e.g., the optimal Hankel norm or optimal H∞ norm reduced-order models using cheap interpolation techniques.

4 AVAILABLE RESULTS

Independently, Halevi recently proved in [6] new results concerning the general framework of model order reduction via projection.
The unknowns Z and V have 2nk parameters (or degrees of freedom), while (2) imposes (2k + m + p)k constraints. He shows that the case k = n - (m + p)/2 corresponds to a finite number of solutions. Moreover, for the particular case m = p and k = n - m, he shows that any pair of projecting matrices Z, V satisfying (2) can be seen as generalized eigenspaces of a certain matrix pencil. The matrix pencil used by Halevi can be linked to the system zero matrix of the error transfer function defined in equation (8).

Matrices Z and V satisfying (2) are also the k trailing rows of S^{-1}, respectively columns of S, which transform the system (A, B, C) to the system (S^{-1}AS, S^{-1}B, CS):

    [ S^{-1}AS - sI_n, S^{-1}B ; CS, 0 ] = [ *, *, * ; *, Â - sI_k, B̂ ; *, Ĉ, 0 ].    (15)

The existence of projecting matrices Z, V satisfying (1) and (2) is therefore related to the above submatrix problem. A square matrix Â is said to be embedded in a square matrix A when there exists a change of coordinates S such that Â - sI_k is a submatrix of S^{-1}(A - sI_n)S. Necessary and sufficient conditions for the embedding of such monic pencils are given in [9], [8]. As for monic pencils, we say that the pencil M̂ - N̂s is embedded in the pencil M - Ns when there exist invertible matrices Le, Ri such that M̂ - N̂s is a submatrix of Le(M - Ns)Ri. Finding necessary and sufficient conditions for the embedding of such general pencils is still an open problem [7]. Nevertheless, one obtains from [9], [8], [7] necessary conditions on (Ĉ, Â, B̂) and (C, A, B) for

    [ Â - sI_k, B̂ ; Ĉ, 0 ]   to be embedded in   [ A - sI_n, B ; C, 0 ].

These obviously give necessary conditions for the existence of projecting matrices Z, V satisfying (1) and (2). We hope to be able to shed new light on the necessary and sufficient conditions for the embedding problem via the connections developed in this paper.

BIBLIOGRAPHY

[1] A. C.
Antoulas, "Lectures on the approximation of large-scale dynamical systems," SIAM Book Series: Advances in Design and Control, 2002.

[2] J. A. Ball, I. Gohberg, and L. Rodman, Interpolation of Rational Matrix Functions, Birkhäuser Verlag, Basel, 1990.

[3] K. Gallivan, A. Vandendorpe, and P. Van Dooren, "Model reduction via tangential interpolation," MTNS 2002 (15th Symp. on the Mathematical Theory of Networks and Systems), University of Notre Dame, South Bend, Indiana, USA, August 2002.

[4] K. Gallivan, A. Vandendorpe, and P. Van Dooren, "Model reduction via truncation: An interpolation point of view," Linear Algebra Appl., submitted.

[5] K. Gallivan, A. Vandendorpe, and P. Van Dooren, "Model reduction of MIMO systems via tangential interpolation," Internal report, Université catholique de Louvain, 2002. Available at http://www.auto.ucl.ac.be/~vandendorpe/.

[6] Y. Halevi, "On model order reduction via projection," 15th IFAC World Congress on Automatic Control, Barcelona, July 2002.

[7] J. J. Loiseau, S. Mondié, I. Zaballa, and P. Zagalak, "Assigning the Kronecker invariants of a matrix pencil by row or column completions," Linear Algebra Appl., 278, pp. 327-336, 1998.

[8] E. Marques de Sá, "Imbedding conditions for λ-matrices," Linear Algebra Appl., 24, pp. 33-50, 1979.

[9] R. C. Thompson, "Interlacing inequalities for invariant factors," Linear Algebra Appl., 24, pp. 1-31, 1979.

[10] K. Zhou, J. C. Doyle, and K. Glover, Robust and Optimal Control, Prentice Hall, Upper Saddle River, N.J., 1996.

PART 2
Stochastic Systems

Problem 2.1
On error of estimation and minimum of cost for wide band noise driven systems

Agamirza E. Bashirov
Department of Mathematics
Eastern Mediterranean University
Mersin 10
Turkey
[email protected]

1 DESCRIPTION OF THE PROBLEM

The suggested open problem concerns the error of estimation and the minimum of the cost in the filtering and optimal control problems for a partially observable linear system corrupted by wide band noise processes.

Recent results allow one to construct a wide band noise process in a certain integral form on the basis of its autocovariance function, and to design the optimal filter and the optimal control for a partially observable linear system corrupted by such wide band noise processes. Moreover, explicit formulae for the error of estimation and for the minimum of the cost have been obtained. But the information about wide band noise contained in its autocovariance function is incomplete. Hence, every autocovariance function generates infinitely many wide band noise processes represented in the integral form. Consequently, the error of estimation and the minimum of the cost mentioned above are for a sample wide band noise process corresponding to the given autocovariance function. The following problem arises: given an autocovariance function, what are the least upper and greatest lower bounds of the respective error of estimation and the respective minimum of the cost? What are the distributions of the error of estimation and the minimum of the cost? What are the parameters of the wide band noise process producing the average error and the average minimum of the cost?

2 MOTIVATION AND HISTORY OF THE PROBLEM

Modern stochastic optimal control and filtering theories use white noise driven systems. Results such as the separation principle and the Kalman-Bucy filtering are based on the white noise model. In fact, white noise, being a mathematical idealization, gives only an approximate description of real noise.
In some fields the parameters of real noise are close to the parameters of white noise and, so, the mathematical methods of control and filtering for white noise driven systems can be satisfactorily applied to them. But in many fields white noise is a crude approximation to real noise. Consequently, the theoretical optimal controls and the theoretical optimal filters for white noise driven systems are no longer optimal and, indeed, might be quite far from being optimal. It becomes important to develop the control and estimation theories for systems driven by noise models that describe real noise more adequately. One such noise model is the wide band noise model.

The importance of wide band noise processes was mentioned by Fleming and Rishel [1]. An approach to wide band noise based on approximations by white noise was used in Kushner [2]. Another approach to wide band noise, based on representation in a certain integral form, was suggested in [3], and its applications to space engineering and gravimetry were discussed in [4, 5]. Filtering, smoothing, and prediction results for wide band noise driven linear systems are obtained in [3, 6]. The proofs in [3, 6] are given through the duality principle and, technically, they are routine, making further developments in the theory difficult. A more tractable technique, based on the reduction of a wide band noise driven system to a white noise driven system, was developed in [7, 8, 9]. This technique allows one to find the explicit formulae for the optimal filter and for the optimal control, as well as for the error of estimation and for the minimum of the cost, in the filtering and optimal control problems for a wide band noise driven linear system. In particular, the open problem described here was originally formulated in [9]. A complete discussion of the subject can be found in the recent book [10].
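Before turning to the available results, the integral-form construction mentioned above is easy to simulate. The sketch below (our own illustration; the relaxing function and all numerical values are assumptions, not from the text) builds a wide band noise process as a moving average of white noise and compares its empirical autocovariance with the one induced by the relaxing function; lags beyond ε show essentially zero covariance:

```python
import numpy as np

# Simulation sketch (our own; the relaxing function f and all numbers are
# assumed): a wide band noise process built as the moving average
# phi(t) = int_{-eps}^{0} f(theta) w(t + theta) dtheta of white noise w.
# Its autocovariance should match lam(s) = int_{-eps}^{-s} f(th) f(th+s) dth
# and vanish for lags s >= eps.
rng = np.random.default_rng(1)
dt, eps = 0.01, 0.2
f = lambda th: np.exp(th / eps)                  # assumed relaxing function
theta = np.arange(-eps, 0.0, dt)

w = rng.standard_normal(400_000) / np.sqrt(dt)   # discretized white noise
phi = np.convolve(w, f(theta)[::-1], mode="valid") * dt

def emp_autocov(x, lag_steps):
    return np.mean(x[: len(x) - lag_steps] * x[lag_steps:])

def lam(s):  # autocovariance induced by the relaxing function
    th = np.arange(-eps, -s, dt)
    return np.sum(f(th) * f(th + s)) * dt

print(abs(emp_autocov(phi, int(0.1 / dt)) - lam(0.1)) / lam(0.1) < 0.2)
print(abs(emp_autocov(phi, int(2 * eps / dt))) < 0.1 * lam(0.0))
```

Different relaxing functions with the same induced autocovariance would produce statistically indistinguishable second-order behavior, which is precisely the nonuniqueness at the heart of the open problem.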
3 AVAILABLE RESULTS AND DISCUSSION

A random process ϕ with the property cov(ϕ(t+s), ϕ(t)) = λ(t, s) if 0 ≤ s < ε and cov(ϕ(t+s), ϕ(t)) = 0 if s ≥ ε, where ε > 0 is a small value and λ is a nonzero function, is called a wide band noise process; it is said to be stationary (in the wide sense) if the function λ (called the autocovariance function of ϕ) depends only on s (see Fleming and Rishel [1]).

Starting from the autocovariance function λ, one can construct the respective wide band noise process ϕ in the integral form

    ϕ(t) = ∫_{-min(t,ε)}^{0} φ(θ) w(t + θ) dθ,   t ≥ 0,    (1)

where w is a white noise process with cov(w(t), w(s)) = δ(t - s), δ is the Dirac delta-function, ε > 0, and φ is a solution of the equation

    ∫_{-ε}^{-s} φ(θ) φ(θ + s) dθ = λ(s),   0 ≤ s ≤ ε.    (2)

The solution φ of (2) is called a relaxing function. Since in (2) φ has only one variable, the process ϕ from (1) is stationary in the wide sense (except on the small time interval [0, ε]). The following theorem from [8, 9] is crucial for the proposed problem.

Theorem: Let ε > 0 and let λ be a continuous real-valued function on [0, ε]. Define the function λ_0 as the even extension of λ to the real line vanishing outside of [-ε, ε], and assume that λ_0 is a positive definite function with F(λ_0)^{1/2} ∈ L_2(-∞, ∞), where F(λ_0) is the Fourier transformation of λ_0. Then there exists an infinite number of solutions of equation (2) in the space L_2(-ε, 0) if λ is a nonzero function a.e. on [-ε, 0].

The nonuniqueness of the solution of equation (2) demonstrates that the covariance function λ does not provide complete information about the respective wide band noise process ϕ. Therefore, for given λ, a sample solution φ of (2) generates the random process ϕ in the form (1) that could be considered as a more or less adequate model of real noise. In order to make a reasonable decision about the relaxing function, one way is to study the distributions of the error of estimation and the minimum of the cost in filtering and control problems, finding the average error and the average minimum, and identifying the relaxing function φ̄ producing these average values. For this, the explicit formulae from [7, 8, 9] (not given here because of their length) can be used to investigate the problem analytically or numerically. Also, the proof of the theorem from [8, 9] can be useful for constructing different solutions of equation (2). Finally, note that in a partially observable system both the state (signal) and the observations may be disturbed by wide band noise processes. Hence, the suggested problem concerns both these cases as well as their combination.

BIBLIOGRAPHY

[1] W. H. Fleming and R. W.
Rishel, Deterministic and Stochastic Optimal Control, New York, Springer-Verlag, 1975, p. 126.

[2] H. J. Kushner, Weak Convergence Methods and Singularly Perturbed Stochastic Control and Filtering Problems, Boston, Birkhäuser, 1990.

[3] A. E. Bashirov, "On linear filtering under dependent wide band noises," Stochastics, 23, pp. 413-437, 1988.

[4] A. E. Bashirov, L. V. Eppelbaum, and L. R. Mishne, "Improving Eötvös corrections by wide band noise Kalman filtering," Geophys. J. Int., 108, pp. 193127, 1992.

[5] A. E. Bashirov, "Control and filtering for wide band noise driven linear systems," Journal on Guidance, Control and Dynamics, 16, pp. 983-985, 1993.

[6] A. E. Bashirov, H. Etikan, and N. Şemi, "Filtering, smoothing and prediction of wide band noise driven systems," J. Franklin Inst., Eng. Appl. Math., 334B, pp. 667-683, 1997.

[7] A. E. Bashirov, "On linear systems disturbed by wide band noise," Proceedings of the 14th International Conference on Mathematical Theory of Networks and Systems, Perpignan, France, June 19-23, 7 p., 2000.

[8] A. E. Bashirov, "Control and filtering of linear systems driven by wide band noise," 1st IFAC Symposium on Systems Structure and Control, Prague, Czech Republic, August 29-31, 6 p., 2001.

[9] A. E. Bashirov and S. Uğural, "Analyzing wide band noise with application to control and filtering," IEEE Trans. Automatic Control, 47, pp. 323-327, 2002.

[10] A. E. Bashirov, Partially Observable Linear Systems Under Dependent Noises, Basel, Birkhäuser, 2003.

Problem 2.2
On the stability of random matrices

Giuseppe Calafiore*, Fabrizio Dabbene**
*Dip. di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi 24, Torino, Italy
**IEIIT-CNR, Politecnico di Torino, C.so Duca degli Abruzzi 24, Torino, Italy
{giuseppe.calafiore, fabrizio.dabbene}@polito.it

1 INTRODUCTION AND MOTIVATION

In the theory of linear systems, the problem of assessing whether the homogeneous system ẋ = Ax, A ∈ R^{n,n}, is asymptotically stable is a well-understood (and fundamental) one. Of course, the system (and, we shall also say, the matrix A) is stable if and only if Re λ_i < 0, i = 1, …, n, where λ_i are the eigenvalues of A.

Evolving from this basic notion, much research effort has been devoted in recent years to the study of robust stability of a system. Without entering into the details of more than thirty years of fruitful research, we could condense the essence of the robust stability problem as follows: given a bounded set ∆ and a stable matrix A ∈ R^{n,n}, state whether A_∆ = A + ∆ is stable for all ∆ ∈ ∆. Since the above deterministic problem may be computationally hard in some cases, a recent line of study proposes to introduce a probability distribution over ∆ and then to assess the probability of stability of the random matrix A + ∆. Actually, in the probabilistic approach to robust stability, this probability is not analytically computed but only estimated by means of randomized algorithms, which makes the problem feasible from a computational point of view; see, for instance, [3] and the references therein.

Leaving aside the randomized approach, which circumvents the problem of analytical computations, there is a clear disparity between the abundance of results available for the deterministic problem (both positive and negative results, in the form of computational "hardness" [2]) and their deficiency in the probabilistic one. In this latter case, almost no analytical result is known among control researchers.

The objective of this note is to encourage research on random matrices in the control community. The one who ventures into this field will encounter unexpected and exciting connections among different fields of science and beautiful branches of mathematics.
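The randomized approach just mentioned is simple to sketch. The example below (our own minimal illustration; the nominal matrix, the Gaussian distribution over ∆, and its scale are all assumptions, not from the text) estimates the probability of stability of A + ∆ by sampling:

```python
import numpy as np

# Minimal randomized-algorithm sketch (our own illustration): estimate the
# probability that A + Delta stays stable for a fixed stable A and random
# Gaussian perturbations Delta (an assumed distribution over the set Delta).
rng = np.random.default_rng(4)
A = np.array([[-1.0, 0.5], [0.0, -2.0]])

def prob_stability(A, scale=0.5, trials=10_000):
    hits = 0
    for _ in range(trials):
        Delta = scale * rng.standard_normal(A.shape)
        hits += np.all(np.linalg.eigvals(A + Delta).real < 0.0)
    return hits / trials

print(prob_stability(A))   # empirical estimate of the probability of stability
```

The estimate converges to the true probability as the number of samples grows, but, as stressed above, it provides no analytical, closed-form answer; that is what the open problems below ask for.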
In the next section, we summarize some of the known results on random matrices, and state a simple new (to the best of our knowledge) closed-form result on the probability of stability of a certain class of random matrices. Then, in section 3 we propose three open problems related to the analytical assessment of the probability of stability of random matrices. The problems are presented in what we believe is their order of difficulty.

2 AVAILABLE RESULTS

Notation: A real random matrix X is a matrix whose elements are real random variables. The probability density (pdf) of X, f_X(X), is defined as the joint pdf of its elements. The notation X ∼ Y means that X, Y are random quantities with the same pdf. The Gaussian density with mean µ and variance σ² is denoted as N(µ, σ²). For a matrix X, ρ(X) denotes the spectral radius, and ‖X‖ the Frobenius norm. The multivariate Gamma function is defined as Γ_n(x) = π^{n(n-1)/4} Π_{i=1}^{n} Γ(x - (i - 1)/2), where Γ(·) is the standard Gamma function.

In this note, we consider the class of random matrices (a class of random matrices is often called an "ensemble" in the physics literature) whose density is invariant under orthogonal similarity. For a random matrix X in this class, we have that X ∼ U X U^T for any fixed orthogonal matrix U. For symmetric orthogonal invariant random matrices, it can be proved that the pdf of X is a function of only its eigenvalues Λ := diag(λ_1, …, λ_n), i.e.,

    f_X(X) = g_X(Λ).    (1)

Orthogonal invariant random matrices may seem specialized, but we provide below some notable examples:

1. G_n: Gaussian matrices. The class of n × n real random matrices with independent identically distributed (iid) elements drawn from N(0, 1).

2. W_n: Wishart matrices. Symmetric n × n random matrices of the form X X^T, where X is G_n.

3. GOE_n: Gaussian Orthogonal Ensemble. Symmetric n × n random matrices of the form (X + X^T)/2, where X is G_n.

4. S_n: Symmetric orthogonal invariant ensemble.
Generic symmetric n × n random matrices whose density satisfies (1). W_n and GOE_n are special cases of these.

ON THE STABILITY OF RANDOM MATRICES

5. US^ρ_n: Symmetric n × n random matrices from S_n that are uniform over the set {X ∈ R^{n,n} : ρ(X) ≤ 1}.

6. US^F_n: Symmetric n × n random matrices from S_n that are uniform over the set {X ∈ R^{n,n} : ‖X‖ ≤ 1}.

Wishart matrices have a long history and are well studied in the statistics literature; see [1] for an early reference. The Gaussian Orthogonal Ensemble is a fundamental model used to study the theory of energy levels in nuclear physics, and it was originally introduced by Wigner [9, 8]. A thorough account of its statistical properties is presented in [7].

A fundamental result for the S_n ensemble is that the joint pdf of the eigenvalues of random matrices belonging to S_n is known analytically. In particular, if λ_1 ≥ λ_2 ≥ … ≥ λ_n are the eigenvalues of a random matrix X belonging to S_n, then their pdf f_Λ(Λ) is

    f_Λ(Λ) = (π^{n²/2} / Γ_n(n/2)) g_X(Λ) ∏_{1≤i<j≤n} (λ_i − λ_j).    (2)

This result can be deduced from [7], and it is also presented in [4]. For some of the ensembles listed above, this specializes to:

    W_n:    (π^{n²/2} / (2^{n²/2} Γ_n²(n/2))) exp(−(1/2) Σ_i λ_i) ∏_i λ_i^{−1/2} ∏_{1≤i<j≤n} (λ_i − λ_j)    (3)

    GOE_n:  (1 / (2^{n/2} ∏_i Γ(i/2))) exp(−(1/2) Σ_i λ_i²) ∏_{1≤i<j≤n} (λ_i − λ_j)    (4)

    US^ρ_n: K_u ∏_{1≤i<j≤n} (λ_i − λ_j),   1 ≥ λ_1 ≥ … ≥ λ_n ≥ −1.    (5)

The normalization constant K_u in the last expression can be determined in closed form by solving a Legendre integral, see eq. (17.6.3) of [7]:

    K_u = n! 2^{−n(n+1)/2} ∏_{j=0}^{n−1} [ Γ(3/2) Γ((n + j + 3)/2) ] / [ Γ(3/2 + j/2) Γ²(1 + j/2) ].    (6)

Clearly, knowing the joint density of the eigenvalues is a key step in the direction of computing the probability of stability of a random matrix. We remark that the above results all refer to the symmetric case, which has the advantage of having all real eigenvalues. Very little is known, for instance, about the distribution of the eigenvalues of generic Gaussian matrices G_n. As a consequence, to the best of our knowledge, nothing is known about the probability of stability of Gaussian random matrices (i.e., matrices drawn using the Matlab randn command). Famous asymptotic results (i.e., for n → ∞) go under the name of "circular laws" and are presented in [6]. An exact formula for the distribution of the real eigenvalues may be found in [5]. We show below a (seemingly new) result regarding the probability of stability for the US^ρ_n ensemble.

2.1 Probability of stability for the US^ρ_n ensemble

Given an n × n real random matrix X, let f_Λ(Λ) be the marginal density of the eigenvalues of X. The probability of stability of X is defined as

    P := ∫ ⋯ ∫_{Re Λ < 0} f_Λ(Λ) dΛ.    (7)

We now compute this probability for matrices in the US^ρ_n ensemble, whose pdf is given in (5). To this end, we first remove the ordering of the eigenvalues, and therefore divide the pdf (5) by n!. Then, the probability of stability is

    P_US = (K_u / n!) ∫_{−1}^{0} ⋯ ∫_{−1}^{0} ∏_{1≤i<j≤n} |λ_i − λ_j| dλ_1 ⋯ dλ_n.    (8)

This multiple integral is a Selberg-type integral whose solution is reported, for instance, in [7], p. 339. The above probability turns out to be

    P_US = 2^{−n(n+1)/2}.

3 OPEN PROBLEMS

The probability of stability can also be computed for the GOE_n ensemble and the US^F_n ensemble, using a technique of integration over alternate variables. We pose this as the first open problem (of medium difficulty):

Problem 1: Determine the probability of stability for the GOE_n and US^F_n ensembles.

A much harder problem would be to determine an analytic expression for the density of the eigenvalues (which are now both real and complex) of Gaussian matrices G_n, and then integrate it to obtain the probability of stability for the G_n ensemble:

Problem 2: Determine the probability of stability for the G_n ensemble.

A numerical estimate of the probability of (Hurwitz) stability for G_n matrices is reported in Table 2.2.1, as a function of the dimension n.

    n      1      2      3      4      5      6
    Prob.  0.500  0.250  0.104  0.037  0.011  0.003

    Table 2.2.1 Estimated probability of stability for G_n matrices.

As the reader may have noticed, all the problems treated so far relate to random matrices with zero mean. From the point of view of robustness analysis, it would be much more interesting to consider the case of biased random matrices. This motivates our last (and most difficult) open problem:

Problem 3: Let A ∈ R^{n,n} be a given stable matrix. Determine the probability of stability of the random matrix A + X, where X belongs to one of the ensembles listed in section 2.

ACKNOWLEDGEMENTS

The authors wish to thank Professor Madan Lal Mehta for reading this manuscript and giving his precious advice.

BIBLIOGRAPHY

[1] T. W. Anderson, An Introduction to Multivariate Statistical Analysis, John Wiley & Sons, New York, 1958.

[2] V. D. Blondel and J. N. Tsitsiklis, "A survey of computational complexity results in systems and control," Automatica, 36:1249–1274, 2000.

[3] G. Calafiore, F. Dabbene, and R. Tempo, "Randomized algorithms for probabilistic robustness with real and complex structured uncertainty," IEEE Trans. Aut.
Control, 45(12):2218–2235, December 2000.

[4] A. Edelman, Eigenvalues and Condition Numbers of Random Matrices, Ph.D. thesis, Massachusetts Institute of Technology, Boston, 1989.

[5] A. Edelman, "How many eigenvalues of a random matrix are real?", J. Amer. Math. Soc., 7:247–267, 1994.

[6] V. L. Girko, Theory of Random Determinants, Kluwer, Boston, 1990.

[7] M. L. Mehta, Random Matrices, Academic Press, Boston, 1991.

[8] E. P. Wigner, "Distribution laws for the roots of a random Hermitian matrix," In: C. E. Porter, ed., Statistical Theories of Spectra: Fluctuations, Academic, New York, 1965.

[9] E. P. Wigner, "Statistical properties of real symmetric matrices with many dimensions," In: C. E. Porter, ed., Statistical Theories of Spectra: Fluctuations, Academic, New York, 1965.

Problem 2.3
Aspects of Fisher geometry for stochastic linear systems

Bernard Hanzon
Institute for Econometrics, Operations Research and System Theory
Technical University of Vienna
Argentinierstr. 8/119
A-1040 Vienna
Austria
[email protected]

Ralf Peeters
Department of Mathematics
Maastricht University
P.O. Box 616, 6200 MD Maastricht
The Netherlands
[email protected]

1 DESCRIPTION OF THE PROBLEM

Consider the space S of stable minimum-phase systems in discrete-time, of order (McMillan degree) n, having m inputs and m outputs, driven by a stationary Gaussian white noise (innovations) process of zero mean and covariance Ω. This space is often considered, for instance in system identification, to characterize stochastic processes by means of linear time-invariant dynamical systems (see [8, 18]). The space S is well known to exhibit a differentiable manifold structure (cf. [5]), which can be endowed with a notion of distance between systems, for instance by means of a Riemannian metric, in various meaningful ways. One particular Riemannian metric of interest on S is provided by the so-called Fisher metric. Here the Riemannian metric tensor is defined in terms of local coordinates (i.e., in terms of an actual parametrization at hand) by the Fisher information matrix associated with a given system. The open question raised in this paper reads as follows:

Does there exist a uniform upper bound on the distance induced by the Fisher metric, for a fixed Ω > 0, between any two systems in S?

In case the answer is affirmative, a natural follow-up question from the differential geometric point of view would be whether it is possible to construct a finite atlas of charts for the manifold S, such that the charts as subsets of Euclidean space are bounded (i.e., contained in an open ball in Euclidean space), while the distortion of each chart remains finite.

2 MOTIVATION AND BACKGROUND OF THE PROBLEM

An important and well-studied problem in linear systems identification is the construction of parametrizations for various classes of linear systems. In the literature a great number of parametrizations for linear systems have been proposed and used.
From the geometric point of view the question arises whether one can qualify various parametrizations as good or bad. A parametrization is a way to (locally) describe a geometric object. Intuitively, a parametrization is better the more it reflects the (local) structure of the geometric object. An important consideration in this respect is the scale of the parametrization, or rather the spectrum of scales; see [4]. To explain this, consider the tangent space of a differentiable manifold of systems, such as S. The differentiable manifold can be supplied with a Riemannian geometry, for example, by smoothly embedding the differentiable manifold in an appropriate Hilbert space: then the tangent spaces to the manifold are linear subspaces of the Hilbert space, which induces an inner product on each of the tangent spaces and a Riemannian metric structure on the manifold. If such a Riemannian metric is defined, then any sufficiently smooth parametrization will have an associated Riemannian metric tensor. In local coordinates (i.e., in terms of the parameters used) it is represented by a symmetric, positive definite matrix at each point. The eigenvalues of this matrix reflect the local scales of the parametrization: the scale of any infinitesimal movement starting from a given point will vary between the largest and the smallest eigenvalue of the Riemannian metric tensor at the point involved. Over a set of points the scale will clearly vary between the largest eigenvalue to be found in the spectra of the corresponding set of Riemannian metric matrices and the smallest eigenvalue to be found in that same set of spectra. Following Milnor (see [12]), who considered the question of finding good charts for the earth, we define the distortion of a parametrization, which we will call the Milnor distortion, as the quotient of the largest scale and the smallest scale of the parametrization.
Note that this concept of Milnor distortion is a generalization of the concept of the condition number of a matrix. However, it is (in general) not the maximum of the condition numbers of the set of Riemannian metric matrices. Indeed, the largest eigenvalue and the smallest eigenvalue that enter into the definition of the Milnor distortion do not have to correspond to the Riemannian metric tensor at the same point.

If one has an atlas of overlapping charts, one can calculate the Milnor distortion in each of the charts and consider the largest distortion in any of the charts of the atlas. One could now be tempted to define this number as the distortion of the atlas and look for atlases with relatively small distortion. However, in this case one runs into the problem that it is always possible to take a large number of small charts, each one displaying very little distortion (i.e., distortion close to one), while such an atlas may still not be desirable, as it may require a huge number of charts. The difficulty here is to trade off the number of charts in an atlas against the Milnor distortion in each of those charts. At this point, we have no clear natural candidate for this trade-off. But at least for atlases with an equal finite number of charts the concept of maximal Milnor distortion could be used to compare the atlases.

3 AVAILABLE RESULTS

In trying to apply these ideas to the question of parametrization of linear systems, the problem arises that many parametrizations turn out to have in fact an infinite Milnor distortion. Consider for example the case of real SISO discrete-time strictly proper stable systems of order one. (See also [9] and [13, section 4.7].) This set can be described by two real parameters, e.g., by writing the associated transfer function in the form h(z) = b/(z − a). Here, the parameter a denotes the pole of the system and the parameter b is associated with the gain.
The Riemannian metric tensor induced by the H2 norm of this parametrization can be computed as

    [ b²(1 + a²)/(1 − a²)³    ab/(1 − a²)²  ]
    [ ab/(1 − a²)²            1/(1 − a²)    ]

see [9]. Therefore it tends to infinity when a approaches the stability boundary |a| = 1, whence the Milnor distortion of this parametrization becomes infinite. In this example the geometry is that of a flat, doubly infinite-sheeted Riemann surface. Locally, it is isometric to Euclidean space, and therefore one can construct charts that have the identity matrix as their Riemannian metric tensor (see [13]). However, in this case, this means that close to the stability boundary the distances between points become arbitrarily large. Therefore, although it is possible to construct charts with optimal Milnor distortion, this can only be done at the price of having to work with infinitely large (i.e., unbounded) charts. If one wants to work with charts in which the distances remain bounded, then one will need infinitely many of them in such cases.

In the case of stochastic Gaussian time-invariant linear dynamical systems without observed inputs, the class of stable minimum-phase systems plays an important role. For such stochastic systems the (asymptotic) Fisher information matrix is well-defined. This matrix depends on the parametrization used and admits the interpretation of a Riemannian metric tensor (see [15]). There is an extensive literature on the computation of Fisher information, especially for AR and ARMA systems; see, e.g., [6, 7, 11]. Much of this interest derives from the many applications in practical settings: it can be used to establish local parameter identifiability, it is used for parameter estimation in the method of scoring, and it is also known to determine the local convergence properties of the popular Gauss-Newton method for least-squares identification of linear systems based on the maximum likelihood principle (see [10]).
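The blow-up of the scales in the first-order example above is easy to check numerically. The following sketch (our own illustration; the grid of pole locations is an arbitrary choice) evaluates the 2 × 2 metric tensor in the (a, b) coordinates and the ratio of its eigenvalues, which grows without bound as the pole a approaches the stability boundary:

```python
import numpy as np

def h2_metric(a, b):
    """Riemannian metric tensor of h(z) = b/(z - a) in the (a, b)
    coordinates, as given in the text (valid for |a| < 1, b != 0)."""
    return np.array([
        [b**2 * (1 + a**2) / (1 - a**2)**3, a * b / (1 - a**2)**2],
        [a * b / (1 - a**2)**2,             1.0 / (1 - a**2)],
    ])

for a in (0.5, 0.9, 0.99):
    eig = np.linalg.eigvalsh(h2_metric(a, 1.0))
    print(a, eig.max() / eig.min())   # local scale ratio blows up as a -> 1
```

The Milnor distortion of a chart is at least the largest such ratio over the parameter set, so any single chart in these coordinates reaching toward |a| = 1 has unbounded distortion, in line with the discussion above.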
In the case of stable AR systems, the Fisher metric tensor can, for instance, be calculated using the parametrization with Schur parameters. From the formulas in [14] it follows that the Fisher information for scalar AR systems of order one, driven by zero-mean Gaussian white noise of unit variance, is equal to 1/(1 − γ_1²). Here γ_1 is required to range between −1 and 1 (to impose stability) and to be nonzero (to impose minimality). Although this again implies an infinite Milnor distortion, the situation here is structurally different from the situation in the previous case: the length of the curve of systems obtained by letting γ_1 range from 0 to 1 is finite! Indeed, the (Fisher) length of this curve is computed as

    ∫_0^1 dγ_1 / √(1 − γ_1²) = π/2.

Let the inner geometry of a connected Riemannian manifold of systems be defined by the shortest path distance: d(Σ_1, Σ_2) is the Riemannian length of the shortest curve connecting the two systems Σ_1 and Σ_2. Then, in this simple case, the Fisher geometry has the property that the corresponding inner geometry has a uniform upper bound. Therefore, this example provides an instance of a subset of the manifold S for which the answer to the question raised is affirmative. As a matter of fact, if one now reparametrizes the set of systems as in [17] by θ, defined through γ_1 = sin(θ), then the resulting Fisher information quantity becomes equal to 1 everywhere. Thus, it is bounded and the Milnor distortion of this reparametrization is finite. But at the same time the parameter chart itself remains bounded! Hence, the "follow-up question" of the previous section is also answered affirmatively here. If one considers SISO stable minimum-phase systems of order 1, it can be shown likewise that here too the Fisher distance between two systems is uniformly bounded, and that a finite atlas with bounded charts and finite Milnor distortion can be designed. Whether this also occurs for larger state-space dimensions is still unknown (to the best of the authors' knowledge), and this is precisely the open problem presented above.

To conclude, we note that the role played by the covariance matrix Ω of the driving white noise is rather limited. It is well known that if the system equations and the covariance matrix are parametrized independently of each other, then the Fisher information matrix attains a block-diagonal structure (see, e.g., [18, Ch. 7]). The covariance matrix Ω then appears as a weighting matrix for the block of the Fisher information matrix associated with the parameters involved in the system equations.
Therefore, if Ω is known, or rather if an upper bound on Ω is known (which is likely to be the case in any practical situation!), its role with respect to the questions raised can be largely disregarded. This allows us to restrict attention to the situation where Ω is fixed to the identity matrix I_m.

BIBLIOGRAPHY

[1] S.-I. Amari, Differential-Geometrical Methods in Statistics, Lecture Notes in Statistics 28, Springer-Verlag, Berlin, 1985.

[2] S.-I. Amari, "Differential geometry of a parametric family of invertible linear systems: Riemannian metric, dual affine connections, and divergence," Mathematical Systems Theory, 20, 53–82, 1987.

[3] C. Atkinson and A. F. S. Mitchell, "Rao's distance measure," Sankhyā: The Indian Journal of Statistics, Series A, 43(3), 345–365, 1981.

[4] R. W. Brockett and P. S. Krishnaprasad, "A scaling theory for linear systems," IEEE Trans. Aut. Contr., AC-25, 197–206, 1980.

[5] J. M. C. Clark, "The consistent selection of parametrizations in system identification," Proc. Joint Automatic Control Conference, 576–580, Purdue University, Lafayette, Indiana, 1976.

[6] B. Friedlander, "On the Computation of the Cramér-Rao Bound for ARMA Parameter Estimation," IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-32(4), 721–727.

[7] E. J. Godolphin and J. M. Unwin, "Evaluation of the covariance matrix for the maximum likelihood estimator of a Gaussian autoregressive-moving average process," Biometrika, 70(1), 279–284, 1983.

[8] E. J. Hannan and M. Deistler, The Statistical Theory of Linear Systems, John Wiley & Sons, New York, 1988.

[9] B. Hanzon, Identifiability, Recursive Identification and Spaces of Linear Dynamical Systems, CWI Tracts 63 and 64, Centrum voor Wiskunde en Informatica (CWI), Amsterdam, 1989.

[10] B. Hanzon and R. L. M. Peeters, "On the Riemannian Interpretation of the Gauss-Newton Algorithm," In: M. Kárný and K. Warwick (eds.), Mutual Impact of Computing Power and Control Theory, 111–121, Plenum Press, New York, 1993.

[11] A.
Klein and G. Mélard, "On Algorithms for Computing the Covariance Matrix of Estimates in Autoregressive Moving Average Processes," Computational Statistics Quarterly, 1, 1–9, 1989.

[12] J. Milnor, "A problem in cartography," American Math. Monthly, 76, 1101–1112, 1969.

[13] R. L. M. Peeters, System Identification Based on Riemannian Geometry: Theory and Algorithms, Tinbergen Institute Research Series, vol. 64, Thesis Publishers, Amsterdam, 1994.

[14] R. L. M. Peeters and B. Hanzon, "Symbolic computation of Fisher information matrices for parametrized state-space systems," Automatica, 35, 1059–1071, 1999.

[15] C. R. Rao, "Information and accuracy attainable in the estimation of statistical parameters," Bull. Calcutta Math. Soc., 37, 81–91, 1945.

[16] N. Ravishanker, E. L. Melnick, and C.-L. Tsai, "Differential geometry of ARMA models," Journal of Time Series Analysis, 11, 259–274.

[17] A. L. Rijkeboer, "Fisher optimal approximation of an AR(n) process by an AR(n−1) process," In: J. W. Nieuwenhuis, C. Praagman, and H. L. Trentelman, eds., Proceedings of the 2nd European Control Conference ECC '93, 1225–1229, Groningen, 1993.

[18] T. Söderström and P. Stoica, System Identification, Prentice-Hall, New York, 1989.

Problem 2.4
On the convergence of normal forms for analytic control systems

Wei Kang
Department of Mathematics, Naval Postgraduate School
Monterey, CA 93943
USA
[email protected]

Arthur J. Krener
Department of Mathematics, University of California
Davis, CA 95616
USA
[email protected]

1 BACKGROUND

A fruitful technique for the local analysis of a dynamical system consists of using a local change of coordinates to transform the system into a simpler form, which is called a normal form. The qualitative behavior of the original system is equivalent to that of its normal form, which may be easier to analyze. A bifurcation of a parameterized dynamics occurs when a change in the parameter leads to a change in its qualitative properties. Therefore, normal forms are useful in analyzing when and how a bifurcation will occur. In his dissertation, Poincaré studied the problem of linearizing a dynamical system around an equilibrium point, linear dynamics being the simplest normal form. Poincaré's idea is to simplify the linear part of a system first, using a linear change of coordinates. Then the quadratic terms in the system are simplified, using a quadratic change of coordinates, then the cubic terms, and so on. For systems that are not linearizable, the Poincaré-Dulac theorem provides the normal form.

Given a C∞ dynamical system in its Taylor expansion around x = 0,

    ẋ = f(x) = Fx + f^[2](x) + f^[3](x) + ⋯    (1)

where x ∈ R^n, F is a diagonal matrix with eigenvalues λ = (λ_1, …, λ_n), and f^[d](x) is a vector field whose components are homogeneous polynomials of degree d. The dots + ⋯ represent the rest of the formal power series expansion of f. Let e_k be the k-th unit vector in R^n. Let m = (m_1, …, m_n) be a vector of nonnegative integers. In the following, we define |m| and x^m by |m| = Σ m_i and x^m = x_1^{m_1} x_2^{m_2} ⋯ x_n^{m_n}. A nonlinear term x^m e_k is said to be resonant if m · λ = λ_k for some nonzero vector of nonnegative integers m and some 1 ≤ k ≤ n.

Definition 1: The eigenvalues of F are in the Poincaré domain if their convex hull does not contain zero; otherwise they are in the Siegel domain.
Definition 2: The eigenvalues of F are of type (C, ν) for some C > 0, ν > 0 if

    |m · λ − λ_k| ≥ C / |m|^ν

for every nonzero vector of nonnegative integers m and every 1 ≤ k ≤ n.

For eigenvalues in the Poincaré domain, there are at most a finite number of resonances. For eigenvalues of type (C, ν), there are no resonances, and as |m| → ∞ the rate at which resonances are approached is controlled.

A formal change of coordinates is a formal power series

    z = Tx + θ^[2](x) + θ^[3](x) + ⋯    (2)

where T is invertible. If T = I, then it is called a near-identity change of coordinates. If the power series converges locally, then it defines a real analytic change of coordinates.

Theorem 1 (Poincaré-Dulac): If the system (1) is C∞, then there exists a formal change of coordinates (2) transforming it to

    ż = Az + w(z)

where A is in Jordan form and w(z) consists solely of resonant terms. (If some of the eigenvalues of F are complex, then the change of coordinates will also be complex.) In this normal form, w(z) need not be unique. If the system (1) is real analytic and its eigenvalues lie in the Poincaré domain, then w(z) is a polynomial vector field and the change of coordinates (2) is real analytic.

Theorem 2 (Siegel): If the system (1) is real analytic and its eigenvalues are of type (C, ν) for some C > 0, ν > 0, then w(z) = 0 and the change of coordinates (2) is real analytic.

As is pointed out in [1], even in cases where the formal series are divergent, the method of normal forms turns out to be a powerful device in the study of nonlinear dynamical systems. A few low-degree terms in the normal form often give significant information on the local behavior of the dynamics.

2 THE OPEN PROBLEM

In [3], [4], [5], [10], and [8], Poincaré's idea is applied to nonlinear control systems. A normal form is derived for nonlinear control systems under change of state coordinates and invertible state feedback.
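The resonance condition m · λ = λ_k from section 1 can be tested mechanically for a given spectrum. The helper below is a hypothetical illustration (not from the text); it enumerates multi-indices m with 2 ≤ |m| ≤ d, following the usual convention that degree-one terms are excluded:

```python
from itertools import product

def resonances(lam, max_degree=4, tol=1e-9):
    """Return the resonant terms x^m e_k, i.e., pairs (m, k) with
    m . lam = lam_k for a multi-index m with 2 <= |m| <= max_degree."""
    n = len(lam)
    hits = []
    for m in product(range(max_degree + 1), repeat=n):
        if not 2 <= sum(m) <= max_degree:
            continue
        s = sum(mi * li for mi, li in zip(m, lam))
        for k in range(n):
            if abs(s - lam[k]) < tol:
                hits.append((m, k))
    return hits

# lambda = (-1, -2) lies in the Poincare domain and has the single
# resonance 2*lambda_1 = lambda_2, i.e., the term x_1^2 e_2:
print(resonances([-1.0, -2.0]))   # -> [((2, 0), 1)]
```

Only resonant terms can survive in the Poincaré-Dulac normal form, so such a list enumerates the candidate nonlinear terms w(z) for the given spectrum.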
Consider a C∞ control system

    ẋ = f(x, u) = Fx + Gu + f^[2](x, u) + f^[3](x, u) + ⋯    (3)

where x ∈ R^n is the state variable and u ∈ R is a control input. We only discuss scalar-input systems, but the problem can be generalized to vector-input systems. Such a system is called linearly controllable at the origin if the linearization (F, G) is controllable.

In contrast with Poincaré's theory, a homogeneous transformation for (3) consists of both a change of coordinates and an invertible state feedback,

    z = x + θ^[d](x),   v = u + κ^[d](x, u)    (4)

where θ^[d](x) represents a vector field whose components are homogeneous polynomials of degree d. Similarly, κ^[d](x, u) is a homogeneous polynomial of degree d. A formal transformation is defined by

    z = Tx + Σ_{d=2}^{∞} θ^[d](x),   v = Ku + Σ_{d=2}^{∞} κ^[d](x, u)    (5)

where T and K are invertible. If T and K are identity matrices, then this is called a near-identity transformation.

The following theorem on the normal form of control systems is a slight generalization of the one proved in [3]; see also [8] and [10].

Theorem 3: Suppose (F, G) in (3) is a controllable pair. Under a suitable transformation (5), (3) can be transformed into the following normal form:

    ż_i = z_{i+1} + Σ_{j=i+2}^{n+1} p_{i,j}(z̄_j) z_j²,   1 ≤ i ≤ n − 1
    ż_n = v    (6)

where z_{n+1} = v, z̄_j = (z_1, z_2, ⋯, z_j), and p_{i,j}(z̄_j) is a formal series in z̄_j.

Once again, the convergence of the formal series p_{i,j} in (6) is not guaranteed; hence nothing is known about the convergence of the normal form.

Open Problem (The Convergence of Normal Forms): Suppose the controlled vector field f(x, u) in (3) is real analytic and (F, G) is a controllable pair. Find verifiable necessary and sufficient conditions for the existence of a real analytic transformation (5) that transforms the system into the normal form (6).

Normal forms of control systems have proven to be a powerful tool in the analysis of local bifurcations and local qualitative performance of control systems. A convergent normal form will make it possible to study a control system over the entire region in which the normal form converges. Global or semiglobal results on control systems and feedback design can be proved by studying analytic normal forms.

3 RELATED RESULTS

The convergence of the Poincaré normal form was an active research topic in dynamical systems. According to Poincaré's theorem and Siegel's theorem, the location of the eigenvalues determines the convergence. If the eigenvalues are located in the Poincaré domain with no resonances, or if the eigenvalues are located in the Siegel domain and are of type (C, ν), then the analytic vector field that defines the system is biholomorphically equivalent to a linear vector field. Clearly, the normal form converges because it has only a linear part. The Poincaré-Dulac theorem deals with a more complicated case. It states that if the eigenvalues of an analytic vector field belong to the Poincaré domain, then the field is biholomorphically equivalent to a polynomial vector field. Therefore, the Poincaré normal form has only finitely many terms, and hence is convergent. As for control systems, it is proved in [5] that if an analytic control system is linearizable by a formal transformation, then it is linearizable by an analytic transformation.
It is also proved in [5] that a class of three-dimensional analytic control systems, which are not necessarily linearizable, can be transformed to their normal forms by analytic transformations. No other results on the convergence of control system normal forms are known to us.

The convergence problem for control systems differs fundamentally from the convergence results of Poincaré-Dulac. For the latter, the location of the eigenvalues is crucial, and the eigenvalues are invariant under changes of coordinates. However, the eigenvalues of a control system can be changed by linear state feedback. It is unknown what intrinsic factor in control systems determines the convergence of their normal form, or whether the normal form is always convergent. The convergence of normal forms is an important problem to be addressed.

Applications of normal forms for control systems have proved successful. In [6] the normal forms are used to classify the bifurcation of equilibrium sets and controllability for uncontrollable systems. In [7] the control of bifurcations using state feedback is introduced based on normal forms. For discrete-time systems, the normal form and the stabilization of the Naimark-Sacker bifurcation are addressed in [2]. In [10] a complete characterization of the symmetry of nonlinear systems is found for linearly controllable systems. In addition to linearly controllable systems, the normal form theory has been generalized to a larger family of control systems. Normal forms for systems with uncontrollable linearization are derived in several papers ([6], [7], [8], and [10]). Normal forms of discrete-time systems can be found in [9] and [2]. The convergence of these normal forms is also an open problem.

BIBLIOGRAPHY

[1] V. I. Arnold, Geometrical Methods in the Theory of Ordinary Differential Equations, Springer-Verlag, New York, Berlin, 1988.

[2] B. Hamzi, J.-P. Barbot, S. Monaco, and D.
Normand-Cyrot, "Nonlinear discrete-time control of systems with a Naimark-Sacker bifurcation," Systems and Control Letters, 44(4), pp. 245–258, 2001.

[3] W. Kang, Extended Controller Normal Form, Invariants and Dynamical Feedback Linearization of Nonlinear Control Systems, Ph.D. dissertation, University of California at Davis, 1991.

[4] W. Kang and A. J. Krener, "Extended quadratic controller normal form and dynamic feedback linearization of nonlinear systems," SIAM J. Control and Optimization, 30 (1992), 1319–1337.

[5] W. Kang, "Extended controller form and invariants of nonlinear control systems with a single input," J. of Mathematical Systems, Estimation and Control, 6 (1996), 27–51.

[6] W. Kang, "Bifurcation and normal form of nonlinear control systems, Part I and II," SIAM J. Control and Optimization, 36 (1998), 193–212 and 213–232.

[7] W. Kang, "Bifurcation control via state feedback for systems with a single uncontrollable mode," SIAM J. Control and Optimization, 38 (2000), 1428–1452.

[8] A. J. Krener, W. Kang, and D. E. Chang, "Control bifurcations," IEEE Transactions on Automatic Control, forthcoming.

[9] A. J. Krener and L. Li, "Normal forms and bifurcations of discrete time nonlinear control systems," Proc. of 5th IFAC NOLCOS Symposium, Saint-Petersburg, 2001.

[10] I. Tall and W. Respondek, "Feedback classification of nonlinear single-input control systems with controllable linearization: Normal forms, canonical forms, and invariants," preprint.

PART 3. NONLINEAR SYSTEMS

Problem 3.1
Minimum time control of the Kepler equation

Jean-Baptiste Caillau, Joseph Gergaud, and Joseph Noailles
ENSEEIHT-IRIT, UMR CNRS 5505
2 rue Camichel
F-31071 Toulouse
France
{caillau, gergaud, jnoaille}@enseeiht.fr

1 DESCRIPTION OF THE PROBLEM

We consider the controlled Kepler equation in three dimensions

    r̈ = −k r/|r|³ + γ    (1)

where r = (r_1, r_2, r_3) is the position vector (the double dot denoting the second-order time derivative), k a strictly positive constant, |·| the Euclidean norm in R³, and where γ = (γ_1, γ_2, γ_3) is the control. The minimum time problem is then stated as follows: find a positive time T and a measurable function γ defined on [0, T] such that (1) holds almost everywhere on [0, T] and:

    T → min
    r(0) = r_0,  ṙ(0) = ṙ_0    (2)
    h(r(T), ṙ(T)) = 0    (3)
    |γ| ≤ Γ.    (4)

In (2), r_0 and ṙ_0 are the known initial position and speed, with

    |ṙ_0|²/2 − k/|r_0| < 0

in order that the uncontrolled initial motion be periodic [1]. In (3), h is a fixed submersion of R⁶ onto R^l, l ≤ 6, defining a nontrivial endpoint condition. The constraint (4) on the Euclidean norm of the control, with Γ a strictly positive constant, means that almost everywhere on [0, T]
    γ_1² + γ_2² + γ_3² ≤ Γ².

Our first concern is uniqueness (see §3 about existence):

Question 1: Is the optimal control unique?

The second point is about regularity, namely:

Question 2: Are there continuous optimal controls?

Denoting by T(Γ) the value function that assigns to any strictly positive Γ (the parameter involved in (4)) the associated minimum time, our third and last question is:

Question 3: Does the product T(Γ) · Γ have a limit when Γ tends to zero?

2 MOTIVATION

This problem originates in the computation of optimal orbit transfers in celestial mechanics for satellites with very low thrust engines [5]. Since the 1990s, low electro-ionic propulsion has been considered as an alternative to strong chemical propulsion; but the lower the thrust, the longer the transfer time, hence the idea of minimizing the final time. In this context, γ is the ratio u/m of the engine thrust to the mass of the satellite, and one moreover has to take into account the mass variation due to fuel consumption:

    ṁ = −β|u|.

Typical boundary conditions in this case consist in inserting the spacecraft on a high geostationary orbit, and the terminal condition is defined by:

    |r(T)| and |ṙ(T)| fixed,  r(T) · ṙ(T) = 0,  (r(T) × ṙ(T)) × k = 0

where k is the normal vector to the equatorial plane. In contrast with the impulsive manoeuvres performed using the strong classic chemical propulsion, the gradual control by a low-thrust engine is sometimes referred to as "continuous." Thus, question 2 could be rephrased as: are "continuous" optimal controls continuous? Besides, this question is also relevant in practice, since continuity of controls is the basic assumption required by most numerical methods [2]. In the same respect, question 3 is the key to getting accurate estimates of the unknown transfer time, needed to ensure convergence of the numerical computation.
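The effect of a small bounded control on the Keplerian dynamics (1) can be explored numerically. The sketch below is our own illustration, not an optimal control: the thrust level, the along-velocity thrust direction, and the integrator step are arbitrary choices. It integrates (1) with a fixed-magnitude tangential thrust starting from a circular orbit:

```python
import numpy as np

def kepler_rhs(y, k, gamma):
    """Right-hand side of the controlled Kepler equation (1),
    rewritten first order in the state y = (r, rdot)."""
    r, v = y[:3], y[3:]
    return np.concatenate([v, -k * r / np.linalg.norm(r)**3 + gamma])

def rk4_step(f, y, dt):
    k1 = f(y); k2 = f(y + dt/2 * k1); k3 = f(y + dt/2 * k2); k4 = f(y + dt * k3)
    return y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

k, Gamma, dt = 1.0, 1e-2, 1e-2           # illustrative units and thrust bound
y = np.array([1.0, 0, 0, 0, 1.0, 0])     # circular orbit of radius 1
for _ in range(5000):
    v = y[3:]
    gamma = Gamma * v / np.linalg.norm(v)    # |gamma| = Gamma, along velocity
    y = rk4_step(lambda s: kepler_rhs(s, k, gamma), y, dt)
print(np.linalg.norm(y[:3]))   # the radius has grown: the orbit spirals outward
```

Since γ · ṙ = Γ|ṙ| > 0 along this trajectory, the energy |ṙ|²/2 − k/|r| increases monotonically, which is the mechanism low-thrust transfers exploit; the smaller Γ, the slower this growth, in line with question 3.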
3 RELATED RESULTS

The existence of controls achieving the minimum time transfer comes from the controllability of the system (the associated Lie algebra has maximal rank and the drift is periodic, see [7]) and from the convexity properties of the dynamics, by Filippov's theorem [4]. Regarding regularity, it is proven in [3] (whose results extend straightforwardly to three dimensions) that any time-minimal control of (1) has at most finitely many discontinuity points. More precisely, using the Pontryagin Maximum Principle [4, 6], one gets that any discontinuity point t̄ is a switching point, in the sense that the control is instantaneously rotated by an angle π:

    γ(t̄+) = −γ(t̄−).

Furthermore, bounds are given in [3], not on the total number of switchings, but for those located at special points of the osculating ellipse: there cannot be consecutive switchings at the perigee or apogee. Since the numerics suggest that the possible discontinuities are located exactly at the perigee, a conjecture would be:

There is at most one switching point, and this point is located at the perigee.

Finally, as for question 3, the value function T(Γ) is obviously decreasing and is proven to be right-continuous in [2]. Besides, the product T(Γ) · Γ turns out to be nearly constant numerically, so that the conjecture would be to answer positively:

There is a positive constant c such that T(Γ) · Γ tends to c when Γ tends to zero.

BIBLIOGRAPHY

[1] V. I. Arnold, Mathematical Methods of Classical Mechanics, Springer-Verlag, 1978.

[2] J. B. Caillau, J. Gergaud, and J. Noailles, "3D geosynchronous transfer of a satellite: Continuation on the thrust," JOTA, 118:3, 2003 (forthcoming).

[3] J. B. Caillau and J. Noailles, "Coplanar control of a satellite around the earth," ESAIM COCV, vol. 6, pp. 239-258, 2001.

[4] L. Cesari, Optimization Theory and Applications, Springer-Verlag, 1983.

[5] J. Noailles and C. T. Le, "Contrôle en temps minimal et transfert à faible poussée," Équations aux dérivées partielles et applications, Articles in Honour of J. L. Lions for his 70th Birthday, pp. 705-724, Gauthier-Villars, 1998.

[6] H. J. Sussmann, "Geometry and optimal control," Mathematical Control Theory, Dedicated to R. W. Brockett on his 60th Birthday, J. Baillieul and J. C. Willems, eds., Springer-Verlag, 1998.

[7] V. Jurdjevic, Geometric Control Theory, Cambridge University Press, 1997.

Problem 3.2
Linearization of linearly controllable systems R. Devanathan
School of Electrical and Electronic Engineering (Block S1)
Nanyang Technological University
Singapore 639798, Republic of Singapore
[email protected]

1 DESCRIPTION OF THE PROBLEM

We consider a class of systems of the form

    ξ̇ = f(ξ) + g(ξ)ζ    (1)

where ξ is an n-tuple vector and f(ξ) and g(ξ) are vector fields, i.e., n-tuple vectors whose elements are, in general, functions of ξ. For simplicity, we assume a scalar input ζ. We require that the system (1) be linearly controllable [1], i.e., that the pair (F, G) be controllable, where F = (∂f/∂ξ)(0) and G = g(0) at the assumed equilibrium point at the origin. The power series expansion of (1) about the origin can be written, with an appropriate change of variable and input, as

    ẋ = Fx + Gφ + O1(x)^(2) + γ1(x, φ)^(1)    (2)

where, without loss of generality, F and G can be in Brunovsky form [2], superscript (2) corresponds to terms in x of degree greater than one, superscript (1) corresponds to terms in x of degree greater than or equal to one, and x and φ are the transformed state and input variables, respectively. We introduce state feedback as in [3]:

    φ = −Kx + u,  where  K = [kn, kn−1, ..., k2, k1]^t.

Equation (2) then becomes

    ẋ = Ax + Gu + O(x)^(2) + γ(x, u)^(1)    (3)

where

    A = F − GK.    (4)

We can choose the eigenvalues of the matrix A in (4), without loss of generality, to be real, distinct, and nonresonant by a proper choice of the matrix K [3]. The nonresonant property, meaning that no integral relation exists among the eigenvalues of matrix A, ensures that (3) can be linearized up to an arbitrary order. Put (3) into the form

    ẋ = Ax + Gu + f2(x) + f3(x) + ··· + g1(x)u + g2(x)u + ···    (5)

where fm(x) and gm−1(x) correspond to vector-valued polynomials containing terms of the form

    x^m = x1^m1 x2^m2 ··· xn^mn,  mi ∈ {0, 1, 2, ..., n},  i = 1, 2, ..., n;  Σ_{i=1}^{n} mi = m,  m ≥ 2.

Consider a near identity (normalizing) transformation as in

    x = y + h(y)    (6)

and a change of input as in

    v = (1 + β(x))u + α(x),  1 + β(x) ≠ 0,    (7)

where

    h(y) = h2(y) + h3(y) + ···        (8)
    α(x) = α2(x) + α3(x) + ···        (9)
    β(x) = β1(x) + β2(x) + ···        (10)

The problem is to find a solution for hm(·), αm(·), and βm−1(·), m ≥ 2, such that the nonlinear terms up to an arbitrary order, viz., "fm(·)" and "gm−1(·)u", can be removed from (5) by the application of the transformations (6) and (7) to it.

2 MOTIVATION AND HISTORY OF THE PROBLEM

Linearization of a nonlinear dynamic system of the form (1), but without the control input, was originally investigated by Poincaré [4], [5]. It was shown that, around an equilibrium point, a near identity (normalizing) transformation takes it to its normal form, where only the residual nonlinearities that cannot be removed by the transformation remain. The dynamic system is said to be resonant in the order of these residual nonlinearities. The solution for the normalizing transformation is in the form of an infinite series, as in (8), whose convergence has been proved under certain assumptions [6], [7]. Irrespective of the convergence of the infinite series, the transformation is of interest because one can remove up to an arbitrary order of nonlinearities (as long as they are nonresonant) through such a transformation, thus providing an approximate linearization of the dynamic system.

3 AVAILABLE RESULTS

Our problem is an analog of Poincaré's problem with the control input provided. Krener et al. [8] have considered a nonlinear system of the form (1) and showed that a generalized form of the homological equation can be formulated in this case. Devanathan [3] has shown that, for a linearly controllable system, the system matrix can be made nonresonant through an appropriate choice of state feedback.
This concept is further exploited in [9] to find a solution to the second order linearization. An analogous solution to the case of an arbitrary order linearization, however, is still open. By application of (6) and (7), one can write

    f2(x) + f3(x) + f4(x) + ··· = f̄2(y) + f̄3(y) + f̄4(y) + ···            (11)
    g1(x)u + g2(x)u + g3(x)u + ··· = ḡ1(y)u + ḡ2(y)u + ḡ3(y)u + ···       (12)
    α(x) = ᾱ2(y) + ᾱ3(y) + ᾱ4(y) + ···                                    (13)
    β(x) = β̄1(y) + β̄2(y) + β̄3(y) + ···                                    (14)

for some appropriate polynomials f̄m(·), ḡm−1(·), ᾱm(·), and β̄m−1(·), m ≥ 2. Substituting (6) and (7) into (5) and using (11)-(14), consider the polynomials of the form y^m and y^{m−1}u, m = 2, 3, .... Then the terms "fm(x)" and "gm−1(x)u" can be removed from (5) progressively, m = 2, 3, etc., provided the following generalized homological equations are satisfied [3]:

    (∂hm(y)/∂y)(Ay) − A hm(y) + G ᾱm(y) = f̃m(y),  m ≥ 2,              (15)
    (∂hm(y)/∂y)(Gu) + G β̄m−1(y)u = g̃m−1(y)u,  ∀ u, m ≥ 2,             (16)

where f̃2(y) = f̄2(y) = f2(y), and f̃m(y) is expressed in terms of f̄m−i(y), i = 0, 1, 2, ..., (m−2), and hm−j(y), j = 1, 2, ..., (m−2), m > 2. Also, g̃1(y) = ḡ1(y) = g1(y), and g̃m(y) is expressed in terms of ḡm−i(y), i = 0, 1, 2, ..., (m−1), and hm−j(y), j = 0, 1, 2, ..., (m−2), m ≥ 2. Assuming that hm−j(y), ᾱm−j(y), β̄m−j−1(y), j = 1, 2, ..., (m−2), m > 2, are known, f̃m(y) and g̃m−1(y) can be computed. Without loss of generality, one can assume the matrix A to be diagonal and G = [1, 1, ..., 1, 1]^t by applying a change of coordinates to (5) involving the Vandermonde matrix [10]. One can then proceed to solve (15) for hm(y) in terms of ᾱm(y) and substitute the same into (16) to set up a linear system of equations in the unknown coefficients of the polynomials ᾱm(y) and β̄m−1(y). For m = 2, it has been shown in [9] that the corresponding system of linear equations can be reduced to a system of n(n−1)/2 equations in n variables whose rank is (n − 1).
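The mechanism these homological equations encode is already visible in the scalar, uncontrolled Poincaré setting. The following sketch is a toy illustration of our own: for ẋ = λx + ax² with λ ≠ 0 (so the quadratic order is nonresonant), the near identity transformation x = y + (a/λ)y² removes the quadratic term, which can be checked with exact rational arithmetic on truncated power series.

```python
from fractions import Fraction

N = 4  # truncate power series at degree N - 1

def mul(p, q):
    """Product of truncated power series (coefficient lists, degree 0 first)."""
    r = [Fraction(0)] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                r[i + j] += a * b
    return r

def inverse(p):
    """Multiplicative inverse of a truncated series with p[0] = 1."""
    inv = [Fraction(1)] + [Fraction(0)] * (N - 1)
    for m in range(1, N):
        inv[m] = -sum(p[j] * inv[m - j] for j in range(1, m + 1))
    return inv

# Toy system xdot = lam*x + a*x^2 (scalar, no control); lam != 0 makes the
# quadratic order nonresonant. The values of lam and a are arbitrary.
lam, a = Fraction(2), Fraction(3)
c = a / lam                          # normal-form coefficient of the h_2 term

phi = [Fraction(0), Fraction(1), c, Fraction(0)]   # x = y + c*y^2, as in (6)
f_of_phi = [lam * s for s in phi]
for m, s in enumerate(mul(phi, phi)):
    f_of_phi[m] += a * s             # f(phi(y)) = lam*phi + a*phi^2

dphi = [Fraction(1), 2 * c, Fraction(0), Fraction(0)]   # phi'(y)
ydot = mul(f_of_phi, inverse(dphi))  # ydot = f(phi(y)) / phi'(y)

assert ydot[1] == lam   # linear part unchanged
assert ydot[2] == 0     # quadratic term removed by the near identity map
```

The open problem asks for the analogous cancellation, order by order and with the input terms present, via (15)-(16).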
It is conjectured that a similar reduction of the linear system of equations should also be possible in the arbitrary order case. Formulation of the properties, and solution, if it exists, of the linear system of equations involving the coefficients of the polynomials ᾱm(·), β̄m−1(·), and hm(·), m > 2, will constitute the solution to the open problem.

BIBLIOGRAPHY

[1] A. J. Krener and W. Kang, "Extended normal forms of quadratic systems," Proc. 29th Conf. Decision and Control, pp. 2091-2095, 1990.

[2] P. Brunovsky, "A classification of linear controllable systems," Kybernetika, vol. 3, pp. 173-188, 1970.

[3] R. Devanathan, "Linearization condition through state feedback," IEEE Transactions on Automatic Control, vol. 46, no. 8, pp. 1257-1260, 2001.

[4] V. I. Arnold, Geometrical Methods in the Theory of Ordinary Differential Equations, Springer-Verlag, New York, pp. 177-188, 1983.

[5] J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields, Springer-Verlag, New York, 1983.

[6] A. D. Bruno, Local Methods in Nonlinear Differential Equations, Springer-Verlag, Berlin, 1989.

[7] G. Cicogna, "On the convergence of normalizing transformations in the presence of symmetries," Dipartimento di Fisica, Università di Pisa, P.za Torricelli 2, I-56126, Pisa, Italy.

[8] A. J. Krener, S. Karahan, and M. Hubbard, "Approximate normal forms of nonlinear systems," Proc. 27th Conf. Decision and Control, pp. 1223-1229, 1988.

[9] R. Devanathan, "Solution of second order linearization," Fifteenth International Symposium on Mathematical Theory of Networks and Systems, University of Notre Dame, Notre Dame, IN, USA, 2002.

[10] B. C. Kuo, Automatic Control Systems, Fourth Ed., Prentice-Hall, pp. 227-240, 1982.

Problem 3.3
Bases for Lie algebras and a continuous CBH formula Matthias Kawski1
Department of Mathematics and Statistics
Arizona State University
Tempe, AZ 85287-1804, USA
[email protected]

¹ Supported in part by NSF grant DMS 0072369.

1 DESCRIPTION OF THE PROBLEM

Many time-varying linear systems ẋ = F(t, x) naturally split into time-invariant geometric components and time-dependent parameters. A special case is that of nonlinear control systems that are affine in the control u, and specified by analytic vector fields on a manifold M^n:

    ẋ = f0(x) + Σ_{k=1}^{m} uk fk(x).    (1)

It is natural to search for solution formulas for x(t) = x(t, u) that separate the time-dependent contributions of the controls u from the invariant, geometric role of the vector fields fk. Ideally, one may be able to a priori obtain closed-form expressions for the flows of certain vector fields. The quadratures of the control might be done in real time, or the integrals of the controls may be considered new variables for theoretical purposes such as path planning or tracking. To make this scheme work, one needs simple formulas for assembling these pieces to obtain the solution curve x(t, u). Such formulas are nontrivial since, in general, the vector fields fk do not commute: already in the case of linear systems, exp(sA) · exp(tB) ≠ exp(sA + tB) (for matrices A and B). Thus the desired formulas not only involve the flows of the system vector fields fi but also the flows of their iterated commutators [fi, fj], [[fi, fj], fk], and so on.

Using Hall-Viennot bases H for the free Lie algebra generated by m indeterminates X1, ..., Xm, Sussmann [22] gave a general formula as a directed infinite product of exponentials:

    x(T, u) = ∏→_{H ∈ H} exp(ξH(T, u) · fH).    (2)

Here the vector field fH is the image of the formal bracket H under the canonical Lie algebra homomorphism that maps Xi to fi. Using the chronological product (U ∗ V)(t) = ∫₀ᵗ U(s)V̇(s) ds, the iterated integrals ξH are defined recursively by ξ_{Xk}(T, u) = ∫₀ᵀ uk(t) dt and

    ξ_{HK} = ξH ∗ ξK    (3)

if H, K, HK are Hall words and the left factor of K is not equal to H [9, 22]. (In the case of repeated left factors, the formula contains an additional factorial.) An alternative to such an infinite exponential product (in Lie group language, "coordinates of the 2nd kind") is a single exponential of an infinite Lie series ("coordinates of the 1st kind"):

    x(T, u) = exp( Σ_{B ∈ B} ζB(T, u) · fB ).    (4)

It is straightforward to obtain explicit formulas for ζB for some spanning sets B of the free Lie algebra [22], but much preferable are series that use bases B and which, in addition, yield as simple formulas for ζB as (3) does for ξH.

Problem 1: Construct bases B = {Bk : k ≥ 0} for the free Lie algebra on a finite number of generators X1, ..., Xm such that the corresponding iterated integral functionals ζB defined by (4) have simple formulas (similar to (3)), suitable for control applications (both analysis and design).

The formulae (4) and (2) arise from the "free control system" on the free associative algebra on m generators. Its universality means that its solutions map to solutions of specific systems (1) on M^n via the evaluation homomorphism Xi → fi. However, the resulting formulas contain many redundant terms, since the vector fields fB are not linearly independent.

Problem 2: Provide an algorithm that generates, for any finite collection of analytic vector fields F = {f1, ..., fm} on M^n, a basis for L(f1, ..., fm) together with effective formulas for associated iterated integral functionals.

Without loss of generality, one may assume that the collection F satisfies the Lie algebra rank condition, i.e., L(f1, ..., fm)(p) = TpM at a specified initial point p. It makes sense to first consider special classes of systems F, e.g., such that L(f1, ..., fm) is finite-dimensional, nilpotent, solvable, etc. The words simple and effective are not used in a technical sense in problems 1 and 2 (as in formal studies of computational complexity), but instead refer to comparison with the elegant formula (3), which has proven convenient for theoretical studies, numerical computation, and practical implementations.
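For a concrete feel for the recursion (3), the iterated integrals ξ can be tabulated numerically. The sketch below assumes the chronological-product convention (U ∗ V)(t) = ∫₀ᵗ U(s)V̇(s) ds stated above, and uses illustrative controls u1(t) = 1, u2(t) = t of our own choosing on [0, 1]; under these assumptions, the Hall word X1X2 (the bracket [X1, X2]) has ξ(t) = t³/3.

```python
import numpy as np

T, N = 1.0, 2001
t = np.linspace(0.0, T, N)

def chrono(U, Vdot, t):
    """(U * V)(t) = integral_0^t U(s) Vdot(s) ds via the trapezoidal rule,
    where Vdot is the derivative of V (for a letter X_k, Vdot = u_k)."""
    integrand = U * Vdot
    avg = 0.5 * (integrand[1:] + integrand[:-1])
    return np.concatenate(([0.0], np.cumsum(avg * np.diff(t))))

u1 = np.ones_like(t)    # illustrative control u1(t) = 1
u2 = t.copy()           # illustrative control u2(t) = t

xi1 = chrono(np.ones_like(t), u1, t)   # xi_{X1}(t) = t
xi2 = chrono(np.ones_like(t), u2, t)   # xi_{X2}(t) = t^2/2
# Hall word X1X2 <-> bracket [X1, X2]: xi_{X1X2} = xi_{X1} * xi_{X2}
xi12 = chrono(xi1, u2, t)              # = t^3/3 for these controls

print(xi1[-1], xi2[-1], xi12[-1])      # approximately 1, 0.5, 0.3333
```

Such tables of ξ-values are exactly what path-planning schemes invert over a finitely parameterized family of controls.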
2 MOTIVATION AND HISTORY OF THE PROBLEM

Series expansions of solutions to differential equations have a long history. Elementary Picard iteration of the universal control system Ṡ = S · Σ_{i=1}^{m} ui Xi on the free associative algebra over (X1, ..., Xm) yields the Chen-Fliess series [5, 11, 21]. Other major tools are Volterra series and the Magnus expansion [14], which groups the terms in a different way than the Fliess series. The main drawback of the Fliess series is that (unlike its exponential product expansion (2)) no finite truncation is the exact solution of any approximating system. A key innovation is the chronological calculus of the 1970s of Agrachev and Gamkrelidze [1]. However, it is generally not formulated using explicit bases.

The series and product expansions have manifold uses in control beyond simple computation of integral curves and analysis of reachable sets (which includes controllability and optimality). These include state-space realizations of systems given in input-output operator form [8, 20], output tracking, and path planning. For the latter, express the target or reference trajectory in terms of the ξ or ζ, now considered as coordinates of a suitably lifted system (e.g., free nilpotent), and invert the restriction of the map u → {ξB : B ∈ BN} or u → {ζB : B ∈ BN} (for some finite subbasis BN) to a finitely parameterized family of controls u, e.g., piecewise polynomial [7] or trigonometric polynomial [12, 17].

The Campbell-Baker-Hausdorff formula [18] is a classic tool to combine products of exponentials into a single exponential, e^a e^b = e^{H(a,b)}, where

    H(a, b) = a + b + (1/2)[a, b] + (1/12)[a, [a, b]] − (1/12)[b, [a, b]] + ···

It has been extensively used for designing piecewise constant control variations that generate high order tangent vectors to reachable sets, e.g., for deriving conditions for optimality. However, repeated use of the formula quickly leads to unwieldy expressions. The expansion (2) is the natural continuous analogue of the CBH formula, and the problem is to find the most useful form.
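The CBH formula can be checked exactly in a nilpotent example: for 3×3 strictly upper triangular matrices, all brackets of degree three vanish and the series terminates, so e^A e^B = e^{A + B + [A,B]/2} holds exactly. The matrices below are an illustrative choice of ours.

```python
import numpy as np

def expm_nil3(M):
    """exp(M) for a 3x3 strictly upper triangular matrix: M^3 = 0, so the
    exponential series terminates after the quadratic term."""
    return np.eye(3) + M + M @ M / 2.0

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

# CBH series: degree-3 and higher brackets vanish for these matrices.
C = A + B + 0.5 * (A @ B - B @ A)

assert not np.allclose(expm_nil3(A) @ expm_nil3(B), expm_nil3(A + B))
assert np.allclose(expm_nil3(A) @ expm_nil3(B), expm_nil3(C))
```

The first assertion shows the two flows genuinely fail to commute; the second verifies the (here finite) CBH identity.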
The uses of these expansions (2) and (4) extend far beyond control, as they apply to any dynamical systems that split into different interacting components. In particular, closely related techniques have recently found much attention in numerical analysis. This started with a search for Runge-Kutta-like integration schemes such that the approximate solutions inherently satisfy algebraic constraints (e.g., conservation laws) imposed on the dynamical system [3]. Much effort has been devoted to optimizing such schemes, in particular minimizing the number of costly function evaluations [16]. For a recent survey see [6]. Clearly, the form (4) is most attractive, as it requires the evaluation of only a single (computationally costly) exponential.

The general area of noncommuting formal power series admits both dynamical-systems/analytic and purely algebraic/combinatorial approaches. Algebraically, underlying the expansions (2) and (4) is the Chen series [2], which is well known to be an exponential Lie series, compare [18], thus guaranteeing the existence of the alternative expansions

    Σ_{w ∈ Z*} w ⊗ w = exp( Σ_{B ∈ B} ζB ⊗ B ) = ∏→_{B ∈ B} exp(ξB ⊗ B).    (5)

The first bases for free Lie algebras build on Hall's work in the 1930s on commutator groups. While several other bases (Lyndon, Shirshov) have been proposed in the sequel, Viennot [23] showed that they are all special cases of generalized Hall bases. Underlying their construction is a unique factorization principle, which in turn is closely related to Poincaré-Birkhoff-Witt bases (of the universal enveloping algebra of a Lie algebra) and Lazard elimination. Formulas for the dual PBW bases ξB have been given by Schützenberger, Sussmann [22], Grossman, and Melançon and Reutenauer [15]. For an introductory survey, see [11], while [15] elucidates the underlying Hopf algebra structure, and [18] is the principal technical reference for combinatorics of free Lie algebras.

3 AVAILABLE RELATED RESULTS

The direct expansion of the logarithm into a formal power series may be simplified using symmetrization [18, 22], but this still does not yield well-defined "coordinates" with respect to a basis. Explicit, but quite unattractive, formulas for the first 14 coefficients ζH in the case of m = 2 and a Hall basis are calculated in [10]. This calculation can be automated in a computer algebra system for terms of considerably higher order, but no apparent algebraic structure is discernible. These results suffice for some numerical purposes, but they do not provide much structural insight. Several new algebraic structures introduced in [19] lead to systematic formulas for ζB using spanning sets B that are smaller than those in [22], but are not bases. These formulas can be refined to apply to Hall bases, but at the cost of losing their simple structure. Further recent insights into the underlying algebraic structures may be found in [4, 13]. The introductory survey [11] lays out in elementary terms the close connections between Lazard elimination, Hall sets, chronological products, and the particularly attractive formula (3).
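Of the generalized Hall bases just mentioned, the Lyndon basis is the easiest to enumerate: Duval's algorithm lists all Lyndon words, and the standard factorization then brackets each word into a basis element. The sketch below covers only the enumeration step (the bracketing is omitted); the counts reproduce the dimensions 2, 1, 2, 3, 6 of the graded components of the free Lie algebra on two generators, in line with Witt's formula.

```python
def lyndon_words(k, n):
    """Duval's algorithm: all Lyndon words of length <= n over {0, ..., k-1}."""
    w, out = [-1], []
    while w:
        w[-1] += 1
        out.append(tuple(w))
        m = len(w)
        while len(w) < n:
            w.append(w[len(w) - m])   # extend the word periodically ...
        while w and w[-1] == k - 1:   # ... then drop trailing maximal letters
            w.pop()
    return out

words = lyndon_words(2, 5)
counts = {L: sum(1 for x in words if len(x) == L) for L in range(1, 6)}
print(counts)   # {1: 2, 2: 1, 3: 2, 4: 3, 5: 6}
```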
These intimate connections suggest that, to obtain similarly attractive expressions for ζB, one may have to start from the very beginning by building bases for free Lie algebras that do not rely on recursive use of Lazard elimination. While it is desirable that any such new bases still restrict to bases of the homogeneous subspaces of the free Lie algebra, we suggest balancing the simplicity of the basis for the Lie algebra against the structural simplicity of the formulas for the dual objects ζB. In particular, consider bases whose elements are not necessarily Lie monomials, but possibly nontrivial linear combinations of iterated Lie brackets of the generators.

BIBLIOGRAPHY

[1] A. Agrachev and R. Gamkrelidze, "Chronological algebras and nonstationary vector fields," Journal Soviet Math., 17, pp. 1650-1675, 1979.

[2] K. T. Chen, "Integration of paths, geometric invariants and a generalized Baker-Hausdorff formula," Annals of Mathematics, 65, pp. 163-178, 1957.

[3] P. Crouch and R. Grossman, "The explicit computation of integration algorithms and first integrals for ordinary differential equations with polynomial coefficients using trees," Proc. Int. Symposium on Symbolic and Algebraic Computation, pp. 89-94, ACM Press, 1992.

[4] A. Dzhumadil'daev, "Non-associative algebras without unit," Comm. Algebra, 2002.

[5] M. Fliess, "Fonctionnelles causales non linéaires et indéterminées non commutatives," Bull. Soc. Math. France, 109, pp. 3-40, 1981.

[6] A. Iserles, "Expansions that grow on trees," Notices AMS, 49, pp. 430-440, 2002.

[7] G. Jacob, "Motion planning by piecewise constant or polynomial inputs," Proc. NOLCOS, Bordeaux, 1992.

[8] B. Jakubczyk, "Convergence of power series along vector fields and their commutators; a Cartan-Kähler type theorem," Ann. Polon. Math., 74, pp. 117-132, 2000.

[9] M. Kawski and H. J. Sussmann, "Noncommutative power series and formal Lie-algebraic techniques in nonlinear control theory," In: Operators, Systems, and Linear Algebra, U. Helmke, D. Prätzel-Wolters, and E. Zerz, eds., Teubner, pp. 111-128, 1997.

[10] M. Kawski, "Calculating the logarithm of the Chen Fliess series," Proc. MTNS, Perpignan, 2000.

[11] M. Kawski, "The combinatorics of nonlinear controllability and noncommuting flows," Lecture Notes series of the Abdus Salam ICTP, 2001.

[12] G. Lafferriere and H. Sussmann, "Motion planning for controllable systems without drift," Proc. Int. Conf. Robot. Automat., pp. 1148-1153, 1991.

[13] J.-L. Loday and T. Pirashvili, "Universal enveloping algebras of Leibniz algebras and (co)homology," Math. Annalen, 196, pp. 139-158, 1993.

[14] W. Magnus, "On the exponential solution of differential equations for a linear operator," Comm. Pure Appl. Math., VII, pp. 649-673, 1954.

[15] G. Melançon and C. Reutenauer, "Lyndon words, free algebras and shuffles," Canadian J. Math., XLI, pp. 577-591, 1989.

[16] H. Munthe-Kaas and B. Owren, "Computations in a free Lie algebra," Royal Soc. London Philos. Trans. Ser. A Math. Phys. Eng. Sci., 357, pp. 957-982, 1999.

[17] R. Murray and S. Sastry, "Nonholonomic path planning: Steering with sinusoids," IEEE Trans. Aut. Control, 38, pp. 700-716, 1993.

[18] C. Reutenauer, Free Lie Algebras, Oxford: Clarendon Press, 1993.

[19] E. Rocha, "On computation of the logarithm of the Chen-Fliess series for nonlinear systems," Proc. NCN, Sheffield, 2001.

[20] E. Sontag and Y. Wang, "Orders of input/output differential equations and state space dimensions," SIAM J. Control and Optimization, 33, pp. 1102-1126, 1995.

[21] H. Sussmann, "Lie brackets and local controllability: A sufficient condition for scalar-input systems," SIAM J. Control & Opt., 21, pp. 686-713, 1983.

[22] H. Sussmann, "A product expansion of the Chen series," Theory and Applications of Nonlinear Control Systems, C. I. Byrnes and A. Lindquist, eds., Elsevier, North-Holland, pp. 323-335, 1986.

[23] G. Viennot, "Algèbres de Lie libres et monoïdes libres," Lecture Notes Math., 692, Springer, Berlin, 1978.

Problem 3.4
An extended gradient conjecture Luiz Carlos Martins Jr.
Universidade Paulista - UNIP
15091-450, S. J. do Rio Preto, SP, Brazil
[email protected]

Geraldo Nunes Silva

Departamento de Computação e Estatística
Universidade Estadual Paulista - UNESP
15054-000, S. J. do Rio Preto, SP, Brazil
[email protected]

1 DESCRIPTION OF THE PROBLEM

Let f : Rⁿ → R be a locally Lipschitz function, i.e., for all x ∈ Rⁿ there exist ε > 0 and a constant K, depending on ε, such that

    |f(x1) − f(x2)| ≤ K |x1 − x2|,  ∀ x1, x2 ∈ x + εB.

Here B denotes the open unit ball of Rⁿ. Let v ∈ Rⁿ. The generalized directional derivative of f at x in the direction v, denoted f⁰(x; v), is defined as follows:

    f⁰(x; v) = limsup_{y → x, s → 0+} [f(y + sv) − f(y)] / s.

Here y ∈ Rⁿ, s ∈ (0, +∞). The generalized gradient of f at x, denoted ∂f(x), is the subset of Rⁿ given by

    {ξ ∈ Rⁿ : f⁰(x; v) ≥ ⟨ξ, v⟩, ∀ v ∈ Rⁿ}.

For the properties and basic calculus of the generalized gradient, standard references are [1] and [2]. The problem we propose here concerns the following differential inclusion:

    ẋ(t) ∈ ∂f(x(t))  a.e.  t ∈ [0, β),    (1)

where β is a positive scalar. A solution of (1) is an absolutely continuous function x : [0, β) → Rⁿ that, together with its derivative ẋ with respect to t, satisfies (1). Note that ẋ may fail to exist on a set A ⊂ [0, ∞) of zero Lebesgue measure. Take S to be the set [0, ∞) \ A. We say that

    d := lim_{t → β, t ∈ S} ẋ(t) / |ẋ(t)|,

when the limit exists, is a tangential direction of x at 0 ∈ Rⁿ. The notation t → β, t ∈ S, means that the limit is taken for t ∈ S.
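In the smooth case, where ∂f(x) reduces to {∇f(x)}, the tangential direction just defined is easy to compute numerically. The example below is an illustrative choice of ours: for f(x) = −‖x‖², with f(0) = 0, the inclusion (1) becomes ẋ = −2x, every solution tends to 0, ẋ/‖ẋ‖ is constant along the trajectory, and the tangential direction exists and equals −x0/‖x0‖.

```python
import numpy as np

# Illustrative smooth example: f(x) = -|x|^2, so del f(x) = {grad f(x)} = {-2x}.
x0 = np.array([3.0, 4.0])
x = x0.copy()
dt = 1e-3

xdot = -2.0 * x
d0 = xdot / np.linalg.norm(xdot)   # direction of xdot at t = 0

for _ in range(5000):              # Euler steps: the trajectory decays to 0
    x = x + dt * (-2.0 * x)

xdot = -2.0 * x
d = xdot / np.linalg.norm(xdot)    # direction of xdot near the limit point 0

assert np.allclose(d, d0)                        # the direction never changes
assert np.allclose(d, -x0 / np.linalg.norm(x0))  # tangential direction -x0/|x0|
```

The content of the conjecture below is precisely that such a limiting direction should exist in general, without any smoothness.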
We are now in a position to propose our problem.

Conjecture: Suppose that f(0) = 0 and let x be a solution of (1) such that x(t) → 0 as t → β. Then there exists a unique tangential direction.

2 MOTIVATION AND HISTORY OF THE PROBLEM

This problem was first stated in the smooth case, that is, in the situation where f is a real analytic function on an open neighborhood U0 ⊂ Rⁿ of a point x0, and x is a maximal curve of (1) with ∇f, the gradient of f, replacing the generalized gradient of f, and x(t) → x0 as t → β. Under these conditions, R. Thom asked whether the tangent of x(t) at x0 was well-defined. This was later named the gradient conjecture; see, for example, [4, 5, 6]. We now show that this was a natural question to ask. Assuming that f is an analytic function as above and that x0 = 0 and f(0) = 0, Lojasiewicz proved in [8, p. 92] that there exists 0 < θ < 1 such that

    |∇f(x)| ≥ |f(x)|^θ  for x ∈ U0.

This result is known as the Lojasiewicz inequality and is the main tool in the proof of the next stated result. For an account of this see, for example, [7] and [9].

Theorem (Lojasiewicz): Let A = f⁻¹(0) ∩ U0. Then β = +∞, and if x(t) tends toward A, then x(t) tends to a unique point of A.

(A simple proof of this theorem is provided in [3].) Since, from the theorem above, a maximal trajectory x lives on the whole interval [0, ∞) and approaches a unique point in the inverse image of 0 by f, it is natural to ask whether the tangent of x(t) at the limit point is also unique. This is precisely what R. Thom conjectured, and it became the well-known gradient conjecture. In this work, we propose an extension of this conjecture to the nonsmooth case.

3 KNOWN RESULTS AND REMARKS

The gradient conjecture, as it is known in the regular case, is equivalent to the fact that the integral curves of ∇f have a tangent at all points of ω(x). Partial results on the gradient conjecture were given in [3], [11], and [9]. The first proof of the general regular case was given in [4], and a simpler modified proof appeared in [6]. Actually, a stronger result has been proved, which states that the radial projection of x(t) from x(0) onto the sphere Sⁿ⁻¹ has finite length. The arguments of the proof rely on the Lojasiewicz inequality. The new conjecture is stated in the nonsmooth setting and is called the extended gradient conjecture. As far as we know, no result has appeared in this direction. We reckon that a simple extension of the standard techniques used to prove the regular case is not enough. It will be necessary to come up with new ideas to prove this conjecture, if it happens to be true.

BIBLIOGRAPHY

[1] F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley Interscience, New York, 1983; reprinted as vol. 5 of Classics in Applied Mathematics, SIAM, Philadelphia, PA, 1990; Russian translation, Nauka, Moscow, 1988.

[2] F. H. Clarke, Yu. S. Ledyaev, R. J. Stern, and P. R. Wolenski, Nonsmooth Analysis and Control Theory, GTM 178, New York, Springer-Verlag, 1998.

[3] F. Ichikawa, "Thom's conjecture on singularities of gradient vector fields," Kodai Math. J., 15, pp. 134-140, 1992.

[4] K. Kurdyka and T. Mostowski, The Gradient Conjecture of R. Thom, preprint, 1996 (revised 1999), http://www.lama.univ-savoie.fr/sitelama/Membres/pages web/KURDIKA/index.html.

[5] K. Kurdyka, "On the gradient conjecture of R. Thom," Seminari di Geometria 1998-1999, Università di Bologna, Istituto di Geometria, Dipartamento di Matematica, pp. 143-151, 2000.

[6] K. Kurdyka, T. Mostowski, and A. Parusinski, "Proof of the gradient conjecture of R. Thom," Annals of Mathematics, 152, pp. 763-792, 2000.

[7] S. Lojasiewicz, "Une propriété topologique des sous-ensembles analytiques réels," Colloques Internationaux du C.N.R.S. #117, Les équations aux dérivées partielles, Paris, 25-30 juin 1962, pp. 87-89.

[8] S. Lojasiewicz, Ensembles semi-analytiques, IHES preprint, 1965.

[9] R. Moussu, "Sur la dynamique des gradients. Existence de variétés invariantes," Math. Ann., 307, pp. 445-460, 1997.

[10] F. Sanz, "Non oscillating solutions of analytic gradient vector fields," Ann. Inst. Fourier, Grenoble, 48 (4), pp. 1045-1067, 1998.

[11] H. Xing Lin, Sur la structure des champs de gradients de fonctions analytiques réelles, Ph.D. thesis, Université Paris VII, 1992.

Problem 3.5
Optimal transaction costs from a Stackelberg perspective Geert Jan Olsder
Faculty of Information Technology and Systems
Delft University of Technology
P.O. Box 5031, 2600 GA Delft
The Netherlands
[email protected]

1 DESCRIPTION OF THE PROBLEM

The problem to be considered is

    ẋ = f(x, u),  x(0) = x0,    (1)

    max_u JF = max_u { q(x(T)) + ∫₀ᵀ g(x, u) dt − ∫₀ᵀ γ(u(t)) dt },    (2)

    max_{γ(·)} JL = max_{γ(·)} ∫₀ᵀ γ(u(t)) dt,    (3)

with f, g, and q being given functions, the state x ∈ Rⁿ, the control u ∈ R, and γ(·) a scalar function that maps R into R. The problem concerns a dynamic game in which u is the decision variable of one player, called the Follower, and the function γ is up to the choice of the other player, called the Leader. An essential feature of the problem is that the Leader's profit (3) is a direct loss for the Follower in (2). The Leader lives as a parasite on the Follower. In the next section, a more concrete motivation will be given. The function γ must be chosen subject to the constraints γ(0) = 0, γ(·) ≥ 0, and, if at all possible, it must be nondecreasing with respect to u, and possibly also satisfy γ(u) = γ(−u). By means of the notation introduced and the names of the players, it should be clear that the problem formulated is a (special kind of) Stackelberg game [2]. The Leader announces the function γ, which thus becomes known to the Follower, who subsequently chooses u. Thus the optimal u is a function of γ(·).

2 MOTIVATION AND HISTORY OF THE PROBLEM

For n = 1, i.e., x ∈ R, which we assume henceforth, an interpretation of this model is that x(t) represents the Follower's wealth at time t. This Follower is an investor who would like to maximize
    ∫₀ᵀ g(x, u) dt + q(x(T)).    (4)

The term q(x(T)) in this criterion is a function of the wealth of the investor at the final time T, and the term ∫₀ᵀ g(x, u) dt represents the consumption during the time interval [0, T]. The decision variable u(t) denotes the transactions with the bank at time t (e.g., selling or buying stocks). To be more precise, u(t) denotes a transaction density, i.e., during the time interval [t, t + dt] the number of transactions equals u(t) dt. For u = 0, no transactions take place and the bank does not earn money (because γ(0) = 0). Transactions cost money, and we assume that the bank (i.e., the Leader) wants to maximize these transaction costs, as indicated by (3). These costs are subtracted from (4), and hence the ultimate criterion of the Follower is given by (2). The restrictions posed on γ (nondecreasing with respect to u and γ(0) = 0) now have a clear meaning: the higher the number of transactions (either buying or selling, one being related to a positive u, the other to a negative u), the higher the costs. Equation (1) is supposed to tell how the wealth x evolves in time. Usually, such models are represented by stochastic differential equations, but due to the complexity of the problem, we restrict ourselves to a less realistic deterministic differential equation.

3 AVAILABLE RESULTS AND BACKGROUND

Problems with transaction costs have been studied before, see, e.g., [1, 3, 4], but never from the point of view of the bank maximizing these costs. The problem as stated is a difficult one; see [7] for some first solution attempts. The principal difficulty is that composed functions are involved, i.e., one function is the argument of another [6]. Hence, we will also consider the following static problem, which is simpler than the time-dependent one:

    max_u (q(u) − γ(u)),  max_{γ(·)} γ(u),
u γ ( ·) subject to γ (·) ≥ 0 and γ (0) = 0 and possibly also γ (u) nondecreasing with respect to u. With the same interpretation as before, the investor is secured of a minimum value q (0) by playing u = 0. Therefore, he will only take uvalues into consideration for which q (u) ≥ q (0). This static problem is a special case of the socalled inverse Stackelberg problem as it was introduced in [5] and a solution method is known, see chapter 7 of [2]. OPTIMAL TRANSACTION COSTS 109 To start with, in a conventional Stackelberg game, there are two players, called Leader and Follower respectively, each having their own cost function JL (uL , uF ), JF (uL , uF ), where uF , uL ∈ R. Each player wants to choose his own decision variable in such a way as to maximize his own criterion. Without giving an equilibrium concept, the problem as stated so far is not well deﬁned. Such an equilibrium concept could, for instance, be one named after Nash or Pareto. In the Stackelberg equilibrium concept, one player, the Leader, announces his decision uL , which is subsequently known to the other player, the Follower. With this knowledge, the Follower chooses his uF . Hence, uF becomes a function of uL , written as uF = lF (uL ), which is determined through the relation min JF (uL , uF ) = JF (uL , lF (uL )),
uF provided that this minimum exists and is a singleton for each possible choice uL of the Leader. The function lF (·) is sometimes called a reaction function. Before the Leader announces his decision uL , he will realize how the Follower will react and hence the Leader chooses uL such as to minimize JL (uL , lF (uL )). In an inverse Stackelberg game, the Leader does not announce his choice uL ahead of time, as above, but instead a function γL (uF ). Think (as another motivating example) of the Leader being the government and of the Follower as a citizen. The government states how much income tax the citizen has to pay and this tax will depend on the income uF of the citizen. It is up to the citizen as to how much money to earn (by working harder or not) and thus he can choose uF . The income tax the government will receive equals γL (uF ), where the ”rule for taxation” γL (·) was made known ahead of time. BIBLIOGRAPHY [1] M. Akian, J. L. Menaldi, and A. Sulem, “On an investmentconsumption model with transaction costs,” SIAM J. Control and Optim., vol. 34 pp. 329364, 1996. [2] T. Ba¸ar and G. J. Olsder, Dynamic Noncooperative Game Theory, s SIAM, Philadelphia, 1999. [3] P. Bernhard, “A robust control approach to option pricing including transaction costs,” In: Annals of the ISDG 7, A.Nowak, ed., Birkh¨user, a 2002. 110 PROBLEM 3.5 [4] E. R. Grannan and G. H. Swindle, “Minimizing transaction costs of option hedging strategies,” Mathematical ﬁnance, vol. 6, no. 4, 341–364, 1996. [5] Y.C. Ho, P. B. Luh and G. J. Olsder, “A controltheoretic view on incentives,” Automatica, vol. 18, pp. 167179, 1982. [6] M. Kuczma, Functional Equations in a Single Variable, Polish Scientiﬁc Publishers, 1968. [7] G. J. Olsder, “Diﬀerential gametheoretic thoughts on option pricing and transaction costs,” International Game Theory Review, vol. 2, pp. 209228, 2000. Problem 3.6
Does cheap control solve a singular nonlinear quadratic problem? Yuri V. Orlov
Electronics Department, CICESE Research Center, Ensenada, BC 22860, Mexico
[email protected]

1 DESCRIPTION OF THE PROBLEM

A standard control synthesis for affine systems

ẋ = f(x) + g(x)u,  x ∈ R^n, u ∈ R^m,    (1)

under the degenerate performance criterion

J(u) = ∫₀^∞ xᵀ(t)Px(t) dt,  P = Pᵀ > 0,    (2)

depending on the state vector x(t) only, replaces this singular optimization problem by its regularization through an ε-approximation

J_ε(u) = ∫₀^∞ [xᵀ(t)Px(t) + εuᵀ(t)Ru(t)] dt,  ε > 0, R = Rᵀ > 0,    (3)

of this criterion with a small (cheap) penalty on the control input u. Hereafter, the functions f, g are assumed sufficiently smooth, and all quantities in (1)–(3) are assumed to have compatible dimensions. The optimal control synthesis corresponding to (2) is then obtained as the limit as ε → 0 of the optimal control law u⁰_ε corresponding to (3). Since only a particular approximation is taken, while other approximations are possible as well, there is no guarantee that the original performance criterion is minimized by the control law obtained via this procedure. An open problem that arises here is to prove that

inf_u J(u) = lim_{ε→0} inf_u J_ε(u)    (4)

or to present a counterexample of a system (1) for which the limiting relation (4) is not satisfied.

2 MOTIVATION AND HISTORY OF THE PROBLEM

The above problem is well understood in the linear case, when system (1) is specified as follows:

ẋ = Ax + Bu,  x ∈ R^n, u ∈ R^m.    (5)

Under stabilizability and detectability conditions, the linear system (5) driven by the cheap control u⁰_ε exhibits an initial fast transient followed by a slow motion on a singular arc (see, e.g., [3, section 6] and references therein). In the limit ε → 0, a singular perturbation analysis reveals that the stable fast modes decay instantaneously, as if they were driven by the impulsive component of the controller minimizing the degenerate performance criterion (2). This feature, however, does not admit a straightforward extension to the system in question because, in contrast to the linear system (5), an instantaneous impulse response of the affine system (1), generally speaking, depends on the approximation of the impulse [2]. Thus, it might happen that the original performance criterion (2) is not minimized through the ε-approximation (3) of this criterion.

3 AVAILABLE RESULTS

A distribution-oriented variational analysis [1] of the singular nonlinear quadratic problem (1), (2), admitting both integrable and impulsive inputs, reveals that the infimum of the degenerate criterion (2) is typically attained by a controller with impulsive behavior at the initial time moment. In that case, the instantaneous impulse response of the closed-loop system does not depend on the approximation of the impulse if and only if the affine system (1) satisfies the Frobenius condition, i.e., the distribution spanned by the columns of g(x) is involutive (see [2] for details).
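For a concrete feel for the regularization, consider the scalar linear special case of (1)–(3): ẋ = ax + u with cost ∫(px² + εu²) dt, where the algebraic Riccati equation can be solved in closed form. The sketch below is our own illustration (the scalar system and the numerical values are arbitrary choices, not part of the problem statement); it shows the regularized optimal cost k_ε x₀² vanishing like √(pε) as ε → 0, consistent with (4) in the linear case, where an impulsive control drives x to zero instantly and inf J(u) = 0.

```python
import math

def cheap_control_cost(a, p, eps):
    """Optimal cost coefficient k for J_eps = ∫ (p x^2 + eps u^2) dt, system xdot = a x + u.

    k is the positive root of the scalar algebraic Riccati equation
        2 a k + p - k^2 / eps = 0,
    so the optimal cost from x(0) = x0 is k * x0**2.
    """
    return eps * (a + math.sqrt(a * a + p / eps))

# As eps -> 0 the regularized optimal cost tends to 0 like sqrt(p * eps).
costs = [cheap_control_cost(a=1.0, p=4.0, eps=10.0 ** (-k)) for k in range(1, 7)]
print(costs)
```

The monotone decay of `costs` toward 0 is the scalar counterpart of the limiting relation (4); the open problem is whether the analogous statement survives for the nonlinear affine system (1).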
Motivated by these arguments, the author suspects that the limiting relation (4) holds whenever system (1) satisfies the Frobenius condition, and that a counterexample to (4) is indeed possible if the Frobenius condition is not imposed on the system.

BIBLIOGRAPHY

[1] Y. Orlov, "Necessary optimality conditions of generalized control actions 1, 2," Automation and Remote Control, vol. 44, no. 7, pp. 868–877; vol. 44, no. 8, pp. 998–1105, 1984.

[2] Y. Orlov, "Instantaneous impulse response of nonlinear systems," IEEE Trans. Aut. Control, vol. 45, no. 5, pp. 999–1001, 2000.

[3] V. R. Saksena, J. O'Reilly, and P. V. Kokotovic, "Singular perturbations and time-scale methods in control theory: Survey 1976–1983," Automatica, vol. 20, no. 3, pp. 273–293, 1984.

Problem 3.7
Delta-Sigma modulator synthesis

Anders Rantzer
Dept. of Automatic Control, Lund Institute of Technology, P.O. Box 118, SE-221 00 Lund, Sweden
[email protected]

1 DESCRIPTION OF THE PROBLEM

Delta-Sigma modulators are among the key components in modern electronics. Their main purpose is to provide cheap conversion from analog to digital signals. In the figure below, the analog signal r with values in the interval [−1, 1] is supposed to be approximated by the digital signal d that takes only two values, −1 and 1. One cannot expect good approximation at all frequencies. Hence, the dynamic system D should be chosen to minimize the error f in a given frequency range [ω₁, ω₂]. There is a rich literature on Delta-Sigma modulators; see [2, 1] and references therein. The purpose of this note is to reach a broad audience by focusing on the central mathematical problem.

[Figure: feedback loop in which the error f = r − d drives the dynamic system D, whose output is quantized to ±1 to produce the digital signal d.]

To make a precise problem formulation, we need to introduce some notation.

Notation: The signal space ℓ[0, ∞] is the set of all sequences {f(k)}_{k=0}^∞ such that f(k) ∈ [−1, 1] for k = 0, 1, 2, .... A map D : ℓ[0, ∞] → ℓ[0, ∞] is called a causal dynamic system if for every u, v ∈ ℓ[0, ∞] such that u(k) = v(k) for k ≤ T, it holds that [D(u)](k) = [D(v)](k) for k ≤ T. Define also the function

sgn(x) = 1 if x ≥ 0, −1 otherwise.

Problem: Given r ∈ ℓ[0, ∞] and a causal dynamic system D, define d, f ∈ ℓ[0, ∞] by

d(k + 1) = sgn([D(f)](k)),  d(0) = 0,  f(k) = r(k) − d(k),

and find a causal dynamic system D such that, regardless of the reference input r, the error signal f becomes small in a prespecified frequency interval [ω₁, ω₂].

The problem formulation is intentionally left vague on the last line. The size of f can be measured in many different ways. One option is to require a uniform bound on

lim sup_{T→∞} (1/T) | Σ_{k=0}^{T} e^{−ikω} f(k) |

for all ω ∈ [ω₁, ω₂] and all reference signals r ∈ ℓ[0, ∞]. Another option is to allow D to be a stochastic system and put a bound on the spectral density of f in the frequency interval. This would be consistent with the widespread practice of adding a stochastic "dithering signal" before the nonlinearity in order to avoid undesired periodic orbits.

2 AVAILABLE RESULTS

The simplest and best understood case is

x(k + 1) = x(k) + f(k),  f(k) = r(k) − sgn(x(k)).

In this case, it is easy to see that the set x ∈ [−2, 2] is invariant, so with
F_T(z) = Σ_{k=0}^{T} z^{−k} f(k),  X_T(z) = Σ_{k=0}^{T} z^{−k} x(k),

it holds that

(1/T) ∫₀^{ω₀} |F_T(e^{iω})|² dω = (1/T) ∫₀^{ω₀} |(e^{iω} − 1) X_T(e^{iω})|² dω
  = (1/T) ∫₀^{ω₀} 2(1 − cos ω) |X_T(e^{iω})|² dω
  ≤ 2(1 − cos ω₀) (1/T) ∫₀^{π} |X_T(e^{iω})|² dω
  = 2(1 − cos ω₀) (π/T) Σ_{k=0}^{T} x(k)²
  ≤ 8π (1 − cos ω₀),

which clearly bounds the error f at low frequencies. Many modifications using higher-order dynamics have been suggested in order to further reduce the error. However, there is still a strong demand for improvements and a better understanding of the nonlinear dynamics. The following two references are suggested as entries to the literature on ΔΣ-modulators:

BIBLIOGRAPHY

[1] James A. Cherry, Continuous-Time Delta-Sigma Modulators for High-Speed A/D Conversion: Theory, Practice & Fundamental Performance Limits, Kluwer, 1999.

[2] S. R. Norsworthy, R. Schreier, and G. C. Temes, Delta-Sigma Data Converters, IEEE Press, New York, 1997.

Problem 3.8
Determining various asymptotics of solutions of nonlinear time-optimal problems via right ideals in the moment algebra

G. M. Sklyar
Szczecin University, Wielkopolska str. 15, 70-451 Szczecin, Poland; Kharkov National University, Svoboda sqr. 4, 61077 Kharkov, Ukraine
[email protected], [email protected]

S. Yu. Ignatovich
Kharkov National University, Svoboda sqr. 4, 61077 Kharkov, Ukraine
[email protected]

1 MOTIVATION AND HISTORY OF THE PROBLEM

The time-optimal control problem is one of the most natural and at the same time difficult problems in optimal control theory. For linear systems, the maximum principle allows one to indicate a class of optimal controls. However, the explicit form of the solution can be given only in a number of particular cases [1, 2, 3]. At the same time [4], an arbitrary linear time-optimal problem with analytic coefficients can be approximated (in a neighborhood of the origin) by a certain linear problem of the form

ẋᵢ = −t^{qᵢ} u,  i = 1, ..., n,  q₁ < ··· < qₙ,  x(0) = x⁰, x(θ) = 0, |u| ≤ 1, θ → min.    (1)

In the nonlinear case, careful analysis is required for any particular system [5, 6]. However, in a number of cases the time-optimal problem for a nonlinear system can be approximated by a linear problem of the form (1) [7]. We recall this result briefly. Consider the time-optimal problem in the following statement:

ẋ = a(t, x) + u b(t, x),  a(t, 0) ≡ 0,  x(0) = x⁰, x(θ) = 0, |u| ≤ 1, θ → min,    (2)

where a, b are real analytic in a neighborhood of the origin in R^{n+1}. Let us denote by (θ_{x⁰}, u_{x⁰}) the solution of this problem. Denote by R_a, R_b the operators acting as R_a d(t, x) = d_t(t, x) + d_x(t, x) · a(t, x), R_b d(t, x) = d_x(t, x) · b(t, x) for any vector function d(t, x) analytic in a neighborhood of the origin in R^{n+1}, and let ad^{m+1}_{R_a} R_b = [R_a, ad^m_{R_a} R_b], m ≥ 0; ad⁰_{R_a} R_b = R_b, where [·, ·] is the operator commutator. Denote E(x) ≡ x.

Theorem 1: The conditions

rank{ad^j_{R_a} R_b E(x)|_{t=0, x=0}}_{j≥0} = n

and

[ad^{m₁}_{R_a} R_b, ··· [ad^{m_{k−1}}_{R_a} R_b, ad^{m_k}_{R_a} R_b] ··· ] E(x)|_{t=0, x=0} ∈ Lin{ad^j_{R_a} R_b E(x)|_{t=0, x=0}}_{j=0}^{m−2}    (3)

for any k ≥ 2 and m₁, ..., m_k ≥ 0, where m = m₁ + ··· + m_k + k, hold if and only if there exist a nonsingular transformation Φ of a neighborhood of the origin in R^n, Φ(0) = 0, and a linear time-optimal problem of the form (1) which approximates problem (2) in the following sense:

θ_{Φ(x⁰)} / θ^{Lin}_{x⁰} → 1,  (1/θ) ∫₀^θ |u^{Lin}_{x⁰}(t) − u_{Φ(x⁰)}(t)| dt → 0  as x⁰ → 0,

where (θ^{Lin}_{x⁰}, u^{Lin}_{x⁰}) denotes the solution of (1) and θ = min{θ_{Φ(x⁰)}, θ^{Lin}_{x⁰}}.

That means that if the conditions of Theorem 1 are not satisfied, then the asymptotic behavior of the solution of the nonlinear time-optimal problem differs from the asymptotics of the solutions of all linear problems. In order to formulate the next result, let us give the representation of the system in the form of a series of nonlinear power moments [7]. We assume the initial point x⁰ is steered to the origin in the time θ by the control u(t) w.r.t. system (2). Then, under our assumptions, for rather small θ one has

x⁰ = Σ_{m=1}^{∞} Σ_{m₁+···+m_k+k=m} v_{m₁...m_k} ξ_{m₁...m_k}(θ, u),    (4)

where

ξ_{m₁...m_k}(θ, u) = ∫₀^θ ∫₀^{τ₁} ··· ∫₀^{τ_{k−1}} Π_{j=1}^{k} τ_j^{m_j} u(τ_j) dτ_k ··· dτ₂ dτ₁

are nonlinear power moments and

v_{m₁...m_k} = ((−1)^k / (m₁! ··· m_k!)) ad^{m₁}_{R_a} R_b ad^{m₂}_{R_a} R_b ··· ad^{m_k}_{R_a} R_b E(x)|_{t=0, x=0}.

We say that ord(ξ_{m₁...m_k}) = m₁ + ··· + m_k + k is the order of ξ_{m₁...m_k}. Theorem 1 means that there exists a transformation Φ which reduces (4) to (Φ(x⁰))ᵢ = ξ_{qᵢ}(θ, u) + ρᵢ, i = 1, ..., n, where ρᵢ includes power moments of order greater than qᵢ + 1 only, while the representation (4) for the linear system (1) obviously has the form x⁰ᵢ = ξ_{qᵢ}(θ, u), i = 1, ..., n. That is, the linear moments that correspond to the linear time-optimal problem (1) form the principal part of the series in representation (4) as θ → 0. When condition (3) is not satisfied, one can try to find a nonlinear system which has a rather simple form and approximates system (2) in the sense of time optimality. In [8] we claim the following result. Consider the linear span A of all nonlinear moments ξ_{m₁...m_k} over R as a free algebra with the basis (ξ_{m₁...m_k} : k ≥ 1, m₁, ..., m_k ≥ 0) and the product ξ_{m₁...m_k} ξ_{n₁...n_s} = ξ_{m₁...m_k n₁...n_s}. Introduce the inner product in A by assuming the basis (ξ_{m₁...m_k}) to be orthonormal. Consider also the Lie algebra L over R generated by the elements (ξ_m)_{m=0}^{∞} with the commutator [ℓ₁, ℓ₂] = ℓ₁ℓ₂ − ℓ₂ℓ₁. Introduce further the graded structure A = Σ_{m=1}^{∞} A_m, putting A_m = Lin{ξ_{m₁...m_k} : ord(ξ_{m₁...m_k}) = m}. Consider now a system of the form (2). The series in (4) naturally defines the linear mapping v : A → R^n by the rule v(ξ_{m₁...m_k}) = v_{m₁...m_k}. Further, we assume the system (2) to be n-dimensional, i.e., dim v(L) = n. Note that the form of the coefficients v_{m₁...m_k} of the series in (4) implies the following property of the mapping v: the equality v(ℓ) = 0 for ℓ ∈ L implies v(ℓx) = 0 for any x ∈ A. That means that any system of the form (2) generates a right ideal in the algebra A.
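As a small numerical illustration of the linear part of this machinery (an illustration of ours, not from the problem statement): for the linear model (1), the representation x⁰ᵢ = ξ_{qᵢ}(θ, u) says that a control u steers x⁰ to the origin at time θ exactly when each coordinate x⁰ᵢ equals the power moment ∫₀^θ τ^{qᵢ} u(τ) dτ. The sketch below computes that moment for an arbitrary admissible bang-bang u, integrates ẋᵢ = −t^{qᵢ} u forward from the corresponding x⁰ᵢ, and checks that x(θ) ≈ 0.

```python
def moment(q, u, theta, n_steps=20000):
    """Power moment xi_q(theta, u) = ∫_0^theta t^q u(t) dt (midpoint rule)."""
    h = theta / n_steps
    return sum(((k + 0.5) * h) ** q * u((k + 0.5) * h) * h for k in range(n_steps))

def final_state(q, u, x0, theta, n_steps=20000):
    """Integrate xdot = -t^q u(t) from x(0) = x0 up to t = theta (midpoint rule)."""
    h = theta / n_steps
    x = x0
    for k in range(n_steps):
        t = (k + 0.5) * h
        x += -(t ** q) * u(t) * h
    return x

u = lambda t: 1.0 if t < 0.4 else -1.0   # an admissible bang-bang control, |u| <= 1
theta = 1.0
for q in (0, 1, 2):
    x0 = moment(q, u, theta)             # x0_i = xi_{q_i}(theta, u)
    print(q, final_state(q, u, x0, theta))   # ~0 in each coordinate
```

The switching time 0.4 is arbitrary; any bounded u gives the same cancellation, which is precisely the statement x⁰ᵢ = ξ_{qᵢ}(θ, u) for the linear problem.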
We introduce the right ideal in the following way. Consider the sequence of subspaces D_r = v(L ∩ (A₁ + ··· + A_r)) ⊂ R^n, and put r₀ = min{r : dim D_r = n}. For any r ≤ r₀, consider the subspace P_r of all elements y ∈ L ∩ A_r such that v(y) ∈ D_{r−1} (we assume D₀ = {0}). Then put J = Σ_{r=1}^{r₀} P_r (A + R). Let J^⊥ be the orthogonal complement of J. In the next theorem, L_{J^⊥} denotes the projection of the Lie algebra L on J^⊥.

Theorem 2: (A) Let system (2) be n-dimensional and ℓ₁, ..., ℓₙ be a basis of Σ_{r=1}^{r₀} (L_{J^⊥} ∩ A_r) such that ord(ℓᵢ) ≤ ord(ℓⱼ) for i < j. Then there exists a nonsingular analytic transformation Φ of a neighborhood of the origin that reduces (4) to the following form:

(Φ(x⁰))ᵢ = ℓᵢ + ρᵢ,  i = 1, ..., n,

where ρᵢ contains moments of order greater than ord(ℓᵢ) only. Moreover, there exists a control system of the form

ẋ = u b*(t, x),    (5)

such that representation (4) for this system is of the form

x⁰ᵢ = ℓᵢ,  i = 1, ..., n.    (6)

(B) Suppose there exists an open domain Ω ⊂ R^n \ {0} with 0 ∈ Ω̄ such that:

i) the time-optimal problem for system (5) with representation (6) has a unique solution (θ*_{x⁰}, u*_{x⁰}(t)) for any x⁰ ∈ Ω;

ii) the function θ*_{x⁰} is continuous for x⁰ ∈ Ω;

iii) denote K = {u*_{x⁰}(tθ*_{x⁰}) : x⁰ ∈ Ω} and suppose that the following condition holds: when considering K as a set in the space L₂(0, 1), weak convergence of a sequence of elements from K implies strong convergence.

Then the time-optimal problem for system (5) approximates problem (2) in the domain Ω in the following sense: there exists a set of pairs (θ_{x⁰}, u_{x⁰}(t)), x⁰ ∈ Ω, such that the control u_{x⁰}(t) steers the point Φ(x⁰) to the origin in the time θ_{x⁰} w.r.t. system (2), and

θ_{Φ(x⁰)} / θ*_{x⁰} → 1,  θ_{x⁰} / θ*_{x⁰} → 1,  (1/θ) ∫₀^θ |u*_{x⁰}(t) − u_{x⁰}(t)| dt → 0  as x⁰ → 0, x⁰ ∈ Ω,

where θ_{Φ(x⁰)} is the optimal time for problem (2) and θ = min{θ*_{x⁰}, θ_{x⁰}}.

Remark 1: If there exists an autonomous system ẋ = a(x) + u b(x) such that its representation (4) is of the form (6), and the origin belongs to the interior of the controllability set, then the function θ*_{x⁰} is continuous in a neighborhood of the origin [9]. Further, if the time-optimal controls for system (5) are bang-bang, then they satisfy condition iii) of Theorem 2.

Remark 2: Consider any r₀ ≥ 0 and an arbitrary sequence of subspaces M = {M_r}_{r=1}^{r₀}, M_r ⊂ L ∩ A_r, such that Σ_{r=1}^{r₀} (dim(L ∩ A_r) − dim M_r) = n. Put J_M = Σ_{r=1}^{r₀} M_r (A + R). We denote by J the set of all such ideals. For any J ∈ J, one can construct a control system of the form (5) such that its representation (4) is of the form (6).

2 FORMULATION OF THE PROBLEM

Thus, the steering problem ẋ = a(t, x) + u b(t, x), x(θ) = 0, where a(t, 0) ≡ 0, generates a right ideal in the algebra A, which defines system (5) and, under conditions i)–iii) of Theorem 2, describes the asymptotics of the solution of the time-optimal problem (2). The question is whether any system of the form (5) having a representation of the form (6) satisfies conditions i)–iii) of Theorem 2. A positive answer means that all possible asymptotics of solutions of the time-optimal problems (2) are represented as asymptotics of solutions of the time-optimal problems for systems (5) with representations of the form (6). In other words, if any system of the form (5) having a representation of the form (6) satisfies conditions i)–iii) of Theorem 2, then the time-optimal problems (2) induce the same structure in the algebra A as the steering problems to the origin under the constraint |u| ≤ 1, namely, the set of right ideals J. If this is not the case, then the next problem is to describe constructively the class of such systems.

BIBLIOGRAPHY

[1] V. I. Korobov and G. M. Sklyar, "Time optimality and the power moment problem," Math. USSR Sb., 62, pp. 185–205, 1989.
[2] V. I. Korobov and G. M. Sklyar, "Time optimality and the trigonometric moment problem," Math. USSR Izv., 35, pp. 203–220, 1990.

[3] V. I. Korobov and G. M. Sklyar, "Markov power min-problem with periodic gaps," J. Math. Sci., 80, pp. 1559–1581, 1996.

[4] G. M. Sklyar and S. Yu. Ignatovich, "A classification of linear time-optimal control problems in a neighborhood of the origin," J. Math. Anal. Applic., 203, pp. 791–811, 1996.

[5] A. Bressan, "The generic local time-optimal stabilizing controls in dimension 3," SIAM J. Control Optimiz., 24, pp. 177–190, 1986.

[6] B. Bonnard, "On singular extremals in the time minimal control problem in R³," SIAM J. Control Optimiz., 23, pp. 79480, 1985.

[7] G. M. Sklyar and S. Yu. Ignatovich, "Moment approach to nonlinear time optimality," SIAM J. Control Optimiz., 38, pp. 1707–1728, 2000.

[8] G. M. Sklyar and S. Yu. Ignatovich, "Approximation in the sense of time optimality via nonlinear moment problems," SIAM J. Control Optimiz.

[9] V. I. Korobov, "On continuous dependence of the solution of the optimal control problem with free time on initial data," Differen. Uravn., 7, pp. 1120–1123, 1971 (in Russian).

Problem 3.9
Dynamics of principal and minor component ﬂows U. Helmke and S. Yoshizawa
Department of Mathematics, University of Würzburg, Am Hubland, D-97074 Würzburg, Germany
[email protected]

R. Evans, J. H. Manton, and I. M. Y. Mareels
Department of Electrical and Electronic Engineering, The University of Melbourne, Victoria 3010, Australia
[email protected]

Stochastic subspace tracking algorithms in signal processing and neural networks are often analyzed by studying the associated matrix differential equations. Such gradient-like nonlinear differential equations have an intricate convergence behavior that is reminiscent of matrix Riccati equations. In fact, these types of systems are closely related. We describe a number of open research problems concerning the dynamics of such flows for principal and minor component analysis.

1 DESCRIPTION OF THE PROBLEM

Principal component analysis is a widely used method in neural networks, signal processing, and statistics for extracting the dominant eigenvalues of the covariance matrix of a sequence of random vectors. In the literature, various algorithms for principal component and principal subspace analysis have been proposed, along with some, but in many aspects incomplete, theoretical analyses of them. The analysis is usually based on stochastic approximation techniques and commonly proceeds via the so-called Ordinary Differential Equation (ODE) method, i.e., by associating an ODE whose convergence properties reflect those of the stochastic algorithm; see, e.g., [7]. In the sequel, we consider some of the relevant ODEs in more detail and pose some open problems concerning the dynamics of the flows. In order to state our problems in precise mathematical terms, we give a formal definition of a principal and minor component flow.

Definition [PSA/MSA Flow]: A normalized subspace flow for a covariance matrix C is a matrix differential equation Ẋ = f(X) on R^{n×p} with the following properties:

1. Solutions X(t) exist for all t ≥ 0 and have constant rank.
2. If X₀ is orthonormal, then X(t) is orthonormal for all t.
3. lim_{t→∞} X(t) = X_∞ exists for all full-rank initial conditions X₀.
4. X_∞ is an orthonormal basis matrix of a p-dimensional eigenspace of C.

The subspace flow is called a PSA (principal subspace) or MSA (minor subspace) flow if, for generic initial conditions, the solutions X(t) converge for t → ∞ to an orthonormal basis of the eigenspace that is spanned by the first p dominant or minor eigenvectors of C, respectively.

In the neural network and signal processing literature, a number of such principal subspace flows have been considered. The best-known example of a PSA flow is Oja's flow [9, 10]

Ẋ = (I − XXᵀ)CX.    (1)

Here C = Cᵀ > 0 is the n × n covariance matrix and X is an n × p matrix. Actually, it is nontrivial to prove that this cubic matrix differential equation is indeed a PSA flow in the above sense and thus, generically, converges to a dominant eigenspace basis. Another, more general example of a PSA flow is that introduced by [12, 13] and [17]:

Ẋ = CXN − XNXᵀCX.    (2)

Here N = Nᵀ > 0 denotes an arbitrary diagonal k × k matrix with distinct eigenvalues. This system is actually a joint generalization of Oja's flow (1) and of Brockett's [1] gradient flow on orthogonal matrices

Ẋ = [C, XNXᵀ]X.    (3)

For symmetric matrix diagonalisation, see also [6]. In [19], Oja's flow was rederived by first proposing the gradient flow

Ẋ = (C(I − XXᵀ) + (I − XXᵀ)C)X    (4)

and then omitting the first term C(I − XXᵀ)X, because C(I − XXᵀ)X = CX(I − XᵀX) → 0, a consequence of both terms in (4) forcing X to the invariant manifold {X : XᵀX = I}. Interestingly, it has recently been realized [8] that (4) has certain computational advantages compared with (1); however, a rigorous convergence theory is missing. Of course, these three systems are just prominent examples from a bigger list of potential PSA flows. One open problem in most of the current research is the lack of a full convergence theory establishing pointwise convergence to the equilibria. In particular, a solution to the following three problems would be highly desirable.
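To make the PSA property of Oja's flow concrete, the following numerical sketch (our own illustration; the covariance matrix, step size, and horizon are arbitrary choices, not part of the problem statement) integrates Ẋ = (I − XXᵀ)CX with a forward-Euler scheme and checks that, from a generic initial condition, X(t) approaches an orthonormal basis of the dominant two-dimensional eigenspace of C.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4, 2
# Symmetric positive definite "covariance" with eigenvalues 4, 3, 2, 1;
# its dominant 2-dimensional eigenspace is spanned by the first two columns of Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
C = Q @ np.diag([4.0, 3.0, 2.0, 1.0]) @ Q.T

X = 0.1 * rng.standard_normal((n, p))   # generic (full-rank) initial condition
dt = 0.01
for _ in range(20000):                  # Euler discretization of Xdot = (I - X X^T) C X
    X = X + dt * (C @ X - X @ (X.T @ C @ X))

P_dom = Q[:, :2] @ Q[:, :2].T           # orthogonal projector onto the dominant eigenspace
print(np.linalg.norm(X.T @ X - np.eye(p)))   # ~0: X has become orthonormal
print(np.linalg.norm(P_dom @ X - X))         # ~0: range(X) equals the dominant eigenspace
```

Note that the limit is only a basis of the dominant subspace, not a basis of eigenvectors: which orthonormal basis is reached depends on the initial condition, in line with the remark that the Oja flow is a principal subspace method rather than a principal component method.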
The first problem addresses the qualitative analysis of the flows.

Problem 1. Develop a complete phase portrait analysis of (1), (2), and (4). In particular, prove that the flows are PSA, determine the equilibrium points, their local stability properties, and the stable and unstable manifolds of the equilibrium points.

The previous systems are useful for principal component analysis, but they cannot be used immediately for minor component analysis. Of course, one possible approach might be to apply any of the above flows with C replaced by C⁻¹. Often this is not reasonable, though, as in most applications the covariance matrix C is implemented by recursive estimates, and one does not want to invert these recursive estimates online. Another alternative could be to put a negative sign in front of the equations. But this does not work either, as the minor component equilibrium point remains unstable. In the literature, therefore, other approaches to minor component analysis have been proposed [2, 3, 5], but without a complete convergence theory.¹ Moreover, a guiding geometric principle that allows for the systematic construction of minor component flows is missing. The key idea here seems to be an appropriate concept of duality between principal and minor component analysis.

Conjecture 1. Principal component flows are dual to minor component flows via an involution in matrix space R^{n×p} that establishes a bijective correspondence between solutions of PSA flows and MSA flows, respectively. If a PSA flow is actually a gradient flow for a cost function f, as is the case for (1), (2), and (4), then the corresponding dual MSA flow is a gradient flow for the Legendre dual cost function f* of f.

When implementing these differential equations on a computer, suitable discretizations are to be found. Since we are working in unconstrained Euclidean matrix space R^{n×p}, we consider Euler step discretizations.
Thus, e.g., for system (1), consider

X_{t+1} = X_t + s_t (I − X_t X_tᵀ) C X_t,    (5)

with suitably small step sizes s_t. Such Euler discretization schemes are widely used in the literature, but usually without explicit step-size selections that guarantee, for generic initial conditions, convergence to the p dominant orthonormal eigenvectors of C. A further challenge is to obtain step-size selections that achieve quadratic convergence rates (e.g., via a Newton-type approach).
¹It is remarked that the convergence proof in [5] appears flawed; they argue that because (d/dt) vec(Q) = G(t) vec(Q) for some matrix G(t) < 0, then Q → 0. However, counterexamples are known [15] where G(t) is strictly negative definite (with constant eigenvalues) yet Q diverges.

Problem 2. Develop a systematic convergence theory for discretisations of the flows, by specifying step-size selections that imply global as well as local quadratic convergence to the equilibria.

2 MOTIVATION AND HISTORY

Eigenvalue computations are ubiquitous in mathematics and the engineering sciences. In applications, the matrices whose eigenvectors are to be found are often defined in a recursive way, thus demanding recursive computational methods for eigendecomposition. Subspace tracking algorithms are widely used in neural networks, regression analysis, and signal processing applications for this purpose. Subspace tracking algorithms can be studied by replacing the stochastic, recursive algorithm, through an averaging procedure, by a nonlinear ordinary differential equation. Similarly, new subspace tracking algorithms can be developed by starting with a suitable ordinary differential equation and then converting it to a stochastic approximation algorithm [7]. Therefore, understanding the dynamics of such flows is paramount to the continuing development of recursive eigendecomposition techniques. The starting point for most of the current work in principal component analysis and subspace tracking has been Oja's system from neural network theory. Using a simple Hebbian law for a single perceptron with a linear activation function, Oja [9, 10] proposed to update the weights according to

X_{t+1} = X_t + s_t (I − X_t X_tᵀ) u_t u_tᵀ X_t.    (6)

Here X_t denotes the n × p weight matrix and u_t the input vector of the perceptron, respectively. By applying the ODE method to this system, Oja arrives at the differential equation (1). Here C = E(u_t u_tᵀ) is the covariance matrix. Similarly, the other flows, (2) and (4), have analogous interpretations. In [9, 11] it is shown for p = 1 that (1) is a PSA flow, i.e., it converges for generic initial conditions to a normalised dominant eigenvector of C.
In [11], the system (1) was studied for arbitrary values of p, and it was conjectured that (1) is a PSA flow. This conjecture was first proven in [18], assuming positive definiteness of C. Moreover, in [18, 4], explicit initial conditions, in terms of intersection dimensions for the dominant eigenspace with the initial subspace, were given such that the flow converges to a basis matrix of the p-dimensional dominant eigenspace. This is reminiscent of Schubert-type conditions in Grassmann manifolds. Although the Oja flow serves as a principal subspace method, it is not useful for principal component analysis because it does not converge in general to a basis of eigenvectors. Flows for principal component analysis such as (2) were first studied in [14, 12, 13, 17]. However, pointwise convergence to the equilibrium points was not established. In [16], a Lyapunov function for the Oja flow (1) was given, but without recognizing that (1) is actually a gradient flow. There have been confusing remarks in the literature claiming that (1) cannot be a gradient system, as the linearization is not a symmetric matrix. However, this is due to a misunderstanding of the concept of a gradient. In [20] it is shown that (2), and in particular (1), is actually a gradient flow for the cost function f(X) = (1/4) tr(AXNXᵀ)² − (1/2) tr(A²XD²Xᵀ) and a suitable Riemannian metric on R^{n×p}. Moreover, starting from any initial condition in R^{n×p}, pointwise convergence of the solutions to a basis of k independent eigenvectors of A is shown, together with a complete phase portrait analysis of the flow. First steps toward a phase portrait analysis of (4) are made in [8].

3 AVAILABLE RESULTS

In [12, 13, 17], the equilibrium points of (2) were computed together with a local stability analysis. Pointwise convergence of the system to the equilibria is established in [20], using an early result by Lojasiewicz on real analytic gradient flows.
Thus, these results together imply that (2), and hence (1), is a PSA flow. An analogous result for (4) is forthcoming; see [8] for first steps in this direction. Sufficient conditions for initial matrices in the Oja flow (1) to converge to a dominant subspace basis are given in [18, 4], but not for the other, unstable equilibria, nor for system (2). A complete characterization of the stable/unstable manifolds is currently unknown.

BIBLIOGRAPHY

[1] R. W. Brockett, "Dynamical systems that sort lists, diagonalise matrices, and solve linear programming problems," Linear Algebra Appl., 146:79–91, 1991.

[2] T. Chen, "Modified Oja's algorithms for principal subspace and minor subspace extraction," Neural Processing Letters, 5:105–110, April 1997.

[3] T. Chen, S. Amari, and Q. Lin, "A unified algorithm for principal and minor component extraction," Neural Networks, 11:385–390, 1998.

[4] T. Chen, Y. Hua, and W.-Y. Yan, "Global convergence of Oja's subspace algorithm for principal component extraction," IEEE Transactions on Neural Networks, 9(1):58–67, 1998.

[5] S. C. Douglas, S.-Y. Kung, and S. Amari, "A self-stabilized minor subspace rule," IEEE Signal Processing Letters, 5(12):328–330, December 1998.

[6] U. Helmke and J. B. Moore, Optimization and Dynamical Systems, Springer-Verlag, 1994.

[7] H. J. Kushner and G. G. Yin, Stochastic Approximation Algorithms and Applications, Springer, 1997.

[8] J. H. Manton, I. M. Y. Mareels, and S. Attallah, "An analysis of the fast subspace tracking algorithm NOja," In: IEEE Conference on Acoustics, Speech and Signal Processing, Orlando, Florida, May 2002.

[9] E. Oja, "A simplified neuron model as a principal component analyzer," Journal of Mathematical Biology, 15:267–273, 1982.

[10] E. Oja, "Neural networks, principal components, and subspaces," International Journal of Neural Systems, 1:61–68, 1989.

[11] E. Oja and J.
Karhunen, "On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix," Journal of Mathematical Analysis and Applications, 106:69–84, 1985.
[12] E. Oja, H. Ogawa, and J. Wangviwattana, "Principal component analysis by homogeneous neural networks, Part I: The weighted subspace criterion," IEICE Transactions on Information and Systems, 3:366–375, 1992.
[13] E. Oja, H. Ogawa, and J. Wangviwattana, "Principal component analysis by homogeneous neural networks, Part II: Analysis and extensions of the learning algorithms," IEICE Transactions on Information and Systems, 3:376–382, 1992.
[14] T. D. Sanger, "Optimal unsupervised learning in a single-layer linear feedforward network," Neural Networks, 2:459–473, 1989.
[15] J. L. Willems, Stability Theory of Dynamical Systems, Studies in Dynamical Systems, London: Nelson, 1970.
[16] J. L. Wyatt and I. M. Elfadel, "Time-domain solutions of Oja's equations," Neural Computation, 7:915–922, 1995.
[17] L. Xu, "Least mean square error recognition principle for self-organizing neural nets," Neural Networks, 6:627–648, 1993.
[18] W.-Y. Yan, U. Helmke, and J. B. Moore, "Global analysis of Oja's flow for neural networks," IEEE Transactions on Neural Networks, 5(5):674–683, September 1994.
[19] B. Yang, "Projection approximation subspace tracking," IEEE Transactions on Signal Processing, 43(1):95–107, January 1995.
[20] S. Yoshizawa, U. Helmke, and K. Starkov, "Convergence analysis for principal component flows," International Journal of Applied Mathematics and Computer Science, 11:223–236, 2001.

PART 4. DISCRETE EVENT, HYBRID SYSTEMS

Problem 4.1
L2 induced gains of switched linear systems

João P. Hespanha¹
Dept. of Electrical and Computer Engineering
University of California, Santa Barbara
USA
[email protected]

1 SWITCHED LINEAR SYSTEMS

In the 1999 collection of Open Problems in Mathematical Systems and Control Theory, we proposed the problem of computing input-output gains of switched linear systems. Recent developments provided new insights into this problem, leading to new questions. A switched linear system is defined by a parameterized family of realizations {(Ap, Bp, Cp, Dp) : p ∈ P}, together with a family of piecewise constant switching signals S := {σ : [0, ∞) → P}. Here we consider switched systems for which all the matrices Ap, p ∈ P, are Hurwitz. The corresponding switched system is represented by

ẋ = Aσ x + Bσ u,    y = Cσ x + Dσ u,    σ ∈ S,    (1)

and by a solution to (1) we mean a pair (x, σ) for which σ ∈ S and x is a solution to the time-varying system

ẋ = Aσ(t) x + Bσ(t) u,    y = Cσ(t) x + Dσ(t) u,    t ≥ 0.    (2)

Given a set of switching signals S, we define the L2 induced gain of (1) by

inf {γ ≥ 0 : ‖y‖2 ≤ γ‖u‖2, ∀u ∈ L2, x(0) = 0, σ ∈ S},

where y is computed along solutions to (1). The L2 induced gain of (1) can be viewed as a "worst-case" energy amplification gain for the switched system, over all possible inputs and switching signals, and is an important tool to study the performance of switched systems, as well as the stability of interconnections of switched systems.
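To make the definition concrete, here is a hedged numerical sketch with hypothetical scalar modes: for any one admissible pair (u, σ), the finite-horizon energy ratio ‖y‖2/‖u‖2 computed along a solution of (2) is a lower bound on the L2 induced gain over S.

```python
import numpy as np

# Hypothetical scalar modes (A_p, B_p, C_p, D_p), both Hurwitz (illustrative).
modes = {1: (-1.0, 1.0, 1.0, 0.0), 2: (-2.0, 3.0, 1.0, 0.0)}

def sigma(t):
    # A piecewise constant switching signal with dwell-time 1.0.
    return 1 if int(t) % 2 == 0 else 2

h, T = 1e-3, 20.0            # Euler step and truncated horizon
x = 0.0
u_energy = y_energy = 0.0
for k in range(int(T / h)):
    t = k * h
    A, B, C, D = modes[sigma(t)]
    u = np.exp(-0.5 * t)     # one particular L2 input
    y = C * x + D * u
    u_energy += u * u * h
    y_energy += y * y * h
    x += h * (A * x + B * u)

gain_lb = np.sqrt(y_energy / u_energy)
print(gain_lb)   # any such ratio lower-bounds the L2 induced gain over S
```

Maximizing such ratios over inputs and switching signals is exactly the (hard) computation the problem below asks about; the simulation only probes one candidate pair.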
1 This material is based upon work supported by the National Science Foundation under Grant No. ECS-0093762.

2 PROBLEM DESCRIPTION

We are interested here in families of switching signals for which consecutive discontinuities are separated by no less than a positive constant called the dwell-time. For a given τD > 0, we denote by S[τD] the set of piecewise constant switching signals with interval between consecutive discontinuities no smaller than τD. The general problem that we propose is the computation of the function g : [0, ∞) → [0, ∞] that maps each dwell-time τD to the L2 induced gain of (1) for the set of dwell-time switching signals S := S[τD]. Until recently, little more was known about g other than the following:

1. g is monotone decreasing;

2. g is bounded below by

g_static := sup_{p∈P} ‖Cp (sI − Ap)^{−1} Bp + Dp‖∞,

where ‖T‖∞ := sup_{Re[s]≥0} ‖T(s)‖ denotes the H∞ norm of a transfer matrix T. We recall that ‖T‖∞ is numerically equal to the L2 induced gain of any linear time-invariant system with transfer matrix T.

Item 1 is a trivial consequence of the fact that, given two dwell-times τD1 ≤ τD2, we have S[τD1] ⊃ S[τD2]. Item 2 is a consequence of the fact that every set S[τD], τD > 0, contains all the constant switching signals σ = p, p ∈ P. It was shown in [2] that the lower bound g_static is strict and in general there is a gap between g_static and

g_slow := lim_{τD→∞} g(τD).

This means that, even switching arbitrarily seldom, one may not be able to recover the L2 induced gains of the "unswitched" systems. In [2] a procedure was given to compute g_slow. Contrary to what had been conjectured, g_slow is realization-dependent and cannot be determined just from the transfer functions of the systems being switched. The function g thus looks roughly like the ones shown in figure 4.1.1, where (a) corresponds to a set of realizations that remains stable for arbitrarily fast switching and (b) to a set that can exhibit unstable behavior for sufficiently fast switching [3]. In (b), the scalar τmin denotes the smallest dwell-time for which instability can occur for some switching signal in S[τmin]. Several important basic questions remain open:

1. Under what conditions is g bounded? This is really a stability problem, whose general solution has been eluding researchers for a while now (cf. the survey paper [3] and references therein).

2. In case g is unbounded (case (b) in figure 4.1.1), how does one compute the position of the vertical asymptote? Or, equivalently, what is the smallest dwell-time τmin for which one can have instability?
Figure 4.1.1 L2 induced gain versus the dwell-time. [Plots omitted: both panels show g(τD) decreasing toward g_slow, above the level g_static; in panel (b) g has a vertical asymptote at τmin.]

3. Is g a convex function? Is it smooth (or even continuous)?

Even if direct computation of g proves to be difficult, answers to the previous questions may provide indirect methods to compute tight bounds for it. They also provide a better understanding of the trade-off between switching speed and induced gain. As far as we know, currently only very coarse upper bounds for g are available. These are obtained by computing a conservative upper bound τupper for τmin and then an upper bound for g that is valid for every dwell-time larger than τupper (cf., e.g., [4, 5]). These bounds do not really address the trade-off mentioned above.

BIBLIOGRAPHY

[1] J. P. Hespanha and A. S. Morse, "Input-output gains of switched linear systems," In: Open Problems in Mathematical Systems Theory and Control, V. D. Blondel, E. D. Sontag, M. Vidyasagar, and J. C. Willems, eds., London: Springer-Verlag, 1999.
[2] J. P. Hespanha, "Computation of root-mean-square gains of switched linear systems," presented at the Fifth Hybrid Systems: Computation and Control Workshop, Mar. 2002.
[3] D. Liberzon and A. S. Morse, "Basic problems in stability and design of switched systems," IEEE Contr. Syst. Mag., vol. 19, pp. 59–70, Oct. 1999.
[4] J. P. Hespanha and A. S. Morse, "Stability of switched systems with average dwell-time," In: Proc. of the 38th Conf. on Decision and Contr., pp. 2655–2660, Dec. 1999.
[5] G. Zhai, B. Hu, K. Yasuda, and A. N. Michel, "Disturbance attenuation properties of time-controlled switched systems," submitted for publication, 2001.

Problem 4.2
The state partitioning problem of quantised systems

Jan Lunze
Institute of Automation and Computer Control
Ruhr-University Bochum
D-44780 Bochum
Germany
[email protected]

1 DESCRIPTION OF THE PROBLEM

Consider a continuous system whose state can only be accessed through a quantizer. The quantizer is defined by a partition of the state space. The system generates an event if the system trajectory crosses the boundary between adjacent partitions. The problem concerns the prediction of the event sequence generated by the system for a given initial event. As the initial event does not define the initial system state unambiguously, but only restricts the initial state to a partition boundary, the bundle of all state trajectories that start on this partition boundary has to be considered when predicting the system behavior. The question to be answered is: under what conditions on the vector field of the system and the state partition is the event sequence unique?

In more detail, consider the continuous-variable system

ẋ = f(x(t)),    x(0) = x0    (1)

with the state x ∈ X ⊆ Rn. The vector field f satisfies a Lipschitz condition, so that eqn. (1) has, for all x0 ∈ X, a unique solution. The state space X is partitioned into N disjoint sets Qx(i) (i = 1, 2, ..., N) that satisfy the conditions
X = ∪_{i=1}^{N} Qx(i)  and  Qx(i) ∩ Qx(j) = ∅ for i ≠ j.

The set

Q = {Qx(i) : i = 1, 2, ..., N}

is called a state quantization. The quantized state is denoted by [x] and defined by

[x] = i  ⇔  x ∈ Qx(i).    (2)

The change of the quantized state is called an event, where the event e_ij occurs at time t̄ if the relations

[x(t̄ + δt)] = i  and  [x(t̄ − δt)] = j

hold for small δt > 0. Hence, at time t̄ the state x is on the boundary between the state partitions Qx(i) and Qx(j):

x(t̄) ∈ δQx(i) ∩ Qx(j),

where δQx denotes the hull of Qx. The system (1) together with the quantization Q is called the quantized system. For a given initial state x0 the system (1) generates, over the time interval [0, T], a unique state trajectory x(x0, t) and, hence, a unique event sequence

E = (e0, e1, ..., eH) = Quant(x(x0, t)),

which formally can be represented as the result of the operator Quant applied to the state trajectory. H is the number of events generated by the system within the time interval [0, T]. The following considerations concern only those initial events e0 for which the quantized system generates an event sequence with H > 1. If, instead of the initial state x0, only the initial event e0 = e_ij is given, the initial state is only known to lie on the boundary δQx(i) ∩ Qx(j) between the state partitions Qx(i) and Qx(j). Consequently, the bundle of trajectories starting in all these initial states has to be considered. These trajectories yield the set

E(e0) = {E = Quant(x(x0, t)) for x0 ∈ δQx(i) ∩ Qx(j)}

of event sequences. If the set E(e0) has more than one element, the quantized system is nondeterministic in the sense that knowledge of the initial event e0 is not sufficient to predict the future event sequence unambiguously. On the other hand, the quantized system is called deterministic if the set E(e0) is a singleton for all possible initial events e0.
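A toy discretization of the operator Quant, with a hypothetical vector field and partition chosen only for illustration, shows how an event sequence is read off a trajectory:

```python
import numpy as np

def f(x):
    # Hypothetical vector field: a weakly damped spiral (illustrative only).
    return np.array([-0.1 * x[0] - x[1], x[0] - 0.1 * x[1]])

def cell(x):
    # Rectangular quantization of R^2 into the four quadrants Qx(1)..Qx(4).
    return 1 + (x[0] < 0) + 2 * (x[1] < 0)

def quant_events(x0, h=1e-3, T=20.0):
    # Crude approximation of E = Quant(x(x0, .)): Euler-integrate the state
    # and record an event e_ij whenever [x] changes from j to i.
    x, prev, events = np.array(x0, float), cell(x0), []
    for _ in range(int(T / h)):
        x = x + h * f(x)
        cur = cell(x)
        if cur != prev:
            events.append((cur, prev))
            prev = cur
    return events

E = quant_events([1.0, 0.1])
print(E[:4])   # the spiral crosses the quadrant boundaries cyclically
```

For this particular field the trajectories cross every boundary transversally, so each initial state on a boundary produces the same cyclic event pattern; the problem below asks when this determinism holds in general.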
In order to define the events precisely, the state partition should satisfy the following assumptions:

A1. The trajectories do not lie in the hypersurfaces that represent the partition boundaries.
A2. The system cannot generate an infinite number of events in a finite time interval.
A3. No fixpoint of the vector field f lies on a partition boundary.

These assumptions can be satisfied by appropriately defining the state partitions for the given vector field f.

State partitioning problem: Find conditions under which the quantized system is deterministic.

This problem can be reformulated in two versions:

Problem A: For a given vector field f, find a partition of the state space such that the quantized system is deterministic.
Problem B: For a given vector field f and a state quantization Q, test whether the quantized system is deterministic.

Both formulations have their engineering relevance. While problem A concerns the practical situation in which a state partition has to be selected, problem B refers to the test of the determinism of the system for a given partition.

The problem stated so far is, possibly, too general in two respects. First, the test for the determinism of the system should be as simple as possible. For a given partition consisting of N disjoint sets, problem B can be solved by considering all trajectory bundles that start on all partition boundaries. Here, the characterization of classes of vector fields f and partitioning methods for which the complexity of the test is constant or grows only linearly with N is interesting. Second, for problem A it is interesting to find partitions that can be distinguished with only a few measurements. For example, rectangular partitions, which result from separate quantizations of all n state variables xi, are interesting from a practical viewpoint.

Nonautonomous systems.
The problem can be extended to nonautonomous quantized systems

ẋ = f(x(t), u(t)),    x(0) = x0    (3)
y(t) = g(x(t), u(t))    (4)

with input u ∈ U ⊆ Rm and output y ∈ Y ⊆ Rr. The functions f and g satisfy a Lipschitz condition, so that eqns. (3), (4) have, for all x0 ∈ X and u(t), a unique solution. The output quantizer is defined by a partition of the output space Y into the sets Qy(i), where the quantized output [y] is defined analogously to equation (2). The event sequence E is now defined in terms of the events that the output signal y generates. The system is considered with the quantized input [u]. An injector associates with each input a unique element of the finite discrete set U = {u1, u2, ..., uM} such that u(t) = ui if [u(t)] = i. Again, a change of the quantized input value is called an (input) event. It is assumed that the input and output events occur synchronously. This assumption fixes the time instants at which the input changes its value. It is motivated by the fact that in closed-loop systems a supervisor defines the quantized input at the same time instant at which an output event occurs. Here the state partitioning problem also includes defining an output partition and an input set U such that the quantized system is deterministic for all input sequences.

2 MOTIVATION AND HISTORY OF THE PROBLEM

The problem results from hybrid systems, whose simplest form is a continuous-variable system with discrete inputs. Many technological systems that are controlled by programmable logic controllers (PLCs) have a continuous state space and are controlled by discrete inputs. The contrast between the continuous state and the discrete input does not matter, because many systems are designed in such a way that any accessible input results in an unambiguous state or output event.
For example, the (simplified) state space of a lift has the state variables "vehicle position" and "door position," both of which are quantized: the vehicle position refers to the floor at which it stops, and the two discrete door positions are called "open" and "closed." For the performance of this system, only the events are important, which refer to the beginning and the end of the presence of the vehicle or the door in one of these positions. As the PLC can only switch on or off the motors of the vehicle or the door, and it is programmed so that the next command is given only after the next output event has occurred, every new input event is followed by exactly one output event (unless the system is faulty). So, the lift is a continuous-variable system (3), (4) with quantized input and output that is deterministic.

In this case, the solution to the state partitioning problem is simple. The determinism of the quantized system results from the fact that the system trajectories are parallel to the coordinate axes of the state space for all accessible inputs, and the quantization refers to separate intervals of both state variables. So, the end point of any movement initiated by a PLC command is a point in the state space, and every trajectory of the closed-loop system results in precisely one output event.

In a more general setting, continuous-variable systems are dealt with as quantized systems for process supervision tasks. Then the system is not designed to behave like a discrete-event system but has a continuous state space. The quantizers are introduced deliberately to reduce the information to be processed. For example, alarm messages show that a certain signal has exceeded a threshold. The state partitioning problem asks for a choice of discrete sensors such that the system behavior is deterministic.
As the third motivation for the state partitioning problem, hybrid systems theory concerns dynamical systems with continuous-variable and discrete-event subsystems. The interfaces between both parts are the quantizer and the injector introduced above, which transform the discrete output signal of the discrete subsystem into a real-valued input signal of the continuous subsystem and vice versa. The question arises under what conditions the overall hybrid system has a deterministic input-output behavior if only the discrete inputs and outputs of the discrete subsystem are considered. The main source of nondeterminism is the quantization of the signal space of the continuous subsystem, which again leads to the state partitioning problem.

In all these situations, the discrete behavior of a continuous system is considered. In the literature on fault diagnosis and verification of discrete control algorithms, the hybrid nature of the closed-loop system is removed by using a discrete-event representation of the quantized system. As in many practical situations the quantizers can be chosen, the state partitioning problem asks for guidelines for this selection. For a deterministic discrete behavior, a deterministic model can be used to describe the quantized system. If, however, the discrete behavior is nondeterministic, a nondeterministic model like a nondeterministic or stochastic automaton or a Petri net has to be used. Several ways of determining such models for a given quantized system have been elaborated recently ([3], [4], [6], [7], [8], [9]).

3 AVAILABLE RESULTS

The first result on the state partitioning problem concerns discrete-time systems (rather than continuous-time systems) with quantized state space. Reference [4] gives a necessary and sufficient condition for the determinism of the discrete behavior of linear autonomous systems with a state space partition that regularly decomposes each state variable into intervals of the same size.
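Such a regular partition, each state variable decomposed into intervals of equal width d, can be sketched as follows; the function name and the value of d are illustrative, not taken from [4]:

```python
import numpy as np

# Regular rectangular quantization: every state variable is split into
# intervals of equal width d, and [x] is the tuple of interval indices.
def quantized_state(x, d=0.5):
    return tuple(int(i) for i in np.floor(np.asarray(x, float) / d))

print(quantized_state([0.3, -0.2]))   # (0, -1)
print(quantized_state([0.6, 0.1]))    # (1, 0)
```

The partition cells are the axis-parallel rectangles on which this index tuple is constant, which is exactly the rectangular-cell setting discussed above.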
In [5] it has been shown how state partitions can be generated by mapping a given initial set Qx(1) by the model (1) used with reversed time axis. For the problem stated here, only preliminary results are available. If the system trajectories are, as in the lift example, parallel to the coordinate axes of the state space, and the quantization boundaries define rectangular cells whose axes are parallel to the coordinate axes, the discrete behavior is deterministic. This situation is encountered if, for example, the state variables are decoupled and controlled by separate inputs. Hence, the model can be decomposed into

ẋi = fi(xi, ui),    yi = gi(xi),

which corresponds again to the simple lift example. Another example is an undamped oscillator with a state partition that decomposes the state space into two half-planes. Then the fixpoint lies on the partition boundary (and, thus, violates assumption A3). However, the oscillator generates, for each initial state, a unique (alternating) event sequence.

Results on symbolic dynamics are closely related to the problem stated here (cf. [1], [2]). A bundle of trajectories (or flows) is considered, which generates a symbolic output if some partition boundary is crossed. The partition is called Markovian if all trajectories of the bundle cross the same partition boundary and, hence, generate the same symbol. In the terminology used there, the problem posed here asks how to find Markovian partitions.

BIBLIOGRAPHY

[1] A. Lasota, M. C. Mackey, Chaos, Fractals and Noise, Springer-Verlag, New York, 1994.
[2] D. Lind, B. Marcus, An Introduction to Symbolic Dynamics and Coding, Cambridge University Press, 1995.
[3] J. Lunze, "A Petri net approach to qualitative modelling of continuous dynamical systems," Systems Analysis, Modelling, Simulation, 9, pp. 885–903, 1992.
[4] J.
Lunze, "Qualitative modelling of linear dynamical systems with quantized state measurements," Automatica, 30, pp. 417–431, 1994.
[5] J. Lunze, B. Nixdorf, J. Schröder, "On the nondeterminism of discrete-event representations of continuous-variable systems," Automatica, 35, pp. 395–408, 1999.
[6] H. A. Preisig, M. J. H. Pijpers, M. Weiss, "A discrete modelling procedure for continuous processes based on state-discretisation," 2nd MATHMOD, Vienna, pp. 189–194, 1997.
[7] J. Raisch, S. O'Young, "A totally ordered set of discrete abstractions for a given hybrid or continuous system," pp. 342–360 In: P. Antsaklis, W. Kohn, A. Nerode, S. Sastry, eds., Hybrid Systems II, Springer-Verlag, Berlin, 1995.
[8] J. Schröder, Modelling, State Observation and Diagnosis of Quantised Systems, Springer-Verlag, Heidelberg, 2002.
[9] O. Stursberg, S. Kowalewski, S. Engell, "Generating timed discrete models," 2nd MATHMOD, Vienna, pp. 203–207, 1997.

Problem 4.3
Feedback control in flowshops

S. P. Sethi
School of Management
The University of Texas at Dallas
Box 830688, Mail Station JO4.7
Richardson, TX 75083
USA
[email protected]

Q. Zhang
Department of Mathematics
University of Georgia
Athens, GA 30602
USA
[email protected]

1 DESCRIPTION OF THE PROBLEM

Consider a manufacturing system producing a single finished product using m machines in tandem that are subject to breakdown and repair. We are given a finite-state Markov chain α(·) = (α1(·), ..., αm(·)) on a probability space (Ω, F, P), where αi(t), i = 1, ..., m, is the capacity of the ith machine at time t. We use ui(t) to denote the input rate to the ith machine, i = 1, ..., m, and xi(t) to denote the number of parts in the buffer between the ith and (i+1)th machines, i = 1, ..., m − 1. Finally, the surplus is denoted by xm(t). The dynamics of the system can then be written as follows:

ẋ(t) = Au(t) + Bz,    x(0) = x,    (1)

where z is the rate of demand, A is the m × m matrix with 1 on the diagonal and −1 on the superdiagonal,

A = [ 1 −1 0 ··· 0 ; 0 1 −1 ··· 0 ; ··· ; 0 0 0 ··· 1 ],

and B = (0, 0, ..., 0, −1)^T.

Since the number of parts in the internal buffers cannot be negative, we impose the state constraints xi(t) ≥ 0, i = 1, ..., m − 1. To formulate the problem precisely, let S = [0, ∞)^{m−1} × (−∞, ∞) ⊂ Rm denote the state constraint domain. For α = (α1, ..., αm) ≥ 0, let

U(α) = {u = (u1, ..., um) : 0 ≤ ui ≤ αi, i = 1, ..., m},

and for x ∈ S, let

U(x, α) = {u : u ∈ U(α); xi = 0 ⇒ ui − ui+1 ≥ 0, i = 1, ..., m − 1}.
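The structure of A and B can be checked numerically (data illustrative): componentwise, (1) reads ẋi = ui − ui+1 for the internal buffers and ẋm = um − z for the surplus.

```python
import numpy as np

# Build A (1 on the diagonal, -1 on the superdiagonal) and B = (0,...,0,-1)^T
# for m machines, matching the flowshop dynamics x' = A u + B z of (1).
def flowshop_matrices(m):
    A = np.eye(m) - np.diag(np.ones(m - 1), 1)
    B = np.zeros(m)
    B[-1] = -1.0
    return A, B

m, z = 3, 1.0                       # illustrative sizes and demand rate
A, B = flowshop_matrices(m)
u = np.array([2.0, 1.5, 1.0])       # input rates satisfying 0 <= u_i <= alpha_i
xdot = A @ u + B * z
print(xdot)                         # buffers fill at rate 0.5; surplus rate u_m - z = 0
```

Here the last machine produces exactly at the demand rate, so the surplus xm stays constant while both internal buffers grow.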
Let M = {α^1, ..., α^p} for a given integer p ≥ 1, where α^j = (α^j_1, ..., α^j_m), with α^j_i denoting the possible capacity states of the ith machine, i = 1, ..., m. Let the σ-algebra Ft = σ{α(s) : 0 ≤ s ≤ t}.

Definition 1: A control u(·) is admissible with respect to the initial state x ∈ S and α ∈ M if: (i) u(·) is {Ft}-adapted, (ii) u(t) ∈ U(α(t)) for all t ≥ 0, and (iii) the corresponding state process x(t) = (x1(t), ..., xm(t)) ∈ S for all t ≥ 0.

Let A(x, α) denote the set of admissible controls.

Definition 2: A function u(x, α) is called a feedback control if (i) for any given initial x, equation (1) has a unique solution; and (ii) u(·) = {u(t) = u(x(t), α(t)), t ≥ 0} ∈ A(x, α).

The problem is to find an admissible control u(·) that minimizes

J(x, α, u(·)) = E ∫_0^∞ e^{−ρt} G(x(t), u(t)) dt,    (2)

where G(x, u) defines the cost of surplus x and production u, α is the initial value of α(t), and ρ > 0 is the discount rate. We assume that G(x, u) ≥ 0 is jointly convex and locally Lipschitz. The value function is then defined as

v(x, α) = inf_{u(·)∈A(x,α)} J(x, α, u(·)).    (3)

The optimal control of this problem was considered in [1] using HJB equations with directional derivatives. It is shown that there exists a unique optimal control. In addition, a verification theorem associated with the HJB equations is obtained. However, these HJB equations are difficult to solve numerically, especially when the state space M is large. In this case, it is desirable to derive an approximate solution instead.

We consider the case when α(·) jumps rapidly. In particular, we assume α(t) = α^ε(t) ∈ M, t ≥ 0, to be a Markov chain with the generator

Q^ε = (1/ε) Q̃ + Q̂,

where Q̃ = (q̃_ij) and Q̂ = (q̂_ij) are generator matrices and Q̃ is weakly irreducible. Here ε is a small parameter. We use Pε to denote our control problem. As ε gets smaller and smaller, one expects that Pε approaches a limiting problem. To obtain such a limiting problem, let ν = (ν^1, ..., ν^p) denote the equilibrium distribution of Q̃. We consider the class of deterministic controls defined below.

Definition 3: For x ∈ S, let A0(x) denote the set of measurable controls

U(·) = (u^1(·), ..., u^p(·)) = ((u^1_1(·), ..., u^1_m(·)), ..., (u^p_1(·), ..., u^p_m(·)))
such that 0 ≤ u^j_i(t) ≤ α^j_i for all t ≥ 0, i = 1, ..., m and j = 1, ..., p, and the corresponding solutions x(·) of the system

ẋ(t) = A Σ_{j=1}^p ν^j u^j(t) + Bz,    x(0) = x

satisfy x(t) ∈ S for all t ≥ 0. The objective of the limiting problem is to choose a control U(·) ∈ A0(x) that minimizes
J^0(x, U(·)) = ∫_0^∞ e^{−ρt} Σ_{j=1}^p ν^j G(x(t), u^j(t)) dt.

We use P0 to denote the limiting problem and v^0(x) the corresponding value function.

2 MOTIVATION AND HISTORY OF THE PROBLEM

It is shown in [1] that the value function v^ε(x, α) converges to v^0(x) as ε → 0. The limiting problem is much easier to solve. The goal is to use the solution of the limiting problem to construct a control for the original problem that is nearly optimal.

3 AVAILABLE RESULTS

The idea is to use an optimal (or a near-optimal) control of the limiting problem to construct a control for the original problem Pε. The main difficulty is how to construct an admissible control for Pε in a way that still guarantees asymptotic optimality as ε goes to zero. Partial results were obtained using a "lifting" and "modification" approach. This was applied to open-loop controls; see [1]. Construction of an asymptotically optimal feedback control remains open. A resolution of this problem would perhaps also apply to the more complicated jobshop considered in [1].

BIBLIOGRAPHY

[1] S. P. Sethi and Q. Zhang, Hierarchical Decision Making in Stochastic Manufacturing Systems, Birkhäuser, Boston, 1994.

Problem 4.4
Decentralized control with communication between controllers

Jan H. van Schuppen
CWI
P.O. Box 94079, 1090 GB Amsterdam
The Netherlands
[email protected]

1 DESCRIPTION OF THE PROBLEM

Problem 1: Decentralized control with communication between controllers. Consider a control system with inputs from r different controllers. Each controller has partial observations of the system, and the partial observations of any two controllers are different. The controllers are allowed to exchange on-line information on their partial observations, state estimates, or input values, but there are constraints on the communication channels between each tuple of controllers. In addition, a control objective is specified. The problem is to synthesize r controllers and a communication protocol for each directed tuple of controllers such that, when the controllers all use their received communications, the control objective is met as well as possible.

The problem can be considered for a discrete-event system in the form of a generator, for a timed discrete-event system, for a hybrid system, for a finite-dimensional linear system, for a finite-dimensional Gaussian system, etc. In each case, the communication constraint has to be chosen, and a formulation has to be proposed on how to integrate the received communications into the controller.

Remarks on the problem:

(1) The constraints on the communication channels between controllers are essential to the problem. Without them, every controller communicates all of his or her partial observations to all other controllers, and one obtains a control problem with a centralized controller, albeit one where each controller carries out the same control computations.

(2) The complexity of the problem is large; for control of discrete-event systems it is likely to be undecidable. Therefore, the problem formulation has to be restricted.
Note that the problem is analogous to human communication in groups, firms, and organizations, and that the communication problems in such organizations are effectively solved on a daily basis. Yet there is scope for a fundamental study of this problem, also for engineering control systems. The approach to the problem is best focused on the formulation and analysis of simple control laws and on the formulation of necessary conditions.

(3) The basic underlying problem seems to be: what information of a controller is so essential with regard to the control purpose that it has to be communicated to other controllers? A system-theoretic approach is suitable for this.

(4) The problem will also be useful for the development of hierarchical models. The information to be communicated has to be dealt with at a global level; the information that does not need to be communicated can be treated at the local level.

To assist the reader with the understanding of the problem, the special cases for discrete-event systems and for finite-dimensional linear systems are stated below.

Problem 2: Decentralized control of a discrete-event system with communication between supervisors. Consider a discrete-event system in the form of a generator and r ∈ Z+ supervisors:

G = (Q, E, f, q0), with Q the state set, q0 ∈ Q the initial state, E the event set, and f : Q × E → Q the transition function;
L(G) = {s ∈ E* | f(q0, s) is defined};
∀k ∈ Zr = {1, 2, ..., r}, a partition E = Ec,k ∪ Euc,k, and Ecp,k = {Ee ⊆ E | Euc,k ⊆ Ee};
∀k ∈ Zr, a partition E = Eo,k ∪ Euo,k, with the projection pk : E* → E*_{o,k};
an event is enabled if it is enabled by all supervisors;
{vk : pk(L(G)) → Ecp,k, ∀k ∈ Zr}, the set of supervisors based on partial observations;
Lr, La ⊆ L(G), the required and admissible languages, respectively.

The problem, or better, a variant of it, is to determine a set of subsets of the event set that represent the events to be communicated by each supervisor to the other supervisors,

∀(i, j) ∈ Zr × Zr, Eo,i,j ⊆ Eo,i, pi,j : E* → E*_{o,i,j},

and a set of supervisors based on partial observations and on communications,

{vk(pk(s), {pj,k(s), ∀j ∈ Zr \ {k}}) → Ecp,k, ∀k ∈ Zr},

such that the closed-loop language L(v1 ∧ ... ∧ vr /G) satisfies

Lr ⊆ L(v1 ∧ ... ∧ vr /G) ⊆ La,

and the controlled system is nonblocking.

Problem 3: Decentralized control of a finite-dimensional linear system with communication between controllers. Consider a finite-dimensional linear system with r ∈ Z+ input signals and r output signals,
ẋ(t) = Ax(t) + Σ_{k=1}^r Bk uk(t),    x(t0) = x0,
yj(t) = Cj x(t) + Σ_{k=1}^r Dj,k uk(t),    ∀j ∈ Zr = {1, 2, ..., r},
ys,j(t) = Cj(vs,j(t)) x(t),

where ys,j represents the communication signal from Controller s to Controller j, where vs,j is the control input of Controller s for the communication to Controller j, and where the dimensions of the state, the input signals, the output signals, and of the matrices have been omitted. The ith controller observes output yi and provides to the system input ui. Suppose that Controller 2 communicates some components of his observed output signal to Controller 1. Can the system then be stabilized? How much can a quadratic cost be lowered by doing so? The problem becomes different if the communications from Controller 2 to Controller 1 are not continuous but are spaced periodically in time. How should the period be chosen for stability or for cost minimization? The period will have to take account of the time constants achievable by feedback in the system. A further restriction on the communication channel is to impose that messages can carry at most a finite number of bits. Then quantization is required. For recent work on quantization in the context of control, see [17].

2 MOTIVATION

The problem is motivated by control of networks: for example, communication networks, telephone networks, traffic networks, firms consisting of many divisions, etc. Control of traffic on the internet is a concrete example. In such networks, there are local controllers at the nodes of the network, each having local information about the state of the network but no global information. Decentralized control is used because it is technologically demanding and economically expensive to convey all observed information to other controllers. Yet it is often possible to communicate information at a cost. This viewpoint has not been considered much in control theory. In the trade-off, the economic costs of communication have to be compared with the gains for the control objectives.
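The finite-bit restriction on messages mentioned above can be sketched with a uniform b-bit quantizer over a known signal range; the function name and the range are illustrative assumptions, not part of the problem statement.

```python
import numpy as np

# Uniform b-bit quantizer on [-xmax, xmax]: the transmitted message is the
# cell index (b bits); the receiving controller reconstructs the cell midpoint.
def uniform_quantize(x, b, xmax):
    step = 2.0 * xmax / (2 ** b)
    x = float(np.clip(x, -xmax, xmax - 1e-12))
    idx = int((x + xmax) // step)          # symbol sent over the channel
    return (idx + 0.5) * step - xmax       # value reconstructed at the receiver

for b in (2, 4, 8):
    err = abs(uniform_quantize(0.3, b, xmax=1.0) - 0.3)
    print(b, err)   # reconstruction error is at most xmax / 2**b
```

Each extra bit halves the worst-case reconstruction error, which is the basic currency in the trade-off between communication cost and control performance discussed above.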
This was already remarked upon in the context of team theory a long time ago, but it was not used in control theory until recently. Current technological developments make communication relatively cheap, and therefore the trade-off has shifted toward the use of more communication.

3 HISTORY OF THE PROBLEM

The decentralized control problem with communication between supervisors was formulated by the author of this paper around 1995. The idea for this problem is older, though there are no written records. With Kai C. Wong a necessary and sufficient condition was derived (see [20]) for the case of two controllers with asymmetric communication. The aspect of the problem that asks for the minimal information to be communicated was not solved in that paper. Subsequent research has been carried out by many researchers in control of discrete-event systems, including George Barrett, Rene Boel, Rami Debouk, Stephane Lafortune, Laurie Ricker, Karen Rudie, and Demos Teneketzis; see [1, 2, 3, 4, 5, 11, 12, 13, 14, 15, 16, 19]. Besides the control problem, the corresponding problem for failure diagnosis has also been analyzed; see [6, 7, 8, 9]. The problem for failure diagnosis is simpler than that for control, because the diagnoser does not influence the future observations through the input. The problem for timed discrete-event systems has also been formulated, because in communication networks time delays due to communication need to be taken into account. There are relations of the problem with team theory; see [10]. There are also relations with the asymptotic agreement problem in distributed estimation; see [18]. Finally, there are relations of the problem to graph models and Bayesian belief networks, where computations for large-scale systems are carried out in a decentralized way.

4 APPROACH

Suggestions for the solution of the problem follow. Approaches are: (1) exploration of simple algorithms; (2) development of fundamental properties of control laws.
An example of a simple algorithm is the IEEE 802.11 protocol for wireless communication. The protocol prescribes when stations may transmit and when they may not. All stations compete with each other for the available broadcasting time on a particular frequency. The protocol has no theoretical analysis and was not designed via a control synthesis procedure. Yet it is a beautiful example of a decentralized control law with communication between supervisors. The alternating bit protocol is another example. In a recent paper, S. Morse has analyzed another algorithm for decentralized control with communication, based on a model for a school of fish. A more fundamental study will have to be directed at structural properties. Decentralized control theory is based on the concept of Nash equilibrium from game theory and on the concept of person-by-person optimality from team theory. The computation of an equilibrium is difficult because it is the solution of a fixed-point equation in function space. However, properties of the control law may be derived from the equilibrium equation, as is routinely done for optimal control problems. Consider, then, the problem from the viewpoint of a particular controller: it regards the plant together with the other, fixed controllers as the combined system. The controller then faces the problem of designing a control law for this combined system. However, due to communication with the other supervisors, it can in addition select components of the state vector of the combined system for its own observation process. A question then is which components to select. This approach leads to a set of equations which, combined with those for the other controllers, have to be solved. Special cases whose solution may point to generalizations are the case of two controllers with asymmetric communication and the case of three controllers.
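The person-by-person fixed point described above can be made concrete on a toy problem. The following sketch is illustrative and not part of the problem statement: two controllers share a scalar discrete-time plant with full state information, and each repeatedly computes its scalar LQ best response while the other controller's gain is frozen. All function names and numerical values below are my own assumptions.

```python
def scalar_lqr(a, b, q, r, iters=200):
    """Solve the scalar discrete-time Riccati equation
    p = q + a^2 p - (a b p)^2 / (r + b^2 p) by fixed-point iteration
    and return the optimal static gain k (control u = -k x)."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return b * p * a / (r + b * b * p)

def person_by_person(a, b1, b2, q, r1, r2, rounds=50):
    """Alternate best responses: each controller optimizes its own gain
    with the other controller's gain held fixed. A fixed point of this
    iteration is a person-by-person optimal (Nash) pair of gains."""
    k1 = k2 = 0.0
    for _ in range(rounds):
        k1 = scalar_lqr(a - b2 * k2, b1, q, r1)  # best response of controller 1
        k2 = scalar_lqr(a - b1 * k1, b2, q, r2)  # best response of controller 2
    return k1, k2

k1, k2 = person_by_person(a=1.2, b1=1.0, b2=1.0, q=1.0, r1=1.0, r2=1.0)
```

At a fixed point of the iteration, neither controller can lower its own cost by changing only its own gain, which is precisely the person-by-person optimality condition. The communication aspect of the open problem, i.e., which state components a controller should request from the other one, is deliberately not modeled in this toy sketch.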
For larger numbers of controllers, graph theory may be exploited, but it is likely that simple algorithms will carry the day. Constraints can be formulated in terms of information-like quantities such as the information rate, but this seems most appropriate for decentralized control of stochastic systems. Constraints can also be based on complexity theory as developed in computer science, where computations are counted. This can be extended to counting the bits of information communicated.

BIBLIOGRAPHY

[1] G. Barrett, Modeling, Analysis and Control of Centralized and Decentralized Logical Discrete-Event Systems, Ph.D. thesis, University of Michigan, Ann Arbor, 2000.

[2] G. Barrett and S. Lafortune, "A novel framework for decentralized supervisory control with communication," In: Proc. 1998 IEEE Systems, Man, and Cybernetics Conference, New York, IEEE Press, 1998.

[3] G. Barrett and S. Lafortune, "On the synthesis of communicating controllers with decentralized information structures for discrete-event systems," In: Proceedings IEEE Conference on Decision and Control, pp. 3281–3286, New York, IEEE Press, 1998.

[4] G. Barrett and S. Lafortune, "Some issues concerning decentralized supervisory control with communication," In: Proceedings 38th IEEE Conference on Decision and Control, pp. 2230–2236, New York, IEEE Press, 1999.

[5] G. Barrett and S. Lafortune, "Decentralized supervisory control with communicating controllers," IEEE Trans. Automatic Control, 45:1620–1638, 2000.

[6] R. K. Boel and J. H. van Schuppen, "Decentralized failure diagnosis for discrete-event systems with costly communication between diagnosers," In: Proceedings of International Workshop on Discrete Event Systems (WODES2002), pp. 175–181, Los Alamitos, IEEE Computer Society, 2002.

[7] R. Debouk, Failure Diagnosis of Decentralized Discrete-Event Systems, Ph.D. thesis, University of Michigan, Ann Arbor, 2000.

[8] R. Debouk, S. Lafortune, and D.
Teneketzis, "Coordinated decentralized protocols for failure diagnosis of discrete-event systems," Discrete Event Dynamic Systems, 10:33–86, 2000.

[9] R. Debouk, S. Lafortune, and D. Teneketzis, "Coordinated decentralized protocols for failure diagnosis of discrete event systems," Report CGR9717, College of Engineering, University of Michigan, Ann Arbor, 1998.

[10] R. Radner, "Allocation of a scarce resource under uncertainty: An example of a team," In: C. B. McGuire and R. Radner, eds., Decision and Organization, pp. 217–236, North-Holland, Amsterdam, 1972.

[11] S. Ricker and K. Rudie, "Know means no: Incorporating knowledge into decentralized discrete-event control," In: Proc. 1997 American Control Conference, 1997.

[12] S. L. Ricker, Knowledge and Communication in Decentralized Discrete-Event Control, Ph.D. thesis, Queen's University, Department of Computing and Information Science, August 1999.

[13] S. L. Ricker and G. Barrett, "Decentralized supervisory control with single-bit communications," In: Proceedings of the American Control Conference (ACC01), pp. 965–966, 2001.

[14] S. L. Ricker and K. Rudie, "Incorporating communication and knowledge into decentralized discrete-event systems," In: Proceedings 38th IEEE Conference on Decision and Control, pp. 1326–1332, New York, IEEE Press, 1999.

[15] S. L. Ricker and J. H. van Schuppen, "Asynchronous communication in timed discrete event systems," In: Proceedings of the American Control Conference (ACC2001), pp. 305–306, 2001.

[16] S. L. Ricker and J. H. van Schuppen, "Decentralized failure diagnosis with asynchronous communication between supervisors," In: Proceedings of the European Control Conference (ECC2001), pp. 1002–1006, 2001.

[17] S. C. Tatikonda, Control under Communication Constraints, Ph.D. thesis, Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, 2000.

[18] D. Teneketzis and P.
Varaiya, "Consensus in distributed estimation with inconsistent beliefs," Systems & Control Letters, 4:217–221, 1984.

[19] J. H. van Schuppen, "Decentralized supervisory control with information structures," In: Proceedings International Workshop on Discrete Event Systems (WODES98), pp. 36–41, London, IEE, 1998.

[20] K. C. Wong and J. H. van Schuppen, "Decentralized supervisory control of discrete-event systems with communication," In: Proceedings International Workshop on Discrete Event Systems 1996 (WODES96), pp. 284–289, London, IEE, 1996.

PART 5

Distributed Parameter Systems

Problem 5.1
Infinite dimensional backstepping for nonlinear parabolic PDEs

Andras Balogh and Miroslav Krstic
Department of MAE
University of California at San Diego
La Jolla, CA 92093–0411
USA
[email protected] and [email protected]

1 INTRODUCTION

This note explores an approach to global stabilization of boundary controlled nonlinear PDEs by a technique inspired by finite dimensional backstepping/feedback linearization. Solution of the problem presented herein would be of enormous significance, because these are the only truly constructive and systematic techniques in finite dimension. We consider nonlinear parabolic PDEs of the form

$$u_t(x,t) = \varepsilon u_{xx}(x,t) + f(u(x,t)), \qquad x \in (0,1),\ t > 0, \tag{1}$$

with boundary conditions

$$u(0,t) = 0, \qquad u(1,t) = \alpha_1(u), \tag{2}$$

initial condition

$$u(x,0) = u_0(x), \qquad x \in [0,1], \tag{3}$$

and under the assumption

$$\varepsilon > 0, \qquad f \in C^{\infty}(\mathbb{R}).^{1} \tag{4}$$

The task is to derive a nonlinear (feedback) functional $\alpha_1 : C([0,1]) \to \mathbb{R}$ that stabilizes the trivial solution $u(x,t) \equiv 0$ in an appropriate way. An infinite dimensional version of backstepping was introduced in [2] that solves
$^{1}$The smoothness requirement is explained after formula (18).

this problem for $f(u) = \lambda u$ with $\lambda > 0$ arbitrarily large. Superlinear nonlinearities can imply finite time blow-up in the uncontrolled case [6, 7, 9, 10]. However, numerical results in a series of papers by Boskovic and Krstic [3, 4, 5] show promise for the applicability of infinite dimensional backstepping to nonlinear problems, at least for finite-grid discretizations. In this note, we present the open problem of convergence of nonlinear backstepping schemes as the discretization grid becomes infinitely refined. Note that this problem is different from the question of controllability [1, 8].

2 BACKSTEPPING TRANSFORMATION

We look for a coordinate transformation of the form

$$w = u - \alpha(u), \tag{5}$$

where $\alpha : C([0,1]) \to C([0,1])$ is a nonlinear operator to be found, that transforms system (1)–(3) into the exponentially stable system

$$w_t(x,t) = \varepsilon w_{xx}(x,t), \qquad x \in (0,1),\ t > 0, \tag{6}$$

with boundary conditions

$$w(0,t) = 0, \tag{7}$$
$$w(1,t) = 0. \tag{8}$$

Once transformation (5) is found, it is realized through the stabilizing boundary feedback control (3) with $\alpha_1(u) = \alpha(u)|_{x=1}$. In order to find (5) in a constructive way, we first discretize (1)–(3) in space, then we develop a stabilizing coordinate transformation for the semi-discretized system. The main question, of showing that the discretization converges to an infinite dimensional transformation, is open in the case of nonlinear functions $f(u)$. We define $u^n_i = u(ih, t)$ for $i = 0, 1, \ldots, n+1$, $n = 1, 2, \ldots$, where $h = 1/(n+1)$, and the finite difference discretization of the rest of the functions is defined the same way. The discretized version of coordinate transformation (5) now has the form
$$w^n = (I - \alpha^n)(u^n), \qquad n = 1, 2, \ldots, \tag{9}$$

where $\alpha^n$ is an $n$-vector valued function of $u^n$ and
$$w^n = \left(w^n_0, w^n_1, \ldots, w^n_{n+1}\right)^T, \tag{10}$$
$$u^n = \left(u^n_0, u^n_1, \ldots, u^n_{n+1}\right)^T. \tag{11}$$

The discretized form of system (1)–(3) is

$$u^n_0 = 0, \tag{12}$$
$$\dot u^n_i = \varepsilon\, \frac{u^n_{i+1} - 2u^n_i + u^n_{i-1}}{h^2} + f(u^n_i), \qquad i = 1, \ldots, n, \tag{13}$$
$$u^n_{n+1} = \alpha^n_n(u^n_1, u^n_2, \ldots, u^n_n), \tag{14}$$

with the convention $\alpha^n_0 = 0$. The discretized form of system (6)–(8) is

$$w^n_0 = 0, \tag{15}$$
$$\dot w^n_i = \varepsilon\, \frac{w^n_{i+1} - 2w^n_i + w^n_{i-1}}{h^2}, \qquad i = 1, 2, \ldots, n, \tag{16}$$
$$w^n_{n+1} = 0. \tag{17}$$

Combining (16), (9), and (13), and solving for $\alpha^n_i$, we obtain the final form of the recursive formula for the transformation:

$$\alpha^n_i = -\frac{h^2}{\varepsilon} f(u^n_i) + 2\alpha^n_{i-1} - \alpha^n_{i-2} + \sum_{j=1}^{i-1} \frac{\partial \alpha^n_{i-1}}{\partial u^n_j} \left( u^n_{j+1} - 2u^n_j + u^n_{j-1} + \frac{h^2}{\varepsilon} f(u^n_j) \right) \tag{18}$$

for $i = 1, 2, \ldots, n$. This recursive formula contains the function $f(u)$ (which is nonlinear in general) and involves differentiation. As a result, as $n \to \infty$, infinite smoothness of the function $f$ is eventually required. A few values of $\alpha^n_i$:
$$\alpha^n_0 = 0, \tag{19}$$
$$\alpha^n_1 = -\frac{h^2}{\varepsilon} f(u^n_1), \tag{20}$$
$$\alpha^n_2 = -\frac{h^2}{\varepsilon} f(u^n_2) - 2\frac{h^2}{\varepsilon} f(u^n_1) - \frac{h^2}{\varepsilon} f'(u^n_1)\left( u^n_2 - 2u^n_1 + \frac{h^2}{\varepsilon} f(u^n_1) \right), \tag{21}$$
$$\begin{aligned} \alpha^n_3 = {} & -\frac{h^2}{\varepsilon} f(u^n_3) - 2\frac{h^2}{\varepsilon} f(u^n_2) - 3\frac{h^2}{\varepsilon} f(u^n_1) - 2\frac{h^2}{\varepsilon} f'(u^n_1)\left( u^n_2 - 2u^n_1 + \frac{h^2}{\varepsilon} f(u^n_1) \right) \\ & + \left( -\frac{h^2}{\varepsilon} f''(u^n_1)\left( u^n_2 - 2u^n_1 + \frac{h^2}{\varepsilon} f(u^n_1) \right) - \left( \frac{h^2}{\varepsilon} f'(u^n_1) \right)^{2} \right) \cdot \left( u^n_2 - 2u^n_1 + \frac{h^2}{\varepsilon} f(u^n_1) \right) \\ & - \left( \frac{h^2}{\varepsilon} f'(u^n_2) + \frac{h^2}{\varepsilon} f'(u^n_1) \right) \left( u^n_3 - 2u^n_2 + u^n_1 + \frac{h^2}{\varepsilon} f(u^n_2) \right). \end{aligned} \tag{22}$$

3 OPEN PROBLEM

Using the above backstepping approach, the problem of finding the coordinate transformation (5) and the corresponding stabilizing boundary control (3) requires two steps.

1. Find assumptions on the nonlinear function $f$ that ensure the convergence of the discretized coordinate transformation (18) to a (nonlinear) operator $\alpha$, in order to obtain the feedback boundary control law (5).

2. Establish the bounded invertibility of the operator $I - \alpha$ (see equation (5)) in appropriate function spaces.

4 KNOWN LINEAR RESULT

For the linear case $f(u) = \lambda u$ we have the following result [2].

Theorem 1: For any $\lambda \in \mathbb{R}$ and $\varepsilon, c > 0$ there exists a function $k_1 \in L^{\infty}(0,1)$ such that for any $u_0 \in L^{\infty}(0,1)$ the unique classical solution $u(x,t) \in C^1((0,\infty); C^2(0,1))$ of system (1)–(3) with boundary feedback control
$$\alpha_1(u) = \int_0^1 k_1(\xi)\, u(\xi, t)\, d\xi \tag{23}$$

is exponentially stable in the $L^2(0,1)$ and maximum norms with decay rate $c$. The precise statements of the stability properties are the following: there exists a positive constant $M$² such that for all $t > 0$

$$\| u(t) \| \leq M \| u_0 \|\, e^{-ct} \tag{24}$$

and

$$\max_{x \in [0,1]} |u(t,x)| \leq M \sup_{x \in [0,1]} |u_0(x)|\, e^{-ct}. \tag{25}$$

In this linear case, the transformation is a bounded linear operator $\alpha : L^1 \to L^1$ of the form $\alpha(u) = \int_0^x k(x,\xi)\, u(\xi)\, d\xi$ with integral kernel $k \in L^{\infty}([0,\infty] \times [0,\infty])$. The boundary control is $\alpha_1(u) = \int_0^1 k(1,\xi)\, u(\xi)\, d\xi$. The explicit form of $\alpha^n_i$ is
$$\alpha^n_i = \sum_{j=1}^{i} k^n_{i,j}\, u^n_j, \qquad i = 1, \ldots, n, \tag{26}$$

where

$$k^n_{i,i-j} = -\binom{i}{j+1} \left( \frac{c+\lambda}{\varepsilon (n+1)^2} \right)^{j+1} - (i-j) \sum_{l=1}^{[j/2]} \frac{1}{l} \binom{j-l}{l-1} \binom{i-l}{j-2l} \left( \frac{c+\lambda}{\varepsilon (n+1)^2} \right)^{j-2l+1} \tag{27}$$

for $i = 1, \ldots, n$, $j = 0, 1, \ldots, i-1$.
²$M$ grows with $c$, $\lambda$, and $1/\varepsilon$.

5 NUMERICAL RESULTS

In the nonlinear case, we need at least the uniform boundedness of the sequences $\{\alpha^n_i(u)\}_{i=1}^{n} \subset \mathbb{R}$ as $n \to \infty$ for all $u$ from some reasonable function space. We used Mathematica and MuPAD to calculate $\alpha^n_n(u)$ symbolically, using the recursive relationship (18), and then to evaluate it for several different functions $u(x)$ and for different nonlinear functions $f(u)$. Since we found no qualitative difference between results corresponding to functions $u(x)$ of the same size, we present here only the results for functions of the form $u(x) = p \sin(\pi x)$ with different values of $p$. The symbolic calculation becomes extremely demanding computationally for increasing values of $n$. We were able to evaluate $\alpha^n_n$ for values up to $n = 9$ or $n = 10$, depending on the complexity of the nonlinear function $f(u)$. The results are collected below in two tables.

1. In the case of $f(u) = u \ln(1 + u^2)$, we have superlinearity, $f(u)/u \to \infty$ as $u \to \infty$, but the condition $\int_b^{\infty} \frac{du}{f(u)} < \infty$, which is necessary for finite time blow-up (see, e.g., [9]), is not satisfied for any $b > 0$. Also, the zero solution of equation (1) is locally stable. The value $p = 1.5$ corresponds to an initial value for which the open-loop solution converges to zero. As the corresponding column in the table below shows, the control value $\alpha^n_n$ converges to a finite value. For $p = 2$ the uncontrolled solution of (1) does not converge to zero, but $\alpha^n_n$ still converges to a finite value. For larger values of $p$, the convergence is not obvious from the calculations, but the concavity of the function graphs (decreasing rates of change in the values of $\alpha^n_n$) suggests that we have convergence for increasing values of $n$, with a decreasing rate of convergence as the size of the initial function is increased.
$\alpha^n_n$ for $f(u) = u \ln(1 + u^2)$:

n    p = 1.5    p = 2    p = 5     p = 10
1    −4.4       −8.0     −40.7     −115.3
2    −4.5       −11.0    −97.4     −356.2
3    −4.4       −11.6    −141.1    −615.1
4    −4.3       −12.3    −178.4    −867.1
5    −4.3       −12.6    −209.0    −1099.1
6    −4.2       −12.8    −233.4    −1301.5
7    −4.2       −13.0    −252.5    −1472.6
8    −4.2       −13.1    −267.6    −1615.4
9    −4.2       −13.2    −279.5    −1733.6

2. For $f(u) = u^2$, solutions corresponding to large initial data exhibit finite time blow-up. In fact, all of the present $p$ values correspond to initial functions that result in finite time blow-up. However, for $p = 1.5$ and $p = 2$, the control values seem to converge, as the table below shows. For larger values ($p = 5$ and $p = 10$), the numerical calculations suggest fast divergence.
$\alpha^n_n$ for $f(u) = u^2$:

n     p = 1.5   p = 2    p = 5      p = 10
1     −5.6      −10.0    −62.5      −250.0
2     −7.2      −16.1    −221.3     −1687.0
3     −7.6      −18.6    −402.0     −4974.2
4     −8.0      −21.1    −637.3     −11202.1
5     −8.2      −22.6    −926.7     −22798.3
6     −8.3      −23.8    −1244.8    −41999.6
7     −8.3      −24.6    −1578.1    −70862.2
8     −8.4      −25.3    −1915.4    −111498.4
9     −8.4      −25.8    −2247.4    −165709.2
10    −8.5      −26.1    −2567.5    −234811.7

BIBLIOGRAPHY

[1] S. Anita and V. Barbu, "Null controllability of nonlinear convective heat equations," ESAIM COCV, vol. 5, pp. 157–173, 2000.

[2] A. Balogh and M. Krstic, "Infinite dimensional backstepping-style feedback transformations for a heat equation with an arbitrary level of instability," European Journal of Control, 2002.

[3] D. Boskovic and M. Krstic, "Nonlinear stabilization of a thermal convection loop by state feedback," Automatica, vol. 37, pp. 2033–2040, 2001.

[4] D. Boskovic and M. Krstic, "Backstepping control of chemical tubular reactors," Computers and Chemical Engineering, vol. 26, pp. 1077–1085, 2002.

[5] D. Boskovic and M. Krstic, "Stabilization of a solid propellant rocket instability by state feedback," Int. J. Robust and Nonlinear Control, in press.

[6] L. A. Caffarelli and A. Friedman, "Blow-up of solutions of nonlinear heat equations," J. Math. Anal. Appl., 129, pp. 409–419, 1988.

[7] M. Chipot, M. Fila, and P. Quittner, "Stationary solutions, blow up and convergence to stationary solutions for semilinear parabolic equations with nonlinear boundary conditions," Acta Math. Univ. Comenianae, vol. LX, no. 1, pp. 35–103, 1991.

[8] E. Fernandez-Cara, "Null controllability of the semilinear heat equation," ESAIM COCV, vol. 2, pp. 87–103, 1997.

[9] A. A. Lacey, "Mathematical analysis of thermal runaway for spatially inhomogeneous reactions," SIAM J. Appl. Math., vol. 43, no. 6, pp. 1350–1366, 1983.

[10] S. Seo, "Blowup of solutions to heat equations with nonlocal boundary conditions," Kobe J. Math., vol. 13, pp. 123–132, 1996.

Problem 5.2
The dynamical Lamé system with boundary control: on the structure of reachable sets

M. I. Belishev¹
Dept. of the Steklov Mathematical Institute (POMI)
Fontanka 27
St. Petersburg 191011
Russia
[email protected]

1 MOTIVATION

The questions posed below come from dynamical inverse problems for hyperbolic systems with boundary control. These questions arise in the framework of the BC-method, which is an approach to inverse problems based on their relations to boundary control theory [1], [2].

2 GEOMETRY

Let $\Omega \subset \mathbb{R}^3$ be a bounded domain with smooth (enough) boundary $\Gamma$; let $\lambda, \mu, \rho$ be smooth functions (Lamé parameters) satisfying $\rho > 0$, $\mu > 0$, $3\lambda + 2\mu > 0$ in $\bar\Omega$. The parameters determine two metrics in $\bar\Omega$,

$$dl^2_\alpha = \frac{|dx|^2}{c^2_\alpha}, \qquad \alpha = p, s,$$

where

$$c_p := \left( \frac{\lambda + 2\mu}{\rho} \right)^{\frac{1}{2}}, \qquad c_s := \left( \frac{\mu}{\rho} \right)^{\frac{1}{2}}$$

are the velocities of $p$- (pressure) and $s$- (shear) waves; let $\mathrm{dist}_\alpha$ be the corresponding distances.
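As a small numerical aside (not part of the problem statement), the standing assumptions $\rho > 0$, $\mu > 0$, $3\lambda + 2\mu > 0$ already imply $\lambda + \mu > \mu/3 > 0$, hence $c_s < c_p$ pointwise, even when $\lambda$ is negative; the sample values below are arbitrary.

```python
def wave_speeds(lam, mu, rho):
    """Pressure and shear wave velocities of the Lame model.
    Admissibility: rho > 0, mu > 0, 3*lam + 2*mu > 0; these give
    lam + mu > mu/3 > 0, so cp**2 - cs**2 = (lam + mu)/rho > 0."""
    assert rho > 0 and mu > 0 and 3 * lam + 2 * mu > 0
    cp = ((lam + 2 * mu) / rho) ** 0.5
    cs = (mu / rho) ** 0.5
    return cp, cs

# a point with negative lambda, still admissible since 3*lam + 2*mu > 0
cp, cs = wave_speeds(lam=-0.5, mu=1.0, rho=2.0)
```

The strict ordering $c_s < c_p$ is what makes the set $\Delta\Omega^T$ introduced below nonempty for $T < T_s$.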
¹Supported by the RFBR grant No. 02-01-00260.

The distance functions (eikonals)

$$\tau_\alpha(x) := \mathrm{dist}_\alpha(x, \Gamma), \qquad x \in \bar\Omega,$$

determine the subdomains

$$\Omega^T_\alpha := \{ x \in \Omega \mid \tau_\alpha(x) < T \}, \qquad T > 0,$$

and the values $T_\alpha := \inf\{ T > 0 \mid \Omega^T_\alpha = \Omega \}$, which are the times it takes for the $\alpha$-waves moving from $\Gamma$ to fill the whole of $\Omega$. The relation $c_s < c_p$ implies $\tau_p < \tau_s$, $\Omega^T_s \subset \Omega^T_p$, and $T_s > T_p$. If $T < T_s$, then

$$\Delta\Omega^T := \Omega^T_p \setminus \bar\Omega^T_s$$

is a nonempty open set. If $T > 0$ is 'not too large', the vector fields

$$\nu_\alpha := \frac{\nabla \tau_\alpha}{|\nabla \tau_\alpha|}$$

are regular and satisfy $\nu_p(x) \cdot \nu_s(x) > 0$, $x \in \Omega^T_p$. Due to the latter, each vector field ($\mathbb{R}^3$-valued function) $u = u(x)$ may be represented in the form

$$u(x) = u(x)_p + u(x)_s, \qquad x \in \Omega^T_p, \tag{$*$}$$

with $u(x)_p \parallel \nu_p(x)$ and $u(x)_s \perp \nu_s(x)$.

3 LAMÉ SYSTEM. CONTROLLABILITY

Consider the dynamical system
$$u_{i,tt} = \rho^{-1} \sum_{j,k,l=1}^{3} \partial_j \left( c_{ijkl}\, \partial_l u_k \right) \quad (i = 1, 2, 3) \quad \text{in } \Omega \times (0, T);$$
$$u\big|_{t=0} = u_t\big|_{t=0} = 0 \quad \text{in } \Omega; \qquad u = f \quad \text{on } \Gamma \times [0, T]$$

($\partial_j := \frac{\partial}{\partial x_j}$), where $c_{ijkl}$ is the elasticity tensor of the Lamé model:

$$c_{ijkl} = \lambda \delta_{ij} \delta_{kl} + \mu (\delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk});$$

let $u = u^f(x,t) = \{ u^f_i(x,t) \}_{i=1}^{3}$ be the solution (wave).

Denote $\mathcal{H} := L_{2,\rho}(\Omega; \mathbb{R}^3)$ (with measure $\rho\, dx$) and $\mathcal{H}^T_\alpha := \{ y \in \mathcal{H} \mid \mathrm{supp}\, y \subset \bar\Omega^T_\alpha \}$. As was shown in [3], the map $f \mapsto u^f$ is continuous from $L_2(\Gamma \times [0,T]; \mathbb{R}^3)$ into $C([0,T]; \mathcal{H})$. By virtue of this, and due to the finiteness of the wave velocities, the reachable set

$$\mathcal{U}^T := \{ u^f(\cdot, T) \mid f \in L_2(\Gamma \times [0,T]; \mathbb{R}^3) \}$$

is embedded into $\mathcal{H}^T_p$. As was proved in the same paper, the relation

$$\mathrm{clos}\, \mathcal{U}^T \supset \mathcal{H}^T_s$$

is valid for any $T > 0$, i.e., approximate controllability always holds in the subdomain $\Omega^T_s$ filled with the shear waves, whereas the elements of the defect subspace

$$\mathcal{N}^T := \mathcal{H}^T_p \ominus \mathrm{clos}_{\mathcal{H}}\, \mathcal{U}^T$$

('unreachable states') can be supported only in $\Delta\Omega^T$. On the other hand, it is not difficult to produce examples with $\mathcal{N}^T \neq \{0\}$, $T < T_s$.

4 PROBLEMS AND HYPOTHESES

The open problem is to characterize the defect subspace $\mathcal{N}^T$. The following are reasonable hypotheses.

• The defect space is always nontrivial: $\mathcal{N}^T \neq \{0\}$ for $T < T_s$ in the general case (not only in examples). Let us note that, due to the standard 'controllability–observability' duality, this property would mean that in any inhomogeneous isotropic elastic medium there exist slow waves whose forward front propagates with the velocity $c_s$.

• In the subdomain $\Delta\Omega^T$, where the elements of the defect subspace are supported, the pressure component of the wave (see ($*$)) determines its shear component through a linear operator:

$$u^f(\cdot, T)_s = K^T \left[ u^f(\cdot, T)_p \right] \quad \text{in } \Delta\Omega^T.$$

If this holds, the question is to describe the operator $K^T$.

• The decomposition ($*$) diagonalizes the principal part of the Lamé system.

Progress on these questions would be of great importance for the inverse problems of elasticity theory, which is now the most difficult and challenging class of dynamical inverse problems.

BIBLIOGRAPHY

[1] M. I. Belishev, "Boundary control in reconstruction of manifolds and metrics (the BC-method)," Inv. Prob., 13(5):R1–R45, 1997.

[2] M. I. Belishev and A. K. Glasman, "Dynamical inverse problem for the Maxwell system: Recovering the velocity in the regular zone (the BC-method)," St. Petersburg Math. J., 12(2):279–316, 2001.

[3] M. I. Belishev and I. Lasiecka, "The dynamical Lame system: Regularity of solutions, boundary controllability and boundary data continuation," ESAIM COCV, 8:143–167, 2002.

Problem 5.3
Null-controllability of the heat equation in unbounded domains

Sorin Micu
Facultatea de Matematică-Informatică
Universitatea din Craiova
Al. I. Cuza 13, Craiova, 1100
Romania
sd [email protected]

Enrique Zuazua
Departamento de Matemáticas, Facultad de Ciencias
Universidad Autónoma
Cantoblanco, 28049, Madrid
Spain
[email protected]

1 DESCRIPTION OF THE PROBLEM

Let $\Omega$ be a smooth domain of $\mathbb{R}^n$ with $n \geq 1$. Given $T > 0$ and $\Gamma_0 \subset \partial\Omega$, an open nonempty subset of the boundary of $\Omega$, we consider the linear heat equation:

$$\begin{cases} u_t - \Delta u = 0 & \text{in } Q \\ u = v\, 1_{\Sigma_0} & \text{on } \Sigma \\ u(x,0) = u_0(x) & \text{in } \Omega, \end{cases} \tag{1}$$

where $Q = \Omega \times (0,T)$, $\Sigma = \partial\Omega \times (0,T)$, and $\Sigma_0 = \Gamma_0 \times (0,T)$, and where $1_{\Sigma_0}$ denotes the characteristic function of the subset $\Sigma_0$ of $\Sigma$. In (1), $v \in L^2(\Sigma)$ is a boundary control that acts on the system through the subset $\Sigma_0$ of the boundary, and $u = u(x,t)$ is the state. System (1) is said to be null-controllable at time $T$ if for any $u_0 \in L^2(\Omega)$ there exists a control $v \in L^2(\Sigma_0)$ such that the solution of (1) satisfies

$$u(x, T) = 0 \quad \text{in } \Omega. \tag{2}$$

This article is concerned with the null-controllability problem for (1) when the domain $\Omega$ is unbounded.

2 MOTIVATION AND HISTORY OF THE PROBLEM

We begin with the following well-known result.

Theorem 1. When $\Omega$ is a bounded domain of class $C^2$, system (1) is null-controllable for any $T > 0$.

We refer to D. L. Russell [12] for some particular examples treated by means of moment problems and Fourier series, and to A. Fursikov and O. Yu. Imanuvilov [3] and G. Lebeau and L. Robbiano [7] for the general result covering any bounded smooth domain $\Omega$ and open, nonempty subset $\Gamma_0$ of $\partial\Omega$. Both the approaches of [3] and [7] are based on the use of Carleman inequalities. However, in many relevant problems the domain $\Omega$ is unbounded. We address the following question: if $\Omega$ is an unbounded domain, is system (1) null-controllable for some $T > 0$? None of the approaches mentioned above applies in this situation. In fact, except for very particular cases (see the following section), there exist no results on the null-controllability of the heat equation (1) when $\Omega$ is unbounded. The approach described in [6] and [9] is also worth mentioning.
There it is proved that, for any $T > 0$, the heat equation has a fundamental solution that is $C^\infty$ away from the origin and supported in the strip $0 \leq t \leq T$. This fundamental solution, of course, grows very fast as $|x|$ goes to infinity. As a consequence, a boundary controllability result may be immediately obtained in any domain $\Omega$ with controls distributed all along its boundary. Note, however, that when the domain is unbounded, the solutions and controls obtained in this way grow too fast as $|x| \to \infty$ and, therefore, these are not solutions in the classical sense. In fact, in the framework of unbounded domains, one has to be very careful in defining the class of admissible controlled solutions. When imposing, for instance, the classical integrability conditions at infinity, one imposes additional restrictions that may determine the answer to the controllability problem. This is indeed the case, as we shall explain. There is a weaker notion of controllability: the so-called approximate controllability property. System (1) is said to be approximately controllable in time $T$ if for any $u_0 \in L^2(\Omega)$ the set of reachable states,

$$R(T; u_0) = \{ u(T) : u \text{ solution of (1) with } v \in L^2(\Sigma_0) \},$$

is dense in $L^2(\Omega)$. With the aid of classical backward uniqueness results for the heat equation (see, for instance, J.-L. Lions and E. Malgrange [8] and J.-M. Ghidaglia [4]), it can be seen that null-controllability implies approximate controllability. The approximate control problem for the semilinear heat equation in general unbounded domains was addressed in [13], where an approximation method was developed. The domain $\Omega$ was approximated by bounded domains (essentially by $\Omega \cap B_R$, $B_R$ being the ball of radius $R$), and the approximate control in the unbounded domain $\Omega$ was obtained as the limit of the approximate controls on the approximating bounded domains $\Omega \cap B_R$.
But this approach does not apply in the context of the null-control problem. However, taking into account that approximate controllability holds, it is natural to analyze whether null-controllability holds as well. In [1] it was proved that the null-controllability property holds even in unbounded domains if the control is supported in a subdomain that leaves only a bounded set uncontrolled. Obviously, this result is very close to the case in which the domain $\Omega$ is bounded, and it does not answer the main issue under consideration, namely whether heat processes are null-controllable in unbounded domains.

3 AVAILABLE RESULTS

To our knowledge, in the context of unbounded domains $\Omega$ and the boundary control problem, only the particular case of the half-space has been considered:

$$\Omega = \mathbb{R}^n_+ = \{ x = (x', x_n) : x' \in \mathbb{R}^{n-1},\ x_n > 0 \}, \qquad \Gamma_0 = \partial\Omega = \mathbb{R}^{n-1} = \{ (x', 0) : x' \in \mathbb{R}^{n-1} \} \tag{3}$$

(see [10] for $n = 1$ and [11] for $n > 1$). According to the results in [10] and [11], the situation is completely different from the case of bounded domains. In fact, a simple argument shows that the null-controllability result that holds for bounded $\Omega$ is no longer true. Indeed, the null-controllability of (1) with initial data in $L^2(\mathbb{R}^n_+)$ and boundary control in $L^2(\Sigma)$ is equivalent to an observability inequality for the adjoint system

$$\varphi_t + \Delta\varphi = 0 \quad \text{in } Q, \qquad \varphi = 0 \quad \text{on } \Sigma. \tag{4}$$

More precisely, it is equivalent to the existence of a positive constant $C > 0$ such that

$$\| \varphi(0) \|^2_{L^2(\mathbb{R}^n_+)} \leq C \int_\Sigma \left| \frac{\partial\varphi}{\partial x_n} \right|^2 dx\, dt \tag{5}$$

holds for every smooth solution of (4). When $\Omega$ is bounded, Carleman inequalities provide the estimate (5) and, consequently, null-controllability holds (see, for instance, [3]). In the case of a half-space, by using a translation argument, it is easy to see that (5) does not hold (see [11]). In the case of bounded domains, by using Fourier series expansions, the control problem may be reduced to a moment problem. However, Fourier series cannot be used directly in $\mathbb{R}^n_+$. Nevertheless, it was observed by M. Escobedo and O. Kavian in [2] that, in suitable similarity variables and at the appropriate scale, solutions of the heat equation on conical domains may indeed be developed in Fourier series on a weighted $L^2$-space. This idea was used in [10] and [11] to study the null-controllability property when $\Omega$ is given by (3). Firstly, we use similarity variables and weighted Sobolev spaces to develop the solutions in Fourier series. A sequence of one-dimensional controlled systems like those studied in [10] is obtained. Each of these systems is equivalent to a moment problem of the following type: given $S > 0$ and $(a_n)_{n \geq 1}$ (depending on the Fourier coefficients of the initial data $u_0$), find $f \in L^2(0, S)$ such that
$$\int_0^S f(s)\, e^{ns}\, ds = a_n, \qquad \forall n \geq 1. \tag{6}$$

This moment problem turns out to be critical, since it concerns the family of real exponential functions $\{ e^{\lambda_n s} \}_{n \geq 1}$ with $\lambda_n = n$, for which the usual summability condition on the inverses of the exponents, $\sum_{n \geq 1} \frac{1}{\lambda_n} < \infty$, does not hold. It was proved that, if the sequence $(a_n)_{n \geq 1}$ has the property that, for any $\delta > 0$, there exists $C_\delta > 0$ such that

$$|a_n| \leq C_\delta\, e^{\delta n}, \qquad \forall n \geq 1, \tag{7}$$

then problem (6) has a solution if and only if $a_n = 0$ for all $n \geq 1$. Since $(a_n)_{n \geq 1}$ depends on the Fourier coefficients of the initial data, the following negative controllability result for the one-dimensional systems is obtained:

Theorem 2. When $\Omega$ is the half-line, there is no nontrivial initial datum $u_0$ belonging to a negative Sobolev space that is null-controllable in finite time with $L^2$ boundary controls.

This negative result was complemented by showing that there exist initial data with exponentially growing Fourier coefficients for which null-controllability holds in finite time with $L^2$-controls. We mention that in [10] and [11] we deal with solutions defined in the sense of transposition, and therefore the solutions in [6] and [9], which grow and oscillate very fast at infinity, are excluded.

4 OPEN PROBLEMS

As we have already mentioned, the null-controllability property of (1) when $\Omega$ is unbounded and different from a half-line or half-space is still open. The approach based on the use of similarity variables may still be used in general conical domains. But, due to the lack of orthogonality of the traces of the normal derivatives of the eigenfunctions, the corresponding moment problem is more complex and remains to be solved. When $\Omega$ is a general unbounded domain, the similarity transformation does not seem to be of any help, since the domain one obtains after the transformation depends on time. Therefore, a completely different approach seems to be needed when $\Omega$ is not conical. However, one may still expect a bad behavior of the null-control problem.
Indeed, assume for instance that $\Omega$ contains $\mathbb{R}^n_+$. If one is able to control to zero in $\Omega$ an initial datum $u_0$ by means of a boundary control acting on $\partial\Omega \times (0,T)$, then, by restriction, one is able to control the initial datum $u_0|_{\mathbb{R}^n_+}$, with the control being the restriction of the solution in the larger domain $\Omega \times (0,T)$ to $\mathbb{R}^{n-1} \times (0,T)$. A careful development of this argument, and of the result it may lead to, remains to be done.

ACKNOWLEDGEMENTS

The first author was partially supported by Grant PB96-0663 of DGES (Spain) and Grant A3/2002 of CNCSIS (Romania). The second author was partially supported by Grant PB96-0663 of DGES (Spain) and the TMR network of the EU "Homogenization and Multiple Scales (HMS2000)."

BIBLIOGRAPHY
[1] V. Cabanillas, S. de Menezes and E. Zuazua, “Null controllability in unbounded domains for the semilinear heat equation with nonlinearities involving gradient terms,” J. Optim. Theory Appl., 110 (2) (2001), 245–264.
[2] M. Escobedo and O. Kavian, “Variational problems related to self-similar solutions of the heat equation,” Nonlinear Anal. TMA, 11 (1987), 1103–1133.
[3] A. Fursikov and O. Yu. Imanuvilov, “Controllability of evolution equations,” Lecture Notes Series #34, Research Institute of Mathematics, Global Analysis Research Center, Seoul National University, 1996.
[4] J. M. Ghidaglia, “Some backward uniqueness results,” Nonlinear Anal. TMA, 10 (1986), 777–790.
[5] O. Yu. Imanuvilov and M. Yamamoto, “Carleman estimate for a parabolic equation in Sobolev spaces of negative order and its applications,” In: Control of Nonlinear Distributed Parameter Systems, G. Chen et al., eds., Marcel Dekker, 2000, pp. 113–137.
[6] B. F. Jones, Jr., “A fundamental solution of the heat equation which is supported in a strip,” J. Math. Anal. Appl., 60 (1977), 314–324.
[7] G. Lebeau and L. Robbiano, “Contrôle exact de l'équation de la chaleur,” Comm. P.D.E., 20 (1995), 335–356.
[8] J. L. Lions and E. Malgrange, “Sur l'unicité rétrograde dans les problèmes mixtes paraboliques,” Math. Scand., 8 (1960), 277–286.
[9] W. Littman, “Boundary control theory for hyperbolic and parabolic partial differential equations with constant coefficients,” Annali Scuola Norm. Sup. Pisa, Serie IV, 3 (1978), 567–580.
[10] S. Micu and E. Zuazua, “On the lack of null-controllability of the heat equation on the half-line,” Trans. AMS, 353 (2001), 1635–1659.
[11] S. Micu and E. Zuazua, “On the lack of null-controllability of the heat equation on the half-space,” Portugaliae Mathematica, 58 (2001), 1–24.
[12] D. L. Russell, “Controllability and stabilizability theory for linear partial differential equations. Recent progress and open questions,” SIAM Rev., 20 (1978), 639–739.
[13] L.
de Teresa and E. Zuazua, “Approximate controllability of the heat equation in unbounded domains,” Nonlinear Anal. TMA, 37 (1999), 1059–1090.

Problem 5.4
Is the conservative wave equation regular? George Weiss
Dept. of Electrical and Electronic Engineering
Imperial College London
Exhibition Road
London SW7 2BT
UK
G.Weiss@imperial.ac.uk

1 DESCRIPTION OF THE PROBLEM

We consider an infinite-dimensional system described by the wave equation on an n-dimensional domain, with mixed boundary control and mixed boundary observation, which has been analyzed (as an example for a certain class of conservative linear systems) in [13]. A somewhat simpler version of this system has appeared (also as an example) in the paper [11, section 7], and a related system has been discussed in [5]. We assume that Ω ⊂ R^n is a bounded domain with Lipschitz boundary Γ, as defined in Grisvard [3]. This means that, locally, after a suitable rotation of the orthogonal coordinate system, the boundary is the graph of a Lipschitz function defined on an open set in R^{n−1}. Such a boundary admits corners and edges. Γ0 and Γ1 are nonempty open subsets of Γ such that Γ0 ∩ Γ1 = ∅ and Γ0 ∪ Γ1 = Γ. We denote by x the space variable (x ∈ Ω). A function b ∈ L^∞(Γ1) is given, which intuitively expresses how strongly the input signal acts on different parts of the active boundary Γ1. We assume that b(x) ≠ 0 for almost every x ∈ Γ1. The equations of the system are

z̈(x, t) = Δz(x, t)  on Ω × [0, ∞),
z(x, t) = 0  on Γ0 × [0, ∞),
∂z/∂ν (x, t) + b(x)² ż(x, t) = √2 · b(x) u(x, t)  on Γ1 × [0, ∞),      (1)
∂z/∂ν (x, t) − b(x)² ż(x, t) = √2 · b(x) y(x, t)  on Γ1 × [0, ∞),
z(x, 0) = z0(x),  ż(x, 0) = w0(x)  on Ω,

where u is the input function and y is the output function. The functions z0 and w0 are the initial state of the system. The part Γ0 of the boundary just reflects waves, while inputs and outputs act through the part Γ1. For every g ∈ H¹(Ω) we denote by γg the Dirichlet trace of g on Γ (for g ∈ C¹(Ω̄) ⊂ H¹(Ω) this would simply be the restriction of g to Γ). We regard γg as an element of L²(Γ). We define the Hilbert space
H¹_{Γ0}(Ω) = { g ∈ H¹(Ω) | γg = 0 on Γ0 },  with the norm  ‖g‖_{H¹_{Γ0}} = ‖∇g‖_{L²}.

Proposition 1. The equations (1) determine a well-posed linear system Σ with input space U = L²(Γ1), output space Y = L²(Γ1), and state space
X = H¹_{Γ0}(Ω) × L²(Ω).

For the precise meaning of a well-posed linear system we refer to [8, 9, 6]. These papers use the same notation and terminology that we use here, but their references will indicate other works in which equivalent definitions can be found. We give a short explanation of what well-posedness means in our case. If we take x(0) = [z0, w0]^T ∈ X and u ∈ L²([0, ∞); U), and we solve the equations (1) on the time interval [0, ∞), then we get x(τ) = [z(τ), ż(τ)]^T ∈ X for every τ ≥ 0. x(τ) is called the state of the system at time τ. Moreover, if we denote the restriction of y to [0, τ] by Pτ y, then Pτ y ∈ L²([0, τ]; Y). (Note that in our particular case, U = Y.) We can introduce four families of bounded operators T, Φ, Ψ, and F, indexed by τ ≥ 0, such that for every such τ,

x(τ) = Tτ x(0) + Φτ Pτ u,   Pτ y = Ψτ x(0) + Fτ Pτ u.

Thus, for every τ ≥ 0, the operator matrix

Στ = [ Tτ  Φτ ]
     [ Ψτ  Fτ ]

defines a bounded operator from X × L²([0, τ]; U) to X × L²([0, τ]; Y). This is the essential feature of a well-posed linear system. In fact, in [8, 9, 6], Σ is defined as the family of operators Στ. For a well-posed linear system, the family T is a strongly continuous semigroup of operators acting on X. Proposition 1 was proved in [13, section 7], together with the following:

Proposition 2. The system Σ from Proposition 1 is conservative.

The fact that Σ is conservative means that the operators Στ are unitary. In particular, the fact that Στ is isometric means that we have
‖x(τ)‖² − ‖x(0)‖² = ∫_0^τ ‖u(t)‖² dt − ∫_0^τ ‖y(t)‖² dt,

which can be interpreted as an energy balance equation. For background on conservative systems, we refer to [1, 2, 4, 7, 12, 13]. The system Σ has, like every conservative system, a transfer function G that is in the Schur class. This means that G is analytic on the open right half-plane C0 and ‖G(s)‖ ≤ 1 for all s ∈ C0. For the simple proof of this fact, see [13, theorem 1.3 and proposition 4.5]. The boundary values G(iω) can be defined for almost every ω ∈ R as nontangential limits, and we have G(iω)* G(iω) = G(iω) G(iω)* = I for almost every ω ∈ R (i.e., G is inner and co-inner). This follows from [10, proposition 2.1] or, alternatively, from [7, corollary 7.3]. Recall that a well-posed linear system with input space U, output space Y, and transfer function G is called regular if for every v ∈ U, the limit
lim_{s→+∞, s∈R} G(s) v = D v

exists. In this case, the operator D ∈ L(U, Y) is called the feedthrough operator of the system (see [8, 9, 6] for further details). For regular linear systems, the theories of local representation, feedback, and dynamic stabilization are much simpler than for well-posed linear systems.

Conjecture. The system Σ from Proposition 1 is regular and its feedthrough operator is zero.

Consider the particular situation when Ω is one-dimensional: Ω = (0, 1), Γ0 = {0}, Γ1 = {1} and U = Y = C. Now the function b becomes a nonzero number, and without loss of generality we may take b = 1. It is easy to see that the input signal enters the domain at x = 1, propagates along the domain (with unit speed) until it gets reflected at x = 0, and then propagates back to exit (as the output signal) at x = 1. If the initial state is zero, then for t ≥ 2 we have y(t) = u(t − 2), so that the transfer function is G(s) = e^{−2s}. Note that G is indeed inner, and it is regular with feedthrough operator zero. The author thinks that he can prove the conjecture in the following particular case: the active boundary Γ1 can be partitioned into a finite union of open subsets that are either planar (i.e., an open subset of an (n − 1)-dimensional hyperplane) or spherical (i.e., an open subset of an (n − 1)-dimensional sphere). The idea is to construct solutions of (1) that locally (near a boundary point) look like a planar or spherical wave moving into the domain Ω (the initial state is zero), while locally (in time and space) u is a step function. Then locally (in time and space) y is zero, which proves the claim, due to the equivalent characterization of regularity via the step response; see [8, theorem 5.8].

BIBLIOGRAPHY

[1] D. Z. Arov and M. A. Nudelman, “Passive linear stationary dynamical scattering systems with continuous time,” Integral Equations and Operator Theory, 24 (1996), pp. 1–43.
[2] J. A.
Ball, “Conservative dynamical systems and nonlinear Livsic–Brodskii nodes,” Operator Theory: Advances and Applications, 73 (1994), pp. 67–95.
[3] P. Grisvard, Elliptic Problems in Nonsmooth Domains, Pitman, Boston, 1985.
[4] B. M. J. Maschke and A. J. van der Schaft, “Port-controlled Hamiltonian representation of distributed parameter systems,” Proc. of the IFAC Workshop on Lagrangian and Hamiltonian Methods for Nonlinear Control, N. E. Leonard and R. Ortega, eds., Princeton University Press, 2000, pp. 28–38.
[5] A. Rodriguez-Bernal and E. Zuazua, “Parabolic singular limit of a wave equation with localized boundary damping,” Discrete and Continuous Dynamical Systems, 1 (1995), pp. 303–346.
[6] O. J. Staffans and G. Weiss, “Transfer functions of regular linear systems. Part II: The system operator and the Lax–Phillips semigroup,” Trans. American Math. Society, 354 (2002), pp. 3229–3262.
[7] O. J. Staffans and G. Weiss, “Transfer functions of regular linear systems. Part III: Inversions and duality,” submitted.
[8] G. Weiss, “Transfer functions of regular linear systems. Part I: Characterizations of regularity,” Trans. American Math. Society, 342 (1994), pp. 827–854.
[9] G. Weiss, “Regular linear systems with feedback,” Mathematics of Control, Signals and Systems, 7 (1994), pp. 23–57.
[10] G. Weiss, “Optimal control of systems with a unitary semigroup and with colocated control and observation,” Systems and Control Letters, 48 (2003), pp. 329–340.
[11] G. Weiss and R. Rebarber, “Optimizability and estimatability for infinite-dimensional linear systems,” SIAM J. Control and Optimization, 39 (2001), pp. 1204–1232.
[12] G. Weiss, O. J. Staffans and M. Tucsnak, “Well-posed linear systems: A survey with emphasis on conservative systems,” Applied Mathematics and Computer Science, 11 (2001), pp. 101–127.
[13] G. Weiss and M. Tucsnak, “How to get a conservative well-posed linear system out of thin air. Part I: well-posedness and energy balance,” ESAIM COCV, 9 (2003), pp. 247–274.
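As a numerical footnote to the one-dimensional example above (where the system reduces to a pure delay of two time units and G(s) = e^{−2s}), the isometric identity of Proposition 2 can be checked on a discretized delay line; the delay length and input signal below are illustrative choices, not from the text.

```python
import numpy as np

def run_delay_line(u, delay_steps):
    """Push samples of u through a delay line; return output and final state.

    The state is the content of the line (initially zero), the output is the
    input delayed by `delay_steps` samples -- a discrete analogue of
    y(t) = u(t - 2) for the 1-d conservative wave system.
    """
    state = np.zeros(delay_steps)
    y = np.empty_like(u)
    for k, uk in enumerate(u):
        y[k] = state[-1]                          # oldest sample exits
        state = np.concatenate(([uk], state[:-1]))  # new sample enters
    return y, state

rng = np.random.default_rng(0)
u = rng.standard_normal(50)
y, x_final = run_delay_line(u, delay_steps=7)

# Discrete energy balance: ||x(tau)||^2 - ||x(0)||^2 = sum|u|^2 - sum|y|^2,
# with x(0) = 0 here.
lhs = np.dot(x_final, x_final)
rhs = np.dot(u, u) - np.dot(y, y)
assert abs(lhs - rhs) < 1e-10
```

The identity holds exactly because a delay line neither creates nor dissipates energy: whatever has entered and not yet exited is stored in the state.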
Problem 5.5
Exact controllability of the semilinear wave equation Xu Zhang
Departamento de Matemática Aplicada
Universidad Complutense
28040 Madrid
Spain
and
School of Mathematics
Sichuan University
Chengdu 610064
China
[email protected]

Enrique Zuazua
Departamento de Matemáticas, Facultad de Ciencias
Universidad Autónoma
28049 Madrid
Spain
[email protected]

1 DESCRIPTION OF THE PROBLEM

Let T > 0 and let Ω ⊂ R^n (n ∈ N) be a bounded domain with a C^{1,1} boundary ∂Ω. Let ω be a proper subdomain of Ω and denote the characteristic function of the set ω by χ_ω. Fix a nonlinear function f ∈ C¹(R). We are concerned with the exact controllability of the following semilinear wave equation:

y_tt − Δy + f(y) = χ_ω(x) u(t, x)  in (0, T) × Ω,
y = 0  on (0, T) × ∂Ω,      (1)
y(0) = y0,  y_t(0) = y1  in Ω.

In (1), (y(t, ·), y_t(t, ·)) is the state and u(t, ·) is the control that acts on the system through the subset ω of Ω. In what follows, we choose the state space and the control space as H_0^1(Ω) × L²(Ω) and L²((0, T) × Ω), respectively. Of course, the choice of these spaces is not unique, but this one is very natural in the context of the wave equation. The space H_0^1(Ω) × L²(Ω) is often referred to as the energy space. The exact (internal) controllability problem for (1) (in H_0^1(Ω) × L²(Ω)) may be formulated as follows: for any given (y0, y1), (z0, z1) ∈ H_0^1(Ω) × L²(Ω), find (if possible) a control u ∈ L²((0, T) × Ω) such that the weak solution y of (1) satisfies

y(T) = z0 and y_t(T) = z1 in Ω.      (2)

The exact (boundary) controllability problem for (1) can be formulated similarly. In that case, the control u enters the system through the boundary conditions. This produces extra technical difficulties. The main open problem on the controllability of this semilinear wave equation that we shall describe here arises in both cases. We prefer to present it in the case where the control acts on the internal subdomain ω, to avoid unnecessary technical difficulties. First of all, it is well-known that when f grows too fast, the solution of (1) may blow up.
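The blow-up just mentioned can be illustrated on a drastically simplified caricature (a sketch of ours, not from the references): for spatially independent solutions with u = 0 and boundary conditions ignored, (1) reduces to the ODE ÿ + f(y) = 0, and for the bad-sign nonlinearity f(y) = −y² this reads ÿ = y², whose positive solutions escape to infinity in finite time.

```python
def escape_time(y0, threshold=1e6, dt=1e-4, t_max=10.0):
    """First time |y| exceeds `threshold` for y'' = y**2, y(0) = y0, y'(0) = 0,
    integrated by explicit Euler; returns None if no escape before t_max.
    This is a caricature of blow-up: x-independent solutions of (1) with
    u = 0 and f(y) = -y**2 (boundary conditions ignored) satisfy y'' = y**2."""
    y, v, t = float(y0), 0.0, 0.0
    while t < t_max:
        y, v, t = y + dt * v, v + dt * y * y, t + dt
        if abs(y) > threshold:
            return t
    return None

t_star = escape_time(3.0)
assert t_star is not None and t_star < 4.0   # finite-time escape, well before t_max
```

A globally Lipschitz f, by contrast, can only produce exponential growth, which is consistent with such nonlinearities satisfying the global-existence hypothesis (H1) below.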
In the presence of blow-up phenomena, as a consequence of the finite speed of propagation of solutions of (1), the exact controllability of (1) does not hold unless ω = Ω ([13]). This exceptional case means that the control acts on the system everywhere in Ω, in which case the effect of the nonlinearity may be suppressed easily. Therefore, we suppose that

(H1) The nonlinearity f ∈ C¹(R) is such that (1) admits a global weak solution y ∈ C([0, T]; H_0^1(Ω)) ∩ C¹([0, T]; L²(Ω)) for any given (y0, y1) ∈ H_0^1(Ω) × L²(Ω) and u ∈ L²((0, T) × Ω).

There are two classes of conditions on f guaranteeing that (H1) holds. The first one, which will be called the mild growth condition, amounts to requiring that f ∈ C¹(R) grows “mildly” at infinity (see [2] and [3]), i.e.,
lim_{|x|→∞} [ ∫_0^x f(s) ds ] / [ x² ∏_{k=1}^∞ (log_k(e_k + |x|))² ] < ∞,      (3)

where the iterated logarithm function log_j is defined by the formulas log_0 s = s and log_{j+1} s = log(log_j s), j = 0, 1, 2, ..., and the number e_j is defined by the equation log_j e_j = 1. It is obvious that any globally Lipschitz continuous function f satisfies (3). But, of course, (3) allows f to grow in a slightly superlinear way at infinity. The second one, which will be called the good-sign growth condition, is the class of functions f ∈ C¹(R) that grow fast at infinity but satisfy a “good-sign” condition, i.e., there exist constants L > 0, p ∈ (1, n/(n − 2)] if n ≥ 3 and p ∈ (1, ∞) if n = 1, 2, such that

|f(r) − f(s)| ≤ L(1 + |r|^{p−1} + |s|^{p−1}) |r − s|,  ∀ r, s ∈ R,      (4)

and

∫_0^x f(s) ds ≥ −L x²  as |x| → ∞.      (5)

A typical example is

f(u) = u³ for n = 1, 2, 3.      (6)

On the other hand, it is well-known that, even in the linear case where f ≡ 0, some conditions on the controllability time T and on the geometry of the set ω where the control applies are needed in order to guarantee the exact controllability property. Thus, we assume that

(H2) T and ω are such that (1) with f ≡ 0 is exactly controllable.

There are also two classes of conditions on T and ω guaranteeing that (H2) holds. The first one, which we will call the classical multiplier condition, is when ω is a neighborhood of a subset of the boundary of the form Γ(x0) = {x ∈ ∂Ω : (x − x0) · ν(x) > 0} for some x0 ∈ R^n, where ν(x) is the unit outward normal vector to ∂Ω at x, and T > 2 max{|x − x0| : x ∈ Ω \ ω}. This is the typical situation one encounters when applying the multiplier technique ([8]). The second one is when T and ω satisfy the so-called Geometric Control Condition introduced in [1]. We have the following Open Problem: Do (H1) and (H2) imply the exact controllability of (1)? The above problem can also be formulated in the more general case in which the nonlinearity is of the form f(t, x, y, y_t, ∇y). Of course, the problem is even more difficult in that case, and new phenomena may occur due to the strong dissipative effects that terms of the form |y_t|^{p−1} y_t may produce. Thus, we shall focus on the case f = f(y). This open problem will be made more precise below.

2 AVAILABLE RESULTS AND OPEN PROBLEMS

Nonlinearities with mild growth condition

For the one space dimensional case, by combining sidewise energy estimates for the 1-d wave equation with a fixed point technique, Zuazua ([13]) obtained the following result:

Theorem 1: Assume n = 1 and Ω = (0, 1). Let (a, b) be a (proper) subinterval of (0, 1), T > 2 max(a, 1 − b), and
lim_{|x|→∞} f(x) x^{−1} log^{−2}|x| = 0.      (7)

Then (1) is exactly controllable. Later on, based on a method due to Émanuilov ([5]), Cannarsa, Komornik, and Loreti ([2]) improved Theorem 1 by relaxing the growth condition on f. The main result in [2] says that the same conclusion as in Theorem 1 holds if the condition (7) on f is replaced by (3). The growth condition (3) on f is sharp (since solutions of (1) may blow up whenever f grows faster than (3) at infinity and f has the bad sign). For the higher dimensional case, Li and Zhang ([7]) proved the following result:

Theorem 2: Let ω be a neighborhood of ∂Ω, T > diam(Ω \ ω), and

lim_{|x|→∞} f(x) x^{−1} log^{−1/2}|x| = 0.      (8)

Then (1) is exactly controllable.

A special case of Theorem 2 is when f is globally Lipschitz continuous, which gives the main result of Zuazua in [12]. The main result in [12] was generalized to an abstract setting by Lasiecka and Triggiani ([6]), using a global version of the Inverse Function theorem, and was extended in [9] to the case when T and ω satisfy the classical multiplier condition. It is natural to conjecture that the same conclusion as in Theorem 2 holds under the growth condition (3) on f, as in one dimension. But this is by now an open problem. On the other hand, whether the same conclusion as in Theorem 2 holds under more general conditions on T and ω, say the classical multiplier condition or the Geometric Control Condition, is also an open problem. In particular, when T and ω satisfy the Geometric Control Condition, the exact controllability problem for (1) is open even for globally Lipschitz continuous nonlinearities.

Nonlinearities with good sign and superlinear growth at infinity

In this case, there are no global exact controllability results in the literature. However, using a fixed point argument, Zuazua proved the following local exact controllability result for (1) ([10]):

Theorem 3: Let (H2) hold, and let f ∈ C¹(R) satisfy (4) and f(0) = 0. Then there is a δ > 0 such that for any (y0, y1) and (z0, z1) in H_0^1(Ω) × L²(Ω) with ‖(y0, y1)‖_{H_0^1(Ω)×L²(Ω)} + ‖(z0, z1)‖_{H_0^1(Ω)×L²(Ω)} ≤ δ, there is a control u ∈ L²((0, T) × Ω) such that (2) holds.

Combining Theorem 3 and the stabilization results for semilinear wave equations with a “good-sign” condition on the nonlinearity ([11] and [4]), it is easy to show that:

Theorem 4: Let T0 and ω satisfy the classical multiplier condition and let f satisfy (4)–(5). Then for any (y0, y1) and (z0, z1) in H_0^1(Ω) × L²(Ω), there exist a time T ≥ T0 and a control u(·) ∈ L²((0, T) × Ω) such that (2) holds.

Note that the controllability time T in Theorem 4 depends on (y0, y1) and (z0, z1).
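The gap between the one-dimensional condition (7) and the higher-dimensional condition (8) can be seen on a concrete nonlinearity. The following sketch (an illustration of ours, not from the references) checks numerically that f(x) = x log x satisfies (7), so it is covered by the 1-d theory, while it violates (8), so Theorem 2 does not apply to it.

```python
import math

def ratio7(f, x):
    """The quantity in condition (7): f(x) / (x * log(x)^2)."""
    return f(x) / (x * math.log(x) ** 2)

def ratio8(f, x):
    """The quantity in condition (8): f(x) / (x * log(x)^(1/2))."""
    return f(x) / (x * math.sqrt(math.log(x)))

f = lambda x: x * math.log(x)          # slightly superlinear nonlinearity

# Condition (7) holds: the ratio 1/log(x) decreases toward 0 ...
assert ratio7(f, 1e12) < ratio7(f, 1e6) < 0.1
# ... but condition (8) fails: the ratio sqrt(log(x)) grows without bound.
assert ratio8(f, 1e12) > ratio8(f, 1e6) > 1.0
```

Closing this gap, i.e., proving Theorem 2 under (7) (or even (3)) instead of (8), is precisely the conjecture stated in the text.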
According to [11], one can obtain explicit bounds on T. However, whether T may be chosen to be uniform, i.e., independent of the data (y0, y1) and (z0, z1), is an open problem, even for the nonlinearity in (6) with n = 1. This is certainly one of the main open problems in the context of controllability of nonlinear PDE.

ACKNOWLEDGEMENTS

This work was supported in part by grant PB96-0663 of the DGES (Spain), the EU TMR Project “Homogenization and Multiple Scales,” the Foundation for the Authors of Excellent Ph.D. Theses in China, and the NSF of China (19901024).

BIBLIOGRAPHY

[1] C. Bardos, G. Lebeau, and J. Rauch, “Sharp sufficient conditions for the observation, control, and stabilization of waves from the boundary,” SIAM J. Control Optim., 30, pp. 1024–1065, 1992.
[2] P. Cannarsa, V. Komornik, and P. Loreti, “One-sided and internal controllability of semilinear wave equations with infinitely iterated logarithms,” Preprint.
[3] T. Cazenave and A. Haraux, “Équations d'évolution avec non linéarité logarithmique,” Ann. Fac. Sci. Toulouse, 2, pp. 21–51, 1980.
[4] B. Dehman, G. Lebeau, and E. Zuazua, “Stabilization and control for the subcritical semilinear wave equation,” Preprint, 2002.
[5] O. Yu. Émanuilov, “Boundary controllability of semilinear evolution equations,” Russian Math. Surveys, 44, pp. 183–184, 1989.
[6] I. Lasiecka and R. Triggiani, “Exact controllability of semilinear abstract systems with application to waves and plates boundary control problems,” Appl. Math. Optim., 23, pp. 109–154, 1991.
[7] L. Li and X. Zhang, “Exact controllability for semilinear wave equations,” J. Math. Anal. Appl., 250, pp. 589–597, 2000.
[8] J. L. Lions, Contrôlabilité exacte, perturbations et systèmes distribués, Tome 1, Rech. Math. Appl. 8, Masson, Paris, 1988.
[9] X. Zhang, “Explicit observability estimate for the wave equation with potential and its application,” R. Soc. Lond. Proc. Ser. A Math. Phys.
Eng. Sci., 456, pp. 1101–1115, 2000.
[10] E. Zuazua, “Exact controllability for the semilinear wave equation,” J. Math. Pures Appl., 69, pp. 1–31, 1990.
[11] E. Zuazua, “Exponential decay for semilinear wave equations with localized damping,” Comm. Partial Differential Equations, 15, pp. 205–235, 1990.
[12] E. Zuazua, “Exact boundary controllability for the semilinear wave equation,” In: Nonlinear Partial Differential Equations and Their Applications, Collège de France Seminar, vol. X (Paris, 1987–1988), pp. 357–391, Pitman Res. Notes Math. Ser., 220, Longman Sci. Tech., Harlow, 1991.
[13] E. Zuazua, “Exact controllability for semilinear wave equations in one space dimension,” Ann. Inst. H. Poincaré Anal. Non Linéaire, 10, pp. 109–129, 1993.

Problem 5.6
Some control problems in electromagnetics and ﬂuid dynamics Lorella Fatone
Dipartimento di Matematica Pura ed Applicata
Università di Modena e Reggio Emilia
Via Campi 213/b, 41100 Modena (MO)
Italy
[email protected]

Maria Cristina Recchioni
Istituto di Teoria delle Decisioni e Finanza Innovativa (DE.F.IN.)
Università di Ancona
Piazza Martelli 8, 60121 Ancona (AN)
Italy
[email protected]

Francesco Zirilli
Dipartimento di Matematica “G. Castelnuovo”
Università di Roma “La Sapienza”
Piazzale Aldo Moro 2, 00185 Roma
Italy
[email protected]

1 INTRODUCTION

In recent years, as a consequence of the dramatic increase in computing power and of the continuing refinement of the numerical algorithms available, the numerical treatment of control problems for systems governed by partial differential equations has received great attention; see, for example, [1], [3], [4], [5], [8]. The importance of these mathematical problems in many applications in science and technology cannot be overemphasized. The most common approach to a control problem for a system governed by partial differential equations is to regard the problem as a constrained nonlinear optimization problem in infinite dimension. After discretization, the problem becomes a finite-dimensional constrained nonlinear optimization problem that can be attacked with the usual iterative methods of nonlinear optimization, such as Newton or quasi-Newton methods. Note that the problem of the convergence, when the “discretization step goes to zero,” of the solutions computed in finite dimension to the solution of the infinite-dimensional problem is a separate question and must be addressed separately. When this approach is used, an objective function evaluation in the nonlinear optimization procedure involves the solution of the partial differential equations that govern the system. Moreover, the evaluation of the gradient or Hessian of the objective function involves the solution of some kind of sensitivity equations for the partial differential equations considered. The nonlinear optimization procedure, which usually involves function, gradient, and Hessian evaluations, is therefore computationally very expensive. This fact is a serious limitation to the use of control problems for systems governed by partial differential equations in real situations.
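To make the cost structure of this "discretize, then optimize" approach concrete, here is a minimal sketch for a linear-quadratic toy problem: boundary control of a discretized 1-d heat equation. All dimensions, time steps, weights, and the zero target are illustrative choices of ours, not taken from the references. In this linear case the whole optimization collapses to one linear solve; in the nonlinear case each iteration of a Newton-type method costs comparable PDE and sensitivity solves.

```python
import numpy as np

def heat_boundary_control(n=20, steps=50, dt=1e-2, alpha=1e-6):
    """Toy discretized LQ boundary control of the 1-d heat equation.

    State update: y_{k+1} = A y_k + B u_k, with A an implicit Euler step of
    the discrete Laplacian and the control entering at the right boundary
    node (a rough discretization, adequate for a sketch)."""
    h = 1.0 / (n + 1)
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h**2
    A = np.linalg.inv(np.eye(n) - dt * lap)        # implicit Euler step
    B = (dt / h**2) * (A @ np.eye(n)[:, -1:])      # boundary value at x = 1

    y0 = np.sin(np.pi * h * np.arange(1, n + 1))   # smooth initial datum
    # Linearity: y_N = A^N y0 + G u, column k of G being A^(N-1-k) B.
    G = np.hstack([np.linalg.matrix_power(A, steps - 1 - k) @ B
                   for k in range(steps)])
    free = np.linalg.matrix_power(A, steps) @ y0   # uncontrolled final state

    # One linear solve replaces the iterative optimization loop:
    # minimize ||free + G u||^2 + alpha ||u||^2 via the normal equations.
    u = np.linalg.solve(G.T @ G + alpha * np.eye(steps), -G.T @ free)
    return np.linalg.norm(free), np.linalg.norm(free + G @ u)

uncontrolled, controlled = heat_boundary_control()
assert controlled < 0.5 * uncontrolled   # the control markedly reduces the final norm
```

The point of the construction recalled in section 2 below is precisely to obtain such a "one solve instead of many" structure for genuinely infinite-dimensional wave problems, via the Pontryagin maximum principle rather than via discretization.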
However, the approach previously described is very straightforward and does not use any of the special features present in every system governed by partial differential equations, so that, at least in some special cases, it should be possible to improve on this straightforward approach. The purpose of this paper is to point out a problem (see [6], [2]) where a new approach, which greatly improves on the previously described one, has been introduced, and to suggest some other problems where, hopefully, similar improvements can be obtained. In particular, we propose two control problems of great relevance in several applications in science and technology, and we suggest the (open) question of characterizing the optimal solutions of these control problems as the solutions of suitable systems of partial differential equations. If this question has an affirmative answer, high-performance algorithms can be developed to solve the control problems proposed. Note that in [6], [2] this characterization has been carried out for some control problems in acoustics, thanks to the use of the Pontryagin maximum principle, and has made it possible to develop high-performance algorithms to solve these control problems. Moreover, we suggest the (open) question of using effectively the dynamic programming method to derive closed loop control laws for the control problems considered. By effective use of the dynamic programming method, we mean the possibility of computing a closed loop control law at approximately the same computational cost as solving the original problem when no control strategy is involved. In section 2 we summarize the results obtained in [6], [2], and in section 3 we present two problems that we believe can be approached in a way similar to the one described in [6], [2].

2 PREVIOUS RESULTS

In [6], [2] a furtivity problem in time-dependent acoustic obstacle scattering is considered.
An obstacle of known acoustic impedance is hit by a known incident acoustic field. When hit by the incident acoustic field, the obstacle generates a scattered acoustic field. To make the obstacle furtive means to “minimize” the scattered field. The furtivity effect is obtained by circulating on the boundary of the obstacle a “pressure current,” that is, a quantity whose physical dimension is pressure divided by time. The problem consists in finding the optimal “pressure current” that “minimizes” the scattered field and the “size” of the pressure current employed. The mathematical model used to study this problem is a control problem for the wave equation, where the control function (i.e., the pressure current) influences the state variable (i.e., the scattered field) through a boundary condition imposed on the boundary of the obstacle, and the cost functional depends explicitly on both the state variable and the control function. Introducing an auxiliary variable and using the Pontryagin maximum principle (see [7]), it is shown in [6], [2] that the optimal control of this problem can be obtained from the solution of a system of two coupled wave equations. This system of wave equations is equipped with suitable initial, final, and boundary conditions. Thanks to this ingenious construction, the solution of the optimal control problem can be obtained by solving the system of wave equations, without the necessity of going through the iterations implied in general by the nonlinear optimization procedure. This avoids many of the difficulties, mentioned above, that are present in the general case. Finally, the system of wave equations is solved numerically using a highly parallelizable algorithm based on the operator expansion method (for more details, see [6], [2] and the references therein).
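Schematically, and in hypothetical notation (w for the state, q for the auxiliary adjoint variable, on a time interval (0, T) for concreteness; the actual system in [6], [2] is more elaborate), the optimality system delivered by the Pontryagin maximum principle for such linear-quadratic problems has the two-point boundary-value structure

```latex
\begin{aligned}
&\tfrac{1}{c^2}\,w_{tt}-\Delta w = 0 \quad \text{in } (\mathbb{R}^3\setminus\overline{\Omega})\times(0,T)
  &&\text{(state, with prescribed \emph{initial} conditions),}\\
&\tfrac{1}{c^2}\,q_{tt}-\Delta q = 0 \quad \text{in } (\mathbb{R}^3\setminus\overline{\Omega})\times(0,T)
  &&\text{(adjoint, with prescribed \emph{final} conditions),}
\end{aligned}
```

the two equations being coupled only through the boundary conditions on the obstacle, in which the optimal control is expressed in terms of the trace of the adjoint q. Solving this coupled system once replaces the iterative loop of the general optimization approach.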
Some numerical results obtained with this algorithm on simple test problems can be seen in the form of computer animations at the websites: http://www.econ.unian.it/recchioni/w6, http://www.econ.unian.it/recchioni/w8. In the following section, we suggest two problems where it would be interesting to carry out a similar analysis.

3 TWO CONTROL PROBLEMS

Let R be the set of real numbers, let x = (x1, x2, x3)^T ∈ R³ (where the superscript T means transposed) be a generic vector of the three-dimensional real Euclidean space R³, and let (·,·), ‖·‖, and [·,·] denote the Euclidean scalar product, the Euclidean vector norm, and the vector product in R³, respectively. The first problem suggested is a “masking” problem in time-dependent electromagnetic scattering. Let Ω ⊂ R³ be a bounded simply connected open set (i.e., the obstacle) with locally Lipschitz boundary ∂Ω. Let Ω̄ denote the closure of Ω and let n(x) = (n1(x), n2(x), n3(x))^T ∈ R³, x ∈ ∂Ω, be the outward unit normal vector at x for x ∈ ∂Ω. Note that n(x) exists almost everywhere in x for x ∈ ∂Ω. We assume that the obstacle Ω is characterized by an electromagnetic boundary impedance χ > 0. Note that χ = 0 (χ = +∞) corresponds to considering a perfectly conducting (insulating) obstacle. Let R³ \ Ω̄ be filled with a homogeneous isotropic medium characterized by a constant electric permittivity ε > 0, a constant magnetic permeability ν > 0, zero electric conductivity, zero free charge density, and zero free current density. Let (Ei(x, t), Bi(x, t)), (x, t) ∈ R³ × R (where Ei is the electric field and Bi is the magnetic induction field) be the incoming electromagnetic field propagating in the medium filling R³ \ Ω̄ and satisfying the Maxwell equations (1)–(3) in R³ × R. Let (Es(x, t), Bs(x, t)), (x, t) ∈ (R³ \ Ω̄) × R be the electromagnetic field scattered by the obstacle Ω when hit by the incoming field (Ei(x, t), Bi(x, t)), (x, t) ∈ R³ × R.
The scattered electric field Es and the scattered magnetic induction field Bs satisfy the following equations:

curl Es(x, t) + ∂Bs/∂t (x, t) = 0,  (x, t) ∈ (R³ \ Ω̄) × R,      (1)
curl Bs(x, t) − (1/c²) ∂Es/∂t (x, t) = 0,  (x, t) ∈ (R³ \ Ω̄) × R,      (2)
div Bs(x, t) = 0,  (x, t) ∈ (R³ \ Ω̄) × R,      (3)
div Es(x, t) = 0,  (x, t) ∈ (R³ \ Ω̄) × R,      (4)
[n(x), Es(x, t)] − cχ [n(x), [n(x), Bs(x, t)]] = −[n(x), Ei(x, t)] + cχ [n(x), [n(x), Bi(x, t)]],  (x, t) ∈ ∂Ω × R,
Es(x, t) = O(1/r),  [Bs(x, t), x̂] − (1/c) Es(x, t) = o(1/r),  r → +∞, t ∈ R,      (5)

where 0 = (0, 0, 0)^T, c = 1/√(εν), r = ‖x‖, x̂ = x/‖x‖, x ≠ 0, x ∈ R³, O(·) and o(·) are the Landau symbols, and curl · and div · denote the curl and the divergence operators with respect to the x variables, respectively. A classical problem in electromagnetics consists in the recognition of the obstacle Ω through the knowledge of the incoming electromagnetic field and of the scattered field (Es(x, t), Bs(x, t)), (x, t) ∈ (R³ \ Ω̄) × R, solution of (1)–(5). In the above situation, Ω plays a “passive” (“static”) role. We want to make the obstacle Ω “active” (“dynamic”) in the sense that, thanks to a suitable control function chosen in a proper way, the obstacle itself tries to react to the incoming electromagnetic field, producing a scattered field that looks like the field scattered by a preassigned obstacle D (the “mask”) with impedance χ′. We suggest considering the following control problem:

Problem 1: Electromagnetic “Masking” Problem: Given an incoming electromagnetic field (Ei, Bi), an obstacle Ω with its electromagnetic boundary impedance χ, and an obstacle D such that D ⊆ Ω with electromagnetic boundary impedance χ′, choose a vector control function ψ, defined on
the boundary ∂Ω of the obstacle for t ∈ R and appearing in the boundary condition satisfied by the scattered electromagnetic field on ∂Ω, in order to minimize a cost functional that measures the “difference” between the electromagnetic field scattered by Ω, i.e., (Es, Bs), and the electromagnetic field scattered by D, i.e., (Es_D, Bs_D), when Ω and D, respectively, are hit by the incoming field (Ei, Bi), together with the “size” of the vector control function employed.

The control function ψ has the physical dimension of an electric field, and the action of the optimal control electric field on the boundary of the obstacle makes the obstacle “active” (“dynamic”) and able to react to the incident electromagnetic field so as to become “unrecognizable,” that is, “Ω will do its best to appear as its mask D.” The second control problem we suggest is a control problem in fluid dynamics. Let us consider an obstacle Ωt, t ∈ R, that is a rigid body, assumed homogeneous, moving in R³ with velocity υ̃ = υ̃(x, t), (x, t) ∈ Ω̄t × R. Moreover, for t ∈ R the obstacle Ωt ⊂ R³ is a bounded simply connected open set. For t ∈ R let ξ = ξ(t) be the position of the center of mass of the obstacle Ωt. The motion of the obstacle is completely described by the velocity w = w(ξ, t), t ∈ R, of the center of mass of the obstacle (i.e., w = dξ/dt, t ∈ R), the angular velocity ω = ω(ξ, t), t ∈ R, of the obstacle around the instantaneous rotation axis going through the center of mass ξ = ξ(t), t ∈ R, and the necessary initial conditions. Note that the velocities of the points belonging to the obstacle, υ̃(x, t), (x, t) ∈ Ω̄t × R, can be expressed in terms of w(ξ, t) and ω(ξ, t), t ∈ R. Let R³ \ Ω̄t, t ∈ R, be filled with a Newtonian incompressible viscous fluid of viscosity η. We assume that both the density of the fluid and the temperature are constant. For example, Ωt, t ∈ R, can be a submarine or an airfoil immersed in an incompressible viscous fluid.
Let v = (v₁, v₂, v₃)^T and p be the velocity field and the pressure field of the fluid, respectively, let f be the density of the external forces per unit mass acting on the fluid, and let v_{−∞} be an assigned solenoidal vector field. We assume that in the limit t → −∞ the body Ω_t is at rest in the position Ω_{−∞}. Under these assumptions, in the reference frame given by x = (x₁, x₂, x₃)^T the following system of Navier-Stokes equations holds:

∂v/∂t (x, t) + (v(x, t), ∇)v(x, t) − ηΔv(x, t) + ∇p(x, t) = f(x, t),  div v(x, t) = 0,  (x, t) ∈ (R³ \ Ω_t) × R,  (6)

lim_{t→−∞} v(x, t) = v_{−∞}(x),  x ∈ R³ \ Ω_{−∞},  (7)

v(x, t) = υ̃(x, t),  (x, t) ∈ ∂Ω_t × R.  (8)

In (6) we have ∇ = (∂/∂x₁, ∂/∂x₂, ∂/∂x₃)^T and

(v, ∇)v = ( Σ_{j=1}³ v_j ∂v₁/∂x_j,  Σ_{j=1}³ v_j ∂v₂/∂x_j,  Σ_{j=1}³ v_j ∂v₃/∂x_j )^T.

The boundary condition in (8) requires that the fluid velocity v and the velocity υ̃ of the obstacle be equal on the boundary of the obstacle for t ∈ R.

We want to consider the problem associated with the choice of a maneuver w(ξ, t), ω(ξ, t), t ∈ R, connecting two given states, that minimizes the work done by the obstacle Ω_t, t ∈ R, against the fluid in going from the initial state to the final state, together with the "size" of the maneuver employed. Note that in this context a maneuver connecting two given states is made of two functions w(ξ, t), ω(ξ, t), t ∈ R, such that lim_{t→±∞} w(ξ, t) = w_± and lim_{t→±∞} ω(ξ, t) = ω_±, where w_± and ω_± are preassigned. The couple (w_−, ω_−) is the initial state and the couple (w_+, ω_+) is the final state. For simplicity, we have assumed (w_−, ω_−) = (0, 0). We formulate the following problem:

Problem 2: "Drag" Optimization Problem: Given a rigid obstacle Ω_t, t ∈ R, moving in a Newtonian fluid characterized by a viscosity η, the initial condition and the forces acting on the fluid, and given the initial state (0, 0) and the final state (w_+, ω_+), choose a maneuver connecting these two states in order to minimize a cost functional that measures the work that the obstacle Ω_t, t ∈ R, must exert on the fluid to make the maneuver, together with the "size" of the maneuver employed.

From the previous considerations, several problems arise. The first one is connected with the question of formulating Problem 1 and Problem 2 as control problems. In [2] we suggest a possible formulation of a furtivity problem similar to Problem 1 as a control problem.
In particular, the open question that we suggest is how Problem 1 and Problem 2 should be formulated as control problems whose optimal solutions can be determined by solving suitable systems of partial differential equations via an ingenious use of the Pontryagin maximum principle, as done in [2], [6]. The relevance of this formulation lies in the fact that it avoids computationally expensive iterative procedures in solving the control problems considered. Moreover, a second open question is the derivation of closed-loop control laws at an affordable computational cost for the control problems associated with Problem 1 and Problem 2.

Furthermore, many variations of Problems 1 and 2 can be considered. For example, in Problem 1 we have assumed, for simplicity, that the "mask" is a passive obstacle, that is, (E_s^D(x, t), B_s^D(x, t)), (x, t) ∈ (R³ \ D) × R, is the solution of problem (1)-(5) when Ω, χ are replaced with D, χ_D, respectively. In a more general situation the "mask" can also be an active obstacle. Finally, Problems 1 and 2 are examples of control problems for systems governed by the Maxwell equations and the Navier-Stokes equations, respectively. Many other examples relevant in several application fields, involving different systems of partial differential equations, can be considered.

BIBLIOGRAPHY

[1] T. S. Angell, A. Kirsch, and R. E. Kleinman, "Antenna control and optimization," Proceedings of the IEEE, 79, pp. 1559-1568, 1991.

[2] L. Fatone, M. C. Recchioni, and F. Zirilli, "Furtivity and masking problems in time dependent acoustic obstacle scattering," forthcoming in Third ISAAC Congress Proceedings.

[3] J. W. He, R. Glowinski, R. Metcalfe, A. Nordlander, and J. Periaux, "Active control and drag optimization for flow past a circular cylinder," Journal of Computational Physics, 163, pp. 83-117, 2000.

[4] J. E.
Lagnese, "A singular perturbation problem in exact controllability of the Maxwell system," ESAIM Control, Optimisation and Calculus of Variations, 6, pp. 275-289, 2001.

[5] J. Lumley and P. Blossey, "Control of turbulence," Annual Review of Fluid Mechanics, 30, pp. 311-327, 1998.

[6] F. Mariani, M. C. Recchioni, and F. Zirilli, "The use of the Pontryagin maximum principle in a furtivity problem in time-dependent acoustic obstacle scattering," Waves in Random Media, 11, pp. 549-575, 2001.

[7] L. Pontriaguine, V. Boltianski, R. Gamkrélidzé, and E. Michtchenko, Théorie Mathématique des Processus Optimaux, Éditions Mir, Moscow, 1974.

[8] S. S. Sritharan, "An introduction to deterministic and stochastic control of viscous flow," In: Optimal Control of Viscous Flow, S. S. Sritharan, ed., SIAM, Philadelphia, pp. 1-42, 1998.

PART 6

Stability, Stabilization

Problem 6.1
Copositive Lyapunov functions

M. K. Çamlıbel
Department of Mathematics, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands, [email protected]

J. M. Schumacher
Department of Econometrics and Operations Research, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands, [email protected]

1 PRELIMINARIES

The following notational conventions and terminology will be in force. Inequalities for vectors are understood componentwise. Given two matrices M and N with the same number of columns, the notation col(M, N) denotes the matrix obtained by stacking M over N. Let M be a matrix. The submatrix M_JK of M is the matrix whose entries lie in the rows of M indexed by the set J and the columns indexed by the set K. For square matrices M, M_JJ is called a principal submatrix of M. A symmetric matrix M is said to be nonnegative (nonpositive) definite if x^T M x ≥ 0 (x^T M x ≤ 0) for all x. It is said to be positive (negative) definite if, in addition, equality holds only for x = 0. Sometimes we write M > 0 (M ≥ 0) to indicate that M is positive definite (nonnegative definite), respectively. We say that a square matrix M is Hurwitz if its eigenvalues have negative real parts. A pair of matrices (A, C) is observable if the corresponding system ẋ = Ax, y = Cx is observable, equivalently, if col(C, CA, ..., CA^{n−1}) is of rank n, where A is of order n.

2 MOTIVATION

Lyapunov stability theory is one of the evergreen topics in systems and control. For (finite-dimensional) linear systems, the following theorem is very well-known.

Theorem 1 ([3, Theorem 1.2]): The following conditions are equivalent.

1. The system ẋ = Ax is asymptotically stable.
2. The Lyapunov equation A^T P + P A = Q has a positive definite symmetric solution P for any negative definite symmetric matrix Q.

As a refinement, we can replace the last statement by:

2′. The Lyapunov equation A^T P + P A = Q has a positive definite symmetric solution P for any nonpositive definite symmetric matrix Q such that the pair (A, Q) is observable.

An interesting application is to the stability of the so-called switched systems.
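As a quick numerical illustration of Theorem 1 (the matrix below is a hypothetical example, not taken from the text), one can solve the Lyapunov equation with SciPy and verify that the solution is indeed symmetric positive definite when A is Hurwitz:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative Hurwitz matrix A (eigenvalues -1 and -2) and Q = -I < 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = -np.eye(2)

# Solve A^T P + P A = Q.  SciPy solves M X + X M^H = Q, so pass M = A^T.
P = solve_continuous_lyapunov(A.T, Q)

# Theorem 1: since A is Hurwitz and Q is negative definite,
# the solution P must be symmetric positive definite.
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)
```

For this A one finds P = [[1.25, 0.25], [0.25, 0.25]], whose eigenvalues are positive, confirming asymptotic stability.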
Consider the system

ẋ = A_σ x,  (1)

where the switching signal σ : [0, ∞) → {1, 2} is a piecewise constant function. We assume that it has a finite number of discontinuities over finite time intervals, in order to rule out infinitely fast switching. A strong notion of stability for the system (1) is the requirement of stability for arbitrary switching signals. The dynamics of (1) coincides with one of the linear subsystems if the switching signal is constant, i.e., there are no switchings at all. This leads us to an obvious necessary condition: stability of each subsystem. Another extreme case would emerge if there exists a common Lyapunov function for the subsystems. Indeed, such a Lyapunov function would immediately prove the stability of (1). An earlier paper [8] pointed out the importance of commutation relations between A₁ and A₂ in finding a common Lyapunov function. More precisely, it has been shown that if A₁ and A₂ are Hurwitz and commute, then they admit a common Lyapunov function. In [1, 6], the commutation relations of subsystems are studied further in a Lie-algebraic framework, and sufficient conditions for the existence of a common Lyapunov function are presented. Notice that the results of [1] are stronger than those of [6]. However, we prefer to restate [6, Theorem 2] for simplicity.

Theorem 2: If A_i is a Hurwitz matrix for i = 1, 2 and the Lie algebra {A₁, A₂}_LA is solvable, then there exists a positive definite matrix P such that A_i^T P + P A_i < 0 for i = 1, 2.

So far, we have quoted some known results. Our main goal is to pose two open problems that can be viewed as extensions of statement 2′ and Theorem 2 to a class of piecewise linear systems. More precisely, we will consider systems of the form

ẋ = A_i x  for C_i x ≥ 0,  i = 1, 2.  (2)

Here the cones 𝒞_i = {x | C_i x ≥ 0} do not necessarily cover the whole x-space. We assume that

a. there exists a (possibly discontinuous) function f such that (2) can be described by ẋ = f(x) for all x ∈ 𝒞₁ ∪ 𝒞₂, and
b. for each initial state x₀ ∈ 𝒞₁ ∪ 𝒞₂, there exists a unique solution x in the sense of Carathéodory, i.e., x(t) = x₀ + ∫₀ᵗ f(x(τ)) dτ.

A natural example of such piecewise linear dynamics is a linear complementarity system (see [9]) of the form

ẋ = Ax + Bu,  y = Cx + Du,
(u(t) ≥ 0) and (y(t) ≥ 0) and (u(t) = 0 or y(t) = 0) for all t ≥ 0,

where A ∈ R^{n×n}, B ∈ R^{n×1}, C ∈ R^{1×n}, and D ∈ R. If D > 0, this system can be put into the form (2) with A₁ = A, C₁ = C, A₂ = A − BD⁻¹C, and C₂ = −C. Equivalently, it can be described by

ẋ = f(x),  (3)

where f(x) = Ax if Cx ≥ 0 and f(x) = (A − BD⁻¹C)x if Cx ≤ 0. Note that f is Lipschitz continuous, and hence (3) admits a unique (continuously differentiable) solution x for all initial states x₀.

One way of studying the stability of the system (2) is simply to utilize Theorem 2. However, there are some obvious drawbacks:

i. It requires positive definiteness of the common Lyapunov function, whereas positivity on a cone is enough for the system (2).
ii. It considers arbitrary switching signals, whereas the initial state determines the switching signal in (2).

In the next section, we focus on ways of eliminating the conservatism mentioned above.

3 DESCRIPTION OF THE PROBLEMS

First, we need to introduce some nomenclature. A matrix M is said to be copositive (strictly copositive) with respect to a cone 𝒞 if x^T M x ≥ 0 (x^T M x > 0) for all nonzero x ∈ 𝒞. We use the notation M ≽_𝒞 0 and M ≻_𝒞 0, respectively, for copositivity and strict copositivity. When the cone 𝒞 is clear from the context, we just write ≽ or ≻.

The first problem that we propose calls for an extension of statement 2′ to linear dynamics restricted to a cone.

Problem 1: Let a square matrix A and a cone 𝒞 = {x | Cx ≥ 0} be given. Determine necessary and sufficient conditions for the existence of a symmetric matrix P such that P ≻_𝒞 0 and A^T P + P A ≼_𝒞 0.

An immediate necessary condition for the existence of such a matrix P is that the matrix A should not have any eigenvectors in the cone 𝒞 corresponding to its positive eigenvalues.

Once Problem 1 is solved, it would be natural to take a step further by attempting to extend Theorem 2 to the systems (2). In other words, it would be natural to attack the following problem.

Problem 2: Let two square matrices A₁, A₂ and two cones 𝒞₁ = {x | C₁x ≥ 0}, 𝒞₂ = {x | C₂x ≥ 0} be given. Determine sufficient conditions for the existence of a symmetric matrix P such that P ≻_{𝒞_i} 0 and A_i^T P + P A_i ≼_{𝒞_i} 0 for i = 1, 2.

4 ON COPOSITIVE MATRICES

This last section discusses copositive matrices in order to provide a starting point for further investigation of the proposed problems. The class of copositive matrices occurs in optimization theory and particularly in the study of the linear complementarity problem [2]. We quote from [4] the following theorem, which provides a characterization of copositive matrices.

Theorem 3: A symmetric matrix M is (strictly) copositive with respect to the cone {x | x ≥ 0} if and only if no principal submatrix of M has an eigenvector v > 0 with associated eigenvalue λ < 0 (λ ≤ 0).

Since the number of principal submatrices of a matrix of order n is roughly 2ⁿ, this result has a practical disadvantage. In fact, Murty and Kabadi [7] showed that testing for copositivity is NP-complete. An interesting subclass of copositive matrices consists of those that are equal to the sum of a nonnegative definite matrix and a nonnegative matrix. This class of matrices is studied in [5], where a relatively more tractable algorithm has been presented for checking whether a given matrix belongs to the class or not.

BIBLIOGRAPHY

[1] A. A. Agrachev and D. Liberzon, "Lie-algebraic stability criteria for switched systems," SIAM J. Control Optim., 40(1):253-269, 2001.

[2] R. W. Cottle, J.-S. Pang, and R. E. Stone, The Linear Complementarity Problem, Academic Press, Boston, 1992.

[3] Z. Gajić and M. T. J. Qureshi, Lyapunov Matrix Equation in System Stability and Control, volume 195 of Mathematics in Science and Engineering, Academic Press, San Diego, 1995.

[4] W. Kaplan, "A test for copositive matrices," Linear Algebra Appl., 313:203-206, 2000.

[5] W. Kaplan, "A copositivity probe," Linear Algebra Appl., 337:237-251, 2001.

[6] D. Liberzon, J. P. Hespanha, and A. S. Morse, "Stability of switched systems: A Lie-algebraic condition," Systems Control Lett., 37:117-122, 1999.

[7] K. Murty and S. N.
Kabadi, "Some NP-complete problems in quadratic and nonlinear programming," Math. Programming, 39:117-129, 1987.

[8] K. S. Narendra and J. Balakrishnan, "A common Lyapunov function for stable LTI systems with commuting A-matrices," IEEE Trans. Automatic Control, 39:2469-2471, 1994.

[9] A. J. van der Schaft and J. M. Schumacher, "Complementarity modelling of hybrid systems," IEEE Transactions on Automatic Control, 43(4):483-490, 1998.

Problem 6.2
The strong stabilization problem for linear time-varying systems

Avraham Feintuch
Department of Mathematics, Ben-Gurion University of the Negev, Beer-Sheva, Israel, [email protected]

1 DESCRIPTION OF THE PROBLEM

I will formulate the strong stabilization problem in the formalism of the operator theory of systems. In this framework, a linear system is a linear transformation L acting on a Hilbert space H that is equipped with a natural time structure and satisfies the standard physical realizability condition known as causality. To simplify the formulation, we choose H to be the sequence space ℓ²[0, ∞) = {⟨x₀, x₁, ...⟩ : x_i ∈ Cⁿ, Σ_i ‖x_i‖² < ∞} and denote by P_n the truncation projection onto the subspace generated by the vectors {e₀, ..., e_n} of the standard orthonormal basis of H. Causality of L is expressed as P_n L = P_n L P_n for all nonnegative integers n. A linear system L is stable if it is a bounded operator on H.

A fundamental issue studied in both classical and modern control theory is that of internal stabilization of unstable systems by feedback. It is generally acknowledged that the paper of Youla et al. [2] was a landmark event in this study, and in fact the issue of strong stabilization was first raised there. It was quickly seen [5] that while this paper restricted itself to the classical case of rational transfer functions, its ideas lent themselves to abstraction to much more general frameworks. We briefly describe the one relevant to our discussion.

For a linear system L, its graph G(L) = {(x, Lx) : x ∈ D(L)}, where D(L) = {x ∈ H : Lx ∈ H}; G(L) is a subspace of H ⊕ H. The operator [ I C ; L −I ] (block rows separated by semicolons) defined on D(L) ⊕ D(C) is called the feedback system {L, C} with plant L and compensator C, and {L, C} is stable if it has a bounded causal inverse. L is stabilizable if there exists a causal linear system C (not necessarily stable) such that {L, C} is stable. The analogue of the result of Youla et al.
which characterizes all stabilizable linear systems and parametrizes all stabilizers was given by Dale and Smith [4]:

Theorem 1 ([6], p. 103): Suppose L is a linear system and there exist causal stable systems M, N, X, Y, M̂, N̂, X̂, Ŷ such that

(1) G(L) = Ran [ M ; N ] = Ker [ −N̂  M̂ ],
(2) [ Y  X ; −N̂  M̂ ]⁻¹ = [ M  −X̂ ; N  Ŷ ]

(block rows separated by semicolons). Then (1) L is stabilizable; (2) C stabilizes L if and only if

G(C) = Ran [ Ŷ − NQ ; X̂ + MQ ] = Ker [ −(X + QM̂)  (Y − QN̂) ],

where Q varies over all stable linear systems.

The strong stabilization problem is: suppose L is stabilizable; can internal stability be achieved with C itself a stable system? In such a case, L is said to be strongly stabilizable.

Theorem 2 ([6], p. 108): A linear system L with properties (1), (2) of Theorem 1 is stabilized by a stable C if and only if M̂ + N̂C is an invertible operator. Equivalently, a stable C stabilizes L if and only if M + CN is an invertible operator (by an invertible operator we mean one whose inverse is also bounded).

It is not hard to show that in fact the same C works in both cases; i.e., M + CN is invertible if and only if M̂ + N̂C is invertible. So here is the precise mathematical formulation of the problem: given causal stable systems M, N, X, Y such that XM + YN = I, does there exist a causal stable system C such that M + CN is invertible?

2 MOTIVATION AND HISTORY OF THE PROBLEM

The notion of strong internal stabilization was introduced in the classical paper of Youla et al. [2], where it was solved for rational transfer functions. Another formulation was given in [1]. An approach to the classical problem from the point of view described here was first given in [9]. Recently, sufficient conditions for the existence of strongly stabilizing controllers have been formulated from the point of view of H∞ control problems. The latest such effort is [7].
It is of interest to note that our formulation of the strong stabilization problem connects it to an equivalent problem in Banach algebras, the question of 1-stability of a Banach algebra: given a pair of elements {a, b} in a Banach algebra B that satisfies the Bezout identity xa + yb = 1 for some x, y ∈ B, does there exist c ∈ B such that a + cb is a unit? This was shown to be the case for B = H∞ by Treil [8], which proves that every stabilizable scalar time-invariant system is strongly stabilizable over the complex number field. The matrix analogue of Treil's result is not known. It is interesting that the Banach algebra B(H) of all bounded linear operators on a given Hilbert space H is not 1-stable [3]. Our strong stabilization problem is the question of whether nest algebras are 1-stable.

BIBLIOGRAPHY

[1] B. D. O. Anderson, "A note on the Youla-Bongiorno-Lu condition," Automatica 12 (1976), 387-388.

[2] J. J. Bongiorno, C. N. Lu, D. C. Youla, "Single-loop feedback stabilization of linear multivariable plants," Automatica 10 (1974), 159-173.

[3] G. Corach, A. Larotonda, "Stable range in Banach algebras," J. Pure and Applied Alg. 32 (1984), 289-300.

[4] W. Dale, M. Smith, "Stabilizability and existence of system representations for discrete-time, time-varying systems," SIAM J. Cont. and Optim. 31 (1993), 1538-1557.

[5] C. A. Desoer, R. W. Liu, J. Murray, R. Saeks, "Feedback system design: The fractional representation approach to analysis and synthesis," IEEE Trans. Auto. Control AC-25 (1980), 399-412.

[6] A. Feintuch, Robust Control Theory in Hilbert Space, Applied Math. Sciences 130, Springer, 1998.

[7] H. Özbay, M. Zeren, "On the strong stabilization and stable H∞ controller design problems for MIMO systems," Automatica 36 (2000), 1675-1684.

[8] S. Treil, "The stable rank of the algebra H∞ equals 1," J. Funct. Anal. 109 (1992), 130-154.

[9] M. Vidyasagar, Control System Synthesis: A Factorization Approach, M.I.T. Press, 1985.

Problem 6.3
Robustness of transient behavior Diederich Hinrichsen, Elmar Plischke, and Fabian Wirth
Zentrum für Technomathematik, Universität Bremen, 28334 Bremen, Germany, {dh, elmar, fabian}@math.uni-bremen.de

1 DESCRIPTION OF THE PROBLEM

By definition, a system of the form

ẋ(t) = Ax(t),  t ≥ 0,  (1)

(A ∈ K^{n×n}, K = R, C) is exponentially stable if and only if there are constants M ≥ 1, β < 0 such that

‖e^{At}‖ ≤ M e^{βt},  t ≥ 0.  (2)

The respective roles of the two constants in this estimate are quite different. The exponent β < 0 determines the long-term behavior of the system, whereas the factor M ≥ 1 bounds its short-term or transient behavior. In applications, large transients may be unacceptable. This leads us to the following stricter stability concept.

Definition 1: Let M ≥ 1, β < 0. A matrix A ∈ K^{n×n} is called (M, β)-stable if (2) holds.

Here β < 0 and M ≥ 1 can be chosen in such a way that (M, β)-stability guarantees both an acceptable decay rate and an acceptable transient behavior. For any A ∈ K^{n×n}, let γ(A) denote the spectral abscissa of A, i.e., the maximum of the real parts of the eigenvalues of A. It is well-known that γ(A) < 0 implies exponential stability. More precisely, for every β > γ(A) there exists a constant M ≥ 1 such that (2) is satisfied. However, it is unknown how to determine the minimal value of M such that (2) holds for a given β ∈ (γ(A), 0).

Problem 1:
a) Given A ∈ K^{n×n} and β ∈ (γ(A), 0), determine analytically the minimal value M_β(A) of M ≥ 1 for which A is (M, β)-stable.
b) Provide easily computable formulas for upper and lower bounds for M_β(A) and analyze their conservatism.

Associated with this problem is the design problem for linear control systems of the form

ẋ = Ax + Bu,  (3)

where (A, B) ∈ K^{n×n} × K^{n×m}. Assume that a desired transient and stability behavior for the closed loop is prescribed by given constants M ≥ 1, β < 0; then the pair (A, B) is called (M, β)-stabilizable (by state feedback) if there exists an F ∈ K^{m×n} such that A − BF is (M, β)-stable.

Problem 2:
a) Given constants M ≥ 1, β < 0, characterize the set of (M, β)-stabilizable pairs (A, B) ∈ K^{n×n} × K^{n×m}.
b) Provide a method for the computation of (M, β)-stabilizing feedbacks F for (M, β)-stabilizable pairs (A, B).

In order to account for uncertainties in the model, we consider systems described by

ẋ = A_∆ x = (A + D∆E)x,

where A ∈ K^{n×n} is the nominal system matrix, D ∈ K^{n×ℓ} and E ∈ K^{q×n} are given structure matrices, and ∆ ∈ K^{ℓ×q} is an unknown perturbation matrix for which only a bound of the form ‖∆‖ ≤ δ is assumed to be known.

Problem 3:
a) Given A ∈ K^{n×n}, D ∈ K^{n×ℓ}, and E ∈ K^{q×n}, determine analytically the (M, β)-stability radius defined by

r_{(M,β)}(A; D, E) = inf { ‖∆‖ : ∆ ∈ K^{ℓ×q}, ∃ τ > 0 : ‖e^{(A+D∆E)τ}‖ ≥ M e^{βτ} }.  (4)

b) Provide an algorithm for the calculation of this quantity.
c) Determine easily computable upper and lower bounds for r_{(M,β)}(A; D, E).

The two previous problems can be thought of as steps toward the following final problem.

Problem 4: Given a system (A, B) ∈ K^{n×n} × K^{n×m}, a desired transient behavior described by M ≥ 1, β < 0, and matrices D ∈ K^{n×ℓ}, E ∈ K^{q×n} describing the perturbation structure,
a) characterize the constants γ > 0 for which there exists a state feedback matrix F such that

r_{(M,β)}(A − BF; D, E) ≥ γ.  (5)

b) Provide a method for the computation of feedback matrices F such that (5) is satisfied.

2 MOTIVATION AND HISTORY OF THE PROBLEM

Stability and stabilization are fundamental concepts in linear systems theory, and in most design problems exponential stability is the minimal requirement that has to be met. From a practical point of view, however, the transient behavior of a system may be of equal importance and is often one of the criteria that decide on the quality of a controller in applications. As such, the notion of (M, β)-stability is related to such classical design criteria as the "overshoot" of system responses. The question of how far transients move away from the origin is of interest in many situations; for instance, if certain regions of the state space are to be avoided in order to prevent saturation effects. A similar problem occurs if linear design is performed as a local design for a nonlinear system. In this case, large transients may result in a small domain of attraction. For an introduction to the relation of the constant M with estimates of the domain of attraction, we refer to [4, Chapter 5]. The solution of Problem 4, and also of the other problems, would provide a way to design local linear feedbacks with good local estimates for the domain of attraction without having to resort to the knowledge of Lyapunov functions.
While the latter method is excellent if a Lyapunov function is known, it may be quite hard to find one, and if quadratic Lyapunov functions are used, the obtainable estimates may be far from optimal; see Section 3. Apart from these motivations from control, the relation between domains of attraction and the transient behavior of linearizations at fixed points has been an active field in recent years, motivated by problems in mathematical physics, in particular fluid dynamics; see [1, 10] and the references therein. Related problems occur in the study of iterative methods in numerical analysis; see, e.g., [3]. We would like to point out that the problems discussed in this note give pointwise conditions in time for the bounds and are therefore different from criteria that can be formulated via integral constraints on the positive time axis. In the literature, such integral criteria are sometimes also called bounds on the transient behavior; see, e.g., [9], where interesting results are obtained for this case.

Stability radii with respect to asymptotic stability of linear systems were introduced in [5], and there is a considerable body of literature investigating this problem. The questions posed in this note are an extension of the available theory insofar as the transient behavior is neglected in most of the available results on stability radii.

3 AVAILABLE RESULTS

A number of results are available for Problem 1. Estimates of the transient behavior involving either quadratic Lyapunov functions or resolvent inequalities are known, but they can be quite conservative or intractable. Moreover, for many of the available estimates, little is known about their conservatism. The Hille-Yosida theorem [8] provides an equivalent description of (M, β)-stability in terms of the norms of powers of the resolvent of A. Namely, A is (M, β)-stable if and only if for all n ∈ N and all α ∈ R with α > β it holds that

‖(αI − A)^{−n}‖ ≤ M / (α − β)ⁿ.

A characterization of M as the minimal eccentricity of norms that are Lyapunov functions of (1) is presented in [7]. While these conditions are hard to check, there is a classical, easily verifiable, sufficient condition using quadratic Lyapunov functions. Let β ∈ (γ(A), 0); if P > 0 satisfies the Lyapunov inequality

A*P + PA ≤ 2βP < 0

and has condition number κ(P) := ‖P‖ ‖P⁻¹‖ ≤ M², then A is (M, β)-stable. The existence of P > 0 satisfying these conditions may be posed as an LMI problem [2]. However, it can be shown that if β < 0 is given and the spectral bound of A is below β, then this method is necessarily conservative, in the sense that the best bound on M obtainable in this way is strictly larger than the minimal bound. Furthermore, experiments show that the gap between these two bounds can be quite large. In this context, note that the problem cannot be solved by LMI techniques, since the characterization of the optimal M for given β is not an algebraic problem. There is a large number of further upper bounds available for ‖e^{At}‖. These are discussed and compared in detail in [4, 11]; see also the references therein. A number of these bounds are also valid in the infinite-dimensional case.

For Problem 2, sufficient conditions are derived in [7] using quadratic Lyapunov functions and LMI techniques. The existence of a feedback F such that

P(A − BF) + (A − BF)*P ≤ 2βP  and  κ(P) = ‖P‖ ‖P⁻¹‖ ≤ M²,  (6)

or, equivalently, the solvability of the associated LMI problem, is characterized in geometric terms. This, however, only provides a sufficient condition under which Problem 2 can be solved; the LMI problem (6) is far from being equivalent to Problem 2.

Concerning Problem 3, differential Riccati equations were used to derive bounds for the (M, β)-stability radius in [6].
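The quadratic Lyapunov bound and its conservatism are easy to experiment with numerically. The following sketch (the nonnormal matrix and the decay rate are illustrative choices, not from the text) obtains a particular P satisfying A*P + PA ≤ 2βP by solving a Lyapunov equation for the shifted matrix A − βI, takes M = √κ(P), and compares it with a grid estimate of sup_t ‖e^{At}‖e^{−βt}, which is a lower bound for the minimal M:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Illustrative nonnormal Hurwitz matrix; gamma(A) = -1, and beta in (-1, 0).
A = np.array([[-1.0, 5.0],
              [0.0, -1.0]])
beta = -0.5

# Solve (A - beta I)^* P + P (A - beta I) = -I, so that A^* P + P A <= 2 beta P.
P = solve_continuous_lyapunov((A - beta * np.eye(2)).conj().T, -np.eye(2))
M_lyap = np.sqrt(np.linalg.cond(P))  # upper bound: A is (M_lyap, beta)-stable

# Grid estimate of sup_t ||e^{At}|| e^{-beta t}, a lower bound for M_beta(A).
ts = np.linspace(0.0, 30.0, 3000)
M_grid = max(np.linalg.norm(expm(A * t), 2) * np.exp(-beta * t) for t in ts)

# M_grid <= M_beta(A) <= M_lyap; the gap illustrates the conservatism
# of the quadratic Lyapunov bound discussed in the text.
assert 1.0 <= M_grid <= M_lyap + 1e-9
```

For this example the grid estimate is roughly 3.7 while √κ(P) is roughly 10, a concrete instance of the gap mentioned above.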
Suppose there exist positive definite Hermitian matrices P⁰, Q, R of suitable dimensions such that the differential Riccati equation

Ṗ − (A − βI)P − P(A − βI)* − E*QE − PDRD*P = 0,  P(0) = P⁰,  (7)

has a solution on R₊ that satisfies

σ̄(P(t)) / σ̲(P⁰) ≤ M²,  t ≥ 0.  (8)

Then the structured (M, β)-stability radius is at least

r_{(M,β)}(A; D, E) ≥ √( σ̲(Q) σ̲(R) ),  (9)

where σ̄(X) and σ̲(X) denote the largest and smallest singular values of X, respectively. However, it is unknown how to choose the parameters P⁰, Q, R in an optimal way, and it is unknown whether equality can be obtained in (9) by an optimal choice of P⁰, Q, R. To the best of our knowledge, no results are available dealing with Problem 4.

BIBLIOGRAPHY

[1] J. S. Baggett and L. N. Trefethen, "Low-dimensional models of subcritical transition to turbulence," Physics of Fluids 9:1043-1053, 1997.

[2] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, volume 15 of Studies in Applied Mathematics, SIAM, Philadelphia, 1994.

[3] T. Braconnier and F. Chaitin-Chatelin, "Round-off induces a chaotic behavior for eigensolvers applied on highly nonnormal matrices," In: M.-O. Bristeau et al., eds., Computational Science for the 21st Century, Symposium, Tours, France, May 5-7, 1997, John Wiley & Sons, Chichester, pp. 43-52, 1997.

[4] M. I. Gil', Stability of Finite and Infinite Dimensional Systems, Kluwer Academic Publishers, Boston, 1998.

[5] D. Hinrichsen and A. J. Pritchard, "Stability radius for structured perturbations and the algebraic Riccati equation," Systems & Control Letters, 8:105-113, 1986.

[6] D. Hinrichsen, E. Plischke, and A. J. Pritchard, "Liapunov and Riccati equations for practical stability," In: Proc. European Control Conf. ECC-2001, Porto, Portugal (CD-ROM), pp. 2883-2888, 2001.

[7] D. Hinrichsen, E. Plischke, and F. Wirth, "State feedback stabilization with guaranteed transient bounds," In: Proceedings of MTNS-2002, Notre Dame, IN, USA, August 2002.

[8] A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer-Verlag, New York, 1983.

[9] A. Saberi, A. A. Stoorvogel, and P.
Sannuti, Control of Linear Systems with Regulation and Input Constraints, Springer-Verlag, London, 2000.

[10] L. N. Trefethen, "Pseudospectra of linear operators," SIAM Review, 39(3):383-406, 1997.

[11] K. Veselić, "Bounds for exponentially stable semigroups," Lin. Alg. Appl., 358:195-217, 2003.

Problem 6.4
Lie algebras and stability of switched nonlinear systems Daniel Liberzon
Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA, [email protected]

1 PRELIMINARY DESCRIPTION OF THE PROBLEM

Suppose that we are given a family f_p, p ∈ P, of continuously differentiable functions from Rⁿ to Rⁿ, parameterized by some index set P. This gives rise to the switched system

ẋ = f_σ(x),  x ∈ Rⁿ,  (1)

where σ : [0, ∞) → P is a piecewise constant function of time, called a switching signal. Impulse effects (state jumps), infinitely fast switching (chattering), and Zeno behavior are not considered here. We are interested in the following problem: find conditions on the functions f_p, p ∈ P, which guarantee that the switched system (1) is asymptotically stable, uniformly over the set of all possible switching signals. If this property holds, we will refer to the switched system simply as being stable. It is clearly necessary for each of the subsystems ẋ = f_p(x), p ∈ P, to be asymptotically stable—which we henceforth assume—but simple examples show that this condition alone is not sufficient.

The problem posed above naturally arises in the stability analysis of switched systems in which the switching mechanism is either unknown or too complicated to be explicitly taken into account. This problem has attracted considerable attention and has been studied from various angles (see [7] for references). Here we explore a particular research direction, namely, the role of commutation relations among the subsystems being switched. In the following sections, we provide an overview of available results on this topic and delineate the open problem more precisely.

2 AVAILABLE RESULTS: LINEAR SYSTEMS

In this section, we concentrate on the case when the subsystems are linear. This results in the switched linear system

ẋ = A_σ x,  x ∈ Rⁿ.  (2)

We assume throughout that {A_p : p ∈ P} is a compact set of stable matrices.
To understand how commutation relations among the linear subsystems being switched play a role in the stability question for the switched linear system (2), consider first the case when $P$ is a finite set and the matrices commute pairwise: $A_p A_q = A_q A_p$ for all $p, q \in P$. Then it is not hard to show by a direct analysis of the transition matrix that the system (2) is stable. Alternatively, in this case one can construct a quadratic common Lyapunov function for the family of linear subsystems $\dot{x} = A_p x$, $p \in P$ as shown in [10], which is well known to lead to the same conclusion.

A useful object that reveals the nature of commutation relations is the Lie algebra $\mathfrak{g}$ generated by the matrices $A_p$, $p \in P$. This is the smallest linear subspace of $\mathbb{R}^{n \times n}$ that contains these matrices and is closed under the Lie bracket operation $[A, B] := AB - BA$ (see, e.g., [11]). Beyond the commuting case, the natural classes of Lie algebras to study in the present context are nilpotent and solvable ones. A Lie algebra is nilpotent if all Lie brackets of sufficiently high order vanish. Solvable Lie algebras form a larger class of Lie algebras, in which all Lie brackets of sufficiently high order having a certain structure vanish. If $P$ is a finite set and $\mathfrak{g}$ is a nilpotent Lie algebra, then the switched linear system (2) is stable; this was proved in [4] for the discrete-time case. The system (2) is still stable if $\mathfrak{g}$ is solvable and $P$ is not necessarily finite (as long as the compactness assumption made at the beginning of this section holds). The proof of this more general result, given in [6], relies on the facts that matrices in a solvable Lie algebra can be simultaneously put in the triangular form (Lie's Theorem) and that a family of linear systems with stable triangular matrices has a quadratic common Lyapunov function.
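The transition-matrix argument for the commuting case can be illustrated numerically. A minimal sketch (the two commuting matrices and the truncated-series matrix exponential are my own illustrative choices): because the matrices commute, their exponentials commute as well, so the transition matrix over any switching schedule depends only on the total time spent in each mode, and it decays.

```python
import random

# Two commuting stable matrices: both are polynomials in N = [[0,1],[0,0]],
# so A1 A2 = A2 A1 (my own illustrative choice).
A1 = [[-2.0, 1.0], [0.0, -2.0]]
A2 = [[-3.0, 2.0], [0.0, -3.0]]

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(A, c):
    return [[c*A[i][j] for j in range(2)] for i in range(2)]

def expm(A, terms=40):
    # truncated power series for the matrix exponential (fine for small 2x2)
    S = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        T = scale(mul(T, A), 1.0/k)
        S = add(S, T)
    return S

bracket = add(mul(A1, A2), scale(mul(A2, A1), -1.0))  # A1 A2 - A2 A1

# A random switching schedule: (mode, dwell time) pairs.
random.seed(0)
schedule = [(random.choice([0, 1]), random.uniform(0.1, 0.5)) for _ in range(10)]
mats = (A1, A2)

# Transition matrix as the ordered product of mode exponentials...
Phi = [[1.0, 0.0], [0.0, 1.0]]
for p, t in schedule:
    Phi = mul(expm(scale(mats[p], t)), Phi)

# ...equals exp(T1*A1 + T2*A2): only the total dwell times matter.
T1 = sum(t for p, t in schedule if p == 0)
T2 = sum(t for p, t in schedule if p == 1)
Phi_tot = expm(add(scale(A1, T1), scale(A2, T2)))

err = max(abs(Phi[i][j] - Phi_tot[i][j]) for i in range(2) for j in range(2))
print("bracket:", bracket)
print("order-independence error:", err)
```

Since the entries of $e^{T_1 A_1 + T_2 A_2}$ tend to zero as the total time grows, decay is uniform over all switching signals, which is exactly the conclusion obtained from the direct analysis of the transition matrix.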
It was subsequently shown in [1] that the switched linear system (2) is stable if the Lie algebra $\mathfrak{g}$ can be decomposed into a sum of a solvable ideal and a subalgebra with a compact Lie group. Moreover, if $\mathfrak{g}$ fails to satisfy this condition, then it can be generated by families of stable matrices giving rise to stable as well as to unstable switched linear systems, i.e., the Lie algebra alone does not provide enough information to determine whether or not the switched linear system is stable (this is true under the additional technical requirement that $I \in \mathfrak{g}$). By virtue of the above results, one has a complete characterization of all matrix Lie algebras $\mathfrak{g}$ with the property that every set of stable generators for $\mathfrak{g}$ gives rise to a stable switched linear system. The interesting and rather surprising discovery is that this property depends only on the structure of $\mathfrak{g}$ as a Lie algebra, and not on the choice of a particular matrix representation of $\mathfrak{g}$. Namely, Lie algebras with this property are precisely the Lie algebras that admit a decomposition of the kind described earlier. Thus, in the linear case, the extent to which commutation relations can be used to distinguish between stable and unstable switched systems is well understood. Lie-algebraic sufficient conditions for stability are mathematically appealing and easily checkable in terms of the original data (it has to be noted, however, that they are not robust with respect to small perturbations in the data and are therefore highly conservative).

3 OPEN PROBLEM: NONLINEAR SYSTEMS

We shall now turn to the general nonlinear situation described by equation (1). Linearizing the subsystems and applying the results described in the previous section together with Lyapunov's indirect method, it is not hard to obtain Lie-algebraic conditions for local stability of the system (1). This was done in [6, 1].
However, the problem we are posing here is to investigate how the structure of the Lie algebra generated by the original nonlinear vector fields $f_p$, $p \in P$ is related to stability properties of the switched system (1). Taking higher-order terms into account, one may hope to obtain more widely applicable Lie-algebraic stability criteria for switched nonlinear systems. The first step in this direction is the result proved in [8] that if the set $P$ is finite and the vector fields $f_p$, $p \in P$ commute pairwise, in the sense that
$$[f_p, f_q](x) := \frac{\partial f_q(x)}{\partial x} f_p(x) - \frac{\partial f_p(x)}{\partial x} f_q(x) = 0 \qquad \forall x \in \mathbb{R}^n, \ \forall p, q \in P,$$
then the switched system (1) is (globally) stable. In fact, commutativity of the flows is all that is needed, and the continuous differentiability assumption on the vector fields can be relaxed. If the subsystems are exponentially stable, a construction analogous to that of [10] can be applied in this case to obtain a local common Lyapunov function; see [12].

A logical next step is to study switched nonlinear systems with nilpotent or solvable Lie algebras. One approach would be via simultaneous triangularization, as done in the linear case. Nonlinear versions of Lie's Theorem, which provide Lie-algebraic conditions under which a family of nonlinear systems can be simultaneously triangularized, are developed in [3, 5, 9]. However, as demonstrated in [2], the triangular structure alone is not sufficient for stability in the nonlinear context. Additional conditions that can be imposed to guarantee stability are identified in [2], but they are coordinate-dependent and so cannot be formulated in terms of the Lie algebra. Moreover, the results on simultaneous triangularization described in the papers mentioned above require that the Lie algebra have full rank, which is not true in the case of a common equilibrium. Thus an altogether new approach seems to be required.
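The pairwise commutativity condition used in [8] can be checked numerically with finite differences. A small sketch (the vector fields are my own toy choices; the commuting pair is proportional component by component, which forces the bracket to vanish):

```python
# Finite-difference check of the Lie bracket
# [f, g](x) = Dg(x) f(x) - Df(x) g(x), with Jacobians approximated
# by central differences.  Vector fields below are hypothetical examples.
def jac(f, x, h=1e-5):
    n = len(x)
    J = [[0.0]*n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        f_plus, f_minus = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (f_plus[i] - f_minus[i]) / (2.0*h)
    return J

def bracket(f, g, x):
    Df, Dg = jac(f, x), jac(g, x)
    fx, gx = f(x), g(x)
    n = len(x)
    return [sum(Dg[i][j]*fx[j] - Df[i][j]*gx[j] for j in range(n))
            for i in range(n)]

# Commuting pair: asymptotically stable, component-wise proportional fields.
fp = lambda x: [-x[0]**3, -x[1]]
fq = lambda x: [-2.0*x[0]**3, -3.0*x[1]]
# A field that does not commute with fp.
fr = lambda x: [-x[0], -x[1]**3]

x0 = [1.0, 0.5]
b_comm = bracket(fp, fq, x0)   # vanishes: [fp, fq] = 0
b_non = bracket(fp, fr, x0)    # nonzero: analytically (-2*x1^3, 2*x2^3)
print(b_comm, b_non)
```

For the commuting pair the result of [8] applies and the switched system is globally stable; for the pair `(fp, fr)` the bracket is nonzero and the commutation-based criterion gives no conclusion.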
In summary, the main open question is this:

Q: Which structural properties (if any) of the Lie algebra generated by a noncommuting family of asymptotically stable nonlinear vector fields guarantee stability of every corresponding switched system? For example, when does nilpotency or solvability of the Lie algebra imply stability?

To begin answering this question, one may want to first address some special classes of nonlinear systems, such as homogeneous systems or systems with feedback structure. One may also want to restrict attention to finite-dimensional Lie algebras. A more general goal of this paper is to point out the fact that Lie algebras seem to be directly connected to stability of switched systems and, in view of the well-established theory of the former and the high theoretical interest as well as practical importance of the latter, there is a need to develop a better understanding of this connection. It may also be useful to pursue possible relationships with Lie-algebraic results in the controllability literature (see [1] for a brief preliminary discussion on this matter).

BIBLIOGRAPHY

[1] A. A. Agrachev and D. Liberzon, "Lie-algebraic stability criteria for switched systems," SIAM J. Control Optim., 40:253–269, 2001.
[2] D. Angeli and D. Liberzon, "A note on uniform global asymptotic stability of nonlinear switched systems in triangular form," In: Proc. 14th Int. Symp. on Mathematical Theory of Networks and Systems (MTNS), 2000.
[3] P. E. Crouch, "Dynamical realizations of finite Volterra series," SIAM J. Control Optim., 19:177–202, 1981.
[4] L. Gurvits, "Stability of discrete linear inclusion," Linear Algebra Appl., 231:47–85, 1995.
[5] M. Kawski, "Nilpotent Lie algebras of vector fields," J. Reine Angew. Math., 388:1–17, 1988.
[6] D. Liberzon, J. P. Hespanha, and A. S. Morse, "Stability of switched systems: A Lie-algebraic condition," Systems Control Lett., 37:117–122, 1999.
[7] D. Liberzon, Switching in Systems and Control.
Birkhäuser, Boston, 2003.
[8] J. L. Mancilla-Aguilar, "A condition for the stability of switched nonlinear systems," IEEE Trans. Automat. Control, 45:2077–2079, 2000.
[9] A. Marigo, "Constructive necessary and sufficient conditions for strict triangularizability of driftless nonholonomic systems," In: Proc. 38th IEEE Conf. on Decision and Control, pages 2138–2143, 1999.
[10] K. S. Narendra and J. Balakrishnan, "A common Lyapunov function for stable LTI systems with commuting A-matrices," IEEE Trans. Automat. Control, 39:2469–2471, 1994.
[11] H. Samelson, Notes on Lie Algebras, Van Nostrand Reinhold, New York, 1969.
[12] H. Shim, D. J. Noh, and J. H. Seo, "Common Lyapunov function for exponentially stable nonlinear systems," In: Proc. 4th SIAM Conference on Control and its Applications, 1998.

Problem 6.5
Robust stability test for interval fractional order linear systems
Ivo Petráš
Department of Informatics and Process Control
BERG Faculty, Technical University of Košice
B. Němcovej 3, 042 00 Košice
Slovak Republic
[email protected]

YangQuan Chen
Center for Self-Organizing and Intelligent Systems
Dept. of Electrical and Computer Engineering
Utah State University
Logan, UT 84322-4160, USA
[email protected]

Blas M. Vinagre
Dept. of Electronic and Electromechanical Engineering
Industrial Engineering School, University of Extremadura
Avda. de Elvas s/n, 06071 Badajoz
Spain
[email protected]

1 DESCRIPTION OF THE PROBLEM

Recently, a robust stability test procedure was proposed for linear time-invariant fractional order systems (LTI FOS) of commensurate orders with parametric interval uncertainties [6]. The proposed robust stability test method is based on the combination of the argument principle method [2] for LTI FOS and the celebrated Kharitonov's edge theorem. In general, an LTI FOS can be described by the differential equation or the corresponding transfer function of noncommensurate real orders [7] of the following form:
$$G(s) = \frac{b_m s^{\beta_m} + \dots + b_1 s^{\beta_1} + b_0 s^{\beta_0}}{a_n s^{\alpha_n} + \dots + a_1 s^{\alpha_1} + a_0 s^{\alpha_0}} = \frac{Q(s^{\beta_k})}{P(s^{\alpha_k})}, \tag{1}$$
where $\alpha_k$, $\beta_k$ ($k = 0, 1, 2, \dots$) are real numbers and without loss of generality they can be arranged as $\alpha_n > \dots > \alpha_1 > \alpha_0$, $\beta_m > \dots > \beta_1 > \beta_0$. The coefficients $a_k$, $b_k$ ($k = 0, 1, 2, \dots$) are uncertain constants within a known interval.

It is well known that an integer order LTI system is stable if all the roots of the characteristic polynomial $P(s)$ are negative or have negative real parts if they are complex conjugate (e.g., [1]). This means that they are located to the left of the imaginary axis of the complex $s$-plane. When dealing with noncommensurate order systems (or, in general, with fractional order systems), it is important to bear in mind that $P(s^\alpha)$, $\alpha \in \mathbb{R}$, is a multivalued function of $s$, the domain of which can be viewed as a Riemann surface (see, e.g., [4]). A question of a robust stability test procedure, and a proof of its validity for the general type of LTI FOS described by (1), is still open.

2 MOTIVATION AND HISTORY OF THE PROBLEM

For the LTI FOS with no uncertainty, the existing stability test (or check) methods for dynamic systems with integer orders, such as the Routh table technique, cannot be directly applied.
This is due to the fact that the characteristic equation of the LTI FOS is, in general, not a polynomial but a pseudopolynomial function of the fractional powers of $s$. Of course, the characteristic equation being a function of a complex variable, a stability test based on the argument principle can be applied. On the other hand, it has been shown, by several authors and by using several methods, that for the case of LTI FOS of commensurate order, a geometrical method based on the argument of the roots of the characteristic equation (a polynomial in this particular case) can be used for the stability check in the BIBO (bounded-input bounded-output) sense (see, e.g., [3]). In the particular case of commensurate order systems, it holds that $\alpha_k = \alpha k$, $\beta_k = \alpha k$ ($0 < \alpha < 1$), $\forall k \in \mathbb{Z}$, and the transfer function has the following form:
$$G(s) = K_0 \frac{\sum_{k=0}^{M} b_k (s^\alpha)^k}{\sum_{k=0}^{N} a_k (s^\alpha)^k} = K_0 \frac{Q(s^\alpha)}{P(s^\alpha)}. \tag{2}$$
With $N > M$, the function $G(s)$ becomes a proper rational function in the complex variable $s^\alpha$ and can be expanded in partial fractions of the form
$$G(s) = K_0 \sum_{i=1}^{N} \frac{A_i}{s^\alpha + \lambda_i}, \tag{3}$$
where $\lambda_i$ ($i = 1, 2, \dots, N$) are the roots of the polynomial $P(s^\alpha)$, or the system poles, which are assumed to be simple. The stability condition can then be stated as [2, 3]: a commensurate order system described by a rational transfer function (2) is stable if $|\arg(\lambda_i)| > \alpha\frac{\pi}{2}$, with $\lambda_i$ the $i$th root of $P(s^\alpha)$.

For the LTI FOS with commensurate order, where the system poles are in general complex conjugate, the stability condition can be expressed as follows [2, 3]: a commensurate order system described by a rational transfer function $G(\sigma) = \frac{Q(\sigma)}{P(\sigma)}$, where $\sigma = s^\alpha$, $\alpha \in \mathbb{R}^+$ ($0 < \alpha < 1$), is stable if $|\arg(\sigma_i)| > \alpha\frac{\pi}{2}$, with $\sigma_i$ the $i$th root of $P(\sigma)$.

The robust stability test procedure for the LTI FOS of commensurate orders with parametric interval uncertainties can be divided into the following steps:

• step 1: Rewrite the LTI FOS $G(s)$ of the commensurate order $\alpha$ to the equivalent system $H(\sigma)$, where the transformation is $s^\alpha \to \sigma$, $\alpha \in \mathbb{R}^+$;

• step 2: Write the interval polynomial $P(\sigma, q)$ of the equivalent system $H(\sigma)$, where the interval polynomial is defined as
$$P(\sigma, q) = \sum_{i=0}^{n} [q_i^-, q_i^+]\,\sigma^i;$$

• step 3: For the interval polynomial $P(\sigma, q)$, construct the four Kharitonov polynomials: $p^{--}(\sigma)$, $p^{-+}(\sigma)$, $p^{+-}(\sigma)$, $p^{++}(\sigma)$;

• step 4: Test whether the four Kharitonov polynomials satisfy the stability condition $|\arg(\sigma_i)| > \alpha\frac{\pi}{2}$, $\forall \sigma_i \in \mathbb{C}$, with $\sigma_i$ the $i$th root of $P(\sigma)$.

Note that for low-degree polynomials, fewer Kharitonov polynomials have to be tested:

• Degree 5: $p^{--}(\sigma)$, $p^{-+}(\sigma)$, $p^{+-}(\sigma)$;
• Degree 4: $p^{+-}(\sigma)$, $p^{++}(\sigma)$;
• Degree 3: $p^{+-}(\sigma)$.

We demonstrated this technique for the robust stability check for the LTI FOS with parametric interval uncertainties through some worked-out illustrative examples in [6]. In [6] the time-domain analytical expressions are available, and therefore the time-domain and the frequency-domain stability test results (see also [5]) can be cross-validated.

3 AVAILABLE RESULTS

For general LTI FOS, if the coefficients are uncertain but are known to lie within known intervals, how can one generalize the robust stability test result based on Kharitonov's well-known edge theorem? This is definitely a new research topic. The main future research objectives could be:

• A proof of validity of the robust stability test procedure for the LTI FOS of commensurate orders with parametric interval uncertainties.
• An algebraic method and an exact proof for the stability investigation of the LTI FOS of noncommensurate orders with known parameters.
• A robust stability test procedure for LTI FOS of noncommensurate orders with parametric interval uncertainties.

BIBLIOGRAPHY

[1] R. C. Dorf and R. H. Bishop, Modern Control Systems, Addison-Wesley Publishing Company, 1990.
[2] D. Matignon, "Stability result on fractional differential equations with applications to control processing," In: IMACS–SMC Proceedings, July, Lille, France, pp. 963–968, 1996.
[3] D.
Matignon, "Stability properties for generalized fractional differential systems," In: Proceedings of Fractional Differential Systems: Models, Methods and Applications, vol. 5, pp. 145–158, 1998.
[4] D. A. Pierre, "Minimum Mean-Square-Error Design of Distributed Parameter Control Systems," ISA Transactions, vol. 5, pp. 263–271, 1966.
[5] I. Petráš and Ľ. Dorčák, "The Frequency Method for Stability Investigation of Fractional Control Systems," J. of SACTA, vol. 2, no. 1–2, pp. 75–85, 1999.
[6] I. Petráš, Y. Q. Chen, B. M. Vinagre, "A robust stability test procedure for a class of uncertain LTI fractional order systems," In: Proc. of ICCC2002, May 27–30, Beskydy, pp. 247–252, 2002.
[7] I. Podlubny, Fractional Differential Equations, Academic Press, San Diego, 1999.

Problem 6.6
Delay independent and delay dependent Aizerman problem
Vladimir Răsvan
Department of Automatic Control
University of Craiova
13 A. I. Cuza Street
1100 Craiova
Romania
[email protected]

1 INTRODUCTION

The half-century-old problem of Aizerman consists in a comparison of the absolute stability sector with the Hurwitz sector of stability for the linearized system. While the first has been shown to be, generally speaking, smaller than the second one, this comparison still serves as a test for the sharpness of sufficient stability criteria such as the Liapunov function or the Popov inequality. On the other hand, two types of sufficient stability criteria are now very popular for linear time delay systems: delay-independent and delay-dependent. The present paper suggests a comparison of these criteria with the corresponding ones for nonlinear systems with sector restricted nonlinearities. In this way, a problem of Aizerman type is suggested for systems with delay. Some examples are analyzed.

2 A SIMPLE EXAMPLE. STATEMENT OF THE PROBLEM

Consider the simple time delay equation
$$\dot{x} + a_0 x(t) + a_1 x(t - \tau) = 0, \qquad \tau > 0 \tag{1}$$
with $a_0$, $a_1$, $x$ scalars. It is a well-known fact [7, 9, 10] that exponential stability of (1) is ensured provided the following inequalities hold:
$$1 + a_0\tau > 0, \qquad -a_0\tau < a_1\tau < \psi(a_0\tau), \tag{2}$$
where $\psi(\xi)$ is obtained by eliminating the parameter $\lambda$ between the two equalities below:
$$\xi = -\frac{\lambda}{\tan\lambda}, \qquad \psi = \frac{\lambda}{\sin\lambda}. \tag{3}$$
Since these conditions contain the time delay $\tau$, this property is called delay-dependent stability. If one is interested in exponential stability conditions that hold for any delay $\tau > 0$, this property, called delay-independent stability, is ensured provided the simple inequalities
$$a_0 > 0, \qquad |a_1| < a_0 \tag{4}$$
are fulfilled. It can be shown [10] that $\psi(\xi) > \xi$ for $\xi > 0$, hence the fulfilment of (4) implies the fulfilment of (2).
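The inclusion of region (4) in region (2) can be explored numerically. A small sketch (grid and sample point are my own choices) traces the parametric boundary (3) and confirms $\psi(\xi) > \xi$, so that $|a_1|\tau < a_0\tau < \psi(a_0\tau)$ whenever $|a_1| < a_0$:

```python
import math

# Parametric stability boundary from (3):
# xi = -lambda/tan(lambda), psi = lambda/sin(lambda), lambda in (0, pi).
def boundary(lam):
    return -lam/math.tan(lam), lam/math.sin(lam)

lams = [0.001 + k*(math.pi - 0.002)/2000.0 for k in range(2001)]
pts = [boundary(lam) for lam in lams]

# psi(xi) > xi along the whole curve, so the delay-independent strip (4)
# sits strictly inside the delay-dependent region (2).
psi_above = all(psi > xi for xi, psi in pts)

# Sample value: psi at xi = 1 (nearest grid point), i.e., the largest
# admissible a1*tau when a0*tau = 1.
xi1, psi1 = min((p for p in pts if p[0] > 0), key=lambda p: abs(p[0] - 1.0))
print("psi(xi) > xi on the grid:", psi_above)
print("psi(1) is roughly:", psi1)
```

The sample point shows that for $a_0\tau = 1$ the delay-dependent bound admits $a_1\tau$ well beyond the delay-independent bound $|a_1|\tau < 1$.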
Let us follow the way of Barbashin [6] to introduce a stability problem in the nonlinear case: given system (1) for $a_0 > 0$, if we replace $a_0 x$ by $\varphi(x)$ where $\varphi(x)x > 0$, the equilibrium at the origin of the nonlinear time delay system should be globally asymptotically stable provided
$$\frac{\varphi(\sigma)}{\sigma} > |a_1| \tag{5}$$
for the delay-independent stability, or provided
$$\frac{\varphi(\sigma)}{\sigma} > \max\left\{-a_1,\ \frac{1}{\tau}\psi^{-1}(a_1\tau)\right\} \tag{6}$$
in the delay-dependent case. We may view the above problem in a more general setting and state it as follows:

Problem: Given the delay-(in)dependent exponential stability conditions for some time delay linearized system, are they valid in the case when the nonlinear system with a sector restricted nonlinearity, i.e., satisfying
$$\underline{\varphi}\sigma^2 < \varphi(\sigma)\sigma < \overline{\varphi}\sigma^2, \tag{7}$$
is considered instead of the linear one, or do they have to be strengthened?

It is clear that we have gathered here both the delay-independent and delay-dependent cases, thus defining a stability problem in two different cases. This problem is called the Aizerman problem, stated here as the delay-dependent Aizerman problem and the delay-independent Aizerman problem. Since this problem in the ODE (ordinary differential equations) setting is not only well known but also quite well studied, a short state of the art could be useful.

3 THE PROBLEM OF THE ABSOLUTE STABILITY. THE PROBLEMS OF AIZERMAN AND KALMAN

Exactly 60 years ago a paper of B. V. Bulgakov [8] considered, apparently for the first time, a problem of global asymptotic stability for the zero equilibrium of a feedback control system composed of a linear dynamic part and a nonlinear static element:
$$\dot{x} = Ax - b\varphi(c^\top x), \tag{8}$$
where $x$, $b$, $c$ are $n$-dimensional vectors, $A$ is an $n \times n$ matrix, and $\varphi : \mathbb{R} \to \mathbb{R}$ is a continuous function. The only additional assumption about $\varphi$ was its location in some sector as defined by (7), where the inequalities may be non-strict.
In this very first paper, only conditions for the absence of self-sustained oscillations were obtained, but in another, more famous paper of Lurie and Postnikov [17] global asymptotic stability conditions were obtained for a system (8) of 3rd order with $\varphi(\sigma)$ satisfying $\varphi(\sigma)\sigma > 0$, i.e., satisfying (7) with $\underline{\varphi} = 0$, $\overline{\varphi} = +\infty$. The conditions, obtained using a suitably chosen Liapunov function of the form "quadratic form of the state variables plus an integral of the nonlinearity," were in fact valid for the whole class of nonlinear functions defined by $\varphi(\sigma)\sigma > 0$. Later this was called absolute stability, but it is obviously a robust stability problem since it deals with the uncertainty on the nonlinear function defined by (7).

We shall not insist more on this problem and we shall concentrate on another one, connected with it, stated by M. A. Aizerman [1, 2]. This last problem concerns (8) and its linearized version
$$\dot{x} = Ax - bhc^\top x, \tag{9}$$
i.e., system (8) with $\varphi(\sigma) = h\sigma$. It is known that the necessary and sufficient conditions of asymptotic stability for (9) will require $h$ to be restricted to some interval $(\underline{h}, \overline{h})$ called the Hurwitz sector. On the other hand, for system (8) the absolute stability problem is stated: find conditions of global asymptotic stability of the zero equilibrium for all functions satisfying (7). All functions include the linear ones, hence the class of systems defined by (8) is larger than the class of systems defined by (9). Consequently, the sector $(\underline{\varphi}, \overline{\varphi})$ from (7) may be at most as large as the Hurwitz sector $(\underline{h}, \overline{h})$. The Aizerman problem asks simply: do these sectors always coincide? The Aizerman conjecture assumed: yes.

The first counterexample to this conjecture was produced by Krasovskii [16] in the form of a 2nd order system of special form. The most celebrated counterexample is a 3rd order system and belongs to Pliss [21]. Today we know that the conjecture of Aizerman does not hold in general.
Nevertheless, the problem itself stimulated interesting research that could be summarized as seeking necessary and sufficient conditions for absolute stability. A straightforward application of these studies is the checking of the sharpness of "traditional" absolute stability criteria: the Liapunov function and the frequency domain inequality of Popov. In fact, this is nothing more than a comparison of the absolute stability sector with the Hurwitz sector. One can mention here the results of Voronov [26] and his coworkers on what they called "stability in the Hurwitz sector." Other noteworthy results belong to Pyatnitskii, who found necessary and sufficient conditions of absolute stability connected to a special variational problem, and to N. E. Barabanov (e.g., [4]). Among the results of Barabanov we would like to mention those concerned with the so-called Kalman problem and conjecture, topics that deserve some particular attention. In his paper [15] R. E. Kalman replaced the class of nonlinear functions defined by (7) by the class of differentiable functions with slope restrictions:
$$\underline{\gamma} < \varphi'(\sigma) < \overline{\gamma}. \tag{10}$$
The Kalman problem asks: do the intervals $(\underline{\gamma}, \overline{\gamma})$ and $(\underline{h}, \overline{h})$ coincide, the latter being previously defined by the inequalities of Hurwitz? The answer to this question is also negative, but its story is not quite straightforward. A good reference is the paper of Barabanov [3] and we would like to follow some of the presentation there: the only counterexample known up to that paper had been published by Fitts [11], and the authors of a well-known and cited monograph in the field (Narendra and Taylor, [18]) were citing it as a basic argument for the negative answer to the Kalman conjecture. In fact, there was no proof in the paper of Fitts but just a simulation: a specific linear subsystem had been adopted, a specific nonlinearity also, and self-sustained periodic oscillations were computed for various values of a system's parameter.
In his important paper, Barabanov [3] was able to prove rigorously the following:

• the answer to the problem of Kalman is positive for all 3rd order systems; it follows that the system of the Pliss counterexample is absolutely stable within the Hurwitz sector provided the class of the nonlinear functions is defined by (10) instead of (7);

• the counterexample given by Fitts is not correct, at least for some subset of its parameters, as follows by a simple application of the Brockett–Willems frequency domain inequality for absolute stability of systems with slope restricted nonlinearity.

Moreover, the paper of Barabanov provides an algorithm for finding systems with a nontrivial periodic solution; in this way, a procedure is given for constructing counterexamples to the two conjectures discussed above. Obviously, the technique of Barabanov seems an echo of the pioneering paper of Bulgakov [8], but we shall insist no more on this subject.

4 STABILITY AND ABSOLUTE STABILITY OF THE SYSTEMS WITH TIME DELAY

A. We shall consider for simplicity only the case of the systems described by functional differential equations of delayed type (according to the well-known classification of these equations; see, for instance, Bellman and Cooke [7]) and we shall restrict ourselves to the single delay case. In the linear case, the system is described by
$$\dot{x} = A_0 x(t) + A_1 x(t - \tau), \qquad \tau > 0. \tag{11}$$
Exponential stability of this system is ensured by the location in the LHP (left-hand plane) of the roots of the characteristic equation
$$\det\left(\lambda I - A_0 - A_1 e^{-\lambda\tau}\right) = 0, \tag{12}$$
where the LHS (left-hand side) is a quasipolynomial. We have here the Routh–Hurwitz problem for quasipolynomials. This problem has been studied since the first applications of (11); the basic results are to be found in the paper of Pontryagin [22] and in the memoir of Chebotarev and Meiman [9]. A valuable reference is the book of Stepan [25]. From this topic, we shall recall the following.
Starting from their algebraic intuition, Chebotarev and Meiman pointed out that, according to Sturm theory, the Routh–Hurwitz conditions for quasipolynomials have to be expressed as a finite number of inequalities that might be transcendental. The detailed analysis performed in their memoir for the 1st and 2nd degree quasipolynomials showed two types of inequalities: one of them contained only algebraic inequalities, while the other contained also transcendental inequalities; the first ones correspond to stability for arbitrary values of the delay $\tau$, while the second ones put some limitations on the values of $\tau > 0$ for which exponential stability of (11) holds. The system described by (1) and conditions (2), (3), and (4) are good illustrations of this. The aspect is quite transparent in the example analysis performed throughout the author's book [23] as well as throughout the book of Stepan [25].

We may see here the difference operated between what will later be called delay-independent and delay-dependent stability. This difference will become important after the publication of the paper of Hale et al. [13], which will be assimilated by the control community after its incorporation in the 3rd edition of Hale's monograph, authored by Hale and Verduyn Lunel [14]. There are by now dozens of references concerning the delay-dependent and delay-independent Routh–Hurwitz problem for (11); we send the reader to the books of S. I. Niculescu [19, 20] with their rich reference lists. A special case of (2), which is in fact the underlying topic of most references cited in [19, 20], is stability for small delays. As shown in [10], the stability inequalities are given by
$$a_1 + a_0 > 0, \qquad 0 \le \tau < \frac{\arccos(-a_0/a_1)}{\sqrt{a_1^2 - a_0^2}}, \tag{13}$$
provided $a_1 > |a_0|$ (otherwise (4) holds and stability is delay-independent).
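The bound (13) can be cross-checked against the characteristic quasipolynomial of (1): at the critical delay, a purely imaginary root $\lambda = j\omega$ with $\omega = \sqrt{a_1^2 - a_0^2}$ appears. A small sketch (sample coefficients are my own choices):

```python
import math
import cmath

# Small-delay bound from (13), valid for a1 > |a0|.
def tau_max(a0, a1):
    return math.acos(-a0/a1) / math.sqrt(a1*a1 - a0*a0)

a0, a1 = 1.0, 2.0
tau = tau_max(a0, a1)
omega = math.sqrt(a1*a1 - a0*a0)

# At tau = tau_max the characteristic function lambda + a0 + a1*e^(-lambda*tau)
# of (1) has the purely imaginary root j*omega: stability is lost as tau
# crosses the bound.
residual = abs(1j*omega + a0 + a1*cmath.exp(-1j*omega*tau))
print("critical delay:", tau, "residual at the crossing:", residual)

# Classic special case a0 = 0, a1 = 1: xdot(t) = -x(t - tau) is
# exponentially stable iff tau < pi/2, and the formula recovers pi/2.
print("tau_max(0, 1) =", tau_max(0.0, 1.0))
```

The vanishing residual confirms that the right-hand side of (13) is exactly the first delay at which a root of the quasipolynomial reaches the imaginary axis.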
In fact, most recent research defines delay-dependent stability as above, i.e., as preservation of stability for small delays (a better name would be "delay robust stability" since, according to a paper of Jaroslav Kurzweil, "small delays don't matter").

B. Since linear blocks with delay are usual in control, the introduction of systems with sector restricted nonlinearities (7) is only natural. The most suitable references on this problem are the monographs of A. Halanay [12] and of the author [23]. If we restrict ourselves again to the case of delayed type with a single delay, then a model problem could be the system
$$\dot{x} = A_0 x(t) + A_1 x(t - \tau) - b\varphi\left(c_0^\top x(t) + c_1^\top x(t - \tau)\right), \tag{14}$$
where $x$, $b$, $c_0$, $c_1$ are $n$-vectors and $A_0$, $A_1$ are $n \times n$ matrices; the nonlinear function $\varphi(\sigma)$ satisfies the sector condition (7). Following the author's book [23], we shall consider a scalar version of (14):
$$\dot{x} + a_0 x(t) + \varphi\left(x(t) + c_1 x(t - \tau)\right) = 0, \tag{15}$$
where $\varphi(\sigma)\sigma > 0$. Assume that $a_0 > 0$ and apply the frequency domain inequality of Popov for $\overline{\varphi} = +\infty$:
$$\mathrm{Re}\,(1 + j\omega\beta)H(j\omega) > 0, \qquad \forall \omega \ge 0. \tag{16}$$
Since
$$H(s) = \frac{1 + c_1 e^{-\tau s}}{s + a_0},$$
the frequency domain inequality reads
$$\frac{(a_0 + \omega^2\beta)(1 + c_1\cos\omega\tau) + \omega(a_0\beta - 1)\,c_1\sin\omega\tau}{a_0^2 + \omega^2} > 0.$$
By choosing the Popov parameter $\beta = a_0^{-1}$, the above inequality becomes
$$1 + c_1\cos\omega\tau > 0, \qquad \forall \omega \ge 0, \tag{17}$$
which cannot hold for all $\omega$ unless $|c_1| < 1$. The frequency domain inequality of Popov prescribes in this case a delay-independent absolute stability.

5 BACK TO THE EXAMPLE

We have stated a delay-independent and a delay-dependent Aizerman problem for systems with time delay in a rather general setting that could include rather general systems of differential equations with deviated argument, while we chose the starting system as a very simple one, of the delayed type. In the following, we shall illustrate the solving of a specific problem for the initial example.
Consider, for instance, the delay-independent Aizerman problem defined above, for system (1) replaced by
$$\dot{x} + a_1 x(t - \tau) + \varphi(x(t)) = 0, \tag{18}$$
where $\varphi(\sigma)\sigma > 0$. Taking into account that (4) suggests
$$\frac{\varphi(\sigma)}{\sigma} > |a_1|,$$
we introduce a new nonlinear function
$$f(\sigma) = \varphi(\sigma) - |a_1|\sigma$$
and obtain the transformed system (via a sector rotation):
$$\dot{x} + |a_1|\,x(t) + a_1 x(t - \tau) + f(x(t)) = 0. \tag{19}$$
For this system, we apply the frequency domain inequality of Popov for $\overline{\varphi} = +\infty$, i.e., inequality (16); here
$$H(s) = \frac{1}{s + |a_1| + a_1 e^{-s\tau}} \tag{20}$$
and the frequency domain inequality reduces to
$$\beta\omega^2 - (\beta a_1\sin\omega\tau)\,\omega + |a_1| + a_1\cos\omega\tau \ge 0, \tag{21}$$
which is fulfilled provided the free Popov parameter $\beta$ is chosen such that
$$0 < \beta|a_1| < 2 \tag{22}$$
(more details concerning the manipulation of the frequency domain inequality for time delay systems may be found in the author's book [23]). It follows that (19) is absolutely stable for the nonlinearities satisfying $f(\sigma)\sigma > 0$, i.e., $\varphi(\sigma)\sigma > |a_1|\sigma^2$: the just stated delay-independent Aizerman problem for (1) and (18) has been answered positively.

6 CONCLUDING REMARKS

Since the class of systems with time delays is considerably larger than the class of systems described by ordinary differential equations, we expect various settings of Aizerman (or Kalman) problems. The case of the equations of neutral type that express propagation phenomena has not yet been analyzed from this point of view, even if absolute stability has been considered for such systems (see the author's book [23]). Such a variety of systems and problems should be stimulating for the development of the tools of analysis.
It is a known fact that the frequency domain inequalities are better suited for delay-independent results, as are the widely used Liapunov–Krasovskii functionals leading to finite-dimensional LMIs (see, e.g., the cited books of Niculescu [19, 20]); the Liapunov–Krasovskii approach has nevertheless some "opening" to delay-dependent results, and it is worth trying to apply it in solving the delay-dependent Aizerman problem. The algebraic approach suggested by the memoir of Chebotarev and Meiman [9] could also be applied, as well as the (non)existence of self-sustained oscillations, which sends us back to Bulgakov and Pliss. As in the case without delay, the statement and solving of the Aizerman problems could be rewarding from at least two points of view: extension of the class of systems having an "almost linear behavior" [5, 24] and refinement of analysis tools by testing the "sharpness" of the sufficient conditions.

BIBLIOGRAPHY

[1] M. A. Aizerman, "On convergence of the control process under large deviations of the initial conditions," (in Russian) Avtom. i telemekh., vol. VII, no. 2–3, pp. 148–167, 1946.
[2] M. A. Aizerman, "On a problem concerning stability 'in the large' of dynamical systems," (in Russian) Usp. Mat. Nauk, vol. 4, no. 4, pp. 187–188, 1949.
[3] N. E. Barabanov, "About the problem of Kalman," (in Russian) Sib. Mat. J., vol. XXIX, no. 3, pp. 3–11, 1988.
[4] N. E. Barabanov, "On the problem of Aizerman for nonstationary systems of 3rd order," (in Russian) Differ. uravn., vol. 29, no. 10, pp. 1659–1668, 1992.
[5] I. Barbălat and A. Halanay, "Conditions de comportement presque linéaire dans la théorie des oscillations," Rev. Roum. Sci. Techn.–Electrotechn. et Energ., vol. 29, no. 2, pp. 321–341, 1974.
[6] E. A. Barbashin, Introduction to Stability Theory (in Russian), Nauka Publ. House, Moscow, 1967.
[7] R. E. Bellman and K. L. Cooke, Differential Difference Equations, Acad. Press, N.Y., 1963.
[8] B. V.
Bulgakov, "Self-sustained oscillations of control systems" (in Russian), DAN SSSR, vol. 37, no. 9, pp. 283–287, 1942.

[9] N. G. Chebotarev and N. N. Meiman, "The Routh–Hurwitz problem for polynomials and entire functions" (in Russian), Trudy Mat. Inst. V. A. Steklov, vol. XXVI, 1949.

[10] L. E. El'sgol'ts and S. B. Norkin, Introduction to the Theory and Applications of Differential Equations with Deviating Arguments (in Russian), Nauka Publ. House, Moscow, 1971; English version by Acad. Press, 1973.

[11] R. E. Fitts, "Two counterexamples to Aizerman's conjecture," IEEE Trans., vol. AC-11, no. 3, pp. 553–556, July 1966.

[12] A. Halanay, Differential Equations: Stability, Oscillations, Time Lags, Acad. Press, N.Y., 1966.

[13] J. K. Hale, E. F. Infante and F. S. P. Tsen, "Stability in linear delay equations," J. Math. Anal. Appl., vol. 105, pp. 533–555, 1985.

[14] J. K. Hale and S. Verduyn Lunel, Introduction to Functional Differential Equations, Springer Verlag, 1993.

[15] R. E. Kalman, "Physical and mathematical mechanisms of instability in nonlinear automatic control systems," Trans. ASME, vol. 79, no. 3, pp. 553–566, April 1957.

[16] N. N. Krasovskii, "Theorems concerning stability of motions determined by a system of two equations" (in Russian), Prikl. Mat. Mekh. (PMM), vol. XVI, no. 5, pp. 547–554, 1952.

[17] A. I. Lurie and V. N. Postnikov, "On the theory of stability for control systems" (in Russian), Prikl. Mat. Mekh. (PMM), vol. VIII, no. 3, pp. 246–248, 1944.

[18] K. S. Narendra and J. H. Taylor, Frequency Domain Stability Criteria, Acad. Press, N.Y., 1973.

[19] S. I. Niculescu, Systèmes à retard, Diderot, Paris, 1997.

[20] S. I. Niculescu, "Delay effects on stability: A robust control approach," Lecture Notes in Control and Information Sciences, no. 269, Springer Verlag, 2001.

[21] V. A. Pliss, Some Problems of the Theory of Stability of Motion in the Large (in Russian), Leningrad State Univ. Publ. House, Leningrad, 1958.

[22] L. S.
Pontryagin, "On zeros of some elementary transcendental functions" (in Russian), Izv. AN SSSR Ser. Matem., vol. 6, no. 3, pp. 115–134, 1942; with an Appendix published in DAN SSSR, vol. 91, no. 6, pp. 1279–1280, 1953.

[23] Vl. Răsvan, Absolute Stability of Automatic Control Systems with Time Delay (in Romanian), Editura Academiei, Bucharest, 1975; Russian version by Nauka Publ. House, Moscow, 1983.

[24] Vl. Răsvan, "Almost linear behavior in systems with sector restricted nonlinearities," Proc. Romanian Academy Series A: Math., Phys., Techn. Sci., Inform. Sci., vol. 2, no. 3, pp. 127–135, 2002.

[25] G. Stepan, "Retarded dynamical systems: stability and characteristic function," Pitman Res. Notes in Math., vol. 210, Longman Scientific and Technical, 1989.

[26] A. A. Voronov, Stability, Controllability, Observability (in Russian), Nauka Publ. House, Moscow, 1979.

Problem 6.7
Open problems in control of linear discrete multidimensional systems Li Xu
Dept. of Electronics and Information Systems
Akita Prefectural University
Honjo, Akita 015-0055
Japan
[email protected]

Zhiping Lin
School of EEE
Nanyang Technological University
Singapore 639798
Republic of Singapore
[email protected]

Jiang-Qian Ying
Faculty of Regional Studies
Gifu University
1-1 Yanagido, Gifu 501-1193
Japan
[email protected]

Osami Saito
Faculty of Engineering
Chiba University
Inage-ku, Chiba 263-8522
Japan

Yoshihisa Anazawa
Dept. of Electronics and Information Systems
Akita Prefectural University
Honjo, Akita 015-0055
Japan

1 INTRODUCTION

This chapter summarizes several open problems closely related to the following control problems in linear discrete multidimensional (nD, n ≥ 2) systems:

• output feedback stabilizability and stabilization,
• strong stabilizability and stabilization, or, equivalently, simultaneous stabilizability and stabilization of two given nD systems,
• regulation and tracking control.

Though some of the open problems presented here have been scattered in the literature (see, e.g., [7, 13, 24, 26] and the references therein), it seems that they have not received sufficient attention, and were even occasionally mistaken as known results. The purpose of this chapter is twofold: first, to clear up such confusions and to call for more efforts toward the solution of these existing open problems; and second, to propose some related new open problems.

2 DESCRIPTION OF THE PROBLEMS

Let R[z], where z = (z_1, . . . , z_n), be the set of nD polynomials in the variables z_1, . . . , z_n with coefficients in the field of real numbers R; R(z) the set of nD rational functions over R; R_s[z], R_s(z) the sets of (structurally) stable nD polynomials and rational functions, respectively, i.e., nD polynomials having no zeros in Ū^n = {z ∈ C^n : |z_1| ≤ 1, . . . , |z_n| ≤ 1} and nD rational functions whose denominators belong to R_s[z]. Similarly, let C[z] be the set of nD polynomials over the field of complex numbers C, etc.

Problem 1: Let a_1(z), . . . , a_M(z) ∈ R[z] be given. Let I denote the ideal generated by a_1(z), . . . , a_M(z), and V(I) the algebraic variety of I, i.e., V(I) = {z ∈ C^n : a_i(z) = 0, i = 1, . . . , M}. Suppose that V(I) ∩ Ū^n = ∅. Find a constructive method to obtain h_1(z), . . . , h_M(z) ∈ R[z] such that

a_1(z)h_1(z) + · · · + a_M(z)h_M(z) ≠ 0 in Ū^n    (1)

or, equivalently, to obtain h̃_1(z), . . .
, h̃_M(z) ∈ R_s(z) such that

a_1(z)h̃_1(z) + · · · + a_M(z)h̃_M(z) = 1    (2)

This problem can be reduced to Problem 1′, in the sense that once the following problem is solved, Problem 1 can be solved easily using the Gröbner basis approach [6, 11, 23].

Problem 1′: Under the assumption that V(I) ∩ Ū^n = ∅, find a constructive method to obtain a polynomial s(z) such that s(z) ∈ R_s[z] and s(z) vanishes on V(I).

Problem 2: Let g(z), a_2(z), . . . , a_M(z) ∈ R[z] be given. Suppose that it is known that there exist h_2(z), . . . , h_M(z) ∈ R_s(z) such that
g(z) + Σ_{i=2}^{M} a_i(z)h_i(z) ≠ 0 in Ū^n.    (3)

Find a constructive method to obtain such h_2(z), . . . , h_M(z).

Problem 3: Let D(z) ∈ R^{m×m}[z], N(z) ∈ R^{m×l}[z] be given. Denote by α_1(z), . . . , α_M(z) the m × m minors of [D(z) N(z)], with M = (m+l)!/(m! l!) (the binomial coefficient) and α_1(z) = det D(z). Suppose that D(z) and N(z) are minor left coprime (MLC), i.e., α_1(z), . . ., α_M(z) have no nonunit common factors over R[z] [30]. Suppose that some h_2(z), . . . , h_M(z) ∈ R_s(z) have been found such that
det D(z) + Σ_{i=2}^{M} α_i(z)h_i(z) ≠ 0 in Ū^n,    (4)

show whether or not there exists a matrix C(z) ∈ R_s^{l×m}(z) such that

det(D(z) + N(z)C(z)) ≠ 0 in Ū^n,    (5)

and further find a constructive method to obtain such a C(z) when its existence is known.

Problem 4: Let D(z) ∈ R^{m×m}[z], N(z) ∈ R^{l×m}[z] be given. Show the condition for the existence of X(z) ∈ R_s^{m×m}(z), Y(z) ∈ R_s^{m×l}(z) such that

D(z)X(z) + Y(z)N(z) = I,    (6)

and further find a constructive method to obtain X(z), Y(z) when the existence is known.

3 MOTIVATIONS

Since the beginning of the 1970s, growing interest has led to a considerable number of contributions to the theory of nD systems. This is, of course, mainly due to the diversity of the actual and potential applications of nD systems theory, embracing nD signal processing, variable-parameter and lumped-distributed network synthesis, delay-differential systems, linear systems of partial difference and differential equations, iterative learning control systems, linear multipass processes, etc. (see, e.g., the books [5, 10], the special issues [2, 3, 15, 18] and the references therein). As is well-known, the generalization of the conventional one-dimensional (1D) systems theory to its nD counterpart is nontrivial because of many deep and substantial differences between the two. Despite the tremendous progress made in the past three decades, there are still many open problems in the area of nD systems, either theoretically challenging or practically important or both, remaining to be tackled. In this chapter, we are mainly concerned with open problems in the area of nD control systems, although some of these problems are also closely related to nD signal processing, as will be discussed shortly.
An nD MIMO (multi-input multi-output) system P(z) ∈ R^{m×l}(z) is said to be output feedback (structurally) stabilizable if there is a compensator C(z) ∈ R^{l×m}(z) such that the closed-loop transfer matrix H(z) defined below is (structurally) stable, i.e., each entry of H(z) is in R_s(z):

H(z) = [ (I + PC)^{−1}     −P(I + CP)^{−1} ]
       [ C(I + PC)^{−1}     (I + CP)^{−1}  ]    (7)

If C(z) itself can be further chosen to be stable, P(z) is said to be strongly stabilizable. It can be shown that two unstable systems can be simultaneously stabilized by a single compensator if a certain system constructed from the two given ones is strongly stabilizable [21, 20].

Consider an nD system given by a left matrix fractional description (MFD) P(z) = D(z)^{−1}N(z) with D(z) ∈ R^{m×m}[z] and N(z) ∈ R^{m×l}[z]. For simplicity, suppose that D(z) and N(z) are MLC. Then, P(z) is stabilizable if and only if

V(I) ∩ Ū^n = ∅    (8)

where V(I) is the algebraic variety of the ideal I generated by the m × m minors α_1(z), . . . , α_M(z) of [D(z) N(z)] as defined in Problem 3 [11, 12, 19, 23]. This condition is equivalent [4] to the existence of h_1(z), . . . , h_M(z) ∈ R[z] such that

Σ_{i=1}^{M} α_i(z)h_i(z) ≠ 0 in Ū^n.    (9)

Further, it has been shown that, once h_1(z), . . . , h_M(z) satisfying (9) have been found, a stabilizing compensator C(z) = Y(z)X(z)^{−1} ∈ R^{l×m}(z) with X(z) ∈ R^{m×m}[z], Y(z) ∈ R^{l×m}[z] can be constructed [12, 23]. Therefore, the stabilizability of a given P(z) is equivalent to condition (8) or the solvability of (9), while the stabilization problem, i.e., the problem of designing a stabilizing compensator, for a stabilizable P(z) is reduced to the problem of constructing h_1(z), . . . , h_M(z) in (9), which is just what has been described in Problem 1. As mentioned previously, Problem 1 can be further reduced to Problem 1′. In addition to the above-mentioned stabilization problem, Problem 1 also plays an essential role in nD signal processing, such as in the design of nD filter banks (see, e.g., [1, 7, 17]).

For the strong stabilizability and stabilization problems, it is further required that C(z) = Y(z)X(z)^{−1} ∈ R_s^{l×m}(z), which is equivalent to requiring that det X(z) ∈ R_s[z] [26]. It is then easy to see [28] that a necessary condition for P(z) to be strongly stabilizable is that there exist e_2(z), . . . , e_M(z) such that
det D(z) + Σ_{i=2}^{M} α_i(z)e_i(z) ≠ 0 in Ū^n    (10)

where the assumption α_1(z) = det D(z) is used. This condition has been shown to be also sufficient for SIMO (single-input multi-output) and MISO (multi-input single-output) nD systems [28], and for these special cases, if e_2(z), . . . , e_M(z) satisfying (10) can be found, a stable stabilizing compensator C(z) can then be constructively obtained. However, the sufficiency of this condition for a general MIMO nD system is still unknown, and the problem of constructing a stable stabilizing compensator for a general MIMO nD system is still open, even if e_2(z), . . . , e_M(z) have been obtained. Problem 2 corresponds to the solution problem of (10), while Problem 3 relates to the strong stabilizability and stabilization of a general MIMO nD system. It is clear that the solution of Problem 2 is assumed to be a precondition for Problem 3.

Another important issue in feedback system design is the tracking and disturbance rejection problem. It can be shown that equation (6) plays a central role in various types of regulation and tracking problems (see, e.g., [21, 22, 25]). So, Problem 4 is related to the solvability and solution of regulation and tracking problems for nD MIMO systems.

4 AVAILABLE RESULTS

Problem 1 and Problem 1′: The test of the solvability condition V(I) ∩ Ū^n = ∅ and the solutions of Problem 1 and Problem 1′ for the case n = 2 can be found in [11, 23] and the references therein. For the case n ≥ 3, if I is zero-dimensional, i.e., V(I) consists of only a finite number of points, the solvability test and solution construction can then be carried out by utilizing the Gröbner basis approach [6, 24]. For some other special cases when n ≥ 3, see [14]. Another solution method has been suggested in [4] using analytic function theory. However, we believe that this method is not constructive.
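The Gröbner-basis step behind the reduction of Problem 1 to Problem 1′ can be illustrated on a toy 2D example (the polynomials here are invented for illustration, not taken from the chapter). Take a_1 = z_1 − 2 and a_2 = z_2 − 2, so V(I) = {(2, 2)} misses the closed unit polydisk; the candidate s = z_1 + z_2 − 4 is structurally stable (|z_1|, |z_2| ≤ 1 forces Re(z_1 + z_2) ≤ 2 < 4), vanishes on V(I), and reduces to remainder 0 modulo a Gröbner basis of the ideal, producing cofactors with a_1 h_1 + a_2 h_2 = s ≠ 0 on Ū²; dividing by s then yields a stable rational solution of (2).

```python
from sympy import symbols, groebner, expand

z1, z2 = symbols('z1 z2')

# Toy generators (illustrative): V(I) = {(2, 2)}, outside the closed polydisk.
a1 = z1 - 2
a2 = z2 - 2

# Candidate stable polynomial vanishing on V(I), as in Problem 1'.
s = z1 + z2 - 4

G = groebner([a1, a2], z1, z2, order='lex')
quotients, remainder = G.reduce(s)

# remainder == 0 certifies that s lies in the ideal <a1, a2>, so
# s = q1*g1 + q2*g2 over the Groebner basis elements g1, g2.
recombined = expand(sum(q * g for q, g in zip(quotients, G.exprs)) + remainder)
print(remainder, recombined - expand(s))
```

For genuinely hard instances (n ≥ 3, positive-dimensional V(I)) no such constructive certificate is known in general, which is exactly the content of the open problem.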
Further, as the determination of whether or not V(I) ∩ Ū^n = ∅ can be formulated as a typical quantifier elimination problem, it may be possible to solve it by cylindrical algebraic decomposition (CAD) techniques developed in the field of computer algebra [8, 9].

Problem 2: Problem 2 is much more complicated and difficult than its 1D counterpart (see, e.g., [21]). To solve this problem, we have to follow two steps: first, to see if there exist h_2(z), . . . , h_M(z) ∈ C_s(z), and then to see if there exist h_2(z), . . . , h_M(z) ∈ R_s(z), such that (3) holds. It is also interesting to note that, in contrast to the 1D case, Problem 2 may possess no solution over R_s(z) even if it has a solution over C_s(z) [26, 28, 29]. Necessary and sufficient conditions for the existence of h_2(z), . . . , h_M(z) ∈ C_s(z) and R_s(z), respectively, have been shown in [26, 28], and can be verified by the CAD-based method [27].

Problem 3: Problem 3 has been considered and solved for the cases m = 1 and l = 1 (i.e., for SIMO and MISO nD systems) [28], and for the case when D(z) and N(z) satisfy certain conditions given in [16].

Problem 4: Necessary and sufficient solvability conditions and constructive solution procedures for the case n = 2 can be found in [22].

BIBLIOGRAPHY

[1] S. Basu, "Multidimensional filter banks and wavelets: A system theoretic perspective," J. Franklin Inst., 335B(8), pp. 1367–1409, 1998.

[2] S. Basu and B. C. Lévy, eds., "Special Issue on Multidimensional Filter Banks and Wavelets," Multidimensional Systems and Signal Processing, 7/8, 1996/7.

[3] S. Basu and M. N. S. Swamy, eds., "Special Issue on Multidimensional Signals and Systems," IEEE Trans. Circuits Syst. I, 49, 2002.

[4] C. A. Berenstein and D. C. Struppa, "1-Inverses for polynomial matrices of non-constant rank," Systems & Control Letters, 6, pp. 309–314, 1986.

[5] N. K. Bose, Applied Multidimensional Systems Theory, New York: Van Nostrand Reinhold, 1982.

[6] B.
Buchberger, "Gröbner bases: An algorithmic method in polynomial ideal theory," In: Multidimensional Systems Theory: Progress, Directions and Open Problems, N. K. Bose, ed., Dordrecht: Reidel, pp. 184–232, 1985.

[7] C. Charoenlarpnopparut and N. K. Bose, "Gröbner bases for problem solving in multidimensional systems," Multidimensional Systems and Signal Processing, 12(3/4), pp. 365–376, 2001.

[8] G. Collins, "Quantifier elimination for real closed fields by cylindrical algebraic decomposition," LNCS, 33, pp. 134–183, 1975.

[9] G. Collins and H. Hong, "Partial cylindrical algebraic decomposition and quantifier elimination," J. Symbolic Computation, 12, pp. 299–328, 1991.

[10] K. Galkowski and J. Wood, eds., Multidimensional Signals, Circuits and Systems, Taylor & Francis, 2001.

[11] J. P. Guiver and N. K. Bose, "Causal and weakly causal 2D filters with applications in stabilizations," In: Multidimensional Systems Theory: Progress, Directions and Open Problems, N. K. Bose, ed., Dordrecht: Reidel, pp. 52–100, 1985.

[12] Z. Lin, "Feedback stabilizability of MIMO nD linear systems," Multidimensional Systems and Signal Processing, 9, pp. 149–172, 1998.

[13] Z. Lin, "Output feedback stabilizability and stabilization of linear nD systems," In: Multidimensional Signals, Circuits and Systems, K. Galkowski and J. Wood, eds., Taylor & Francis, pp. 59–76, 2001.

[14] Z. Lin, J. Lam, K. Galkowski and S. Xu, "A constructive approach to stabilizability and stabilization of a class of nD systems," Multidimensional Systems and Signal Processing, 12(3/4), pp. 329–344, 2001.

[15] Z. Lin and L. Xu, eds., "Special Issue on Applications of Gröbner Bases to Multidimensional Systems and Signal Processing," Multidimensional Systems and Signal Processing, 12, 2001.

[16] Z. Lin, J. Q. Ying and L. Xu, "An algebraic approach to strong stabilizability of linear nD MIMO systems," IEEE Trans. Automat. Contr., 47(9), pp. 1510–1514, 2002.

[17] H. Park, T. Kalker and M.
Vetterli, "Gröbner bases and multidimensional FIR multirate systems," Multidimensional Systems and Signal Processing, 8(1/2), pp. 11–30, 1997.

[18] E. Rogers and P. Rocha, eds., "Recent Progress in Multidimensional Control Theory and Applications," Multidimensional Systems and Signal Processing, 11, 2000.

[19] S. Shankar and V. R. Sule, "Algebraic geometric aspects of feedback stabilization," SIAM J. Contr. Optim., 30, pp. 11–30, 1992.

[20] S. Shankar, "An obstruction to the simultaneous stabilization of two nD plants," Acta Applicandae Mathematicae, 36, pp. 289–301, 1994.

[21] M. Vidyasagar, Control System Synthesis: A Factorization Approach, Cambridge, MA: MIT Press, 1985.

[22] L. Xu, O. Saito and K. Abe, "Bilateral polynomial matrix equations in two indeterminates," Multidimensional Systems and Signal Processing, 1(4), pp. 363–37, 1990.

[23] L. Xu, O. Saito and K. Abe, "Output feedback stabilizability and stabilization algorithms for 2D systems," Multidimensional Systems and Signal Processing, 5(1), pp. 41–60, 1994.

[24] L. Xu, J. Q. Ying and O. Saito, "Feedback stabilization for a class of MIMO nD systems by Gröbner basis approach," In: Abstracts of the First International Workshop on Multidimensional Systems, pp. 88–90, Poland, 1998.

[25] L. Xu, O. Saito and J. Q. Ying, "2D feedback system design: The tracking and disturbance rejection problems," Proceedings of ISCAS'99, V, pp. 13–16, Orlando, USA, 1999.

[26] J. Q. Ying, "Conditions for strong stabilizabilities of n-dimensional systems," Multidimensional Systems and Signal Processing, 9, pp. 125–148, 1998.

[27] J. Q. Ying, L. Xu and Z. Lin, "A computational method for determining strong stabilizability of nD systems," J. Symbolic Computation, 27, pp. 479–499, 1999.

[28] J. Q. Ying, "On the strong stabilizability of MIMO n-dimensional linear systems," SIAM J. Contr. Optim., 38, pp. 313–335, 2000.

[29] J. Q. Ying, Z. Lin and L.
Xu, "Some algebraic aspects of the strong stabilizability of time-delay linear systems," IEEE Trans. Automat. Contr., 46, pp. 454–457, 2001.

[30] D. C. Youla and G. Gnavi, "Notes on n-dimensional system theory," IEEE Trans. Circuits Syst., 26, pp. 105–111, 1979.

Problem 6.8
An open problem in adaptive nonlinear control theory

Leonid S. Zhiteckij
Int. Centre of Inform. Technologies and Systems
Institute of Cybernetics
40 Prospect Akademika Glushkova
MSP 03680 Kiev 187
Ukraine
[email protected]

1 STATEMENT OF THE PROBLEM

We deal with the problem of globally stable adaptive control for discrete-time, time-invariant, nonlinear, but linearly parameterized (LP) systems described by the difference equation

y_t = θ^T φ(x_{t−1}) + b u_{t−1} + v_t,
(1)

where y_t : Z^+ → R and u_t : Z^+ → R are the measurable output and control input, respectively, and v_t : Z^+ → R is the unmeasured disturbance (the integer t denotes the discrete time). θ ∈ R^d and b ∈ R are the unknown parameter vector and scalar (d ≥ 1). φ(·) : R^N → R^d represents a known nonlinear vector function depending on the vector x_{t−1}^T = [y_{t−1}, . . . , y_{t−N}] of N past outputs. Its growth is given by

φ(x) = O(‖x‖^β) as ‖x‖ → ∞.    (2)

Assume that v_t is upper bounded by some finite η, i.e.,

‖v_t‖_∞ ≤ η < ∞,    (3)

where ‖v_t‖_∞ := sup_{0≤t<+∞} |v_t| denotes the l_∞ norm of v_t. To regulate y_t around zero, we choose the well-known certainty equivalence (CE) feedback control law
u_t = −b_t^{−1} θ_t^T φ(x_t),    (4)

where b_t and θ_t are the estimates of the unknown b and θ that are to be updated online by using either gradient or least squares (LS) based algorithms. These classical recursive adaptation algorithms may be written in the general form

θ̄_t = θ̄_{t−1} + α_t Ω_θ(P_t, y_t, φ̄(x_{t−1}))    (5)

P_t = P_{t−1} − Ω_P(P_{t−1}, φ̄(x_{t−1}), α_t)    (6)

with

α_t = Ω_α(y_t, φ̄(x_{t−1})), α_t ≥ 0,    (7)
where θ̄_t^T = [θ_t^T, b_t], φ̄_t^T = [φ_t^T, u_t] are the extended vectors, P_t is a positive definite (d + 1) × (d + 1) matrix, and Ω_θ : R^{(d+1)×(d+1)} × R × R^{d+1} → R^{d+1}, Ω_P : R^{(d+1)×(d+1)} × R^{d+1} × R → R^{(d+1)×(d+1)}, and Ω_α : R × R^d → R.

Definition: System (1) is said to be globally stabilizable if there exists an adaptive feedback control of the form (4)–(7) such that

lim sup_{t→∞} |y_t| < ∞

for any initial x_0 ∈ R^N, θ̄_0 ∈ R^{d+1}, α_0 ∈ R^+, some P_0 > 0, and a given sequence of disturbances {v_t} satisfying (3).

Now, we formulate the problem as follows: determine the triple (Ω_θ, Ω_P, Ω_α) such that the adaptive feedback control (4)–(7) will ensure the global stabilizability of system (1) for the given class of {v_t} ∈ l_∞, provided that φ(x) belongs to a given class of nonlinearities having a growth rate (2) with some β satisfying 1 < β < β*, where β* needs to be evaluated.

The problem thus stated generalizes the problem solved in [1] and [4] to the bounded-disturbance case. This is an open and difficult problem in adaptive control theory. To the best of the author's knowledge, there are no available results solving it for ‖v_t‖_∞ ≠ 0, whereas the solution to its continuous-time counterpart is known.

2 MOTIVATION

In contrast to the adaptive control of nonlinear continuous-time systems, where substantial breakthroughs in the theoretical area had been achieved by the middle of the 1990s (see, e.g., [3], [5], etc.), very few similar works are available in the literature that address globally stable adaptive control design for discrete-time systems with nonlinearities [1], [2], [4], [6]–[8]. One of the inherent difficulties of discrete-time adaptive control is that the Lyapunov stability techniques typically exploited in the continuous-time case may not be straightforwardly extended to the discrete-time counterpart, as detailed in [4], [6], [8].
It has been shown in Section II of [4], and in [8] and [9], that the so-called Key Technical Lemma, which has played a key role in analyzing the adaptation algorithms of type (5)–(7) applied to linear discrete-time systems, can be used to derive the stabilizability properties of adaptive nonlinear LP systems with a nonlinearity whose growth rate (2) is linear (β = 1). Unfortunately, this stability analysis tool is no longer valid if φ(x) has a growth rate faster than linear, i.e., β > 1 (see, e.g., [4], [8]). In such a situation, the following questions naturally arise: Can the linear growth restriction β = 1 be relaxed without leading to instability of the closed loop? What are the limitations of gradient and LS based algorithms? An answer to these questions can be partially found in the recent works [2], [7] dealing with a similar problem in the stochastic framework. Although the results of [2], [7] shed some light on the restrictions that must be imposed on β to achieve global stability, the question of how they might be extended to the nonstochastic case, where {v_t} ∈ l_∞, has not been resolved as yet.

3 RELATED RESULTS

The first step allowing one to relax the linear growth condition with respect to φ(x) was made by Kanellakopoulos [4], who dealt with the scalar one-parameter disturbance-free system of form (1) (N = 1, d = 1, v_t ≡ 0) provided that the gain b is known and equal to 1. In Section III of [8] it has been established that the LS algorithm (5), (6) with the nonlinear gain determined in (7) as α_t = 1 + φ²(x_t) can be used to adaptively stabilize system (1) for any smooth nonlinearity φ(x) : R → R independently of its growth rate. To derive this global stability result, Kanellakopoulos employed the Lyapunov function

V_t = ln(1 + x_t²) + c P_t^{−1} θ̃_t² + P_t²

with some c > 0, where θ̃_t = θ − θ_t is the parameter error vector.
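The tractable linear-growth case (β = 1) mentioned above is easy to reproduce in simulation. The sketch below is a hedged illustration with invented numbers, not the algorithm of [4] or [8]: a scalar plant y_t = θ y_{t−1} + u_{t−1} with known b = 1 and no disturbance, the CE law (4), and a normalized-gradient update of the general type (5)–(7); the regulated output decays toward zero.

```python
# Scalar certainty-equivalence adaptive regulation with phi(x) = x (beta = 1).
# All numerical values are illustrative assumptions, not from the text.
theta_true = 0.8   # unknown plant parameter
theta_hat = 0.0    # initial estimate
y = 1.0            # initial output
ys = [y]

for t in range(60):
    u = -theta_hat * y               # CE control law (4) with b = 1
    y_next = theta_true * y + u      # plant (1) with v_t = 0
    # prediction error and normalized-gradient update (a gradient-type
    # instance of the general scheme (5)-(7))
    e = y_next - theta_hat * y - u
    theta_hat += y * e / (1.0 + y * y)
    y = y_next
    ys.append(y)

print(abs(ys[-1]))  # essentially zero: the output has been regulated
```

With a faster-than-linear φ (say φ(x) = x², β = 2) the same loop can blow up from large initial conditions, which is exactly the regime where the Key Technical Lemma fails and the open problem lives.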
The adaptive control of system (1) with no disturbance and b = 1 has also been studied by Guo and Wei [1]. In contrast to [4], these authors used the standard (α_t ≡ 1) LS based algorithm of form (5), (6). By exploiting a new theoretical tool based on some boundedness properties of {det P_t^{−1}}, they proved that if d = 1, then the closed-loop adaptive system (1), (5)–(7) is globally stable whenever β < 8. It has also been established for the multiparameter case (d > 1) that the global stability condition is βd < 4 (see theorem 3 of [4]). The fundamental limitations of standard LS-based adaptive control applied to system (1) with d = 1 and b = 1 in the presence of stochastic {v_t} have been established by Guo [2], who proved that a globally stabilizing adaptive LS-based controller can be designed if and only if β < 4. Recently, Xie and Guo [7] showed that if d > 1, then the linear growth restriction (β = 1) cannot be essentially relaxed in general to globally stabilize system (1) subjected to a Gaussian white noise {v_t}, unless additional conditions on the number d and the structure of φ(·) are imposed (see Remark 3 of [7]). It seems that a new theoretical tool should be devised to solve the problem formulated above.

BIBLIOGRAPHY

[1] L. Guo and C. Wei, "Global stability/instability of LS-based discrete-time adaptive nonlinear control," In: Proc. 13th IFAC World Congress, San Francisco, CA, USA, Vol. K, pp. 277–282, 1996.

[2] L. Guo, "On critical stability of discrete-time adaptive nonlinear control," IEEE Trans. Automat. Contr., 42, pp. 1488–1499, 1997.

[3] I. Kanellakopoulos, P. V. Kokotovic and A. S. Morse, "Systematic design of adaptive controllers for feedback linearizable systems," IEEE Trans. Automat. Contr., 36, pp. 1242–1253, 1991.

[4] I. Kanellakopoulos, "A discrete-time adaptive nonlinear system," IEEE Trans. Automat. Contr., 39, pp. 2362–2365, 1994.

[5] M. Krstic, I. Kanellakopoulos, and P. V.
Kokotovic, Nonlinear and Adaptive Control Design, N.Y.: Wiley, 1995.

[6] Y. Song and J. W. Grizzle, "Adaptive output-feedback control of a class of discrete-time nonlinear systems," In: Proc. 1993 Amer. Contr. Conf., San Francisco, USA, pp. 1359–1364, 1993.

[7] L. L. Xie and L. Guo, "Fundamental limitations of discrete-time adaptive nonlinear control," IEEE Trans. Automat. Contr., 44, pp. 1777–1782, 1999.

[8] P. C. Yeh and P. V. Kokotovic, "Adaptive control of a class of nonlinear discrete-time systems," Int. J. Control, 62, pp. 303–324, 1995.

[9] L. S. Zhiteckij, "Singularity-free stable adaptive control of a class of nonlinear discrete-time systems," In: Proc. 15th IFAC World Congress, Barcelona, Spain, 2002.

Problem 6.9
Generalized Lyapunov theory and its Ω-transformable regions

Sheng-Guo Wang
College of Engineering
University of North Carolina at Charlotte
Charlotte, NC 28223-0001
USA
[email protected]

1 DESCRIPTION OF THE PROBLEM

The open problem discussed here is a Generalized Lyapunov Theory and its Ω-transformable regions. First, we provide the definition of Ω-transformable regions and their degrees. Then the open problem is presented and discussed.

Definition 1 (Gutman & Jury 1981): A region

Ω_v = {(x, y) | f(λ, λ*) = f(x + jy, x − jy) = f_xy(x, y) < 0}    (1)

is Ω-transformable if any two points α, β ∈ Ω_v imply Re[f(α, β*)] < 0, where the function f(λ, λ*) = f_xy(x, y) = 0 is the boundary function of the region Ω_v and v is the degree of the function f. Otherwise, the region Ω_v is non-Ω-transformable.

It is noticed that a region on one side of a line and a region within a circle in the plane are both Ω-transformable regions. However, some regions are non-Ω-transformable.

Open Problem (Generalized Lyapunov Theory): Consider a matrix A ∈ C^{n×n} and any Ω-transformable region Ω_v described by f_xy(x, y) = f(λ, λ*) < 0 with its boundary equation f_xy(x, y) = f(λ, λ*) = 0, where v (any positive integer) is the degree of the boundary function f and
f(λ, λ*) = Σ_{p+q≤v, p,q=1}^{v} c_pq λ^p λ*^q,  λ = x + jy    (2)

where λ is a point in the complex plane. For the eigenvalues of the matrix A to lie in Ω_v, it is necessary and sufficient that, given any positive definite (p.d.) Hermitian matrix Q, there exists a unique p.d. Hermitian matrix P satisfying the Generalized Lyapunov Equation (GLE)

Σ_{p+q≤v, p,q=1}^{v} c_pq A^p P A^{*q} = −Q    (3)

Strictly speaking, the above open problem concerns Ω-transformable regions with degree v greater than two. However, in order to make the problem more general, we present the generalized Lyapunov equation for any positive integer v.

2 MOTIVATION AND HISTORY OF THE PROBLEM

The Lyapunov theory is well known for Hurwitz stability and Schur stability, i.e., the continuous-time system Lyapunov theory and the discrete-time system Lyapunov theory, respectively. The above generalized Lyapunov theory (GLT) takes both the continuous-time and discrete-time system Lyapunov theories as its special cases. Furthermore, it is well known that the closed-loop system poles determine the system stability and nature, and dominate the system response and performance. Thus, when we consider performance, we need the closed-loop system poles, i.e., the closed-loop system matrix eigenvalues, to lie within a specific region. Various engineering applications and performance requirements require locating the system poles within various specific regions. The GLT provides a necessary and sufficient condition for these problems, as the Lyapunov theory does for stability problems.

Here, let us briefly review the history of the classical Lyapunov theory. Its significance is to provide, via the Lyapunov equation, a necessary and sufficient condition for the matrix eigenvalues to lie in the left half-plane for continuous-time systems.

Lyapunov Theory (continuous-time): For the eigenvalues of matrix A to lie in the left half-plane, i.e., for matrix A to be Hurwitz stable, it is necessary and sufficient that, given any positive definite (p.d.) Hermitian matrix Q, there exists a unique p.d. Hermitian matrix P satisfying the following Lyapunov Equation (LE)

AP + PA* = −Q    (4)

For discrete-time systems, the interest regarding system stability is to check whether the system matrix eigenvalues lie within the unit disk.
The corresponding Lyapunov theory for discrete-time systems is as follows:

Lyapunov Theory (discrete-time): For the eigenvalues of matrix A to lie in the unit disk, i.e., for matrix A to be Schur stable, it is necessary and sufficient that, given any p.d. Hermitian matrix Q, there exists a unique p.d. Hermitian matrix P satisfying the following LE

APA* − P = −Q    (5)

It is clear that the Lyapunov theory for Hurwitz stability and Schur stability is a special case of the Generalized Lyapunov Theory described in the open problem, with the specific Ω-transformable regions of the left half-plane and the unit disk, respectively. The degree v of the left half-plane is one, and the degree v of the unit disk is two. For system performance, we may check the system pole clustering in specific Ω-transformable regions of interest by the GLT. With respect to robust control, we need robust performance in addition to robust stability. Thus, a robust pole clustering, or robust root clustering, or robust Gamma stability, as it is called in the literature (Ackermann, Kaesbauer & Muench 1991, Barmish 1994, Wang & Shieh 1994a,b, Yedavalli 1993, among others), is needed. The approach for robust pole clustering is first to define the region boundary function for the system performance. The Generalized Lyapunov Theory will then be very useful for determining robust pole clustering in general Ω-transformable regions for robust system performance, similar to the way we treat system stability and robust stability via the Lyapunov theory. Also, more general regions may be very interesting in the study of discrete-time systems, where the transient behavior is hard to specify in terms of common simple regions. In other areas, such as multidimensional digital filters and multidimensional systems, the Ω-transformable regions and the related GLT, as well as non-Ω-transformable regions, will be further useful.
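For a real matrix A, the GLE (3) can be assembled as one linear system via Kronecker products: with column-major vec and A* = A^T, vec(A^p P A^{*q}) = (A^q ⊗ A^p) vec(P), so the GLE reads [Σ c_pq (A^q ⊗ A^p)] vec(P) = −vec(Q). The sketch below is a numerical illustration only (not a proof device for the open v ≥ 3 case): it solves the unit-disk instance c_00 = −1, c_11 = 1, i.e., the discrete LE (5), for an invented Schur-stable A and checks that P is positive definite.

```python
import numpy as np

def solve_gle(A, Q, coeffs):
    """Solve sum_{(p,q)} c_pq * A^p P (A^T)^q = -Q for real A, using
    vec(A^p P (A^T)^q) = (A^q kron A^p) vec(P) with column-major vec."""
    n = A.shape[0]
    L = sum(c * np.kron(np.linalg.matrix_power(A, q),
                        np.linalg.matrix_power(A, p))
            for (p, q), c in coeffs.items())
    vecP = np.linalg.solve(L, -Q.flatten(order='F'))
    return vecP.reshape((n, n), order='F')

# Unit disk boundary f = lambda*lambda^* - 1: c_00 = -1, c_11 = 1,
# so the GLE specializes to A P A^T - P = -Q, the discrete LE (5).
A = np.array([[0.5, 0.2],
              [0.0, 0.3]])       # illustrative Schur-stable matrix
Q = np.eye(2)
P = solve_gle(A, Q, {(0, 0): -1.0, (1, 1): 1.0})

residual = A @ P @ A.T - P + Q   # should vanish
print(np.allclose(residual, 0.0), np.linalg.eigvalsh(P).min() > 0)
```

The same routine accepts any coefficient dictionary {(p, q): c_pq}, which is precisely the data of a degree-v boundary function; what is open is whether positive definiteness of P remains equivalent to pole clustering when v ≥ 3.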
Moreover, the GLT is known to be invalid on non-Ω-transformable regions. All these considerations motivate the investigation of the open problem, the GLT on Ω-transformable regions.

3 AVAILABLE RESULTS

This section describes some related available results.

Theorem 1 (GLT: Gutman & Jury 1981): Let A ∈ C^{n×n} and consider any Ω-transformable region Ω_v in Equation (1) with boundary function f, where v = 1, 2 and

f(λ, λ*) = Σ_{p+q≤v; p,q≥0} c_pq λ^p (λ*)^q   (6)

For the eigenvalues of A to lie in Ω_v, it is necessary and sufficient that, given any p.d. Hermitian matrix Q, there exist a unique p.d. Hermitian matrix P satisfying the GLE
Σ_{p+q≤v; p,q≥0} c_pq A^p P (A*)^q = −Q   (7)

Notice that the GLT is proved, and hence valid, for Ω-transformable regions with v = 1, 2 (Gutman & Jury 1981). For Ω-transformable regions with v ≥ 3, the GLT is so far only a conjecture. On the other hand, the GLT is not valid for non-Ω-transformable regions, as pointed out in Gutman & Jury (1981) and Wang (1996); in Wang (1996), a counterexample is given. Furthermore, notice from Gutman & Jury (1981) that the Γ-transformable regions proposed by Kalman (1969) and the Ω-transformable regions do not cover each other. Γ-transformable regions arise originally from a rational mapping of the upper half-plane (UHP) or the left half-plane (LHP) into the unit circle, and are identical to the regions proposed by Hermite (1856) (see Gutman and Jury 1981). Strictly speaking, a region Γ_v is

Γ_v = {(x, y) : |ψ(s)|² − |φ(s)|² < 0, s = x + jy}   (8)

that is, the region mapped from the unit disk {w : |w| < 1} by the rational function w = φ(s)/ψ(s), s = x + jy, with v the degree of the (x, y) polynomial in Equation (8). By applying the GLT, Horng, Horng and Chou (1993) and Yedavalli (1993) discussed robust pole clustering in Ω-transformable regions of degrees one and two. Wang and Shieh (1994a,b), on the other hand, used a Rayleigh-principle approach to analyze robust pole clustering in general Ω regions, described as
Ω_v = {(x, y) : f(λ, λ*) = Σ_{p+q≤v; p,q≥0} c_pq λ^p (λ*)^q < 0, λ = x + jy}   (9)

which they called Hermitian regions or general Ω regions; these include both Ω-transformable and non-Ω-transformable regions, as well as Γ regions. It is well known that if a region is Ω-transformable, its complement is not Ω-transformable. On the other hand, the complement of a Γ-transformable region is again Γ-transformable. The general Ω regions, or Hermitian regions, include all of them. Notice that the general Ω regions (Wang and Shieh 1994a,b) need not satisfy the condition in Definition 1 of Ω-transformable regions. Wang and Yedavalli (1997) discussed eigenvectors and robust pole clustering in general subregions Ω of the complex plane for uncertain matrices. Wang (1999, 2000, 2003) discussed robust pole clustering in a good-ride-quality region of aircraft, a specific non-Ω-transformable region. However, none of this related research has provided a solution to the above open problem, which therefore remains open.

BIBLIOGRAPHY

[1] J. Ackermann, D. Kaesbauer, and R. Muench, "Robust Gamma stability analysis in a plant parameter space," Automatica, 27, pp. 75-85, 1991.

[2] B. R. Barmish, New Tools for Robustness of Linear Systems, Macmillan Publishing Company, New York, 1994.

[3] S. Gutman and E. I. Jury, "A general theory for matrix root-clustering in subregions of the complex plane," IEEE Trans. Automatic Control, 26(4), pp. 853-863, 1981.

[4] C. Hermite, "On the number of roots of an algebraic equation contained between given limits," J. Reine Angew. Math., 52, pp. 39-51, 1856; also in Int. J. Control, 26, pp. 183-195, 1977.

[5] I. R. Horng, H. Y. Horng, and J. H. Chou, "Eigenvalue clustering in subregions of the complex plane for interval dynamic systems," Int. J. Systems Science, 24, pp. 901-914, 1993.

[6] R. E. Kalman, "Algebraic characterization of polynomials whose zeros lie in a certain algebraic domain," Proc. Nat. Acad.
Sci., 64, pp. 818-823, 1969.

[7] S. G. Wang, "Comments on perturbation bounds for root-clustering of linear systems in a specified second order subregion," IEEE Trans. Automatic Control, 41, pp. 766-767, 1996.

[8] S. G. Wang, "Robust pole clustering in a good ride quality region of aircraft for structured uncertain matrices," Proc. 14th World Congress of IFAC, Vol. P, pp. 277-282, Beijing, China, 1999.

[9] S. G. Wang, "Analysis of robust pole clustering in a good ride quality region for uncertain matrices," Proc. 39th IEEE CDC, pp. 4203-4208, Sydney, Australia, 2000.

[10] S. G. Wang, "Robust pole clustering in a good ride quality region of aircraft for matrices with structured uncertainties," Automatica, 39(3), pp. 525-532, 2003.

[11] S. G. Wang and L. Shieh, "Robustness of linear quadratic regulators with regional-pole constraints for uncertain linear systems," Int. J. Control Theory & Advanced Technology, 10(4), pp. 737-769, 1994a.

[12] S. G. Wang and L. Shieh, "A general theory for analysis and design of robust pole clustering in subregions of the complex plane," Proc. 1994 American Control Conf., pp. 627-631, Baltimore, USA, 1994b.

[13] S. G. Wang and R. Yedavalli, "Eigenvectors and robust pole clustering in general subregions of complex plane for uncertain matrices," Proc. 36th IEEE Conf. Decision and Control, pp. 2121-2126, San Diego, CA, 1997.

[14] R. K. Yedavalli, "Robust root clustering for linear uncertain systems using generalized Lyapunov theory," Automatica, 29(1), pp. 237-240, 1993.

Problem 6.10
Smooth Lyapunov characterization of measurement to error stability

Brian P. Ingalls
Control and Dynamical Systems
California Institute of Technology
Pasadena, CA 91125
USA
[email protected]

Eduardo D. Sontag
Department of Mathematics
Rutgers University
Piscataway, NJ 08854
USA
[email protected]

1 DESCRIPTION OF THE PROBLEM

Consider the system

ẋ(t) = f(x(t), u(t))

with two output maps

y(t) = h(x(t)),   w(t) = g(x(t)),   (1)

with states x(t) ∈ R^n and controls u measurable essentially bounded functions into R^m. Assume that the function f : R^n × R^m → R^n is locally Lipschitz and that the system is forward complete. Assume that the output maps h : R^n → R^{p_y} and g : R^n → R^{p_w} are locally Lipschitz.

The Euclidean norm in a space R^k is denoted simply by |·|. If z is a function defined on a real interval containing [0, t], then ||z||_[0,t] is the sup norm of the restriction of z to [0, t], that is, ||z||_[0,t] = ess sup {|z(s)| : s ∈ [0, t]}.

A function γ : R≥0 → R≥0 is of class K (denoted γ ∈ K) if it is continuous, positive definite, and strictly increasing; it is of class K∞ if in addition it is unbounded. A function β : R≥0 × R≥0 → R≥0 is of class KL if for each fixed t ≥ 0, β(·, t) is of class K and for each fixed s ≥ 0, β(s, t) decreases to zero as t → ∞.

The following definitions are given for a forward complete system with two output channels as in (1). The outputs y and w are considered as error and measurement signals, respectively.

Definition: We say that system (1) is input-measurement to error stable (IMES) if there exist β ∈ KL and γ1, γ2 ∈ K so that

|y(t)| ≤ max{β(|x(0)|, t), γ1(||w||_[0,t]), γ2(||u||_[0,t])}

for each solution of (1) and all t ≥ 0.

Open Problem: Find a (if possible, smooth) Lyapunov characterization of the IMES property.

2 MOTIVATION AND HISTORY OF THE PROBLEM

The input-measurement to error stability property is a generalization of input to state stability (ISS). Since its introduction in [11], the ISS property has been extended in a number of ways. One of these is to a notion of output stability, input to output stability (IOS), in which the magnitude of an output signal is asymptotically bounded by the input. Another is to a detectability notion, input-output to state stability (IOSS), in which the size of the state is asymptotically bounded by the input and the output.

In these two concepts, the outputs play distinct roles. In IOS the output is to be kept small, e.g., an error. In IOSS the output provides information about the size of the state, e.g., a measurement. This leads one to consider a system with two output channels: an error and a measurement. The notions of IOS and IOSS can be combined to yield IMES, a property in which the error is asymptotically bounded by the input and a measurement. This partial detectability notion is a direct generalization of IOS and IOSS (and ISS). It constitutes the key concept needed in order to fully extend regulator theory to a global nonlinear context, and was introduced in [12], where it was called "input measurement to output stability" (IMOS).

One of the most useful results on ISS is its characterization in terms of the existence of an appropriate smooth Lyapunov function [13]. As the IOS and IOSS properties were introduced, they too were characterized in terms of Lyapunov functions (in [16, 17] and [7, 14, 15], respectively). A Lyapunov characterization of IMES would include both of these results, as well as the original characterization of ISS. For applications of Lyapunov functions to ISS and related properties, see, for instance, [1, 4, 5, 6, 8, 9, 10].
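Since ISS is recovered from IMES by dropping the measurement channel (g ≡ 0), the estimate can be illustrated on a scalar example. The system, the comparison functions β(s, t) = 2s e^{−t} and γ2(s) = 2s, and all numbers below are our own illustrative choices, not from the text.

```python
# Illustrative sketch: for the scalar system x' = -x + u with error output
# y = x and no measurement (g = 0), the IMES estimate holds with
# beta(s, t) = 2 s e^{-t} and gamma2(s) = 2 s.  We verify the bound along
# a forward-Euler simulation.
import numpy as np

dt, T = 1e-3, 10.0
t = np.arange(0.0, T, dt)
u = np.sin(t)                      # a bounded measurable input

x = np.empty_like(t)
x[0] = 5.0                         # initial state
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (-x[k] + u[k])   # forward Euler step

y = x                                        # error output y = h(x) = x
u_sup = np.maximum.accumulate(np.abs(u))     # running sup norm ||u||_[0,t]
bound = np.maximum(2 * abs(x[0]) * np.exp(-t), 2 * u_sup)

assert np.all(np.abs(y) <= bound)   # the IMES/ISS estimate holds pointwise
```

The factor 2 in β and γ2 converts the additive estimate |x(t)| ≤ e^{−t}|x(0)| + ||u||_[0,t] into the "max" form used in the definition.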
3 AVAILABLE RESULTS

In an attempt to determine a Lyapunov characterization for IMES, one might hope to fashion a proof along the same lines as that of the IOSS characterization given in [7]. Such an attempt has been made, with preliminary results reported in [3]. In that paper, the MES property (i.e., IMES for a system with no input) is addressed. The relation between MES and a secondary property, stability in three measures (SIT), is described, and the following (discontinuous) Lyapunov characterization of SIT is given.

Definition: We say that system (1) is measurement to error stable (MES) if there exist β ∈ KL and γ1 ∈ K so that

|y(t)| ≤ max{β(|x(0)|, t), γ1(||w||_[0,t])}

for each solution of (1) and all t ≥ 0.

Definition: Let ρ ∈ K. We say that system (1) satisfies the stability in three measures (SIT) property (with gain ρ) if there exists β ∈ KL so that for any solution of (1), if there exists t1 > 0 so that |y(t)| > ρ(|w(t)|) for all t ∈ [0, t1], then

|y(t)| ≤ β(|x(0)|, t)   ∀t ∈ [0, t1].

The MES property implies the SIT property. The converse does not hold in general, but is true under additional assumptions on the system.

Definition: Let ρ ∈ K. We say that a lower semicontinuous function V : R^n → R≥0 is a lower semicontinuous SIT-Lyapunov function for system (1) with gain ρ if

• there exist α1, α2 ∈ K∞ so that

α1(|h(ξ)|) ≤ V(ξ) ≤ α2(|ξ|),   ∀ξ so that |h(ξ)| > ρ(|g(ξ)|),

• there exists α3 : R≥0 → R≥0 continuous and positive definite so that for each ξ with |h(ξ)| > ρ(|g(ξ)|),

ζ · v ≤ −α3(V(ξ))   ∀ζ ∈ ∂_D V(ξ), ∀v ∈ F(ξ).   (2)

(Here ∂_D denotes the viscosity subgradient.)

Theorem: Let a system of the form (1) and a function ρ ∈ K be given. The following are equivalent:

i. The system satisfies the SIT property with gain ρ.

ii. The system admits a lower semicontinuous SIT-Lyapunov function with gain ρ.

iii. The system admits a lower semicontinuous exponential-decay SIT-Lyapunov function with gain ρ.

Further details are available in [3] and [2].

BIBLIOGRAPHY

[1] R. A. Freeman and P. V. Kokotović, Robust Nonlinear Control Design: State-Space and Lyapunov Techniques, Birkhäuser, Boston, 1996.

[2] B. Ingalls, Comparisons of Notions of Stability for Nonlinear Control Systems with Outputs, Ph.D. thesis, Rutgers University, New Brunswick, New Jersey, USA, 2001. Available at www.cds.caltech.edu/~ingalls.

[3] B. Ingalls, E. D. Sontag, and Y. Wang, "Measurement to error stability: A notion of partial detectability for nonlinear systems," submitted.

[4] A. Isidori, Nonlinear Control Systems II, Springer-Verlag, London, 1999.

[5] H. K. Khalil, Nonlinear Systems, Second Edition, Prentice-Hall, Upper Saddle River, NJ, 1996.

[6] P. Kokotović and M. Arcak, "Constructive nonlinear control: Progress in the 90s," Invited Plenary Talk, In: Proc. 14th IFAC World Congress, the Plenary and Index Volume, pp. 49-77, Beijing, 1999.

[7] M. Krichman, E. D. Sontag, and Y.
Wang, "Input-output-to-state stability," SIAM Journal on Control and Optimization, 39, pp. 1874-1928, 2001.

[8] M. Krstić and H. Deng, Stabilization of Uncertain Nonlinear Systems, Springer-Verlag, London, 1998.

[9] M. Krstić, I. Kanellakopoulos, and P. V. Kokotović, Nonlinear and Adaptive Control Design, John Wiley & Sons, New York, 1995.

[10] R. Sepulchre, M. Jankovic, and P. V. Kokotović, Constructive Nonlinear Control, Springer, 1997.

[11] E. D. Sontag, "Smooth stabilization implies coprime factorization," IEEE Transactions on Automatic Control, 34, pp. 435-443, 1989.

[12] E. D. Sontag, "The ISS philosophy as a unifying framework for stability-like behavior," In: Nonlinear Control in the Year 2000 (Volume 2) (Lecture Notes in Control and Information Sciences, A. Isidori, F. Lamnabhi-Lagarrigue, and W. Respondek, eds.), Springer-Verlag, Berlin, 2000, pp. 443-468.

[13] E. D. Sontag and Y. Wang, "On characterizations of the input-to-state stability property," Systems & Control Letters, 24, pp. 351-359, 1995.

[14] E. D. Sontag and Y. Wang, "Detectability of nonlinear systems," In: Proceedings of the Conference on Information Sciences and Systems (CISS 96), Princeton, NJ, 1996, pp. 1031-1036.

[15] E. D. Sontag and Y. Wang, "Output-to-state stability and detectability of nonlinear systems," Systems & Control Letters, 29, pp. 279-290, 1997.

[16] E. D. Sontag and Y. Wang, "Notions of input to output stability," Systems & Control Letters, 38, pp. 351-359, 1999.

[17] E. D. Sontag and Y. Wang, "Lyapunov characterizations of input to output stability," SIAM Journal on Control and Optimization, 39, pp. 226-249, 2001.

PART 7. CONTROLLABILITY, OBSERVABILITY

Problem 7.1
Time for local controllability of a 1D tank containing a fluid modeled by the shallow water equations

Jean-Michel Coron
Université Paris-Sud
Département de Mathématique
Bâtiment 425
91405 Orsay
France
[email protected]

1 DESCRIPTION OF THE PROBLEM

We consider a 1D tank containing an inviscid, incompressible, irrotational fluid. The tank is subject to one-dimensional horizontal moves. We assume that the horizontal acceleration of the tank is small compared to the gravity constant and that the height of the fluid is small compared to the length of the tank. This motivates the use of the Saint-Venant equations [5] (also called shallow water equations) to describe the motion of the fluid; see, e.g., [2, Sec. 4.2]. After suitable scaling arguments, the length of the tank and the gravity constant can be taken equal to 1; see [1]. The dynamics equations considered are then (see [3] and [1])

H_t(t, x) + (Hv)_x(t, x) = 0,   (1)

v_t(t, x) + (H + v²/2)_x(t, x) = −u(t),   (2)

v(t, 0) = v(t, 1) = 0,   (3)

ds/dt(t) = u(t),   (4)

dD/dt(t) = s(t),   (5)

where

• H(t, x) is the height of the fluid at time t and for x ∈ [0, 1],

• v(t, x) is the horizontal water velocity of the fluid in a referential attached to the tank, at time t and for x ∈ [0, 1] (in the shallow water model, all the points on the same vertical have the same horizontal velocity),

• u is the horizontal acceleration of the tank in the absolute referential,

• s is the horizontal velocity of the tank,

• D is the horizontal displacement of the tank.

This is a control system, denoted Σ, where

• the state is Y = (H, v, s, D),

• the control is u ∈ R.

Again by scaling arguments, we may assume that, for every steady state, H, which is then a constant function, is equal to 1; see [1]. One is interested in the local controllability of the control system Σ around the equilibrium point (Ye, ue) := ((1, 0, 0, 0), 0).

Of course, the total mass of the fluid is conserved, so that, for every solution of (1) to (3),

(d/dt) ∫₀¹ H(t, x) dx = 0.   (6)

One gets (6) by integrating (1) on [0, 1] and using (3) together with an integration by parts. Moreover, if H and v are of class C¹, it follows from (2) and (3) that

H_x(t, 0) = H_x(t, 1) (= −u(t)).   (7)

Therefore we introduce the vector space E of functions Y = (H, v, s, D) ∈ C¹([0, 1]) × C¹([0, 1]) × R × R such that

H_x(0) = H_x(1),   (8)

v(0) = v(1) = 0,   (9)

and consider the affine subspace Y ⊂ E of the Y = (H, v, s, D) ∈ E satisfying
∫₀¹ H(x) dx = 1.   (10)

With these notations, we can define a trajectory of the control system Σ.

Definition of a trajectory: Let T1 and T2 be two real numbers satisfying T1 ≤ T2. A function (Y, u) = ((H, v, s, D), u) : [T1, T2] → Y × R is a trajectory of the control system Σ if

(i) the functions H and v are of class C¹ on [T1, T2] × [0, 1],

(ii) the functions s and D are of class C¹ on [T1, T2] and the function u is continuous on [T1, T2],

(iii) the equations (1) to (5) hold for every (t, x) ∈ [T1, T2] × [0, 1].

For w ∈ C¹([0, 1]), let ||w||_1 := Max{|w(x)| + |w_x(x)|; x ∈ [0, 1]}. We now consider the following property of local controllability of Σ around (Ye, ue).

Definition of P(T): Let T > 0. The control system Σ satisfies the property P(T) if, for every ε > 0, there exists η > 0 such that, for every Y0 = (H0, v0, s0, D0) ∈ Y and for every Y1 = (H1, v1, s1, D1) ∈ Y such that

||H0 − 1||_1 + ||v0||_1 + ||H1 − 1||_1 + ||v1||_1 + |s0| + |s1| + |D0| + |D1| < η,   (11)

there exists a trajectory

(Y, u) : [0, T] → Y × R, t → ((H(t), v(t), s(t), D(t)), u(t))

of the control system Σ such that Y(0) = Y0 and Y(T) = Y1 and, for every t ∈ [0, T],

||H(t) − 1||_1 + ||v(t)||_1 + |s(t)| + |D(t)| + |u(t)| < ε.   (12)

Our open problem is to find for which T > 0 the property P(T) holds. We conjecture that P(T) holds if and only if T > 2.

2 MOTIVATION AND HISTORY OF THE PROBLEM

The problem of controllability of the system Σ was raised by F. Dubois, N. Petit, and P. Rouchon in [3]. Let us recall that they studied in that paper the controllability of the linearized control system around (Ye, ue). This linearized control system is

(Σ0)   h_t + v_x = 0,
       v_t + h_x = −u(t),
       v(t, 0) = v(t, 1) = 0,   (13)
       ds/dt(t) = u(t),
       dD/dt(t) = s(t),

where the state is (h, v, s, D) ∈ Y0, with
Y0 := {(h, v, s, D) ∈ E ; ∫₀¹ h dx = 0},

and the control is u ∈ R. It is proved in [3] that Σ0 is not controllable. It is also proved in [3] that, even though Σ0 is not controllable, for any T > 1 one can move during the interval of time [0, T] from any steady state (h0, v0, s0, D0) := (0, 0, 0, D0) to any steady state (h1, v1, s1, D1) := (0, 0, 0, D1) for the linear control system Σ0; see also [4] when the tank has a non-straight bottom. Unfortunately, this does not imply that the related property (moving from (H0, v0, s0, D0) := (1, 0, 0, D0) to (H1, v1, s1, D1) := (1, 0, 0, D1)) also holds for the nonlinear control system Σ, even if |D1 − D0| is arbitrarily small but not 0. In fact, we conjecture that, for ε > 0 small enough, even if |D1 − D0| is arbitrarily small but not 0, one needs T > 2 to move from (H0, v0, s0, D0) := (1, 0, 0, D0) to (H1, v1, s1, D1) := (1, 0, 0, D1) for the nonlinear control system Σ if one requires (12).

3 AVAILABLE RESULTS

Clearly, P(T) implies P(T′) for T ≤ T′. Using the characteristics of the hyperbolic system (1)-(2), one easily sees that P(T) does not hold for T < 1. It is proved in [1] that P(T) holds for T large enough. The method used in [1] requires, at least, T > 2.

BIBLIOGRAPHY

[1] J.-M. Coron, "Local controllability of a 1-D tank containing a fluid modeled by the shallow water equations," preprint, Université Paris-Sud, 2002, accepted for publication in ESAIM: COCV.

[2] L. Debnath, Nonlinear Water Waves, Academic Press, San Diego, 1994.

[3] F. Dubois, N. Petit, and P. Rouchon, "Motion planning and nonlinear simulations for a tank containing a fluid," ECC 99.

[4] N. Petit and P. Rouchon, "Dynamics and solutions to some control problems for water-tank systems," preprint, CIT-CDS 00-004, 2000, accepted for publication in IEEE Transactions on Automatic Control.

[5] A.-J.-C. B. de Saint-Venant, "Théorie du mouvement non permanent des eaux, avec applications aux crues des rivières et à l'introduction des marées dans leur lit," C.R.
Acad. Sci. Paris, 73, pp. 147-154, 1871.

Problem 7.2
A Hautus test for infinite-dimensional systems

Birgit Jacob
Fachbereich Mathematik
Universität Dortmund
D-44221 Dortmund
Germany
[email protected]

Hans Zwart
Department of Applied Mathematics
University of Twente
P.O. Box 217, 7500 AE Enschede
The Netherlands
[email protected]

1 DESCRIPTION OF THE PROBLEM

We consider the abstract system

ẋ(t) = Ax(t),  x(0) = x0,  t ≥ 0,   (1)
y(t) = Cx(t),  t ≥ 0,   (2)

on a Hilbert space H. Here A is the infinitesimal generator of an exponentially stable C0-semigroup (T(t))_{t≥0}, and by the solution of (1) we mean x(t) = T(t)x0, which is the weak solution. If C is a bounded linear operator from H to a second Hilbert space Y, then it is straightforward to see that y(·) in (2) is well-defined and continuous. However, in many PDEs rewritten in the form (1)-(2), C is only a bounded operator from D(A) to Y (D(A) denotes the domain of A), although the output is a well-defined (locally) square integrable function. In the following, C will always be a bounded operator from D(A) to Y. Note that D(A) is a dense subset of H.

If the output is locally square integrable, then C is called an admissible observation operator; see Weiss [11]. It is not hard to see that, since the C0-semigroup is exponentially stable, the output is locally square integrable if and only if it is square integrable. Using the uniform boundedness theorem, we see that the observation operator C is admissible if and only if there exists a constant L > 0 such that

∫₀^∞ ||CT(t)x||²_Y dt ≤ L ||x||²_H,   x ∈ D(A).   (3)

Assuming that the observation operator C is admissible, system (1)-(2) is said to be exactly observable if there is a bounded mapping from the output trajectory to the initial condition, i.e., if there exists a constant l > 0 such that
∫₀^∞ ||CT(t)x||²_Y dt ≥ l ||x||²_H,   x ∈ D(A).   (4)

Often the emphasis is on exact observability on a finite interval, which means that the integral in (4) is over [0, t0] for some t0 > 0. However, for exponentially stable semigroups both notions are equivalent: if (4) holds and the system is exponentially stable, then there exists a t0 > 0 such that the system is exactly observable on [0, t0].

There is a strong need for easily verifiable equivalent conditions for exact observability. Based on the observability conjecture by Russell and Weiss [9], we now conjecture the following:

Conjecture: Let A be the infinitesimal generator of an exponentially stable C0-semigroup on a Hilbert space H and let C be an admissible observation operator. Then system (1)-(2) is exactly observable if and only if

(C1) (T(t))_{t≥0} is similar to a contraction, i.e., there exists a bounded operator S from H to H, which is boundedly invertible, such that (ST(t)S⁻¹)_{t≥0} is a contraction semigroup; and

(C2) there exists an m > 0 such that

||(sI − A)x||²_H + |Re(s)| ||Cx||²_Y ≥ m Re(s)² ||x||²_H   (5)

for all complex s with negative real part and for all x ∈ D(A).

Our conjecture is a revised version of the (false) conjecture by Russell and Weiss; they did not require that the semigroup be similar to a contraction.

2 MOTIVATION AND HISTORY OF THE CONJECTURE

System (1)-(2) with A ∈ C^{n×n} and C ∈ C^{p×n} is observable if and only if

rank [sI − A; C] = n   for all s ∈ C.   (6)

This is known as the Hautus test, due to Hautus [2] and Popov [8]. If A is a stable matrix, then (6) is equivalent to condition (C2). Although there are some generalizations of the Hautus test to delay differential equations (see, e.g., Klamka [6] and the references therein), the full generalization of the Hautus test to infinite-dimensional linear systems is still an open problem.

It is not hard to see that if (1)-(2) is exactly observable, then the semigroup is similar to a contraction; see Grabowski and Callier [1] and Levan [7]. Condition (C2) was found by Russell and Weiss [9]: they showed that this condition is necessary for exact observability and, under the extra assumption that A is bounded, that it is also sufficient. Without explicit use of condition (C1), it was shown that condition (C2) implies exact observability if

• A has a Riesz basis of eigenfunctions, Re(λn) = −ρ1, and |λn+1 − λn| > ρ2, where the λn are the eigenvalues of A and ρ1, ρ2 > 0, [9]; or if

• m in equation (5) is one, [1]; or if

• A is skew-adjoint and C is bounded, Zhou and Yamamoto [12]; or if

• A has a Riesz basis of eigenfunctions and Y = C^p, Jacob and Zwart [5].

Recently, we showed that (C2) is not sufficient in general [4]. The C0-semigroup in our counterexample is an analytic semigroup that is not similar to a contraction semigroup. The output space in the example is just the complex plane C.
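In the finite-dimensional setting, the Hautus test (6) is straightforward to check numerically: it suffices to test s at the eigenvalues of A, since sI − A already has full rank n for every other s. The following is a small sketch of our own, not from the text.

```python
# Finite-dimensional Hautus test: (A, C) is observable iff
# rank [sI - A; C] = n for all s, which needs checking only at eigenvalues.
import numpy as np

def hautus_observable(A, C, tol=1e-9):
    n = A.shape[0]
    for s in np.linalg.eigvals(A):
        M = np.vstack([s * np.eye(n) - A, C])   # stacked Hautus matrix
        if np.linalg.matrix_rank(M, tol) < n:
            return False
    return True

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # stable: eigenvalues -1 and -2
C_good = np.array([[1.0, 0.0]])     # sees both modes
C_bad = np.array([[1.0, 1.0]])      # annihilates the eigenvector (1, -1)
                                    # of the eigenvalue -1

assert hautus_observable(A, C_good)
assert not hautus_observable(A, C_bad)   # rank drops at s = -1
```

For a stable A, failure of this rank test at some eigenvalue corresponds to failure of the lower bound in condition (C2) near that eigenvalue.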
3 AVAILABLE RESULTS AND CLOSING REMARKS

In order to prove the conjecture, it is sufficient to show that for exponentially stable contraction semigroups condition (C2) implies exact observability. It is well known that system (1)-(2) is exactly observable if and only if there exists a bounded operator L that is positive and boundedly invertible and satisfies the Lyapunov equation

⟨Ax1, Lx2⟩_H + ⟨Lx1, Ax2⟩_H = −⟨Cx1, Cx2⟩_Y   for all x1, x2 ∈ D(A).   (7)

From the admissibility of C and the exponential stability of the semigroup, one easily obtains that equation (7) has a unique (nonnegative) solution. Russell and Weiss [9] showed that condition (C2) implies that this solution has zero kernel. Thus the Lyapunov equation (7) could be a starting point for a proof of the conjecture.

We have stated our conjecture for infinite-dimensional output spaces. However, it could be that it only holds for finite-dimensional output spaces. If the output space Y is one-dimensional, one could try to prove the conjecture using powerful tools like the Sz.-Nagy-Foias model theorem (see [10]). This tool was quite useful in the context of admissibility conditions for contraction semigroups [3]. Based on this observation, it would be of great interest to check our conjecture for the right shift semigroup on L²(0, ∞).

We believe that exponential stability is not essential in our conjecture and can be replaced by strong stability and infinite-time admissibility; see [5]. Note that our conjecture is also related to the left-invertibility of semigroups; see [1] and [4] for more details.

BIBLIOGRAPHY

[1] P. Grabowski and F. M. Callier, "Admissible observation operators, semigroup criteria of admissibility," Integral Equations and Operator Theory, 25:182-198, 1996.

[2] M. L. J. Hautus, "Controllability and observability conditions for linear autonomous systems," Ned. Akad. Wetenschappen, Proc. Ser. A, 72:443-448, 1969.

[3] B. Jacob and J. R. Partington, "The Weiss conjecture on admissibility of observation operators for contraction semigroups," Integral Equations and Operator Theory, 40(2):231-243, 2001.

[4] B. Jacob and H. Zwart, "Disproof of two conjectures of George Weiss," Memorandum 1546, Faculty of Mathematical Sciences, University of Twente, 2000. Available at http://www.math.utwente.nl/publications/

[5] B. Jacob and H.
Zwart, "Exact observability of diagonal systems with a finite-dimensional output operator," Systems & Control Letters, 43(2):101-109, 2001.

[6] J. Klamka, Controllability of Dynamical Systems, Kluwer Academic Publishers, 1991.

[7] N. Levan, "The left-shift semigroup approach to stability of distributed systems," J. Math. Anal. Appl., 152:354-367, 1990.

[8] V. M. Popov, Hyperstability of Control Systems, Editura Academiei, Bucharest, 1966 (in Romanian); English trans. by Springer-Verlag, Berlin, 1973.

[9] D. L. Russell and G. Weiss, "A general necessary condition for exact observability," SIAM J. Control Optim., 32(1):1-23, 1994.

[10] B. Sz.-Nagy and C. Foias, Harmonic Analysis of Operators on Hilbert Spaces, Amsterdam-London: North-Holland Publishing Company, 1970.

[11] G. Weiss, "Admissible observation operators for linear semigroups," Israel Journal of Mathematics, 65:17-43, 1989.

[12] Q. Zhou and M. Yamamoto, "Hautus condition on the exact controllability of conservative systems," Int. J. Control, 67(3):371-379, 1997.

Problem 7.3
Three problems in the field of observability

Philippe Jouan
Laboratoire R. Salem, CNRS UMR 6085
Université de Rouen
Mathématiques, site Colbert
76821 Mont-Saint-Aignan Cedex
France
[email protected]

1 INTRODUCTION.

Let X be a C∞ (resp. Cω), connected manifold. We consider on X the system

Σ =  { ẋ = f(x, u)
     { y = h(x)   (1)

where x ∈ X, u ∈ U = [0, 1]^m, and y ∈ R^p. The parametrized vector field f and the output function h are assumed to be C∞ (resp. Cω). In order to avoid certain complications, the state space X is assumed to be compact, but this assumption is not crucial (we can, for instance, assume that the vector field f vanishes outside a relatively compact open subset of X).

The three problems addressed herein concern observability and the existence of observers for such systems.

2 PROBLEM 1.

We first consider an uncontrolled system:

Σu =  { ẋ = f(x)
      { y = h(x).   (2)

This system is assumed to be observable in the following sense: the trajectories starting from two different initial states are distinguished by the output.

Whenever the nth derivative of the output with respect to the vector field f is a Cr function of the output and the n − 1 previous derivatives, it is possible to construct observers (see [5], [9]). More accurately, the injective mapping

Φ = (h, L_f h, L²_f h, . . . , L^{n−1}_f h)

is used to "immerse" Σu into R^{pn}, where a Cr observer is designed. The observed data are the outputs of Σu together with their n − 1 first derivatives. The state of the system, being a continuous mapping of (h, L_f h, L²_f h, . . . , L^{n−1}_f h), is thus estimated by the observer.

More generally, a Cr observer for Σu is a system Σ̂ defined on an open subset V of R^n by

Σ̂ =  { ż = F(z, y)
     { x̂ = θ(z)   (3)

where F is a Cr vector field on V and θ is a continuous mapping from V into X such that

∀x ∈ X, ∀z ∈ V:  lim_{t→+∞} d(x(t), x̂(t)) = 0

for any distance d on X compatible with the topology of X.

The first problem is: Does the existence of a Cr observer for Σu imply the existence of an integer n such that the nth derivative of the output is a Cr function of the output and the n − 1 previous ones? Equivalently, does there exist a Cr function ϕ such that
L^n_f h = ϕ(h, L_f h, L²_f h, . . . , L^{n−1}_f h) ?

A positive answer to this question would imply that all the observability properties of an uncontrolled system are contained in the functional relation L^n_f h = ϕ(h, L_f h, L²_f h, . . . , L^{n−1}_f h). Notice that we already know that the kind of dependence between the nth derivative of the output and the preceding ones, that is, the kind of function ϕ, determines whether the system is linearizable, or linearizable modulo an output injection (see [2], [7]).

3 PROBLEM 2.

Once we know that a controlled system is observable in the weak sense of [4] (two different initial states have to be distinguished by the output for at least one input), a question arises naturally: which inputs are universal? (An input is universal if any two different initial states are distinguished by the output for this input; see [8].) An equivalent formulation is: for which inputs is the system observable?

For genericity reasons, we consider controlled systems with more outputs than inputs: p > m. Problem 2 is: Is it true that the set of C∞ systems (resp. Cω systems) that are observable for every C∞ input contains an open (or, better, an open and dense) subset of the set of C∞ systems (resp. the set of Cω systems)? In both the C∞ and Cω cases, does observability for every C∞ input imply observability for every L∞ input?

The following facts are known:

i. The set of systems observable for every C∞ input is dense in the set of C∞ or Cω systems (see [3], [1]).

ii. For a given bound, the set of systems observable for every C∞ input whose 2 dim(X) first derivatives are bounded contains an open and dense subset of the set of C∞ or Cω systems (see [3], [1]).

iii. A Cω system observable for every C∞ input is observable for every L∞ input (see [3]).

iv. In the single-input, control-affine, C∞ case, the implication "Σ C∞-observable ⟹ Σ L∞-observable" is true for an open and dense subset of systems (see [6]).
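The distinction between universal and non-universal inputs can be seen on a toy system of our own (not from the text): x1' = u·x2, x2' = 0 with output y = x1. Under u ≡ 0 the output is constant, so states differing only in x2 are indistinguishable; under u ≡ 1 the output y(t) = x1(0) + t·x2(0) determines the whole initial state.

```python
# Toy illustration of universal vs. non-universal inputs for
# x1' = u*x2, x2' = 0, y = x1 (constant inputs, closed-form solution).
import numpy as np

def output(x0, u_val, t):
    """y(t) = x1(0) + u_val * t * x2(0) for the constant input u_val."""
    x1, x2 = x0
    return x1 + u_val * t * x2

t = np.linspace(0.0, 1.0, 101)
a, b = (1.0, 0.0), (1.0, 2.0)       # initial states differing only in x2

y_a0, y_b0 = output(a, 0.0, t), output(b, 0.0, t)
assert np.allclose(y_a0, y_b0)      # u = 0 does not distinguish a from b

y_a1, y_b1 = output(a, 1.0, t), output(b, 1.0, t)
assert not np.allclose(y_a1, y_b1)  # u = 1 distinguishes every pair: universal
```

Both constant inputs are admissible here since U = [0, 1]^m; the example only illustrates the definition and says nothing about genericity.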
Of course, a positive answer to Problem 2 would mean that the property of being observable for every L^∞ input is preserved under slight perturbations.

4 PROBLEM 3.

Since the set of systems observable for every L^∞ input is residual (with more outputs than inputs), it is very interesting to design observers for them, particularly if this set contains an open subset. At the present time, the most general construction of observers for nonlinear systems is the high-gain one (see [3]). But the observers designed in this way have the drawback of making use of the derivatives of the input, and cannot work if the latter is merely L^∞. In some particular cases (linearizable systems, systems linearizable modulo an output injection, bilinear systems, uniformly observable systems, . . . ) observers that work for every input are known, but their constructions cannot be generalized. Problem 3 is therefore: For systems observable for every L^∞ input, find a general construction of an observer that works for every L^∞ input. Notice that if the system is "immersed" in R^N (in a sense to be made precise), the "immersion" must not depend on the input: otherwise the image of the vector field f in R^N would depend upon the derivative of the input. In particular, the mapping Φ = (h, L_f h, L_f^2 h, . . . , L_f^{n−1} h) from X into R^{pn} cannot be used, because the vector field f(x, u) depends upon u, and so do L_f h(x, u), L_f^2 h(x, u), . . . , L_f^{n−1} h(x, u).

BIBLIOGRAPHY

[1] M. Balde and Ph. Jouan, "Genericity of observability of control-affine systems," Control, Optimisation and Calculus of Variations, vol. 3, 1998, 345–359.

[2] M. Fliess and I. Kupka, "A finiteness criterion for nonlinear input-output differential systems," SIAM J. on Control and Optimization, 21 (1983), 5, 721–728.

[3] J. P. Gauthier and I. Kupka, Deterministic Observation Theory and Applications, Cambridge University Press, 2001.

[4] R. Hermann and A. J. Krener, "Nonlinear controllability and observability," IEEE Trans. Aut. Control, AC-22 (1977), 728–740.

[5] Ph. Jouan and J. P. Gauthier, "Finite singularities of nonlinear systems. Output stabilization, observability and observers," Journal of Dynamical and Control Systems, vol. 2, no. 2, 1996, 255–288.

[6] Ph. Jouan, "C^∞ and L^∞ observability of single-input C^∞ systems," Journal of Dynamical and Control Systems, vol. 7, no. 2, 2001, 151–169.

[7] Ph. Jouan, "Immersion of nonlinear systems into linear systems modulo output injection," submitted to SIAM J. on Control and Optimization.

[8] H. J. Sussmann, "Single-input observability of continuous-time systems," Math. Syst. Theory, 12 (1979), 371–393.

[9] X. Xia and M. Zeitz, "On nonlinear continuous observers," Int. J. Control, vol. 66, no. 6, 1997, 943–954.

Problem 7.4
Control of the KdV equation Lionel Rosier
Institut Elie Cartan, Université Henri Poincaré Nancy 1, B.P. 239, 54506 Vandœuvre-lès-Nancy Cedex, France, [email protected]

1 DESCRIPTION OF THE PROBLEM

The Korteweg–de Vries (KdV) equation is the simplest model for unidirectional propagation of small amplitude long waves in nonlinear dispersive systems. It occurs in various physical contexts (e.g., water waves, plasma physics, nonlinear optics). It reads

y_t + y_xxx + y_x + y y_x = 0, t > 0, x ∈ Ω, (1)

the subscripts denoting partial derivatives (e.g., y_t = ∂y/∂t). The KdV equation has been intensively studied since the 1960s because of its fascinating properties (infinite set of conserved integral quantities, integrability, Kato smoothing effect, etc.). (See [5] and the references therein.) Here, we are concerned with the boundary controllability of the KdV equation on the domain Ω = (0, +∞). For any pair (a, b) with 0 ≤ a < b ≤ +∞, let C_0^∞(a, b) denote the space of functions of class C^∞ with compact support in (a, b). Given T > 0, y_0 ∈ C_0^∞(0, +∞) and h ∈ C_0^∞(0, T), it is by now well known (see [1]) that the initial-boundary-value problem

y_t + y_xxx + y_x + y y_x = 0, 0 < t < T, 0 < x < +∞,
y(t, 0) = h(t), 0 < t < T, (2)
y(0, x) = y_0(x), 0 < x < +∞,

admits a unique classical solution that is smooth. The boundary value h is the input of the system. Let R(y_0, T) denote the space of all reachable states from y_0 in time T; that is,
R(y_0, T) = {y(T, ·) ; y fulfills (2) for some h ∈ C_0^∞(0, T)}. (3)

We are now in a position to state the problem of interest.

Open Problem: Is it true that 0 ∈ R(y_0, T) for T large enough?

The main difficulty of the problem is that the domain is unbounded. Notice that it would also be of great interest to identify the closure of R(y_0, T) in L^2(0, +∞).

2 MOTIVATION AND HISTORY OF THE PROBLEM

The KdV equation was first introduced in [4] to explain the emergence of long solitary waves, the so-called "solitons." In this context, t stands for the elapsed time, x is the independent space variable, and y = y(t, x) stands for the deviation of the fluid surface from the rest position. The above control problem may serve as a model for the control of the fluid surface in a shallow canal by means of a wavemaker. Indeed, taking Lagrangian coordinates, it is proved in [10] that the movement of the fluid surface is governed by (2), the speed of the moving boundary being roughly represented by the input h. Thus, the space R(0, T) stands for the set of waves that may be generated (from the rest position) by the wavemaker in time T. A similar control problem is investigated in [7], but with a fluid model in which both the dispersive and nonlinear effects are neglected. In [2] the author uses the (nonlinear) shallow water equations as a fluid model to investigate the control of the fluid surface in a moving tank. These equations are appropriate in situations where the dispersive effects may be neglected, e.g., when the height of the fluid and the length of the tank are of the same order of magnitude. The shallow water equations have to be replaced by the KdV equation (or the Boussinesq system) when studying the propagation of traveling waves. The above problem is important for the following reason.
A lack of compactness, due to the fact that the domain is unbounded, prevents us from using the standard linearization procedure in the study of the controllability properties of (2). Therefore, a new approach (based on inverse scattering?) has to be developed to investigate the (exact or approximate) controllability of the nonlinear KdV equation on the half-line.

3 AVAILABLE RESULTS

The boundary controllability of the KdV equation has been investigated in numerous papers; see, e.g., [3], [8], [11], and [12]. In these papers, the domain Ω = (0, L) is bounded and the control is applied at the right endpoint, although the waves are expected to move from left to right. If the control is applied at the left endpoint, and if the system is supplemented by the boundary conditions y = y_x = 0 at x = L, then it is proved in [10] that for any T, L > 0, 0 ∈ R(y_0, T) for any initial state y_0 with a small enough H^3(0, L)-norm. This means that a soliton moving to the right may be caught up and annihilated by a set of waves generated by the wavemaker. This result rests on a Carleman estimate for the linearized equation (i.e., (1) without the nonlinear term y y_x). When we look at the linearized equation on the unbounded domain Ω = (0, +∞), the controllability results are not so good, due to a lack of compactness. Indeed, it is proved in [9] that there exists a state y_0 ∈ L^2(0, +∞) for which any trajectory connecting y_0 to the null state does not belong to L^∞(0, T; L^2(0, +∞)) (that is, ess sup_{0<t<T} ∫_0^{+∞} y(t, x)^2 dx = +∞). This means that the bad behavior of the trajectory y(t, x) as x → +∞ is the price to be paid to obtain null-controllability. (A similar phenomenon has been observed in [6] for the heat equation.) Thus, the linearization procedure fails, since we do not have any bound on ‖y(t, ·)‖_{L^2(0,+∞)}.

BIBLIOGRAPHY

[1] J. Bona and R. Winther, "The Korteweg–de Vries equation, posed in a quarter-plane," SIAM J. Math. Anal., 14, pp. 1056–1106, 1983.
[2] J.-M. Coron, "Local controllability of a 1-D tank containing a fluid modeled by the shallow water equations," A tribute to J. L. Lions, ESAIM Control Optim. Calc. Var., 8, pp. 513–554, 2002 (electronic).

[3] E. Crépeau, "Exact boundary controllability of the Korteweg–de Vries equation around a non-trivial stationary solution," Internat. J. Control, 74, pp. 1096–1106, 2001.

[4] D. J. Korteweg and G. de Vries, "On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves," Philos. Mag., 39, pp. 422–443, 1895.

[5] C. E. Kenig, G. Ponce, and L. Vega, "The Cauchy problem for the Korteweg–de Vries equation in Sobolev spaces of negative indices," Duke Math. J., 71, pp. 1–21, 1993.

[6] S. Micu and E. Zuazua, "On the lack of null-controllability of the heat equation on the half space," Portugal. Math., 58, pp. 1–24, 2001.

[7] S. Mottelet, "Controllability and stabilization of a canal with wave generators," SIAM J. Control Optim., 38, pp. 711–735, 2000.

[8] L. Rosier, "Exact boundary controllability for the Korteweg–de Vries equation on a bounded domain," ESAIM Control Optim. Calc. Var., 2, pp. 33–55, 1997.

[9] L. Rosier, "Exact boundary controllability for the linear Korteweg–de Vries equation on the half-line," SIAM J. Control Optim., 39, pp. 331–351, 2000.

[10] L. Rosier, "Control of the surface of a fluid by a wavemaker," in preparation.

[11] D. L. Russell and B. Y. Zhang, "Exact controllability and stabilizability of the Korteweg–de Vries equation," Trans. Amer. Math. Soc., 348, pp. 3643–3672, 1996.

[12] B. Y. Zhang, "Exact boundary controllability of the Korteweg–de Vries equation," SIAM J. Control Optim., 37, pp. 543–565, 1999.

PART 8

Robustness, Robust Control

Problem 8.1
H∞ norm approximation A. C. Antoulas
Department of Electrical and Computer Engineering, Rice University, 6100 South Main Street, Houston, TX 77005-1892, USA, [email protected]

A. Astolfi
Department of Electrical and Electronic Engineering, Imperial College, Exhibition Road, SW7 2BT, London, United Kingdom, [email protected]

1 DESCRIPTION OF THE PROBLEM

Let RH_∞^m be the (Hardy) space of real-rational scalar¹ transfer functions of order m, bounded on the imaginary axis and analytic in the right-half complex plane. The optimal approximation problem in the H∞ norm can be stated as follows.

(A*) (Optimal approximation in the H∞ norm) Given G(s) ∈ RH_∞^N and an integer n < N, find,² if possible, A*(s) ∈ RH_∞^n such that

A*(s) = arg min_{A(s)∈RH_∞^n} ‖G(s) − A(s)‖_∞. (1)

For such a problem, let γ_n = min_{A(s)∈RH_∞^n} ‖G(s) − A(s)‖_∞; then two further problems can be posed.

(D) (Optimal distance problem in the H∞ norm) Given G(s) ∈ RH_∞^N and an integer n < N, find γ_n.

(Ã) (Suboptimal approximation in the H∞ norm) Given G(s) ∈ RH_∞^N, an integer n < N and γ > γ_n, find Ã(s) ∈ RH_∞^n such that γ_n ≤ ‖G(s) − Ã(s)‖_∞ ≤ γ.

¹ Similar considerations can be made for the nonscalar case.
² By find we mean find an exact solution or an algorithm converging to the exact solution.

The optimal H∞ approximation problem can be formally posed as a constrained min-max problem. For, note that any function in RH_∞^n can be put in a one-to-one correspondence with a point θ of some (open) set Ω ⊂ R^{2n}; therefore the problem of computing γ_n can be posed as

γ_n = min_{θ∈Ω} max_{ω∈R} |G(jω) − A(jω)|, (2)

where A(s) = A(s, θ). The above formulation provides a brute-force approach to the solution of the problem. Unfortunately, this method is not of any use in general, because of the complexity of the set Ω and because of the curse of dimensionality. However, the formulation (2) suggests that possible candidate solutions of the optimal approximation problem are the saddle points of the function |G(jω) − A(jω, θ)|, which can, in principle, be computed using numerical tools. It would be interesting to prove (or disprove) that

min_{θ∈Ω} max_{ω∈R} |G(jω) − A(jω, θ)| = max_{ω∈R} min_{θ∈Ω} |G(jω) − A(jω, θ)|.

The solution method based on the computation of saddle points does not give any insight into the problem, nor does it expose any systems-theoretic interpretation of the optimal approximant. An interesting property of the optimal approximant is stated in the following simple fact, which can be used to rule out that a candidate approximant is optimal.

Fact: Let A*(s) ∈ RH_∞^n be such that equation (1) holds. Suppose

|G(j0) − A*(j0)| = γ_n, (3)

and A*(j0) ≠ 0. (4)

Then there exists ω̃ ≠ 0 such that |G(jω̃) − A*(jω̃)| = γ_n; i.e., if the value γ_n is attained by the function |G(jω) − A*(jω)| at ω = 0, it is also attained at some ω̃ ≠ 0.

Proof: We prove the statement by contradiction. Suppose

|G(jω) − A*(jω)| < γ_n (5)

for all ω ≠ 0, and consider the approximant Ã(s) = (1 + λ)A*(s), with λ ∈ R. By equation (5), condition (4), and the continuity with respect to λ and ω of |G(jω) − Ã(jω)|, there is a λ* (sufficiently small) such that

max_ω |G(jω) − (1 + λ*)A*(jω)| < γ_n,

or, what is the same, it is possible to obtain an approximant better than A*(s): hence a contradiction.

It would be interesting to show whether the above fact still holds when the value γ_n is attained at some ω̄ ≠ 0 rather than at ω = 0.

2 AVAILABLE RESULTS AND POSSIBLE SOLUTION PATHS

Approximation and model reduction have always been central issues in system theory. For a recent survey on model reduction in the large-scale setting, we refer the reader to [1]. There are several results in this area. If the approximation is performed in the Hankel norm, then an explicit solution of the optimal approximation and model reduction problems has been given in [3]. Note that this procedure provides, as a byproduct, an upper bound for γ_n and a solution of the suboptimal approximation problem. If the approximation is performed in the H2 norm, several results and numerical algorithms are available [4]. For approximation in the H∞ norm, a conceptual solution is given in [5]. Therein it is shown that the H∞ approximation problem can be reduced to a Hankel-norm approximation problem for an extended system (i.e., a system obtained from a state-space realization of the original transfer function G(s) by adding inputs and outputs). The extended system has to be constructed with the constraint that the corresponding Gramians P and Q satisfy

λ_min(PQ) = (γ_n)² with multiplicity N − n. (6)

However, the above procedure, as also noted by the authors of [5], is not computationally viable, and it presupposes knowledge of γ_n. Hence the need for further study of the problem. In the recent paper [2], the decay rates of the Hankel singular values of stable, single-input single-output systems are studied. Let G(s) = p(s)/q(s) be the transfer function under consideration. The decay rate of the Hankel singular values is studied by introducing a new set of input/output system invariants, namely the quantities p(s)/q*(s), where q*(s) = q(−s), evaluated at the poles of G(s).
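Both the Hankel-norm solution of [3] and the decay-rate invariants of [2] revolve around the Hankel singular values, which are computable from the system Gramians. A minimal sketch (toy stable realization, illustrative numbers only, using scipy):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy stable SISO realization (A, B, C); values are illustrative only.
A = np.array([[-1.0, 0.5], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 2.0]])

# Controllability Gramian P: A P + P A^T + B B^T = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Gramian Q: A^T Q + Q A + C^T C = 0.
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: sigma_i = sqrt(lambda_i(P Q)), sorted decreasing.
hsv = np.sort(np.sqrt(np.linalg.eigvals(P @ Q).real))[::-1]
print(hsv)   # sigma_1 >= sigma_2 > 0; sigma_1 is the Hankel norm
```

In the Hankel norm, Glover's theory [3] gives the optimal degree-n error exactly as σ_{n+1}; in the H∞ norm, σ_{n+1} is only a lower bound for γ_n, while Hankel-norm theory yields approximants whose H∞ error is bounded by the sum of the remaining singular values, which is part of what makes problem (D) hard.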
The results of [2] are expected to shed light on the structure of the above problem (6). Another paper of interest, especially for the suboptimal approximation case, is [6]. In this paper the set of all systems whose H∞ norm is less than some positive number γ is parameterized. Thus the following problem can be posed: given such a system with H∞ norm less than γ, find conditions under which it can be decomposed into the sum of two systems, one of which is prespecified. Finally, there are two special classes of systems that may be studied to improve our insight into the general problem. The first class is composed of single-input single-output discrete-time stable systems. For such systems, an interesting related problem is the Carathéodory–Fejér (CF) approximation problem, which is used for elliptic filter design. In [7] it is shown that in the scalar, discrete-time case, optimal approximants in the Hankel norm approach optimal approximants in the H∞ norm asymptotically (the asymptotic behavior being with respect to the radius of the disc |z| ≤ ρ < 1, as ρ → 0). The CF problem, through the contributions of Adamjan–Arov–Krein and later Glover, evolved into what is nowadays called the Hankel-norm approximation problem. However, no asymptotic results have been shown to hold in the general case. The second special class is that of symmetric systems, that is, systems whose state-space representation (C, A, B) satisfies A = A^T and B = C^T. For instance, these systems have a positive definite Hankel operator and have further properties that can be exploited in the construction of approximants in the H∞ sense.

BIBLIOGRAPHY

[1] A. C. Antoulas, Lectures on the Approximation of Large-Scale Dynamical Systems, SIAM, Philadelphia, 2002.

[2] A. C. Antoulas, D. C. Sorensen, and Y. Zhou, "On the decay rate of Hankel singular values and related issues," Systems and Control Letters, 2002.

[3] K. Glover, "All optimal Hankel-norm approximations of linear multivariable systems and their L∞ error bounds," International Journal of Control, 39: 1115–1193, 1984.

[4] X.-X. Huang, W.-Y. Yan, and K. L. Teo, "H2 near-optimal model reduction," IEEE Trans. Automatic Control, 46: 1279–1285, 2001.

[5] D. Kavranoglu and M. Bettayeb, "Characterization of the solution to the optimal H∞ model reduction problem," Systems and Control Letters, 20: 99–108, 1993.

[6] H. G. Sage and M. F. de Mathelin, "Canonical H∞ state-space parametrization," Automatica, July 2000.

[7] L. N. Trefethen, "Rational Chebyshev approximation on the unit disc," Numerische Mathematik, 37: 297–320, 1981.

Problem 8.2
Noniterative computation of optimal value in H∞ control Ben M. Chen
Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117576, Republic of Singapore, [email protected]

1 DESCRIPTION OF THE PROBLEM

We consider an n-th order generalized linear system Σ characterized by the following state-space equations:

Σ: ẋ = A x + B u + E w,
y = C1 x + D11 u + D1 w, (1)
h = C2 x + D2 u + D22 w,

where x is the state, u is the control input, w is the disturbance input, y is the measurement output, and h is the controlled output of Σ. For simplicity, we assume that D11 = 0 and D22 = 0. We also let ΣP be the subsystem characterized by the matrix quadruple (A, B, C2, D2) and ΣQ be the subsystem characterized by (A, E, C1, D1). The standard H∞ optimal control problem is to find an internally stabilizing proper measurement feedback control law,

Σcmp: v̇ = Acmp v + Bcmp y,
u = Ccmp v + Dcmp y, (2)

such that when it is applied to the given plant (1), the H∞ norm of the resulting closed-loop transfer matrix function from w to h, say Thw(s), is minimized. We note that the H∞ norm of an asymptotically stable and proper continuous-time transfer matrix Thw(s) is defined as

‖Thw‖_∞ := sup_{ω∈[0,∞)} σ_max[Thw(jω)] = sup_{‖w‖_2=1} ‖h‖_2,

where w and h are, respectively, the input and output of Thw(s). The infimum, or the optimal value, associated with the H∞ control problem is defined as

γ* := inf{‖Thw(Σ × Σcmp)‖_∞ : Σcmp internally stabilizes Σ}. (3)

Obviously, γ* ≥ 0. In fact, when γ* = 0, the problem reduces to the well-known problem of H∞ almost disturbance decoupling with measurement feedback and internal stability. We note that in order to design a meaningful H∞ control law for the given system (1), the designer should know beforehand the infimum γ*, which represents the best achievable level of disturbance attenuation. Unfortunately, the problem of computing this γ* noniteratively for general systems remains unsolved in the open literature.

2 MOTIVATION AND HISTORY OF THE PROBLEM

Over the last two decades, we have witnessed a proliferation of literature on H∞ optimal control since it was first introduced by Zames [20]. The main focus of the work has been on the formulation of the problem for robust multivariable control and on its solution. Since the original formulation of the H∞ problem in Zames [20], a great deal of work has been done on finding a solution to this problem. Practically all the research results of the early years involved a mixture of time-domain and frequency-domain techniques, including the following: 1) the interpolation approach (see, e.g., [13]); 2) the frequency-domain approach (see, e.g., [5, 8, 9]); 3) the polynomial approach (see, e.g., [12]); and 4) the J-spectral factorization approach (see, e.g., [11]). Recently, considerable attention has been focused on purely time-domain methods based on algebraic Riccati equations (AREs) (see, e.g., [6, 7, 10, 15, 16, 17, 18, 19, 21]). Along this line of research, connections have also been made between H∞ optimal control and differential games (see, e.g., [1, 14]). It is noted that most of the results mentioned above focus on finding solutions to H∞ control problems. Many of them assume that γ* is known, or simply assume that γ* = 1. The computation of γ* in the literature is usually done by certain iteration schemes. For example, in the regular case and utilizing the results of Doyle et al.
[7], an iterative procedure for approximating γ* would proceed as follows: one starts with a value of γ and determines whether γ > γ* by solving two "indefinite" algebraic Riccati equations and checking the positive semidefiniteness and stabilizing properties of their solutions. If such positive semidefinite solutions exist and satisfy a coupling condition, then γ > γ*, and one simply repeats the above steps using a smaller value of γ. In principle, one can approximate the infimum γ* to within any degree of accuracy in this manner. However, this search procedure is exhaustive and can be very costly. More significantly, due to possible high-gain behavior as γ gets close to γ*, numerical solutions of these H∞ AREs can become highly sensitive and ill-conditioned. This difficulty also arises in the coupling condition: as γ decreases, evaluation of the coupling condition generally involves finding eigenvalues of stiff matrices. These numerical difficulties are likely to be more severe for problems associated with the singular case. Thus, in general, the iterative procedure for the computation of γ* based on AREs is not reliable.

3 AVAILABLE RESULTS

Quite a few researchers have attempted to develop procedures for determining the value of γ* without iteration. For example, Petersen [15] has solved the problem for a class of one-block regular problems. Scherer [17, 18] has obtained a partial answer for the state feedback problem for a larger class of systems, by providing a computable candidate value together with algebraically verifiable conditions, and Chen and his coworkers [3, 4] (see also [2]) have developed noniterative procedures for computing the value of γ* for a class of systems (the singular case) that satisfy certain geometric conditions.
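The flavor of the γ-iteration described above can be conveyed, free of Riccati machinery, on the simpler subproblem of computing the H∞ norm of a stable system: each trial value of γ is accepted or rejected by an eigenvalue test (the bounded real lemma) and then bisected. A sketch with numpy (toy realization with D = 0; an illustration only, not the full measurement-feedback problem):

```python
import numpy as np

# Stable toy realization; G(s) = C (sI - A)^{-1} B = 1/(s+1) + 1/(s+2).
A = np.diag([-1.0, -2.0])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

def gamma_too_small(g):
    """True iff g < ||G||_inf: the Hamiltonian has imaginary-axis eigenvalues."""
    H = np.block([[A, (B @ B.T) / g],
                  [-(C.T @ C) / g, -A.T]])
    return np.any(np.abs(np.linalg.eigvals(H).real) < 1e-9)

lo, hi = 1e-6, 100.0          # bracket: ||G||_inf lies in (lo, hi)
for _ in range(60):           # bisection, mirroring the gamma-iteration
    mid = 0.5 * (lo + hi)
    if gamma_too_small(mid):
        lo = mid              # mid < ||G||_inf : increase gamma
    else:
        hi = mid              # mid > ||G||_inf : decrease gamma
print(hi)                     # approximately 1.5, i.e., |G(0)| here
```

In the control problem proper, the per-γ test is the solvability of the two indefinite AREs plus the coupling condition, and it is precisely that test that becomes ill-conditioned near γ*, which is what motivates a noniterative formula.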
To be more specific, we introduce the following two geometric subspaces of linear systems. Given an n-th order linear system Σ∗ characterized by a matrix quadruple (A∗, B∗, C∗, D∗), we define

i. V−(Σ∗), the weakly unobservable subspace, as the maximal subspace of R^n that is (A∗ + B∗F∗)-invariant and contained in Ker(C∗ + D∗F∗), such that the eigenvalues of (A∗ + B∗F∗)|V− are contained in C−, the open left-half complex plane, for some constant matrix F∗; and

ii. S−(Σ∗), the strongly controllable subspace, as the minimal (A∗ + K∗C∗)-invariant subspace of R^n containing Im(B∗ + K∗D∗), such that the eigenvalues of the map induced by (A∗ + K∗C∗) on the factor space R^n/S− are contained in C− for some constant matrix K∗.

The problem of noniterative computation of γ* has been solved by Chen and his coworkers [3, 4] (see also [2]) for the class of systems that satisfy the following conditions:

i. Im(E) ⊂ V−(ΣP) + S−(ΣP); and

ii. Ker(C2) ⊃ V−(ΣQ) ∩ S−(ΣQ),

together with some other minor assumptions. The work of Chen et al. involves solving a couple of algebraic Riccati and Lyapunov equations. The computation of γ* is then done by finding the maximum eigenvalue of a resulting constant matrix. It has been demonstrated by an example in Chen [2] that the noniterative computation of γ* can be done for a larger class of systems, which do not necessarily satisfy the above geometric conditions. It is believed that there is room to improve the existing results.

BIBLIOGRAPHY

[1] T. Başar and P. Bernhard, H∞-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach, 2nd ed., Birkhäuser, Boston, 1995.

[2] B. M. Chen, H∞ Control and Its Applications, Springer, London, 1998.

[3] B. M. Chen, Y. Guo, and Z. L. Lin, "Non-iterative computation of infimum in discrete-time H∞ optimization and solvability conditions for the discrete-time disturbance decoupling problem," International Journal of Control, vol. 65, pp. 433–454, 1996.
[4] B. M. Chen, A. Saberi, and U. Ly, "Exact computation of the infimum in H∞ optimization via output feedback," IEEE Transactions on Automatic Control, vol. 37, pp. 70–78, 1992.

[5] J. C. Doyle, Lecture Notes in Advances in Multivariable Control, ONR/Honeywell Workshop, 1984.

[6] J. C. Doyle and K. Glover, "State-space formulae for all stabilizing controllers that satisfy an H∞ norm bound and relations to risk sensitivity," Systems & Control Letters, vol. 11, pp. 167–172, 1988.

[7] J. Doyle, K. Glover, P. P. Khargonekar, and B. A. Francis, "State space solutions to standard H2 and H∞ control problems," IEEE Transactions on Automatic Control, vol. 34, pp. 831–847, 1989.

[8] B. A. Francis, A Course in H∞ Control Theory, Lecture Notes in Control and Information Sciences, vol. 88, Springer, Berlin, 1987.

[9] K. Glover, "All optimal Hankel-norm approximations of linear multivariable systems and their L∞ error bounds," International Journal of Control, vol. 39, pp. 1115–1193, 1984.

[10] P. Khargonekar, I. R. Petersen, and M. A. Rotea, "H∞ optimal control with state feedback," IEEE Transactions on Automatic Control, vol. AC-33, pp. 786–788, 1988.

[11] H. Kimura, Chain-Scattering Approach to H∞ Control, Birkhäuser, Boston, 1997.

[12] H. Kwakernaak, "A polynomial approach to minimax frequency domain optimization of multivariable feedback systems," International Journal of Control, vol. 41, pp. 117–156, 1986.

[13] D. J. N. Limebeer and B. D. O. Anderson, "An interpolation theory approach to H∞ controller degree bounds," Linear Algebra and Its Applications, vol. 98, pp. 347–386, 1988.

[14] G. P. Papavassilopoulos and M. G. Safonov, "Robust control design via game theoretic methods," Proceedings of the 28th Conference on Decision and Control, Tampa, Florida, pp. 382–387, 1989.

[15] I. R. Petersen, "Disturbance attenuation and H∞ optimization: A design method based on the algebraic Riccati equation," IEEE Transactions on Automatic Control, vol. AC-32, pp. 427–429, 1987.

[16] A. Saberi, B. M. Chen, and Z. L. Lin, "Closed-form solutions to a class of H∞ optimization problems," International Journal of Control, vol. 60, pp. 41–70, 1994.

[17] C. Scherer, "H∞ control by state feedback and fast algorithms for the computation of optimal H∞ norms," IEEE Transactions on Automatic Control, vol. 35, pp. 1090–1099, 1990.

[18] C. Scherer, "The state-feedback H∞ problem at optimality," Automatica, vol. 30, pp. 293–305, 1994.

[19] G. Tadmor, "Worst-case design in the time domain: The maximum principle and the standard H∞ problem," Mathematics of Control, Signals, and Systems, vol. 3, pp. 301–324, 1990.

[20] G. Zames, "Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms, and approximate inverses," IEEE Transactions on Automatic Control, vol. 26, pp. 301–320, 1981.

[21] K. Zhou, J. Doyle, and K. Glover, Robust and Optimal Control, Prentice Hall, New York, 1996.

Problem 8.3
Determining the least upper bound on the achievable delay margin Daniel E. Davison and Daniel E. Miller
Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada, [email protected] and [email protected]

1 MOTIVATION AND PROBLEM STATEMENT

Control engineers have had to deal with time delays in control processes for decades and, consequently, there is a huge literature on the topic; see, e.g., [1] or [2] for collections of recent results. Delays arise from a variety of sources, including physical transport delay (e.g., in a rolling mill or in a chemical plant), signal transmission delay (e.g., in an earth-based satellite control system or in a system controlled over a network), and computational delay (e.g., in a system that uses image processing). The problems posed here concern, in particular, systems where the time delay is not known exactly: such uncertainty exists, for example, in a rolling mill system where the physical speed of the process may change day to day, in a satellite control system where the signal transmission time between earth and the satellite changes as the satellite moves, or in a control system implemented on the internet where the time delay is uncertain because of unknown traffic load on the network. Motivated by the above examples, we focus here on the simplest problem that captures the difficulty of control in the face of uncertain delay. Specifically, consider the classical linear time-invariant (LTI) unity-feedback control system with a known controller and with a plant that is known except for an uncertain output delay. Denote the plant delay by τ, the plant transfer function by P(s) = P0(s) exp(−sτ), and the controller by C(s). Assume the feedback system is internally stable when τ = 0.
Let us define the delay margin (DM) to be the largest time delay such that, for any delay less than or equal to this value, the closed-loop system remains internally stable:

DM(P0, C) := sup{τ̄ : for all τ ∈ [0, τ̄], the feedback control system with controller C(s) and plant P(s) = P0(s) exp(−sτ) is internally stable}.

Computation of DM(P0, C) is straightforward. Indeed, the Nyquist stability criterion can be used to conclude that the delay margin is simply the phase margin of the undelayed system divided by the gain crossover frequency of the undelayed system. Other techniques for computing the delay margin for LTI systems have also been developed; see, e.g., [3], [4], [5], and [6], just to name a few. In contrast to the problem of computing the delay margin when the controller is known, the design of a controller to achieve a prespecified delay margin is not straightforward, except in the trivial case where the plant is open-loop stable, in which case the zero controller achieves DM(P0, C) = ∞. To the best of the authors' knowledge, there is no known technique for designing a controller to achieve a prespecified delay margin. Moreover, the fundamental question of whether or not there exists a finite upper bound on the delay margin that is achievable by an LTI controller has not even been addressed. Hence, there are three unsolved problems:

Problem 1: Does there exist an (unstable) LTI plant P0 for which there is a finite upper bound on the delay margin that is achievable by an LTI controller? In other words, does there exist a P0 for which

DMsup(P0) := sup{DM(P0, C) : the feedback control system with controller C(s) and plant P0(s) is internally stable}

is finite?

Problem 2: If the answer to Problem 1 is affirmative, devise a computationally feasible algorithm that, given P0(s), computes DMsup(P0) to a given prescribed degree of accuracy.
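The phase-margin/crossover computation just described is easy to carry out numerically. A sketch for a hypothetical toy loop P0(s) = 1/(s(s + 1)), C(s) = 1 (numpy only; all values illustrative):

```python
import numpy as np

# Toy undelayed loop L(s) = P0(s) C(s) with P0 = 1/(s(s+1)), C = 1.
# The closed loop (tau = 0) has characteristic polynomial s^2 + s + 1: stable.
def L(w):
    s = 1j * w
    return 1.0 / (s * (s + 1.0))

# Gain crossover: |L(j w)| = 1.  |L| is monotonically decreasing here,
# so plain bisection works.
lo, hi = 1e-4, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if abs(L(mid)) > 1.0:
        lo = mid
    else:
        hi = mid
wc = 0.5 * (lo + hi)             # gain crossover frequency (rad/s)

pm = np.pi + np.angle(L(wc))     # phase margin (rad); np.angle is in (-pi, pi]
dm = pm / wc                     # delay margin (s)
print(wc, pm, dm)                # roughly 0.79, 0.90, 1.15
```

This is exactly the DM(P0, C) computation for the single-crossover case the formula assumes; the open problems concern the very different question of the supremum of this quantity over all stabilizing controllers.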
Problem 3: If the answer to Problem 1 is affirmative, devise a computationally feasible algorithm that, given P0(s) and a value T in the range 0 < T < DMsup(P0), constructs a C(s) that satisfies DM(P0, C) ≥ T.

2 RELATED RESULTS

It is natural to attempt to use robust control methods to solve these problems (e.g., see [7] or [8]); that is, construct a plant uncertainty "ball" that includes all possible delayed plants, then design a controller to stabilize every plant within that ball. To the best of the authors' knowledge, such techniques always introduce conservativeness and therefore cannot be used to solve the problems stated above. Alternatively, it has been established in the literature that there are upper bounds on the gain margin and phase margin if the plant has poles and zeros in the open right-half plane [9], [7]. These bounds are not conservative, but it is not obvious how to apply the same techniques to the delay margin problem. As a final possibility, performance limitation integrals, such as those described in [10], may be useful, especially for solving Problem 1.

BIBLIOGRAPHY

[1] M. S. Mahmoud, Robust Control and Filtering for Time-Delay Systems, New York: Marcel Dekker, 2000.

[2] L. Dugard and E. Verriest, eds., Stability and Control of Time-Delay Systems, Springer-Verlag, 1997.

[3] J. Zhang, C. R. Knospe, and P. Tsiotras, "Stability of linear time-delay systems: A delay-dependent criterion with a tight conservatism bound," In: Proceedings of the American Control Conference, Chicago, Illinois, pp. 1458–1462, June 2000.

[4] J. Chen, G. Gu, and C. N. Nett, "A new method for computing delay margins for stability of linear delay systems," In: Proceedings of the 33rd Conference on Decision and Control, Lake Buena Vista, Florida, pp. 433–437, Dec. 1994.

[5] J. Chiasson, "A method for computing the interval of delay values for which a differential-delay system is stable," IEEE Transactions on Automatic Control, vol. 33, pp. 1176–1178, Dec. 1988.
[6] K. Walton and J. Marshall, "Direct method for TDS stability analysis," IEE Proceedings, Part D, vol. 134, pp. 101–107, Mar. 1987.
[7] J. C. Doyle, B. A. Francis, and A. R. Tannenbaum, Feedback Control Theory, New York, NY: Macmillan, 1992.
[8] K. Zhou, J. C. Doyle, and K. Glover, Robust and Optimal Control, Upper Saddle River, NJ: Prentice Hall, 1996.
[9] P. P. Khargonekar and A. Tannenbaum, "Non-Euclidean metrics and the robust stabilization of systems with parameter uncertainty," IEEE Transactions on Automatic Control, vol. AC-30, pp. 1005–1013, Oct. 1985.
[10] J. Freudenberg, R. Middleton, and A. Stefanopoulou, "A survey of inherent design limitations," In: Proceedings of the American Control Conference, Chicago, Illinois, pp. 2987–3001, June 2000.

Problem 8.4
Stable controller coefficient perturbation in floating-point implementation

Jun Wu
National Key Laboratory of Industrial Control Technology Institute of Advanced Process Control Zhejiang University Hangzhou 310027 P. R. China [email protected] Sheng Chen
Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom [email protected]

1 DESCRIPTION OF THE PROBLEM

For a real matrix X = [x_ij], denote
\[ \|X\|_{\max} = \max_{i,j} |x_{ij}|. \tag{1} \]
For real matrices X = [x_ij] and Y = [y_ij] of the same dimension, denote the Hadamard product of X and Y as
\[ X \circ Y = [x_{ij} y_{ij}]. \tag{2} \]
A square real matrix is said to be stable if its eigenvalues are all in the interior of the unit disc.

Consider a stable discrete-time closed-loop control system, consisting of a linear time-invariant plant P(z) and a digital controller C(z). The plant model P(z) is assumed to be strictly proper, with a state-space description
\[ \begin{aligned} x_P(k+1) &= A_P x_P(k) + B_P u(k), \\ y(k) &= C_P x_P(k), \end{aligned} \tag{3} \]
where A_P ∈ R^{m×m}, B_P ∈ R^{m×l} and C_P ∈ R^{q×m}. The controller C(z) is described by
\[ \begin{aligned} x_C(k+1) &= A_C x_C(k) + B_C y(k), \\ u(k) &= C_C x_C(k) + D_C y(k), \end{aligned} \tag{4} \]
where A_C ∈ R^{n×n}, B_C ∈ R^{n×q}, C_C ∈ R^{l×n} and D_C ∈ R^{l×q}. It can be shown easily that the transition matrix of the closed-loop system is
\[ A = \begin{pmatrix} A_P + B_P D_C C_P & B_P C_C \\ B_C C_P & A_C \end{pmatrix} \in R^{(m+n)\times(m+n)}. \tag{5} \]
It is well-known that a discrete-time closed-loop system is stable if and only if its transition matrix is stable. Since the closed-loop system, consisting of (3) and (4), is designed to be stable, A is stable. Let
\[ B = \begin{pmatrix} B_P & 0 \\ 0 & I \end{pmatrix} \in R^{(m+n)\times(l+n)}, \tag{6} \]
\[ C = \begin{pmatrix} C_P & 0 \\ 0 & I \end{pmatrix} \in R^{(q+n)\times(m+n)}, \tag{7} \]
\[ W = \begin{pmatrix} D_C & C_C \\ B_C & A_C \end{pmatrix} \in R^{(l+n)\times(q+n)}, \tag{8} \]
where 0 and I denote the zero and identity matrices of appropriate dimensions, respectively. Define the set
\[ S = \{ \Delta : \Delta \in R^{(l+n)\times(q+n)},\ A + B(W \circ \Delta)C \text{ is stable} \} \tag{9} \]
and further define
\[ \upsilon = \inf \{ \|\Delta\|_{\max} : \Delta \in R^{(l+n)\times(q+n)},\ \Delta \notin S \}. \tag{10} \]
The open problem is: calculate the value of υ.

2 MOTIVATION OF THE PROBLEM

The classical digital controller design methodology often assumes that the controller is implemented exactly, even though in reality a control law can only be realized with a digital processor of finite word length (FWL). It may seem that the uncertainty resulting from finite-precision computing of the digital controller is so small, compared to the uncertainty within the plant, that this controller "uncertainty" can simply be ignored. Increasingly, however, researchers have realized that this is not necessarily the case. Due to the FWL effect, a casual controller implementation may degrade the designed closed-loop performance or even destabilize the designed stable closed-loop system, if the controller implementation structure is not carefully chosen [1, 2].

With decreasing price and increasing availability, the use of floating-point processors in controller implementations has increased dramatically. When a real number x is implemented in a floating-point format, it is perturbed to x(1 + δ) with |δ| < η, where η is the maximum round-off error of the floating-point representation [3]. It can be seen that the perturbation resulting from finite-precision floating-point arithmetic is multiplicative. For the closed-loop system described in section 1, when C(z) is implemented in finite-precision floating-point format, the controller realization W is perturbed to W + W ◦ ∆. Each element of ∆ is bounded by ±η, that is,
\[ \|\Delta\|_{\max} < \eta. \tag{11} \]
With the perturbation ∆, the transition matrix of the closed-loop system becomes A + B(W ◦ ∆)C. If an eigenvalue of A + B(W ◦ ∆)C is outside the open unit disc, the closed-loop system, designed to be stable, becomes unstable with the FWL floating-point implemented W. It is therefore critical to know the ability of the closed-loop stability to tolerate the coefficient perturbation ∆ in W resulting from finite-precision implementation. This means that we would like to know the largest "cube" in the perturbation space within which the closed-loop system remains stable. The measure υ defined in (10) gives the exact size of the largest "stable perturbation cube" for W. If the value of υ can be computed, it becomes a simple matter to check whether W is "robust" to FWL errors, because A + B(W ◦ ∆)C remains stable when υ > η.

Furthermore, W or (A_C, B_C, C_C, D_C) is a realization of the controller C(z). The realizations of C(z) are not unique. Different realizations are all equivalent if they are implemented in infinite precision. In fact, suppose (A_C^0, B_C^0, C_C^0, D_C^0) is a realization of C(z); then all the realizations of C(z) form the set
\[ S_C = \left\{ W : W = \begin{pmatrix} I & 0 \\ 0 & T^{-1} \end{pmatrix} \begin{pmatrix} D_C^0 & C_C^0 \\ B_C^0 & A_C^0 \end{pmatrix} \begin{pmatrix} I & 0 \\ 0 & T \end{pmatrix} \right\} \tag{12} \]
where the transformation matrix T ∈ R^{n×n} is an arbitrary nonsingular matrix. A useful observation is that different W have different values of υ. Provided that the value of υ is computationally tractable, an optimal realization of C(z), which has a maximum tolerance to FWL errors, can be obtained via optimization.

The open problem defined in section 1 was first seen in [3]. At present, there exists no available result. An approach to bypass the difficulty in computing υ is to define some approximate upper bound of υ using a first-order approximation, which is computationally tractable (see [3]). One of the thorny items in the open problem is the Hadamard product W ◦ ∆.
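The structure of the problem is easy to explore numerically. The sketch below (all plant and controller values are illustrative assumptions, with m = l = q = n = 1, and the perturbation scales are deliberately much larger than a realistic η so that instability is actually reached) builds A, B, C, W from (5)–(8) and random-searches for destabilizing ∆; every destabilizing ∆ found yields an upper bound on υ, since υ is the infimum of ‖∆‖max over perturbations outside S:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (assumed) first-order plant and first-order controller,
# chosen so that the nominal closed loop is stable.
AP, BP, CP = np.array([[0.9]]), np.array([[1.0]]), np.array([[1.0]])
AC, BC = np.array([[0.5]]), np.array([[0.1]])
CC, DC = np.array([[-0.4]]), np.array([[-0.3]])

A = np.block([[AP + BP @ DC @ CP, BP @ CC], [BC @ CP, AC]])            # eq. (5)
B = np.block([[BP, np.zeros((1, 1))], [np.zeros((1, 1)), np.eye(1)]])  # eq. (6)
C = np.block([[CP, np.zeros((1, 1))], [np.zeros((1, 1)), np.eye(1)]])  # eq. (7)
W = np.block([[DC, CC], [BC, AC]])                                     # eq. (8)

def is_stable(M):
    """Stable = all eigenvalues strictly inside the unit disc."""
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0

assert is_stable(A)  # nominal closed loop is stable, as the problem requires

# Random search: each destabilizing Delta gives an UPPER bound on upsilon.
best = np.inf
for _ in range(20000):
    scale = rng.uniform(0.0, 2.0)
    Delta = rng.uniform(-scale, scale, size=W.shape)
    if not is_stable(A + B @ (W * Delta) @ C):   # W * Delta = Hadamard product
        best = min(best, np.abs(Delta).max())
print("upper bound on upsilon:", best)
```

Note that such sampling can only bound υ from above; certifying that no smaller destabilizing ∆ exists is exactly the open problem.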
The form of structured perturbation adopted in µ-analysis methods [4] may be used to deal with this Hadamard product: ∆ can be transformed into a generalized perturbation ∆̃ that has a certain structure, such as block-diagonal, and fixed matrices Ã, B̃, and C̃ may be obtained such that the stability of Ã + B̃∆̃C̃ is equivalent to that of A + B(W ◦ ∆)C. Although the stability of Ã + B̃∆̃C̃ can be treated satisfactorily by µ-analysis methods, the open problem cannot be solved successfully by µ-analysis methods. This is because µ-analysis methods are concerned with the maximal singular value σ̄(∆̃) of ∆̃. In fact, the distance between σ̄(∆̃) and ‖∆‖max can be quite large, and ‖∆‖max is the other thorny item that makes the open problem difficult.

ACKNOWLEDGEMENTS

The authors gratefully acknowledge the support of the United Kingdom Royal Society under a KC Wong fellowship (RL/ART/CN/XFI/KCW/11949). Jun Wu wishes to thank the support of the National Natural Science Foundation of China under Grant 60174026 and the Scientific Research Foundation for the Returned Overseas Chinese Scholars of Zhejiang province under Grant J20020546.

BIBLIOGRAPHY

[1] M. Gevers and G. Li, Parameterizations in Control, Estimation and Filtering Problems: Accuracy Aspects, London: Springer-Verlag, 1993.
[2] R. S. H. Istepanian and J. F. Whidborne, eds., Digital Controller Implementation and Fragility: A Modern Perspective, London: Springer-Verlag, 2001.
[3] J. Wu, S. Chen, J. F. Whidborne, and J. Chu, "Optimal floating-point realizations of finite-precision digital controllers," In: Proc. 41st IEEE Conference on Decision and Control, Las Vegas, USA, Dec. 10-13, 2002, pp. 2570–2575.
[4] J. Doyle, "Analysis of feedback systems with structured uncertainties," Proc. IEE, vol. 129-D, no. 6, pp. 242–250, 1982.

PART 9

Identification, Signal Processing

Problem 9.1
A conjecture on Lyapunov equations and principal angles in subspace identification

Katrien De Cock and Bart De Moor¹
Dept. of Electrical Engineering (ESAT-SCD), K.U.Leuven, Kasteelpark Arenberg 10, B-3001 Leuven, Belgium
http://www.esat.kuleuven.ac.be/sistacosicdocarch
[email protected], [email protected]

1 DESCRIPTION OF THE PROBLEM

The following conjecture relates the eigenvalues of certain matrices that are derived from the solution of a Lyapunov equation that occurred in the analysis of stochastic subspace identification algorithms [3]. First, we formulate the conjecture as a pure matrix algebraic problem. In section 2, we will describe its system theoretic consequences and interpretation.

Conjecture: Let A ∈ R^{n×n} be a real matrix and v, w ∈ R^n be real vectors so that there are no two eigenvalues λ_i and λ_j of
\[ \begin{pmatrix} A & 0 \\ 0 & A + vw^T \end{pmatrix} \]
for which λ_i λ_j = 1 (i, j = 1, . . . , 2n). If the n × n matrices P, Q, and R satisfy the Lyapunov equation
\[ \begin{pmatrix} P & R \\ R^T & Q \end{pmatrix} = \begin{pmatrix} A & 0 \\ 0 & A + vw^T \end{pmatrix} \begin{pmatrix} P & R \\ R^T & Q \end{pmatrix} \begin{pmatrix} A & 0 \\ 0 & A + vw^T \end{pmatrix}^T + \begin{pmatrix} v \\ w \end{pmatrix} \begin{pmatrix} v \\ w \end{pmatrix}^T, \tag{1} \]
and P, Q, and (I_n + PQ) are nonsingular,² then the matrices P^{-1}RQ^{-1}R^T and (I_n + PQ)^{-1} have the same eigenvalues.

Note that the condition λ_i λ_j ≠ 1 (∀i, j = 1, . . . , 2n) ensures that a solution of the Lyapunov equation (1) exists and that the solution is unique. We have checked the similarity of P^{-1}RQ^{-1}R^T and (I_n + PQ)^{-1} for numerous examples ("proof by Matlab") and it is simple to prove the conjecture for n = 1. Furthermore, via a large detour (see [3]), we can also prove it from the system theoretic interpretation, which is given in section 5. However, we have not been able to find a general and elegant proof. We also remark that the requirement that v and w are vectors is necessary for the conjecture to hold. One can easily find counterexamples for the case V, W ∈ R^{n×m}, where m > 1. It is consequently clear that this condition on v and w should be used in the proof.

2 BACKGROUND AND MOTIVATION

Although the conjecture is formulated as a pure matrix algebraic problem, its system theoretic interpretation is particularly interesting. In order to explain the consequences, we first have to introduce some concepts: the principal angles between subspaces (section 3) and their statistical counterparts, the canonical correlations of random variables (section 4). Next, in section 5 we will show how the conjecture, when proved correct, would enable us to prove in an elegant way that the nonzero canonical correlations of the past and the future of the output process of a linear stochastic model are equal to the sines of the principal angles between two specific subspaces that are derived from the model. This result, in its turn, is instrumental for further derivations in [3], where a cepstral distance measure is related to canonical correlations and to the mutual information of two processes (see also section 5).

¹Katrien De Cock is a research assistant at the K.U.Leuven. Dr. Bart De Moor is a full professor at the K.U.Leuven. Our research is supported by grants from several funding agencies and sources: Research Council KUL: Concerted Research Action GOA-Mefisto 666, IDO, several Ph.D., postdoctoral & fellow grants; Flemish Government: Fund for Scientific Research Flanders (several Ph.D. and postdoctoral grants, projects G.0256.97, G.0115.01, G.0240.99, G.0197.02, G.0407.02, research communities ICCoS, ANMMM), AWI (Bil. Int. Collaboration Hungary/Poland), IWT (Soft4s, STWW-Genprom, GBOU-McKnow, Eureka-Impact, Eureka-FLiTE, several PhD grants); Belgian Federal Government: DWTC (IUAP IV-02 (1996-2001) and IUAP V-22 (2002-2006)), Program Sustainable Development PODO-II (CP/40); Direct contract research: Verhaert, Electrabel, Elia, Data4s, IPCOS.
Moreover, by this new characterization of the canonical correlations, we gain insight into the geometric properties of subspace-based techniques.

²The matrix I_n is the n × n identity matrix.

3 THE PRINCIPAL ANGLES BETWEEN TWO SUBSPACES

The concept of principal angles between, and principal directions in, subspaces of a linear vector space is due to Jordan in the nineteenth century [8]. We give the definition and briefly describe how the principal angles can be computed. Let S1 and S2 be subspaces of R^n of dimension p and q, respectively, where p ≤ q. Then the p principal angles between S1 and S2, denoted by θ_1, . . . , θ_p, and the corresponding principal directions u_i ∈ S1 and v_i ∈ S2 (i = 1, . . . , p) are recursively defined as
\[ \cos\theta_1 = \max_{u \in S_1} \max_{v \in S_2} |u^T v| = |u_1^T v_1|, \]
\[ \cos\theta_k = \max_{u \in S_1} \max_{v \in S_2} |u^T v| = |u_k^T v_k| \quad (k = 2, \ldots, p), \]
subject to ‖u‖ = ‖v‖ = 1, and for k > 1: u^T u_i = 0 and v^T v_i = 0, where i = 1, . . . , k − 1.

If S1 and S2 are the row spaces of the matrices A ∈ R^{l×n} and B ∈ R^{m×n}, respectively, then the cosines of the principal angles θ_1, . . . , θ_p can be computed as the largest p generalized eigenvalues of the matrix pencil
\[ \begin{pmatrix} 0 & AB^T \\ BA^T & 0 \end{pmatrix} - \begin{pmatrix} AA^T & 0 \\ 0 & BB^T \end{pmatrix} \lambda. \]
Furthermore, if A and B are full row rank matrices, i.e., l = p and m = q, then the squared cosines of the principal angles between the row space of A and the row space of B are equal to the eigenvalues of
\[ (AA^T)^{-1} AB^T (BB^T)^{-1} BA^T. \]
Numerically stable methods to compute the principal angles via the QR and singular value decompositions can be found in [5, pp. 603–604].

4 THE CANONICAL CORRELATIONS OF TWO RANDOM VARIABLES

Canonical correlation analysis, due to Hotelling [6], is the statistical version of the notion of principal angles. Let X ∈ R^p and Y ∈ R^q, where p ≤ q, be zero-mean random variables with full rank joint covariance matrix³
\[ Q = E\left\{ \begin{pmatrix} X \\ Y \end{pmatrix} \begin{pmatrix} X^T & Y^T \end{pmatrix} \right\} = \begin{pmatrix} Q_x & Q_{xy} \\ Q_{yx} & Q_y \end{pmatrix}. \]
The canonical correlations of X and Y are defined as the largest p eigenvalues of the pencil
\[ \begin{pmatrix} 0 & Q_{xy} \\ Q_{yx} & 0 \end{pmatrix} - \begin{pmatrix} Q_x & 0 \\ 0 & Q_y \end{pmatrix} \lambda. \]
More information on canonical correlation analysis can be found in [1, 6].
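As a sketch (not part of the original text), the full-row-rank eigenvalue characterization above can be cross-checked against the numerically stable SVD-based computation referenced in [5] (orthonormal bases followed by an SVD); the random test matrices are illustrative:

```python
import numpy as np

def principal_angle_cosines(A, B):
    """Cosines of the principal angles between the row spaces of A and B,
    computed via orthonormal bases and an SVD (the approach in [5])."""
    Qa, _ = np.linalg.qr(A.T)   # orthonormal basis of the row space of A
    Qb, _ = np.linalg.qr(B.T)   # orthonormal basis of the row space of B
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))   # full row rank, p = 2
B = rng.standard_normal((3, 5))   # full row rank, q = 3

cos2 = np.sort(principal_angle_cosines(A, B) ** 2)

# Eigenvalue characterization for full-row-rank A and B:
M = np.linalg.inv(A @ A.T) @ A @ B.T @ np.linalg.inv(B @ B.T) @ B @ A.T
eig = np.sort(np.linalg.eigvals(M).real)

print(np.allclose(cos2, eig))
```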
³E{·} is the expected value operator.

5 SYSTEM THEORETIC INTERPRETATION OF THE CONJECTURE

Let {y(k)}_{k∈Z} be a real, discrete-time, scalar and zero-mean stationary stochastic process that is generated by the following single-input, single-output (SISO), asymptotically stable state space model in forward innovation form:
\[ \begin{aligned} x(k+1) &= Ax(k) + Ku(k), \\ y(k) &= Cx(k) + u(k), \end{aligned} \tag{2} \]
where {u(k)}_{k∈Z} is the innovation process of {y(k)}_{k∈Z}, A ∈ R^{n×n}, K ∈ R^{n×1} is the Kalman gain, and C ∈ R^{1×n}. The state space matrices of the inverse model (or whitening filter) are A − KC, K, and −C, respectively, as is easily seen by writing u(k) as an output with y(k) as an input.

By substituting the vector v in (1) by K, and w by −C^T, the matrices P, Q, and R in (1) can be given the following interpretation. The matrix P is the controllability Gramian of the model (2) and Q is the observability Gramian of the inverse model, while R is the cross product of the infinite controllability matrix of (2) and the infinite observability matrix of the inverse model. Otherwise formulated:
\[ \begin{pmatrix} P & R \\ R^T & Q \end{pmatrix} = \begin{pmatrix} \mathcal{C}_\infty \\ \Gamma_\infty^T \end{pmatrix} \begin{pmatrix} \mathcal{C}_\infty^T & \Gamma_\infty \end{pmatrix}, \quad \text{where } \mathcal{C}_\infty = \begin{pmatrix} K & AK & A^2K & \cdots \end{pmatrix} \text{ and } \Gamma_\infty = -\begin{pmatrix} C \\ C(A-KC) \\ C(A-KC)^2 \\ \vdots \end{pmatrix}. \]
Due to the stability and the minimum phase property of the forward innovation model (2), these infinite products result in finite matrices and, in addition, the condition λ_i λ_j ≠ 1 in conjecture 1 is fulfilled. Furthermore, under fairly general conditions, P, Q, and I_n + PQ are nonsingular, which follows from the positive definiteness of P and Q under general conditions.

The matrix P^{-1}RQ^{-1}R^T in conjecture 1 is now equal to the product
\[ (\mathcal{C}_\infty \mathcal{C}_\infty^T)^{-1} (\mathcal{C}_\infty \Gamma_\infty) (\Gamma_\infty^T \Gamma_\infty)^{-1} (\Gamma_\infty^T \mathcal{C}_\infty^T). \]
Consequently, its n eigenvalues are the squared cosines of the principal angles between the row space of C∞ and the column space of Γ∞ (see section 3). The angles will be denoted by θ_1, . . . , θ_n (in nondecreasing order).
The eigenvalues of the matrix (I_n + PQ)^{-1}, on the other hand, are related to the canonical correlations of the past and the future stochastic processes of {y(k)}_{k∈Z}, which are defined as the canonical correlations of the random variables
\[ y_p = \begin{pmatrix} y(-1) \\ y(-2) \\ y(-3) \\ \vdots \end{pmatrix} \quad \text{and} \quad y_f = \begin{pmatrix} y(0) \\ y(1) \\ y(2) \\ \vdots \end{pmatrix}, \]
and denoted by ρ_1, ρ_2, . . . (in nonincreasing order). It can be shown [3] that the largest n canonical correlations of y_p and y_f are equal to the square roots of the eigenvalues of I_n − (I_n + PQ)^{-1}. The other canonical correlations are equal to 0. Conjecture 1 now gives us the following characterization of the canonical correlations of the past and the future of {y(k)}_{k∈Z}: the largest n canonical correlations are equal to the sines of the principal angles between the row space of C∞ and the column space of Γ∞, and the other canonical correlations are equal to 0:
\[ \rho_1 = \sin\theta_n,\ \rho_2 = \sin\theta_{n-1},\ \ldots,\ \rho_n = \sin\theta_1,\ \rho_{n+1} = \rho_{n+2} = \cdots = 0. \tag{3} \]
This result can be used to prove that a recently defined cepstral norm [9] for a model as in (2) is closely related to the mutual information of the past and the future of its output process. Let the transfer function of the system in (2) be denoted by H(z). Then the complex cepstrum {c(k)}_{k∈Z} of the model is defined as the inverse Z-transform of the complex logarithm of H(z):
\[ c(k) = \frac{1}{2\pi i} \oint_C \log(H(z))\, z^{k-1}\, dz, \]
where the complex logarithm of H(z) is appropriately defined (see [10, pp. 495–497]) and the contour C is the unit circle. The cepstral norm that we consider is defined as
\[ \|\log H\|^2 = \sum_{k=0}^{\infty} k\, c(k)^2. \]
As we have proven in [2], it can be characterized in terms of the principal angles θ_1, . . . , θ_n between the row space of C∞ and the column space of Γ∞ as follows:
\[ \|\log H\|^2 = -\log \prod_{i=1}^{n} \cos^2\theta_i, \]
and from (3) we obtain
\[ \|\log H\|^2 = -\sum_i \log(1 - \rho_i^2). \]
The relation \( \sum_{k=0}^{\infty} k\, c(k)^2 = -\sum_i \log(1 - \rho_i^2) \) was also reported in [7, Proposition 2]. Moreover, if {y(k)}_{k∈Z} is a Gaussian process, then the expression \( -\tfrac{1}{2}\sum_i \log(1 - \rho_i^2) \) is equal to the mutual information of its past and future (see, e.g., [4]), which is denoted by I(y_p; y_f). Consequently,
\[ \|\log H\|^2 = \sum_{k=0}^{\infty} k\, c(k)^2 = 2\, I(y_p; y_f). \]

6 CONCLUSIONS

We presented a matrix algebraic conjecture on the eigenvalues of matrices that are derived from the solution of a Lyapunov equation. We showed that a proof of conjecture 1 would provide yet another elegant geometric result in the subspace-based study of linear stochastic systems. Moreover, it can be used to express a cepstral distance measure that was defined in [9] in terms of canonical correlations and also as the mutual information of two processes.

BIBLIOGRAPHY

[1] T. W. Anderson, An Introduction to Multivariate Statistical Analysis, John Wiley & Sons, New York, 1984.
[2] K. De Cock and B. De Moor, "Subspace angles between ARMA models," Technical Report⁴ 0044a, ESAT-SCD, K.U.Leuven, Leuven, Belgium, accepted for publication in Systems & Control Letters, 2002.
[3] K. De Cock, Principal Angles in System Theory, Information Theory and Signal Processing, Ph.D. thesis,⁵ Faculty of Applied Sciences, K.U.Leuven, Leuven, Belgium, 2002.
[4] I. M. Gel'fand and A. M. Yaglom, "Calculation of the amount of information about a random function contained in another such function," American Mathematical Society Translations, Series (2), 12, pp. 199–236, 1959.
[5] G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, 1996.
[6] H. Hotelling, "Relations between two sets of variates," Biometrika, 28, pp. 321–372, 1936.
[7] N. P. Jewell, P. Bloomfield, and F. C. Bartmann, "Canonical correlations of past and future for time series: Bounds and computation," The Annals of Statistics, 11, pp. 848–855, 1983.
[8] C. Jordan, "Essai sur la géométrie à n dimensions," Bulletin de la Société Mathématique, 3, pp. 103–174, 1875.
[9] R. J. Martin, "A metric for ARMA processes," IEEE Transactions on Signal Processing, 48, pp. 1164–1170, 2000.
[10] A. V. Oppenheim and R. W. Schafer, Digital Signal Processing, Prentice-Hall International, 1975.
⁴This report is available by anonymous ftp from ftp.esat.kuleuven.ac.be in the directory pub/sista/reports as file 0044a.ps.gz.
⁵This thesis will also be made available by anonymous ftp from ftp.esat.kuleuven.ac.be as file pub/sista/decock/reports/phd.ps.gz.

Problem 9.2
Stability of a nonlinear adaptive system for filtering and parameter estimation

Masoud Karimi-Ghartemani
Department of Electrical and Computer Engineering University of Toronto Toronto, Ontario Canada M5S 3G4 [email protected] Alireza K. Ziarani
Department of Electrical and Computer Engineering, Clarkson University, Potsdam, NY, USA 13699-5720 [email protected]

1 DESCRIPTION OF THE PROBLEM

We are concerned about the mathematical properties of the dynamical system presented by the following three differential equations:
\[ \begin{aligned} \frac{dA}{dt} &= -2\mu_1 A \sin^2\phi + 2\mu_1 \sin\phi\, f(t), \\ \frac{d\omega}{dt} &= -\mu_2 A^2 \sin(2\phi) + 2\mu_2 A \cos\phi\, f(t), \\ \frac{d\phi}{dt} &= \omega + \mu_3 \frac{d\omega}{dt}, \end{aligned} \tag{1} \]
where the parameters µ_i, i = 1, 2, 3 are positive real constants and f(t) is a function of time having the general form
\[ f(t) = A_o \sin(\omega_o t + \delta_o) + f_1(t). \tag{2} \]
A_o, ω_o, and δ_o are fixed quantities and it is assumed that f_1(t) has no frequency component at ω_o. Variables A and ω are in R¹ and φ varies on the one-dimensional circle S¹ with radius 2π.

The dynamical system presented by (1) is designed to (i) take the signal f(t) as its input signal and extract the component f_o(t) = A_o sin(ω_o t + δ_o) as its output signal, and (ii) estimate the basic parameters of the extracted signal f_o(t), namely its amplitude, phase, and frequency. The extracted signal is y = A sin φ and the basic parameters are the amplitude A, frequency ω, and phase angle φ = ωt + δ.

Consider the three variables (A, ω, φ) in the three-dimensional space R¹ × R¹ × S¹. The sinusoidal function f_o(t) = A_o sin(ω_o t + δ_o) corresponds to the T_o-periodic curve
\[ \Gamma_o(t) = (A_o,\ \omega_o,\ \omega_o t + \delta_o) \tag{3} \]
in this space, with T_o = 2π/ω_o. The following theorem, which the authors have proved in [1], presents some of the mathematical properties of the dynamical system presented by (1).

Theorem 1: Consider the dynamical system presented by the set of ordinary differential equations (1), in which the function f(t) is defined in (2) and f_1(t) is a bounded T_1-periodic function with no frequency component at ω_o. The three variables (A, ω, φ) are in R¹ × R¹ × S¹. The parameters µ_i, i = 1, 2, 3 are small positive real numbers. If T_1 = T_o/n for an arbitrary n ∈ N, the dynamical system of (1) has a stable T_o-periodic orbit in a neighborhood of Γ_o(t) as defined in (3).

The behavior of the system, as examined within simulation environments, has led the authors to the following two conjectures, the proofs of which are desired.

Conjecture 1: With the same assumptions as those presented in Theorem 1, if T_1 = (p/q) T_o for an arbitrary pair (p, q) ∈ N² with (p, q) = 1, the dynamical system presented by (1) has a stable mT_o-periodic orbit that lies on a torus in a neighborhood of Γ_o(t) as defined in (3). The value of m ∈ N is determined by the pair (p, q).

Conjecture 2: With the same assumptions as those presented in Theorem 1, if T_1 = αT_o for irrational α, the dynamical system presented by (1) has an attractor set that is a torus in a neighborhood of Γ_o(t) as defined in (3). In other words, the response is a never-closing orbit that lies on the torus. Moreover, this orbit is a dense set on the torus.

For both conjectures, the neighborhood in which the torus is located depends on the values of the parameters µ_i, i = 1, 2, 3 and the function f_1(t). If the function f_1(t) is small in order and the parameters are properly selected, the neighborhood can be made very small, meaning that the filtering and estimation processes may be achieved with a high degree of accuracy.

Theorem 1 deals with the local stability analysis of the dynamical system (1).
In other words, the existence of an attractor (a periodic orbit or torus) and an attraction domain around the attractor is proved. However, this theorem does not deal with this domain of attraction. It is desirable to specify this domain of attraction in terms of the function f_1(t) and the parameters µ_i, i = 1, 2, 3; hence the following open problem.

Open Problem: Consider the dynamical system presented by the ordinary differential equations (1), in which the function f(t) is defined in (2) and f_1(t) is a bounded T_1-periodic function with no frequency component at ω_o. The three variables (A, ω, φ) are in R¹ × R¹ × S¹. The parameters µ_i, i = 1, 2, 3 are small positive real numbers. This system has an attractor set that may be either a periodic orbit or a torus, based on the value of T_1. It is desired to specify the extent of the attraction domain associated with the attractor in terms of the function f_1(t) and the parameters µ_i, i = 1, 2, 3. In other words, and in a simplified case, for a three-parameter representation of f_1(t) as f_1(t) = a_1 sin(2πt/T_1 + δ_1), it is desirable to parameterize, in terms of the nine-parameter set {µ_1, µ_2, µ_3, A_o, T_o, δ_o, a_1, T_1, δ_1}, the attractor set, and also the whole region of points (A, ω, φ) in R¹ × R¹ × S¹ that falls in the attraction domain of the attractor.

2 MOTIVATION AND HISTORY OF THE PROBLEM

The dynamical system presented by (1) was proposed by the authors to devise a system for the extraction of a sinusoidal component with time-varying parameters when it is corrupted by other sinusoids and noise [1, 2]. This is of significant interest in power system applications, for instance [3]. Estimation of the basic parameters of the extracted sinusoid, namely the amplitude, phase, and frequency, was another object of the work. These parameters provide important information useful in electrical engineering applications.
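The intended behavior of system (1) is easy to observe by direct simulation. The sketch below integrates (1) for the Theorem 1 case T_1 = T_o/3; all gains, signal parameters, and initial conditions are illustrative assumptions (not values taken from the paper), and the estimates are averaged over the final cycles to smooth the residual ripple:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (assumed) parameters
Ao, wo, do = 1.0, 2 * np.pi, 0.5        # target component: 1 Hz sinusoid
a1, w1 = 0.2, 3 * (2 * np.pi)           # disturbance f1 at 3*wo, so T1 = To/3
mu1, mu2, mu3 = 2.0, 20.0, 0.1          # small positive gains (assumed)

def f(t):
    return Ao * np.sin(wo * t + do) + a1 * np.sin(w1 * t)

def rhs(t, x):
    A, w, phi = x
    ft = f(t)
    dA = -2 * mu1 * A * np.sin(phi) ** 2 + 2 * mu1 * np.sin(phi) * ft
    dw = -mu2 * A ** 2 * np.sin(2 * phi) + 2 * mu2 * A * np.cos(phi) * ft
    dphi = w + mu3 * dw
    return [dA, dw, dphi]

# Start with rough initial guesses near the true amplitude and frequency
sol = solve_ivp(rhs, (0.0, 20.0), [0.5, 6.0, 0.0],
                max_step=0.01, rtol=1e-8, atol=1e-8)

tail = sol.t > 18.0                     # average over the last two To-periods
A_est = sol.y[0, tail].mean()
w_est = sol.y[1, tail].mean()
print(f"A ~ {A_est:.3f} (target {Ao}), omega ~ {w_est:.3f} (target {wo:.3f})")
```

With these (assumed) gains the amplitude and frequency estimates settle into a small neighborhood of A_o and ω_o, consistent with the stable T_o-periodic orbit asserted by Theorem 1.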
Some applications of the system in biomedical engineering are presented in [2, 4]. This dynamical system presents an alternative structure for the well-known phase-locked loop (PLL) system, with significantly advantageous features.

3 AVAILABLE RESULTS

Theorem 1, corresponding to the case of T_1 = T_o/n, has been proved by the authors in [1], where the existence, local uniqueness, and stability of a T_o-periodic orbit are shown using the Poincaré map theorem as stated in [5, page 70]. Extensive computer simulations, verified by laboratory experimental results, are presented in [1, 2]. Some of the wide-ranging applications of the dynamical system are presented in [2, 3, 4]. The algorithm governed by the proposed dynamical system presents a powerful signal processing method for the analysis/synthesis of nonstationary signals. Alternatively, it may be thought of as a nonlinear adaptive notch filter capable of estimating the parameters of the output signal.

BIBLIOGRAPHY

[1] M. Karimi-Ghartemani and A. K. Ziarani, "Periodic orbit analysis of two dynamical systems for electrical engineering applications," Journal of Engineering Mathematics, 45, pp. 135–154, 2003.
[2] A. K. Ziarani, Extraction of Nonstationary Sinusoids, Ph.D. dissertation, University of Toronto, 2002.
[3] M. Karimi-Ghartemani and M. R. Iravani, "A Nonlinear Adaptive Filter for On-Line Signal Analysis in Power Systems: Applications," IEEE Transactions on Power Delivery, 17, pp. 617–622, 2002.
[4] A. K. Ziarani and A. Konrad, "A Nonlinear Adaptive Method of Elimination of Power Line Interference in ECG Signals," IEEE Transactions on Biomedical Engineering, 49, pp. 540–547, 2002.
[5] S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, New York: Springer-Verlag, 1983.

PART 10

Algorithms, Computation

Problem 10.1
Root-clustering for multivariate polynomials and robust stability analysis

Pierre-Alexandre Bliman
INRIA Rocquencourt, BP 105, 78153 Le Chesnay cedex, France [email protected]

1 DESCRIPTION OF THE PROBLEM

Given the (m + 1) complex matrices A_0, . . . , A_m of size n × n, and denoting D (resp. C_+) the closed unit ball in C (resp. the closed right-half plane), let us consider the following problem: determine whether
\[ \forall s \in \mathcal{C}_+,\ \forall z = (z_1, \ldots, z_m) \in \mathcal{D}^m, \quad \det(sI_n - A_0 - z_1 A_1 - \cdots - z_m A_m) \neq 0. \tag{1} \]
We have proved in [1] that property (1) is equivalent to the existence of k ∈ N and (m + 1) matrices P, Q_1 ∈ H_{k^m n}, Q_2 ∈ H_{k^{m-1}(k+1)n}, . . . , Q_m ∈ H_{k(k+1)^{m-1} n}, such that
\[ P > 0_{k^m n} \quad \text{and} \quad \mathcal{R}(P, Q_1, \ldots, Q_m) < 0_{(k+1)^m n}. \tag{2} \]
Here, H_n represents the space of n × n hermitian matrices, and R is a linear application taking values in H_{(k+1)^m n}, defined as follows. Let Ĵ_k ≝ (I_k 0_{k×1}) and J̌_k ≝ (0_{k×1} I_k); then (using the power of the Kronecker product with the natural meaning)
\[ \begin{aligned} \mathcal{R} \overset{\text{def}}{=} {} & \Big( \hat{J}_k^{m\otimes} \otimes A_0 + \sum_{i=1}^{m} \hat{J}_k^{(m-i)\otimes} \otimes \check{J}_k \otimes \hat{J}_k^{(i-1)\otimes} \otimes A_i \Big)^{\!H} P \left( \hat{J}_k^{m\otimes} \otimes I_n \right) \\ & + \left( \hat{J}_k^{m\otimes} \otimes I_n \right)^{\!H} P \Big( \hat{J}_k^{m\otimes} \otimes A_0 + \sum_{i=1}^{m} \hat{J}_k^{(m-i)\otimes} \otimes \check{J}_k \otimes \hat{J}_k^{(i-1)\otimes} \otimes A_i \Big) \\ & + \sum_{i=1}^{m} \left( \hat{J}_k^{(m-i+1)\otimes} \otimes I_{(k+1)^{i-1} n} \right)^{\!T} Q_i \left( \hat{J}_k^{(m-i+1)\otimes} \otimes I_{(k+1)^{i-1} n} \right) \\ & - \sum_{i=1}^{m} \left( \hat{J}_k^{(m-i)\otimes} \otimes \check{J}_k \otimes I_{(k+1)^{i-1} n} \right)^{\!T} Q_i \left( \hat{J}_k^{(m-i)\otimes} \otimes \check{J}_k \otimes I_{(k+1)^{i-1} n} \right). \end{aligned} \tag{3} \]
Problem (2,3) is a linear matrix inequality in the m + 1 unknown matrices P, Q_1, . . . , Q_m, a convex optimization problem. The LMIs (2,3) obtained for increasing values of k constitute a family of weaker and weaker sufficient conditions for (1). Conversely, property (1) necessarily implies solvability of the LMIs for a certain rank k and beyond. See [1] for details.

Numerical experimentation has shown that the precision of the criteria obtained for small values of k (2 or 3) may already be remarkably good, but rational use of this result requires a priori information on the size of the least k, if any, for which the LMIs are solvable. Bounds, especially upper bounds, on this quantity are thus highly desirable, and they should be computed with low complexity algorithms.

Open Problem 1: Find an integer-valued function k*(A_0, A_1, . . . , A_m), defined on the product (C^{n×n})^{m+1}, whose evaluation necessitates polynomial time, and such that property (1) holds if and only if LMI (2,3) is solvable for k = k*. One may imagine that the previous quantity exists, depending upon n and m only.
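To make the construction concrete, the operator R can be assembled numerically from Kronecker products. The sketch below specializes (3) to the single-parameter case m = 1, where for k = 1 the condition R < 0 reduces to the familiar block LMI [[A_0^H P + P A_0 + Q_1, P A_1], [A_1^H P, −Q_1]] < 0; the example matrices and the choice P = Q_1 = I are illustrative assumptions used to exhibit one feasible instance:

```python
import numpy as np

def R_operator(A0, A1, P, Q1, k):
    """Assemble R(P, Q1) of eq. (3) for the single-parameter case m = 1."""
    n = A0.shape[0]
    Jhat = np.hstack([np.eye(k), np.zeros((k, 1))])   # (I_k  0_{k x 1})
    Jchk = np.hstack([np.zeros((k, 1)), np.eye(k)])   # (0_{k x 1}  I_k)
    M = np.kron(Jhat, A0) + np.kron(Jchk, A1)         # Jhat (x) A0 + Jchk (x) A1
    E = np.kron(Jhat, np.eye(n))
    F = np.kron(Jchk, np.eye(n))
    return M.conj().T @ P @ E + E.T @ P @ M + E.T @ Q1 @ E - F.T @ Q1 @ F

# Illustrative data: A0 Hurwitz, A1 a modest perturbation direction
A0 = np.diag([-2.0, -3.0])
A1 = 0.5 * np.eye(2)
k = 1
P = np.eye(k * 2)    # P  in H_{k^m n}; here k^m n = 2
Q1 = np.eye(k * 2)   # Q1 in H_{k^m n}

R = R_operator(A0, A1, P, Q1, k)
feasible = (np.min(np.linalg.eigvalsh(P)) > 0 and
            np.max(np.linalg.eigvalsh(R)) < 0)
print("LMI (2,3) satisfied by this (P, Q1):", feasible)
```

In practice one would search for (P, Q_1, . . . , Q_m) with a semidefinite programming solver rather than guess them; the point here is only that the block structure of (3) falls out directly from `np.kron`.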
Open Problem 2: Determine whether the quantity k*_{n,m} ≝ sup{k*(A_0, A_1, . . . , A_m) : A_0, A_1, . . . , A_m ∈ C^{n×n}} is finite. In this case, provide an upper bound on its value.

If k*_{n,m} < +∞, then, for any A_0, A_1, . . . , A_m ∈ C^{n×n}, property (1) holds if and only if LMI (2,3) is solvable for k = k*_{n,m}.

2 MOTIVATIONS AND COMMENTS

We expose here some problems related to property (1).

Robust stability

Property (1) is equivalent to asymptotic stability of the uncertain system
\[ \dot{x} = (A_0 + z_1 A_1 + \cdots + z_m A_m)\, x \tag{4} \]
for any value of z ∈ D^m. Usual approaches leading to sufficient LMI conditions for robust stability are based on the search for quadratic Lyapunov functions x(t)^H S x(t) with constant S (see the related bibliography in [2, p. 72–73]), or with parameter-dependent S(z), namely affine [8, 7, 5, 6, 12] and, more recently, quadratic [19, 20]. Methods based on piecewise quadratic Lyapunov functions [21, 13] and LMIs with an augmented number of variables [9, 11] also provide sufficient conditions for robust stability. The approach leading to the result exposed in §1 systematizes the idea of expanding S(z) in powers of the parameters. Indeed, robust stability of (4) guarantees the existence of a Lyapunov function x(t)^H S(z) x(t) with S(z) polynomial with respect to z and z̄ in D^m, and the integer k is related to the degree of this polynomial [1].

Computation of structured singular values with repeated scalar blocks

Property (1) is equivalent to µ_∆(A) < 1, for a certain matrix A deduced from A_0, A_1, . . . , A_m, and a set ∆ of complex uncertainties with m + 1 repeated scalar blocks. Evaluation of the structured singular value (a standard and powerful tool of robust analysis) has been proved to be an NP-hard problem; see [3, 16]. Hope had dawned that its standard, efficiently computable, upper bound could be a satisfying approximant [17], but the gap between the two measures has later on been proved infinite [18, 14]. The approach in §1 could offer an attractive numerical alternative for the same purpose, as the resolution of LMIs is a convex problem. It provides a family of convex relaxations, of arbitrary precision, of a class of NP-hard problems. The complexity results evoked previously imply the existence of k*(A_0, A_1, . . .
, A_m) such that property (1) is equivalent to solvability of LMI (2,3) for k = k*: first, check that µ_∆(A) < 1; if this is true, then set k* to the smallest k such that LMI (2,3) is solvable, otherwise put k* = 1. This algorithm is, of course, a disaster from the point of view of complexity and computation time, and it does not answer Problem 1. Concerning the value of k*_{n,m} in Problem 2, its growth at infinity should be faster than any power of n, unless P=NP.

Delay-independent stability of delay systems with noncommensurate delays

Property (1) is a strong version of the delay-independent stability of the functional differential equation of retarded type ẋ = A_0 x(t) + A_1 x(t − h_1) + ··· + A_m x(t − h_m), that is, asymptotic stability for any value of h_1, ..., h_m ≥ 0, see [10, 2, 4]. This problem has been recognized as NP-hard [15]. Solving LMI (2,3) explicitly provides a quadratic Lyapunov-Krasovskii functional independent of the values of the delays [1].

Robust stability of discrete-time systems and stability of multidimensional (nD) systems

Understanding how to cope with the choice of k in applying LMI (2,3) should also lead to progress in the analysis of the discrete-time analogue of (4), the uncertain system x_{k+1} = (A_0 + z_1 A_1 + ··· + z_m A_m) x_k. Similarly, stability analysis for multidimensional systems (a discrete-time analogue of the functional differential equations of neutral type) would benefit from such contributions.

BIBLIOGRAPHY

[1] P.-A. Bliman, "A convex approach to robust stability for linear systems with uncertain scalar parameters," Research report no. 4316, INRIA, 2001. Available online at http://www.inria.fr/rrrt/rr4316.html

[2] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, "Linear matrix inequalities in system and control theory," SIAM Studies in Applied Mathematics, vol. 15, 1994.

[3] R. P. Braatz, P. M. Young, J. C. Doyle, M. Morari, "Computational complexity of µ calculation," IEEE Trans.
Automat. Control, 39, no. 5, 1000–1002, 1994.

[4] J. Chen, H. A. Latchman, "Frequency sweeping tests for stability independent of delay," IEEE Trans. Automat. Control, 40, no. 9, 1640–1645, 1995.

[5] M. Dettori, C. W. Scherer, "Robust stability analysis for parameter dependent systems using full block S-procedure," Proc. of 37th IEEE CDC, Tampa, Florida, 2798–2799, 1998.

[6] M. Dettori, C. W. Scherer, "New robust stability and performance conditions based on parameter dependent multipliers," Proc. of 39th IEEE CDC, Sydney, Australia, 2000.

[7] E. Feron, P. Apkarian, P. Gahinet, "Analysis and synthesis of robust control systems via parameter-dependent Lyapunov functions," IEEE Trans. Automat. Control, 41, no. 7, 1041–1046, 1996.

[8] P. Gahinet, P. Apkarian, M. Chilali, "Affine parameter-dependent Lyapunov functions and real parametric uncertainty," IEEE Trans. Automat. Control, 41, no. 3, 436–442, 1996.

[9] J. C. Geromel, M. C. de Oliveira, L. Hsu, "LMI characterization of structural and robust stability," Linear Algebra Appl., 285, no. 1–3, 69–80, 1998.

[10] J. K. Hale, E. F. Infante, F. S. P. Tsen, "Stability in linear delay equations," J. Math. Anal. Appl., 115, 533–555, 1985.

[11] D. Peaucelle, D. Arzelier, O. Bachelier, J. Bernussou, "A new robust D-stability condition for real convex polytopic uncertainty," Systems and Control Letters, 40, no. 1, 21–30, 2000.

[12] D. C. W. Ramos, P. L. D. Peres, "An LMI approach to compute robust stability domains for uncertain linear systems," Proc. ACC, Arlington, Virginia, 4073–4078, 2001.

[13] A. Rantzer, M. Johansson, "Piecewise linear quadratic optimal control," IEEE Trans. Automat. Control, 45, no. 4, 629–637, 2000.

[14] M. Sznaier, P. A. Parrilo, "On the gap between µ and its upper bound for systems with repeated uncertainty blocks," Proc. of 38th IEEE CDC, Phoenix, Arizona, 4511–4516, 1999.

[15] O. Toker, H.
Özbay, "Complexity issues in robust stability of linear delay-differential systems," Math. Control Signals Systems, 9, no. 4, 386–400, 1996.

[16] O. Toker, H. Özbay, "On the complexity of purely complex µ computation and related problems in multidimensional systems," IEEE Trans. Automat. Control, 43, no. 3, 409–414, 1998.

[17] O. Toker, B. de Jager, "Conservatism of the standard upper bound test: is sup(µ̄/µ) finite? is it bounded by 2?," In: Open Problems in Mathematical Systems and Control Theory (V. D. Blondel, E. D. Sontag, M. Vidyasagar, J. C. Willems, eds.), Springer-Verlag, London, 1999.

[18] S. Treil, "The gap between complex structured singular value µ and its upper bound is infinite," 1999. Available online at http://www.math.msu.edu/treil/papers/mu/muabs.html

[19] A. Trofino, "Parameter dependent Lyapunov functions for a class of uncertain linear systems: an LMI approach," Proc. of 38th IEEE CDC, Phoenix, Arizona, 2341–2346, 1999.

[20] A. Trofino, C. E. de Souza, "Biquadratic stability of uncertain linear systems," Proc. of 38th IEEE CDC, Phoenix, Arizona, 1999.

[21] L. Xie, S. Shishkin, M. Fu, "Piecewise Lyapunov functions for robust stability of linear time-varying systems," Syst. Contr. Lett., 31, no. 3, 165–171, 1997.

Problem 10.2
When is a pair of matrices stable? Vincent D. Blondel, Jacques Theys
Department of Mathematical Engineering, University of Louvain, Avenue Georges Lemaître 4, B-1348 Louvain-la-Neuve, Belgium, [email protected], [email protected]

John N. Tsitsiklis
Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139, USA, [email protected]

1 STABILITY OF ALL PRODUCTS

We consider problems related to the stability of sets of matrices. Let Σ be a finite set of n × n real matrices. Given a system of the form

x_{t+1} = A_t x_t,  t = 0, 1, ...

suppose that it is known that A_t ∈ Σ for each t, but that the exact value of A_t is not known a priori because of exogenous conditions or changes in the operating point of the system. Such systems can also be thought of as time-varying systems. We say that such a system is stable if
lim_{t→∞} x_t = 0

for all initial states x_0 and all sequences of matrix products. This condition is equivalent to the requirement
lim_{t→∞} A_{i_t} ··· A_{i_1} A_{i_0} = 0

for all infinite sequences of indices. Sets of matrices that satisfy this condition are said to be stable.

Problem 1: Under what conditions is a given set of matrices stable?

Conditions for stability are trivial for matrices of dimension one (all scalars must be of magnitude strictly less than one), and are well-known for sets that contain only one matrix (the eigenvalues of the matrix must be of magnitude strictly less than one). We are asking for stability conditions in more general cases. The matrices in the set must of course have all their eigenvalues of magnitude strictly less than one. This condition does not suffice in general, as it is possible to obtain an unstable dynamical system by switching between two stable linear dynamics. Consider, for instance, the matrices

$$A_0 = \alpha \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \quad\text{and}\quad A_1 = \alpha \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}.$$

These matrices are stable iff α < 1. Consider, then, the product

$$A_0 A_1 = \alpha^2 \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}.$$

It is immediate to verify that the stability of this matrix is equivalent to the condition α < (2/(3 + 5^{1/2}))^{1/2} ≈ 0.618, and so the stability of A_0 and A_1 does not imply that of the set {A_0, A_1}. Except for elementary cases, no satisfactory conditions are presently available for checking the stability of sets of matrices. In fact, the problem is open even in the case of matrices of dimension two. From a set of m matrices of dimension n, it is easy to construct two matrices of dimension nm whose stability is equivalent to that of the original set. Indeed, let Σ = {A_1, ..., A_m} be a given set and define B_0 = diag(A_1, ..., A_m) and B_1 = T ⊗ I, where T is an m × m cyclic permutation matrix, ⊗ is the Kronecker matrix product, and I is the n × n identity matrix. Then the stability of the pair of matrices {B_0, B_1} is easily seen to be equivalent to that of Σ (see [3] for a more detailed argument). Our question is thus: When is a pair of matrices stable?
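The switching example above is easy to check numerically. The following sketch (our own illustration using NumPy; the value 0.9 for α is an arbitrary choice strictly between 0.618 and 1) confirms that each matrix is stable on its own while the alternating product is not:

```python
import numpy as np

def spectral_radius(M):
    # largest eigenvalue magnitude; M is stable iff this is < 1
    return max(abs(np.linalg.eigvals(M)))

alpha = 0.9  # any alpha with 0.618 < alpha < 1 exhibits the phenomenon
A0 = alpha * np.array([[1.0, 1.0], [0.0, 1.0]])
A1 = alpha * np.array([[1.0, 0.0], [1.0, 1.0]])

# Each matrix alone is stable: both eigenvalues equal alpha < 1 ...
assert spectral_radius(A0) < 1 and spectral_radius(A1) < 1
# ... but the product has spectral radius alpha^2 (3 + sqrt(5))/2 > 1,
# so the periodic switching sequence A0, A1, A0, A1, ... diverges.
assert spectral_radius(A0 @ A1) > 1
print("critical alpha:", np.sqrt(2 / (3 + np.sqrt(5))))  # about 0.618
```

The critical value (2/(3 + 5^{1/2}))^{1/2} equals (5^{1/2} − 1)/2 exactly, the reciprocal of the golden ratio.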
Several results are available in the literature for this problem, see, e.g., the Lie algebra condition given in [9]. The conditions presently available are only partly satisfactory in that they are either incomplete (they do not cover all cases), approximate (see, e.g., [1] and [8]), or not effective. We say that a problem is effectively decidable (or decidable) if there is an algorithm that, upon input of the data associated with an instance of the problem, provides a yes/no answer after a finite amount of computation. The precise definition of what is meant by an algorithm is not critical; most algorithm models proposed so far are known to be equivalent from the point of view of their computing capabilities, and they also coincide with the intuitive notion of what can be effectively achieved (see [10] for a general description of decidability, and [4] for a survey on decidability in systems and control). Problem 1 can thus be made more explicit by asking for an effective decision algorithm for stability of arbitrary finite sets. Problems similar to this one are known to be undecidable (see, e.g., [2] and [3]); also, attempts (including by the authors of this contribution) at finding such an algorithm have so far failed; we therefore risk the conjecture:

Conjecture 1: The problem of determining if a given pair of matrices is stable is undecidable.

2 STABILITY OF ALL PERIODIC PRODUCTS

Problem 1 is related to the generalized spectral radius of sets of matrices, a notion that generalizes to sets of matrices the usual notion of spectral radius of a single matrix. Let ρ(A) denote the spectral radius of a real matrix A,

ρ(A) := max{ |λ| : λ is an eigenvalue of A }.

The generalized spectral radius ρ(Σ) of a finite set of matrices Σ is defined in [7] by

ρ(Σ) = lim sup_{k→∞} ρ_k(Σ),
where, for each k ≥ 1,

ρ_k(Σ) = sup{ (ρ(A_1 A_2 ··· A_k))^{1/k} : each A_i ∈ Σ }.

When Σ consists of just one single matrix, this quantity is equal to the usual spectral radius. Moreover, it is easy to see that, as in the single matrix case, the stability of the set Σ is equivalent to the condition ρ(Σ) < 1, and so Problem 1 is the problem of finding effective conditions on Σ for ρ(Σ) < 1. It is conjectured in [11] that the equality ρ(Σ) = ρ_k(Σ) always occurs for some finite k. This conjecture, known as the finiteness conjecture, can be restated by saying that if a set of matrices Σ is unstable, then there exists a finite unstable product, i.e., if ρ(Σ) ≥ 1, then there exist some k ≥ 1 and A_i ∈ Σ (i = 1, ..., k) such that ρ(A_1 A_2 ··· A_k) ≥ 1. The existence of a finite unstable product is equivalent to the existence of an infinite periodic product that does not converge to zero. We say that a set of matrices is periodically stable if all infinite periodic products of matrices taken in the set converge to zero. Stability clearly implies periodic stability; according to the finiteness conjecture, the converse is also true. The conjecture has been proved to be false in [6]. A simple counterexample is provided in [5], where it is shown that there are uncountably many values of the real parameters a and b for which the pair of matrices

$$a \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad b \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$$

is not stable but is periodically stable. Since stability and periodic stability are not equivalent, the following question naturally arises.

Problem 2: Under what conditions is a given finite set of matrices periodically stable?

BIBLIOGRAPHY

[1] N. E. Barabanov, "A method for calculating the Lyapunov exponent of a differential inclusion," Avtomat. i Telemekh., 4: 53–58, 1989; translation in Automat. Remote Control, 50:4, part 1, 475–479, 1989.

[2] V. D. Blondel and J. N. Tsitsiklis, "When is a pair of matrices mortal?", Information Processing Letters, 63:5, 283–286, 1997.

[3] V. D.
Blondel and J. N. Tsitsiklis, "The boundedness of all products of a pair of matrices is undecidable," Systems and Control Letters, 41(2):135–140, 2000.

[4] V. D. Blondel and J. N. Tsitsiklis, "A survey of computational complexity results in systems and control," Automatica, 36(9), 1249–1274, 2000.

[5] V. D. Blondel, J. Theys, and A. A. Vladimirov, "An elementary counterexample to the finiteness conjecture," forthcoming in SIAM Journal on Matrix Analysis.

[6] T. Bousch and J. Mairesse, "Asymptotic height optimization for topical IFS, Tetris heaps, and the finiteness conjecture," J. Amer. Math. Soc., 15, 77–111, 2002.

[7] I. Daubechies and J. C. Lagarias, "Sets of matrices all infinite products of which converge," Linear Algebra Appl., 162, 227–263, 1992.

[8] G. Gripenberg, "Computing the joint spectral radius," Linear Algebra Appl., 234, 43–60, 1996.

[9] L. Gurvits, "Stability of discrete linear inclusion," Linear Algebra Appl., 231, 47–85, 1995.

[10] J. E. Hopcroft and J. D. Ullman, Introduction to Automata Theory, Languages, and Computation, Addison-Wesley, Reading, MA, 1979.

[11] J. C. Lagarias and Y. Wang, "The finiteness conjecture for the generalized spectral radius of a set of matrices," Linear Algebra Appl., 214, 17–42, 1995.

[12] C. H. Papadimitriou, Computational Complexity, Addison-Wesley, Reading, 1994.

[13] J. N. Tsitsiklis, "The stability of the products of a finite set of matrices," In: Open Problems in Communication and Computation, Springer-Verlag, 1987.

[14] J. N. Tsitsiklis and V. D. Blondel, "Spectral quantities associated with pairs of matrices are hard, when not impossible, to compute and to approximate," Mathematics of Control, Signals, and Systems, 10, 31–40, 1997.

Problem 10.3
Freeness of multiplicative matrix semigroups Vincent D. Blondel
Department of Mathematical Engineering, University of Louvain, Avenue Georges Lemaître 4, B-1348 Louvain-la-Neuve, Belgium, [email protected]

Julien Cassaigne
Institut de Mathématiques de Luminy, UPR 9016, Campus de Luminy, Case 907, 13288 Marseille Cedex 9, France, [email protected]

Juhani Karhumäki
Dept. of Math. & TUCS, University of Turku, FIN-20014 Turku, Finland, [email protected]

1 DESCRIPTION OF THE PROBLEM

Matrices play a major role in control theory. In this note, we consider a decidability question for finitely generated multiplicative matrix semigroups. Such semigroups arise, for example, when considering switched linear systems. We consider embeddings of the free semigroup Σ+ = {a_0, ..., a_{k−1}}+ into the multiplicative semigroup of 2 × 2 matrices over the nonnegative integers N:

ϕ : Σ+ → M_{2×2}(N).

For a two-generator semigroup, i.e., Σ+ = {a, b}+, such embeddings are defined, for example, by the mappings

$$\varphi_1 : \; a \mapsto \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \; b \mapsto \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \quad\text{and}\quad \varphi_2 : \; a \mapsto \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}, \; b \mapsto \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix}. \tag{1}$$

Actually, ϕ_1 provides an embedding of the two-generator free group into the multiplicative semigroup of unimodular matrices, e.g., into SL(2, N). The embedding ϕ_2, in turn, directly extends to all finitely generated free semigroups. Indeed, the mapping

$$\varphi_k : \; a_i \mapsto \begin{pmatrix} k & i \\ 0 & 1 \end{pmatrix} \quad\text{for } i = 0, \dots, k-1$$

yields an embedding {a_0, ..., a_{k−1}}+ → M_{2×2}(N). (2) To see this, it is enough to verify that

$$\varphi_k(w) = \begin{pmatrix} k^{|w|} & \bar{k}(w) \\ 0 & 1 \end{pmatrix},$$

where k̄(w) denotes the number represented in base k by the word w ∈ {a_0, ..., a_{k−1}}+ under the identification: a_i corresponds to the digit i. Embeddings of countably generated free semigroups are obtained by employing a morphism {a_0, a_1, ...}+ → {a, b}+, given, for example, by the mapping τ : a_i → aⁱb. Then ϕ_2 ∘ τ yields the required embedding. In the above examples it is easy to check, as we did for ϕ_k, k ≥ 2, that the mappings are indeed embeddings. In general, the situation is strikingly different. In fact, we formulate:

Problem 1: Is it decidable whether a given morphism ϕ : Σ+ → M_{2×2}(N) is an embedding, or, equivalently, whether a finite set X = {A_0, ..., A_{k−1}} of 2 × 2 matrices over N is a free generating set of X+?

Problem 1 deserves two comments. First, we could consider matrices over the rational numbers rather than the nonnegative integers.
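As a small sanity check of these embedding claims (our own sketch; exhaustive only up to a fixed word length, so it is evidence rather than a proof), one can verify with exact integer arithmetic that ϕ_2 sends distinct words to distinct matrices:

```python
import itertools

# phi_2 from (1): a -> [[2,0],[0,1]], b -> [[2,1],[0,1]], as 2x2 tuples
GEN = {"a": ((2, 0), (0, 1)), "b": ((2, 1), (0, 1))}

def mat_mul(X, Y):
    # exact 2x2 integer matrix product
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
        for i in range(2))

def image(word):
    M = ((1, 0), (0, 1))
    for letter in word:
        M = mat_mul(M, GEN[letter])
    return M

# phi_2 is an embedding iff distinct words have distinct images; we check
# this exhaustively for all 510 nonempty words of length at most 8.
words = ["".join(t) for n in range(1, 9)
         for t in itertools.product("ab", repeat=n)]
assert len({image(w) for w in words}) == len(words)
print(len(words), "words, all images distinct")
```

Words of a fixed length n already have distinct images, since the top-right entry then ranges bijectively over the length-n digit strings; the check above confirms this across lengths as well.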
This variant is, as is not too difficult to see, equivalent to the case where the matrices are integer matrices. Second, the problem is open even if only two matrices are considered:

Problem 2: Is it decidable whether the multiplicative semigroup generated by two 2 × 2 matrices over N is free?

Of course, the nontrivial part of Problem 2 is the case when the semigroup is of rank 2. In many concrete examples, the freeness is easy to conclude, as we saw. Amazingly, however, the problem remains open even if the matrices are upper-triangular, as ϕ_2 above is.

2 MOTIVATION AND HISTORY

The importance of Problem 1 should be obvious, without any further motivation. Indeed, the product of matrices is one of the most fundamental operations in mathematics. In linear algebra it witnesses the composition of linear mappings, and in automata theory it defines the behavior of a finite automaton, cf. [7], to mention just two examples. However, the importance of Problem 1 goes far beyond these general reasons. The existence of embeddings like (2) has been known for a long time. Already in the 1920s, J. Nielsen [12] was using these when studying free groups. Such embeddings are extremely useful for both of the theories involved. On the one hand, they can be used to transfer results on words to matrices. Undecidability is an example of a property that is natural and common in the theory of words, and translatable to matrices via these embeddings. This, indeed, is essential in the spirit of this note. On the other hand, there exist many deep results on matrices that have turned out to be useful for solving problems on words. A splendid example is Hilbert's Basis Theorem, which implies, again via the above embeddings, a fundamental compactness property of word equations, the so-called Ehrenfeucht Compactness Property, cf. [5].
To the best of the authors' knowledge, the problems mentioned were first discussed in [10], where Problem 1 was explicitly stated and its variant for 3 × 3 matrices over N was shown to be undecidable. In [4] the undecidability was extended to upper-triangular 3 × 3 matrices over N, and moreover Problem 2 was formulated. Similar problems on matrices had been considered much earlier. Among the oldest results is a remarkable paper by M. Paterson [13], where he shows that it is undecidable whether the multiplicative semigroup generated by a finite set of 3 × 3 integer matrices contains the zero matrix. In other words, the mortality problem, cf. [16], for 3 × 3 integer matrices is undecidable. According to current knowledge, it remains undecidable even in the cases when the finite set consists of only seven 3 × 3 integer matrices or of only two 21 × 21 integer matrices, cf. [11] and [8, 3, 2]. For 2 × 2 matrices, the mortality problem is open. The above undecidability results can be interpreted as follows. First, the existence of the zero element in a two-generator (matrix) semigroup is undecidable, cf. [3]. Second, it is also undecidable whether some composition of an effectively given finite set of linear transformations of the Euclidean space R³ equals the zero mapping. The above motivates a related question: is it decidable whether a finitely generated semigroup contains the unit element? In terms of matrices, we state:

Problem 3: Is it decidable whether the multiplicative semigroup S generated by a finite set of n × n integer matrices contains the unit matrix?

For n = 2 this is shown to be decidable in the case of two matrices in [11], and in the case of an arbitrary number of matrices in [6], but in general the problem is open. A related problem, also open at the moment, asks whether the semigroup S contains a diagonal matrix. In this context, the following example is of interest.

Example 1: For two morphisms h, g : {a_0, ...
, a_{k−1}}+ → {2, 3}+ define the matrices

$$M(i) = \begin{pmatrix} 10^{|h(a_i)|} & 10^{|h(a_i)|} - 10^{|g(a_i)|} & h(a_i) - g(a_i) \\ 0 & 10^{|g(a_i)|} & g(a_i) \\ 0 & 0 & 1 \end{pmatrix}$$

for i = 0, ..., k − 1, where |·| denotes word length and, inside the matrix, a word over {2, 3} is identified with the number it represents in base 10. A straightforward computation shows that, for any w = a_{i_1} ... a_{i_t}:

$$M(i_1) \cdots M(i_t) = \begin{pmatrix} 10^{|h(w)|} & 10^{|h(w)|} - 10^{|g(w)|} & h(w) - g(w) \\ 0 & 10^{|g(w)|} & g(w) \\ 0 & 0 & 1 \end{pmatrix}.$$

Consequently, due to the undecidability of the Post Correspondence Problem, cf. [14], it is also undecidable whether the multiplicative semigroup generated by a finite set of 3 × 3 integer matrices contains a matrix of the form

$$\begin{pmatrix} \alpha & 0 & 0 \\ 0 & \beta & \delta \\ 0 & 0 & \gamma \end{pmatrix}.$$

We do not know how to get rid of δ.

3 AVAILABLE RESULTS

Due to the embedding Σ+ → M_{2×2}(N), one way to view Problem 1 is as an extension of the problem of deciding whether a finite set of words of Σ+ generates a free subsemigroup of Σ+. This problem is basic in the theory of codes, cf. [1]. It is decidable, even efficiently, as is not too difficult to see, cf., e.g., [15]. Very little seems to be known about Problem 1. As we already said, the corresponding problem for 3 × 3 matrices is undecidable, the proof being a reduction from the Post Correspondence Problem, as in Example 1. Somewhat more intricate reduction techniques were used in [4] in order to show that the undecidability holds even for upper-triangular 3 × 3 matrices over N. A fundamental observation in these proofs is that the product monoid Σ+ × ∆+ is embeddable not only into the semigroup of matrices of dimension four but also into that of dimension three. In other words, there exists an embedding ϕ : Σ+ × ∆+ → M_{3×3}(N). On the other hand, as also shown in [4], there does not exist any such embedding into the semigroup of 2 × 2 matrices, i.e., into M_{2×2}(N). Problem 2 was formulated after vain attempts to solve it in [4]. Actually, even the case when both of the matrices are upper-triangular, i.e., of the form

$$\begin{pmatrix} \alpha & \beta \\ 0 & \gamma \end{pmatrix},$$

remained unanswered. Only several sufficient (effective) conditions for freeness were established.
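By contrast with the matrix setting, the word problem just mentioned is indeed efficiently decidable: the classical Sardinas-Patterson test decides whether a finite set of words is a code, i.e., whether it freely generates the subsemigroup it spans. A minimal sketch of the test (our own illustration, not taken from the text):

```python
def quotient(A, B):
    # A^{-1}B = { s : a + s is in B for some a in A }, s possibly empty
    return {b[len(a):] for a in A for b in B if b.startswith(a)}

def is_code(C):
    """Sardinas-Patterson test: True iff the finite set of words C is a
    code, i.e., C freely generates the subsemigroup C+."""
    C = set(C)
    S = quotient(C, C) - {""}       # dangling suffixes after one overlap
    seen = set()
    while S and frozenset(S) not in seen:
        if "" in S:                 # some product factorizes in two ways
            return False
        seen.add(frozenset(S))
        S = quotient(C, S) | quotient(S, C)
    return "" not in S

# {"a","ab","bb"} is a code; {"a","ab","ba"} is not: a.ba = ab.a = "aba"
assert is_code({"a", "ab", "bb"})
assert not is_code({"a", "ab", "ba"})
print("Sardinas-Patterson checks passed")
```

The iteration terminates because every dangling-suffix set S is a subset of the finite set of suffixes of words in C, so the `seen` bookkeeping must eventually repeat a set.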
Even for some very particular cases, we do not know whether the semigroup is free. In particular, we do not know whether the matrices

$$A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} 3 & 5 \\ 0 & 5 \end{pmatrix}$$

generate a free semigroup. We only know that these matrices do not satisfy any equation where both sides are of length at most 20. In conclusion, we hope to have pointed out a problem that is not only very simply formulated, but also fundamental and challenging.

BIBLIOGRAPHY

[1] J. Berstel and D. Perrin, Theory of Codes, Academic Press, 1986.

[2] V. Blondel and J. Tsitsiklis, "When is a pair of matrices mortal?," Inform. Process. Lett. 63, 283–286, 1997.

[3] J. Cassaigne and J. Karhumäki, "Examples of undecidable problems for 2-generator matrix semigroups," Theoret. Comput. Sci. 204, 29–34, 1998.

[4] J. Cassaigne, T. Harju, and J. Karhumäki, "On the undecidability of freeness of matrix semigroups," Int. J. Algebra Comp. 9, 295–305, 1999.

[5] C. Choffrut and J. Karhumäki, "Combinatorics of words," In: G. Rozenberg and A. Salomaa (eds.), Handbook of Formal Languages, vol. I, Springer, 1997, 329–438.

[6] C. Choffrut and J. Karhumäki, "On decision problems on integer matrices," manuscript.

[7] S. Eilenberg, Automata, Languages and Machines, vol. A, Academic Press, 1974.

[8] V. Halava and T. Harju, "Mortality in matrix semigroups," Amer. Math. Monthly 108, 649–653, 2001.

[9] T. Harju and J. Karhumäki, "Morphisms," In: G. Rozenberg and A. Salomaa (eds.), Handbook of Formal Languages, vol. I, Springer, 1997, 439–510.

[10] D. Klarner, J.-C. Birget, and W. Satterfield, "On the undecidability of the freeness of integer matrix semigroups," Int. J. Algebra Comp. 1, 223–226, 1991.

[11] F. Mazoit, "Autour de quelques problèmes de décidabilité sur des semigroupes de matrices," manuscript, 1998.

[12] J. Nielsen, "Die Isomorphismengruppe der freien Gruppen," Math. Ann. 91, 169–209, 1924.

[13] M. Paterson, "Unsolvability in 3 × 3 matrices," Studies Appl. Math. 49, 105–107, 1970.

[14] E.
Post, "A variant of a recursively unsolvable problem," Bull. Amer. Math. Soc. 52, 264–268, 1946.

[15] W. Rytter, "The space complexity of the unique decipherability problem," Inform. Process. Lett. 23, 1–3, 1986.

[16] P. Schultz, "Mortality of 2 × 2 matrices," Amer. Math. Monthly 84, 463–464, 1977.

Problem 10.4
Vector-valued quadratic forms in control theory Francesco Bullo
Coordinated Science Laboratory, University of Illinois, Urbana-Champaign, IL 61801, USA, [email protected]

Jorge Cortés
Coordinated Science Laboratory, University of Illinois, Urbana-Champaign, IL 61801, USA, [email protected]

Andrew D. Lewis
Mathematics & Statistics, Queen's University, Kingston, ON K7L 3N6, Canada, [email protected]

Sonia Martínez
Matemática Aplicada IV, Universidad Politécnica de Cataluña, Barcelona, 08800, Spain, [email protected]

1 PROBLEM STATEMENT AND HISTORICAL REMARKS

For finite-dimensional R-vector spaces U and V, we consider a symmetric bilinear map B : U × U → V. This then defines a quadratic map Q_B : U → V by Q_B(u) = B(u, u). Corresponding to each λ ∈ V* is an R-valued quadratic form λQ_B on U defined by λQ_B(u) = λ · Q_B(u). B is definite if there exists λ ∈ V* so that λQ_B is positive-definite. B is indefinite if, for each λ ∈ V* \ ann(image(Q_B)), λQ_B is neither positive- nor negative-semidefinite, where ann denotes the annihilator.

Given a symmetric bilinear map B : U × U → V, the problems we consider are as follows.

i. Find necessary and sufficient conditions characterizing when Q_B is surjective.

ii. If Q_B is surjective and v ∈ V, design an algorithm to find a point u ∈ Q_B^{−1}(v).

iii. Find necessary and sufficient conditions to determine when B is indefinite.

From the computational point of view, the first two questions are the more interesting ones. Both can be shown to be NP-complete, whereas the third one can be recast as a semidefinite programming problem.¹ Actually, our main interest is in a geometric characterization of these problems. Section 4 below constitutes an initial attempt to unveil the essential geometry behind these questions. By understanding the geometry of the problem properly, one may be led to simple characterizations like the one presented in Proposition 3, which turn out to be checkable in polynomial time for certain classes of quadratic mappings.

Before we comment on how our problem impinges on control theory, let us provide some historical context for it as a purely mathematical one. The classification of R-valued quadratic forms is well understood. However, for quadratic maps taking values in vector spaces of dimension two or higher, the classification problem becomes more difficult.
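The definitions above are easy to probe numerically. In the sketch below (our own illustration; random sampling over λ gives evidence of indefiniteness, not a proof), we take the map Q_B(x, y, z) = (xy, xz, yz), represent each scalar form λQ_B by the symmetric matrix λ_1 B_1 + λ_2 B_2 + λ_3 B_3, and observe that this matrix is always traceless, so no λ can make λQ_B positive-definite:

```python
import numpy as np

# Q_B(x,y,z) = (xy, xz, yz): component i equals u^T B_i u, u = (x,y,z)
B = [np.array([[0, .5, 0], [.5, 0, 0], [0, 0, 0]]),   # xy
     np.array([[0, 0, .5], [0, 0, 0], [.5, 0, 0]]),   # xz
     np.array([[0, 0, 0], [0, 0, .5], [0, .5, 0]])]   # yz

rng = np.random.default_rng(0)
for _ in range(1000):
    lam = rng.standard_normal(3)
    M = sum(l * Bi for l, Bi in zip(lam, B))   # matrix of lambda.Q_B
    ev = np.linalg.eigvalsh(M)                 # ascending eigenvalues
    # trace(M) = 0 and M != 0, so the eigenvalues of lambda.Q_B always
    # have both signs: lambda.Q_B is neither positive- nor
    # negative-semidefinite for any sampled nonzero lambda
    assert ev[0] < 0 < ev[-1]
print("1000 sampled directions: every lambda.Q_B is sign-indefinite")
```

For this particular map the traceless-matrix argument actually proves indefiniteness; in general, sampling only supports, and cannot certify, it.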
The theory can be thought of as beginning with the work of Kronecker, who obtained a finite classification for pairs of symmetric matrices. For three or more symmetric matrices, the fact that the classification problem has an uncountable number of equivalence classes for a given dimension of the domain follows from the work of Kac [12]. For quadratic forms, in a series of papers Dines (see [8] and the references cited therein) investigated conditions under which a finite collection of R-valued quadratic maps is simultaneously positive-definite. The study of vector-valued quadratic maps is ongoing. A recent paper is [13], to which we refer for other references.

2 CONTROL THEORETIC MOTIVATION

Interestingly, and perhaps not obviously, vector-valued quadratic forms come up in a variety of places in control theory. We list a few of these here.
¹ We thank an anonymous referee for these observations.

Optimal control: Agračhev [2] explicitly realizes second-order conditions for optimality in terms of vector-valued quadratic maps. The geometric approach leads naturally to the consideration of vector-valued quadratic maps, and here the necessary conditions involve definiteness of these maps. Agračhev and Gamkrelidze [1, 3] look at the map λ → λQ_B from V* into the set of R-valued quadratic forms. Since λQ_B is an R-valued quadratic form, one can talk about its index and rank (the number of −1's and nonzero terms, respectively, along the diagonal when the form is diagonalized). In [1, 3] the topology of the surfaces of constant index of the map λ → λQ_B is investigated.

Local controllability: The use of vector-valued quadratic forms arises from the attempt to arrive at feedback-invariant conditions for controllability. Basto-Gonçalves [6] gives a second-order sufficient condition for local controllability, one of whose hypotheses is that a certain vector-valued quadratic map be indefinite (although the condition is not stated in this way). This condition is somewhat refined in [11], where a necessary condition for local controllability is also given. Included in the hypotheses of the latter is the condition that a certain vector-valued quadratic map be definite.

Control design via power series methods and singular inversion: Numerous control design problems can be tackled using power series and inversion methods. The early references [5, 9] show how to solve the optimal regulator problem, and the recent work in [7] proposes local steering algorithms. These strong results apply to linearly controllable systems, and no general methods are yet available under only second-order sufficient controllability conditions.
While for linearly controllable systems the classic inverse function theorem suffices, the key requirement for second-order controllable systems is the ability to check surjectivity and compute an inverse function for certain vector-valued quadratic forms.

Dynamic feedback linearization: In [14] Sluis gives a necessary condition for the dynamic feedback linearization of a system ẋ = f(x, u), x ∈ R^n, u ∈ R^m. The condition is that, for each x ∈ R^n, the set D_x = { f(x, u) ∈ T_x R^n | u ∈ R^m } admits a ruling, that is, a foliation of D_x by lines. Some manipulation with differential forms turns this necessary condition into one involving a symmetric bilinear map B. The condition, it turns out, is that Q_B^{−1}(0) ≠ {0}. This is shown by Agračhev [1] to generically imply that Q_B is surjective.

3 KNOWN RESULTS

Let us state a few results along the lines of our problem statement that are known to the authors. The first is readily shown to be true (see [11] for the proof). If X is a topological space with subsets A ⊂ S ⊂ X, we denote by int_S(A) the interior of A relative to the induced topology on S. If S ⊂ V, aff(S) and conv(S) denote, respectively, the affine hull and the convex hull of S.

Proposition 1: Let B : U × U → V be a symmetric bilinear map with U and V finite-dimensional. The following statements hold:

(i) B is indefinite if and only if 0 ∈ int_{aff(image(Q_B))}(conv(image(Q_B)));

(ii) B is definite if and only if there exists a hyperplane P ⊂ V so that image(Q_B) ∩ P = {0} and so that image(Q_B) lies on one side of P;

(iii) if Q_B is surjective, then B is indefinite.

The converse of (iii) is false. The quadratic map from R³ to R³ defined by Q_B(x, y, z) = (xy, xz, yz) may be shown to be indefinite but not surjective. Agračhev and Sarychev [4] prove the following result. We denote by ind(Q) the index of a quadratic map Q : U → R on a vector space U.

Proposition 2: Let B : U × U → V be a symmetric bilinear map with V finite-dimensional.
If ind(λQ_B) ≥ dim(V) for any λ ∈ V* \ {0}, then Q_B is surjective.

This sufficient condition for surjectivity is not necessary. The quadratic map from R² to R² given by Q_B(x, y) = (x² − y², xy) is surjective, but does not satisfy the hypotheses of Proposition 2.

4 PROBLEM SIMPLIFICATION

One of the difficulties with studying vector-valued quadratic maps is that they are somewhat difficult to get one's hands on. However, it turns out to be possible to simplify their study by a reduction to a rather concrete problem. Here we describe this process, only sketching the details of how to go from a given symmetric bilinear map B : U × U → V to the reformulated end problem.

We first simplify the problem by imposing an inner product on U and choosing an orthonormal basis, so that we may take U = R^n. We let Sym_n(R) denote the set of symmetric n × n matrices with entries in R. On Sym_n(R) we use the canonical inner product ⟨A, B⟩ = tr(AB). We consider the map π : R^n → Sym_n(R) defined by π(x) = x xᵗ, where t denotes transpose. Thus the image of π is the set of positive-semidefinite symmetric matrices of rank at most one. If we identify Sym_n(R) ≃ R^n ⊗ R^n, then π(x) = x ⊗ x. Let K_n be the image of π and note that it is a cone of dimension n in Sym_n(R) having a singularity only at its vertex at the origin. Furthermore, K_n may be shown to be a subset of the hypercone in Sym_n(R) defined by those matrices A in Sym_n(R) forming angle arccos(1/√n) with the identity matrix. Thus the ray from the origin in Sym_n(R) through the identity matrix is an axis for the cone K_n. In algebraic geometry, the image of K_n under the projectivization of Sym_n(R) is known as the Veronese surface [10], and as such is well-studied, although perhaps not along lines that bear directly on the problems of interest in this article.

We now let B : R^n × R^n → V be a symmetric bilinear map with V finite-dimensional.
Using the universal mapping property of the tensor product, B induces a linear map B̃ : Sym_n(R) ≅ R^n ⊗ R^n → V with the property that B̃ ∘ π = Q_B. The dual of this map gives an injective linear map B̃* : V* → Sym_n(R) (here we assume that the image of B spans V). By an appropriate choice of inner product on V, one can render the embedding B̃* an isometric embedding of V in Sym_n(R). Let us denote by L_B the image of V under this isometric embedding. One may then show that, with these identifications, the image of Q_B in V is the orthogonal projection of K_n onto the subspace L_B. Thus we reduce the problem to one of orthogonal projection of a canonical object, K_n, onto a subspace in Sym_n(R)!

To simplify things further, we decompose L_B into a component along the identity matrix in Sym_n(R) and a component orthogonal to the identity matrix. However, the matrices orthogonal to the identity are readily seen to be simply the traceless n × n symmetric matrices. Using our picture of K_n as a subset of a hypercone having as an axis the ray through the identity matrix, we see that questions of surjectivity, indefiniteness, and definiteness of B impact only on the projection of K_n onto that component of L_B orthogonal to the identity matrix. The following summarizes the above discussion.

The problem of studying the image of a vector-valued quadratic form can be reduced to studying the orthogonal projection of K_n ⊂ Sym_n(R), the unprojectivized Veronese surface, onto a subspace of the space of traceless symmetric matrices.

This is, we think, a beautiful interpretation of the study of vector-valued quadratic mappings, and will surely be a useful formulation of the problem. For example, with it one easily proves the following result.

Proposition 3: If dim(U) = dim(V) = 2 with B : U × U → V a symmetric bilinear map, then Q_B is surjective if and only if B is indefinite.

BIBLIOGRAPHY

[1] A. A. Agrachev, "The topology of quadratic mappings and Hessians of smooth mappings," J.
Soviet Math., 49(3):990–1013, 1990.

[2] A. A. Agrachev, "Quadratic mappings in geometric control theory," J. Soviet Math., 51(6):2667–2734, 1990.

[3] A. A. Agrachev and R. V. Gamkrelidze, "Quadratic mappings and vector functions: Euler characteristics of level sets," J. Soviet Math., 55(4):1892–1928, 1991.

[4] A. A. Agrachev and A. V. Sarychev, "Abnormal sub-Riemannian geodesics: Morse index and rigidity," Ann. Inst. H. Poincaré Anal. Non Linéaire, 13(6):635–690, 1996.

[5] E. G. Al'brekht, "On the optimal stabilization of nonlinear systems," J. Appl. Math. Mech., 25:1254–1266, 1961.

[6] J. Basto-Gonçalves, "Second-order conditions for local controllability," Systems Control Lett., 35(5):287–290, 1998.

[7] W. T. Cerven and F. Bullo, "Constructive controllability algorithms for motion planning and optimization," IEEE Trans. Automat. Control, forthcoming.

[8] L. L. Dines, "On linear combinations of quadratic forms," Bull. Amer. Math. Soc. (N.S.), 49:388–393, 1943.

[9] A. Halme, "On the nonlinear regulator problem," J. Optim. Theory Appl., 16(3–4):255–275, 1975.

[10] J. Harris, Algebraic Geometry: A First Course, Number 133 in Graduate Texts in Mathematics, Springer-Verlag, New York-Heidelberg-Berlin, 1992.

[11] R. M. Hirschorn and A. D. Lewis, "Geometric local controllability: Second-order conditions," preprint, February 2002.

[12] V. G. Kac, "Root systems, representations of quivers and invariant theory," In: Invariant Theory, number 996 in Lecture Notes in Mathematics, pages 74–108, Springer-Verlag, New York-Heidelberg-Berlin, 1983.

[13] D. B. Leep and L. M. Schueller, "Classification of pairs of symmetric and alternating bilinear forms," Exposition. Math., 17(5):385–414, 1999.

[14] W. M. Sluis, "A necessary condition for dynamic feedback linearization," Systems Control Lett., 21(4):277–283, 1993.

Problem 10.5
Nilpotent bases of distributions Henry G. Hermes
Department of Mathematics, University of Colorado at Boulder, Boulder, CO 80309, USA, [email protected]

Matthias Kawski^1
Department of Mathematics and Statistics, Arizona State University, Tempe, AZ 85287-1804, USA, [email protected]

1 DESCRIPTION OF THE PROBLEM

When modeling controlled dynamical systems one commonly chooses individual control variables u_1, ..., u_m that appear natural from a physical or practical point of view. In the case of nonlinear models evolving on R^n (or, more generally, an analytic manifold M^n) that are affine in the control, such a choice corresponds to selecting vector fields f_0, f_1, ..., f_m : M → TM, and the system takes the form
ẋ = f_0(x) + ∑_{k=1}^m u_k f_k(x).   (1)

From a geometric point of view such a choice appears arbitrary, and the natural objects are not the vector fields themselves but their linear span. Formally, for a set F = {v_1, ..., v_m} of vector fields, define the distribution spanned by F as ∆_F : p ↦ {c_1 v_1(p) + ... + c_m v_m(p) : c_1, ..., c_m ∈ R} ⊆ T_p M. For systems with drift f_0, the geometric object is the map ∆̃_F(x) = {f_0(x) + c_1 f_1(x) + ... + c_m f_m(x) : c_1, ..., c_m ∈ R}, whose image at every point x is an affine subspace of T_x M. The geometric character of the distribution is captured by its invariance under the group of feedback transformations.
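The Lie-bracket structure underlying these distributions can be illustrated concretely. The sketch below (our own illustration, using the standard nonholonomic integrator rather than an example from the text) computes brackets [f, g] = Dg·f − Df·g by central finite differences and exhibits a nilpotent Lie algebra:

```python
import numpy as np

def jac(f, x, h=1e-6):
    # numerical Jacobian of a vector field f at x (central differences)
    n = len(x)
    J = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        J[:, i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return J

def bracket(f, g, x):
    # Lie bracket [f, g](x) = Dg(x) f(x) - Df(x) g(x)
    return jac(g, x) @ f(x) - jac(f, x) @ g(x)

# Nonholonomic integrator: f1 = d/dx1, f2 = d/dx2 + x1 d/dx3
f1 = lambda x: np.array([1.0, 0.0, 0.0])
f2 = lambda x: np.array([0.0, 1.0, x[0]])

x0 = np.array([0.3, -1.2, 0.7])
b12 = bracket(f1, f2, x0)          # approx (0, 0, 1): a genuinely new direction
f3 = lambda x: b12                 # [f1, f2] is a constant field here
# all higher brackets vanish, so L(f1, f2) is nilpotent (of step 2)
print(np.allclose(b12, [0.0, 0.0, 1.0], atol=1e-6))   # True
print(np.allclose(bracket(f1, f3, x0), 0.0, atol=1e-6))  # True
print(np.allclose(bracket(f2, f3, x0), 0.0, atol=1e-6))  # True
```

Here the growth vector at x0 is (2, 3): the bracket [f1, f2] supplies the direction d/dx3 missing from the span of f1 and f2.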
^1 Supported in part by NSF grant DMS 0072369.

In traditional notation (here formulated for systems with drift) these are (analytic) maps (defined on suitable subsets) α : M^n × R^m → R^m such that for each fixed x ∈ M^n the map v ↦ α(x, v) is affine and invertible. Customarily, one identifies α(x, ·) with a matrix and writes

u_k(x) = α_{0k}(x) + v_1 α_{1k}(x) + ... + v_m α_{mk}(x)  for k = 1, ..., m.   (2)

This transformation of the controls induces a corresponding transformation of the vector fields, defined by ẋ = f_0(x) + ∑_{k=1}^m u_k f_k(x) = g_0(x) + ∑_{k=1}^m v_k g_k(x) with

g_0(x) = f_0(x) + α_{01}(x) f_1(x) + ... + α_{0m}(x) f_m(x),
g_k(x) = α_{k1}(x) f_1(x) + ... + α_{km}(x) f_m(x),  k = 1, ..., m.   (3)

Assuming linear independence of the vector fields, such feedback transformations amount to changes of basis of the associated distributions. One naturally studies the orbits of any given system under this group action, i.e., the collection of equivalent systems. Of particular interest are normal forms, i.e., natural distinguished representatives for each orbit. Geometrically (i.e., without choosing local coordinates for the state x), these are characterized by properties of the Lie algebra L(g_0, g_1, ..., g_m) generated by the vector fields g_k (acknowledging the special role of g_0 if present). Recall that a Lie algebra L is called nilpotent (solvable) if its central descending series L^(k) (derived series L^<k>) is finite, i.e., there exists r < ∞ such that L^(r) = {0} (L^<r> = {0}). Here L = L^(1) = L^<1> and, inductively, L^(k+1) = [L^(k), L^(1)] and L^<k+1> = [L^<k>, L^<k>].

The main questions of practical importance are:

Problem 1: Find necessary and sufficient conditions for a distribution ∆_F spanned by a set of analytic vector fields F = {f_1, ..., f_m} to admit a basis of analytic vector fields G = {g_1, ..., g_m} that generate a Lie algebra L(g_1, ..., g_m) that has a desirable structure, i.e., that is
a. nilpotent,
b. solvable, or
c. finite dimensional.
Problem 2: Describe an algorithm that constructs such a basis G from a given basis F.

2 MOTIVATION AND HISTORY OF THE PROBLEM

There is an abundance of mathematical problems that are hard as given, yet are almost trivial when written in the right coordinates. Classical examples of finding the right coordinates (or, rather, the right bases) are transformations that diagonalize operators in linear algebra and functional analysis. Similarly, every system of (ordinary) differential equations is equivalent (via a choice of local coordinates) to the system ẋ_1 = 1, ẋ_2 = 0, ..., ẋ_n = 0 (in the neighborhood of every point that is not an equilibrium).

In control, for many purposes the most convenient form is the controller canonical form (e.g., in the case of m = 1) ẋ_1 = u and ẋ_k = x_{k−1} for 1 < k ≤ n. Every controllable linear system can be brought into this form via feedback and a linear coordinate change. For control systems that are not equivalent to linear systems, the next best choice is a polynomial cascade system ẋ_1 = u and ẋ_k = p_k(x_1, ..., x_{k−1}) for 1 < k ≤ n. (Both the linear and nonlinear cases have natural multi-input versions for m > 1.) What makes such linear or polynomial cascade forms so attractive for both analysis and design is that trajectories x(t, u) may be computed from controls u(t) by quadratures only, obviating the need to solve nonlinear ODEs. Typical examples include pole placement and path planning [11, 16, 19]. In particular, if the Lie algebra is nilpotent (or similarly nice), the general solution formula for x(·, u) as an exponential Lie series [20] (which generalizes matrix exponentials to nonlinear systems) collapses and becomes innately manageable. It is well-known that a system can be brought into such polynomial cascade form via a coordinate change if and only if the Lie algebra L(f_1, ..., f_m) is nilpotent [9]. Similar results for solvable Lie algebras are available [1].
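The quadratures-only property of cascade forms is easy to demonstrate. A minimal NumPy sketch (the cascade below is our own illustrative choice, not one from the text) recovers trajectories by cumulative trapezoidal integration alone:

```python
import numpy as np

# An illustrative polynomial cascade: x1' = u, x2' = x1, x3' = x1^2.
# Trajectories follow from the control by quadratures alone -- no ODE solver.
t = np.linspace(0.0, 2.0, 2001)
u = np.cos(t)

def quad(f, t):
    # cumulative trapezoidal integral of the samples f over the grid t
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))

x1 = quad(u, t)        # x1(t) = integral of u    = sin(t)
x2 = quad(x1, t)       # x2(t) = integral of x1   = 1 - cos(t)
x3 = quad(x1**2, t)    # x3(t) = integral of x1^2 = t/2 - sin(2t)/4

print(np.allclose(x1, np.sin(t), atol=1e-5))             # True
print(np.allclose(x3, t/2 - np.sin(2*t)/4, atol=1e-5))   # True
```

Each state is obtained by integrating a known function of the previously computed states, which is exactly what makes path-planning computations with nilpotent systems tractable.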
This leaves open only the geometric question of when a distribution admits a nilpotent (or solvable) basis.

3 RELATED RESULTS

In [5] it is shown that for every 2 ≤ k ≤ (n − 1) there is a k-distribution ∆ on R^n that does not admit a solvable basis in a neighborhood of zero. This shows that the problems of nilpotent and solvable bases are not trivial.

Geometric properties, such as small-time local controllability (STLC), are, by their very nature, unaffected by feedback transformations. Thus conditions for STLC provide valuable information about whether any two systems can be feedback equivalent. Typical such information, generalizing the controllability indices of linear systems theory, is contained in the growth vector, that is, the dimensions of the derived distributions that are defined inductively by ∆^(1) = ∆ and ∆^(k+1) = ∆^(k) + {[v, w] : v ∈ ∆^(k), w ∈ ∆^(1)}.

Of highest practical interest is the case when the system is (locally) equivalent to a linear system ẋ = Ax + Bu (for some choice of local coordinates). Necessary and sufficient conditions for such exact feedback linearization, together with algorithms for constructing the transformation and coordinates, were obtained in the 1980s [6, 7]. The geometric criteria are nicely stated in terms of the involutivity (closedness under Lie bracketing) of the distributions spanned by the sets {ad_{f_0}^j f_1 : 0 ≤ j ≤ k} for 0 ≤ k ≤ m. A necessary condition for exact nilpotentization is based on the observation that every nilpotent Lie algebra contains at least one element that commutes with every other element [4].

A well-studied special case is that of nilpotent systems that can be brought into chained form, compare [16]. This is closely related to differentially flat systems, compare [2, 8], which have been the focus of much study in the 1990s.
The key property is the existence of an output function such that all system variables can be expressed in terms of functions of a finite number of derivatives of this output. This work is more naturally performed using a dual description in terms of exterior differential systems and codistributions ∆^⊥ = {ω : M → T*M : ⟨ω, f⟩ = 0 for all f ∈ ∆}. This description is particularly convenient when working with small codimension n − m; compare [12] for a recent survey. (Special care needs to be taken at singular points where the dimensions of ∆^(k) are nonconstant.) This language allows one to directly employ the machinery of Cartan's method of equivalence [3]. However, a nice description of a system in terms of differential forms does not necessarily translate in a straightforward manner into a nice description in terms of vector fields (that generate a finite-dimensional, or nilpotent, Lie algebra). Some of the most notable recent progress has been made in the general framework of Goursat distributions; see, e.g., [13, 14, 15, 17, 18, 21] for detailed descriptions, the most recent results, and further relevant references.

BIBLIOGRAPHY

[1] P. Crouch, "Solvable approximations to control systems," SIAM J. Control Optim., 22, pp. 40–45, 1984.

[2] M. Fliess, J. Lévine, P. Martin, and P. Rouchon, "Some open questions related to flat nonlinear systems," In: Open Problems in Mathematical Systems and Control Theory, V. Blondel, E. Sontag, M. Vidyasagar, and J. Willems, eds., Springer, 1999.

[3] R. Gardner, "The method of equivalence and its applications," CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM, 58, 1989.

[4] H. Hermes, A. Lundell, and D. Sullivan, "Nilpotent bases for distributions and control systems," J. of Diff. Equations, 55, pp. 385–400, 1984.

[5] H. Hermes, "Distributions and the Lie algebras their bases can generate," Proc. AMS, 106, pp. 555–565, 1989.

[6] R. Hunt, R. Su, and G.
Meyer, "Design for multi-input nonlinear systems," In: Differential Geometric Control Theory, R. Brockett, R. Millmann, H. Sussmann, eds., Birkhäuser, pp. 268–298, 1982.

[7] B. Jakubczyk and W. Respondek, "On linearization of control systems," Bull. Acad. Polon. Sci. Ser. Sci. Math., 28, pp. 517–522, 1980.

[8] F. Jean, "The car with n trailers: Characterization of the singular configurations," ESAIM Control Optim. Calc. Var., 1, pp. 241–266, 1996.

[9] M. Kawski, "Nilpotent Lie algebras of vector fields," Journal für die Reine und Angewandte Mathematik, 388, pp. 1–17, 1988.

[10] M. Kawski and H. J. Sussmann, "Noncommutative power series and formal Lie-algebraic techniques in nonlinear control theory," In: Operators, Systems, and Linear Algebra, U. Helmke, D. Prätzel-Wolters, and E. Zerz, eds., Teubner, pp. 111–128, 1997.

[11] G. Lafferriere and H. Sussmann, "A differential geometric approach to motion planning," IEEE International Conference on Robotics and Automation, pp. 1148–1153, Sacramento, CA, 1991.

[12] R. Montgomery, A Tour of Subriemannian Geometries, Their Geodesics and Applications, AMS Mathematical Surveys and Monographs, 91, 2002.

[13] P. Mormul, "Goursat flags, classification of codimension one singularities," J. Dynamical and Control Systems, 6, pp. 311–330, 2000.

[14] P. Mormul, "Multi-dimensional Cartan prolongation and special k-flags," technical report, University of Warsaw, Poland, 2002.

[15] P. Mormul, "Goursat distributions not strongly nilpotent in dimensions not exceeding seven," Lecture Notes in Control and Inform. Sci., 281, pp. 249–261, Springer, Berlin, 2003.

[16] R. Murray, "Nilpotent bases for a class of nonintegrable distributions with applications to trajectory generation for nonholonomic systems," Mathematics of Control, Signals, and Systems, 7, pp. 58–75, 1994.

[17] W. Pasillas-Lépine and W. Respondek, "On the geometry of Goursat structures," ESAIM Control Optim. Calc. Var., 6, pp. 119–181, 2001.

[18] W.
Respondek and W. Pasillas-Lépine, "Extended Goursat normal form: a geometric characterization," In: Lecture Notes in Control and Inform. Sci., 259, pp. 323–338, Springer, London, 2001.

[19] J. Strumper and P. Krishnaprasad, "Approximate tracking for systems on three-dimensional matrix Lie groups via feedback nilpotentization," IFAC Symposium on Robot Control, 1997.

[20] H. Sussmann, "A product expansion of the Chen series," In: Theory and Applications of Nonlinear Control Systems, C. Byrnes and A. Lindquist, eds., Elsevier, pp. 323–335, 1986.

[21] M. Zhitomirskii, "Singularities and normal forms of smooth distributions," Banach Center Publ., 32, pp. 379–409, 1995.

Problem 10.6
What is the characteristic polynomial of a signal flow graph?

Andrew D. Lewis
Department of Mathematics & Statistics, Queen's University, Kingston, ON K7L 3N6, Canada, [email protected]

1 PROBLEM STATEMENT

Suppose one is given a signal flow graph G with n nodes whose branches have gains that are real rational functions (the open-loop transfer functions). The gain of the branch connecting node i to node j is denoted R_ji, and we write R_ji = N_ji/D_ji as a coprime fraction. The closed-loop transfer function from node i to node j for the closed-loop system is denoted T_ji. The problem can then be stated as follows:

Is there an algorithmic procedure that takes a signal flow graph G and returns a "characteristic polynomial" P_G with the following properties:
i. P_G is formed by products and sums of the polynomials N_ji and D_ji, i, j = 1, ..., n;
ii. all closed-loop transfer functions T_ji, i, j = 1, ..., n, are analytic in the closed right half-plane (CRHP) if and only if P_G is Hurwitz?

The gist of condition i is that the construction of P_G depends only on the topology of the graph, and not on manipulations of the branch gains. That is, the definition of P_G should not depend on the choice of branch gains R_ji, i, j = 1, ..., n. For example, one would be prohibited from factoring polynomials or from computing the GCD of polynomials. This excludes unhelpful solutions of the problem of the form, "Let P_G be the product of the characteristic polynomials of the closed-loop transfer functions T_ji, i, j = 1, ..., n."

2 DISCUSSION

Signal flow graphs for modelling system interconnections are due to Mason [3, 4]. Of course, when making such interconnections, the stability of the interconnection is nontrivially related to the open-loop transfer functions that weight the branches of the signal flow graph.
There are at least two things to consider in the course of making an interconnection: (1) is the interconnection BIBO stable, in the sense that all closed-loop transfer functions between nodes have no poles in the CRHP? and (2) is the interconnection well-posed, in the sense that all closed-loop transfer functions between nodes are proper? The problem stated above concerns only the first of these matters. Well-posedness when all branch gains R_ji, i, j = 1, ..., n, are proper is known to be equivalent to the condition that the determinant of the graph be a biproper rational function.

We comment that other forms of stability for signal flow graphs are possible. For example, Wang, Lee, and He [5] consider internal stability, wherein not the transfer functions between signals are considered, but rather that all signals in the signal flow graph remain bounded when bounded inputs are provided. Internal stability as considered by Wang, Lee, and He and BIBO stability as considered here are different. The source of this difference accounts for the source of the open problem of our paper, since Wang, Lee, and He show that internal stability can be determined by an algorithmic procedure like the one we ask for for BIBO stability. This is discussed a little further in section 3.

As an illustration of what we are after, consider the single-loop configuration of figure 10.6.1: a negative feedback loop with forward gains R_1(s) and R_2(s), reference r, disturbance d, and output y.

[Figure 10.6.1: Single-loop interconnection]

As is well-known, if we write R_i = N_i/D_i, i = 1, 2, as coprime fractions, then all closed-loop transfer functions have no poles in the CRHP if and only if the polynomial P_G = D_1 D_2 + N_1 N_2 is Hurwitz. Thus P_G serves as the characteristic polynomial in this case. The essential feature of P_G is that one computes it by looking at the topology of the graph, and the exact character of R_1 and R_2 is of no consequence. For example, pole/zero cancellations between R_1 and R_2 are accounted for in P_G.
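The single-loop construction can be carried out mechanically. A minimal sketch (the branch gains are our own choice for illustration; they deliberately contain an unstable pole/zero cancellation):

```python
import numpy as np

# Hypothetical branch gains (our choice, not from the text):
# R1 = 1/(s - 1), R2 = (s - 1)/(s + 2) -- note the unstable pole/zero
# cancellation between the two branches.
N1, D1 = np.poly1d([1.0]), np.poly1d([1.0, -1.0])
N2, D2 = np.poly1d([1.0, -1.0]), np.poly1d([1.0, 2.0])

PG = D1 * D2 + N1 * N2    # P_G = D1*D2 + N1*N2
print(PG.coeffs)          # coefficients of s^2 + 2s - 3

def is_hurwitz(p, tol=1e-12):
    # Hurwitz: all roots strictly in the open left half-plane
    return bool(np.all(p.roots.real < -tol))

# P_G has a root at s = 1 (exactly the cancelled unstable pole), so the loop
# is not BIBO stable even though the product R1*R2 = 1/(s + 2) looks stable.
print(is_hurwitz(PG))     # False
```

This illustrates the point in the text: P_G is built purely from N_i and D_i, so the hidden unstable cancellation is detected without ever factoring or simplifying the branch gains.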
3 APPROACHES THAT DO NOT SOLVE THE PROBLEM

Let us outline two approaches that yield solutions having one of properties i and ii, but not the other.

The problems of internal stability and well-posedness for signal flow graphs can be handled effectively using the polynomial matrix approach, e.g., [1]. Such an approach will involve the determination of a coprime matrix fractional representation of a matrix of rational functions. This will certainly solve the problem of determining internal stability for any given example. That is, it is possible using matrix polynomial methods to provide an algorithmic construction of a polynomial satisfying property ii above. However, the algorithmic procedure will involve computing GCDs of various of the polynomials N_ji and D_ji, i, j = 1, ..., n. Thus the conditions developed in this manner have to do not only with the topology of the signal flow graph, but also with the specific choices for the branch gains, thus violating condition i above. The problem we pose demands a simpler, more direct answer to the question of determining when an interconnection is BIBO stable.

Wang, Lee, and He [5] provide a polynomial for a signal flow graph using the determinant of the graph, which we denote by ∆_G (see [3, 4]). Specifically, they define a polynomial

P =
( ∏_{(i,j) ∈ {1,...,n}^2} D_ji ) ∆_G.   (1)

Thus one multiplies the determinant by all denominators, arriving at a polynomial in the process. This polynomial has property i above. However, while it is true that if this polynomial is Hurwitz then the system is BIBO stable, the converse is generally false. Thus property ii is not satisfied by P. What is shown in [5] is that all signals in the graph are bounded for bounded inputs if and only if P is Hurwitz. This is different from what we are asking here, i.e., that all closed-loop transfer functions have no CRHP poles.

It is true that the polynomial P in (1) gives the desired characteristic polynomial for the interconnection of figure 10.6.1. It is also true that if the signal flow graph has no loops (in this case ∆_G = 1), then the polynomial P of (1) satisfies condition ii. We comment that the condition of Wang, Lee, and He is the same condition one would obtain by converting (in a specific way) the signal flow graph to a polynomial matrix system, and then ascertaining when the resulting polynomial matrix system is internally stable.

The problem stated is very basic, one for which an inquisitive undergraduate would demand a solution. It was with some surprise that the author was unable to find its solution in the literature, and hopefully one of the readers of this article will be able to provide a solution, or point out an existing one.

BIBLIOGRAPHY

[1] F. M. Callier and C. A. Desoer, Multivariable Feedback Systems, Springer-Verlag, New York-Heidelberg-Berlin, 1982.

[2] A. D. Lewis, "On the stability of interconnections of SISO, time-invariant, finite-dimensional, linear systems," preprint, July 2001.

[3] S. J. Mason, "Feedback theory: Some properties of signal flow graphs," Proc. IRE, 41:1144–1156, 1953.

[4] S. J. Mason, "Feedback theory: Further properties of signal flow graphs," Proc. IRE, 44:920–926, 1956.

[5] Q.-G. Wang, T.-H. Lee, and J.-B. He, "Internal stability of interconnected systems," IEEE Trans. Automat.
Control, 44(3):593–596, 1999.

Problem 10.7
Open problems in randomized µ analysis Onur Toker
College of Computer Sciences and Engineering, KFUPM, P.O. Box 14, Dhahran 31261, Saudi Arabia, [email protected]

1 INTRODUCTION

In this chapter, we review the current status of the problem reported in [5], and discuss some open problems related to randomized µ analysis. Basically, what remains still unknown after Treil's result [6] are the growth rate of the µ̄/µ ratio, and how likely it is to observe high conservatism. In the context of randomized µ analysis, we discuss two open problems: (i) the existence of polynomial-time Las Vegas type randomized algorithms for robust stability against structured LTI uncertainties, and (ii) the minimum sample size to guarantee that the µ/µ̂ conservatism will be bounded by g, with a confidence level of 1 − ε.

2 DESCRIPTION AND HISTORY OF THE PROBLEM

The structured singular value [1] is a quite general framework for analysis/design against component-level LTI uncertainties. Although for a small number of uncertain blocks the problem is of reasonable difficulty, all initial studies implied that the same is not likely to be true for the general case. Under these observations, convex upper bound tests became popular alternatives for the structured singular value. Later, it was proved that these upper bound tests are indeed nonconservative robustness measures for different classes of component-level uncertainties, and that the structured singular value analysis problem is NP-hard. See the paper [5] and references therein for further details on the history of the problem.

What remains still open after Treil's result? An important open problem was the conservativeness of the standard upper bound test for the complex µ [5]. Recently, Treil showed that the gap between µ and its upper bound µ̄ can be arbitrarily large [6]. Despite this negative result, computational experiments show that most of the time the gap is very close to one for matrices of reasonable size.
The following are still open problems:

• What is the growth rate of the gap? In other words, what is the growth rate of

sup_{µ(M)≠0} µ̄(M)/µ(M)

as a function of n = dim(M)? It is suspected that the growth rate is O(log(n)) [6].

• How likely is it to observe low conservatism? In other words, what is the relative volume of the set {M : (1 + ε) µ(M) ≥ µ̄(M) ≥ µ(M)} in the set of all n × n matrices with all entries having absolute value at most 1?

Randomized algorithms and some open problems

Randomized algorithms are known to be quite powerful tools for dealing with difficult problems. A recent paper of Vidyasagar and Blondel [8] has both a nice summary of earlier results in this area and a strong justification for the importance of randomized algorithms for tackling difficult control-related problems. Randomized structured singular value analysis is studied in detail in the recent paper [3], which also has many references to related work in this area.

Las Vegas type algorithm for µ analysis

There are two possible ways of utilizing the results of randomized algorithms, in particular the randomized µ analysis. Let us assume that several random uncertainty matrices with norm bounded by 1, ∆_k, k = 1, ..., S, are generated according to some probability distribution, and µ̂(M) is set to

µ̂(M) = max_{1≤k≤S} ρ(M ∆_k).
The first interpretation is the following: with high probability, the inequality ρ(M∆) ≤ µ̂(M) is satisfied for all ∆'s except for a set of small relative volume [7]. The second interpretation is to consider the whole process of generating random ∆ samples and checking the condition ρ(M∆) < 1 as a Monte Carlo type algorithm for the complement of robust stability [2]. Indeed, after generating several ∆ samples, if µ̂(M) ≥ 1, then the (M, ∆) system is not robustly stable; otherwise, one can either say the test is inconclusive or conclude that the (M, ∆) system is robustly stable (which sometimes can be the wrong conclusion). This unpleasant phenomenon is a standard characteristic of Monte Carlo algorithms. What is not known is whether there is also a polynomial-time Monte Carlo type randomized algorithm for the robust stability condition itself. Combining these two Monte Carlo algorithms would result in an algorithm that never gives a false answer, and the probability of getting inconclusive answers goes to zero as we generate more and more random samples.

Problem 1 (Las Vegas type algorithm for µ analysis): Is there a polynomial-time Las Vegas type randomized algorithm for testing robust stability against structured LTI uncertainties?

Why is this problem important? An algorithm like this can be used to check whether the (M, ∆) system is robustly stable or not by generating random ∆ matrices: there is no possibility of getting false answers, and the probability of getting inconclusive answers goes to zero as the sample size goes to infinity. However, the rate of convergence of the probability of getting inconclusive answers to zero is also an important factor for the algorithm to be practical.

Relationship between the conservatism of µ/µ̂, sample size, and confidence levels

Conservativeness of the randomized lower bound µ̂ is also an open problem.
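The randomized lower bound µ̂(M) = max_k ρ(M∆_k) is straightforward to sketch. The example below is our own illustration, assuming (purely for concreteness) a diagonal complex uncertainty structure and a randomly generated M:

```python
import numpy as np

rng = np.random.default_rng(0)
n, S = 4, 2000
# a random complex M, standing in for the actual closed-loop matrix
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def random_delta(n, rng):
    # one sample from the (assumed) structure: diagonal complex Delta, ||Delta|| <= 1
    return np.diag(rng.random(n) * np.exp(2j * np.pi * rng.random(n)))

def spectral_radius(A):
    return float(np.max(np.abs(np.linalg.eigvals(A))))

# mu_hat is a guaranteed lower bound on mu(M); if mu_hat >= 1 the (M, Delta)
# loop is certified NOT robustly stable, otherwise the test is inconclusive
# (the Monte Carlo interpretation discussed above)
mu_hat = max(spectral_radius(M @ random_delta(n, rng)) for _ in range(S))

# sanity: rho(M Delta) <= ||M Delta|| <= ||M|| whenever ||Delta|| <= 1
print(0.0 < mu_hat <= np.linalg.norm(M, 2))  # True
```

How close µ̂ comes to µ as a function of the sample size S is exactly the open question posed next.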
More precisely, we have very little knowledge about the relationship between the conservatism ratio µ/µ̂, the sample size S, and the conservatism bound g. For simplicity, let n denote the dimension of the matrix M for the rest of this section. The following is a major open problem:

Problem 2: Find the best lower bound, S(n, g, ε), such that generating S ≥ S(n, g, ε) random samples is enough to claim that, for all M,

µ(M) ≥ max_{1≤k≤S(n,g,ε)} ρ(M ∆_k) ≥ g^{−1} µ(M),

with confidence level ≥ 1 − ε. In other words, the probability inequality

Prob{ ∆_1, ..., ∆_{S(n,g,ε)} : µ(M) ≥ max_{1≤k≤S(n,g,ε)} ρ(M ∆_k) ≥ g^{−1} µ(M) } ≥ 1 − ε

is satisfied for all M matrices.

Why is this an important problem? In a robust stability analysis, one can set a confidence level very close to one, say 1 − 10^{−6}, generate many random ∆ samples, and compute the randomized lower bound µ̂. Ideally, a control engineer would like to know how conservative the obtained µ̂ is compared to the actual µ, in order to have a better feeling for the system at hand. There is very little known about this problem, and some partial results are reported in [4]. The following are simple corollaries:

Result 1 (Polynomial number of samples): For any positive universal constants C, α ∈ Z, generating S_n = C n^α random samples is not enough to claim that, for all M,

µ(M) ≥ max_{1≤k≤S_n} ρ(M ∆_k) ≥ 0.99 µ(M),

with confidence level ≥ 1 − 10^{−6}.

Result 2 (Exponential number of samples): There is a universal constant C such that generating S_n = C e^{n^{2.01}} random samples is enough to claim that, for all M,

µ(M) ≥ max_{1≤k≤S_n} ρ(M ∆_k) ≥ 0.99 µ(M),

with confidence level ≥ 1 − 10^{−6}.

Alternatively, one can fix a confidence level, say 1 − 10^{−6}, and study the relationship between the sample size S and the best conservatism bound, g(S), that can be guaranteed with this confidence level. Again, not much is known about how fast/slow the best conservatism bound g(S) converges to 1 as the sample size S goes to infinity.

BIBLIOGRAPHY

[1] A. Packard and J. C. Doyle, "The complex structured singular value," Automatica, 29, pp. 71–109, 1993.

[2] C. H. Papadimitriou, Computational Complexity, Addison-Wesley, 1994.

[3] G. C. Calafiore, F. Dabbene, and R. Tempo, "Randomized algorithms for probabilistic robustness with real and complex structured uncertainty," IEEE Transactions on Automatic Control, vol. 45, pp. 2218–2235, 2000.

[4] O. Toker, "Conservatism of randomized structured singular value," accepted for publication in IEEE Transactions on Automatic Control.

[5] O. Toker and B. de Jager, "Conservatism of the standard upper bound test: Is sup(µ̄/µ) finite? Is it bounded by 2?," In: Open Problems in Mathematical Systems and Control Theory, V. Blondel et al., eds., chapter 43, pp. 225–228.

[6] S. Treil, "The gap between complex structured singular value µ and its upper bound is infinite," forthcoming in IEEE Transactions on Automatic Control.

[7] M. Vidyasagar, A Theory of Learning and Generalization, Springer-Verlag, London, 1997.

[8] M. Vidyasagar and V. Blondel, "Probabilistic solutions to some NP-hard matrix problems," Automatica, vol. 37, pp. 1397–1405, 2001.