An Introduction to Mathematical Optimal Control Theory
Version 0.2

By Lawrence C. Evans
Department of Mathematics
University of California, Berkeley

Chapter 1: Introduction
Chapter 2: Controllability, bang-bang principle
Chapter 3: Linear time-optimal control
Chapter 4: The Pontryagin Maximum Principle
Chapter 5: Dynamic programming
Chapter 6: Game theory
Chapter 7: Introduction to stochastic control theory
Appendix: Proofs of the Pontryagin Maximum Principle
Exercises
References
PREFACE

These notes build upon a course I taught at the University of Maryland during the fall of 1983. My great thanks go to Martino Bardi, who took careful notes, saved them all these years and recently mailed them to me. Faye Yeager typed up his notes into a first draft of these lectures as they now appear. Scott Armstrong read over the notes and suggested many improvements: thanks, Scott. Stephen Moye of the American Math Society helped me a lot with AMSTeX versus LaTeX issues. My thanks also to Atilla Yilmaz for spotting lots of typos and errors, which I have corrected.

I have radically modified much of the notation (to be consistent with my other writings), updated the references, added several new examples, and provided a proof of the Pontryagin Maximum Principle. As this is a course for undergraduates, I have dispensed in certain proofs with various measurability and continuity issues, and as compensation have added various critiques as to the lack of total rigor.

This current version of the notes is not yet complete, but meets I think the usual high standards for material posted on the internet. Please email me at [email protected] with any corrections or comments.
CHAPTER 1: INTRODUCTION

1.1. The basic problem
1.2. Some examples
1.3. A geometric solution
1.4. Overview

1.1 THE BASIC PROBLEM.

DYNAMICS. We open our discussion by considering an ordinary differential equation (ODE) having the form

$$(1.1) \qquad \begin{cases} \dot{x}(t) = f(x(t)) & (t > 0) \\ x(0) = x^0. \end{cases}$$

We are here given the initial point $x^0 \in \mathbb{R}^n$ and the function $f : \mathbb{R}^n \to \mathbb{R}^n$. The unknown is the curve $x : [0, \infty) \to \mathbb{R}^n$, which we interpret as the dynamical evolution of the state of some "system".

CONTROLLED DYNAMICS. We generalize a bit and suppose now that $f$ depends also upon some "control" parameters belonging to a set $A \subseteq \mathbb{R}^m$; so that $f : \mathbb{R}^n \times A \to \mathbb{R}^n$. Then if we select some value $a \in A$ and consider the corresponding dynamics

$$\begin{cases} \dot{x}(t) = f(x(t), a) & (t > 0) \\ x(0) = x^0, \end{cases}$$

we obtain the evolution of our system when the parameter is constantly set to the value $a$.

The next possibility is that we change the value of the parameter as the system evolves. For instance, suppose we define the function $\alpha : [0, \infty) \to A$ this way:

$$\alpha(t) = \begin{cases} a_1 & 0 \le t \le t_1 \\ a_2 & t_1 < t \le t_2 \\ a_3 & t_2 < t \le t_3 \end{cases} \qquad \text{etc.}$$
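To make the idea of running the dynamics under a piecewise-constant control concrete, here is a minimal numerical sketch (it is not part of the original notes): it integrates $\dot{x}(t) = f(x(t), \alpha(t))$, $x(0) = x^0$, by the forward Euler method. The particular vector field f, the switching times t1 < t2, and the control values a1, a2, a3 below are hypothetical placeholders chosen only so the example runs.

# Illustrative sketch, not from the notes: simulate the controlled ODE
#   x'(t) = f(x(t), alpha(t)),  x(0) = x0,
# with a piecewise-constant control alpha.  The choices of f, the switching
# times, and the control values are arbitrary placeholders.
import numpy as np

def alpha(t, switch_times=(1.0, 2.0), values=(0.0, 1.0, -1.0)):
    # Piecewise-constant control: a1 on [0, t1], a2 on (t1, t2], a3 afterwards.
    for s, a in zip(switch_times, values):
        if t <= s:
            return a
    return values[-1]

def f(x, a):
    # A toy controlled vector field on R^2, purely for illustration.
    return np.array([x[1], -x[0] + a])

def simulate(x0, T=3.0, dt=1e-3):
    # Forward Euler steps: x_{k+1} = x_k + dt * f(x_k, alpha(t_k)).
    steps = int(T / dt)
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for k in range(steps):
        t = k * dt
        x = x + dt * f(x, alpha(t))
        trajectory.append(x.copy())
    return np.array(trajectory)

traj = simulate(x0=[1.0, 0.0])
print(traj[-1])  # state reached at time T under this particular control

Changing the switching times or the values a1, a2, a3 changes the trajectory, which is exactly the freedom the control problem exploits: among all admissible controls, we will later ask which one steers the system best in a sense made precise in the following sections.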