Stochastic Programming
Second Edition
Peter Kall
Institute for Operations Research and Mathematical Methods of Economics, University of Zurich, CH-8044 Zurich

Stein W. Wallace
Molde University College, P.O. Box 2110, N-6402 Molde, Norway

Reference to this text is "Peter Kall and Stein W. Wallace, Stochastic Programming, John Wiley & Sons, Chichester, 1994". The text is printed with permission from the authors. The publisher reverted the rights to the authors on February 4, 2003. This text is slightly updated from the published version.

Contents
Preface

1 Basic Concepts
  1.1 Motivation
    1.1.1 A numerical example
    1.1.2 Scenario analysis
    1.1.3 Using the expected value of p
    1.1.4 Maximizing the expected value of the objective
    1.1.5 The IQ of hindsight
    1.1.6 Options
  1.2 Preliminaries
  1.3 An Illustrative Example
  1.4 Stochastic Programs: General Formulation
    1.4.1 Measures and Integrals
    1.4.2 Deterministic Equivalents
  1.5 Properties of Recourse Problems
  1.6 Properties of Probabilistic Constraints
  1.7 Linear Programming
    1.7.1 The Feasible Set and Solvability
    1.7.2 The Simplex Algorithm
    1.7.3 Duality Statements
    1.7.4 A Dual Decomposition Method
  1.8 Nonlinear Programming
    1.8.1 The Kuhn–Tucker Conditions
    1.8.2 Solution Techniques
      1.8.2.1 Cutting-plane methods
      1.8.2.2 Descent methods
      1.8.2.3 Penalty methods
      1.8.2.4 Lagrangian methods
  1.9 Bibliographical Notes
  Exercises
  References

2 Dynamic Systems
  2.1 The Bellman Principle
  2.2 Dynamic Programming
  2.3 Deterministic Decision Trees
  2.4 Stochastic Decision Trees
  2.5 Stochastic Dynamic Programming
  2.6 Scenario Aggregation
    2.6.1 Approximate Scenario Solutions
  2.7 Financial Models
    2.7.1 The Markowitz model
    2.7.2 Weak aspects of the model
    2.7.3 More advanced models
      2.7.3.1 A scenario tree
      2.7.3.2 The individual scenario problems
      2.7.3.3 Practical considerations
  2.8 Hydro Power Production
    2.8.1 A small example
    2.8.2 Further developments
  2.9 The Value of Using a Stochastic Model
    2.9.1 Comparing the Deterministic and Stochastic Objective Values
    2.9.2 Deterministic Solutions in the Event Tree
    2.9.3 Expected Value of Perfect Information
  References

3 Recourse Problems
  3.1 Outline of Structure
  3.2 The L-shaped Decomposition Method
    3.2.1 Feasibility
    3.2.2 Optimality
  3.3 Regularized Decomposition
  3.4 Bounds
    3.4.1 The Jensen Lower Bound
    3.4.2 Edmundson–Madansky Upper Bound
    3.4.3 Combinations
    3.4.4 A Piecewise Linear Upper Bound
  3.5 Approximations
    3.5.1 Refinements of the bounds on the "Wait-and-See" Solution
    3.5.2 Using the L-shaped Method within Approximation Schemes
    3.5.3 What is a Good Partition?
  3.6 Simple Recourse
  3.7 Integer First Stage
    3.7.1 Initialization
    3.7.2 Feasibility Cuts
    3.7.3 Optimality Cuts
    3.7.4 Stopping Criteria
  3.8 Stochastic Decomposition
  3.9 Stochastic Quasi-Gradient Methods
  3.10 Solving Many Similar Linear Programs
    3.10.1 Randomness in the Objective
  3.11 Bibliographical Notes
  Exercises
  References

4 Probabilistic Constraints
  4.1 Joint Chance Constrained Problems
  4.2 Separate Chance Constraints
  4.3 Bounding Distribution Functions
  4.4 Bibliographical Notes
  Exercises
  References

5 Preprocessing
  5.1 Problem Reduction
    5.1.1 Finding a Frame
    5.1.2 Removing Unnecessary Columns
    5.1.3 Removing Unnecessary Rows
  5.2 Feasibility in Linear Programs
    5.2.1 A Small Example
  5.3 Reducing the Complexity of Feasibility Tests
  5.4 Bibliographical Notes
  Exercises
  References

6 Network Problems
  6.1 Terminology
  6.2 Feasibility in Networks
    6.2.1 The uncapacitated case
    6.2.2 Comparing the LP and Network Cases
  6.3 Generating Relatively Complete Recourse
  6.4 An Investment Example
  6.5 Bounds
    6.5.1 Piecewise Linear Upper Bounds
  6.6 Project Scheduling
    6.6.1 PERT as a Decision Problem
    6.6.2 Introduction of Randomness
    6.6.3 Bounds on the Expected Project Duration
      6.6.3.1 Series reductions
      6.6.3.2 Parallel reductions
      6.6.3.3 Disregarding path dependences
      6.6.3.4 Arc duplications
      6.6.3.5 Using Jensen's inequality
  6.7 Bibliographical Notes
  Exercises
  References

Index

Preface
Over the last few years, both of the authors, and also most others in the ﬁeld of stochastic programming, have said that what we need more than anything just now is a basic textbook—a textbook that makes the area available not only to mathematicians, but also to students and other interested parties who cannot or will not try to approach the ﬁeld via the journals. We also felt the need to provide an appropriate text for instructors who want to include the subject in their curriculum. It is probably not possible to write such a book without assuming some knowledge of mathematics, but it has been our clear goal to avoid writing a text readable only for mathematicians. We want the book to be accessible to any quantitatively minded student in business, economics, computer science and engineering, plus, of course, mathematics. So what do we mean by a quantitatively minded student? We assume that the reader of this book has had a basic course in calculus, linear algebra and probability. Although most readers will have a background in linear programming (which replaces the need for a speciﬁc course in linear algebra), we provide an outline of all the theory we need from linear and nonlinear programming. We have chosen to put this material into Chapter 1, so that the reader who is familiar with the theory can drop it, and the reader who knows the material, but wonders about the exact deﬁnition of some term, or who is slightly unfamiliar with our terminology, can easily check how we see things. We hope that instructors will ﬁnd enough material in Chapter 1 to cover speciﬁc topics that may have been omitted in the standard book on optimization used in their institution. By putting this material directly into the running text, we have made the book more readable for those with the minimal background. But, at the same time, we have found it best to separate what is new in this book—stochastic programming—from more standard material of linear and nonlinear programming. 
Despite this clear goal concerning the level of mathematics, we must admit that when treating some of the subjects, like probabilistic constraints (Section 1.6 and Chapter 4), or particular solution methods for stochastic programs, like stochastic decomposition (Section 3.8) or quasi-gradient methods (Section 3.9), we have had to use a slightly more advanced language in probability. Although the actual information found in those parts of the book is made simple, some terminology may here and there not belong to the basic probability terminology. Hence, for these parts, the instructor must either provide some basic background in terminology, or the reader should at least consult carefully Section 1.4.1, where we have tried to put together those terms and concepts from probability theory used later in this text.

Within the mathematical programming community, it is common to split the field into topics such as linear programming, nonlinear programming, network flows, integer and combinatorial optimization, and, finally, stochastic programming. Convenient as that may be, it is conceptually inappropriate. It puts forward the idea that stochastic programming is distinct from integer programming the same way that linear programming is distinct from nonlinear programming. The counterpart of stochastic programming is, of course, deterministic programming. We have stochastic and deterministic linear programming, deterministic and stochastic network flow problems, and so on. Although this book mostly covers stochastic linear programming (since that is the best developed topic), we also discuss stochastic nonlinear programming, integer programming and network flows.

Since we have let subject areas guide the organization of the book, the chapters are of rather different lengths. Chapter 1 starts out with a simple example that introduces many of the concepts to be used later on. Tempting as it may be, we strongly discourage skipping these introductory parts.
If these parts are skipped, stochastic programming will come forward as merely an algorithmic and mathematical subject, which will serve to limit the usefulness of the field. In addition to the algorithmic and mathematical facets of the field, stochastic programming also involves model creation and specification of solution characteristics. All instructors know that modelling is harder to teach than methods. We are sorry to admit that this difficulty persists in this text as well. That is, we do not provide an in-depth discussion of modelling stochastic programs. The text is not free from discussions of models and modelling, however, and it is our strong belief that a course based on this text is better (and also easier to teach and motivate) when modelling issues are included in the course.

Chapter 1 contains a formal approach to stochastic programming, with a discussion of different problem classes and their characteristics. The chapter ends with linear and nonlinear programming theory that weighs heavily in stochastic programming. The reader will probably get the feeling that the parts concerned with chance-constrained programming are mathematically more complicated than some parts discussing recourse models. There is a good reason for that: whereas recourse models transform the randomness contained in a stochastic program into one special parameter of some random vector's distribution, namely its expectation, chance-constrained models deal more explicitly with the distribution itself. Hence the latter models may be more difficult, but at the same time they also exhaust more of the information contained in the probability distribution. However, with respect to applications, there is no generally valid justification to state that either of the two basic model types is "better" or "more relevant".
As a matter of fact, we know of applications for which the recourse model is very appropriate, of others for which chance constraints have to be modelled, and even of applications for which recourse terms were designed for one part of the stochastic constraints and chance constraints for another part. Hence, in a first reading or an introductory course, any proof that appears too complicated can certainly be skipped without harm. However, to get a valid picture of stochastic programming, the statements about basic properties of both model types, as well as the ideas underlying the various solution approaches, should be noted. Although the basic linear and nonlinear programming is put together in one specific part of the book, the instructor or the reader should pick up the subjects as they are needed for the understanding of the other chapters. That way, it will be easier to pick out exactly those parts of the theory that the students or readers do not already know.

Chapter 2 starts out with a discussion of the Bellman principle for solving dynamic problems, and then discusses decision trees and dynamic programming in both deterministic and stochastic settings. There then follows a discussion of the rather new approach of scenario aggregation. We conclude the chapter with a discussion of the value of using stochastic models.

Chapter 3 covers recourse problems. We first discuss some topics from Chapter 1 in more detail. Then we consider decomposition procedures especially designed for stochastic programs with recourse. We next turn to the questions of bounds and approximations, outlining some major ideas and indicating the direction for other approaches. The special case of simple recourse is then explained, before we show how decomposition procedures for stochastic programs fit into the framework of branch-and-cut procedures for integer programs. This makes it possible to develop an approach for stochastic integer programs.
We conclude the chapter with a discussion of Monte Carlo based methods, in particular stochastic decomposition and quasi-gradient methods.

Chapter 4 is devoted to probabilistic constraints. Based on convexity statements provided in Section 1.6, one particular solution method is described for the case of joint chance constraints with a multivariate normal distribution of the right-hand side. For separate probabilistic constraints with a joint normal distribution of the coefficients, we show how the problem can be transformed into a deterministic convex nonlinear program. Finally, we address a problem very relevant in dealing with chance constraints: the problem of how to construct efficiently lower and upper bounds for a multivariate distribution function, and give a first sketch of the ideas used in this area.

Preprocessing is the subject of Chapter 5. "Preprocessing" is any analysis that is carried out before the actual solution procedure is called. Preprocessing can be useful for simplifying calculations, but the main purpose is to facilitate a tool for model evaluation.

We conclude the book with a closer look at networks (Chapter 6). Since these are nothing else than specially structured linear programs, we can draw freely from the topics in Chapter 3. However, the added structure of networks allows many simplifications. We discuss feasibility, preprocessing and bounds. We conclude the chapter with a closer look at PERT networks.

Each chapter ends with a short discussion of where more literature can be found, some exercises, and, finally, a list of references.

Writing this book has been both interesting and difficult. Since it is the first basic textbook totally devoted to stochastic programming, we both enjoyed and suffered from the fact that there is, so far, no experience to suggest how such a book should be constructed. Are the chapters in the correct order? Is the level of difficulty even throughout the book?
Have we really captured the basics of the field? In all cases the answer is probably NO. Therefore, dear reader, we appreciate all comments you may have, be they regarding misprints, plain errors, or simply good ideas about how this should have been done. And also, if you produce suitable exercises, we shall be very happy to receive them, and if this book ever gets revised, we shall certainly add them, and allude to the contributor.

About 50% of this text served as a basis for a course in stochastic programming at The Norwegian Institute of Technology in the fall of 1992. We wish to thank the students for putting up with a very preliminary text, and for finding such an astonishing number of errors and misprints. Last but not least, we owe sincere thanks to Julia Higle (University of Arizona, Tucson), Diethard Klatte (University of Zurich), Janos Mayer (University of Zurich) and Pavel Popela (Technical University of Brno) who have read the manuscript¹ very carefully and fixed not only linguistic bugs but prevented us from quite a number of crucial mistakes. Finally we highly appreciate the good cooperation and very helpful comments provided by our publisher. The remaining errors are obviously the sole responsibility of the authors.

Zurich and Trondheim, February 1994
P. K. and S.W.W.

¹ Written in LaTeX.

1 Basic Concepts
1.1 Motivation

By reading this introduction, you are certainly already familiar with deterministic optimization. Most likely, you also have some insights into what new challenges face you when randomness is (re)introduced into a model. The interest in studying stochastic programming can come from different sources. Your interests may concern the algorithmic or mathematical, as well as the modelling and applied, aspects of optimization. We hope to provide you with some insights into the basics of all these areas. In these very first pages we will demonstrate why it is important, often crucial, that you turn to stochastic programming when working with decisions affected by uncertainty. And, in our view, all decision problems are of this type.

Technically, stochastic programs are much more complicated than the corresponding deterministic programs. Hence, at least from a practical point of view, there must be very good reasons to turn to the stochastic models. We start this book with a small example illustrating that these reasons exist. In fact, we shall demonstrate that alternative deterministic approaches do not even look for the best solutions. Deterministic models may certainly produce good solutions for certain data sets in certain models, but there is generally no way you can conclude that they are good without comparing them to solutions of stochastic programs. In many cases, solutions to deterministic programs are very misleading.

1.1.1 A numerical example

You own two lots of land. Each of them can be developed with necessary infrastructure and a plant can be built. In fact, there are nine possible decisions. Eight of them are given in Figure 1; the ninth is to do nothing. The cost structure is given in the following table. For each lot of land we give the cost of developing the land and building the plant. The extra column will be explained shortly.
Figure 1 Eight of the nine possible decisions. The area surrounded by thin lines corresponds to Lot 1, the area with thick lines to Lot 2. For example, Decision 6 is to develop both lots, and build a plant on Lot 1. Decision 9 is to do nothing.

            developing   building    building the
            the land     the plant   plant later
    Lot 1      600          200          220
    Lot 2      100          600          660

In each of the plants, it is possible to produce one unit of some product. It can be sold at a price p. The price p is unknown when the land is developed. Also, if the plant on Lot 1, say, is to be built at its lowest cost, given as 200 in the table, that must take place before p becomes known. However, it is possible to delay the building of the plant until after p becomes known, but at a 10% penalty. That is given in the last column of the table. This can only take place if the lot is already developed. There is not enough time to both develop the land and build a plant after p has become known.

1.1.2 Scenario analysis

A common way of solving problems of this kind is to perform scenario analysis, also sometimes referred to as simulation. (Both terms have a broader meaning than what we use here, of course.) The idea is to construct or sample possible futures (values of p in our case) and solve the corresponding problem for these values. After having obtained a number of possible decisions this way, we either pick the best of them (details will be given later), or we try to find good combinations of the decisions. In our case it is simple to show that there are only three possible scenario solutions. These are given as follows. Decision numbers refer to Figure 1.

    Interval for p     Decision number
    p < 700                  9
    700 ≤ p < 800            4
    p ≥ 800                  7

So whatever scenarios are constructed or sampled, these are the only possible solutions. Note that in this setting it is never optimal to use delayed construction.
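These three intervals are easy to verify by brute force. The sketch below is our own encoding of the nine decisions (cost figures from the table above); under certainty a decision reduces to an up-front cost plus the revenue of the plants it builds, one unit each sold at p:

```python
# Deterministic ("scenario") analysis: for a known price p, pick the
# decision with the highest profit. Under certainty the 10% delayed
# construction is never used, so each decision is just a total cost
# plus p per plant built.
# Costs: develop Lot 1 = 600, plant 1 = 200; develop Lot 2 = 100, plant 2 = 600.
DECISIONS = {            # decision number: (total cost, number of plants)
    1: (600, 0),         # develop Lot 1
    2: (800, 1),         # develop Lot 1, build plant 1
    3: (100, 0),         # develop Lot 2
    4: (700, 1),         # develop Lot 2, build plant 2
    5: (1300, 1),        # develop both lots, build plant 2
    6: (900, 1),         # develop both lots, build plant 1
    7: (1500, 2),        # develop both lots, build both plants
    8: (700, 0),         # develop both lots, build nothing
    9: (0, 0),           # do nothing
}

def best_scenario_decision(p):
    """Return the profit-maximizing decision for a known price p."""
    return max(DECISIONS, key=lambda d: DECISIONS[d][1] * p - DECISIONS[d][0])

for p in (650, 750, 900):
    print(p, best_scenario_decision(p))   # decisions 9, 4 and 7 respectively
```

Trying any price in each interval reproduces the table: only Decisions 9, 4 and 7 ever come out best.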
The reason is that each scenario analysis is performed under certainty, and hence there is no reason to pay the extra 10% for being allowed to delay the decision. Now, assume for simplicity that p can take on only two values, namely 210 and 1250, each with a probability of 0.5. This is a very extreme choice, but it has been made only for convenience. We could have made the same points with more complicated (for example continuous) distributions, but nothing would have been gained by doing that, except making the calculations more complicated. Hence, the expected price equals 730.

1.1.3 Using the expected value of p

A common solution procedure for stochastic problems is to use the expected value of all random variables. This is sometimes done very explicitly, but more often it is done in the following fashion: the modeller collects data, either by experiments or by checking an existing process over time, and then calculates the mean, which is then said to be the best available estimate of the parameter. In this case we would then use 730, and from the list of scenario solutions above, we see that the optimal solution will be Decision 4, with a profit of −700 + 730 = 30. We call this the expected value solution. We can also calculate the expected value of using the expected value solution. That is, we can use the expected value solution, and then see how it performs under the possible futures. We get

    −700 + ½ · 210 + ½ · 1250 = 30.

It is not a general result that the expected value of using the expected value solution equals the scenario solution value corresponding to the expected value of the parameters (here p). But in this case that happens.

1.1.4 Maximizing the expected value of the objective

We just calculated the expected value of using the expected value solution. It was 30. We can also calculate the expected value of using any of the possible scenario solutions.
We find that for doing nothing (Decision 9) the expected value is 0, and for Decision 7 the expected value equals

    −1500 + ½ · 420 + ½ · 2500 = −40.

In other words, the expected value solution is the best of the three scenario solutions in terms of having the best expected performance. But is this the solution with the best expected performance? Let us answer this question by simply listing all possible solutions and calculating their expected values. In all cases, if the land is developed before p becomes known, we will consider the option of building the plant at the 10% penalty if that is profitable. The results are given in Table 1.
Table 1 The expected value of all nine possible solutions. The income is the value of the product if the plant is already built. If not, it is the value of the product minus the construction cost at 10% penalty.

    Decision  Investment  Income if p = 210  Income if p = 1250  Expected profit
        1       −600             0               ½ · 1030             −85
        2       −800          ½ · 210            ½ · 1250             −70
        3       −100             0               ½ · 590              195
        4       −700          ½ · 210            ½ · 1250              30
        5      −1300          ½ · 210            ½ · 2280             −55
        6       −900          ½ · 210            ½ · 1840             125
        7      −1500          ½ · 420            ½ · 2500             −40
        8       −700             0               ½ · 1620             110
        9          0             0                  0                   0

As we see from Table 1, the optimal solution is to develop Lot 2, then wait to see what the price turns out to be. If the price turns out to be low, do nothing; if it turns out to be high, build plant 2. The solution that truly maximizes the expected value of the objective function will be called the stochastic solution. Note also that two more solutions are substantially better than the expected value solution. All three solutions that are better than the expected value solution are solutions with options in them. That is, they mean that we develop some land in anticipation of high prices. Of course, there is a chance that the investment will be lost. In scenario analysis, as outlined earlier, options have no value, and hence never show up in a solution. It is important to note that the fact that these solutions did not show up as scenario solutions is not caused by few scenarios, but by the very nature of a scenario, namely that it is deterministic. It is incorrect to assume that if you can obtain enough scenarios, you will eventually come upon the correct solution.

1.1.5 The IQ of hindsight

In hindsight, that is, after the fact, it will always be the case that one of the scenario solutions turns out to be the best choice. In particular, the expected value solution will be optimal for any 700 < p ≤ 800. (We did not have any probability mass there in our example, but we could easily have constructed such a case.) The problem is that it is not the same scenario solution that is optimal in all cases. In fact, most of them are very bad in all but the situation where they are best. The stochastic solution, on the other hand, is normally never optimal after the fact. But, at the same time, it is also hardly ever really bad.
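Claims like these are easy to check by direct enumeration. The sketch below recomputes the expected profits of Table 1; the encoding of the nine decisions as (lots developed, plants built now) is our own, and the cost data come from the tables above:

```python
# Expected profits of all nine decisions under the two equally likely
# prices 210 and 1250. A developed lot whose plant is not yet built may
# build after p is known, at a 10% penalty (220 on Lot 1, 660 on Lot 2),
# and does so only when that is profitable.
DEV = {1: 600, 2: 100}          # cost of developing each lot
BUILD = {1: 200, 2: 600}        # cost of building the plant up front
LATE = {1: 220, 2: 660}         # cost of building after p is known

# decision number: (lots developed, plants built up front)
DECISIONS = {
    1: ({1}, set()),   2: ({1}, {1}),     3: ({2}, set()),
    4: ({2}, {2}),     5: ({1, 2}, {2}),  6: ({1, 2}, {1}),
    7: ({1, 2}, {1, 2}), 8: ({1, 2}, set()), 9: (set(), set()),
}

def income(developed, built, p):
    rev = len(built) * p                       # plants already built sell one unit each
    rev += sum(max(0, p - LATE[lot])           # optional late builds, exercised
               for lot in developed - built)   # only when profitable
    return rev

def expected_profit(decision):
    developed, built = DECISIONS[decision]
    invest = sum(DEV[l] for l in developed) + sum(BUILD[l] for l in built)
    return -invest + 0.5 * income(developed, built, 210) \
                   + 0.5 * income(developed, built, 1250)

for d in DECISIONS:
    print(d, expected_profit(d))
# Decision 3 (develop Lot 2, then wait) comes out best, with expected profit 195.
```

Running this reproduces the last column of Table 1, including the stochastic solution (Decision 3, 195) and the values 30 and −40 used in the hindsight comparison.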
In our example, with the given probability distribution, the decision of doing nothing (which has an expected value of zero) and the decision of building both plants (with an expected value of −40) both have a probability of 50% of being optimal after p has become known. The stochastic solution, with an expected value of 195, on the other hand, has zero probability of being optimal in hindsight. This is an important observation. If you base your decisions on stochastic models, you will normally never do things really well. Therefore, people who prefer to evaluate after the fact can always claim that you made a bad decision. If you base your decisions on scenario solutions, there is a certain chance that you will do really well. It is therefore possible to claim that in certain cases the most risky decision one can make is the one with the highest expected value, because you will then always be proven wrong after the fact. The IQ of hindsight is very high.

1.1.6 Options

We have already hinted at it several times, but let us repeat the observation that the value of a stochastic programming approach to a problem lies in the explicit evaluation of flexibility. Flexible solutions will always lose in deterministic evaluations. Another area where these observations have been made for quite a while is option theory. This theory is mostly developed for financial models, but the theory of real options (for example investments) is coming. Let us consider our extremely simple example in the light of options.

We observed from Table 1 that the expected Net Present Value (NPV) of Decision 4, i.e. the decision to develop Lot 2 and build a plant, equals 30. Standard theory tells us to invest if a project has a positive NPV, since that means the project is profitable. And, indeed, Decision 4 represents an investment which is profitable in terms of expected profits.
But as we have observed, Decision 3 is better, and it is not possible to make both decisions; they exclude each other. The expected NPV for Decision 3 is 195. The difference of 165 is the value of an option, namely the option not to build the plant. Or to put it in a different wording: if your only possibilities were to develop Lot 2 and build the plant at the same time, or do nothing, and you were asked how much you were willing to pay in order to be allowed to delay the building of the plant (at the 10% penalty), the answer is at most 165. Another possible setting is to assume that the right to develop Lot 2 and build the plant is for sale. This right can be seen as an option. This option is worth 195 in the setting where delayed construction of the plant is allowed. (If delays were not allowed, the right to develop and build would be worth 30, but that is not an option.)

So what is it that gives an option a value? Its value stems from the right to do something in the future under certain circumstances, but to drop it in others if you so wish. And, even more importantly, to evaluate an option you must model explicitly the future decisions. This is true in our simple model, but it is equally true in any complex option model. It is not enough to describe a stochastic future; this stochastic future must contain decisions.

So what are the important aspects of randomness? We may conclude that there are at least three (all related of course).

1. Randomness is needed to obtain a correct evaluation of the future income and costs, i.e. to evaluate the objective.
2. Flexibility only has value (and meaning) in a setting of randomness.
3. Only by explicitly evaluating future decisions can decisions containing flexibility (options) be correctly valued.
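The option value discussed above is just the gap between the two expected NPVs. As a quick numerical check (all figures from Table 1 and the cost tables; the variable names are our own):

```python
# Value of the option to delay building on Lot 2.
# Decision 4: develop Lot 2 and build now.
# Decision 3: develop Lot 2 and wait; build late (at cost 660) only if p = 1250.
npv_build_now = -700 + 0.5 * 210 + 0.5 * 1250         # expected NPV = 30
npv_wait = -100 + 0.5 * 0 + 0.5 * (1250 - 660)        # expected NPV = 195
option_value = npv_wait - npv_build_now
print(option_value)                                    # 165.0
```

The 165 is exactly the most one should pay for the right to postpone the building decision until p is known.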
1.2 Preliminaries

Many practical decision problems—in particular, rather complex ones—can be modelled as linear programs

    min  c_1 x_1 + c_2 x_2 + · · · + c_n x_n
    s.t. a_11 x_1 + a_12 x_2 + · · · + a_1n x_n = b_1
         a_21 x_1 + a_22 x_2 + · · · + a_2n x_n = b_2
           ·····                                             (2.1)
         a_m1 x_1 + a_m2 x_2 + · · · + a_mn x_n = b_m
         x_1, x_2, · · · , x_n ≥ 0.

Using matrix–vector notation, the shorthand formulation of problem (2.1) reads

    min  c^T x
    s.t. Ax = b                                              (2.2)
         x ≥ 0.

Typical applications may be found in the areas of industrial production, transportation, agriculture, energy, ecology, engineering, and many others. In problem (2.1) the coefficients c_j (e.g. factor prices), a_ij (e.g. productivities) and b_i (e.g. demands or capacities) are assumed to have fixed known real values, and we are left with the task of finding an optimal combination of values for the decision variables x_j (e.g. factor inputs, activity levels or energy flows), which have to satisfy the given constraints. Obviously, model (2.1) can only provide a reasonable representation of a real life problem when the functions involved (e.g. cost functions or production functions) are fairly linear in the decision variables. If this condition is substantially violated—for example, because of increasing marginal costs or decreasing marginal returns of production—we should use a more general form to model our problem:

    min  g_0(x)
    s.t. g_i(x) ≤ 0,  i = 1, · · · , m                       (2.3)
         x ∈ X ⊂ IR^n.

The form presented in (2.3) is known as a mathematical programming problem. Here it is understood that the set X as well as the functions g_i : IR^n → IR, i = 0, · · · , m, are given by the modelling process.
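The matrix–vector notation (2.2) can be illustrated on a toy instance (the data below are invented for the illustration and are not taken from the text):

```python
import numpy as np

# A tiny instance of (2.2): min c^T x  s.t.  A x = b, x >= 0.
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])

# x = (0, 0, 4) is feasible, since A x = b and x >= 0 hold ...
x = np.array([0.0, 0.0, 4.0])
assert np.allclose(A @ x, b) and (x >= 0).all()
# ... and because c >= 0 implies c^T x >= 0 for every feasible x,
# the value c^T x = 0 attained here is already optimal.
print(float(c @ x))  # 0.0
```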
Depending on the properties of the problem-defining functions g_i and the set X, program (2.3) is called

(a) linear, if the set X is convex polyhedral and the functions g_i, i = 0, · · · , m, are linear;
(b) nonlinear, if at least one of the functions g_i, i = 0, · · · , m, is nonlinear or X is not a convex polyhedral set; among nonlinear programs, we denote a program as
  (b1) convex, if X ∩ {x | g_i(x) ≤ 0, i = 1, · · · , m} is a convex set and g_0 is a convex function (in particular if the functions g_i, i = 0, · · · , m, are convex and X is a convex set); and
  (b2) nonconvex, if either X ∩ {x | g_i(x) ≤ 0, i = 1, · · · , m} is not a convex set or the objective function g_0 is not convex.

Case (b2) above is also referred to as global optimization. Another special class of problems, called (mixed) integer programs, arises if the set X requires (at least some of) the variables x_j, j = 1, · · · , n, to take integer values only. We shall deal only briefly with discrete (i.e. mixed integer) problems, and there is a natural interest in avoiding nonconvex programs whenever possible, for a very simple reason revealed by the following example from elementary calculus.

Example 1.1 Consider the optimization problem
    min_{x ∈ IR} φ(x),                                       (2.4)

where φ(x) := (1/4)x^4 − 5x^3 + 27x^2 − 40x. A necessary condition for solving problem (2.4) is

    φ'(x) = x^3 − 15x^2 + 54x − 40 = 0.

Observing that φ'(x) = (x − 1)(x − 4)(x − 10), we see that x_1 = 1, x_2 = 4 and x_3 = 10 are candidates to solve our problem. Moreover, evaluating the second derivative φ''(x) = 3x^2 − 30x + 54, we get φ''(x_1) = 27, φ''(x_2) = −18 and φ''(x_3) = 54, indicating that x_1 and x_3 yield a relative minimum whereas in x_2 we find a relative maximum. However, evaluating the two relative minima yields φ(x_1) = −17.75 and φ(x_3) = −200. Hence, solving our little problem (2.4) with a numerical procedure that intends to satisfy the first- and second-order conditions for a minimum, we might (depending on the starting point of the procedure) end up with x_1 as a "solution" without realizing that there exists a (much) better possibility. □

As usual, a function ψ is said to attain a relative minimum—also called a local minimum—at some point x̂ if there is a neighbourhood U of x̂ (e.g. a ball with center x̂ and radius ε > 0) such that ψ(x̂) ≤ ψ(y) ∀y ∈ U. A minimum ψ(x̄) is called global if ψ(x̄) ≤ ψ(z) ∀z. As we just saw, a local minimum ψ(x̂) need not be a global minimum. A situation as in the above example cannot occur with convex programs, because of the following.

Lemma 1.1 If problem (2.3) is a convex program then any local (i.e. relative) minimum is a global minimum.

Proof If x̄ is a local minimum of problem (2.3) then x̄ belongs to the feasible set B := X ∩ {x | g_i(x) ≤ 0, i = 1, · · · , m}. Further, there is an ε_0 > 0 such that for any ball K_ε := {x | ‖x − x̄‖ ≤ ε}, 0 < ε < ε_0, we have g_0(x̄) ≤ g_0(x) ∀x ∈ K_ε ∩ B. Choosing an arbitrary y ∈ B, y ≠ x̄, we may choose an ε > 0 such that ε < ‖y − x̄‖ and ε < ε_0.
Finally, since, by our assumption, B is a convex set and the objective g_0 is a convex function, the line segment x̄y intersects the surface of the ball K_ε in a point x̂ such that x̂ = αx̄ + (1 − α)y for some α ∈ (0, 1), yielding g_0(x̄) ≤ g_0(x̂) ≤ αg_0(x̄) + (1 − α)g_0(y), which implies that g_0(x̄) ≤ g_0(y). □

During the last four decades, progress in computational methods for solving mathematical programs has been impressive, and problems of considerable size may be solved efficiently and with high reliability.

In many modelling situations it is unreasonable to assume that the coefficients c_j, a_ij, b_i or the functions g_i (and the set X) respectively in problems (2.1) and (2.3) are deterministically fixed. For instance, future productivities in a production problem, inflows into a reservoir connected to a hydro power station, demands at various nodes in a transportation network, and so on, are often appropriately modelled as uncertain parameters, which are at best characterized by probability distributions. The uncertainty about the realized values of those parameters cannot always be wiped out just by inserting their mean values or some other (fixed) estimates during the modelling process. That is, depending on the practical situation under consideration, problems (2.1) or (2.3) may not be appropriate models for describing the problem we want to solve. In this chapter we emphasize—and possibly clarify—the need to broaden the scope of modelling real life decision problems. Furthermore, we shall provide from linear programming and nonlinear programming the essential ingredients absolutely necessary for an understanding of the subsequent chapters. Obviously these latter sections may be skipped—or used as a quick revision—by readers who are already familiar with the related optimization courses.
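Example 1.1 above can be checked numerically; the sketch below also shows how a naive gradient-descent procedure started near the origin stops at the poorer local minimum x_1 = 1, exactly the trap the example describes:

```python
# Numerical check of Example 1.1 (pure Python, no libraries).
def phi(x):
    return 0.25 * x**4 - 5 * x**3 + 27 * x**2 - 40 * x

def dphi(x):
    return x**3 - 15 * x**2 + 54 * x - 40  # = (x - 1)(x - 4)(x - 10)

# The stationary points and the values at the two relative minima.
for x in (1, 4, 10):
    assert abs(dphi(x)) < 1e-9
print(phi(1), phi(10))  # -17.75 -200.0

# Gradient descent started at x = 0 converges to the local minimum at 1
# and never sees the much better minimum at 10.
x, step = 0.0, 1e-3
for _ in range(20000):
    x -= step * dphi(x)
print(round(x, 3))  # 1.0
```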
Before coming to a more general setting we first derive some typical stochastic programming models, using a simplified production problem to illustrate the various model types.

1.3 An Illustrative Example

Let us consider the following problem, idealized for the purpose of easy presentation. From two raw materials, raw1 and raw2, we may simultaneously produce two different goods, prod1 and prod2 (as may happen, for example, in a refinery). The outputs of products per unit of the raw materials as well as the unit costs of the raw materials c = (c_raw1, c_raw2)^T (yielding the production cost γ), the demands for the products h = (h_prod1, h_prod2)^T and the production capacity b̂, i.e. the maximal total amount of raw materials that can be processed, are given in Table 2. According to this formulation of our production problem, we have to deal with the following linear program:
Table 2 Productivities π(raw i, prod j).

                prod1   prod2     c      b̂
    raw1          2       3       2      1
    raw2          6       3       3      1
    relation      ≥       ≥      = γ     ≤
    h            180     162            100

    min  2x_raw1 + 3x_raw2
    s.t.  x_raw1 +  x_raw2 ≤ 100,
         2x_raw1 + 6x_raw2 ≥ 180,                            (3.1)
         3x_raw1 + 3x_raw2 ≥ 162,
          x_raw1 ≥ 0,  x_raw2 ≥ 0.

Due to the simplicity of the example problem, we can give a graphical representation of the set of feasible production plans (Figure 2). Given the cost function γ(x) = 2x_raw1 + 3x_raw2, we easily conclude (Figure 3) that

    x̂_raw1 = 36,  x̂_raw2 = 18,  γ(x̂) = 126                   (3.2)

is the unique optimal solution to our problem.

Figure 2 Deterministic LP: set of feasible production plans.

Our production problem is properly described by (3.1) and solved by (3.2) provided the productivities, the unit costs, the demands and the capacity (Table 2) are fixed data and known to us prior to making our decision on the production plan. However, this is obviously not always a realistic assumption. It may happen that at least some of the data—productivities and demands, for instance—can vary within certain limits (for our discussion, randomly), and that we have to make our decision on the production plan before knowing the exact values of those data. To be more specific, let us assume that

• our model describes the weekly production process of a refinery relying on two countries for the supply of crude oil (raw1 and raw2, respectively), supplying one big company with gasoline (prod1) for its distribution system of gas stations and another with fuel oil (prod2) for its heating and/or power plants;
• it is known that the productivities π(raw1, prod1) and π(raw2, prod2), i.e.
the output of gas from raw1 and the output of fuel from raw2, may change randomly (whereas the other productivities are deterministic);
• simultaneously, the weekly demands of the clients, h_prod1 for gas and h_prod2 for fuel, vary randomly;
• the weekly production plan (x_raw1, x_raw2) has to be fixed in advance and cannot be changed during the week, whereas
• the actual productivities are only observed (measured) during the production process itself, and
• the clients expect their actual demand to be satisfied during the corresponding week.

Figure 3 LP: feasible production plans and cost function for γ = 290.

Assume that, owing to statistics, we know that

    h_prod1        = 180 + ζ̃_1,
    h_prod2        = 162 + ζ̃_2,                              (3.3)
    π(raw1, prod1) =   2 + η̃_1,
    π(raw2, prod2) = 3.4 − η̃_2,

where the random variables ζ̃_j are modelled using normal distributions, and η̃_1 and η̃_2 are distributed uniformly and exponentially respectively, with the following parameters:¹

    ζ̃_1 ∼ N(0, 12),
    ζ̃_2 ∼ N(0, 9),                                           (3.4)
    η̃_1 ∼ U[−0.8, 0.8],
    η̃_2 ∼ EXP(λ = 2.5).

For simplicity, we assume that these four random variables are mutually independent. Since the random variables ζ̃_1, ζ̃_2 and η̃_2 are unbounded, we restrict our considerations to their respective 99% confidence intervals
(except for U). So we have for the above random variables' realizations

    ζ_1 ∈ [−30.91, 30.91],
    ζ_2 ∈ [−23.18, 23.18],                                   (3.5)
    η_1 ∈ [−0.8, 0.8],
    η_2 ∈ [0.0, 1.84].

¹ We use N(µ, σ) to denote the normal distribution with mean µ and standard deviation σ.

Hence, instead of the linear program (3.1), we are dealing with the stochastic linear program

    min  2x_raw1 + 3x_raw2
    s.t.         x_raw1 +            x_raw2 ≤ 100,
         (2 + η̃_1)x_raw1 +         6x_raw2 ≥ 180 + ζ̃_1,      (3.6)
         3x_raw1 + (3.4 − η̃_2)x_raw2       ≥ 162 + ζ̃_2,
           x_raw1 ≥ 0,  x_raw2 ≥ 0.

This is not a well-defined decision problem, since it is not at all clear what the meaning of "min" can be before a realization (ζ_1, ζ_2, η_1, η_2) of (ζ̃_1, ζ̃_2, η̃_1, η̃_2) is known.

Geometrically, the consequences of our random parameter changes may be rather complex. The effect of only the right-hand sides ζ_i varying over the intervals given in (3.5) corresponds to parallel translations of the corresponding facets of the feasible set, as shown in Figure 4. We may instead consider the effect of only the η_i changing their values within the intervals mentioned in (3.5). That results in rotations of the related facets. Some possible situations are shown in Figure 5, where the centers of rotation are indicated by small circles. Allowing all the possible changes in the demands and in the productivities simultaneously yields a superposition of the two geometrical motions, i.e. of the translations and the rotations. It is easily seen that the variation of the feasible set may be substantial, depending on the actual realizations of the random data. The same is also true for the so-called wait-and-see solutions, i.e. for those optimal solutions we should choose if we knew the realizations of the random parameters in advance. In Figure 6 a few possible situations are indicated.
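For a two-variable LP such as (3.1), an optimal solution (when one exists) is attained at an intersection of two constraints, so a tiny enumeration of constraint intersections suffices as a solver. The sketch below reproduces (3.2) and then computes a wait-and-see solution for one hypothetical realization of (3.6) (the realization values are invented for illustration):

```python
from itertools import combinations

def solve_2d_lp(constraints, cost=(2.0, 3.0)):
    """Minimize cost over constraints a*x1 + b*x2 >= r by enumerating
    the intersection points of constraint pairs (the candidate vertices)."""
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel constraints: no intersection point
        x = (r1 * b2 - r2 * b1) / det
        y = (a1 * r2 - a2 * r1) / det
        if all(a * x + b * y >= r - 1e-9 for a, b, r in constraints):
            val = cost[0] * x + cost[1] * y
            if best is None or val < best[0]:
                best = (val, x, y)
    return best

nominal = [(-1, -1, -100),   # x1 + x2 <= 100, written as -x1 - x2 >= -100
           (2, 6, 180),      # demand constraint for prod1
           (3, 3, 162),      # demand constraint for prod2
           (1, 0, 0), (0, 1, 0)]
print(solve_2d_lp(nominal))  # (126.0, 36.0, 18.0), reproducing (3.2)

# A wait-and-see solution for one hypothetical realization of (3.6),
# e.g. zeta1 = 20, zeta2 = -10, eta1 = 0.5, eta2 = 0.2:
scenario = [(-1, -1, -100), (2.5, 6, 200), (3, 3.2, 152), (1, 0, 0), (0, 1, 0)]
print(solve_2d_lp(scenario))  # a different optimal plan for this scenario
```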
In addition to the deterministic solution

    x̂ = (x̂_raw1, x̂_raw2) = (36, 18),  γ = 126,

production plans such as

    ŷ = (ŷ_raw1, ŷ_raw2) = (20, 30),  γ = 130,
    ẑ = (ẑ_raw1, ẑ_raw2) = (50, 22),  γ = 166,               (3.7)
    v̂ = (v̂_raw1, v̂_raw2) = (58, 6),   γ = 134

may be wait-and-see solutions.

Figure 4 LP: feasible set varying with demands.
Figure 5 LP: feasible set varying with productivities.

Unfortunately, wait-and-see solutions are not what we need. We have to decide on production plans under uncertainty, since we only have statistical information about the distributions of the random demands and productivities. A first possibility would consist in looking for a "safe" production program: one that will be feasible for all possible realizations of the productivities and demands. A production program like this is called a fat solution and reflects total risk aversion of the decision maker. Not surprisingly, fat solutions are usually rather expensive. In our example we can conclude from Figure 6 that a fat solution exists at the intersection of the two rightmost constraints for prod1 and prod2, which is easily computed as

    x* = (x*_raw1, x*_raw2) = (48.018, 25.548),  γ* = 172.681.    (3.8)

To introduce another possibility, let us assume that the refinery has made the following arrangement with its clients. In principle, the clients expect the refinery to satisfy their weekly demands. However—depending on the production plan and the unforeseen events determining the clients' demands and/or the refinery's productivity—it may very likely happen that the demands cannot be covered by the production, which will cause "penalty" costs to the refinery. The amount of shortage has to be bought from the market. These penalties are supposed to be proportional to the respective shortage in products, and we assume that per unit of undeliverable product they amount to

    q_prod1 = 7,  q_prod2 = 12.
(3.9)

The costs due to shortage of production—or, in general, due to the amount of violation in the constraints—are actually determined only after the observation of the random data, and are denoted as recourse costs. In a case (like ours) of repeated execution of the production program it makes sense—according to what we have learned from statistics—to apply an expected value criterion. More precisely, we may want to find a production plan that minimizes the sum of our original first-stage (i.e. production) costs and the expected recourse costs.

To formalize this approach, we abbreviate our notation. Instead of the four single random variables ζ̃_1, ζ̃_2, η̃_1 and η̃_2, it is convenient to use the random vector ξ̃ = (ζ̃_1, ζ̃_2, η̃_1, η̃_2)^T. Further, we introduce for each of the two stochastic constraints in (3.6) a recourse variable y_i(ξ̃), i = 1, 2, which simply measures the corresponding shortage in production if there is any; since shortage depends on the realizations of our random vector ξ̃, so does the corresponding recourse variable, i.e. the y_i(ξ̃) are themselves random variables.

Figure 6 LP: feasible set varying with productivities and demands; some wait-and-see solutions.

Following the approach sketched so far, we now replace the vague stochastic program (3.6) by the well-defined stochastic program with recourse, using h_1(ξ̃) := h_prod1 = 180 + ζ̃_1, h_2(ξ̃) := h_prod2 = 162 + ζ̃_2, α(ξ̃) := π(raw1, prod1) = 2 + η̃_1 and β(ξ̃) := π(raw2, prod2) = 3.4 − η̃_2:

    min  2x_raw1 + 3x_raw2 + E_ξ̃[7y_1(ξ̃) + 12y_2(ξ̃)]
    s.t.    x_raw1 +       x_raw2                     ≤ 100,
         α(ξ̃)x_raw1 +    6x_raw2 + y_1(ξ̃)            ≥ h_1(ξ̃),
         3x_raw1 + β(ξ̃)x_raw2             + y_2(ξ̃)   ≥ h_2(ξ̃),
           x_raw1 ≥ 0,  x_raw2 ≥ 0,  y_1(ξ̃) ≥ 0,  y_2(ξ̃) ≥ 0.
(3.10)

In (3.10), E_ξ̃ stands for the expected value with respect to the distribution of ξ̃, and, in general, it is understood that the stochastic constraints have to hold almost surely (a.s.), i.e. they are to be satisfied with probability 1. Note that if ξ̃ has a finite discrete distribution {(ξ^i, p_i), i = 1, · · · , r} (p_i > 0 ∀i), then (3.10) is just an ordinary linear program with a so-called dual decomposition structure:

    min  2x_raw1 + 3x_raw2 + Σ_{i=1}^r p_i[7y_1(ξ^i) + 12y_2(ξ^i)]
    s.t.    x_raw1 +        x_raw2                ≤ 100,
         α(ξ^i)x_raw1 +    6x_raw2 + y_1(ξ^i)     ≥ h_1(ξ^i)  ∀i,
         3x_raw1 + β(ξ^i)x_raw2    + y_2(ξ^i)     ≥ h_2(ξ^i)  ∀i,    (3.11)
           x_raw1 ≥ 0,  x_raw2 ≥ 0,
           y_1(ξ^i) ≥ 0,  y_2(ξ^i) ≥ 0  ∀i.
Depending on the number r of realizations of ξ̃, this linear program may become (very) large in scale, but its particular block structure is amenable to specially designed algorithms. Linear programs with dual decomposition structure will be introduced in general in Section 1.5 on page 42. A basic solution method for these problems will be described in Section 1.7.4 (page 75).

To further analyse our refinery problem, let us first assume that only the demands, h_i(ξ̃), i = 1, 2, change their values randomly, whereas the productivities are fixed. In this case we are in the situation illustrated in Figure 4. Even this small idealized problem can present numerical difficulties if solved as a nonlinear program. The reason for this lies in the fact that the evaluation of the expected value appearing in the objective function requires

• multivariate numerical integration, and
• implicit definition of the functions y_i(ξ) (these functions yielding, for a fixed x̂, the optimal solutions of (3.10) for every possible realization ξ of ξ̃),

both of which are rather cumbersome tasks. To avoid these difficulties, we shall try to approximate the normal distributions by discrete ones. For this purpose, we
• generate large samples ζ_i^µ, µ = 1, · · · , K, i = 1, 2, restricted to the 99% intervals of (3.5), with sample size K = 10 000;
• choose equidistant partitions of the 99% intervals into r_i, i = 1, 2, subintervals (e.g. r_1 = r_2 = 15);
• calculate for every subinterval I_iν, ν = 1, · · · , r_i, i = 1, 2, the arithmetic mean ζ̄_i^ν of the sample values ζ_i^µ ∈ I_iν, yielding an estimate for the conditional expectation of ζ̃_i given I_iν;
• calculate for every subinterval I_iν the relative frequency p_iν of ζ_i^µ ∈ I_iν (i.e. p_iν = k_iν/K, where k_iν is the number of sample values ζ_i^µ contained in I_iν). This yields an estimate for the probability of {ζ̃_i ∈ I_iν}.

The discrete distributions {(ζ̄_i^ν, p_iν), ν = 1, · · · , r_i}, i = 1, 2, are then used as approximations for the given normal distributions. Figure 7 shows these discrete distributions for N(0, 12) and N(0, 9), with 15 realizations each.

Figure 7 Discrete distributions generated from N(0, 12) and N(0, 9); (r_1, r_2) = (15, 15).

Obviously, these discrete distributions with 15 realizations each can only be rough approximations of the corresponding normal distributions. Therefore approximating probabilities of particular events using these discrete distributions can be expected to cause remarkable discretization errors. This will become evident in the following numerical examples.

Using these latter distributions, with 15 realizations each, we get 15² = 225 realizations for the joint distribution, and hence 225 blocks in our decomposition problem. This yields as an optimal solution of the linear program (3.11) (with γ(·) the total objective of (3.11) and γ_I(x) = 2x_raw1 + 3x_raw2)

    x̃ = (x̃_1, x̃_2) = (38.539, 20.539),  γ(x̃) = 140.747,      (3.12)

with corresponding first-stage costs of γ_I(x̃) = 138.694. Defining ρ(x) as the empirical reliability (i.e.
the probability of being feasible) for any production plan x, we find—with respect to the approximating discrete distribution—for our solution x̃ that

    ρ(x̃) = 0.9541,

whereas using our original linear program's solution x̂ = (36, 18) would yield the total expected cost

    γ(x̂) = 199.390

and an empirical reliability of

    ρ(x̂) = 0.3188,

which is clearly overestimated (compared with its theoretical value of 0.25). This indicates that the crude method of discretization used here just for demonstration has to be refined, either by choosing a finer discretization or, preferably—in view of the numerical workload drastically increasing with the size of the support of the discrete distribution—by finding a more appropriate strategy for determining the subintervals of the partition.

Let us now consider the effect of randomness in the productivities. To this end, we assume that h_i(ξ̃), i = 1, 2, are fixed at their expected values, and that the two productivities α(ξ̃) and β(ξ̃) behave according to their distributions known from (3.3) and (3.4). Again we discretize the given distributions, confining ourselves to 15 and 18 subintervals for the uniform and the exponential distributions respectively, yielding 15 × 18 = 270 blocks in (3.11). Solving the resulting stochastic program with recourse (3.11) as an ordinary linear program, we get the solution

    x̄ = (37.566, 22.141),  γ(x̄) = 144.179,  γ_I(x̄) = 141.556,

whereas the solution of our original LP (3.1) would yield total expected costs of

    γ(x̂) = 204.561.

For the reliability, we now get

    ρ(x̄) = 0.9497,

in contrast to

    ρ(x̂) = 0.2983

for the LP solution x̂.

Finally, we consider the most general case of α(ξ̃), β(ξ̃), h_1(ξ̃) and h_2(ξ̃) varying randomly, where the distributions are discretely approximated by 5-, 9-, 7- and 11-point distributions respectively, in an analogous manner to the above.
This yields a joint discrete distribution with 5 × 9 × 7 × 11 = 3465 realizations, and hence equally many blocks in the recourse problem (3.11); in other words, we have to solve a linear program with 2 × 3465 + 1 = 6931 constraints! The solution x̌ amounts to

    x̌ = (37.754, 23.629),  γ(x̌) = 150.446,  γ_I(x̌) = 146.396,

with a reliability of

    ρ(x̌) = 0.9452,

whereas the LP solution x̂ = (36, 18) would yield

    γ(x̂) = 232.492,  ρ(x̂) = 0.2499.

So far we have focused on the case where decisions, turning out post festum to have been the wrong ones, imply penalty costs that depend on the magnitude of the constraint violations. Afterwards, we were able to determine the reliability of the resulting decisions, which represents a measure of feasibility. Note that the reliability provides no indication of the size of possible constraint violations and the corresponding penalty costs. Nevertheless, there are many real life decision situations where reliability is considered the most important issue—either because it seems impossible to quantify a penalty, or because of questions of image or ethics. Examples may be found in various areas, such as medical problems as well as technical applications. For instance, suppose once again that only the demands are random. Suppose further that the management of our refinery is convinced that it is absolutely necessary—in order to maintain its client base—to achieve a reliability of 95% with respect to satisfying the demands. In this case we may formulate the following stochastic program with joint probabilistic constraints:

    min  2x_raw1 + 3x_raw2
    s.t.  x_raw1 + x_raw2 ≤ 100,
          x_raw1 ≥ 0,  x_raw2 ≥ 0,
          P(2x_raw1 + 6x_raw2 ≥ h_1(ξ̃), 3x_raw1 + 3x_raw2 ≥ h_2(ξ̃)) ≥ 0.95.

This problem can be solved with appropriate methods, one of which will be presented later in this text.
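When only the demands are random, the reliability of a fixed plan factorizes, by independence, into a product of two normal distribution functions, which lets us verify the figures quoted above (a sketch using the data of Section 1.3; σ denotes the standard deviation, as in (3.4)):

```python
from statistics import NormalDist

# Reliability of a plan (x1, x2) when only the demands are random:
# rho(x) = P(2*x1 + 6*x2 >= 180 + zeta1) * P(3*x1 + 3*x2 >= 162 + zeta2)
# with zeta1 ~ N(0, 12), zeta2 ~ N(0, 9) independent.
def reliability(x1, x2):
    p1 = NormalDist(0.0, 12.0).cdf(2 * x1 + 6 * x2 - 180)
    p2 = NormalDist(0.0, 9.0).cdf(3 * x1 + 3 * x2 - 162)
    return p1 * p2

# The LP solution (36, 18) satisfies both demand constraints with
# equality, hence rho = 0.5 * 0.5 = 0.25, the theoretical value
# quoted in the text.
print(reliability(36.0, 18.0))  # 0.25
# The recourse solution (3.12) is far more reliable:
print(round(reliability(38.539, 20.539), 4))  # ~0.9115, as stated in the text
```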
It seems worth mentioning that in this case it is appropriate to use the normal distributions themselves instead of their discrete approximations, owing to theoretical properties of probabilistic constraints to be discussed later on. The solution of the probabilistically constrained program is

    z = (37.758, 21.698),  γ_I(z) = 140.612.

So the costs—i.e. the first-stage costs—are only slightly increased compared with the LP solution, considering the drastic increase in reliability. There seems to be a contradiction when comparing this last result with the solution (3.12), in that γ_I(x̃) < γ_I(z) and ρ(x̃) > 0.95; however, this discrepancy is due to the discretization error made by replacing the true normal distribution of (ξ̃_1, ξ̃_2) by the 15 × 15 discrete distribution used for the computation of the solution (3.12). Using the correct normal distribution would obviously yield γ_I(x̃) = 138.694 (as in (3.12)), but only ρ(x̃) = 0.9115!

1.4 Stochastic Programs: General Formulation

In the same way as random parameters in (3.1) led us to the stochastic (linear) program (3.6), random parameters in (2.3) may lead to the problem

    "min" g_0(x, ξ̃)
    s.t.  g_i(x, ξ̃) ≤ 0,  i = 1, · · · , m,                   (4.1)
          x ∈ X ⊂ IR^n,

where ξ̃ is a random vector varying over a set Ξ ⊂ IR^k. More precisely, we assume throughout that a family F of "events", i.e. subsets of Ξ, and the probability distribution P on F are given. Hence for every subset A ⊂ Ξ that is an event, i.e. A ∈ F, the probability P(A) is known. Furthermore, we assume that the functions g_i(x, ·) : Ξ → IR ∀x, i are random variables themselves, and that the probability distribution P is independent of x. However, problem (4.1) is not well defined, since the meanings of "min" as well as of the constraints are not clear at all if we think of taking a decision on x before knowing the realization of ξ̃.
Therefore a revision of the modelling process is necessary, leading to so-called deterministic equivalents for (4.1), which can be introduced in various ways, some of which we have seen for our example in the previous section. Before discussing them, we review some basic concepts in probability theory, and fix the terminology and notation used throughout this text.

1.4.1
Measures and Integrals

In IR^k we denote sets of the type I_[a,b) = {x ∈ IR^k | a_i ≤ x_i < b_i, i = 1, · · · , k} as (half-open) intervals. In geometric terms, depending on the dimension k of IR^k, I_[a,b) is

• an interval if k = 1,
• a rectangle if k = 2,
• a cube if k = 3,

while for k > 3 there is no common language term for these objects, since geometric imagination obviously ends there.

Sometimes we want to know something about the "size" of a set in IR^k, e.g. the length of a beam, the area of a piece of land or the volume of a building; in other words, we want to measure it. One possibility to do this is to fix first how we determine the measure of intervals, and a "natural" choice of a measure µ would be

• in IR^1:  µ(I_[a,b)) = b − a if a ≤ b, and 0 otherwise;
• in IR^2:  µ(I_[a,b)) = (b_1 − a_1)(b_2 − a_2) if a ≤ b, and 0 otherwise;
• in IR^3:  µ(I_[a,b)) = (b_1 − a_1)(b_2 − a_2)(b_3 − a_3) if a ≤ b, and 0 otherwise.

Analogously, in general for I_[a,b) ⊂ IR^k with arbitrary k, we have

    µ(I_[a,b)) = Π_{i=1}^k (b_i − a_i) if a ≤ b, and 0 otherwise.    (4.2)

Obviously, for a set A that is the disjoint finite union of intervals, i.e. A = ∪_{n=1}^M I^(n), the I^(n) being intervals such that I^(n) ∩ I^(m) = ∅ for n ≠ m, we define its measure as µ(A) = Σ_{n=1}^M µ(I^(n)). In order to measure a set A that is not just an interval or a finite union of disjoint intervals, we may proceed as follows. Any finite collection of pairwise disjoint intervals contained in A forms a packing C of A, C being the union of those intervals, with a well-defined measure µ(C) as mentioned above. Analogously, any finite collection of pairwise disjoint intervals whose union contains A forms a covering D of A, with a well-defined measure µ(D). Take, for example, in IR^2 the set

    A_circ = {(x, y) | x² + y² ≤ 16, y ≥ 0},

i.e. the half-circle illustrated in Figure 8, which also shows a first possible packing C_1 and covering D_1.
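The elementary interval measure (4.2) is a one-liner; a small sketch:

```python
# Measure of a half-open interval I_[a,b) in IR^k, as in (4.2):
# the product of edge lengths if a <= b componentwise, and 0 otherwise.
def interval_measure(a, b):
    m = 1.0
    for ai, bi in zip(a, b):
        if ai > bi:
            return 0.0
        m *= bi - ai
    return m

print(interval_measure([0, 0], [2, 3]))        # 6.0, a 2x3 rectangle
print(interval_measure([1, 1, 1], [2, 5, 0]))  # 0.0, since a <= b fails
```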
From high school we know that the area of A_circ is computed as µ(A_circ) = (1/2) × π × (radius)² = 25.1327, whereas we easily compute µ(C_1) = 13.8564 and µ(D_1) = 32. If we had forgotten all our wisdom from high school, we would only be able to conclude that the measure of the half-circle A_circ is between 13.8564 and 32. To obtain a more precise estimate, we can try to improve the packing and the covering in such a way that the new packing C_2 exhausts more of the set A_circ and the new covering D_2 becomes a tighter outer approximation of A_circ. This is shown in Figure 9, for which we get µ(C_2) = 19.9657 and µ(D_2) = 27.9658. Hence the measure of A_circ is between 19.9657 and 27.9658. If this is still not precise enough, we may further improve the packing and the covering. For the half-circle A_circ, it is easily seen that we may determine its measure in this way to any desired accuracy.

Figure 8 Measure of a half-circle: first approximation.
Figure 9 Improved approximate measure of a half-circle.

In general, for any closed bounded set A ⊂ IR^k, we may try a similar procedure to measure A. Denote by C_A the set of all packings for A and by D_A the set of all coverings of A. Then we make the following definition. The closed bounded set A is measurable if

    sup{µ(C) | C ∈ C_A} = inf{µ(D) | D ∈ D_A},

with the measure µ(A) = sup{µ(C) | C ∈ C_A}.

To get rid of the boundedness restriction, we may extend this definition immediately by saying: an arbitrary closed set A ⊂ IR^k is measurable iff² for every interval I_[a,b) ⊂ IR^k the set A ∩ I_[a,b) is measurable (in the sense defined before). This implies that IR^k itself is measurable. Observing that there always exist collections of countably many pairwise disjoint intervals I_[a^ν,b^ν), ν = 1, 2, · · ·, covering IR^k, i.e. ∪_{ν=1}^∞ I_[a^ν,b^ν) = IR^k (e.g. take intervals with all edges having length 1), we get µ(A) = Σ_{ν=1}^∞ µ(A ∩ I_[a^ν,b^ν)) as the measure of A. Obviously µ(A) = ∞ may happen, as it does for instance with A = IR²₊ (i.e. the positive orthant of IR²) or with A = {(x, y) ∈ IR² | x ≥ 1, 0 ≤ y ≤ 1/x}. But we may also find unbounded sets with finite measure, e.g. A = {(x, y) ∈ IR² | x ≥ 0, 0 ≤ y ≤ e^(−x)} (see the exercises at the end of this chapter).

The measure introduced this way for closed sets, based on the elementary measure for intervals as defined in (4.2), may be extended as a "natural" measure for the class A of measurable sets in IR^k, and will be denoted throughout by µ. We just add that A is characterized by the following properties:

    if A ∈ A then also IR^k − A ∈ A;                              (4.3 i)
    if A_i ∈ A, i = 1, 2, · · ·, then also ∪_{i=1}^∞ A_i ∈ A.      (4.3 ii)

This implies that with A_i ∈ A, i = 1, 2, · · ·, also ∩_{i=1}^∞ A_i ∈ A. As a consequence of the above construction, we have, for the natural measure µ defined on IR^k, that

    µ(A) ≥ 0 ∀A ∈ A and µ(∅) = 0;                                 (4.4 i)
    if A_i ∈ A, i = 1, 2, · · ·, and A_i ∩ A_j = ∅ for i ≠ j,
    then µ(∪_{i=1}^∞ A_i) = Σ_{i=1}^∞ µ(A_i).                      (4.4 ii)

In other words, the measure of a countable disjoint union of measurable sets equals the countable sum of the measures of these sets.
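The packing/covering construction can be imitated numerically for the half-circle A_circ (a sketch: grid cells of side h that lie entirely inside A form a packing, cells that meet A form a covering, and together they bracket the true measure 8π ≈ 25.1327):

```python
import math

# Packing/covering bounds for A = {(x, y) | x^2 + y^2 <= 16, y >= 0}.
def bounds(h):
    n = math.ceil(4.0 / h)
    packing = covering = 0.0
    for i in range(-n, n):
        for j in range(n):
            x0, y0 = i * h, j * h          # lower-left corner of the cell
            # Farthest point of the cell from the origin (y0 >= 0 here):
            fx = max(abs(x0), abs(x0 + h))
            fy = y0 + h
            if fx * fx + fy * fy <= 16.0:
                packing += h * h           # cell entirely contained in A
            # Nearest point of the cell to the origin:
            nx = 0.0 if x0 <= 0.0 <= x0 + h else min(abs(x0), abs(x0 + h))
            if nx * nx + y0 * y0 <= 16.0:
                covering += h * h          # cell meets A
    return packing, covering

lo, hi = bounds(0.05)
print(lo < 8 * math.pi < hi)  # True: the bounds bracket the exact area
```

Refining h tightens the bracket, mirroring the passage from (C_1, D_1) to (C_2, D_2) above.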
² "iff" stands for "if and only if".

These properties are also familiar from probability theory: there we have some space Ω of outcomes ω (e.g. the results of random experiments), a collection F of subsets F ⊂ Ω called events, and a probability measure (or probability distribution) P assigning to each F ∈ F the probability with which it occurs. To set up probability theory, it is then required that

(i) Ω is an event, i.e. Ω ∈ F, and, with F ∈ F, it holds that also Ω − F ∈ F, i.e. if F is an event then so is its complement (or not-F);
(ii) the countable union of events is an event.

Observe that these formally coincide with (4.3), except that Ω can be any space of objects and need not be IR^k. For the probability measure, it is required that

(i) P(F) ≥ 0 ∀F ∈ F and P(Ω) = 1;
(ii) if F_i ∈ F, i = 1, 2, · · ·, and F_i ∩ F_j = ∅ for i ≠ j, then P(∪_{i=1}^∞ F_i) = Σ_{i=1}^∞ P(F_i).
The only difference from (4.4) is that P is bounded, with P(F) ≤ 1 ∀F ∈ F, whereas µ is unbounded on IR^k. The triple (Ω, F, P) with the above properties is called a probability space.

In addition, in probability theory we find random variables and random vectors ξ̃. With A the collection of naturally measurable sets in IR^k, a random vector is a function (i.e. a single-valued mapping)

    ξ̃ : Ω → IR^k such that, for all A ∈ A, ξ̃^(−1)[A] := {ω | ξ̃(ω) ∈ A} ∈ F.    (4.5)

This requires the "inverse" (with respect to the function ξ̃) of any measurable set in IR^k to be an event in Ω. Observe that a random vector ξ̃ : Ω → IR^k induces a probability measure P_ξ̃ on A according to

    P_ξ̃(A) = P({ω | ξ̃(ω) ∈ A}) ∀A ∈ A.

Example 1.2 At a market hall for the fruit trade you find a particular species of apples. These apples are traded in certain lots (e.g. of 1000 lb). Buying a lot involves some risk with respect to the quality of the apples contained in it. What does "quality" mean in this context? Obviously quality is a conglomerate of criteria described in terms like size, ripeness, flavour, colour and appearance. Some of the criteria can be expressed through quantitative measurement, while others cannot (they have to be judged upon by experts). Hence the set Ω of all possible "qualities" cannot as such be represented as a subset of some IR^k. Having bought a lot, the trader has to sort his apples according to their "outcomes" (i.e. qualities), which could fall into "events" like "unusable" (e.g. rotten or too unripe), "cooking apples" and "low (medium, high) quality eatable apples". Having sorted out the "unusable" and the "cooking apples", for the remaining apples experts could be asked to judge on ripeness, flavour, colour and appearance by assigning real values between 0 and 1 to parameters r, f, c and a respectively, corresponding to the "degree (or percentage) of achieving" the particular criterion.
Now we can construct a scalar value for any particular outcome (quality) ω, for instance as

ṽ(ω) := { 0                             if ω ∈ “unusable”,
        { 1/2                           if ω ∈ “cooking apples”,
        { (1 + r)(1 + f)(1 + c)(1 + a)  otherwise.

Obviously ṽ has the range ṽ[Ω] = {0} ∪ {1/2} ∪ [1, 16]. Denoting the events “unusable” by U and “cooking apples” by C, we may define the collection F of events as follows. With G denoting the family of all subsets of Ω − (U ∪ C), let F contain all unions of U, C, ∅ or Ω with any element of G. Assume that after a long series of observations we have a good estimate for the probabilities P(A), A ∈ F. According to our scale, we could classify the apples as

• eatable and
  – 1st class for ṽ(ω) ∈ [12, 16] (high selling price),
  – 2nd class for ṽ(ω) ∈ [8, 12) (medium price),
  – 3rd class for ṽ(ω) ∈ [1, 8) (low price);
• good for cooking for ṽ(ω) = 1/2 (cheap);
• waste for ṽ(ω) = 0.

Obviously the probability to have 1st-class apples in our lot is

P_ṽ([12, 16]) = P(ṽ^{-1}[[12, 16]]),

whereas the probability of having 3rd-class or cooking apples amounts to

P_ṽ([1, 8) ∪ {1/2}) = P(ṽ^{-1}[[1, 8) ∪ {1/2}])
                    = P(ṽ^{-1}[[1, 8)]) + P(C),

using the fact that ṽ is single-valued and [1, 8), {1/2} and hence ṽ^{-1}[[1, 8)], ṽ^{-1}[{1/2}] = C are disjoint. For an illustration, see Figure 10.   2

Figure 10 Classification of apples by quality.

If it happens that Ω ⊂ IR^k and F ⊂ A (i.e. every event is a “naturally” measurable set) then we may replace ω trivially by ξ̃(ω) by just applying the identity mapping ξ̃(ω) ≡ ω, which preserves the probability measure P on F, i.e.

P_ξ̃(A) = P(A) for A ∈ F,

since obviously {ω | ξ̃(ω) ∈ A} = A if A ∈ F. In any case, given a random vector ξ̃ with Ξ ∈ A such that {ω | ξ̃(ω) ∈ Ξ} = Ω (observe that Ξ = IR^k always satisfies this, but there may be smaller sets in A that do so), with F̂ = {B | B = A ∩ Ξ, A ∈ A}, instead of the abstract probability space (Ω, F, P) we may equivalently consider the induced probability space (Ξ, F̂, P_ξ̃), which we shall use henceforth and therefore denote as (Ξ, F, P). We shall use ξ̃ for the random vector and ξ for the elements of Ξ (i.e. for the possible realizations of ξ̃).

Sometimes we like to assert a special property (like continuity or differentiability of some function f : IR^k → IR) everywhere in IR^k. But it may happen that this property almost always holds, except on some particular subsets of IR^k like N_1 = {finitely many isolated points} or (for k ≥ 2) N_2 = {finitely many segments of straight lines}, the examples mentioned being (“naturally”) measurable and having the natural measure µ(N_1) = µ(N_2) = 0. In a situation like this, more precisely if there is a set N_δ ∈ A with µ(N_δ) = 0, and if our property holds for all x ∈ IR^k − N_δ, we say that it holds almost everywhere (a.e.).
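The classification in Example 1.2 is easy to mirror computationally. The sketch below is purely illustrative (the judged sample and its weights are invented); it evaluates ṽ, classifies outcomes, and estimates an induced class probability P_ṽ from observed frequencies:

```python
def v(quality):
    """Quality score from Example 1.2: quality is 'unusable', 'cooking',
    or a tuple (r, f, c, a) of expert scores in [0, 1]."""
    if quality == "unusable":
        return 0.0
    if quality == "cooking":
        return 0.5
    r, f, c, a = quality
    return (1 + r) * (1 + f) * (1 + c) * (1 + a)  # lies in [1, 16]

def classify(score):
    if score == 0.0:
        return "waste"
    if score == 0.5:
        return "good for cooking"
    if score >= 12:
        return "1st class"
    if score >= 8:
        return "2nd class"
    return "3rd class"

# A hypothetical lot of judged apples, equally weighted.
lot = ["unusable", "cooking",
       (0.9, 0.8, 0.9, 0.9),   # v ~ 12.35 -> 1st class
       (0.5, 0.5, 0.5, 0.5),   # v ~ 5.06  -> 3rd class
       (0.8, 0.7, 0.6, 0.7)]   # v ~ 8.32  -> 2nd class

classes = [classify(v(q)) for q in lot]
# Empirical induced probability P_v([1, 8) U {1/2}): 3rd class or cooking.
p = sum(1 for c in classes if c in ("3rd class", "good for cooking")) / len(classes)
```

Here `p` plays the role of P_ṽ([1, 8) ∪ {1/2}), estimated from frequencies rather than given by a known measure P.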
In the context of a probability space (Ξ, F, P), if there is an event N_δ ∈ F with P(N_δ) = 0 such that a property holds on Ξ − N_δ, owing to the practical interpretation of probabilities, we say that the property holds almost surely (a.s.).

Figure 11 Integrating a simple function.

Next let us briefly review integrals. Consider first IR^k with A, its measurable sets, and the natural measure µ, and choose some bounded measurable set B ∈ A. Further, let {A_1, ···, A_r} be a partition of B into measurable sets, i.e. A_i ∈ A, A_i ∩ A_j = ∅ for i ≠ j, and ∪_{i=1}^r A_i = B. Given the indicator functions χ_{A_i} : B → IR defined by

χ_{A_i}(x) = { 1 if x ∈ A_i,
             { 0 otherwise,

we may introduce a so-called simple function ϕ : B → IR given with some constants c_i by

ϕ(x) = Σ_{i=1}^r c_i χ_{A_i}(x), i.e. ϕ(x) = c_i for x ∈ A_i.

Then the integral ∫_B ϕ(x)dµ is defined as

∫_B ϕ(x)dµ = Σ_{i=1}^r c_i µ(A_i).   (4.6)

In Figure 11 the integral would result by accumulating the shaded areas with their respective signs as indicated.

Figure 12 Integrating an arbitrary function.

Observe that the sum (or difference) of simple functions ϕ_1 and ϕ_2 is again a simple function and that

∫_B [ϕ_1(x) + ϕ_2(x)]dµ = ∫_B ϕ_1(x)dµ + ∫_B ϕ_2(x)dµ,   |∫_B ϕ(x)dµ| ≤ ∫_B |ϕ(x)|dµ,

from the elementary properties of finite sums. Furthermore, it is easy to see that for disjoint measurable sets (i.e. B_j ∈ A, j = 1, ···, s, and B_j ∩ B_l = ∅ for j ≠ l) such that ∪_{j=1}^s B_j = B, it follows that

∫_B ϕ(x)dµ = Σ_{j=1}^s ∫_{B_j} ϕ(x)dµ.

To integrate any other function ψ : B → IR that is not a simple function, we use simple functions to approximate ψ (see Figure 12), and whose integrals converge. Any sequence {ϕ_n} of simple functions on B satisfying

∫_B |ϕ_n(x) − ϕ_m(x)|dµ → 0 for n, m → ∞

is called mean fundamental. If there exists a sequence {ϕ_n} such that

ϕ_n(x) → ψ(x) a.e.³ and {ϕ_n} is mean fundamental,

then the integral ∫_B ψ(x)dµ is defined by

∫_B ψ(x)dµ = lim_{n→∞} ∫_B ϕ_n(x)dµ,

and ψ is called integrable. Observe that

|∫_B ϕ_n(x)dµ − ∫_B ϕ_m(x)dµ| ≤ ∫_B |ϕ_n(x) − ϕ_m(x)|dµ,

such that {∫_B ϕ_n(x)dµ} is a Cauchy sequence; therefore lim_{n→∞} ∫_B ϕ_n(x)dµ exists. It can be shown that this definition yields a uniquely determined value for the integral, i.e. it cannot happen that a choice of another mean fundamental sequence of simple functions converging a.e. to ψ yields a different value for the integral. The boundedness of B is not absolutely essential here; with a slight modification of the assumption “ϕ_n(x) → ψ(x) a.e.” the integrability of ψ may be defined analogously.

3 The convergence a.e. can be replaced by another type of convergence, which we omit here.

Now it should be obvious that, given a probability space (Ξ, F, P)—assumed to be introduced by a random vector ξ̃ in IR^k—and a function ψ : Ξ → IR, the integral with respect to the probability measure P, denoted by

E_ξ̃ ψ(ξ̃) = ∫_Ξ ψ(ξ)dP,
can be derived exactly as above if we simply replace the measure µ by the probability measure P. Here E refers to expectation and ξ̃ indicates that we are integrating with respect to the probability measure P induced by the random vector ξ̃.

Finally, we recall that in probability theory the probability measure P of a probability space (Ξ, F, P) in IR^k is equivalently described by the distribution function F_ξ̃ defined by

F_ξ̃(x) = P({ξ | ξ ≤ x}), x ∈ IR^k.

If there exists a function f_ξ̃ : Ξ → IR such that the distribution function can be represented by an integral with respect to the natural measure µ as

F_ξ̃(x̂) = ∫_{x≤x̂} f_ξ̃(x)dµ, x̂ ∈ IR^k,

then f_ξ̃ is called the density function of P. In this case the distribution function is said to be of continuous type. It follows that for any event A ∈ F we have P(A) = ∫_A f_ξ̃(x)dµ. This implies in particular that for any A ∈ F such that µ(A) = 0 also P(A) = 0 has to hold. This fact is referred to by saying that the probability measure P is absolutely continuous with respect to the natural measure µ. It can be shown that the reverse statement is also true: given a probability space (Ξ, F, P) in IR^k with P absolutely continuous with respect to µ (i.e. every event A ∈ F with natural measure µ(A) = 0 also has probability zero), there exists a density function f_ξ̃ for P.

1.4.2 Deterministic Equivalents

Let us now come back to deterministic equivalents for (4.1). For instance, in analogy to the particular stochastic linear program with recourse (3.10), for problem (4.1) we may proceed as follows. With
g_i^+(x, ξ) = { 0          if g_i(x, ξ) ≤ 0,
             { g_i(x, ξ)  otherwise,

the ith constraint of (4.1) is violated if and only if g_i^+(x, ξ) > 0 for a given decision x and realization ξ of ξ̃. Hence we could provide for each constraint a recourse or second-stage activity y_i(ξ) that, after observing the realization ξ, is chosen such as to compensate its constraint’s violation—if there is one—by satisfying g_i(x, ξ) − y_i(ξ) ≤ 0. This extra effort is assumed to cause an extra cost or penalty of q_i per unit, i.e. our additional costs (called the recourse function) amount to

Q(x, ξ) = min_y { Σ_{i=1}^m q_i y_i(ξ) | y_i(ξ) ≥ g_i^+(x, ξ), i = 1, ···, m },   (4.7)

yielding a total cost—first-stage and recourse cost—of

f_0(x, ξ) = g_0(x, ξ) + Q(x, ξ).   (4.8)

Instead of (4.7), we might think of a more general linear recourse program with a recourse vector y(ξ) ∈ Y ⊂ IR^n̄ (Y is some given polyhedral set, such as {y | y ≥ 0}), an arbitrary fixed m × n̄ matrix W (the recourse matrix) and a corresponding unit cost vector q ∈ IR^n̄, yielding for (4.8) the recourse function

Q(x, ξ) = min_y { q^T y | W y ≥ g^+(x, ξ), y ∈ Y },   (4.9)

where g^+(x, ξ) = (g_1^+(x, ξ), ···, g_m^+(x, ξ))^T.

If we think of a factory producing m products, g_i(x, ξ) could be understood as the difference {demand} − {output} of product i. Then g_i^+(x, ξ) > 0 means that there is a shortage of product i relative to the demand. Assuming that the factory is committed to cover the demands, problem (4.7) could for instance be interpreted as buying the shortage of products at the market. Problem (4.9) instead could result from a second-stage or emergency production program, carried through with the factor input y and a technology represented by the matrix W. Choosing W = I, the m × m identity matrix, (4.7) turns out to be a special case of (4.9). Finally, we could also think of a nonlinear recourse program to define the recourse function for (4.8); for instance, Q(x, ξ) could be chosen as

Q(x, ξ) = min{ q(y) | H_i(y) ≥ g_i^+(x, ξ), i = 1, ···, m; y ∈ Y ⊂ IR^n̄ },   (4.10)

where q : IR^n̄ → IR and H_i : IR^n̄ → IR are supposed to be given. In any case, if it is meaningful and acceptable to the decision maker to minimize the expected value of the total costs (i.e. first-stage and recourse costs), instead of problem (4.1) we could consider its deterministic equivalent, the (two-stage) stochastic program with recourse
min_{x∈X} E_ξ̃ f_0(x, ξ̃) = min_{x∈X} E_ξ̃ { g_0(x, ξ̃) + Q(x, ξ̃) }.   (4.11)

The above two-stage problem is immediately extended to the multistage recourse program as follows: instead of the two decisions x and y, to be taken at stages 1 and 2, we are now faced with K + 1 sequential decisions x_0, x_1, ···, x_K (x_τ ∈ IR^{n̄_τ}), to be taken at the subsequent stages τ = 0, 1, ···, K. The term “stages” can, but need not, be interpreted as “time periods”. Assume for simplicity that the objective of (4.1) is deterministic, i.e. g_0(x, ξ) ≡ g_0(x_0). At stage τ (τ ≥ 1) we know the realizations ξ_1, ···, ξ_τ of the random vectors ξ̃_1, ···, ξ̃_τ as well as the previous decisions x_0, ···, x_{τ−1}, and we have to decide on x_τ such that the constraint(s) (with vector-valued constraint functions g_τ)

g_τ(x_0, ···, x_τ, ξ_1, ···, ξ_τ) ≤ 0

are satisfied, which—as stated—at this stage can only be achieved by the proper choice of x_τ, based on the knowledge of the previous decisions and realizations. Hence, assuming a cost function q_τ(x_τ), at stage τ ≥ 1 we have a recourse function

Q_τ(x_0, x_1, ···, x_{τ−1}, ξ_1, ···, ξ_τ) = min_{x_τ} { q_τ(x_τ) | g_τ(x_0, ···, x_τ, ξ_1, ···, ξ_τ) ≤ 0 },

indicating that the optimal recourse action x̂_τ at time τ depends on the previous decisions and the realizations observed until stage τ, i.e.

x̂_τ = x̂_τ(x_0, ···, x_{τ−1}, ξ_1, ···, ξ_τ), τ ≥ 1.

Hence, taking into account the multiple stages, we get as total costs for the multistage problem

f_0(x_0, ξ_1, ···, ξ_K) = g_0(x_0) + Σ_{τ=1}^K Q_τ(x_0, x̂_1, ···, x̂_{τ−1}, ξ_1, ···, ξ_τ),   (4.12)

yielding the deterministic equivalent for the described dynamic decision problem, the multistage stochastic program with recourse
min_{x_0∈X} [ g_0(x_0) + Σ_{τ=1}^K E_{ξ̃_1,···,ξ̃_τ} Q_τ(x_0, x̂_1, ···, x̂_{τ−1}, ξ̃_1, ···, ξ̃_τ) ],   (4.13)

obviously a straight generalization of our former (two-stage) stochastic program with recourse (4.11).

For the two-stage case, in view of their practical relevance it is worthwhile to describe briefly some variants of recourse problems in the stochastic linear programming setting. Assume that we are given the following stochastic linear program:

“min” c^T x
s.t.  Ax = b,
      T(ξ̃)x = h(ξ̃),
      x ≥ 0.   (4.14)

Comparing this with the general stochastic program (4.1), we see that the set X ⊂ IR^n is specified as

X = { x ∈ IR^n | Ax = b, x ≥ 0 },

where the m_0 × n matrix A and the vector b are assumed to be deterministic. In contrast, the m_1 × n matrix T(·) and the vector h(·) are allowed to depend on the random vector ξ̃, and therefore to have random entries themselves. In general, we assume that this dependence on ξ ∈ Ξ ⊂ IR^k is given as

T(ξ) = T̂^0 + ξ_1 T̂^1 + ··· + ξ_k T̂^k,
h(ξ) = ĥ^0 + ξ_1 ĥ^1 + ··· + ξ_k ĥ^k,   (4.15)

with deterministic matrices T̂^0, ···, T̂^k and vectors ĥ^0, ···, ĥ^k. Observing that the stochastic constraints in (4.14) are equalities (instead of inequalities, as in the general problem formulation (4.1)), it seems meaningful to equate their deficiencies, which, using linear recourse and assuming that Y = {y ∈ IR^n̄ | y ≥ 0}, according to (4.9) yields the stochastic linear program with fixed recourse

min_x E_ξ̃ { c^T x + Q(x, ξ̃) }
s.t.  Ax = b,
      x ≥ 0,
where Q(x, ξ) = min { q^T y | W y = h(ξ) − T(ξ)x, y ≥ 0 }.   (4.16)

In particular, we speak of complete fixed recourse if the fixed m_1 × n̄ recourse matrix W satisfies

{ z | z = W y, y ≥ 0 } = IR^{m_1}.   (4.17)

This implies that, whatever the first-stage decision x and the realization ξ of ξ̃ turn out to be, the second-stage program

Q(x, ξ) = min { q^T y | W y = h(ξ) − T(ξ)x, y ≥ 0 }

will always be feasible.
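To make (4.16) concrete, the minimal sketch below evaluates the expected recourse cost for a small instance with a finite distribution. All numbers are invented for illustration, and it uses the particular complete recourse matrix W = (I, −I) (discussed as “simple recourse” just below), for which the equality-constrained second stage has a closed-form solution whenever both penalty vectors are nonnegative:

```python
# Second stage for W = (I, -I): y = (y_plus, y_minus) >= 0 with
# y_plus - y_minus = h - T x.  For q_plus, q_minus >= 0 the cheapest
# choice is y_plus = max(h - Tx, 0), y_minus = max(Tx - h, 0).

def recourse_cost(x, T, h, q_plus, q_minus):
    """Q(x, xi) for simple recourse, solved in closed form."""
    m = len(h)
    Tx = [sum(T[i][j] * x[j] for j in range(len(x))) for i in range(m)]
    shortage = [max(h[i] - Tx[i], 0.0) for i in range(m)]
    surplus = [max(Tx[i] - h[i], 0.0) for i in range(m)]
    return sum(q_plus[i] * shortage[i] + q_minus[i] * surplus[i]
               for i in range(m))

# Hypothetical data: one first-stage variable, two scenarios for h(xi).
T = [[1.0], [2.0]]
scenarios = [([3.0, 2.0], 0.5), ([1.0, 6.0], 0.5)]   # (h(xi), probability)
q_plus, q_minus = [10.0, 10.0], [1.0, 1.0]

def expected_recourse(x):
    """E Q(x, xi) over the finite scenario set."""
    return sum(p * recourse_cost(x, T, h, q_plus, q_minus)
               for h, p in scenarios)

print(expected_recourse([2.0]))  # 0.5 * 12 + 0.5 * 21 = 16.5
```

Replacing the closed form by a general LP solver for W y = h − Tx, y ≥ 0 gives the same structure for an arbitrary fixed W.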
A special case of complete fixed recourse is simple recourse, where, with the identity matrix I of order m_1, W = (I, −I). Then the second-stage program reads as

Q(x, ξ) = min { (q^+)^T y^+ + (q^−)^T y^− | y^+ − y^− = h(ξ) − T(ξ)x, y^+ ≥ 0, y^− ≥ 0 },   (4.18)

i.e., for q^+ + q^− ≥ 0, the recourse variables y^+ and y^− can be chosen to measure (positively) the absolute deficiencies in the stochastic constraints.

Generally, we may put all the above problems into the following form:

min E_ξ̃ f_0(x, ξ̃)
s.t.  E_ξ̃ f_i(x, ξ̃) ≤ 0, i = 1, ···, s,
      E_ξ̃ f_i(x, ξ̃) = 0, i = s + 1, ···, m̄,
      x ∈ X ⊂ IR^n,   (4.19)

where the f_i are constructed from the objective and the constraints in (4.1) or (4.14) respectively. So far, f_0 has represented the total costs (see (4.8) or (4.12)) and f_1, ···, f_m̄ could be used to describe the first-stage feasible set X. However, depending on the way the functions f_i are derived from the problem functions g_j in (4.1), this general formulation also includes other types of deterministic equivalents for the stochastic program (4.1).

To give just two examples showing how other deterministic equivalent problems for (4.1) may be generated, let us first choose α ∈ [0, 1] and define a “payoff” function for all constraints as

ϕ(x, ξ) := { 1 − α  if g_i(x, ξ) ≤ 0, i = 1, ···, m,
           { −α     otherwise.

Consequently, for x infeasible at ξ we have an absolute loss of α, whereas for x feasible at ξ we have a return of 1 − α. It seems natural to aim for decisions on x that, at least in the mean (i.e. on average), avoid an absolute loss. This is equivalent to the requirement

E_ξ̃ ϕ(x, ξ̃) = ∫_Ξ ϕ(x, ξ)dP ≥ 0.

Defining f_0(x, ξ) = g_0(x, ξ) and f_1(x, ξ) := −ϕ(x, ξ), we get

f_0(x, ξ) = g_0(x, ξ),
f_1(x, ξ) = { α − 1  if g_i(x, ξ) ≤ 0, i = 1, ···, m,
            { α      otherwise,   (4.20)

implying

E_ξ̃ f_1(x, ξ̃) = −E_ξ̃ ϕ(x, ξ̃) ≤ 0,

where, with the vector-valued function g(x, ξ) = (g_1(x, ξ), ···, g_m(x, ξ))^T,

E_ξ̃ f_1(x, ξ̃) = ∫_Ξ f_1(x, ξ)dP
             = ∫_{g(x,ξ)≤0} (α − 1)dP + ∫_{g(x,ξ)≰0} α dP
             = (α − 1)P({ξ | g(x, ξ) ≤ 0}) + αP({ξ | g(x, ξ) ≰ 0})
             = α [P({ξ | g(x, ξ) ≤ 0}) + P({ξ | g(x, ξ) ≰ 0})] − P({ξ | g(x, ξ) ≤ 0})
             = α − P({ξ | g(x, ξ) ≤ 0}).

Therefore the constraint E_ξ̃ f_1(x, ξ̃) ≤ 0 is equivalent to P({ξ | g(x, ξ) ≤ 0}) ≥ α. Hence, under these assumptions, (4.19) reads as

min_{x∈X} E_ξ̃ g_0(x, ξ̃)
s.t. P({ξ | g_i(x, ξ) ≤ 0, i = 1, ···, m}) ≥ α.   (4.21)

Problem (4.21) is called a probabilistically constrained or chance-constrained program (or a problem with joint probabilistic constraints). If instead of (4.20) we define α_i ∈ [0, 1], i = 1, ···, m, and analogous “payoffs” for every single constraint, resulting in

f_0(x, ξ) = g_0(x, ξ),
f_i(x, ξ) = { α_i − 1  if g_i(x, ξ) ≤ 0,
            { α_i      otherwise,   i = 1, ···, m,

then we get from (4.19) the problem with single (or separate) probabilistic constraints:

min_{x∈X} E_ξ̃ g_0(x, ξ̃)
s.t. P({ξ | g_i(x, ξ) ≤ 0}) ≥ α_i, i = 1, ···, m.   (4.22)

If, in particular, the functions g_i(x, ξ) are linear in x, and if furthermore the set X is convex polyhedral, i.e. we have the stochastic linear program

“min” c^T(ξ̃)x
s.t.  Ax = b,
      T(ξ̃)x ≥ h(ξ̃),
      x ≥ 0,

then problems (4.21) and (4.22) become

min_{x∈X} E_ξ̃ c^T(ξ̃)x
s.t. P({ξ | T(ξ)x ≥ h(ξ)}) ≥ α,   (4.23)

and, with T_i(·) and h_i(·) denoting the ith row and ith component of T(·) and h(·) respectively,

min_{x∈X} E_ξ̃ c^T(ξ̃)x
s.t. P({ξ | T_i(ξ)x ≥ h_i(ξ)}) ≥ α_i, i = 1, ···, m,   (4.24)

the stochastic linear programs with joint and with single chance constraints respectively. Obviously there are many other possibilities to generate types of deterministic equivalents for (4.1) by constructing the f_i in different ways out of the objective and the constraints of (4.1). Formally, all the problems derived, i.e. all the above deterministic equivalents, are mathematical programs.
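The identity E_ξ̃ f_1(x, ξ̃) = α − P({ξ | g(x, ξ) ≤ 0}) derived above is easy to verify numerically for a finite distribution. The sketch below uses invented data and a single scalar constraint g(x, ξ) = ξ − x:

```python
# Finite distribution for xi: (realization, probability) pairs.
dist = [(1.0, 0.5), (2.0, 0.3), (4.0, 0.2)]
alpha = 0.8

def g(x, xi):
    return xi - x          # g(x, xi) <= 0 means x >= xi

def prob_feasible(x):
    """P({xi | g(x, xi) <= 0})."""
    return sum(p for xi, p in dist if g(x, xi) <= 0)

def expected_f1(x):
    """E f1 with f1 = alpha - 1 on {g <= 0} and f1 = alpha otherwise."""
    return sum(p * ((alpha - 1) if g(x, xi) <= 0 else alpha)
               for xi, p in dist)

x = 2.5
# Chance constraint P(g(x, xi) <= 0) >= alpha holds iff E f1 <= 0.
print(prob_feasible(x), expected_f1(x))
```

At x = 2.5 both sides of the identity give 0, so x sits exactly on the boundary of the feasible set B(α) for α = 0.8.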
The first question is whether, or under which assumptions, they have properties like convexity and smoothness, so that we have a reasonable chance to deal with them computationally using the toolkit of mathematical programming methods.

1.5 Properties of Recourse Problems

Convexity may be shown easily for the recourse problem (4.11) under rather mild assumptions (given the integrability of g_0 + Q).

Proposition 1.1 If g_0(·, ξ) and Q(·, ξ) are convex in x ∀ξ ∈ Ξ, and if X is a convex set, then (4.11) is a convex program.

Proof For x̂, x̄ ∈ X, λ ∈ (0, 1) and x̌ := λx̂ + (1 − λ)x̄ we have

g_0(x̌, ξ) + Q(x̌, ξ) ≤ λ[g_0(x̂, ξ) + Q(x̂, ξ)] + (1 − λ)[g_0(x̄, ξ) + Q(x̄, ξ)] ∀ξ ∈ Ξ,

implying

E_ξ̃ {g_0(x̌, ξ̃) + Q(x̌, ξ̃)} ≤ λ E_ξ̃ {g_0(x̂, ξ̃) + Q(x̂, ξ̃)} + (1 − λ) E_ξ̃ {g_0(x̄, ξ̃) + Q(x̄, ξ̃)}.   2
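Proposition 1.1 can be spot-checked numerically (a check, not a proof). For the simple scalar recourse model Q(x, ξ) = q·max(ξ − x, 0) with an invented finite distribution, midpoint convexity of the expected recourse cost holds at every sampled pair:

```python
# Expected recourse Q(x) = E[ q * max(xi - x, 0) ] over a finite distribution.
dist = [(1.0, 0.25), (2.0, 0.5), (3.0, 0.25)]
q = 4.0

def expected_Q(x):
    return sum(p * q * max(xi - x, 0.0) for xi, p in dist)

# Midpoint convexity: Q((a+b)/2) <= (Q(a) + Q(b)) / 2 for sample pairs.
pairs = [(0.0, 3.0), (1.0, 2.5), (-1.0, 4.0)]
checks = [expected_Q((a + b) / 2) <= (expected_Q(a) + expected_Q(b)) / 2 + 1e-12
          for a, b in pairs]
print(all(checks))
```

Here Q(·, ξ) is convex in x for every ξ (a maximum of affine functions), so by the proposition the expectation is convex as well.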
Remark 1.1 Observe that for Y = IR^n̄_+ the convexity of Q(·, ξ) can immediately be asserted for the linear case (4.16), and that it also holds for the nonlinear case (4.10) if the functions q(·) and g_i(·, ξ) are convex and the H_i(·) are concave. Just to sketch the argument, assume that ȳ and y̌ solve (4.10) for x̄ and x̌ respectively, at some realization ξ ∈ Ξ. Then, by the convexity of the g_i and the concavity of the H_i, i = 1, ···, m, we have, for any λ ∈ (0, 1),

g_i(λx̄ + (1 − λ)x̌, ξ) ≤ λ g_i(x̄, ξ) + (1 − λ) g_i(x̌, ξ)
                       ≤ λ H_i(ȳ) + (1 − λ) H_i(y̌)
                       ≤ H_i(λȳ + (1 − λ)y̌).

Hence ŷ = λȳ + (1 − λ)y̌ is feasible in (4.10) for x̂ = λx̄ + (1 − λ)x̌, and therefore, by the convexity of q,

Q(x̂, ξ) ≤ q(ŷ) ≤ λ q(ȳ) + (1 − λ) q(y̌) = λ Q(x̄, ξ) + (1 − λ) Q(x̌, ξ).   2

Smoothness (i.e. partial differentiability of Q(x) = ∫_Ξ Q(x, ξ)dP) of recourse problems may also be asserted under fairly general conditions. For example, suppose that ϕ : IR² → IR, so that ϕ(x, y) ∈ IR. Recalling that ϕ is partially differentiable at some point (x̂, ŷ) with respect to x, this means that there exists a function, called the partial derivative and denoted by ∂ϕ(x, y)/∂x, such that

[ϕ(x̂ + h, ŷ) − ϕ(x̂, ŷ)]/h = ∂ϕ(x̂, ŷ)/∂x + r(x̂, ŷ; h)/h,

where the “residuum” r satisfies r(x̂, ŷ; h)/h → 0 as h → 0. The recourse function is partially differentiable with respect to x_j in (x̂, ξ) if there is a function ∂Q(x, ξ)/∂x_j such that

[Q(x̂ + h e_j, ξ) − Q(x̂, ξ)]/h = ∂Q(x̂, ξ)/∂x_j + ρ_j(x̂, ξ; h)/h

with ρ_j(x̂, ξ; h)/h → 0 as h → 0, where e_j is the jth unit vector. The vector (∂Q(x, ξ)/∂x_1, ···, ∂Q(x, ξ)/∂x_n)^T is called the gradient of Q(x, ξ) with respect to x and is denoted by ∇_x Q(x, ξ). Now we are not only interested in the partial differentiability of the recourse function Q(x, ξ) but also in that of the expected recourse function Q(x). Provided that Q(x, ξ̃) is partially differentiable at x̂ a.s., we get

[Q(x̂ + h e_j) − Q(x̂)]/h = ∫_Ξ [Q(x̂ + h e_j, ξ) − Q(x̂, ξ)]/h dP
                        = ∫_{Ξ−N_δ} [∂Q(x̂, ξ)/∂x_j + ρ_j(x̂, ξ; h)/h] dP
                        = ∫_{Ξ−N_δ} ∂Q(x̂, ξ)/∂x_j dP + ∫_{Ξ−N_δ} ρ_j(x̂, ξ; h)/h dP,

where N_δ ∈ F and P(N_δ) = 0. Hence, under these assumptions, Q is partially differentiable if

∫_{Ξ−N_δ} ∂Q(x̂, ξ)/∂x_j dP exists and (1/h) ∫_{Ξ−N_δ} ρ_j(x̂, ξ; h)dP → 0 as h → 0.

This yields the following.

Proposition 1.2 If Q(x, ξ̃) is partially differentiable with respect to x_j at some x̂ a.s. (i.e. for all ξ except maybe those belonging to an event with probability zero), if its partial derivative ∂Q(x̂, ξ)/∂x_j is integrable, and if the residuum satisfies (1/h) ∫_Ξ ρ_j(x̂, ξ; h)dP → 0 as h → 0, then ∂Q(x̂)/∂x_j exists as well and

∂Q(x̂)/∂x_j = ∫_Ξ ∂Q(x̂, ξ)/∂x_j dP.

Questions arise as a result of the general formulation of the assumptions of this proposition. It is often possible to decide that the recourse function is partially differentiable a.s. and the partial derivative is integrable. However, the requirement that the residuum is integrable and—roughly speaking—its integral converges to zero faster than h can be difficult to check. Hence we leave the general case and focus on stochastic linear programs with complete fixed recourse (4.16) in the following remark.

Remark 1.2 In the linear case (4.16) with complete fixed recourse it is known from linear programming (see Section 1.7) that the optimal value function Q(x, ξ) is continuous and piecewise linear in h(ξ) − T(ξ)x. In other words, there exist finitely many convex polyhedral cones B_l ⊂ IR^{m_1} with nonempty interiors such that any two of them have at most boundary points in common, ∪_l B_l = IR^{m_1}, and Q(x, ξ) is given as

Q(x, ξ) = d^{lT}(h(ξ) − T(ξ)x) + δ_l for h(ξ) − T(ξ)x ∈ B_l.

Figure 13 Linear affine mapping of a polyhedron.

Then, for h(ξ) − T(ξ)x ∈ int B_l (i.e. for h(ξ) − T(ξ)x an interior point of B_l), the function Q(x, ξ) is partially differentiable with respect to any component of x. Hence for the gradient with respect to x we get from the chain rule that

∇_x Q(x, ξ) = −T^T(ξ) d^l for h(ξ) − T(ξ)x ∈ int B_l.

Assume for simplicity that Ξ is a bounded interval in IR^k and keep x fixed. Then, by (4.15), we have a linear affine mapping

ψ(·) := h(·) − T(·)x : Ξ → IR^{m_1}.
Therefore the sets

D̂_l(x) = ψ^{-1}[B_l] := {ξ ∈ Ξ | ψ(ξ) ∈ B_l}

are convex polyhedra (see Figure 13) satisfying ∪_l D̂_l(x) = Ξ. Define D_l(x) := int D̂_l(x). To get the intended differentiability result, the following assumption is crucial:

ξ ∈ D_l(x) ⟹ ψ(ξ) = h(ξ) − T(ξ)x ∈ int B_l ∀l.   (5.1)

By this assumption we enforce the event {ξ ∈ Ξ | ψ(ξ) ∈ B_l − int B_l} to have the natural measure µ({ξ ∈ Ξ | ψ(ξ) ∈ B_l − int B_l}) = 0, which need not be true in general, as illustrated in Figure 14.

Figure 14 Linear mapping violating assumption (5.1).

Since the B_l are convex polyhedral cones in IR^{m_1} (see Section 1.7) with nonempty interiors, they may be represented by inequality systems C^l z ≤ 0, where C^l ≠ 0 is an appropriate matrix with no row equal to zero. Fix l and let ξ ∈ D_l(x), such that, by (5.1), h(ξ) − T(ξ)x ∈ int B_l. Then C^l[h(ξ) − T(ξ)x] < 0, i.e. for any fixed j there exists a τ̂_{lj} > 0 such that

C^l[h(ξ) − T(ξ)(x ± τ_{lj} e_j)] ≤ 0,

or, equivalently,

C^l[h(ξ) − T(ξ)x] ≤ ∓τ_{lj} C^l T(ξ)e_j ∀τ_{lj} ∈ [0, τ̂_{lj}].

Hence for γ(ξ) = max_i |(C^l T(ξ)e_j)_i| there is a t_l > 0 such that

C^l[h(ξ) − T(ξ)x] ≤ −t γ(ξ) e ∀t < t_l, e = (1, ···, 1)^T.

This implies that for γ := max_{ξ∈Ξ} γ(ξ) there exists a t_0 > 0 such that

C^l[h(ξ) − T(ξ)x] ≤ −t γ e ∀t < t_0

(choose, for example, t_0 = t_l/γ). In other words, there exists a t_0 > 0 such that

D_l(x; t) := {ξ | C^l[h(ξ) − T(ξ)x] ≤ −t γ e} ≠ ∅ ∀t < t_0,

and obviously D_l(x; t) ⊂ D_l(x). Furthermore, by elementary geometry, the natural measure µ satisfies

µ(D_l(x) − D_l(x; t)) ≤ t v

with some constant v (see Figure 15).

Figure 15 Difference set D_l(x) − D_l(x; t).

For ξ ∈ D_l(x; t) it follows that

C^l[h(ξ) − T(ξ)(x + t e_j)] = C^l[h(ξ) − T(ξ)x] − t C^l T(ξ)e_j ≤ −t γ e − t C^l T(ξ)e_j ≤ 0,

owing to the fact that each component of C^l T(ξ)e_j is absolutely bounded by γ. Hence in this case we have h(ξ) − T(ξ)(x + t e_j) ∈ B_l, and so

[Q(x + t e_j, ξ) − Q(x, ξ)]/t = −d^{lT} T(ξ)e_j = ∂Q(x, ξ)/∂x_j ∀t < t_0,

i.e. in this case we have the residuum ρ_j(x, ξ; t) ≡ 0. For ξ ∈ D_l(x) − D_l(x; t) we have, considering that h(ξ) − T(ξ)(x + t e_j) could possibly belong to some other B_l̄, at least the estimate

|[Q(x + t e_j, ξ) − Q(x, ξ)]/t| ≤ max{ |d^{l̄T} T(ξ)e_j| | ξ ∈ Ξ, ∀l̄ } =: β.

Assuming now that we have a continuous density ϕ(ξ) for P, we know already from (5.1) that µ({ξ ∈ Ξ | ψ(ξ) ∈ B_l − int B_l}) = 0. Hence it follows that

E_ξ̃ Q(x, ξ̃) = Σ_l ∫_{D_l(x)} Q(x, ξ)ϕ(ξ)dξ
            = Σ_l ∫_{D_l(x)} { d^{lT}[h(ξ) − T(ξ)x] + δ_l } ϕ(ξ)dξ,

and, since

| ∫_{D_l(x)−D_l(x;t)} [Q(x + t e_j, ξ) − Q(x, ξ)]/t · ϕ(ξ)dµ | ≤ β max_{ξ∈Ξ} ϕ(ξ) · t v → 0 as t → 0,

∇ E_ξ̃ Q(x, ξ̃) = Σ_l ∫_{D_l(x)} ∇_x Q(x, ξ)ϕ(ξ)dξ = −Σ_l ∫_{D_l(x)} T^T(ξ) d^l ϕ(ξ)dξ.
Hence for the linear case—observing (4.15)—we get the differentiability statement of Proposition 1.2 provided that (5.1) is satisfied and P has a continuous density on Ξ.   2

Summarizing the statements given so far, we see that stochastic programs with recourse are likely to have such properties as convexity (Proposition 1.1) and, given continuous-type distributions, differentiability (Proposition 1.2), which—from the viewpoint of mathematical programming—are appreciated. On the other hand, if we have a joint finite discrete probability distribution {(ξ^k, p_k), k = 1, ···, r} of the random data then, for example, problem (4.16) becomes—similarly to the special example (3.11)—a linear program

min_{x∈X} c^T x + Σ_{k=1}^r p_k q^T y^k
s.t. T(ξ^k)x + W y^k = h(ξ^k), k = 1, ···, r,
     y^k ≥ 0,   (5.2)

having the so-called dual decomposition structure, as mentioned already for our special example (3.11) and demonstrated in Figure 16 (see also Section 1.7.4).
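The differentiability statement can be sanity-checked on a one-dimensional model where everything has a closed form. For Q(x, ξ) = max(ξ − x, 0) with ξ uniform on [0, 1], the expected recourse is Q(x) = (1 − x)²/2 on [0, 1], and ∂Q(x, ξ)/∂x = −1 a.s. on {ξ > x}, so ∂Q/∂x should equal −P(ξ > x) = −(1 − x). A numeric sketch (the quadrature resolution and step size are arbitrary choices):

```python
def Q(x, xi):
    # One-dimensional recourse function; differentiable in x for xi != x.
    return max(xi - x, 0.0)

def expected_Q(x, n=100000):
    # Midpoint quadrature of the integral over xi ~ Uniform(0, 1).
    return sum(Q(x, (i + 0.5) / n) for i in range(n)) / n

x = 0.3
# Finite-difference derivative of the expectation ...
h = 1e-3
fd = (expected_Q(x + h) - expected_Q(x - h)) / (2 * h)
# ... versus the integral of the derivative, -P(xi > x) = -(1 - x).
print(fd, -(1 - x))
```

The two numbers agree to within the quadrature error, illustrating the exchange of differentiation and integration that Proposition 1.2 justifies.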
Figure 16 Dual decomposition data structure.

However—for finite discrete as well as for continuous distributions—we are faced with a further problem, which we might discuss for the linear case (i.e. for stochastic linear programs with fixed recourse (4.16)). By supp P we denote the support of the probability measure P, i.e. the smallest closed set Ξ ⊂ IR^k such that P_ξ̃(Ξ) = 1. With the practical interpretation of the second-stage problem as given, for example, in Section 1.3, and assuming that Ξ = supp P_ξ̃, we should expect that for any first-stage decision x ∈ X the compensation of deficiencies in the stochastic constraints is possible whatever ξ ∈ Ξ will be realized for ξ̃. In other words, we expect the program

Q(x, ξ) = min q^T y
s.t. W y = h(ξ) − T(ξ)x,   (5.3)
     y ≥ 0

to be feasible ∀ξ ∈ Ξ. Depending on the defined recourse matrix W and the given support Ξ, this need not be true for all first-stage decisions x ∈ X. Hence it may become necessary to impose—in addition to x ∈ X—further restrictions on our first-stage decisions, called induced constraints. To be more specific, let us assume that Ξ is a (bounded) convex polyhedron, i.e. the convex hull of finitely many points ξ^j ∈ Ξ ⊂ IR^k:

Ξ = conv{ξ^1, ···, ξ^r}
  = { ξ | ξ = Σ_{j=1}^r λ_j ξ^j, Σ_{j=1}^r λ_j = 1, λ_j ≥ 0 ∀j }.

From the definition of a support it follows that x ∈ IR^n allows for a feasible solution of the second-stage program for all ξ ∈ Ξ if and only if this is true for all ξ^j, j = 1, ···, r. In other words, the induced first-stage feasibility set K is given as

K = { x | T(ξ^j)x + W y^j = h(ξ^j), y^j ≥ 0, j = 1, ···, r }.

From this formulation of K (which obviously also holds if ξ̃ has a finite discrete distribution, i.e. Ξ = {ξ^1, ···, ξ^r}), we evidently get the following.

Proposition 1.3 If the support Ξ of the distribution of ξ̃ is either a finite set or a (bounded) convex polyhedron, then the induced first-stage feasibility set K is a convex polyhedral set. The first-stage decisions are restricted to x ∈ X ∩ K.

Example 1.3 Consider the following first-stage feasible set:

X = { x ∈ IR²_+ | x_1 − 2x_2 ≥ −4, x_1 + 2x_2 ≤ 8, 2x_1 − x_2 ≤ 6 }.

For the second-stage constraints choose

W = ( −1  3  5 ),    T(ξ) ≡ T = ( 2  3 ),
    (  2  2  2 )                ( 3  1 )

and a random vector ξ̃ with the support Ξ = [4, 19] × [13, 21]. Then the constraints to be satisfied for all ξ ∈ Ξ are

W y = ξ − T x, y ≥ 0.

Observing that the second column W_2 of W is a positive linear combination of W_1 and W_3, namely W_2 = (1/3)W_1 + (2/3)W_3, the above second-stage constraints reduce to the requirement that for all ξ ∈ Ξ the right-hand side ξ − T x can be written as

ξ − T x = λW_1 + µW_3, λ, µ ≥ 0,

or in detail as

ξ_1 − 2x_1 − 3x_2 = −λ + 5µ,
ξ_2 − 3x_1 − x_2  =  2λ + 2µ,   λ, µ ≥ 0.

Multiplying this system of equations by the regular matrix

S = (  2  1 ),
    ( −2  5 )

which corresponds to adding 2 times the first equation to the second and adding −2 times the first to 5 times the second, respectively, we get the equivalent system

 2ξ_1 +  ξ_2 −  7x_1 − 7x_2 = 12µ ≥ 0,
−2ξ_1 + 5ξ_2 − 11x_1 +  x_2 = 12λ ≥ 0.

Because of the required nonnegativity of λ and µ, this is equivalent to the system of inequalities

 7x_1 + 7x_2 ≤  2ξ_1 +  ξ_2  (≥ 21 ∀ξ ∈ Ξ),
11x_1 −  x_2 ≤ −2ξ_1 + 5ξ_2  (≥ 27 ∀ξ ∈ Ξ).

Since these inequalities have to be satisfied for all ξ ∈ Ξ, choosing the minimal right-hand sides (over ξ ∈ Ξ) yields the induced constraints as

K = { x | 7x_1 + 7x_2 ≤ 21, 11x_1 − x_2 ≤ 27 }.

The first-stage feasible set X together with the induced feasible set K are illustrated in Figure 17.   2

It might happen that X ∩ K = ∅; then we should check our model very carefully to figure out whether we really modelled what we had in mind, or whether we can find further possibilities for compensation that are not yet contained in our model. On the other hand, we have already mentioned the case of a complete fixed recourse matrix (see (4.17) on page 34), for which K = IR^n and therefore the problem of induced constraints does not exist. Hence it seems interesting to recognize complete recourse matrices.

Figure 17 Induced constraints K.

Proposition 1.4 An m_1 × n̄ matrix W is a complete recourse matrix iff⁴

• it has rank rk(W) = m_1, and,
• assuming without loss of generality that its first m_1 columns W_1, W_2, ···, W_{m_1} are linearly independent, the linear constraints

  W y = 0,
  y_i ≥ 1, i = 1, ···, m_1,   (5.4)
  y ≥ 0

have a feasible solution.

Proof W is a complete recourse matrix iff {z | z = W y, y ≥ 0} = IR^{m_1}. From this condition it follows immediately that rk(W) = m_1 necessarily has to hold. In addition, for ẑ = −Σ_{i=1}^{m_1} W_i ∈ IR^{m_1} the second-stage constraints W y = ẑ, y ≥ 0 have a feasible solution y̌ such that
Σ_{i=1}^{m_1} W_i y̌_i + Σ_{i=m_1+1}^{n̄} W_i y̌_i = ẑ = −Σ_{i=1}^{m_1} W_i, y̌_i ≥ 0, i = 1, ···, n̄.

With

y_i = { y̌_i + 1,  i = 1, ···, m_1,
      { y̌_i,      i > m_1,

this implies that the constraints (5.4) are necessarily feasible.

4 We use “iﬀ” as shorthand for “if and only if”.

To show that the above conditions are also sufficient for complete recourse, let us choose an arbitrary z̄ ∈ IR^{m_1}. Since the columns W_1, ···, W_{m_1} are linearly independent, the system of linear equations

Σ_{i=1}^{m_1} W_i y_i = z̄

has a unique solution ȳ_1, ···, ȳ_{m_1}. If ȳ_i ≥ 0, i = 1, ···, m_1, we are finished; otherwise, we define γ := min{ȳ_1, ···, ȳ_{m_1}}. By assumption, the constraints (5.4) have a feasible solution y̌. Now it is immediate that ŷ defined by

ŷ_i = { ȳ_i − γ y̌_i,  i = 1, ···, m_1,
      { −γ y̌_i,       i = m_1 + 1, ···, n̄,

solves W ŷ = z̄, ŷ ≥ 0.   2

Finally, if (5.3) is feasible for all ξ ∈ Ξ and at least for all x ∈ X = {x | Ax = b, x ≥ 0}, then (4.16) is said to be of relatively complete recourse.

1.6 Properties of Probabilistic Constraints

For chance-constrained problems, the situation becomes more difficult in general. Consider the constraint of (4.21),

P({ξ | g(x, ξ) ≤ 0}) ≥ α,

where the g_i were replaced by the vector-valued function g defined by g(x, ξ) := (g_1(x, ξ), ···, g_m(x, ξ))^T: a point x̂ is feasible iff the set

S(x̂) = {ξ | g(x̂, ξ) ≤ 0}   (6.1)

has a probability measure P(S(x̂)) of at least α. In other words, if G ⊂ F is the collection of all events of F such that P(G) ≥ α ∀G ∈ G, then x̂ is feasible iff we find at least one event G ∈ G such that for all ξ ∈ G, g(x̂, ξ) ≤ 0. Formally, x̂ is feasible iff

∃G ∈ G : x̂ ∈ ∩_{ξ∈G} {x | g(x, ξ) ≤ 0}.   (6.2)

Hence the feasible set

B(α) = { x | P({ξ | g(x, ξ) ≤ 0}) ≥ α }

is the union of all those vectors x feasible according to (6.2), and consequently may be rewritten as

B(α) = ∪_{G∈G} ∩_{ξ∈G} {x | g(x, ξ) ≤ 0}.   (6.3)
G∈G ξ ∈G (6.3) Since a union of convex sets need not be convex, this presentation demonstrates that in general we may not expect B (α) to be convex, even if {x  g (x, ξ ) ≤ 0} are convex ∀ξ ∈ Ξ. Indeed, there are simple examples for nonconvex feasible sets. Example 1.4 Assume that in our reﬁnery problem (3.1) the demands are random with the following discrete joint distribution: P P P Then the constraints xraw1 + xraw2 ≤ 100 xraw1 ≥0 xraw2 ≥ 0 P ˜ 2xraw1 + 6xraw2 ≥ h1 (ξ ) ˜ 3xraw1 + 3xraw2 ≥ h2 (ξ ) ≥α h1 (ξ 1 ) = 160 h2 (ξ 1 ) = 135 h1 (ξ 2 ) = 150 h2 (ξ 2 ) = 195 h1 (ξ 3 ) = 200 h2 (ξ 3 ) = 120 = 0.85, = 0.08, = 0.07. for any α ∈ (0.85, 0.92] require that we • either satisfy the demands hi (ξ 1 ) and hi (ξ 2 ), i = 1, 2 (enforcing a reliability of 93%) and hence choose a production program to cover a demand 160 hA = 195 • or satisfy the demands hi (ξ 1 ) and hi (ξ 3 ), i = 1, 2 (enforcing a reliability of 92%) such that our production plan is designed to cope with the demand 200 hB = . 135 48 STOCHASTIC PROGRAMMING Figure 18 Chance constraints: nonconvex feasible set. It follows that the feasible set for the above constraints is nonconvex, as shown in Figure 18. 2 As above, deﬁne S (x) := {ξ  g (x, ξ ) ≤ 0}. If g (·, ·) is jointly convex in (x, ξ ) x¯ then, with xi ∈ B (α), i = 1, 2, ξ i ∈ S (xi ) and λ ∈ [0, 1], for (¯, ξ ) = λ(x1 , ξ 1 ) + (1 − λ)(x2 , ξ 2 ) it follows that g (¯, ξ ) ≤ λg (x1 , ξ 1 ) + (1 − λ)g (x2 , ξ 2 ) ≤ 0, x¯ ¯ i.e. ξ = λξ 1 + (1 − λ)ξ 2 ∈ S (¯), and hence5 x S (¯) ⊃ [λS (x1 ) + (1 − λ)S (x2 )] x implying P (S (¯)) ≥ P (λS (x1 ) + (1 − λ)S (x2 )). x By our assumption on g (joint convexity), any set S (x) is convex. Now we conclude immediately that B (α) is convex ∀α ∈ [0, 1], if P (λS1 + (1 − λ)S2 ) ≥ min[P (S1 ), P (S2 )] ∀λ ∈ [0, 1] for all convex sets Si ∈ F , i = 1, 2, i.e. if P is quasiconcave. Hence we have proved the following
5 The algebraic sum of sets: ρS_1 + σS_2 := {ξ = ρξ^1 + σξ^2 | ξ^1 ∈ S_1, ξ^2 ∈ S_2}.

Figure 19 Convex combination of events involved by distribution functions, λ = 1/2.

Proposition 1.5 If g(·,·) is jointly convex in (x, ξ) and P is quasiconcave, then the feasible set B(α) = {x | P({ξ | g(x, ξ) ≤ 0}) ≥ α} is convex for all α ∈ [0, 1].

Remark 1.3 The assumption of joint convexity of g(·,·) is so strong that in general it is not satisfied even in the linear case (4.23). However, if in (4.23) T(ξ) ≡ T (constant) and h(ξ) ≡ ξ, then it is satisfied, and the constraints of (4.23), with F_ξ̃ being the distribution function of ξ̃, read as

  P({ξ | Tx ≥ ξ}) = F_ξ̃(Tx) ≥ α.

Therefore B(α) is convex for all α ∈ [0, 1] in this particular case if F_ξ̃ is a quasiconcave function, i.e. if F_ξ̃(λξ^1 + (1 − λ)ξ^2) ≥ min[F_ξ̃(ξ^1), F_ξ̃(ξ^2)] for any two ξ^1, ξ^2 ∈ Ξ and all λ ∈ [0, 1]. □

It seems worthwhile to mention the following facts. If the probability measure P is quasiconcave, then the corresponding distribution function F_ξ̃ is quasiconcave. This follows from observing that, by the definition of distribution functions, F_ξ̃(ξ^i) = P(S_i) with S_i = {ξ | ξ ≤ ξ^i}, i = 1, 2, and that for ξ̂ = λξ^1 + (1 − λ)ξ^2, λ ∈ [0, 1], we have Ŝ = {ξ | ξ ≤ ξ̂} = λS_1 + (1 − λ)S_2 (see Figure 19). With P being quasiconcave, this yields

  F_ξ̃(ξ̂) = P(Ŝ) ≥ min[P(S_1), P(S_2)] = min[F_ξ̃(ξ^1), F_ξ̃(ξ^2)].

On the other hand, F_ξ̃ being a quasiconcave function does not in general imply that the corresponding probability measure P is quasiconcave. For instance, in IR^1 every monotone function is easily seen to be quasiconcave, so that every distribution function of a random variable (always being monotonically increasing) is quasiconcave. But not every probability measure P on IR is quasiconcave (see Figure 20 for a counterexample).

Figure 20 P here is not quasiconcave: P(C) = P(½A + ½B) = 0, but P(A) = P(B) = 1/3.
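The counterexample behind Figure 20 can be sketched numerically. The following is a minimal illustration with data of our own choosing (a two-atom measure on IR; the book's sets A and B may differ): the measure of a convex combination of two intervals drops to zero although both intervals carry positive mass, while the corresponding distribution function, being monotone on IR^1, is still quasiconcave.

```python
from fractions import Fraction as F

# Illustrative two-atom measure on IR (our own data, in the spirit of
# Figure 20): all mass sits in the atoms 0 and 2.
atoms = {F(0): F(1, 2), F(2): F(1, 2)}

def P(interval):
    """Measure of a closed interval [lo, hi]."""
    lo, hi = interval
    return sum(p for a, p in atoms.items() if lo <= a <= hi)

def combine(s1, s2, lam):
    """lam*S1 + (1-lam)*S2 (algebraic sum of scaled intervals)."""
    return (lam * s1[0] + (1 - lam) * s2[0],
            lam * s1[1] + (1 - lam) * s2[1])

A = (F(-1, 4), F(1, 4))       # contains the atom at 0
B = (F(7, 4), F(9, 4))        # contains the atom at 2
C = combine(A, B, F(1, 2))    # = [3/4, 5/4]: contains no atom at all

print(P(A), P(B), P(C))       # P(C) = 0 < min(P(A), P(B)) = 1/2
```

So this P violates quasiconcavity, exactly the phenomenon the figure depicts.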
Hence we stay with the question of when a probability measure—or its distribution function—is quasiconcave. This question was answered first for the subclass of logconcave probability measures, i.e. measures satisfying

  P(λS_1 + (1 − λ)S_2) ≥ P^λ(S_1) P^{1−λ}(S_2)

for all convex S_i ∈ F and λ ∈ [0, 1]. That the class of logconcave measures is really a subclass of the class of quasiconcave measures is easily seen.

Lemma 1.2 If P is a logconcave measure on F, then P is quasiconcave.

Proof Let S_i ∈ F, i = 1, 2, be convex sets such that P(S_i) > 0, i = 1, 2 (otherwise there is nothing to prove, since P(S) ≥ 0 for all S ∈ F). By assumption, for any λ ∈ (0, 1) we have P(λS_1 + (1 − λ)S_2) ≥ P^λ(S_1) P^{1−λ}(S_2). By the monotonicity of the logarithm, it follows that

  ln[P(λS_1 + (1 − λ)S_2)] ≥ λ ln[P(S_1)] + (1 − λ) ln[P(S_2)] ≥ min{ln[P(S_1)], ln[P(S_2)]},

and hence P(λS_1 + (1 − λ)S_2) ≥ min[P(S_1), P(S_2)]. □

As mentioned above, for the logconcave case necessary and sufficient conditions were derived first, and later corresponding conditions for quasiconcave measures were found.

Proposition 1.6 Let P on Ξ = IR^k be of the continuous type, i.e. have a density f. Then the following statements hold:

• P is logconcave iff f is logconcave (i.e. if the logarithm of f is a concave function);
• P is quasiconcave iff f^{−1/k} is convex.

The proof has to be omitted here, since it would require a rather advanced knowledge of measure theory.

Remark 1.4 Consider

(a) the k-dimensional uniform distribution on a convex body S ⊂ IR^k (with positive natural measure µ), given by the density

  φ_U(x) := 1/µ(S) if x ∈ S,  0 otherwise

(µ is the natural measure in IR^k; see Section 1.4.1);

(b) the exponential distribution with density

  φ_EXP(x) := 0 if x < 0,  λe^{−λx} if x ≥ 0

(λ > 0 is constant);

(c) the multivariate normal distribution in IR^k described by the density

  φ_N(x) := γ e^{−(1/2)(x−m)^T Σ^{−1} (x−m)}

(γ > 0 is constant, m is the vector of expected values and Σ is the covariance matrix).

Then we get immediately:

(a) Since

  φ_U^{−1/k}(x) = µ(S)^{1/k} if x ∈ S,  ∞ otherwise,

Proposition 1.6 implies that the corresponding probability measure P_U is quasiconcave.

(b) Since

  ln[φ_EXP(x)] = −∞ if x < 0,  ln λ − λx if x ≥ 0,

the density of the exponential distribution is obviously logconcave, implying by Proposition 1.6 that the corresponding measure P_EXP is logconcave and hence, by Lemma 1.2, also quasiconcave.

(c) Taking the logarithm,

  ln[φ_N(x)] = ln γ − (1/2)(x − m)^T Σ^{−1} (x − m),

and observing that the covariance matrix Σ, and hence its inverse Σ^{−1}, is positive definite, we see that this density is logconcave, and therefore the corresponding measure P_N is logconcave (by Proposition 1.6) as well as quasiconcave (by Lemma 1.2).

There are many other classes of widely used continuous-type probability measures which—according to Proposition 1.6—are either logconcave or at least quasiconcave. □

In addition to Proposition 1.5, we have the following statement, which is of interest because, for mathematical programs in general, we cannot assert the existence of solutions if the feasible sets are not known to be closed.

Proposition 1.7 If g : IR^n × Ξ → IR^m is continuous, then the feasible set B(α) is closed.

Proof Consider any sequence {x^ν} such that x^ν → x̂ and x^ν ∈ B(α) for all ν. To prove the assertion, we have to show that x̂ ∈ B(α). Define A(x) := {ξ | g(x, ξ) ≤ 0}. Let V_k be the open ball with center x̂ and radius 1/k. Then we show first that

  A(x̂) = ⋂_{k=1}^{∞} cl ⋃_{x∈V_k} A(x).   (6.4)

Here the inclusion “⊂” is obvious, since x̂ ∈ V_k for all k, so we only have to show that

  A(x̂) ⊃ ⋂_{k=1}^{∞} cl ⋃_{x∈V_k} A(x).

Assume that ξ̂ ∈ ⋂_{k=1}^{∞} cl ⋃_{x∈V_k} A(x). This means that for every k we have ξ̂ ∈ cl ⋃_{x∈V_k} A(x); in other words, for every k there exists a ξ^k ∈ ⋃_{x∈V_k} A(x), and hence some x^k ∈ V_k with ξ^k ∈ A(x^k), such that ‖ξ^k − ξ̂‖ ≤ 1/k (and obviously ‖x^k − x̂‖ ≤ 1/k, since x^k ∈ V_k). Hence (x^k, ξ^k) → (x̂, ξ̂). Since ξ^k ∈ A(x^k), we have g(x^k, ξ^k) ≤ 0 for all k and therefore, by the continuity of g(·,·), ξ̂ ∈ A(x̂), which proves (6.4) to be true.

The sequence of sets

  B_K := ⋂_{k=1}^{K} cl ⋃_{x∈V_k} A(x)

is monotonically decreasing to the set A(x̂). Since x^ν → x̂, for every K there exists a ν_K such that x^{ν_K} ∈ V_K ⊂ V_{K−1} ⊂ ··· ⊂ V_1, implying that A(x^{ν_K}) ⊂ B_K and hence P(B_K) ≥ P(A(x^{ν_K})) ≥ α for all K. Hence, by the well-known continuity of probability measures on monotone sequences, we have P(A(x̂)) ≥ α, i.e. x̂ ∈ B(α). □

For stochastic programs with joint chance constraints, the situation appears to be more difficult than for stochastic programs with recourse. But, at least under certain additional assumptions, we may assert convexity and closedness of the feasible sets as well (Proposition 1.5, Remark 1.3 and Proposition 1.7). For stochastic linear programs with single chance constraints, convexity statements have been derived without the joint convexity assumption on g_i(x, ξ) := h_i(ξ) − T_i(ξ)x, for special distributions and special intervals for the values of α_i. In particular, if T_i(ξ) ≡ T_i (constant), the situation becomes rather convenient: with F_i the distribution function of h_i(ξ̃), we have

  P({ξ | T_i x ≥ h_i(ξ)}) = F_i(T_i x) ≥ α_i,

or equivalently T_i x ≥ F_i^{−1}(α_i), where F_i^{−1}(α_i) is taken to be the smallest real value η such that F_i(η) ≥ α_i. Hence in this special case any single chance constraint turns out to be just a linear constraint, and the only additional work to do is to compute F_i^{−1}(α_i).

1.7 Linear Programming

Throughout this section we shall discuss linear programs in the following standard form:

  min c^T x  s.t. Ax = b, x ≥ 0,   (7.1)

where the vectors c ∈ IR^n, b ∈ IR^m and the m × n matrix A are given, and x ∈ IR^n is to be determined. Any other LP^6 formulation can easily be
6 We occasionally use “LP” as an abbreviation for “linear program(ming)”.

transformed to assume the form (7.1). If, for instance, we have the problem

  min c^T x  s.t. Ax ≥ b, x ≥ 0,

then, by introducing a vector y ∈ IR^m_+ of slack variables, we get the problem

  min c^T x  s.t. Ax − y = b, x ≥ 0, y ≥ 0,

which is of the form (7.1). This LP is equivalent to the original one in the sense that the x part of its solution set coincides with the solution set of the original problem, and the two optimal values obviously coincide as well. Instead, we may have the problem

  min c^T x  s.t. Ax ≥ b,

where the decision variables are not required to be nonnegative—so-called free variables. In this case we may introduce a vector y ∈ IR^m_+ of slack variables and—observing that any real number may be represented as the difference of two nonnegative numbers—replace the original decision vector x by the difference z^+ − z^− of the new decision vectors z^+, z^− ∈ IR^n_+, yielding the problem

  min {c^T z^+ − c^T z^−}  s.t. Az^+ − Az^− − y = b, z^+ ≥ 0, z^− ≥ 0, y ≥ 0,

which is again of the form (7.1). Furthermore, it is easily seen that this transformed LP and its original formulation are equivalent in the sense that

• given any solution (ẑ^+, ẑ^−, ŷ) of the transformed LP, x̂ := ẑ^+ − ẑ^− is a solution of the original version;
• given any solution x̌ of the original LP, the vectors y̌ := Ax̌ − b and ž^+, ž^− ∈ IR^n_+, chosen such that ž^+ − ž^− = x̌, solve the transformed version;

and the optimal values of both versions of the LP coincide.

1.7.1 The Feasible Set and Solvability

From linear algebra we know that the system Ax = b of linear equations in (7.1) is solvable if and only if the rank condition

  rk(A, b) = rk(A)   (7.2)

is satisfied. Given this condition, it may happen that rk(A) < m, but then we may drop one or more equations from the system without changing its solution set. Therefore we assume throughout this section that rk(A) = m, which obviously implies that m ≤ n.
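The transformations above can be checked on a tiny instance. This is a sketch with invented data: a point feasible for the free-variable problem min c^T x s.t. Ax ≥ b is mapped, via the splitting x = z^+ − z^− and slack variables y, to a feasible point of the standard form (7.1) with the same objective value.

```python
# Invented instance: min c^T x  s.t.  A x >= b  with free x.
c = [1, 2]
A = [[1, 1], [1, -1]]
b = [2, 0]

x = [3, -1]                                    # feasible: A x = (2, 4) >= b
zp = [max(v, 0) for v in x]                    # z+ = (3, 0)
zm = [max(-v, 0) for v in x]                   # z- = (0, 1)
y = [sum(a * v for a, v in zip(row, x)) - bi   # slacks y = A x - b = (0, 4)
     for row, bi in zip(A, b)]

# standard-form constraints A z+ - A z- - y = b, all variables >= 0:
for row, yi, bi in zip(A, y, b):
    assert sum(a * (p - q) for a, p, q in zip(row, zp, zm)) - yi == bi
assert all(v >= 0 for v in zp + zm + y)

# the two objective values coincide:
obj_std = (sum(ci * p for ci, p in zip(c, zp))
           - sum(ci * q for ci, q in zip(c, zm)))
assert obj_std == sum(ci * v for ci, v in zip(c, x))
```

The same bookkeeping works for any split ž^+ − ž^− = x̌; taking the positive and negative parts, as here, is simply the most economical choice.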
Let us now investigate the feasible set B := {x | Ax = b, x ≥ 0} of (7.1). A central concept in linear programming is that of a feasible basic solution, defined as follows: x̂ ∈ B is a feasible basic solution if, with I(x̂) := {i | x̂_i > 0}, the set {A_i | i ∈ I(x̂)} of columns of A is linearly independent.^7 Hence the components x̂_i, i ∈ I(x̂), are the unique solution of the system of linear equations

  ∑_{i∈I(x̂)} A_i x_i = b.   (7.3)

In general, the set I(x̂), and hence also the column set {A_i | i ∈ I(x̂)}, may have fewer than m elements, which can cause some inconvenience—at least in formulating the statements we want to present.

Proposition 1.8 Given the assumption rk(A) = m, for any basic solution x̂ of B there exists at least one index set I_B(x̂) ⊃ I(x̂) such that the corresponding column set {A_i | i ∈ I_B(x̂)} is a basis of IR^m. The components x̂_i, i ∈ I_B(x̂), of x̂ uniquely solve the linear system ∑_{i∈I_B(x̂)} A_i x_i = b with the nonsingular matrix (A_i | i ∈ I_B(x̂)).

Proof Assume that x̌ ∈ B is a basic solution and that {A_i | i ∈ I(x̌)} contains k columns of A, k < m. Since rk(A) = m, there exists at least one index set J_m ⊂ {1, …, n} with m elements such that the columns {A_i | i ∈ J_m} are linearly independent and hence form a basis of IR^m. A standard result in linear algebra asserts that, given a basis of an m-dimensional vector space and a linearly independent subset of k < m vectors, it is possible, by adding m − k properly chosen vectors from the basis, to complement the subset to a basis itself. Hence in our case it is possible to choose m − k indices from J_m and to add them to I(x̌), yielding I_B(x̌) such that {A_i | i ∈ I_B(x̌)} is a basis of IR^m. □

Given a basic solution x̂ ∈ B, by this proposition the matrix A can be partitioned into two parts (corresponding to x̂): a basic part

  B = (A_i | i ∈ I_B(x̂))
7 According to this definition, for I(x̂) = ∅, i.e. x̂ = 0 and hence b = 0, it follows that x̂ is a feasible basic solution as well.

and a nonbasic part

  N = (A_i | i ∈ {1, …, n} − I_B(x̂)).

Introducing the vectors x^{B} ∈ IR^m—the vector of basic variables—and x^{NB} ∈ IR^{n−m}—the vector of nonbasic variables—and assigning

  x^{B}_k = x_i, i the k-th element of I_B(x̂), k = 1, …, m,
  x^{NB}_l = x_i, i the l-th element of {1, …, n} − I_B(x̂), l = 1, …, n − m,   (7.4)

the linear system Ax = b of (7.1) may be rewritten as B x^{B} + N x^{NB} = b, or equivalently as

  x^{B} = B^{−1} b − B^{−1} N x^{NB},   (7.5)

which—using the assignment (7.4)—yields for any choice of the nonbasic variables x^{NB} a solution of our system Ax = b, and in particular for x^{NB} = 0 reproduces our feasible basic solution x̂.

Proposition 1.9 If B ≠ ∅, then there exists at least one feasible basic solution.

Proof Assume that for x̂ we have Ax̂ = b, x̂ ≥ 0. If for I(x̂) = {i | x̂_i > 0} the column set {A_i | i ∈ I(x̂)} is linearly dependent, then the linear homogeneous system of equations

  ∑_{i∈I(x̂)} A_i y_i = 0,   y_i = 0 for i ∉ I(x̂),

has a solution y̌ ≠ 0 with y̌_i < 0 for at least one i ∈ I(x̂)—if this does not hold for y̌, we could take −y̌, which solves the above homogeneous system as well. Hence for

  λ̄ := max{λ | x̂ + λy̌ ≥ 0}

we have 0 < λ̄ < ∞. Since Ay̌ = 0 obviously holds for y̌, it follows—observing the definition of λ̄—that for z := x̂ + λ̄y̌

  Az = Ax̂ + λ̄Ay̌ = b,   z ≥ 0,

i.e. z ∈ B, and I(z) ⊂ I(x̂), I(z) ≠ I(x̂), such that we have “reduced” our original feasible solution x̂ to another one with fewer positive components. Now either z is a basic solution, or we repeat the above “reduction” with x̂ := z. Obviously only finitely many reductions of the number of positive components of feasible solutions are possible. Hence we have to end up—after finitely many of these steps—with a feasible basic solution. □

As an elementary exercise, we see that the feasible set B = {x | Ax = b, x ≥ 0} of our linear program (7.1) is convex. We now want to point out that feasible basic solutions play a dominant role in describing feasible sets of linear programs.

Proposition 1.10 If B is a bounded set and B ≠ ∅, then B is the convex hull (i.e. the set of all convex linear combinations) of the set of its feasible basic solutions.

Proof To avoid trivialities or statements on empty sets, we assume that the right-hand side b ≠ 0. For any feasible solution x ∈ B we again have the index set I(x) := {i | x_i > 0}, and we denote by |I(x)| the number of elements of I(x). Obviously we have—recalling our assumption that b ≠ 0—that for any feasible solution 1 ≤ |I(x)| ≤ n. We may prove the proposition by induction on |I(x)|, the number of positive components of any feasible solution x. To begin with, we define k_0 := min_{x∈B} |I(x)| ≥ 1.
For a feasible x with |I(x)| = k_0 it follows that x is a basic solution—otherwise, by the proof of Proposition 1.9, there would exist a feasible basic solution with fewer than k_0 positive components—and we have x = 1·x, i.e. a convex linear combination of itself and hence of the set of feasible basic solutions. Let us now assume that for some k ≥ k_0 and for all feasible solutions x such that |I(x)| ≤ k the hypothesis is true. Then, given a feasible solution x̂ with |I(x̂)| = k + 1, for x̂ a basic solution we again have x̂ = 1·x̂ and thus the hypothesis holds. Otherwise, i.e. if x̂ is not a basic solution, the homogeneous system

  ∑_{i∈I(x̂)} A_i y_i = 0,   y_i = 0 for i ∉ I(x̂),

has a solution ỹ ≠ 0 for which at least one component is strictly negative and another is strictly positive, since otherwise we could assume ỹ ≥ 0, ỹ ≠ 0, to solve the homogeneous system Aỹ = 0, implying that x̂ + λỹ ∈ B for all λ ≥ 0, which, according to the inequality ‖x̂ + λỹ‖ ≥ λ‖ỹ‖ − ‖x̂‖, contradicts the assumed boundedness of B. Hence we find for

  α := max{λ | x̂ + λỹ ≥ 0},   β := min{λ | x̂ + λỹ ≥ 0}

that 0 < α < ∞ and 0 > β > −∞. Defining v := x̂ + αỹ and w := x̂ + βỹ, we have v, w ∈ B and—by the definitions of α and β—|I(v)| ≤ k and |I(w)| ≤ k, such that, according to our induction assumption, with {x^{1}, …, x^{r}} the set of all feasible basic solutions, v = ∑_{i=1}^{r} λ_i x^{i}, where ∑_{i=1}^{r} λ_i = 1, λ_i ≥ 0 for all i, and w = ∑_{i=1}^{r} µ_i x^{i}, where ∑_{i=1}^{r} µ_i = 1, µ_i ≥ 0 for all i. As is easily checked, we have x̂ = ρv + (1 − ρ)w with ρ = −β/(α − β) ∈ (0, 1). This implies immediately that x̂ is a convex linear combination of {x^{i}, i = 1, …, r}. □

Figure 21 LP: bounded feasible set.

The convex hull of finitely many points {x^{1}, …, x^{r}}, formally denoted by conv{x^{1}, …, x^{r}}, is called a convex polyhedron or a bounded convex polyhedral set (see Figure 21). Take for instance in IR^2 the points z^1 = (2, 2), z^2 = (8, 1), z^3 = (4, 3), z^4 = (7, 7) and z^5 = (1, 6). In Figure 22 we have P̃ = conv{z^1, …, z^5}, and it is obvious that z^3 is not necessary to generate P̃; in other words, P̃ = conv{z^1, z^2, z^3, z^4, z^5} = conv{z^1, z^2, z^4, z^5}. Hence we may drop z^3 without any effect on the polyhedron P̃, whereas omitting any other of the five points would essentially change the shape of the polyhedron. The points that really count in the definition of a convex polyhedron are its vertices (z^1, z^2, z^4 and z^5 in the example).
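The redundancy of z^3 can be verified exactly: the weights below (solved by hand for this sketch) exhibit z^3 as a convex combination of z^1, z^2 and z^4 alone, so dropping it leaves the polyhedron unchanged.

```python
from fractions import Fraction as F

# The points of the example; z3 already lies in conv{z1, z2, z4}.
z1, z2, z4 = (2, 2), (8, 1), (7, 7)
z3 = (4, 3)
lam = (F(22, 35), F(5, 35), F(8, 35))   # convex weights, found by hand

assert sum(lam) == 1 and all(l > 0 for l in lam)
for k in range(2):                       # check both coordinates
    assert lam[0] * z1[k] + lam[1] * z2[k] + lam[2] * z4[k] == z3[k]
```

By contrast, no such weights exist for z^1, z^2, z^4 or z^5 themselves; that is exactly what makes them vertices.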
Whereas in two- or three-dimensional spaces we know by intuition what we mean by a vertex, we need a formal definition for higher-dimensional cases: a vertex of a convex polyhedron P is a point x̂ ∈ P such that the line segment connecting any two points in P, both different from x̂, does not contain x̂. Formally, there are no y, z ∈ P with y ≠ x̂ ≠ z and λ ∈ (0, 1) such that x̂ = λy + (1 − λ)z.

Figure 22 Polyhedron generated by its vertices.

It may be easily shown that for an LP with a bounded feasible set B the feasible basic solutions x^{i}, i = 1, …, r, coincide with the vertices of B. By Proposition 1.10, the feasible set of a linear program is a convex polyhedron provided that B is bounded. Hence we have to find out under what conditions B is bounded or unbounded, respectively. For B ≠ ∅ we have seen already in the proof of Proposition 1.10 that the existence of a ỹ ≠ 0 such that Aỹ = 0, ỹ ≥ 0 would imply that B is unbounded. Therefore, for B to be bounded, the condition {y | Ay = 0, y ≥ 0} = {0} is necessary. Moreover, we have the following.

Proposition 1.11 The feasible set B ≠ ∅ is bounded iff {y | Ay = 0, y ≥ 0} = {0}.

Proof Given the above observations, it is only left to show that the condition {y | Ay = 0, y ≥ 0} = {0} is sufficient for the boundedness of B. Assume, in contrast, that B is unbounded. This means that we have feasible solutions of arbitrarily large norm: for any natural number K there exists an x^K ∈ B such that ‖x^K‖ ≥ K. Defining

  z^K := x^K / ‖x^K‖ for all K,

we have

  z^K ≥ 0,  ‖z^K‖ = 1,  Az^K = b/‖x^K‖,  and hence ‖Az^K‖ ≤ ‖b‖/K for all K.   (7.6)

Therefore the sequence {z^K, K = 1, 2, …} has an accumulation point ẑ for which, according to (7.6), ẑ ≥ 0, ‖ẑ‖ = 1 and Aẑ = 0, and hence Aẑ = 0, ẑ ≥ 0, ẑ ≠ 0. □

According to Proposition 1.11, the set C := {y | Ay = 0, y ≥ 0} plays a decisive role for the boundedness or unboundedness of the feasible set B.
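Proposition 1.11 can be illustrated on a one-constraint instance of our own: a nonzero y with Ay = 0, y ≥ 0 is a recession direction, along which feasible points of arbitrarily large norm are produced.

```python
# Toy instance of ours: B = {x >= 0 : x1 - x2 = 1}.
# The direction y = (1, 1) satisfies A y = 0, y >= 0, y != 0,
# so by Proposition 1.11 the set B must be unbounded.
A = [[1, -1]]
b = [1]
y = (1, 1)
assert sum(a * v for a, v in zip(A[0], y)) == 0 and all(v >= 0 for v in y)

x0 = (1, 0)                    # a feasible point
for t in [0, 10, 10**6]:       # x0 + t*y stays feasible while its norm grows
    x = tuple(xi + t * yi for xi, yi in zip(x0, y))
    assert sum(a * v for a, v in zip(A[0], x)) == b[0]
    assert all(v >= 0 for v in x)
```

Conversely, for a constraint such as x1 + x2 = 1 with x ≥ 0, the only y ≥ 0 with y1 + y2 = 0 is y = 0, and the feasible set is indeed bounded.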
We see immediately that C is a convex cone, which means that for any two elements y, z ∈ C it follows that λy + µz ∈ C for all λ, µ ≥ 0. In addition, we may show that C is a convex polyhedral cone, i.e. there exist finitely many y^{i} ∈ C, i = 1, …, s, such that any y ∈ C may be represented as y = ∑_{i=1}^{s} α_i y^{i}, α_i ≥ 0 for all i. Formally, we may also speak of the positive hull, denoted by

  pos{y^{1}, …, y^{s}} := {y | y = ∑_{i=1}^{s} α_i y^{i}, α_i ≥ 0 ∀i}.

Proposition 1.12 The set C = {y | Ay = 0, y ≥ 0} is a convex polyhedral cone.

Proof Since for C = {0} the statement is trivial, we assume that C ≠ {0}. For any arbitrary ŷ ∈ C such that ŷ ≠ 0, and hence ∑_{i=1}^{n} ŷ_i > 0, we have, with µ := 1/∑_{i=1}^{n} ŷ_i and ỹ := µŷ, that ỹ ∈ C̃ := {y | Ay = 0, ∑_{i=1}^{n} y_i = 1, y ≥ 0}. Obviously C̃ ⊂ C and, owing to the constraints ∑_{i=1}^{n} y_i = 1, y ≥ 0, the set C̃ is bounded. Hence, by Proposition 1.10, C̃ is a convex polyhedron generated by its feasible basic solutions {y^{1}, …, y^{s}}, such that ỹ has a representation ỹ = ∑_{i=1}^{s} λ_i y^{i} with ∑_{i=1}^{s} λ_i = 1, λ_i ≥ 0 for all i, implying that ŷ = (1/µ)ỹ = ∑_{i=1}^{s} (λ_i/µ) y^{i}. This shows that C = {y | y = ∑_{i=1}^{s} α_i y^{i}, α_i ≥ 0 ∀i}. □

In Figure 23 we see a convex polyhedral cone C and its intersection C̃ with the hyperplane H = {y | e^T y = 1} (e = (1, …, 1)^T). The vectors y^{1}, y^{2} and y^{3} are the generating elements (feasible basic solutions) of C̃, as discussed in the proof of Proposition 1.12, and therefore they are also the generating elements of the cone C. Now we are ready to describe the feasible set B of the linear program (7.1) in general. Given the convex polyhedron P := conv{x^{1}, …, x^{r}} generated by the feasible basic solutions {x^{1}, …, x^{r}} ⊂ B, and the convex polyhedral cone C = {y | Ay = 0, y ≥ 0}—given by its generating elements as pos{y^{1}, …, y^{s}}, as discussed in Proposition 1.12—we get the following.
Proposition 1.13 B is the algebraic sum of P and C, formally B = P + C, meaning that every x̃ ∈ B may be represented as x̃ = z̃ + ỹ, where z̃ ∈ P and ỹ ∈ C.

Proof Choose an arbitrary x̃ ∈ B. Since {y | Ay = 0, 0 ≤ y ≤ x̃} is compact, the continuous function φ(y) := e^T y, where e = (1, …, 1)^T, attains its maximum on this set. Hence there exists a ỹ such that

  Aỹ = 0,  ỹ ≤ x̃,  ỹ ≥ 0,  e^T ỹ = max{e^T y | Ay = 0, 0 ≤ y ≤ x̃}.   (7.7)

Figure 23 Polyhedral cone intersecting the hyperplane H = {y | e^T y = 1}.

Let x̂ := x̃ − ỹ. Then x̂ ∈ B and {y | Ay = 0, 0 ≤ y ≤ x̂} = {0}, since otherwise we should have a contradiction to (7.7). Hence for I(x̂) = {i | x̂_i > 0} we have {y | Ay = 0, y_i = 0 for i ∉ I(x̂), y ≥ 0} = {0}, and therefore, by Proposition 1.11, the feasible set

  B_1 := {x | Ax = b, x_i = 0 for i ∉ I(x̂), x ≥ 0}

is bounded and, observing that x̂ ∈ B_1, nonempty. From Proposition 1.10 it follows that x̂ is a convex linear combination of the feasible basic solutions of

  Ax = b,  x_i = 0 for i ∉ I(x̂),  x ≥ 0,

which are obviously feasible basic solutions of our original constraints Ax = b, x ≥ 0 as well. It follows that x̂ ∈ P, and, by the above construction, we have ỹ ∈ C and x̃ = x̂ + ỹ. □

According to this proposition, the feasible set of any LP is constructed as follows. First we determine the convex hull P of all feasible basic solutions, which might look like that in Figure 21, for example; then we add (algebraically) to P the convex polyhedral cone C associated with the constraints of the LP (owing to Proposition 1.10), which is indicated in Figure 24.

Figure 24 Adding the polyhedral cone C to the polyhedron P.

The result of this operation—for an unbounded feasible set—is shown in Figure 25; in the bounded case P would remain unchanged (as, for example, in Figure 21), since then, according to Proposition 1.11, we have C = {0}.
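The decomposition B = P + C of Proposition 1.13 can be traced on a toy instance of our own: subtracting the maximal y with Ay = 0, 0 ≤ y ≤ x̃, exactly as in the proof, lands on a vertex.

```python
# Toy instance of ours: B = {x >= 0 : x1 - x2 = 1}.
# Its only vertex is (1, 0), and C = pos{(1, 1)}, so every feasible
# point should split as vertex + recession direction.
x_tilde = (4, 3)                 # an arbitrary feasible point: 4 - 3 = 1

# The proof's construction: y maximal with A y = 0 and 0 <= y <= x_tilde.
# With A = (1, -1), A y = 0 forces y = (t, t), so t = min(x_tilde) = 3.
y = (3, 3)
x_hat = tuple(u - v for u, v in zip(x_tilde, y))

assert x_hat == (1, 0)           # lands on the vertex, i.e. in P
assert tuple(u + v for u, v in zip(x_hat, y)) == x_tilde   # x~ = x^ + y
```

In general x̂ need not be a single vertex but a convex combination of several; the one-vertex case here just keeps the arithmetic transparent.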
A set given as the algebraic sum of a convex polyhedron and a convex polyhedral cone is called a convex polyhedral set. Observe that this definition contains the convex polyhedron as well as the convex polyhedral cone as special cases. We shall see later in this text that it is sometimes of interest to identify so-called facets of convex polyhedral sets. Consider for instance a pyramid (in IR^3). You will certainly agree that this is a three-dimensional convex polyhedral set. The set of boundary points again consists of different convex polyhedral sets, namely sides (two-dimensional), edges (one-dimensional) and vertices (zero-dimensional). The sides are called facets. In general, consider an arbitrary convex polyhedral set B ⊂ IR^n. Without loss of generality, assume that 0 ∈ B (if not, one could, for any fixed z ∈ B, consider the translation B − {z}, obviously containing the origin). The dimension of B, dim B, is the smallest dimension of all linear spaces (in IR^n) containing B; therefore dim B ≤ n. For any linear subspace U ⊂ IR^n and any ẑ ∈ B, the intersection B_{ẑ,U} := [{ẑ} + U] ∩ B ≠ ∅ is again a convex polyhedral set. This set is called a facet if

• ẑ is a boundary point of B and B_{ẑ,U} does not contain interior points of B;
• dim U = dim B_{ẑ,U} = dim B − 1.

In other words, a facet of B is a (maximal) piece of the boundary of B having the dimension dim B − 1.

Figure 25 LP: unbounded feasible set.

The description of the feasible set of (7.1) given so far enables us to understand immediately under which conditions the linear program (7.1) is solvable and how the solution(s) may look.

Proposition 1.14 The linear program (7.1) is solvable iff

  B = {x | Ax = b, x ≥ 0} ≠ ∅   (7.8)

and

  c^T y ≥ 0 for all y ∈ C = {y | Ay = 0, y ≥ 0}.   (7.9)

Given that these two conditions are satisfied, there is at least one feasible basic solution that is an optimal solution.

Proof Obviously condition (7.9) is necessary for the existence of an optimal solution.
If B ≠ ∅, then we know from Proposition 1.13 that x ∈ B iff

  x = ∑_{i=1}^{r} λ_i x^{i} + ∑_{j=1}^{s} µ_j y^{j} with λ_i ≥ 0 ∀i, µ_j ≥ 0 ∀j and ∑_{i=1}^{r} λ_i = 1,

where {x^{1}, …, x^{r}} is the set of all feasible basic solutions in B and {y^{1}, …, y^{s}} is a set of elements generating C, for instance as described in Proposition 1.12. Hence solving

  min c^T x  s.t. Ax = b, x ≥ 0

is equivalent to solving the problem

  min { ∑_{i=1}^{r} λ_i c^T x^{i} + ∑_{j=1}^{s} µ_j c^T y^{j} }
  s.t. ∑_{i=1}^{r} λ_i = 1,  λ_i ≥ 0 ∀i,  µ_j ≥ 0 ∀j.

The objective value of this latter program can be driven to −∞ if and only if we have c^T y^{j} < 0 for at least one j ∈ {1, …, s}; otherwise, i.e. if c^T y^{j} ≥ 0 for all j ∈ {1, …, s} and hence c^T y ≥ 0 for all y ∈ C, the objective is minimized by setting µ_j = 0 for all j and choosing λ_{i_0} = 1 and λ_i = 0 for all i ≠ i_0, for x^{i_0} solving min_{1≤i≤r} c^T x^{i}. □

Observe that in general the solution of a linear program need not be unique. Given the solvability conditions of Proposition 1.14 and the notation of its proof, if c^T y^{j_0} = 0, we may choose µ_{j_0} > 0, and x^{i_0} + µ_{j_0} y^{j_0} is a solution as well; and obviously it may also happen that min_{1≤i≤r} c^T x^{i} is attained by more than just one feasible basic solution. In any case, if there is more than one (different) solution of our linear program, then there are infinitely many, owing to the fact that, given the optimal value γ, the set Γ of optimal solutions is characterized by the linear constraints

  Ax = b,  c^T x ≤ γ,  x ≥ 0,

and therefore Γ is itself a convex polyhedral set.

1.7.2 The Simplex Algorithm

If we have the task of solving a linear program of the form (7.1) then, by Proposition 1.14, we may restrict ourselves to feasible basic solutions. Let x̂ ∈ B be any basic solution and, as before, I(x̂) = {i | x̂_i > 0}. Under the assumption rk(A) = m, the feasible basic solution is called

• nondegenerate if |I(x̂)| = m, and
• degenerate if |I(x̂)| < m.

To avoid lengthy discussions, we assume in this section that for all feasible basic solutions x^{1}, …, x^{r} of the linear program (7.1) we have

  |I(x^{i})| = m, i = 1, …, r,   (7.10)

i.e. that all feasible basic solutions are nondegenerate. For the case of degenerate basic solutions, and the adjustments that might be necessary in this case, the reader may consult the wide selection of books devoted to linear programming in particular.
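As a compact preview of the procedure developed in the remainder of this section, here is a minimal tableau implementation of the simplex method in exact arithmetic. It is our own sketch (first-negative-reduced-cost entering rule, data invented for illustration), not the book's formal statement; it assumes a feasible starting basis whose columns form an identity in A, and nondegeneracy in the sense of (7.10), with no Phase I.

```python
from fractions import Fraction as F

def simplex(A, b, c, basis):
    """Tableau simplex sketch for min c^T x s.t. A x = b, x >= 0,
    started from a feasible basis whose columns form an identity."""
    m, n = len(A), len(A[0])
    T = [[F(v) for v in row] + [F(bi)] for row, bi in zip(A, b)]
    z = [F(v) for v in c] + [F(0)]     # reduced costs; z[n] = -objective
    for r, j in enumerate(basis):      # zero out costs of starting basis
        f = z[j]
        if f:
            z = [zv - f * tv for zv, tv in zip(z, T[r])]
    while True:
        try:                           # entering column: first negative cost
            e = next(j for j in range(n) if z[j] < 0)
        except StopIteration:          # simplex criterion holds: optimal
            x = [F(0)] * n
            for r, j in enumerate(basis):
                x[j] = T[r][n]
            return x, -z[n]
        rows = [r for r in range(m) if T[r][e] > 0]
        assert rows, "objective unbounded below on B"
        r = min(rows, key=lambda r: T[r][n] / T[r][e])   # ratio test
        p = T[r][e]                    # pivot: exchange the basis column
        T[r] = [v / p for v in T[r]]
        for i in range(m):
            if i != r and T[i][e]:
                f = T[i][e]
                T[i] = [a - f * v for a, v in zip(T[i], T[r])]
        f = z[e]
        z = [a - f * v for a, v in zip(z, T[r])]
        basis[r] = e

# tiny invented instance:
# min -x1 - x2  s.t.  2x1 + x2 + s1 = 8,  x1 + 3x2 + s2 = 9
x, val = simplex([[2, 1, 1, 0], [1, 3, 0, 1]], [8, 9],
                 [-1, -1, 0, 0], [2, 3])
# optimal solution x = (3, 2, 0, 0) with objective value -5
```

Each pass of the loop is one basis exchange in the sense of the steps formulated next: test the simplex criterion, pick a blocking row, pivot.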
Referring to our former presentation (7.5), we have, owing to (7.10), that I_B(x̂) = I(x̂), and, with the basic part B = (A_i | i ∈ I(x̂)) and the nonbasic part N = (A_i | i ∉ I(x̂)) of the matrix A, the constraints of (7.1) may be rewritten—using the basic and nonbasic variables as introduced in (7.4)—as

  x^{B} = B^{−1} b − B^{−1} N x^{NB},  x^{B} ≥ 0,  x^{NB} ≥ 0.   (7.11)

Obviously this system yields our feasible basic solution x̂ iff x^{NB} = 0, and then we have, by our assumption (7.10), that x^{B} = B^{−1} b > 0. Rearranging the components of c analogously to (7.4) into the two vectors

  c^{B}_k = c_i, i the k-th element of I(x̂), k = 1, …, m,
  c^{NB}_l = c_i, i the l-th element of {1, …, n} − I(x̂), l = 1, …, n − m,

owing to (7.11), the objective may now be expressed as a function of the nonbasic variables:

  c^T x = (c^{B})^T x^{B} + (c^{NB})^T x^{NB}
        = (c^{B})^T B^{−1} b + [(c^{NB})^T − (c^{B})^T B^{−1} N] x^{NB}.   (7.12)

This representation of the objective connected to the particular feasible basic solution x̂ implies the optimality condition for linear programming—the so-called simplex criterion.

Proposition 1.15 Under the assumption (7.10), the feasible basic solution resulting from (7.11) for x^{NB} = 0 is optimal iff

  [(c^{NB})^T − (c^{B})^T B^{−1} N]^T ≥ 0.   (7.13)

Proof By assumption (7.10), the feasible basic solution given by x^{B} = B^{−1} b − B^{−1} N x^{NB}, x^{NB} = 0, satisfies x^{B} = B^{−1} b > 0. Therefore any nonbasic variable x^{NB}_l may be increased to some positive amount without violating the constraints x^{B} ≥ 0. Furthermore, increasing the nonbasic variables is the only feasible change applicable to them, owing to the constraints x^{NB} ≥ 0. From the objective representation in (7.12), we see immediately that

  c^T x̂ = (c^{B})^T B^{−1} b ≤ (c^{B})^T B^{−1} b + [(c^{NB})^T − (c^{B})^T B^{−1} N] x^{NB} for all x^{NB} ≥ 0

iff [(c^{NB})^T − (c^{B})^T B^{−1} N]^T ≥ 0. □

Motivated by the above considerations, we call any nonsingular m × m submatrix B = (A_i | i ∈ I_B) of A a feasible basis for the linear program (7.1) if B^{−1} b ≥ 0. Obviously, on rearranging the variables as before into basic variables x^{B}—belonging to B—and nonbasic variables x^{NB}—belonging to N = (A_i | i ∉ I_B)—the objective γ and the constraints of (7.1) read (see (7.11) and (7.12)) as

  γ = (c^{B})^T B^{−1} b + [(c^{NB})^T − (c^{B})^T B^{−1} N] x^{NB},
  x^{B} = B^{−1} b − B^{−1} N x^{NB},  x^{B} ≥ 0,  x^{NB} ≥ 0,
{B } and x{N B } = 0 corresponds to a feasible basic solution—under our assumption (7.10), satisfying even x{B } = B −1 b > 0 instead of only x{B } = B −1 b ≥ 0 in general. Now we are ready—using the above notation—to formulate the classical solution procedure of linear programming: the simplex method Simplex method. Step 1 Determine a feasible basis B = (Ai  i ∈ IB ) for (7.1) and N = (Ai  i ∈ IB ). Step 2 If the simplex criterion (7.13) is satisﬁed then stop with x{B } = B −1 b, x{N B } = 0 being an optimal solution; otherwise, there is some ρ ∈ {1, · · · , n − m} such that for the ρth component of [(c{N B } )T − (c{B } )T B −1 N ]T we have [(c{N B } )T − (c{B } )T B −1 N ]T < 0, ρ . and we increase the ρth nonbasic variable xρ {N B } If increasing xρ is not “blocked” by the constraints x{B } ≥ 0, {N B } i.e. if xρ → ∞ is feasible, then inf B γ = −∞ such that our problem has no (ﬁnite) optimal solution. {N B } is “blocked” by one of the If, on the other hand, increasing xρ {B } constraints xi ≥ 0, i = 1, · · · , m, such that, for instance, for some {B } µ ∈ {1, · · · , m} the basic variable xµ is the ﬁrst one to become {B } {N B } xµ = 0 while increasing xρ , then go to step 3.
{N B } BASIC CONCEPTS 67 Step 3 Exchange the µth column of B with the ρth column of N , yielding ˜ ˜ ˜ new basic and nonbasic parts B and N of A such that B contains ˜ Nρ as its µth column and N contains Bµ as its ρth column. Redeﬁne ˜ ˜ B := B and N := N , and rearrange x{B } , x{N B } , c{B } and c{N B } correspondingly, and then return to step 2. Remark 1.5 The following comments on the single steps of the simplex method may be helpful for a better understanding of this procedure: Step 1 Obviously we assume that B = ∅. The existence of a feasible basis B follows from Propositions 1.9 and 1.8. Because of our assumption (7.10), we have B −1 b > 0. Step 2 (a) If for a feasible basis B we have [(c{N B } )T − (c{B } )T B −1 N ]T ≥ 0 then by Proposition 1.15 this basis (i.e. the corresponding basic solution) is optimal. (b) If the simplex criterion is violated for the feasible basic solution belonging to B given by x{B } = B −1 b, x{N B } = 0, then there must be an index ρ ∈ {1, · · · , n − m} such that α0ρ := [(c{N B } )T − (c{B } )T B −1 N ]T < 0, and, keeping all but the ρth ρ nonbasic variables on their present values xj = 0, j = ρ, with α·ρ := −B −1 Nρ , the objective and the basic variables have the representations , γ = (c{B } )T B −1 b + α0ρ xρ {N B } {B } = B −1 b + α·ρ xρ . x According to these formulae, we conclude immediately that for α·ρ ≥ 0 the nonnegativity of the basic variables would never {N B } arbitrarily such that we had be violated by increasing xρ inf B γ = −∞, whereas for α·ρ ≥ 0 it would follow that the set of rows {i  αiρ < 0, 1 ≤ i ≤ m} = ∅, and consequently, with {N B } ≥ 0 would β := B −1 b, the constraints x{B } = β + α·ρ xρ {N B } “block” the increase of xρ at some positive value (remember that, by the assumption (7.10), we have β > 0). More precisely, we now have to observe the constraints βi + αiρ x{N B } ≥ 0 for i ∈ {i  αiρ < 0, 1 ≤ i ≤ m} ρ
or equivalently

x_ρ^{NB} ≤ β_i/(−α_{iρ})  for i ∈ {i | α_{iρ} < 0, 1 ≤ i ≤ m}.

Hence, with μ ∈ {i | α_{iρ} < 0, 1 ≤ i ≤ m} denoting a row for which

β_μ/(−α_{μρ}) = min{ β_i/(−α_{iρ}) | α_{iρ} < 0, 1 ≤ i ≤ m },

x_μ^{B} is the first basic variable to decrease to zero if x_ρ^{NB} is increased to the value β_μ/(−α_{μρ}), and we observe that at the same time the objective value is changed to
γ = (c^{B})^T β + α_{0ρ} · β_μ/(−α_{μρ}) < (c^{B})^T β,

since α_{0ρ} < 0, β_μ > 0 and −α_{μρ} > 0, such that we have a strict decrease of the objective.

Step 3 The only point to understand here is that B̃ as constructed in this step is again a basis. By assumption, B was a basis, i.e. the column set (B_1, ···, B_μ, ···, B_m) was linearly independent. Entering step 3 according to step 2 asserts that for α_{·ρ} = −B^{−1}N_ρ we have α_{μρ} < 0, i.e. in the representation of the column N_ρ by the basic columns, N_ρ = −Σ_{i=1}^m B_i α_{iρ}, the column B_μ appears with a nonzero coefficient. In this case it is well known from linear algebra that the column set (B_1, ···, N_ρ, ···, B_m) is linearly independent as well, and hence B̃ is a basis. The operation of changing the basis by exchanging one column (step 3) is usually called a pivot step. □

Summarizing the above remarks immediately yields the following.

Proposition 1.16 If the linear program (7.1) is feasible then the simplex method yields—under the assumption of nondegeneracy (7.10)—after finitely many steps either a solution or else the information that there is no finite solution, i.e. that inf_B γ = −∞.

Proof As mentioned in Remark 1.5, step 3, the objective strictly decreases in every pivot step. During the cycles (steps 2 and 3) of the method, we only consider feasible bases. Since there are no more than finitely many feasible bases for any linear program of the form (7.1), the simplex method must end after finitely many cycles. □

Remark 1.6 In step 2 of the simplex method it may happen that the simplex criterion is not satisfied and that we discover that inf_B γ = −∞. It is worth mentioning that in this situation we may easily find a generating element of the cone C associated with B, as discussed in Proposition 1.12. With the above notation, we then have a feasible basis B, and for some column N_ρ ≠ 0 we have B^{−1}N_ρ ≤ 0.
Then, with e = (1, ···, 1)^T of appropriate dimension, for (ŷ^{B}, ŷ^{NB}) satisfying

ŷ^{B} = −B^{−1}N_ρ · ŷ_ρ^{NB},
ŷ_ρ^{NB} = 1/(−e^T B^{−1}N_ρ + 1),
ŷ_l^{NB} = 0 for l ≠ ρ,

it follows that

B ŷ^{B} + N ŷ^{NB} = 0,
e^T ŷ^{B} + e^T ŷ^{NB} = −e^T B^{−1}N_ρ · ŷ_ρ^{NB} + ŷ_ρ^{NB} = (−e^T B^{−1}N_ρ + 1) ŷ_ρ^{NB} = 1,
ŷ^{B} ≥ 0, ŷ^{NB} ≥ 0.

Observe that, with B = (B_1, ···, B_m) a basis of IR^m, owing to v = B^{−1}N_ρ ≤ 0, and hence 1 − e^T v ≥ 1, we have (subtracting v_i times the ith column from the last one)

rk ( B_1 ··· B_m  N_ρ )     ( B_1 ··· B_m    0      )
   ( 1   ···  1    1  ) = rk ( 1   ···  1   1 − e^T v ) = m + 1.

It follows that the column set {(B_1, 1)^T, ···, (B_m, 1)^T, (N_ρ, 1)^T}
is a basis of IR^{m+1}. Hence (ŷ^{B}, ŷ^{NB}) is one of the generating elements of the convex polyhedral cone

{(y^{B}, y^{NB}) | B y^{B} + N y^{NB} = 0, y^{B} ≥ 0, y^{NB} ≥ 0},

as derived in Proposition 1.12. □

1.7.3 Duality Statements

Given the linear program (7.1) as so-called primal program

min c^T x
s.t. Ax = b,          (7.14)
     x ≥ 0,

the corresponding dual program is formulated as

max b^T u
s.t. A^T u ≤ c.       (7.15)

Remark 1.7 Instead of stating a whole bunch of rules on how to assign the correct dual program to any of the various possible formulations of the primal linear program, we might recommend transformation of the primal program to the standard form (7.14), followed by the assignment of the linear program (7.15) as its dual. Let us just give some examples.

Example 1.5 Assume that our primal program is of the form

min c^T x  s.t. Ax ≥ b, x ≥ 0,

which, by transformation to the standard form, is equivalent to

min c^T x  s.t. Ax − Iy = b, x ≥ 0, y ≥ 0,

I being the m × m identity matrix, and, according to the above definition, has the dual program

max b^T u  s.t. A^T u ≤ c, −Iu ≤ 0,

or equivalently

max b^T u  s.t. A^T u ≤ c, u ≥ 0.

Hence for this case the pair of the primal and its dual program looks like

min c^T x  s.t. Ax ≥ b, x ≥ 0;    max b^T u  s.t. A^T u ≤ c, u ≥ 0. □

Example 1.6 Considering the primal program

min c^T x  s.t. Ax ≤ b, x ≥ 0

in its standard form

min c^T x  s.t. Ax + Iy = b, x ≥ 0, y ≥ 0,

we get the dual program

max b^T u  s.t. A^T u ≤ c, u ≤ 0,

or equivalently, with v := −u,

max (−b^T v)  s.t. A^T v ≥ −c, v ≥ 0.

Therefore we now have the following pair of a primal and the corresponding dual program:

min c^T x  s.t. Ax ≤ b, x ≥ 0;    max (−b^T v)  s.t. A^T v ≥ −c, v ≥ 0. □

Example 1.7 Finally consider the primal program

max g^T x  s.t. Dx ≤ f.
This program is of the same form as the dual of our standard linear program (7.14), and—using the fact that for any function ϕ defined on some set M we have sup_{x∈M} ϕ(x) = −inf_{x∈M} {−ϕ(x)}—its standard form (with the split x = x⁺ − x⁻) is written as

−min (−g^T x⁺ + g^T x⁻)
 s.t. Dx⁺ − Dx⁻ + Iy = f,
      x⁺ ≥ 0, x⁻ ≥ 0, y ≥ 0,

with the dual program

−max f^T z  s.t. D^T z ≤ −g, −D^T z ≤ g, z ≤ 0,

which is (with w := −z) equivalent to

min f^T w  s.t. D^T w = g, w ≥ 0,

such that we have the dual pair

max g^T x  s.t. Dx ≤ f;    min f^T w  s.t. D^T w = g, w ≥ 0. □

Hence, by comparison with our standard forms of the primal program (7.14) and the dual program (7.15), it follows that the dual of the dual is the primal program. □

There are close relations between a primal linear program and its dual program. Let us denote the feasible set of the primal program (7.14) by B and that of its dual program by D. Furthermore, let us introduce the convention that

inf_{x∈B} c^T x = +∞ if B = ∅,
sup_{u∈D} b^T u = −∞ if D = ∅.          (7.16)

Then we have as a first statement the following so-called weak duality theorem:

Proposition 1.17 For the primal linear program (7.14) and its dual (7.15)
inf_{x∈B} c^T x ≥ sup_{u∈D} b^T u.

Proof If either B = ∅ or D = ∅ then the proposition is trivial owing to our convention (7.16). Assume therefore that both feasible sets are nonempty and choose arbitrarily an element x̂ ∈ B and an element û ∈ D. Then, from (7.15), we have

c − A^T û ≥ 0,

and, by scalar multiplication with x̂ ≥ 0,

x̂^T (c − A^T û) ≥ 0,

which, observing that Ax̂ = b by (7.14), implies

x̂^T c − b^T û ≥ 0.

Since x̂ ∈ B and û ∈ D were arbitrarily chosen, we have

c^T x ≥ b^T u  ∀x ∈ B, u ∈ D,

and hence inf_{x∈B} c^T x ≥ sup_{u∈D} b^T u. □

In view of this proposition, the question arises as to whether or when it might happen that

inf_{x∈B} c^T x > sup_{u∈D} b^T u.
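Proposition 1.17 is easy to check numerically. A minimal sketch on a hypothetical two-variable instance (the data A, b, c below are illustrative, not from the text): any primal feasible x̂ and dual feasible û must satisfy c^T x̂ ≥ b^T û, and at a primal optimum the gap can close.

```python
# Weak duality for the standard-form pair (7.14)/(7.15):
#   min c^T x s.t. Ax = b, x >= 0      vs.      max b^T u s.t. A^T u <= c.
# Hypothetical data: one constraint x1 + x2 = 1.

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

A = [[1.0, 1.0]]
b = [1.0]
c = [1.0, 2.0]

x_hat = [0.5, 0.5]        # primal feasible: A x_hat = b, x_hat >= 0
u_hat = [1.0]             # dual feasible:  A^T u_hat = (1, 1) <= c

assert abs(dot(A[0], x_hat) - b[0]) < 1e-12 and all(v >= 0 for v in x_hat)
assert all(A[0][j] * u_hat[0] <= c[j] for j in range(2))

# weak duality (Proposition 1.17): c^T x >= b^T u for every such pair
assert dot(c, x_hat) >= dot(b, u_hat)      # 1.5 >= 1.0

# the primal optimum x* = (1, 0) closes the gap, as in Proposition 1.18
x_star = [1.0, 0.0]
assert dot(c, x_star) == dot(b, u_hat)     # both equal 1.0
```

The same check fails to produce *any* feasible pair in Example 1.8 below, which is exactly how the duality gap there arises.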
Example 1.8 Consider the following primal linear program:

min {3x1 + 3x2 − 16x3}
s.t.  5x1 + 3x2 − 8x3 = 2,
     −5x1 + 3x2 − 8x3 = 4,
      xi ≥ 0, i = 1, 2, 3,

and its dual program

max {2u1 + 4u2}
s.t.  5u1 − 5u2 ≤ 3,
      3u1 + 3u2 ≤ 3,
     −8u1 − 8u2 ≤ −16.

Adding the equations of the primal program, we get 6x2 − 16x3 = 6, and hence

x2 = 1 + (8/3)x3,

which, on insertion into the first equation, yields

x1 = (1/5)(2 − 3 − 8x3 + 8x3) = −1/5,

showing that the primal program is not feasible. Looking at the dual constraints, we get from the second and third inequalities that

u1 + u2 ≤ 1,  u1 + u2 ≥ 2,

such that also the dual constraints do not allow a feasible solution. Hence, by our convention (7.16), we have for this dual pair
inf_{x∈B} c^T x = +∞ > sup_{u∈D} b^T u = −∞. □

However, the so-called duality gap in the above example does not occur so long as at least one of the two problems is feasible, as is asserted by the following strong duality theorem of linear programming.

Proposition 1.18 Consider the feasible sets B and D of the dual pair of linear programs (7.14) and (7.15) respectively. If either B ≠ ∅ or D ≠ ∅ then it follows that

inf_{x∈B} c^T x = sup_{u∈D} b^T u.
If one of these two problems is solvable then so is the other, and we have

min_{x∈B} c^T x = max_{u∈D} b^T u.
Proof Assume that B ≠ ∅. If inf_{x∈B} c^T x = −∞ then it follows from the weak duality theorem that sup_{u∈D} b^T u = −∞ as well, i.e. that the dual program (7.15) is infeasible. If the primal program (7.14) is solvable then we know from Proposition 1.14 that there is an optimal feasible basis B such that the primal program may be rewritten as

min {(c^{B})^T x^{B} + (c^{NB})^T x^{NB}}
s.t. B x^{B} + N x^{NB} = b,
     x^{B} ≥ 0, x^{NB} ≥ 0,

and therefore the dual program reads as

max b^T u
s.t. B^T u ≤ c^{B},
     N^T u ≤ c^{NB}.

For B an optimal feasible basis, owing to Proposition 1.15, the simplex criterion

[(c^{NB})^T − (c^{B})^T B^{−1} N]^T ≥ 0

has to hold. Hence it follows immediately that û := (B^T)^{−1} c^{B} satisfies the dual constraints. Additionally, the dual objective value b^T û = b^T (B^T)^{−1} c^{B} is equal to the primal optimal value (c^{B})^T B^{−1} b. In view of Proposition 1.17, it follows that û is an optimal solution of the dual program. □

An immediate consequence of the strong duality theorem is Farkas' lemma, which yields a necessary and sufficient condition for the feasibility of a system of linear constraints, and may be stated as follows.

Proposition 1.19 The set {x | Ax = b, x ≥ 0} ≠ ∅ if and only if A^T u ≥ 0 implies that b^T u ≥ 0.

Proof Assume that ũ satisfies A^T ũ ≥ 0 and that {x | Ax = b, x ≥ 0} ≠ ∅. Then let x̂ be a feasible solution, i.e. we have

Ax̂ = b, x̂ ≥ 0,

and, by scalar multiplication with ũ, we get

ũ^T b = ũ^T A x̂ ≥ 0  (since A^T ũ ≥ 0 and x̂ ≥ 0),
so that the condition is necessary. Assume now that the following condition holds: A^T u ≥ 0 implies that b^T u ≥ 0. Choosing any û and defining c := A^T û (so that the program below is feasible, and bounded by the assumed condition), it follows from Proposition 1.14 that the linear program

min b^T u  s.t. A^T u ≥ c

is solvable. Then its dual program

max c^T x  s.t. Ax = b, x ≥ 0

is solvable and hence feasible. □

1.7.4 A Dual Decomposition Method

In Section 1.5 we discussed stochastic linear programs with linear recourse and mentioned in particular the case of a finite support Ξ of the probability distribution. We saw that the deterministic equivalent—the linear program (5.2)—has a dual decomposition structure. We want to sketch a solution method that makes use of this structure. For simplicity, and just to present the essential ideas, we restrict ourselves to a support Ξ containing just one realization such that the problem to discuss is reduced to

min {c^T x + q^T y}
s.t. Ax = b,
     Tx + Wy = h,          (7.17)
     x ≥ 0, y ≥ 0.

In addition, we assume that the problem is solvable and that the set {x | Ax = b, x ≥ 0} is bounded. The above problem may be restated as

min {c^T x + f(x)}  s.t. Ax = b, x ≥ 0,

with

f(x) := min {q^T y | Wy = h − Tx, y ≥ 0}.

Our recourse function f(x) is easily seen to be piecewise linear and convex. It is also immediate that the above problem can be replaced by the equivalent problem

min {c^T x + θ}
s.t. Ax = b,
     θ − f(x) ≥ 0,
     x ≥ 0;

however, this would require that we know the function f(x) explicitly in advance. This will not be the case in general. Therefore we may try to construct a sequence of new (additional) linear constraints that can be used to define a monotonically decreasing feasible set B1 of (n + 1)-vectors (x_1, ···, x_n, θ)^T such that finally, with B0 := {(x^T, θ)^T | Ax = b, x ≥ 0, θ ∈ IR}, the problem min_{(x,θ)∈B0∩B1} {c^T x + θ} yields a (first-stage) solution of our problem (7.17). After these preparations, we may describe the following particular method.
Dual decomposition method

Step 1 With θ0 a lower bound for

min {q^T y | Ax = b, Tx + Wy = h, x ≥ 0, y ≥ 0},

solve the program

min {c^T x + θ | Ax = b, θ ≥ θ0, x ≥ 0},

yielding a solution (x̂, θ̂). Let B1 := IR^n × {θ | θ ≥ θ0}.

Step 2 Using the last first-stage solution x̂, evaluate the recourse function

f(x̂) = min {q^T y | Wy = h − Tx̂, y ≥ 0}
     = max {(h − Tx̂)^T u | W^T u ≤ q}.

Now we have to distinguish two cases.

(a) If f(x̂) = +∞ then x̂ is not feasible with respect to all constraints of (7.17) (i.e. x̂ does not satisfy the induced constraints discussed in Proposition 1.3), and by Proposition 1.14 we have a ũ such that W^T ũ ≤ 0 and (h − Tx̂)^T ũ > 0. On the other hand, for any feasible x there must exist a y ≥ 0 such that Wy = h − Tx. Scalar multiplication of this equation by ũ yields

ũ^T (h − Tx) = ũ^T W y ≤ 0,

since ũ^T W ≤ 0 and y ≥ 0, and hence

ũ^T h ≤ ũ^T Tx,

which has to hold for any feasible x, and obviously does not hold for x̂, since ũ^T (h − Tx̂) > 0. Therefore we introduce the feasibility cut, cutting off the infeasible solution x̂:

ũ^T (h − Tx) ≤ 0.

Then we redefine B1 := B1 ∩ {(x^T, θ)^T | ũ^T (h − Tx) ≤ 0} and go on to step 3.

(b) Otherwise, if f(x̂) is finite, we have for the recourse problem (see the proof of Proposition 1.18) simultaneously—for x̂—a primal optimal basic solution ŷ and a dual optimal basic solution û. From the dual formulation of the recourse problem, it is evident that

f(x̂) = (h − Tx̂)^T û,

whereas for any x we have

f(x) = sup {(h − Tx)^T u | W^T u ≤ q} ≥ (h − Tx)^T û = û^T (h − Tx).

Figure 26 Dual decomposition: optimality cuts.

The intended constraint θ ≥ f(x) implies the linear constraint

θ ≥ û^T (h − Tx),

which is violated by (x̂^T, θ̂)^T iff (h − Tx̂)^T û > θ̂; in this case we introduce the optimality cut (see Figure 26), cutting off the nonoptimal solution (x̂^T, θ̂)^T:

θ ≥ û^T (h − Tx).
Correspondingly, we redefine B1 := B1 ∩ {(x^T, θ)^T | θ ≥ û^T (h − Tx)} and continue with step 3; otherwise, i.e. if f(x̂) ≤ θ̂, we stop, with x̂ being an optimal first-stage solution.

Step 3 Solve the updated problem

min {c^T x + θ | (x^T, θ)^T ∈ B0 ∩ B1},

yielding the optimal solution (x̃^T, θ̃)^T. With (x̂^T, θ̂)^T := (x̃^T, θ̃)^T, we return to step 2.

Remark 1.8 The following comments may help to see that this method works properly.

Step 1 We have assumed problem (7.17) to be solvable, which implies, by Proposition 1.14, that

{(x, y) | Ax = b, Tx + Wy = h, x ≥ 0, y ≥ 0} ≠ ∅,
{v | Wv = 0, q^T v < 0, v ≥ 0} = ∅.

In addition, we have assumed {x | Ax = b, x ≥ 0} to be bounded. Hence inf {f(x) | Ax = b, x ≥ 0} is finite such that the lower bound θ0 exists. This (and the boundedness of {x | Ax = b, x ≥ 0}) implies that min {c^T x + θ | Ax = b, θ ≥ θ0, x ≥ 0} is solvable.

Step 2 If f(x̂) = +∞, we know from Proposition 1.14 that {u | W^T u ≤ 0, (h − Tx̂)^T u > 0} ≠ ∅, and, according to Remark 1.6, for the convex polyhedral cone {u | W^T u ≤ 0} we may find with the simplex method one of the generating elements ũ mentioned in Proposition 1.12 that satisfies (h − Tx̂)^T ũ > 0. By Proposition 1.12, we have finitely many generating elements for the cone {u | W^T u ≤ 0} such that, after having used all of them to construct feasibility cuts, for all feasible x we should have (h − Tx)^T u ≤ 0 ∀u ∈ {u | W^T u ≤ 0} and hence solvability of the recourse problem. This shows that f(x̂) = +∞ may appear only finitely many times within this method.

If f(x̂) is finite, the simplex method yields primal and dual optimal feasible basic solutions ŷ and û respectively.
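The pieces assembled so far—master problem, feasibility cuts, optimality cuts and the stopping rule—can be sketched in code. The instance below is hypothetical, chosen so that everything is available in closed form: x and y are scalars, W = (1), so f(x) = q(h − tx) when h − tx ≥ 0 and +∞ otherwise; the dual ray ũ = −1 yields the feasibility cut t·x ≤ h, and the dual optimum û = q yields the optimality cut θ ≥ q(h − tx). The "master" solver simply enumerates breakpoints of the one-dimensional piecewise linear objective.

```python
# Dual decomposition sketch for min c*x + q*y, 0 <= x <= 10,
# y = h - t*x, y >= 0 (all data hypothetical).

c, q, t, h = -3.0, 2.0, 1.0, 4.0
x_lo, x_hi = 0.0, 10.0
theta0 = -5.0                      # valid lower bound on f (here f >= 0)

opt_cuts = []                      # pairs (a, b): theta >= a + b*x

def master():
    """Minimize c*x + max(theta0, max_i(a_i + b_i*x)) over [x_lo, x_hi]."""
    lines = [(theta0, 0.0)] + opt_cuts
    cand = {x_lo, x_hi}
    for i in range(len(lines)):            # breakpoints of the envelope
        for j in range(i + 1, len(lines)):
            (a1, b1), (a2, b2) = lines[i], lines[j]
            if b1 != b2:
                xc = (a2 - a1) / (b1 - b2)
                if x_lo <= xc <= x_hi:
                    cand.add(xc)
    x = min(cand, key=lambda v: c * v + max(a + b * v for a, b in lines))
    return x, max(a + b * x for a, b in lines)

while True:
    x_hat, theta_hat = master()
    if h - t * x_hat < 0:                  # f(x_hat) = +inf: feasibility cut
        x_hi = min(x_hi, h / t)            # from the ray u~ = -1
        continue
    f_hat = q * (h - t * x_hat)            # dual optimum u^ = q
    if f_hat <= theta_hat + 1e-9:          # stopping rule f(x^) <= theta^
        break
    opt_cuts.append((q * h, -q * t))       # optimality cut theta >= q*(h-t*x)

print(x_hat, c * x_hat + f_hat)            # expect x = 4, value -12
```

On this instance the first master solution x̂ = 10 triggers the feasibility cut x ≤ 4, the second triggers one optimality cut, and the third satisfies the stopping rule at x̂ = 4.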
Assume that we already had the same dual basic solution ũ := û in a previous step to construct an optimality cut θ ≥ ũ^T (h − Tx); then our present θ̂ has to satisfy this constraint for x = x̂, such that

θ̂ ≥ ũ^T (h − Tx̂) = û^T (h − Tx̂)

holds, or equivalently we have f(x̂) ≤ θ̂, and stop the procedure. From the above inequalities, it follows that

θ̂ ≥ (h − Tx̂)^T u^{i}, i = 1, ···, k,

if u^{1}, ···, u^{k} denote the feasible basic solutions in {u | W^T u ≤ q} used so far for optimality cuts. Observing that in step 3 for any x we minimize θ with respect to B1, this implies that

θ̂ = max_{1≤i≤k} (h − Tx̂)^T u^{i}.

Given our stopping rule f(x̂) ≤ θ̂, with the set of all feasible basic solutions, {u^{1}, ···, u^{k}, ···, u^{r}}, of {u | W^T u ≤ q}, it follows that

θ̂ = max_{1≤i≤k} (h − Tx̂)^T u^{i} ≤ max_{1≤i≤r} (h − Tx̂)^T u^{i} = f(x̂) ≤ θ̂,

and hence θ̂ = f(x̂), which implies the optimality of x̂. □

Summarizing the above remarks we have the following.

Proposition 1.20 Provided that the program (7.17) is solvable and {x | Ax = b, x ≥ 0} is bounded, the dual decomposition method yields an optimal solution after finitely many steps.

We have described this method for the data structure of the linear program (7.17) that would result if a stochastic linear program with recourse had just one realization of the random data. To this end, we introduced the feasibility and optimality cuts for the recourse function f(x) := min {q^T y | Wy = h − Tx, y ≥ 0}. The modification for a finite discrete distribution with K realizations is immediate. From the discussion in Section 1.5, our problem is of the form
min c^T x + Σ_{i=1}^K q^{iT} y^i
s.t. Ax = b,
     T^i x + W y^i = h^i, i = 1, ···, K,
     x ≥ 0, y^i ≥ 0, i = 1, ···, K.

Thus we may simply introduce feasibility and optimality cuts for all the recourse functions f_i(x) := min {q^{iT} y^i | W y^i = h^i − T^i x, y^i ≥ 0}, i = 1, ···, K, yielding the so-called multicut version of the dual decomposition method. Alternatively, combining the single cuts corresponding to the particular blocks i = 1, ···, K with their respective probabilities leads to the so-called L-shaped method.

1.8 Nonlinear Programming

In this section we summarize some basic facts about nonlinear programming problems written in the standard form

min f(x)
s.t. g_i(x) ≤ 0, i = 1, ···, m.          (8.1)

The feasible set is again denoted by B:

B := {x | g_i(x) ≤ 0, i = 1, ···, m}.

As in the previous section, any other nonlinear program, for instance

min f(x)  s.t. g_i(x) ≤ 0, i = 1, ···, m, x ≥ 0,

or

min f(x)  s.t. g_i(x) ≤ 0, i = 1, ···, m_1, g_i(x) = 0, i = m_1 + 1, ···, m, x ≥ 0,

or

min f(x)  s.t. g_i(x) ≥ 0, i = 1, ···, m, x ≥ 0,

may be transformed into the standard form (8.1). We assume throughout this section that the functions f, g_i : IR^n → IR are given, that at least one of them is not a linear function, and that all of them are continuously (partially) differentiable (i.e. ∂f/∂x_j and ∂g_i/∂x_j are continuous). Occasionally we restrict ourselves to the case that the functions are convex, since we shall not widely deal with nonconvex problems in this book. This implies, according to Lemma 1.1, that any local minimum of program (8.1) is a global minimum.

First of all, we have to refer to a well known fact from analysis.

Proposition 1.21 The function ϕ : IR^n → IR is convex iff for all arbitrarily chosen x, y ∈ IR^n we have

(y − x)^T ∇ϕ(x) ≤ ϕ(y) − ϕ(x).
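A quick numerical check of this inequality, for the hypothetical convex function ϕ(x) = x1² + x2² with gradient ∇ϕ(x) = 2x, over a few sample points:

```python
# Proposition 1.21 checked numerically: for a convex phi, the tangent
# (hyper)plane at any point x stays below phi everywhere.
import itertools

def phi(p):
    return p[0] ** 2 + p[1] ** 2

def grad(p):
    return (2 * p[0], 2 * p[1])

pts = [(-2.0, 1.0), (0.0, 0.0), (1.5, -3.0), (4.0, 2.0)]
for x, y in itertools.product(pts, repeat=2):
    g = grad(x)
    lhs = (y[0] - x[0]) * g[0] + (y[1] - x[1]) * g[1]
    assert lhs <= phi(y) - phi(x) + 1e-12   # (y - x)^T grad phi(x) <= phi(y) - phi(x)
```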
In other words, for a convex function, a tangent (hyperplane) at any arbitrary point (of its graph) supports the function everywhere from below; a hyperplane with this property is called a supporting hyperplane for this function (see Figure 27). We know from calculus that for some x ∈ IRn to yield a local minimum for ˆ a diﬀerentiable function ϕ : IRn −→ IR we have the necessary condition ∇ϕ(ˆ) = 0. x 82 STOCHASTIC PROGRAMMING Figure 27 Convex function with tangent as supporting hyperplane. If, moreover, the function ϕ is convex then, owing to Proposition 1.21, this condition is also suﬃcient for x to be a global minimum, since then for any ˆ arbitrary x ∈ IRn we have 0 = (x − x)T ∇ϕ(ˆ) ≤ ϕ(x) − ϕ(ˆ) ˆ x x and hence ϕ(ˆ) ≤ ϕ(x) ∀x ∈ IRn . x Whereas the above optimality condition is necessary for unconstrained minimization, the situation may become somewhat diﬀerent for constrained minimization. Example 1.9 For x ∈ IR consider the simple problem min ψ (x) = x2 s.t. x ≥ 1, with the obvious unique solution x = 1, with ∇ψ (ˆ) = ˆ x dψ (ˆ) = 2. x dx Hence we cannot just transfer the optimality conditions for unconstrained optimization to the constrained case. 2 Therefore we shall ﬁrst deal with the necessary and/or suﬃcient conditions for some x ∈ IRn to be a local or global solution of the program (8.1). ˆ BASIC CONCEPTS 83 1.8.1 The Kuhn–Tucker Conditions Remark 1.9 To get an idea of what kind of optimality conditions we may expect for problems of the type (8.1), let us ﬁrst—contrary to our general assumption—consider the case where f, gi , i = 1, · · · , m, are linear functions f (x) := cT x, gi (x) := aT x − bi , i = 1, · · · , m, i such that we have the gradients ∇f (x) = c, ∇gi (x) = ai , and problem (8.1) becomes the linear program min cT x s.t. aT x ≤ bi , i = 1, · · · , m. 
Here f and g_i are given by (8.2) and (8.3) respectively, and the resulting linear program is (8.4).

Although we did not explicitly discuss optimality conditions for linear programs in the previous section, they are implicitly available in the duality statements discussed there. The dual problem of (8.4) is

max {−b^T u}
s.t. −Σ_{i=1}^m a_i u_i = c,          (8.5)
     u ≥ 0.

Let A be the m × n matrix having a_i^T, i = 1, ···, m, as rows. The difference of the primal and the dual objective functions can then be written as

c^T x + b^T u = c^T x + u^T Ax − u^T Ax + b^T u
             = (c + A^T u)^T x + (b − Ax)^T u
             = [∇f(x) + Σ_{i=1}^m u_i ∇g_i(x)]^T x − Σ_{i=1}^m u_i g_i(x).          (8.6)

From the duality statements for linear programming (Propositions 1.17 and 1.18), we know the following.

(a) If x̂ is an optimal solution of the primal program (8.4) then, by the strong duality theorem (Proposition 1.18), there exists a solution û of the dual program (8.5) such that the difference of the primal and dual objective vanishes. For the pair of dual problems (8.4) and (8.5) this means that c^T x̂ − (−b^T û) = c^T x̂ + b^T û = 0. In view of (8.6) this may also be stated as the necessary condition

∃û ≥ 0 such that ∇f(x̂) + Σ_{i=1}^m û_i ∇g_i(x̂) = 0,  Σ_{i=1}^m û_i g_i(x̂) = 0.

(b) If we have a primal feasible and a dual feasible solution x̃ and ũ respectively, such that the difference of the respective objectives is zero, then, by the weak duality theorem (Proposition 1.17), x̃ solves the primal problem; in other words, given a feasible x̃, the condition

∃ũ ≥ 0 such that ∇f(x̃) + Σ_{i=1}^m ũ_i ∇g_i(x̃) = 0,  Σ_{i=1}^m ũ_i g_i(x̃) = 0

is sufficient for x̃ to be a solution of the program (8.4). □

Remark 1.10 The optimality condition derived in Remark 1.9 for the linear case could be formulated as follows:

(1) For the feasible x̂ the negative gradient of the objective f—i.e. the direction of the greatest (local) descent of f—is equal (with the multipliers û_i ≥ 0) to a nonnegative linear combination of the gradients of those constraint functions g_i that are active at x̂, i.e. that satisfy g_i(x̂) = 0.

(2) This corresponds to the fact that the multipliers satisfy the complementarity conditions û_i g_i(x̂) = 0, i = 1, ···, m, stating that the multipliers û_i are zero for those constraints that are not active at x̂, i.e. that satisfy g_i(x̂) < 0.

In conclusion, this optimality condition says that −∇f(x̂) must be contained in the convex polyhedral cone generated by the gradients ∇g_i(x̂) of the constraints being active in x̂. This is one possible formulation of the Kuhn–Tucker conditions illustrated in Figure 28. □

Let us now return to the more general nonlinear case and consider the following question. Given that x̂ is a (local) solution, under what assumption

Figure 28 Kuhn–Tucker conditions.

does this imply that the above optimality conditions,
∃û ≥ 0 such that ∇f(x̂) + Σ_{i=1}^m û_i ∇g_i(x̂) = 0,  Σ_{i=1}^m û_i g_i(x̂) = 0,          (8.7)

hold? Hence we ask under what assumption are the conditions (8.7) necessary for x̂ to be a (locally) optimal solution of the program (8.1). To answer this question, let I(x̂) := {i | g_i(x̂) = 0}, such that the optimality conditions (8.7) are equivalent to

Σ_{i∈I(x̂)} û_i ∇g_i(x̂) = −∇f(x̂),  û_i ≥ 0 for i ∈ I(x̂) ≠ ∅.

Observing that ∇g_i(x̂) and ∇f(x̂) are constant vectors when x is fixed at x̂, the condition of Farkas' lemma (Proposition 1.19) is satisfied if and only if the following regularity condition holds in x̂:

RC 0  z^T ∇g_i(x̂) ≤ 0, i ∈ I(x̂), implies that z^T ∇f(x̂) ≥ 0.          (8.8)

Hence we have the rigorous formulation of the Kuhn–Tucker conditions:

Proposition 1.22 Given that x̂ is a (local) solution of the nonlinear program (8.1), under the assumption that the regularity condition RC 0 is satisfied in x̂ it necessarily follows that

∃û ≥ 0 such that ∇f(x̂) + Σ_{i=1}^m û_i ∇g_i(x̂) = 0,  Σ_{i=1}^m û_i g_i(x̂) = 0.

Example 1.10 The Kuhn–Tucker conditions need not hold if the regularity condition cannot be asserted. Consider the following simple problem (x ∈ IR^1):

min {x | x² ≤ 0}.

Its unique solution is x̂ = 0. Obviously we have

∇f(x̂) = (1), ∇g(x̂) = (0),

and there is no way to represent ∇f(x̂) as a (positive) multiple of ∇g(x̂). (Needless to say, the regularity condition RC 0 is not satisfied in x̂.) □

We just mention that for the case of linear constraints the Kuhn–Tucker conditions are necessary for optimality, without the addition of any regularity condition. Instead of condition RC 0, there are various other regularity conditions popular in optimization theory, only two of which we shall mention here. The first is stated as

RC 1  ∀z ≠ 0 s.t. z^T ∇g_i(x̂) ≤ 0, i ∈ I(x̂), ∃{x^k | x^k ≠ x̂, k = 1, 2, ···} ⊂ B such that
lim_{k→∞} x^k = x̂,  lim_{k→∞} (x^k − x̂)/‖x^k − x̂‖ = z/‖z‖.

The second—used frequently for the convex case, i.e. if the functions g_i are convex—is the Slater condition

RC 2  ∃x̃ ∈ B such that g_i(x̃) < 0 ∀i.          (8.9)

Observe that there is an essential difference among these regularity conditions: to verify RC 0 or RC 1, we need to know the (locally) optimal point for which we want the Kuhn–Tucker conditions (8.7) to be necessary, whereas the Slater condition RC 2—for the convex case—requires the existence of an x̃ such that g_i(x̃) < 0 ∀i, but does not refer to any optimal solution. Without proof we might mention the following.

Proposition 1.23
(a) The regularity condition RC 1 (in any locally optimal solution) implies the regularity condition RC 0.
(b) For the convex case the Slater condition RC 2 implies the regularity condition RC 1 (for every feasible solution).

Figure 29 The Slater condition implies RC 1.

In Figure 29 we indicate how the proof of the implication RC 2 =⇒ RC 1 can be constructed. Based on these facts we immediately get the following.

Proposition 1.24
(a) If x̂ (locally) solves problem (8.1) and satisfies RC 0 then the Kuhn–Tucker conditions (8.7) necessarily hold in x̂.
(b) If the functions f, g_i, i = 1, ···, m, are convex and the Slater condition RC 2 holds, then x̂ ∈ B (globally) solves problem (8.1) if and only if the Kuhn–Tucker conditions (8.7) are satisfied for x̂.

Proof Referring to Proposition 1.23, the necessity of the Kuhn–Tucker conditions has already been demonstrated. Hence we need only show that in the convex case the Kuhn–Tucker conditions are also sufficient for optimality. Assume therefore that we have an x̂ ∈ B and a û ≥ 0 such that
∇f(x̂) + Σ_{i=1}^m û_i ∇g_i(x̂) = 0,  Σ_{i=1}^m û_i g_i(x̂) = 0.

Then, with I(x̂) = {i | g_i(x̂) = 0}, we have

∇f(x̂) = −Σ_{i∈I(x̂)} û_i ∇g_i(x̂),

and owing to û ≥ 0 and the convexity of f and g_i, it follows from Proposition 1.21 that for any arbitrary x ∈ B

f(x) − f(x̂) ≥ (x − x̂)^T ∇f(x̂)
            = −Σ_{i∈I(x̂)} û_i (x − x̂)^T ∇g_i(x̂)
            ≥ −Σ_{i∈I(x̂)} û_i [g_i(x) − g_i(x̂)]
            ≥ 0,

since û_i ≥ 0 and, for i ∈ I(x̂), g_i(x̂) = 0 while g_i(x) ≤ 0 ∀x ∈ B; such that f(x) ≥ f(x̂) ∀x ∈ B. □

Observe that

• to show the necessity of the Kuhn–Tucker conditions we had to use the regularity condition RC 0 (or one of the other two, being stronger), but we did not need any convexity assumption;
• to demonstrate that in the convex case the Kuhn–Tucker conditions are sufficient for optimality we have indeed used the assumed convexity, but we did not need any regularity condition at all.

Defining the Lagrange function for problem (8.1),
L(x, u) := f(x) + Σ_{i=1}^m u_i g_i(x),

we may restate our optimality conditions. With the notation

∇_x L(x, u) := (∂L(x,u)/∂x_1, ···, ∂L(x,u)/∂x_n)^T,
∇_u L(x, u) := (∂L(x,u)/∂u_1, ···, ∂L(x,u)/∂u_m)^T,

and observing that ∇_u L(x, u) ≤ 0 simply repeats the constraints g_i(x) ≤ 0 ∀i of our original program (8.1), the Kuhn–Tucker conditions now read as

∇_x L(x̂, û) = 0,
∇_u L(x̂, û) ≤ 0,
û^T ∇_u L(x̂, û) = 0,          (8.10)
û ≥ 0.

Assume now that the functions f, g_i, i = 1, ···, m, are convex. Then for any fixed u ≥ 0 the Lagrange function is obviously convex in x. For (x̂, û) satisfying the Kuhn–Tucker conditions, it follows by Proposition 1.21 that for any arbitrary x

L(x, û) − L(x̂, û) ≥ (x − x̂)^T ∇_x L(x̂, û) = 0,

and hence

L(x̂, û) ≤ L(x, û)  ∀x ∈ IR^n.

On the other hand, since ∇_u L(x̂, u) ≤ 0 is equivalent to g_i(x̂) ≤ 0 ∀i, and the Kuhn–Tucker conditions assert that û^T ∇_u L(x̂, û) = Σ_{i=1}^m û_i g_i(x̂) = 0, it follows that

L(x̂, u) ≤ L(x̂, û)  ∀u ≥ 0.

Hence we have the following.

Proposition 1.25 Given that the functions f, g_i, i = 1, ···, m, in problem (8.1) are convex, any Kuhn–Tucker point, i.e. any pair (x̂, û) satisfying the Kuhn–Tucker conditions, is a saddle point of the Lagrange function, i.e. it satisfies

L(x̂, u) ≤ L(x̂, û) ≤ L(x, û)  ∀u ≥ 0, ∀x ∈ IR^n.

Furthermore, it follows by the complementarity conditions that L(x̂, û) = f(x̂).

It is an easy exercise to show that for any saddle point (x̂, û), with û ≥ 0, of the Lagrange function, the Kuhn–Tucker conditions (8.10) are satisfied. Therefore, if we knew the right multiplier vector û in advance, the task to solve the constrained optimization problem (8.1) would be equivalent to that of solving the unconstrained optimization problem min_{x∈IR^n} L(x, û). This observation can be seen as the basic motivation for the development of a class of solution techniques known in the literature as Lagrangian methods.
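The saddle-point statement can be made concrete on the problem of Example 1.9 written in the form (8.1), i.e. f(x) = x², g(x) = 1 − x. There (x̂, û) = (1, 2) is a Kuhn–Tucker point, and a sketch verifying (8.10) and Proposition 1.25 on sampled points:

```python
# Kuhn-Tucker point and saddle point of L(x,u) = x^2 + u*(1 - x)
# for min x^2 s.t. 1 - x <= 0 (Example 1.9 in standard form (8.1)).

x_hat, u_hat = 1.0, 2.0

def L(x, u):
    return x ** 2 + u * (1.0 - x)

# (8.10): grad_x L = 2x - u = 0, g(x^) <= 0, complementarity u^*g(x^) = 0
assert 2 * x_hat - u_hat == 0
assert 1.0 - x_hat <= 0
assert u_hat * (1.0 - x_hat) == 0

# Proposition 1.25: L(x^, u) <= L(x^, u^) <= L(x, u^) on sampled u >= 0, x
for u in [0.0, 1.0, 2.0, 5.0]:
    assert L(x_hat, u) <= L(x_hat, u_hat) + 1e-12
for x in [-2.0, 0.0, 0.5, 1.0, 3.0]:
    assert L(x_hat, u_hat) <= L(x, u_hat) + 1e-12

# and L(x^, u^) = f(x^) by complementarity
assert L(x_hat, u_hat) == x_hat ** 2
```

Note that L(x, û) = x² + 2(1 − x) = (x − 1)² + 1, so the unconstrained minimization min_x L(x, û) indeed recovers x̂ = 1, as the closing observation above suggests.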
1.8.2 Solution Techniques

When solving stochastic programs, we need to use known procedures from both linear and nonlinear programming, or at least adopt their underlying ideas. Unlike linear programs, nonlinear programs generally cannot be solved in finitely many steps. Instead, we shall have to deal with iterative procedures that we might expect to converge—in some reasonable sense—to a solution of the nonlinear program under consideration. For better readability of the subsequent chapters of this book, we sketch the basic ideas of some types of methods; for detailed technical presentations and convergence proofs the reader is referred to the extensive specialized literature on nonlinear programming. We shall discuss

• cutting-plane methods;
• methods of descent;
• penalty methods;
• Lagrangian methods

by presenting one particular variant of each of these methods.

1.8.2.1 Cutting-plane methods

Assume that for problem (8.1) the functions f and g_i, i = 1, · · · , m, are convex and that the—convex—feasible set B = {x | g_i(x) ≤ 0, i = 1, · · · , m} is bounded. Furthermore, assume that ∃ŷ ∈ int B—which for instance would be true if the Slater condition (8.9) held. Then, instead of the original problem

    min_{x∈B} f(x),

we could consider the equivalent problem

    min θ
    s.t. g_i(x) ≤ 0, i = 1, · · · , m,
         f(x) − θ ≤ 0,

with the feasible set B̃ = {(x, θ) | f(x) − θ ≤ 0, g_i(x) ≤ 0, i = 1, · · · , m} ⊂ IR^{n+1} being obviously convex. With the assumption ŷ ∈ int B, we may further restrict the feasible solutions in B̃ to satisfy the inequality θ ≤ f(ŷ) without any effect on the solution set. The resulting problem can be interpreted as the minimization of the linear objective θ on the bounded convex set {(x, θ) ∈ B̃ | θ ≤ f(ŷ)}, which is easily seen to contain an interior point (ỹ, θ̃) as well.

Hence, instead of the nonlinear program (8.1), we may consider—without loss of generality if the original feasible set B was bounded—the minimization of a linear objective on a bounded convex set

    min{c^T x | x ∈ B},        (8.11)

where the bounded convex set B is assumed to contain an interior point ŷ. Under the assumptions mentioned, it is possible to include the feasible set B of problem (8.11) in a convex polyhedron P, which—after our discussions in Section 1.7—we may expect to be able to represent by linear constraints. Observe that the inclusion P ⊃ B implies the inequality

    min_{x∈P} c^T x ≤ min_{x∈B} c^T x.

The cutting-plane method for problem (8.11) proceeds as follows.

Step 1  Determine a ŷ ∈ int B and a convex polyhedron P_0 ⊃ B; let k := 0.
Step 2  Solve the linear program min{c^T x | x ∈ P_k}, yielding the solution x̂^k. If x̂^k ∈ B then stop (x̂^k solves problem (8.11)); otherwise, i.e. if x̂^k ∉ B, determine

    λ_k := min{λ | λŷ + (1 − λ)x̂^k ∈ B}

and let z^k := λ_k ŷ + (1 − λ_k)x̂^k. (Obviously we have z^k ∈ B, and moreover z^k is a boundary point of B on the line segment between the interior point ŷ of B and the point x̂^k, which is "external" to B.)
Step 3  Determine a "supporting hyperplane" of B in z^k (i.e. a hyperplane being tangent to B at the boundary point z^k). Let this hyperplane be given as H_k := {x | (a^k)^T x = α_k} such that the inequalities

    (a^k)^T x̂^k > α_k ≥ (a^k)^T x ∀x ∈ B

hold. Then define P_{k+1} := P_k ∩ {x | (a^k)^T x ≤ α_k}, let k := k + 1, and return to step 2.

In Figure 30 we illustrate one step of the cutting-plane method.

[Figure 30  Cutting-plane method: iteration k.]

Remark 1.11  By construction—see steps 1 and 3 of the above method—we have P_k ⊃ B, k = 0, 1, 2, · · ·, and hence

    c^T x̂^k ≤ min_{x∈B} c^T x, k = 0, 1, 2, · · ·,

such that as soon as x̂^k ∈ B for some k ≥ 0 we would have that x̂^k is an optimal solution of problem (8.11), as claimed in step 2. Furthermore, since z^k ∈ B ∀k, we have

    c^T z^k ≥ min_{x∈B} c^T x ∀k,

such that c^T z^k − c^T x̂^k could be taken after the kth iteration as an upper bound on the distance of either the feasible (but in general nonoptimal) objective value c^T z^k or the optimal (but in general nonfeasible) objective value c^T x̂^k to the feasible optimal value min_{x∈B} c^T x. Observe that in general the sequence {c^T z^k} need not be monotonically decreasing, whereas P_{k+1} ⊂ P_k ∀k ensures that the sequence {c^T x̂^k} is monotonically increasing. Thus we may enforce a monotonically decreasing error bound

    Δ_k := c^T z^{l_k} − c^T x̂^k, k = 0, 1, 2, · · ·,

by choosing z^{l_k} from the boundary points of B constructed in step 2 up to iteration k such that

    c^T z^{l_k} = min_{l∈{0,···,k}} c^T z^l.

Finally we describe briefly how the "supporting hyperplane" of B in z^k of step 3 can be determined. By our assumptions, ŷ ∈ int B and x̂^k ∉ B, we get in step 2 that 0 < λ_k < 1. Since λ_k > 0 is minimal under the condition λŷ + (1 − λ)x̂^k ∈ B, there is at least one constraint i_0 active in z^k, meaning that g_{i_0}(z^k) = 0. The convexity of g_{i_0} implies, owing to Proposition 1.21, that

    0 > g_{i_0}(ŷ) = g_{i_0}(ŷ) − g_{i_0}(z^k) ≥ (ŷ − z^k)^T ∇g_{i_0}(z^k),

and therefore that a^k := ∇g_{i_0}(z^k) ≠ 0. Observing that z^k = λ_k ŷ + (1 − λ_k)x̂^k with 0 < λ_k < 1 is equivalent to

    x̂^k − z^k = − (λ_k / (1 − λ_k)) (ŷ − z^k),

we conclude from the last inequality that (x̂^k − z^k)^T ∇g_{i_0}(z^k) > 0. On the other hand, for any x ∈ B, g_{i_0}(x) ≤ 0. Again by Proposition 1.21, it follows that

    (x − z^k)^T ∇g_{i_0}(z^k) ≤ g_{i_0}(x) − g_{i_0}(z^k) = g_{i_0}(x) ≤ 0.

Therefore, with a^k := ∇g_{i_0}(z^k) and α_k := (z^k)^T ∇g_{i_0}(z^k), we may define a supporting hyperplane as required in step 3; this hyperplane is then used in the definition of P_{k+1} to cut off the set {x | (a^k)^T x > α_k}—and hence in particular the infeasible solution x̂^k—from further consideration. □

1.8.2.2 Descent methods

For the sake of simplicity, we consider the special case of minimizing a convex function under linear constraints

    min f(x)
    s.t. Ax = b,        (8.12)
         x ≥ 0.

Assume that we have a feasible point z ∈ B = {x | Ax = b, x ≥ 0}. Then there are two possibilities.

(a) If z is optimal then the Kuhn–Tucker conditions have to hold. For (8.12) these are

    ∇f(z) + A^T u − w = 0, z^T w = 0, w ≥ 0,

or—with J(z) := {j | z_j > 0}—equivalently

    A^T u − w = −∇f(z), w_j = 0 for j ∈ J(z), w ≥ 0.
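These conditions can be illustrated on a toy instance of (8.12); the instance and all numbers below are our own, chosen so that the Kuhn–Tucker system can be verified by hand.

```python
import numpy as np

# Toy instance of (8.12): min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 = 1, x >= 0.
# The optimum is z = (0, 1); multipliers u = 2, w = (0, 0) satisfy
#   grad f(z) + A^T u - w = 0,   z^T w = 0,   w >= 0.
A = np.array([[1.0, 1.0]])
grad_f = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] - 2)])

z = np.array([0.0, 1.0])
u = np.array([2.0])
w = np.array([0.0, 0.0])

assert np.allclose(grad_f(z) + A.T @ u - w, 0)   # stationarity
assert z @ w == 0 and np.all(w >= 0)             # complementarity and sign

# Equivalently, there is no feasible descent direction: A d = 0 forces
# d = (t, -t), and d_1 >= 0 (since z_1 = 0) forces t >= 0, so
# grad f(z)^T d = -2t + 2t = 0 >= 0 for every such direction.
for t in np.linspace(0.0, 1.0, 11):
    d = np.array([t, -t])
    assert grad_f(z) @ d >= -1e-12
print("Kuhn-Tucker conditions hold at z =", z)
```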
Applying Farkas' Lemma 1.19 tells us that this system (and hence the above Kuhn–Tucker system) is feasible if and only if

    [∇f(z)]^T d ≥ 0 ∀d ∈ {d | Ad = 0, d_j ≥ 0 for j ∉ J(z)};

(b) If the feasible point z is not optimal then the Kuhn–Tucker conditions cannot hold, and, according to (a), there exists a direction d such that

    Ad = 0, d_j ≥ 0 ∀j : z_j = 0, and [∇f(z)]^T d < 0.

A direction like this is called a feasible descent direction at z, which has to satisfy the following two conditions:

    ∃λ_0 > 0 such that z + λd ∈ B ∀λ ∈ [0, λ_0], and [∇f(z)]^T d < 0.

Hence, having at a feasible point z a feasible descent direction d (for which, by its definition, d ≠ 0 is obvious), it is possible to move from z in direction d with some positive step length without leaving B, and at the same time at least locally to decrease the objective's value. From these brief considerations, we may state the following.

Conceptual method of descent directions

Step 1  Determine a feasible solution z^(0), let k := 0.
Step 2  If there is no feasible descent direction at z^(k) then stop (z^(k) is optimal). Otherwise, choose a feasible descent direction d^(k) at z^(k) and go to step 3.
Step 3  Solve the so-called line search problem

    min_λ {f(z^(k) + λd^(k)) | (z^(k) + λd^(k)) ∈ B},

and with its solution λ_k define z^(k+1) := z^(k) + λ_k d^(k). Let k := k + 1 and return to step 2.

Remark 1.12  It is worth mentioning that not every choice of feasible descent directions would lead to a well-behaved algorithm. By construction we should get—in any case—a sequence of feasible points {z^(k)} with a monotonically (strictly) decreasing sequence {f(z^(k))}, such that in the case that f is bounded below on B the sequence {f(z^(k))} has to converge to some value γ. However, there are examples in the literature showing that if we do not restrict the choice of the feasible descent directions in an appropriate way, it may happen that γ > inf_{x∈B} f(x), which is certainly not the kind of result we want to achieve.

Let us assume that B ≠ ∅ is bounded, implying that our problem (8.12) is solvable. Then there are various possibilities of determining the feasible descent direction, each of which defines its own algorithm for which a "reasonable" convergence behaviour can be asserted, in the sense that the sequence {f(z^(k))} converges to the true optimal value and any accumulation point of the sequence {z^(k)} is an optimal solution of our problem (8.12). Let us just mention two of those algorithms.

(a) The feasible direction method  For this algorithm we determine in step 2 the direction d^(k) as the solution of the following linear program:

    min [∇f(z^(k))]^T d
    s.t. Ad = 0,
         d_j ≥ 0 ∀j : z_j^(k) = 0,
         d ≤ e, d ≥ −e,

with e = (1, · · · , 1)^T. Then for [∇f(z^(k))]^T d^(k) < 0 we have a feasible descent direction, whereas for [∇f(z^(k))]^T d^(k) = 0 the point z^(k) is an optimal solution of (8.12).

(b) The reduced gradient method  Assume that B is bounded and every feasible basic solution of (8.12) is nondegenerate. Then for z^(k) we find a basis B in A such that the components of z^(k) belonging to B are strictly positive.
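Returning briefly to the feasible direction method in (a): for a tiny instance the direction-finding LP can be solved by inspection, because the null space of A is one-dimensional. The instance below is our own; both active-set handling and the line search are simplified accordingly.

```python
import numpy as np

# One iteration of the feasible direction method on the toy problem
# min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 = 1,  x >= 0  (invented example).
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2
grad_f = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] - 2)])

z = np.array([0.5, 0.5])           # feasible, but not optimal

# Direction-finding LP: min grad_f(z)^T d  s.t.  A d = 0, -e <= d <= e
# (both z_j > 0, so no sign restrictions apply here).  With A = (1, 1),
# every feasible d has the form (t, -t), |t| <= 1, so the LP reduces to
# minimizing a linear function of t over [-1, 1]: check the endpoints.
cands = [np.array([t, -t]) for t in (-1.0, 1.0)]
d = min(cands, key=lambda d: grad_f(z) @ d)
assert grad_f(z) @ d < 0           # a feasible descent direction exists

# Line search: z + lam*d stays feasible for lam in [0, 0.5]; minimize there
# on a grid (a closed-form line search would do equally well).
lams = np.linspace(0.0, 0.5, 501)
lam = lams[np.argmin([f(z + l * d) for l in lams])]
z_new = z + lam * d
assert f(z_new) < f(z)
print("step:", z, "->", z_new)
```

For this particular instance one step already lands on the optimum (0, 1), so a second direction-finding LP would return the optimal value 0 and stop.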
Rewriting A—after the necessary rearrangements of columns—as (B, N) and correspondingly presenting z^(k) as (x_B, x_{NB}), we have Bx_B + Nx_{NB} = b, or equivalently

    x_B = B^{−1}b − B^{−1}Nx_{NB}.

We also may rewrite the gradient ∇f(z^(k)) as (∇_B f(z^(k)), ∇_{NB} f(z^(k))). Then, rearranging d accordingly into (u, v), for a feasible direction we need to have Bu + Nv = 0, and hence u = −B^{−1}Nv. For the directional derivative [∇f(z^(k))]^T d it follows that

    [∇f(z^(k))]^T d = [∇_B f(z^(k))]^T u + [∇_{NB} f(z^(k))]^T v
                    = [∇_B f(z^(k))]^T (−B^{−1}Nv) + [∇_{NB} f(z^(k))]^T v
                    = ([∇_{NB} f(z^(k))]^T − [∇_B f(z^(k))]^T B^{−1}N) v.

Defining the reduced gradient r by

    r^T = (r_B^T, r_{NB}^T) := ([∇_B f(z^(k))]^T, [∇_{NB} f(z^(k))]^T) − [∇_B f(z^(k))]^T B^{−1}(B, N),

we have

    r_B = 0, r_{NB} = ([∇_{NB} f(z^(k))]^T − [∇_B f(z^(k))]^T B^{−1}N)^T,

and hence

    [∇f(z^(k))]^T d = (u^T, v^T)(r_B^T, r_{NB}^T)^T = ([∇_{NB} f(z^(k))]^T − [∇_B f(z^(k))]^T B^{−1}N) v.

Defining v as

    v_j := { −r_j^{NB}            if r_j^{NB} ≤ 0,
             −x_j^{NB} r_j^{NB}   if r_j^{NB} > 0, }

and, as above, u := −B^{−1}Nv, we have that (u^T, v^T)^T is a feasible direction (observe that v_j ≥ 0 if x_j^{NB} = 0, and x_B > 0 owing to our assumption), and it is a descent direction if v ≠ 0. Furthermore, z^(k) is a solution of problem (8.12) iff v = 0 (and hence u = 0), since then r ≥ 0, and, with w^T := [∇_B f(z^(k))]^T B^{−1}, we have

    r_B = ∇_B f(z^(k)) − B^T w = 0,
    r_{NB} = ∇_{NB} f(z^(k)) − N^T w ≥ 0,

and (r_B)^T x_B = 0, (r_{NB})^T x_{NB} = 0, i.e. v = 0 is equivalent to satisfying the Kuhn–Tucker conditions.

It is known that the reduced gradient method with the above definition of v may fail to converge to a solution (so-called "zigzagging"). However, we can perturb v as follows:

    v_j := { −r_j^{NB}            if r_j^{NB} ≤ 0,
             −x_j^{NB} r_j^{NB}   if r_j^{NB} > 0 and x_j^{NB} ≥ ε,
             0                    if r_j^{NB} > 0 and x_j^{NB} < ε. }

Then a proper control of the perturbation ε > 0 during the procedure can be shown to enforce convergence. □

The feasible direction and the reduced gradient methods have been extended to the case of nonlinear constraints. We omit the presentation of the general case here for the sake of better readability.

1.8.2.3 Penalty methods

The term "penalty" reflects the following attempt. Replace the original problem (8.1)

    min f(x) s.t. g_i(x) ≤ 0, i = 1, · · · , m,

by appropriate free (i.e. unconstrained) optimization problems

    min_{x∈IR^n} F_{rs}(x), the function F_{rs} being defined as
    F_{rs}(x) := f(x) + r ∑_{i∈I} ϕ(g_i(x)) + (1/s) ∑_{i∈J} ψ(g_i(x)),        (8.13)
where I, J ⊂ {1, · · · , m} such that I ∩ J = ∅, I ∪ J = {1, · · · , m}, and the parameters r, s > 0 are to be chosen or adapted in the course of the procedure. The role of the functions ϕ and ψ is to inhibit and to penalize, respectively, the violation of any one of the constraints. More precisely, for these functions we assume that

• ϕ, ψ are monotonically increasing and convex;
• the so-called barrier function ϕ satisfies

    ϕ(η) < +∞ ∀η < 0,  lim_{η↑0} ϕ(η) = +∞;

• for the so-called loss function ψ we have

    ψ(η) = 0 ∀η ≤ 0,  ψ(η) > 0 ∀η > 0.

Observe that the convexity of f, g_i, i = 1, · · · , m, and the convexity and monotonicity of ϕ, ψ imply the convexity of F_{rs} for any choice of the parameters r, s > 0. Solving the free optimization problem (8.13) with parameters r, s > 0 would inhibit the violation of the constraints i ∈ I, whereas the violation of any one of the constraints i ∈ J would be penalized with a positive additive term. Intuitively, we might expect that the solutions of (8.13) will satisfy the constraints i ∈ I, and that for s ↓ 0, or equivalently 1/s ↑ +∞, they will eventually satisfy the constraints i ∈ J. Therefore it seems plausible to control the parameter s in such a way that it tends to zero.

Now what about the parameter r of the barrier term in (8.13)? Imagine that for the (presumably unique) solution x̂ of problem (8.1) some constraint i_0 ∈ I is active, i.e. g_{i_0}(x̂) = 0. For any fixed r > 0, minimization of (8.13) will not allow us to approach the solution x̂, since obviously, by the definition of a barrier function, this would drive the new objective F_{rs} to +∞. Hence it seems reasonable to drive the parameter r downwards to zero as well. With

    B_1 := {x | g_i(x) ≤ 0, i ∈ I},  B_2 := {x | g_i(x) ≤ 0, i ∈ J}

we have B = B_1 ∩ B_2, and for r > 0 we may expect finite values of F_{rs} only for x ∈ B_1^0 := {x | g_i(x) < 0, i ∈ I}. We may close this short presentation of general penalty methods by a statement showing that, under mild assumptions, a method of this type may be controlled in such a way that it results in what we should like to experience.

Proposition 1.26  Let f, g_i, i = 1, · · · , m, be convex and assume that B_1^0 ∩ B_2 ≠ ∅ and that B = B_1 ∩ B_2 is bounded. Then for {r_k} and {s_k} strictly monotone sequences decreasing to zero there exists an index k_0 such that for all k ≥ k_0 the modified objective function F_{r_k s_k} attains its (free) minimum at some point x^(k) where x^(k) ∈ B_1^0. The sequence {x^(k) | k ≥ k_0} is bounded, and any of its accumulation points is a solution of the original problem (8.1). With γ the optimal value of (8.1), the following relations hold:

    lim_{k→∞} f(x^(k)) = γ,
    lim_{k→∞} r_k ∑_{i∈I} ϕ(g_i(x^(k))) = 0,
    lim_{k→∞} (1/s_k) ∑_{i∈J} ψ(g_i(x^(k))) = 0.

1.8.2.4 Lagrangian methods

As mentioned at the end of Section 1.8.1, knowledge of the proper multiplier vector û in the Lagrange function L(x, u) = f(x) + ∑_{i=1}^m u_i g_i(x) for problem (8.1) would allow us to solve the free optimization problem
    min_{x∈IR^n} L(x, û)

instead of the constrained problem

    min f(x) s.t. g_i(x) ≤ 0, i = 1, · · · , m.

To simplify the description, let us first consider the optimization problem with equality constraints

    min f(x) s.t. g_i(x) = 0, i = 1, · · · , m.        (8.14)

Knowing for this problem the proper multiplier vector û, or at least a good approximate u of it, we should find

    min_{x∈IR^n} [f(x) + u^T g(x)],        (8.15)

where u^T g(x) = ∑_{i=1}^m u_i g_i(x). However, at the beginning of any solution procedure we hardly have any knowledge about the numerical size of the multipliers in a Kuhn–Tucker point of problem (8.14), and using some guess for u might easily result in an unsolvable problem (inf_x L(x, u) = −∞). On the other hand, we have just introduced penalty methods. Using for problem (8.14) a quadratic loss function for violating the equality constraints seems to be reasonable. Hence we could think of a penalty method using as modified objective

    min_{x∈IR^n} [f(x) + ½λ‖g(x)‖²]        (8.16)

and driving the parameter λ towards +∞, with ‖g(x)‖ being the Euclidean norm of g(x) = (g_1(x), · · · , g_m(x))^T. One idea is to combine the two approaches (8.15) and (8.16) such that we are dealing with the so-called augmented Lagrangian as our modified objective:

    min_{x∈IR^n} [f(x) + u^T g(x) + ½λ‖g(x)‖²].

The now obvious intention is to control the parameters u and λ in such a way that λ → ∞—to eliminate infeasibilities—and that at the same time u → û, the proper Kuhn–Tucker multiplier vector. Although we are not yet in a position to appropriately adjust the parameters, we know at least the skeleton of the algorithm, which usually is referred to as the augmented Lagrange method. With the augmented Lagrangian

    L_λ(x, u) := f(x) + u^T g(x) + ½λ‖g(x)‖²,

it may be loosely stated as follows: for

    • {u^(k)} ⊂ IR^m bounded,
    • {λ_k} ⊂ IR such that 0 < λ_k < λ_{k+1} ∀k, λ_k → ∞,        (8.17)
    solve successively min_{x∈IR^n} L_{λ_k}(x, u^(k)).

Observe that for u^(k) = 0 ∀k we should get back the penalty method with a quadratic loss function, which, according to Proposition 1.26, is known to "converge" in the sense asserted there. For the method (8.17) in general the following two statements can be proved, showing (a) that we may expect a convergence behaviour as we know it already for penalty methods; and (b) how we should successively adjust the multiplier vector u^(k) to get the intended convergence to the proper Kuhn–Tucker multipliers.

Proposition 1.27  If f and g_i, i = 1, · · · , m, are continuous and x^(k), k = 1, 2, · · ·, are global solutions of

    min_x L_{λ_k}(x, u^(k)),
then any accumulation point x̄ of {x^(k)} is a global solution of problem (8.14).

The following statement also shows that it would be sufficient to solve the free optimization problems min_x L_{λ_k}(x, u^(k)) only approximately.

Proposition 1.28  Let f and g_i, i = 1, · · · , m, be continuously differentiable, and let the approximate solutions x^(k) to the free minimization problems in (8.17) satisfy

    ‖∇_x L_{λ_k}(x^(k), u^(k))‖ ≤ ε_k ∀k,

where ε_k ≥ 0 ∀k and ε_k → 0. For some K ⊂ IN let {x^(k), k ∈ K} converge to some x* (i.e. x* is an accumulation point of {x^(k), k ∈ IN}), and let {∇g_1(x*), · · · , ∇g_m(x*)} be linearly independent. Then ∃u* such that

    {u^(k) + λ_k g(x^(k)), k ∈ K} → u*,        (8.18)
    ∇f(x*) + ∑_{i=1}^m u_i* ∇g_i(x*) = 0,  g(x*) = 0.

Choosing the parameters λ_k according to (8.17), for instance as λ_1 := 1, λ_{k+1} := 1.1λ_k, k ≥ 1, the above statement suggests, by (8.18), that

    u^(k+1) := u^(k) + λ_k g(x^(k))        (8.19)

is an appropriate update formula for the multipliers in order to eventually get—together with x*—a Kuhn–Tucker point.

Now let us come back to our original nonlinear program (8.1) with inequality constraints and show how we can make use of the above results for the case of equality constraints. The key to this is the observation that our problem with inequality constraints

    min f(x) s.t. g_i(x) ≤ 0, i = 1, · · · , m,

is equivalent to the following one with equality constraints:

    min f(x) s.t. g_i(x) + z_i² = 0, i = 1, · · · , m.

Now applying the augmented Lagrangian method (8.17) to this equality-constrained problem requires that for
    L_λ(x, z, u) := f(x) + ∑_{i=1}^m { u_i [g_i(x) + z_i²] + ½λ [g_i(x) + z_i²]² }        (8.20)

we solve successively the problem

    min_{x,z} L_{λ_k}(x, z, u^(k)).

The minimization with respect to z included in this problem may be carried through explicitly, observing that

    min_{z∈IR^m} L_{λ_k}(x, z, u^(k))
      = f(x) + min_{z∈IR^m} ∑_{i=1}^m { u_i^(k) [g_i(x) + z_i²] + ½λ_k [g_i(x) + z_i²]² }
      = f(x) + ∑_{i=1}^m min_{z_i} { u_i^(k) [g_i(x) + z_i²] + ½λ_k [g_i(x) + z_i²]² }.

Therefore the minimization of L with respect to z requires—with y_i := z_i²—the solution of m problems of the form

    min_{y_i≥0} { u_i [g_i(x) + y_i] + ½λ [g_i(x) + y_i]² },        (8.21)

i.e. the minimization of strictly convex (λ > 0) quadratic functions in y_i on y_i ≥ 0. The free minima (i.e. y_i ∈ IR) of (8.21) have to satisfy u_i + λ[g_i(x) + y_i] = 0, yielding

    ỹ_i = − (u_i/λ + g_i(x)).

Hence we have for the solution of (8.21)

    ŷ_i = { ỹ_i  if ỹ_i ≥ 0,
            0    otherwise, }  = max{0, −(u_i/λ + g_i(x))},        (8.22)

implying

    g_i(x) + ŷ_i = max{g_i(x), −u_i/λ},        (8.23)

which, with ẑ_i² = ŷ_i, after an elementary algebraic manipulation reduces our extended Lagrangian (8.20) to

    L̃_λ(x, u) = L_λ(x, ẑ, u) = f(x) + (1/2λ) ∑_{i=1}^m { (max[0, u_i + λg_i(x)])² − u_i² }.

Minimization for some given u^(k) and λ_k of the Lagrangian (8.20) with respect to x and z will now be achieved by solving the problem

    min_x L̃_{λ_k}(x, u^(k)),

and, with a solution x^(k) of this problem, our update formula (8.19) for the multipliers—recalling that we now have the equality constraints g_i(x) + z_i² = 0 instead of g_i(x) = 0 as before—becomes by (8.23)

    u^(k+1) := u^(k) + λ_k "max"{g(x^(k)), −u^(k)/λ_k} = "max"{0, u^(k) + λ_k g(x^(k))},        (8.24)

where "max" is to be understood componentwise.

1.9 Bibliographical Notes

The observation that some data in real-life optimization problems could be random, i.e. the origin of stochastic programming, dates back to the 1950s. Without any attempt at completeness, we might mention from the early contributions to this field Avriel and Williams [3], Beale [5, 6], Bereanu [8], Dantzig [11], Dantzig and Madansky [13], Tintner [43] and Williams [49]. For more detailed discussions of the situation of the decision maker facing random parameters in an optimization problem we refer for instance to Dempster [14], Ermoliev and Wets [16], Frauendorfer [18], Kall [22], Kall and Prékopa [24], Kolbin [28], Sengupta [42] and Vajda [45].

Wait-and-see problems have led to investigations of the distribution of the optimal value (and the optimal solution); as examples of these efforts, we mention Bereanu [8] and King [26]. The linear programs resulting as deterministic equivalents in the recourse case may become (very) large in scale, but their particular block structure is amenable to specially designed algorithms, which are until now under investigation and for which further progress is to be expected in view of the possibilities given with parallel computers (see e.g. Zenios [51]). For those problems the particular decomposition method QDECOM—which will be described later—was proposed by Ruszczyński [41].
The idea of approximating stochastic programs with recourse (with a continuous-type distribution) by discretizing the distribution, as mentioned in Section 1.3, is related to special convergence requirements for the (discretized) expected recourse functions, as discussed for example by Attouch and Wets [2] and Kall [23]. More on probabilistically constrained models and corresponding applications may be found for example in Dupačová et al. [15], Ermoliev and Wets [16] and Prékopa et al. [36]. The convexity statement of Proposition 1.5 can be found in Wets [48]. The probabilistically constrained program at the end of Section 1.3 (page 20) was solved by PROCON. This solution method for problems with a joint chance constraint (with normally distributed right-hand side) was described first by Mayer [30], and has its theoretical base in Prékopa [35].

Statements on the induced feasible set K and induced constraints are found in Rockafellar and Wets [39], Walkup and Wets [46] and Kall [22]. The requirement that the decision on x does not depend on the outcome of ξ̃ is denoted as nonanticipativity, and was discussed rigorously in Rockafellar and Wets [40]. The conditions for complete recourse matrices were proved in Kall [21], and may be found in [22]. Necessary and sufficient conditions for log-concave distributions were derived first in Prékopa [35]; later, corresponding conditions for quasi-concave measures were derived in Borell [10] and Rinott [37]. More details on stochastic linear programs may be found in Kall [22]; multistage stochastic programs are still under investigation, and were discussed early by Olsen [31, 32, 33]; useful results on the deterministic equivalent of recourse problems and for the expectation functionals arising in Section 1.4 are due to Wets [47, 48].

There is a wide literature on linear programming, which cannot be listed here to any reasonable degree of completeness. Hence we restrict ourselves to mentioning the book of Dantzig [12] as a classic reference. For a rigorous development of measure theory and the foundations of probability theory we mention the standard reference Halmos [19]. The idea of feasibility and optimality cuts in the dual decomposition method may be traced back to Benders [7].

There is a great variety of good textbooks on nonlinear programming (theory and methods) as well. Again we have to restrict ourselves, and just mention Bazaraa and Shetty [4] and Luenberger [29] as general texts. Cutting-plane methods have been proposed in various publications, differing in the way the cuts (separating hyperplanes) are defined. An early version was published by Kelley [25]; the method we have presented is due to Kleibohm [27]. The method of feasible directions is due to Zoutendijk [52, 53]; an extension to nonlinear constraints was proposed by Topkis and Veinott [44]. The reduced gradient method can be found in Wolfe [50], and its extension to nonlinear constraints was developed by Abadie and Carpentier [1]. A standard reference for penalty methods is the monograph of Fiacco and McCormick [17]. The update formula (8.19) for the multipliers in the augmented Lagrangian method for equality constraints motivated by Proposition 1.28 goes back to Hestenes [20] and Powell [34], whereas the update (8.24) for inequality-constrained problems is due to Rockafellar [38]. For more about Lagrangian methods we refer the reader to the book of Bertsekas [9].

Exercises
1. Show that from (4.3) on page 24 it follows that, with A_i ∈ A, i = 1, 2, · · ·,

    ⋃_{i=1}^∞ A_i ∈ A and A_i − A_j ∈ A ∀i, j.

2. Find an example of a two-dimensional discrete probability distribution that is not quasi-concave.

3. Show that A = {(x, y) ∈ IR² | x ≥ 1, 0 ≤ y ≤ 1/x} is measurable with respect to the natural measure µ in IR² and that µ(A) = ∞ (see Section 1.4.1, page 21). [Hint: Show first that for I_n := {(x, y) | n ≤ x ≤ n + 1, 0 ≤ y < 2}, n ∈ IN, the sets A ∩ I_n are measurable. For n ∈ IN the interval C_n := {(x, y) | n ≤ x < n + 1, 0 ≤ y < 1/(n + 1)} is a packing of A ∩ I_n with µ(C_n) = 1/(n + 1). Hence µ(A) = ∑_{n=1}^∞ µ(A ∩ I_n) ≥ ∑_{n=1}^∞ µ(C_n) implies µ(A) = ∞.]

4. Show that A := {(x, y) ∈ IR² | x ≥ 0, 0 ≤ y ≤ e^{−x}} is measurable and that µ(A) = 1. [Hint: Consider A_α := {(x, y) | 0 ≤ x ≤ α, 0 ≤ y ≤ e^{−x}} for arbitrary α > 0. Observe that µ(A_α), according to its definition in Section 1.4.1, page 21, coincides with the Riemann integral J(α) = ∫_0^α e^{−x} dx. Hence µ(A) = lim_{α→∞} µ(A_α) = lim_{α→∞} J(α).]

5. Consider the line segment B := {(x, y) ∈ IR² | 3 ≤ x ≤ 7, y = 5}. Show that for the natural measure µ in IR², µ(B) = 0 (see Section 1.4.1, page 21).

6. Assume that the linear program γ(b) := min{c^T x | Ax = b, x ≥ 0} is solvable for all b ∈ IR^m. Show that the optimal value function γ(·) is piecewise linear and convex in b.

7. In Section 1.8 we discussed various regularity conditions for nonlinear programs. Let x̂ be a local solution of problem (8.1), page 80. Show that if RC 1 is satisfied in x̂ then RC 0 also holds true in x̂. (See (8.8) on page 85.)

8. Assume that (x̂, û) is a saddle point of

    L(x, u) := f(x) + ∑_{i=1}^m u_i g_i(x).

Show that x̂ is a global solution of

    min f(x) s.t. g_i(x) ≤ 0, i = 1, · · · , m.

(See Proposition 1.25 for the definition of a saddle point.)

References
[1] Abadie J. and Carpentier J. (1969) Generalization of the Wolfe reduced gradient method to the case of nonlinear constraints. In Fletcher R. (ed) Optimization, pages 37–47. Academic Press, London.
[2] Attouch H. and Wets R. J.-B. (1981) Approximation and convergence in nonlinear optimization. In Mangasarian O. L., Meyer R. M., and Robinson S. M. (eds) NLP 4, pages 367–394. Academic Press, New York.
[3] Avriel M. and Williams A. (1970) The value of information and stochastic programming. Oper. Res. 18: 947–954.
[4] Bazaraa M. S. and Shetty C. M. (1979) Nonlinear Programming—Theory and Algorithms. John Wiley & Sons, New York.
[5] Beale E. M. L. (1955) On minimizing a convex function subject to linear inequalities. J. R. Stat. Soc. B17: 173–184.
[6] Beale E. M. L. (1961) The use of quadratic programming in stochastic linear programming. Rand Report P2404, The RAND Corporation.
[7] Benders J. F. (1962) Partitioning procedures for solving mixed-variables programming problems. Numer. Math. 4: 238–252.
[8] Bereanu B. (1967) On stochastic linear programming distribution problems, stochastic technology matrix. Z. Wahrsch. theorie u. verw. Geb. 8: 148–152.
[9] Bertsekas D. P. (1982) Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York.
[10] Borell C. (1975) Convex set functions in d-space. Period. Math. Hungar. 6: 111–136.
[11] Dantzig G. B. (1955) Linear programming under uncertainty. Management Sci. 1: 197–206.
[12] Dantzig G. B. (1963) Linear Programming and Extensions. Princeton University Press, Princeton, New Jersey.
[13] Dantzig G. B. and Madansky A. (1961) On the solution of two-stage linear programs under uncertainty. In Neyman I. J. (ed) Proc. 4th Berkeley Symp. Math. Stat. Prob., pages 165–176. Berkeley.
[14] Dempster M. A. H. (ed) (1980) Stochastic Programming. Academic Press, London.
[15] Dupačová J., Gaivoronski A., Kos Z., and Szántai T.
(1991) Stochastic programming in water resources system planning: A case study and a comparison of solution techniques. Eur. J. Oper. Res. 52: 28–44.
[16] Ermoliev Y. and Wets R. J.-B. (eds) (1988) Numerical Techniques for Stochastic Optimization. Springer-Verlag, Berlin.
[17] Fiacco A. V. and McCormick G. P. (1968) Nonlinear Programming: Sequential Unconstrained Minimization Techniques. John Wiley & Sons, New York.
[18] Frauendorfer K. (1992) Stochastic Two-Stage Programming, volume 392 of Lecture Notes in Econ. Math. Syst. Springer-Verlag, Berlin.
[19] Halmos P. R. (1950) Measure Theory. D. van Nostrand, Princeton, New Jersey.
[20] Hestenes M. R. (1969) Multiplier and gradient methods. In Zadeh L. A., Neustadt L. W., and Balakrishnan A. V. (eds) Computing Methods in Optimization Problems—2, pages 143–163. Academic Press, New York.
[21] Kall P. (1966) Qualitative Aussagen zu einigen Problemen der stochastischen Programmierung. Z. Wahrsch. theorie u. verw. Geb. 6: 246–272.
[22] Kall P. (1976) Stochastic Linear Programming. Springer-Verlag, Berlin.
[23] Kall P. (1986) Approximation to optimization problems: An elementary review. Math. Oper. Res. 11: 9–18.
[24] Kall P. and Prékopa A. (eds) (1980) Recent Results in Stochastic Programming, volume 179 of Lecture Notes in Econ. Math. Syst. Springer-Verlag, Berlin.
[25] Kelley J. E. (1960) The cutting plane method for solving convex programs. SIAM J. Appl. Math. 11: 703–712.
[26] King A. J. (1986) Asymptotic Behaviour of Solutions in Stochastic Optimization: Nonsmooth Analysis and the Derivation of Non-Normal Limit Distributions. PhD thesis, University of Washington, Seattle.
[27] Kleibohm K. (1966) Ein Verfahren zur approximativen Lösung von konvexen Programmen. PhD thesis, Universität Zürich. Mentioned in C.R. Acad. Sci. Paris 261:306–307 (1965).
[28] Kolbin V. V. (1977) Stochastic Programming. D. Reidel, Dordrecht.
[29] Luenberger D. G. (1973) Introduction to Linear and Nonlinear Programming. Addison-Wesley, Reading, Massachusetts.
[30] Mayer J. (1988) Probabilistic constrained programming: A reduced gradient algorithm implemented on PC. Working Paper WP8839, IIASA, Laxenburg.
[31] Olsen P. (1976) Multistage stochastic programming with recourse: The equivalent deterministic problem. SIAM J. Contr. Opt. 14: 495–517.
[32] Olsen P. (1976) When is a multistage stochastic programming problem well defined? SIAM J. Contr. Opt. 14: 518–527.
[33] Olsen P. (1976) Discretizations of multistage stochastic programming problems. Math. Prog. Study 6: 111–124.
[34] Powell M. J. D. (1969) A method for nonlinear constraints in minimization problems. In Fletcher R. (ed) Optimization, pages 283–298. Academic Press, London.
[35] Prékopa A. (1971) Logarithmic concave measures with applications to stochastic programming. Acta Sci. Math. (Szeged) 32: 301–316.
[36] Prékopa A., Ganczer S., Deák I., and Patyi K. (1980) The STABIL stochastic programming model and its experimental application to the electricity production in Hungary. In Dempster M. A. H. (ed) Stochastic Programming, pages 369–385. Academic Press, London.
[37] Rinott Y. (1976) On convexity of measures. Ann. Prob. 4: 1020–1026.
[38] Rockafellar R. T. (1973) The multiplier method of Hestenes and Powell applied to convex programming. J. Opt. Theory Appl. 12: 555–562.
[39] Rockafellar R. T. and Wets R. J.-B. (1976) Stochastic convex programming: Relatively complete recourse and induced feasibility. SIAM J. Contr. Opt. 14: 574–589.
[40] Rockafellar R. T. and Wets R. J.-B. (1976) Nonanticipativity and L1-martingales in stochastic optimization problems. Math. Prog. Study 6: 170–187.
[41] Ruszczyński A. (1986) A regularized decomposition method for minimizing a sum of polyhedral functions. Math. Prog. 35: 309–333.
[42] Sengupta J. K. (1972) Stochastic Programming. Methods and Applications. North-Holland, Amsterdam.
[43] Tintner G. (1955) Stochastic linear programming with applications to agricultural economics. In Antosiewicz H. (ed) Proc. 2nd Symp. Linear Programming, volume 2, pages 197–228. National Bureau of Standards, Washington D.C.
[44] Topkis D. M. and Veinott A. F. (1967) On the convergence of some feasible direction algorithms for nonlinear programming. SIAM J. Contr. Opt. 5: 268–279.
[45] Vajda S. (1972) Probabilistic Programming. Academic Press, New York.
[46] Walkup D. W. and Wets R. J.-B. (1967) Stochastic programs with recourse. SIAM J. Appl. Math. 15: 1299–1314.
[47] Wets R. (1974) Stochastic programs with fixed recourse: The equivalent deterministic program. SIAM Rev. 16: 309–339.
[48] Wets R. J.-B. (1989) Stochastic programming. In Nemhauser G. L. et al. (eds) Handbooks in OR & MS, volume 1, pages 573–629. Elsevier, Amsterdam.
[49] Williams A. (1966) Approximation formulas for stochastic linear programming. SIAM J. Appl. Math. 14: 668–877.
[50] Wolfe P. (1963) Methods of nonlinear programming. In Graves R. L. and Wolfe P. (eds) Recent Advances in Mathematical Programming, pages 67–86. McGraw-Hill, New York.
[51] Zenios S. A. (1992) Progress on the massively parallel solution of network "mega" problems. COAL 20: 13–19.
[52] Zoutendijk G. (1960) Methods of Feasible Directions. Elsevier, Amsterdam / D. Van Nostrand, Princeton, New Jersey.
[53] Zoutendijk G. (1966) Nonlinear programming: A numerical survey. SIAM J. Contr. Opt. 4: 194–210.

2 Dynamic Systems
2.1 The Bellman Principle

As discussed in Chapter 1, optimization problems can be of various types. The differences may be found in the goal, i.e. minimization or maximization; in the constraints, i.e. inequalities or equalities and free or nonnegative variables; and in the mathematical properties of the functions involved in the objective or the constraints. We have met linear functions in Section 1.7, nonlinear functions in Section 1.8 and even integral functions in Section 1.4. Despite their differences, all these problems may be presented in the unified form

max{F(x1, · · ·, xn) | x ∈ X}.

Here X is the prescribed feasible set of decisions over which we try to maximize, or sometimes minimize, the given objective function F.

This general setting also covers a class of somewhat special decision problems, illustrated in Figure 1. Consider a system that is inspected at finitely many stages. Often stages are just points in time, which is the reason for using the term “dynamic”. The example in Figure 1 has four stages, seen from the fact that there are four columns. Assume that at any stage the system can be in one out of finitely many states. In Figure 1 there are four possible states in each stage, represented by the four dots in each column. Also, at any stage (except maybe the last one) a decision has to be made, which possibly will have an influence on the system’s state at the subsequent stage. Attached to the decision is an immediate return (or else an immediate cost). In Figure 1 the three arrows in the right part of the figure indicate that in this example there are three possible decisions: one bringing us to a lower state in the next stage, one keeping us in the same state, and one bringing us to a higher state. (We must assume that if we are at the highest or lowest possible state then only two decisions are possible.)
Given the initial state of the system, the overall objective is to maximize (or minimize) some given function of the immediate returns for all stages and states the system goes through as a result of our decisions.

Figure 1 Basic setup for a dynamic program with four states, four stages and three possible decisions.

Formally the problem is described as follows. With

t: the stages, t = 1, · · ·, T;
zt: the state at stage t;
xt: the decision taken at stage t (in general depending on the state zt);
Gt(zt, xt): the transformation (or transition) of the system from the state zt and the decision taken at stage t into the state zt+1 at the next stage, i.e. zt+1 = Gt(zt, xt);
rt(zt, xt): the immediate return if at stage t the system is in state zt and the decision xt is taken;
F: the overall objective, given by F(r1(z1, x1), · · ·, rT(zT, xT));
Xt(zt): the set of feasible decisions at stage t (which may depend on the state zt);

our problem can be stated as

max{F(r1(z1, x1), · · ·, rT(zT, xT)) | xt ∈ Xt, t = 1, · · ·, T}.

Observe that owing to the relation zt+1 = Gt(zt, xt), the objective function can be rewritten in the form Φ(z1, x1, x2, · · ·, xT). To get an idea of the possible structures we can face, let us revisit the example in Figure 1. The purpose of the example is not to be realistic, but to illustrate a few points. A more realistic problem will be discussed in the next section.

Example 2.1 Assume that stages are years, and that the system is inspected annually, so that the four stages correspond to 1 January of the first, second and third years, and 31 December of the third year. Assume further that four different levels are distinguished as states for the system, i.e. at any stage one may observe the state zt = 1, 2, 3 or 4.
Finally, depending on the state of the system in stages 1, 2 and 3, one of the following decisions may be made:

xt = 1, leading to the immediate return rt = 2,
xt = 0, leading to the immediate return rt = 1,
xt = −1, leading to the immediate return rt = −1.

The transition from one stage to the next is given by zt+1 = zt + xt. Note that, since zt ∈ {1, 2, 3, 4} for all t, the decisions xt = 1 in state zt = 4 and xt = −1 in state zt = 1 are not feasible, and are therefore excluded. Finally, assume that there are no decisions in the final stage T = 4. There are immediate returns, however, given as

rT = −2 if zT = 4,
rT = −1 if zT = 3,
rT = 1 if zT = 2,
rT = 2 if zT = 1.

To solve max F(r1, · · ·, r4), we have to fix the overall objective F as a function of the immediate returns r1, r2, r3, r4. To demonstrate possible effects of properties of F on the solution procedure, we choose two variants.

(a) Let F(r1, · · ·, r4) := r1 + r2 + r3 + r4 and assume that the initial state is z1 = 4. This is illustrated in Figure 2, which has the same structure as Figure 1. Using the figure, we can check that an optimal policy (i.e. sequence of decisions) is x1 = x2 = x3 = 0, keeping us in zt = 4 for all t, with the optimal value F(r1, · · ·, r4) = 1 + 1 + 1 − 2 = 1. We may determine this optimal policy iteratively as follows. First, we determine the decision for each of the states in stage 3 by determining
f3*(z3) := max_{x3} [r3(z3, x3) + r4(z4)]

for z3 = 1, · · ·, 4, and z4 = G3(z3, x3) = z3 + x3. For example, if we are in state 2, i.e. z3 = 2, we have three options, namely −1, 0 and 1.

Figure 2 Dynamic program: additive composition. The solid lines show the result of the backward recursion.

If x3 = 1, we receive an immediate income of 2, and a final value of −1, since this decision will result in z4 = 2 + 1 = 3. The second option is to let x3 = 0, yielding an immediate income of 1 and a final value of 1. The third possibility is to let x3 = −1, yielding incomes of −1 and 1. The total incomes are therefore 1, 2 and 0 respectively, so the best option is to let x3 = 0. This is illustrated in the figure by putting an arrow from state 2 in stage 3 to state 2 in stage 4. Letting “(z3 = i) → (x3, f3*)” indicate that in state z3 = i the optimal decision is x3 and the sum of the immediate and final income is f3*, we can repeat the above procedure for each state in stage 3 to obtain

(z3 = 1) → (0, 3), (z3 = 2) → (0, 2), (z3 = 3) → (0, 0), (z3 = 4) → (0, −1).

This is all illustrated in Figure 2, by adding the f3* values above the state nodes in stage 3. Once f3*(z3) is known for all values of z3, we can turn to stage 2 and similarly determine
f2*(z2) := max_{x2} [r2(z2, x2) + f3*(z3)],

where z3 = G2(z2, x2) = z2 + x2. This yields f2*(1) = 4, f2*(2) = 3, f2*(3) = 1 and f2*(4) = 0. This is again illustrated in Figure 2, together with the corresponding optimal decisions. Finally, given that z1 = 4, the problem can be rephrased as

f1*(z1) := max_{x1} [r1(z1, x1) + f2*(z2)],

where z2 = G1(z1, x1) = z1 + x1. This immediately yields f1*(z1) = 1 for x1 = 0.
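The backward recursion of case (a) is easy to mechanize. The following sketch is our own illustration (not code from the book): it tabulates the optimal accumulated returns ft* for the four-state example, using the returns and feasibility rule of Example 2.1.

```python
# Backward recursion for Example 2.1, case (a): additive returns.
STATES = (1, 2, 3, 4)
R_STEP = {1: 2, 0: 1, -1: -1}          # immediate return r_t(z_t, x_t)
R_FINAL = {1: 2, 2: 1, 3: -1, 4: -2}   # terminal return r_4(z_4)

def backward_recursion():
    """Return {t: {z: f_t*(z)}} for t = 1, 2, 3, 4."""
    f = {4: dict(R_FINAL)}
    for t in (3, 2, 1):                 # last stage first
        # x is feasible only if the next state z + x stays within 1..4
        f[t] = {z: max(R_STEP[x] + f[t + 1][z + x]
                       for x in (-1, 0, 1) if z + x in STATES)
                for z in STATES}
    return f

f = backward_recursion()
print(f[3])        # → {1: 3, 2: 2, 3: 0, 4: -1}, as in the text
print(f[1][4])     # → 1, the optimal value from z1 = 4
```

The stage-3 and stage-2 tables reproduce the values (0, 3), (0, 2), (0, 0), (0, −1) and f2*(1), · · ·, f2*(4) computed above.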
In this simple example it is easy to see that f1*(z1) coincides with the optimal value of F(r1, · · ·, r4), given the initial state z1, so that the problem can be solved with the above backward recursion. (The recursion is called backward because we start in the last period and move backwards in time, ending up in period 1.) Note that an alternative way to solve this problem would be to enumerate all possible sequences of decisions. For this small problem, that would have been a rather simple task. But for larger problems, in terms of both states and stages (especially when both are multidimensional), it is easy to see that this becomes an impossible task. This reduction, from a full enumeration of all possible sequences of decisions to finding the optimal decision in all states for all stages, is the major reason for being interested in the backward recursion, and more generally, for being interested in dynamic programming.

(b) As an alternative, let us use multiplication to obtain F(r1, · · ·, r4) := r1 r2 r3 r4 and perform the backward recursion as above, yielding Figure 3. With

ft*(zt) := max_{xt} [rt(zt, xt) f*_{t+1}(zt+1)]

for t = 3, 2, 1, where zt+1 = Gt(zt, xt) = zt + xt and f4*(z4) = r4(z4), we should get f1*(z1 = 4) = 1 with an “optimal” policy (0, 0, −1). However, the policy (−1, 1, 0) yields F(r1, · · ·, r4) = 4. Hence the backward recursion does not yield the optimal solution when the returns are calculated in a multiplicative fashion. ✷

Figure 3 Dynamic program: multiplicative composition. Solid lines show the result of the backward recursion (with z1 = 4), whereas the dotted line shows the optimal sequence of decisions.

In this example we had

F(r1(z1, x1), · · ·, rT(zT, xT)) = r1(z1, x1) ⊕ r2(z2, x2) ⊕ · · · ⊕ rT(zT, xT),

where the composition operation “⊕” was chosen as addition in case (a) and multiplication in case (b). For the backward recursion we have made use of the so-called separability of F. That is, there exist two functions ϕ1, ψ2 such that

F(r1(z1, x1), · · ·, rT(zT, xT)) = ϕ1(r1(z1, x1), ψ2(r2(z2, x2), · · ·, rT(zT, xT))).   (1.1)

Furthermore, we proceeded “as if” the following relation held:

max{F(r1(z1, x1), · · ·, rT(zT, xT)) | xt ∈ Xt, t = 1, · · ·, T}
= max_{x1∈X1} [ϕ1(r1(z1, x1), max_{x2∈X2,···,xT∈XT} ψ2(r2(z2, x2), · · ·, rT(zT, xT)))].   (1.2)

This relation is the formal equivalent of the well-known optimality principle, which was expressed by Bellman as follows (quote).

Proposition 2.1 “An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.”

As we have seen in Example 2.1, this principle, applied repeatedly in the backward recursion, gave the optimal solution for case (a) but not for case (b). The reason for this is that, although the composition operation “⊕” is separable in the sense of (1.1), this is not enough to guarantee that the repeated application of the optimality principle (i.e. through backward recursion) will yield an optimal policy. A sufficient condition under which the optimality principle holds involves a certain monotonicity of our composition operation “⊕”. More precisely, we have the following.

Proposition 2.2 If F satisfies the separability condition (1.1) and if ϕ1 is monotonically nondecreasing in ψ2 for every r1, then the optimality principle (1.2) holds.

Proof The very meaning of “max” implies that we have
max_{xt∈Xt, t≥1} ϕ1(r1(z1, x1), ψ2(r2(z2, x2), · · ·, rT(zT, xT)))
≥ ϕ1(r1(z1, x1), max_{xt∈Xt, t≥2} [ψ2(r2(z2, x2), · · ·, rT(zT, xT))]) for all x1.

Therefore this also holds when the right-hand side of this inequality is maximized with respect to x1. On the other hand, it is also obvious that

max_{xt∈Xt, t≥2} ψ2(r2(z2, x2), · · ·, rT(zT, xT)) ≥ ψ2(r2(z2, x2), · · ·, rT(zT, xT)) ∀xt ∈ Xt, t ≥ 2.

Hence, by the assumed monotonicity of ϕ1 with respect to ψ2, we have that

ϕ1(r1(z1, x1), max_{xt∈Xt, t≥2} ψ2(r2(z2, x2), · · ·, rT(zT, xT)))
≥ ϕ1(r1(z1, x1), ψ2(r2(z2, x2), · · ·, rT(zT, xT))) ∀xt ∈ Xt, t ≥ 1.

Taking the maximum with respect to xt, t ≥ 2, on the right-hand side of this inequality, and afterwards maximizing both sides with respect to x1 ∈ X1, shows that the optimality principle (1.2) holds. ✷

Needless to say, all problems considered by Bellman in his first book on dynamic programming satisfied this proposition. In case (b) of our example, however, the monotonicity does not hold. The reason is that when “⊕” involves multiplication of possibly negative factors (i.e. negative immediate returns), the required monotonicity is lost. On the other hand, when “⊕” is summation, the required monotonicity is always satisfied.

Let us add that the optimality principle applies to a much wider class of problems than might seem to be the case from this brief sketch. For instance, if for finitely many states we denote by ρt the vector having as ith component the immediate return rt(zt = i), and if we define the composition operation “⊕” such that, with a nonnegative matrix S (i.e. all elements of S nonnegative),

ρt ⊕ ρt+1 = ρt + Sρt+1, t = 1, · · ·, T − 1,

then the monotonicity assumed for Proposition 2.2 follows immediately. This case is quite common in applications. Then S is the so-called transition matrix, in which an element sij represents the probability of entering state j at stage t + 1, given that the system is in state i at stage t. Iterating the above composition for t = T − 1, T − 2, · · ·, 1, we get that F(ρ1, · · ·, ρT) is the vector of the expected total returns. The ith component gives the expected overall return if the system starts from state i at stage 1.
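The failure in case (b) can be verified directly. The sketch below (our own illustration, not code from the book) runs the naive multiplicative backward recursion against a brute-force enumeration of all policies from z1 = 4: the recursion returns 1, while enumeration finds the value 4 achieved by the policy (−1, 1, 0).

```python
# Example 2.1, case (b): multiplicative returns. The naive backward
# recursion f_t(z) = max_x r_t(x) * f_{t+1}(z + x) is compared with a
# brute-force enumeration of every feasible policy from z1 = 4.
from itertools import product

STATES = (1, 2, 3, 4)
R_STEP = {1: 2, 0: 1, -1: -1}          # r_t for decisions +1, 0, -1
R_FINAL = {1: 2, 2: 1, 3: -1, 4: -2}   # terminal returns r_4(z_4)

def backward_value(z1):
    """Naive multiplicative backward recursion (fails: returns can be < 0)."""
    f = dict(R_FINAL)
    for _t in (3, 2, 1):
        f = {z: max(R_STEP[x] * f[z + x]
                    for x in (-1, 0, 1) if z + x in STATES)
             for z in STATES}
    return f[z1]

def brute_force_value(z1):
    """Enumerate all decision sequences (x1, x2, x3)."""
    best = None
    for xs in product((-1, 0, 1), repeat=3):
        z, value = z1, 1
        for x in xs:
            if z + x not in STATES:
                break                   # infeasible path
            value *= R_STEP[x]
            z += x
        else:                           # only feasible paths reach here
            total = value * R_FINAL[z]
            best = total if best is None else max(best, total)
    return best

print(backward_value(4), brute_force_value(4))  # → 1 4
```

Multiplying a negative immediate return by the *maximal* continuation value is not optimal; for a negative factor one would want the minimal one, which is exactly the monotonicity failure discussed above.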
Putting it this way, we see that multistage stochastic programs with recourse (formula (4.13) in Chapter 1) belong to this class.

2.2 Dynamic Programming

The purpose of this section is to look at certain aspects of the field of dynamic programming. The example we looked at in the previous section is an example of a dynamic programming problem. This section will not represent a fair description of the field as a whole; rather, we shall concentrate on aspects that are useful in our context. This section will not consider randomness; that will be discussed later. We shall be interested in dynamic programming as a means of solving problems that evolve over time. Typical examples are production planning under varying demand, capacity expansion to meet an increasing demand, and investment planning in forestry. Dynamic programming can also be used to solve problems that are not sequential in nature. Such problems will not be treated in this text.

Important concepts in dynamic programming are the time horizon, state variables, decision variables, return functions, accumulated return functions, optimal accumulated returns and transition functions. The time horizon refers to the number of stages (time periods) in the problem. State variables describe the state of the system, for example the present production capacity, the present age and species distribution in a forest, or the amount of money one has in different accounts in a bank. Decision variables are the variables under one’s control. They can represent decisions to build new plants, to cut a certain amount of timber, or to move money from one bank account to another. The transition function shows how the state variables change as a function of decisions. That is, the transition function dictates the state that will result from the combination of the present state and the present decisions.
For example, the transition function may show how the forest changes over the next period as a result of its present state and of cutting decisions, how the amount of money in the bank increases, or how the production capacity will change as a result of its present size and investments (and deterioration). A return function shows the immediate returns (costs or profits) resulting from making a specific decision in a specific state. Accumulated return functions show the accumulated effect, from now until the end of the time horizon, associated with a specific decision in a specific state. Finally, optimal accumulated returns show the value of making the optimal decision based on an accumulated return function, or in other words, the best return that can be achieved from the present state until the end of the time horizon.

Example 2.2 Consider the following simple investment problem, where it is clear that the Bellman principle holds. We have some money S0 in a bank account, called account B. We shall need the money two years from now, and today is the first of January. If we leave the money in the account, we will face an interest rate of 7% in the first year and 5% in the second. We also have
the option of moving the money to account A. There we will face an interest rate of 10% the first year and 7% the second year. However, there is a fixed charge of 20 per year and a charge of 10 each time we withdraw money from account A. The fixed charge is deducted from the account at the end of a year, whereas the charges on withdrawals are deducted immediately. The question is: should we move our money to account A for the first year, the second year, or both years? In any case, money left in account A at the end of the second year will be transferred to account B. The goal is to solve the problem for all initial S0 > 1000. Figure 4 illustrates the example. Note that all investments will result in the wealth increasing, and that it will never be profitable to split the money between the accounts (why?).

Figure 4 Graphical description of a simple investment problem. (Account A: 10% interest the first year and 7% the second, a fee of 20 per year and a fee of 10 per withdrawal; account B: 7% the first year and 5% the second.)

Let us first define the two-dimensional state variables zt = (zt^1, zt^2). The first state variable, zt^1, refers to the account name (A or B); the second state variable, zt^2, refers to the amount of money St in that account. So zt = (B, St) refers to a state where there is an amount St in account B in stage t. Decisions are where to put the money for the next time period. If xt is our decision variable then xt ∈ {A, B}. The transition function will be denoted by Gt(zt, xt), and is defined via interest rates and charges. It shows what will happen to the money over one year, based on where the money is now, how much there is, and where it is put next. Since the state has two components, the function Gt is two-valued. For example,
zt+1^1 = Gt^1((A, St), A) = A,
zt+1^2 = Gt^2((A, St), A) = St × 1.07 − 20.

Accumulated return functions will be denoted by ft(zt^1, zt^2, xt). They describe how the amount zt^2 in account zt^1 will grow, up to the end of the time horizon, if the money is put into account xt in the next period and optimal decisions are made thereafter. So if f1(A, S1, B) = S, we know that in stage 1 (i.e. at the end of period 1), if we have S1 in account A and then move it to account B, we shall be left with S3 = S in account B at the end of the time horizon, given that we make optimal decisions at all stages after stage 1. By maximizing over all possible decisions, we find the optimal accumulated returns ft*(zt^1, zt^2) for a given state. For example,
f1*(A, S1) = max_{x1∈{A,B}} f1(A, S1, x1).

The calculations for our example are as follows. Note that we have three stages, which we shall denote Stage 0, Stage 1 and Stage 2. Stage 2 represents the point in time (after two years) when all funds must be transferred to account B. Stage 1 is one year from now, where we, if we so wish, may move the money from one account to another. Stage 0 is now, where we must decide whether to keep the money in account B or move it to account A.

Stage 2 At Stage 2, all we can do is to transfer whatever money we have in account A to account B:
f2*(A, S2) = S2 − 10,
f2*(B, S2) = S2,

indicating that a cost of 10 is incurred if the money is in account A and needs to be transferred to account B.

Stage 1 Let us first consider account A, and assume that the account contains S1. We can keep the money in account A, making S2 = S1 × 1.07 − 20 (this is the transition function), or move it to B, making S2 = (S1 − 10) × 1.05. This generates the following two evaluations of the accumulated return function:
f1(A, S1, A) = f2*(A, S1 × 1.07 − 20) = S1 × 1.07 − 30,
f1(A, S1, B) = f2*(B, (S1 − 10) × 1.05) = S1 × 1.05 − 10.5.

By comparing these two, we find that, as long as S1 ≥ 975 (which is always the case, since we have assumed that S0 > 1000), account A is best, making
f1*(A, S1) = S1 × 1.07 − 30.

Next, consider account B. If we transfer the amount S1 to account A, we get S2 = S1 × 1.07 − 20. If it stays in B, we get S2 = S1 × 1.05. This gives us
f1(B, S1, A) = f2*(A, S1 × 1.07 − 20) = S1 × 1.07 − 30,
f1(B, S1, B) = f2*(B, S1 × 1.05) = S1 × 1.05.

By comparing these two, we find that
f1*(B, S1) = S1 × 1.07 − 30 if S1 ≥ 1500,
f1*(B, S1) = S1 × 1.05 if S1 ≤ 1500.

Stage 0 Since we start out with all our money in account B, we only need to check that account. Initially we have S0. If we transfer to A, we get S1 = S0 × 1.1 − 20, and if we keep it in B, S1 = S0 × 1.07. The accumulated returns are
f0(B, S0, A) = f1*(A, S1) = f1*(A, S0 × 1.1 − 20) = (S0 × 1.1 − 20) × 1.07 − 30 = 1.177 × S0 − 51.4,
f0(B, S0, B) = f1*(B, S1) = f1*(B, S0 × 1.07)
             = S0 × 1.1449 − 30 if S0 ≥ 1402,
             = S0 × 1.1235 if S0 ≤ 1402.

Comparing the two options, we see that account A is always best, yielding
f0*(B, S0) = 1.177 × S0 − 51.4.

So we should move our money to account A and keep it there until the end of the second period. Then we move it to B as required. We shall be left with a total interest of 17.7% and fixed charges of 51.4 (including lost interest on charges). ✷

As we can see, the main idea behind dynamic programming is to take one stage at a time, starting with the last stage. For each stage, find the optimal decision for all possible states, thereby calculating the optimal accumulated return from then until the end of the time horizon for all possible states. Then move one step towards the present, and calculate the returns from that stage until the end of the time horizon by adding together the immediate returns and the returns for all later periods, based on the calculations made at the previous stage. In the example we found that f1*(A, S1) = S1 × 1.07 − 30. This shows us that if we end up in stage 1 with S1 in account A, we shall (if we behave optimally) end up with S1 × 1.07 − 30 in account B at the end of the time horizon. However, f1* does not tell us what to do, since that information is not needed to calculate optimal decisions at stage 0.

Formally speaking, we are trying to solve the following problem, where x = (x0, . . ., xT)^T:

max_x F(r0(z0, x0), . . ., rT(zT, xT), Q(zT+1))
s.t. zt+1 = Gt(zt, xt) for t = 0, . . ., T,
At(zt) ≤ xt ≤ Bt(zt) for t = 0, . . ., T,

where F satisfies the requirements of Proposition 2.2. This is to be solved for one or more values of the initial state z0. In this setup, rt is the return function for all but the last stage, Q the return function for the last stage, Gt the transition function, T the time horizon, zt the (possibly multidimensional) state variable in stage t, and xt the (possibly multidimensional) decision variable in stage t.
The accumulated return function ft(zt, xt) and the optimal accumulated returns ft*(zt) are not part of the problem formulation, but rather part of the solution procedure. The solution procedure, justified by the Bellman principle, runs as follows.
Find f0*(z0) by solving recursively

ft*(zt) = max_{At(zt) ≤ xt ≤ Bt(zt)} ft(zt, xt)
        = max_{At(zt) ≤ xt ≤ Bt(zt)} ϕt(rt(zt, xt), f*_{t+1}(zt+1)) for t = T, . . ., 0,

with zt+1 = Gt(zt, xt) for t = T, . . ., 0, and f*_{T+1}(zT+1) = Q(zT+1). In each case the problem must be solved for all possible values of the state variable zt, which might be multidimensional.

Problems that are not dynamic programming problems (unless rewritten with a large expansion of the state space) would be problems where, for example, zt+1 = Gt(z0, . . ., zt, x0, . . ., xt), or where the objective function depends in an arbitrary way on the whole history up to stage t, represented by rt(z0, . . ., zt, x0, . . ., xt). Such problems may more easily be solved using other approaches, such as decision trees, where these complicated functions cause little concern.
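As an illustration of this recursion, here is a sketch (ours, not the authors’) instantiating it on Example 2.2 for the single initial amount S0 = 1000, rather than symbolically for all S0 > 1000 as in the text. The functions mirror the Stage 2, Stage 1 and Stage 0 calculations above.

```python
# Example 2.2 for the fixed initial amount S0 = 1000 (the text solves
# the same recursion symbolically for all S0 > 1000).

def f2_star(account, s):
    """Stage 2: all money is moved to account B (fee 10 to leave A)."""
    return s - 10 if account == "A" else s

def f1_star(account, s):
    """Stage 1: choose the better account for the second year."""
    # Keeping/entering A: 7% interest minus the yearly fee of 20.
    use_a = f2_star("A", s * 1.07 - 20)
    if account == "A":
        # Leaving A costs the withdrawal fee of 10, then 5% in B.
        use_b = f2_star("B", (s - 10) * 1.05)
    else:
        use_b = f2_star("B", s * 1.05)
    return max(use_a, use_b)

def f0_star(s0):
    """Stage 0: money starts in B; first-year rates are 10% (A), 7% (B)."""
    return max(f1_star("A", s0 * 1.1 - 20),   # move to A, yearly fee 20
               f1_star("B", s0 * 1.07))       # stay in B

print(round(f0_star(1000), 2))  # → 1125.6 = 1.177 * 1000 - 51.4
```

The result agrees with f0*(B, S0) = 1.177 × S0 − 51.4 from the example.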
2.3 Deterministic Decision Trees

We shall not be overly interested in decision trees in a deterministic setting. However, since they might be used to analyse sequential decision problems, we shall mention them. Let us consider our simple investment problem in Example 2.2. A decision tree for that problem is given in Figure 5. A decision tree consists of nodes and arcs. The nodes represent states, the arcs decisions. For each possible state in each stage we must create one arc for each possible decision. Therefore the number of possible decisions must be very limited for
this method to be useful, since there is one leaf in the tree for each possible sequence of decisions.

Figure 5 Deterministic decision tree for the small investment problem.

The tree indicates that at stage 0 we have S0 in account B. We can then decide to put the money into A (go left) or keep it in B (go right). Then at stage 1 we have the same possible decisions. At stage 2 we have to put the money into B, getting S3, the final amount of money. As before, we could have skipped the last step. To be able to solve this problem, we shall first have to follow each path in the tree from the root to the bottom (the leaves) to find S3 in all cases. In this way, we enumerate all possible sequences of decisions that we can possibly make. (Remember that this is exactly what we avoid in dynamic programming.) The optimal sequence must, of course, be one of these sequences. Let (AAB) refer to the path in the tree with the corresponding indices on the arcs. We then get

(ABB): S3 = ((S0 × 1.1 − 20) − 10) × 1.05 = S0 × 1.155 − 31.5,
(AAB): S3 = ((S0 × 1.1 − 20) × 1.07 − 20) − 10 = S0 × 1.177 − 51.4,
(BAB): S3 = ((S0 × 1.07) × 1.07 − 20) − 10 = S0 × 1.1449 − 30,
(BBB): S3 = S0 × 1.07 × 1.05 = S0 × 1.1235.

We have now attached numbers to all leaves of the tree (for some reason decision trees always grow with the root up). We are now going to move back towards the root, using a process called folding back. This implies moving one step up the tree at a time, finding for each node in the tree the best decision for that node. The first step is not really interesting in this case (since we must move the money to account B), but, even so, let us go through it. We find that the best we can achieve after two decisions is as follows:

(AB): S3 = S0 × 1.155 − 31.5,
(AA): S3 = S0 × 1.177 − 51.4,
(BA): S3 = S0 × 1.1449 − 30,
(BB): S3 = S0 × 1.1235.

Then we move up to stage 1 to see what is the best we can achieve if we presently have S1 in account A.
The answer is

max{S0 × 1.155 − 31.5, S0 × 1.177 − 51.4} = S0 × 1.177 − 51.4

so long as S0 > 1000, the given assumption. If we have S1 in account B (the right node in stage 1), we get

max{S0 × 1.1449 − 30, S0 × 1.1235} = S0 × 1.1449 − 30 if S0 ≥ 1402,
                                   = S0 × 1.1235 if S0 ≤ 1402.

We can then fold back to the top, finding that it is best to go left, obtaining the given S3 = S0 × 1.177 − 51.4. Of course, we recognize most of these computations from Section 2.2 on dynamic programming.

You might feel that these computations are not very different from those in the dynamic programming approach. However, they are. For example, assume that we had 10 periods rather than just 2. In dynamic programming we would then have to calculate the optimal accumulated return as a function of St for both accounts in 10 periods, a total of 2 × 10 = 20 calculations, each involving a maximization over the two possible decisions. In the decision tree case the number of such calculations will be 2^10 + 2^9 + . . . + 1 = 2^11 − 1 = 2047. (The counting depends a little on how we treat the last period.) This shows the strength of dynamic programming: it investigates many fewer cases. It should be easy to imagine situations where the use of decision trees is absolutely impossible owing to the mere size of the tree.

On the other hand, the decision tree approach certainly has advantages. Assume, for example, that we were not to find the optimal investments for all S0 > 1000, but just for S0 = 1000. That would not help us much in the dynamic programming approach, except that f1*(B, S1) = S1 × 1.05, since S1 < 1500. But that is a minor help. The decision tree case, on the other hand, would produce numbers in the leaves, not functions of S0, as shown above. Then folding back will of course be very simple.

Table 1 Distribution of the interest rate on account A. All outcomes have probability 0.5, and the outcomes in period 2 are independent of the outcomes in period 1.
Period   Outcome 1   Outcome 2
1        8           12
2        5            9
Figure 6 Simple investment problem with uncertain interest rates. (Account A now pays 8% or 12% the first year and 5% or 9% the second, with the same fees of 20 per year and 10 per withdrawal; account B still pays 7% and then 5%.)

2.4 Stochastic Decision Trees

We shall now see how decision trees can be used to solve certain classes of stochastic problems. We shall initiate this with a look at our standard investment problem in Example 2.2. In addition, let us now assume that the interest rate on account B is unchanged, but that the interest rate on account A is random, with the previously given rates as expected values. Charges on account A are unchanged. The distribution of the interest rate is given in Table 1. We assume that the interest rates in the two periods are described by independent random variables. Based on this information, we can give an update of Figure 4, showing the deterministic and stochastic parameters of the problem. The update is shown in Figure 6.

Consider the decision tree in Figure 7. As in the deterministic case, square nodes are decision nodes, from which we have to choose between accounts A and B. Circular nodes are called chance nodes, and represent points at which something happens, in this case that the interest rates become known. Start at the top. In stage 0, we have to decide whether to put the money
Figure 7 Stochastic decision tree for the simple investment problem.

into account A or into B. If we choose A, we shall experience an interest rate of 8% or 12% for the first year. After that we shall have to make a new decision for the second year. That decision will be allowed to depend on the interest rate we experienced in the first period. If we choose A, we shall again face an uncertain interest rate. Whenever we choose B, we shall know the interest rate with certainty.

Having entered a world of randomness, we need to specify what our decisions will be based on. In the deterministic setting we maximized the final amount in account B. That does not make sense in a stochastic setting: a given series of decisions does not produce a certain amount in account B, but rather an uncertain amount. In other words, we have to compare distributions. For example, keeping the money in account A for both periods will result in one out of four sequences of interest rates, namely (8,5), (8,9), (12,5) or (12,9). Hence, if we start out with, say, 1000, we can end up with (remember the fees) 1083, 1125, 1125 or 1169 (rounded numbers). An obvious possibility is to look for the decision that produces the highest expected amount in account B after two periods. However, and this is a very important point, this does not mean that we are looking for the sequence of decisions that has the highest expected value. We are only looking for the best possible first decision. If we decide to put the money in account A in the first period, we can wait and observe the actual interest rate on the account before we decide what to do in the next period. (Of course, if we decide to use B in the first period, we might as well decide what to do in the second period immediately, since no new information is made available during the first year!)
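The expected-value folding back that follows can be sketched in code (our own illustration, with S0 = 1000). Expected values are taken over the two equally likely outcomes of Table 1.

```python
# Folding back the stochastic decision tree, with S0 = 1000.
RATES_A = {1: (0.08, 0.12), 2: (0.05, 0.09)}   # per period, prob. 0.5 each
RATE_B = {1: 0.07, 2: 0.05}
YEARLY_FEE, WITHDRAWAL_FEE = 20, 10

def stage2(account, s):
    """All money ends up in account B (fee 10 to leave A)."""
    return s - WITHDRAWAL_FEE if account == "A" else s

def stage1(account, s):
    """Expected final amount under the better second-year decision."""
    use_a = sum(stage2("A", s * (1 + r) - YEARLY_FEE)
                for r in RATES_A[2]) / 2
    if account == "A":
        use_b = stage2("B", (s - WITHDRAWAL_FEE) * (1 + RATE_B[2]))
    else:
        use_b = stage2("B", s * (1 + RATE_B[2]))
    return max(use_a, use_b)

def stage0(s0):
    """Expected values of the two possible first decisions."""
    exp_a = sum(stage1("A", s0 * (1 + r) - YEARLY_FEE)
                for r in RATES_A[1]) / 2
    exp_b = stage1("B", s0 * (1 + RATE_B[1]))
    return exp_a, exp_b

exp_a, exp_b = stage0(1000)
print(round(exp_a), round(exp_b))  # → 1126 1124
```

The two printed values correspond to the rounded node labels in Figure 8: roughly 1126 for choosing A first against 1124 for staying in B, so the best first decision is A.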
Figure 8 Stochastic decision tree for the investment problem when we maximize the expected amount in account B at the end of stage 2.

Let us see how this works. First we do as we have done before: we follow each path down the tree to see what amount we end up with in account B. We have assumed S0 = 1000. The results are shown in the leaves of the tree in Figure 8. We then fold back. Since the node above each pair of leaves is a chance node, we take the expected value of the two square nodes below it. Then, for stage 1, we check which of the two possible decisions has the larger expectation. In the far left of Figure 8 it is to put the money into account A. We therefore cross out the other alternative. This process is repeated until we reach the top level. In stage 0 we see that it is optimal to use account A in the first period, and, regardless of the interest rate in the first period, we shall also use account A in the second period. In general, the second-stage decision depends on the outcome in the first stage, as we shall see in a moment.

You might have observed that the solution derived here is exactly the same as the one we found in the deterministic case. This is caused by two facts. First, the interest rate in the deterministic case equals the expected interest rate in the stochastic case, and, secondly, the objective function is linear. In other words, if ξ̃ is a random variable and a and b are constants, then

E(aξ̃ + b) = aE ξ̃ + b.

For the stochastic case we calculated the left-hand side of this expression, and for the deterministic case the right-hand side.

In many cases it is natural to maximize expected profits, but not always. One common situation for decision problems under uncertainty is that the decision is repeated many times, often, in principle, infinitely many times.
Figure 9 Example of a typical concave utility function representing risk aversion.

Investments in shares and bonds, for example, are usually of this kind. The situation is characterized by long time series of data, and by many minor decisions. Should we, or should we not, maximize expected profits in such a case? Economics provides us with a tool for answering that question, called a utility function. Although it is not going to be a major point in this book, we should like to give a brief look into the area of utility functions. It is certainly an area very relevant to decision making under uncertainty. If you find the topic interesting, consult the references listed at the end of this chapter. The area is full of pitfalls and controversies, something you will probably not discover from our little glimpse into the field. More than anything, we simply want to give a small taste, and, perhaps, something to think about.

We may think of a utility function as a function that measures our happiness (utility) derived from a certain wealth (let us stick to money). It does not measure utility in any fixed unit, but is only used to compare situations. So we can say that one situation is preferred to another, but not that one situation is twice as good as another. An example of a utility function is found in Figure 9. Note that the utility function is concave. Let us see what that means. Assume that our wealth is w0, and we are offered a game. With 50% probability we shall win δw; with 50% probability we shall lose the same amount. It costs nothing to take part. We shall therefore, after the game, either have a wealth of w0 + δw or a wealth of w0 − δw. If the function in Figure 9 is our utility function, and we calculate the utility of these two possible future situations, we find that the decrease in utility caused by losing δw is larger than the increase in utility caused by winning δw.
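This effect is easy to verify numerically. The function in Figure 9 is not given explicitly, so u(w) = ln w is used below as a stand-in concave utility function; any concave u would show the same inequality (Jensen's inequality).

```python
import math

# A concave stand-in for the utility function of Figure 9 (the book
# gives no explicit formula, so u(w) = ln(w) is an assumption here).
def u(w):
    return math.log(w)

w0, dw = 100.0, 50.0
utility_no_game = u(w0)                               # certain wealth
expected_utility_game = 0.5 * u(w0 + dw) + 0.5 * u(w0 - dw)

# The fair game lowers expected utility: this is risk aversion.
print(utility_no_game > expected_utility_game)  # -> True
```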
What has happened is that we do not think that the advantage of possibly increasing our wealth by δw is good enough to offset our worry about losing the same amount. In other words, our expected utility after having taken part in the game is smaller than our certain utility of not taking part. We prefer w0 with certainty to a distribution of possible wealths having expected value w0. We are risk-averse. If we find the two situations equally good, we are risk-neutral. If we prefer the game to the certain wealth, we are risk seekers, or gamblers. It is generally believed that people are risk-averse, and that they need a premium to take part in a gamble like the one above. Empirical investigations of financial markets confirm this idea. The premium must be high enough to make the expected utility of taking part in the game (including the premium) equal to the utility of the wealth w0.

Now, finally, we are coming close to the question we started out with. Should we maximize expected profit? We have seen above that maximizing expected profit can be interpreted as maximizing expected utility with a risk-neutral attitude. In other words, it puts us in a situation where a fair gamble (i.e. one with expected value zero) is acceptable. When can that be the case? One reason can be that the project under consideration is very small compared with the overall wealth of the decision maker, so that risk aversion is not much of an issue. As an example, consider the purchase of a lottery ticket. Despite the fact that the expected value of taking part in a lottery is negative, people buy lottery tickets. This fact can create some theoretical problems in utility theory, problems that we shall not discuss here. A reference is given at the end of the chapter. A more important case arises in public investments.
One can argue that the government should not trade expected values for decreased risks, since the overall risk facing a government is very small, even if the risk in one single project is large. The reason behind this argument is that, with a very large number of projects at hand (which the government certainly has), some will win and some will lose. Overall, owing to offsetting effects, the government will face very little risk. (It is like a life insurance company, where the death of a customer is not considered a random event. With a large number of customers, they "know" how many will die next year.) In all, as we see, we must argue in each case whether a linear or a concave utility function is appropriate. Clearly, in most cases a linear utility function creates easier problems to solve. But in some cases risk should indeed be taken into account.

Let us now continue with our example, and assume we are faced with a concave utility function

u(s) = ln(s − 1000)
Figure 10 Stochastic decision tree for the investment problem when we maximize the expected utility of the amount in account B at the end of period 2.

and that we wish to maximize the expected utility of the final wealth s. In the deterministic case we found that it would never be profitable to split the money between the two accounts. The argument is the same when we simply maximize the expected value of S3, as outlined above. However, when maximizing expected utility, that might no longer be the case. On the other hand, the whole setup used in this chapter assumes implicitly that we do not split the funding. Hence, in what follows, we shall assume that all the money must be in one and only one account. The idea in the decision tree is to determine which decisions to make, not how to combine them.

Figure 10 shows how we fold back with expected utilities. The numbers in the leaves represent the utility of the numbers in the leaves of Figure 8. For example,

u(1083) = ln(1083 − 1000) = ln 83 = 4.419.

We observe that, with this utility function, it is optimal to use account B. The reason is that we fear the possibility of getting only 8% in the first period, combined with the charges. The result is that we choose to use B, getting the certain amount S3 = 1124. Note that if we had used account A in the first period (which is not optimal), the optimal second-stage decision would depend on the actual outcome of the interest on account A in the first period. With 8%, we pick B in the second period; with 12%, we pick A.

2.5 Stochastic Dynamic Programming

Looking back at Section 2.2 on dynamic programming, we observe two major properties of the solution and the solution procedure. First, the procedure (i.e. dynamic programming) produces one solution per possible state in each stage.
These solutions are not stored, since they are not needed in the procedure, but the extra cost incurred by storing them would be minimal. Secondly, if there is only one given value for the initial state z0, we can use these decisions to produce a series of optimal solutions—one for each stage. In other words, given an initial state, we can make plans for all later periods. In our small investment Example 2.2 (to which we added randomness in the interest rates in Section 2.4) we found, in the deterministic case, that with S0 > 1000, x0 = x1 = A and x2 = B was the optimal solution. That is, we put the money in account A for the two periods, before we send the money to account B as required at the end of the time horizon.

When we now move into the area of stochastic dynamic programming, we shall keep one property of the dynamic programming algorithm, namely that there will be one decision for each state in each stage, but it will no longer be possible to plan for the whole horizon ahead of time. Decisions for all but the first period will depend on what happens in the meantime. This is the same as we observed for stochastic decision trees. Let us turn to the small investment example, keeping the extra requirement that the money must stay in one account, and using the utility function u(s) = ln(s − 1000).

Stage 2
As for the deterministic case, we find that
f2*(A, S2) = ln(S2 − 1010),
f2*(B, S2) = ln(S2 − 1000),

since we must move the money into account B at the end of the second year.

Stage 1
We have to consider the two accounts separately.

Account A
If we keep the money in account A, we get the following expected return:
f1(A, S1, A) = 0.5[f2*(A, S1 × 1.05 − 20) + f2*(A, S1 × 1.09 − 20)]
             = 0.5 ln[(S1 × 1.05 − 1030)(S1 × 1.09 − 1030)].

If we move the money to account B, we get
f1(A, S1, B) = f2*(B, (S1 − 10) × 1.05) = ln(S1 × 1.05 − 1010.5).

To find the best possible solution, we compare these two possibilities by calculating

f1*(A, S1) = max{f1(A, S1, A), f1(A, S1, B)}
           = max{0.5 ln[(S1 × 1.05 − 1030)(S1 × 1.09 − 1030)], ln(S1 × 1.05 − 1010.5)},

from which we find (remembering that S1 > 1000)

f1*(A, S1) = ln(S1 × 1.05 − 1010.5)                        if S1 < 1077,
f1*(A, S1) = 0.5 ln[(S1 × 1.05 − 1030)(S1 × 1.09 − 1030)]  if S1 > 1077.

Account B
For account B we can either move the money to account A to get

f1(B, S1, A) = 0.5[f2*(A, S1 × 1.05 − 20) + f2*(A, S1 × 1.09 − 20)]
             = 0.5 ln[(S1 × 1.05 − 1030)(S1 × 1.09 − 1030)],

or we can keep the money in B to obtain
f1(B, S1, B) = f2*(B, S1 × 1.05) = ln(S1 × 1.05 − 1000).

To find the best possible solution, we calculate
f1*(B, S1) = max{f1(B, S1, A), f1(B, S1, B)}
           = max{0.5 ln[(S1 × 1.05 − 1030)(S1 × 1.09 − 1030)], ln(S1 × 1.05 − 1000)}.

From this we find (remembering that S1 > 1000)

f1*(B, S1) = ln(S1 × 1.05 − 1000)                          if S1 < 1538,
f1*(B, S1) = 0.5 ln[(S1 × 1.05 − 1030)(S1 × 1.09 − 1030)]  if S1 > 1538.

Stage 0
We here have to consider only the case where the amount S0 > 1000 sits in account B. The basis for these calculations will be the following two expressions. The first calculates the expected result of using account A, the second the certain result of using account B:
f0(B, S0, A) = 0.5[f1*(A, S0 × 1.08 − 20) + f1*(A, S0 × 1.12 − 20)],
f0(B, S0, B) = f1*(B, S0 × 1.07).

Using these two expressions, we then calculate
f0*(B, S0) = max{f0(B, S0, A), f0(B, S0, B)}.

To find the value of this expression for f0*(B, S0), we must make sure that we use the correct expressions for f1* from stage 1. To do that, we must know how conditions on S1 relate to conditions on S0. There are three different ways S0 and S1 can be connected (see e.g. the top part of Figure 10):

S1 = S0 × 1.08 − 20  ⇒  (S1 = 1077 ⟺ S0 = 1016),
S1 = S0 × 1.12 − 20  ⇒  (S1 = 1077 ⟺ S0 = 979),
S1 = S0 × 1.07       ⇒  (S1 = 1538 ⟺ S0 = 1437).

From this, we see that three different cases must be discussed, namely 1000 < S0 < 1016, 1016 < S0 < 1437 and 1437 < S0.

Case 1
Here 1000 < S0 < 1016. In this case
f0*(B, S0) = ln(S0 × 1.1235 − 1000),

which means that we always put the money into account B. (Make sure you understand this by actually performing the calculations.)

Case 2
Here 1016 < S0 < 1437. In this case

f0*(B, S0) = ln(S0 × 1.1235 − 1000)  if S0 < 1022,
f0*(B, S0) = 0.25 ln[(S0 × 1.134 − 1051)(S0 × 1.1772 − 1051.8)(S0 × 1.176 − 1051)(S0 × 1.2208 − 1051.8)]  if S0 > 1022,

which means that we use account B for small amounts and account A for large amounts within the given interval.

Case 3
Here we have S0 > 1437. In this case
f0*(B, S0) = 0.25 ln[(S0 × 1.134 − 1051)(S0 × 1.1772 − 1051.8)(S0 × 1.176 − 1051)(S0 × 1.2208 − 1051.8)],

which means that we should use account A.

Summing up all cases, for stage 0 we get

f0*(B, S0) = ln(S0 × 1.1235 − 1000)  if S0 < 1022,
f0*(B, S0) = 0.25 ln[(S0 × 1.134 − 1051)(S0 × 1.1772 − 1051.8)(S0 × 1.176 − 1051)(S0 × 1.2208 − 1051.8)]  if S0 > 1022.

Figure 11 Description of the solution to the stochastic investment problem using stochastic dynamic programming.

If we put these results into Figure 4, we obtain Figure 11. From the latter, we can easily construct a solution similar to the one in Figure 10 for any S0 > 1000. Verify that we do indeed get the solution shown in Figure 10 if S0 = 1000. But we see more than that from Figure 11. We see that if we choose account B in the first period, we shall always do the same in the second period: there is no way we can start out with S0 < 1022 and get S1 > 1538.

Formally, what we are doing is as follows. We use the vocabulary of Section 2.2. Let the random vector for stage t be given by ξ̃t, and let the return and transition functions become rt(zt, xt, ξt) and zt+1 = Gt(zt, xt, ξt). Given this, the procedure becomes
find f0*(z0) by recursively calculating

ft*(zt) = min{ft(zt, xt) : At(zt) ≤ xt ≤ Bt(zt)}
        = min{Eξ̃t[φt(rt(zt, xt, ξ̃t), ft+1*(zt+1))] : At(zt) ≤ xt ≤ Bt(zt)},  t = T, . . . , 0,

with zt+1 = Gt(zt, xt, ξt) for t = 0, . . . , T and fT+1*(zT+1) = Q(zT+1),

where the functions satisfy the requirements of Proposition 2.2. In each stage the problem must be solved for all possible values of the state zt. It is possible to replace the expectations (represented by E above) by other operators with respect to ξ̃t, such as max or min. In such a case, of course, probability distributions are uninteresting—only the support matters.

2.6 Scenario Aggregation

So far we have looked at two different methods for formulating and solving multistage stochastic problems. The first, stochastic decision trees, requires a tree that branches off for each possible decision xt and each possible realization of ξ̃t. Therefore these must both have finitely many possible values. The state zt is not part of the tree, and can therefore safely be continuous. A stochastic decision tree easily grows out of hand.

The second approach was stochastic dynamic programming. Here we must make a decision for each possible state zt in each stage t. Therefore it is clearly an advantage if there are finitely many possible states. However, the theory is also developed for a continuous state space. Furthermore, a continuous set of decisions xt is acceptable, and so is a continuous distribution of ξ̃t, provided we are able to perform the expectation with respect to ξ̃t.

The method we shall look at in this section differs from those mentioned above with respect to where the complications occur. We shall now operate on an event tree (see Figure 12 for an example). This is a tree that branches off for each possible value of the random variable ξ̃t in each stage t. Therefore, compared with the stochastic decision tree approach, the new method has similar requirements in terms of limitations on the number of possible values of ξ̃t: both need finite discrete distributions. In terms of xt, we must have finitely many values in the decision tree, whereas the new method prefers continuous variables. Neither of them has any special requirements on zt. The new method we are about to outline is called scenario aggregation.
We shall see that stochastic dynamic programming is more flexible than scenario aggregation in terms of distributions of ξ̃t, and similar with respect to xt, but much more restrictive with respect to the state variable zt, in the sense that the state space is hardly of any concern in scenario aggregation.

If we have T time periods, and ξ_t^s is a vector describing what happens in time period t (i.e. a realization of ξ̃t), then we call
s = (ξ_0^s, ξ_1^s, . . . , ξ_T^s)

a scenario. It represents one possible future. So assume we have a set of scenarios S describing all (or at least the most interesting) possible futures. What do we do? Assume our "world" can be described by state variables zt and decision variables xt, and that the cost (i.e. the return function) in time period t is given by rt(zt, xt, ξt). Furthermore, as before, the state variables can be calculated from

zt+1 = Gt(zt, xt, ξt),

with z0 given. Let α be a discount factor. What is often done in this case is to solve, for each s ∈ S, the following problem:
min Σ_{t=0}^{T} α^t rt(zt, xt, ξ_t^s) + α^{T+1} Q(zT+1)
s.t. zt+1 = Gt(zt, xt, ξ_t^s) for t = 0, . . . , T, with z0 given,      (6.1)
     At(zt) ≤ xt ≤ Bt(zt) for t = 0, . . . , T,

where Q(z) represents the value of ending the problem in state z, yielding an optimal solution x^s = (x_0^s, x_1^s, . . . , x_T^s). Now what? We have a number of different solutions—one for each s ∈ S. Shall we take the average and calculate for each t
x̄t = Σ_{s∈S} p_s x_t^s,

where p_s is the probability that we end up on scenario s? This is very often done, either with explicit probabilities or by more subjective methods based on "looking at the solutions". However, several things can go wrong. First, if x̄ is chosen as our policy, there might be cases (values of s) for which it is not even feasible. We should not like to suggest to our superiors a solution that might be infeasible (infeasible probably means "going broke", "breaking down" or something like that). But even if feasibility is no problem, is using x̄ a good idea?

In an attempt to answer this, let us again turn to event trees. In Figure 12 we have T = 1. The top node represents "today". Then one out of three things can happen, or, in other words, we have a random variable with three outcomes. The second row of nodes represents "tomorrow", and after tomorrow a varying number of things can happen, depending on what happens today. The bottom row of nodes takes care of the rest of the time—the future. This tree represents six scenarios, since the tree has six leaves. In the setting of optimization that we have discussed, there will be two decisions to be made, namely one "today" and one "tomorrow". However, note that what we do tomorrow will depend on what happens today, so there is not one decision for tomorrow, but rather one for each of the three nodes in the second row. Hence x̄0 works as a suggested first decision, but x̄1 isn't very interesting. However, if we are in the leftmost node representing tomorrow, we can talk about an x̄1 for the two scenarios going through that node. We can therefore calculate, for each version of "tomorrow", an average x̄1, where the expectation is conditional upon being on one of the scenarios that goes through the node. Hence we see that the nodes in the event tree are decision points and the arcs are realizations of random variables. From our scenario solutions x^s we can therefore calculate decisions for each node in the tree, and these will all make
From our scenario solutions xs we can therefore calculate decisions for each node in the tree, and these will all make 136 STOCHASTIC PROGRAMMING Today First random variable Tomorrow Second random variable The future
Figure 12 Example of an event tree for T = 1.

For each time period t, let {s}t be the set of all scenarios having ξ_0^s, . . . , ξ_{t−1}^s in common with scenario s. In Figure 12, {s}0 = S, whereas each {s}2 contains only one scenario. There are three sets {s}1. Let p({s}t) be the sum of the probabilities of all the scenarios in {s}t. Hence, after solving (6.1) for all s, we calculate for all {s}t
x̄({s}t) = Σ_{s'∈{s}t} p_{s'} x_t^{s'} / p({s}t).

So what does this solution mean? It has the advantage that it is implementable, but is it the optimal solution of any problem we might want to solve? Let us now turn to a formal mathematical description of a multistage problem that lives on an event tree, to see how x̄({s}t) may be used. In this description we are assuming that we have finite discrete distributions.
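Computing these bundle averages is mechanical once the scenarios are grouped by shared history. The sketch below uses invented probabilities and first-stage scenario solutions on a six-scenario tree in the spirit of Figure 12; the split of leaves over tomorrow's nodes is our own assumption.

```python
# Bundle averages x̄({s}_t) over an event tree. A scenario is a history
# (xi_0, xi_1); scenarios sharing the first t components form {s}_t.
# Probabilities and scenario solutions are made-up illustration data.
from collections import defaultdict

# (history, probability p_s, scenario solution x_0^s)
scenarios = [
    (("a", 1), 0.2, 10.0), (("a", 2), 0.1, 16.0),   # through today's node "a"
    (("b", 1), 0.3, 12.0),                          # node "b" has one leaf
    (("c", 1), 0.1, 20.0), (("c", 2), 0.2, 14.0), (("c", 3), 0.1, 11.0),
]

def bundle_average(t):
    """Probability-weighted average of x^s over each bundle {s}_t."""
    num, den = defaultdict(float), defaultdict(float)
    for hist, p, x in scenarios:
        key = hist[:t]            # the shared history defining the bundle
        num[key] += p * x
        den[key] += p             # this accumulates p({s}_t)
    return {k: num[k] / den[k] for k in num}

print(bundle_average(0))  # {}-key: the single overall average
print(bundle_average(1))  # one implementable value per "tomorrow" node
```

For t = 0 there is one bundle (all scenarios) and the result is the plain average x̄0; for t = 1 each of today's outcomes gets its own conditional average, which is exactly the implementable decision for that node.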
min Σ_{s∈S} p_s [ Σ_{t=0}^{T} α^t rt(z_t^s, x_t^s, ξ_t^s) + α^{T+1} Q(z_{T+1}^s) ]

subject to

z_{t+1}^s = Gt(z_t^s, x_t^s, ξ_t^s) for t = 0, . . . , T, with z_0^s = z0 given,
At(z_t^s) ≤ x_t^s ≤ Bt(z_t^s) for t = 0, . . . , T,
x_t^s = Σ_{s'∈{s}t} p_{s'} x_t^{s'} / p({s}t) for t = 0, . . . , T and all s.      (6.2)

Note that only (6.2), the implementability constraints, connect the scenarios. As discussed in Section 1.8, a common approach in nonlinear optimization is to move constraints that are seen as complicated into the objective function, and to penalize deviations from them there. We outlined a number of different approaches; for scenario aggregation, the appropriate one is the augmented Lagrangian method. Its properties, when used with equality constraints such as (6.2), were given in Propositions 1.27 and 1.28. Note that if we move the implementability constraints into the objective, the remaining constraints are separable in the scenarios (meaning that there are no constraints containing information from more than one scenario). Our objective then becomes
Σ_{s∈S} p(s) { Σ_{t=0}^{T} [ α^t rt(z_t^s, x_t^s, ξ_t^s) + w_t^s ( x_t^s − Σ_{s'∈{s}t} p_{s'} x_t^{s'} / p({s}t) ) ] + α^{T+1} Q(z_{T+1}^s) },      (6.3)

where w_t^s is the multiplier for implementability for scenario s in period t. If we add an augmented Lagrangian term, this problem can, in principle, be solved by an approach where we first fix w, then solve the overall problem, then update w, and so on until convergence, as outlined in Section 1.8.2.4. However, a practical (and severe) problem results from the fact that the augmented Lagrangian term changes the objective function from one where the different variables are separate into one where products between variables occur. Hence, although this approach is acceptable in principle, it does not work well numerically, since we have one large problem instead of many scenario problems that can be solved separately. What we then do is to replace

Σ_{s'∈{s}t} p_{s'} x_t^{s'} / p({s}t)

with

x̄({s}t) = Σ_{s'∈{s}t} p_{s'} x_t^{s'} / p({s}t)

from the previous iteration. Hence, we get
Σ_{s∈S} p(s) { Σ_{t=0}^{T} [ α^t rt(z_t^s, x_t^s, ξ_t^s) + w_t^s (x_t^s − x̄({s}t)) ] + α^{T+1} Q(z_{T+1}^s) }.

But since, for a fixed w, the terms w_t^s x̄({s}t) are fixed, we can just as well drop them. If we then add an augmented Lagrangian term, we are left with

Σ_{s∈S} p(s) { Σ_{t=0}^{T} [ α^t rt(z_t^s, x_t^s, ξ_t^s) + w_t^s x_t^s + ½ ρ (x_t^s − x̄({s}t))² ] + α^{T+1} Q(z_{T+1}^s) }.

procedure scenario(s, x̄, x^s);
begin
  Solve the problem
    min Σ_{t=0}^{T} α^t [ rt(zt, xt, ξ_t^s) + w_t^s xt + ½ ρ (xt − x̄)² ] + α^{T+1} Q(zT+1)
    s.t. zt+1 = Gt(zt, xt, ξ_t^s) for t = 0, . . . , T, with z0 given,
         At(zt) ≤ xt ≤ Bt(zt) for t = 0, . . . , T,

  to obtain x^s = (x_0^s, . . . , x_T^s) and z^s = (z_0^s, . . . , z_{T+1}^s);
end;

Figure 13 Procedure for solving individual scenario problems.

Our problem is now totally separable in the scenarios. That is what we need to define the scenario aggregation method; see the algorithms in Figures 13 and 14 for details. A few comments are in order. First, to find an initial x̄({s}t), we can solve (6.1) using expected values for all random variables. Finding the correct value of ρ, and knowing how to update it, is very hard. We discussed this to some extent in Chapter 1; see in particular (8.17). This is a general problem for augmented Lagrangian methods, and will not be discussed here. Also, we shall not go into the discussion of stopping criteria, since the details are beyond the scope of this book. Roughly speaking, though, the goal is to have the scenario problems produce implementable solutions, so that x^s equals x̄({s}t).

Example 2.3 This small example concerns a very simple fisheries management model. For each time period we have one state variable, one decision variable and one random variable. Let zt be the state variable, representing the biomass of a fish stock in time period t, and assume that z0 is known. Furthermore, let xt be a decision variable, describing the portion of the fish stock caught in a given year. The implicit assumption made here is that it requires a fixed effort (measured, for example, in the number of participating vessels) to catch a fixed portion of the stock. This seems to be a fairly accurate description of demersal fisheries, such as, for example, the cod fisheries. The catch in a given year is hence zt xt. During a year, fish grow, some die, and there is a certain recruitment.
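How such stock dynamics behave over time is easy to see by simulation. The sketch below assumes a logistic ("Schaefer-type") net-growth term of the kind introduced next; the parameter values s = 0.5 and K = 1000 are our own illustration choices, not from the book.

```python
# Simulate a fish stock under logistic net growth and a fixed-effort
# catch: z_{t+1} = z_t - x_t z_t + s z_t (1 - z_t / K).
# Parameter values s = 0.5, K = 1000 are illustrative assumptions.
def simulate(z0, catch_fraction, s=0.5, K=1000.0, years=100):
    z = z0
    for _ in range(years):
        z = z - catch_fraction * z + s * z * (1 - z / K)
    return z

unfished = simulate(z0=100.0, catch_fraction=0.0)
fished = simulate(z0=100.0, catch_fraction=0.2)
print(round(unfished), round(fished))  # -> 1000 600
```

With no fishing the stock settles at the carrying capacity K; with a constant catch fraction x it settles at the lower level where the catch exactly balances the net recruitment, here K(1 − x/s) = 600.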
procedure scenagg;
begin
  for all s and t do w_t^s := 0;
  Find an initial x̄({s}t);
  Initiate ρ > 0;
  repeat
    for all s ∈ S do scenario(s, x̄({s}t), x^s);
    for all {s}t do
      x̄({s}t) := Σ_{s'∈{s}t} p_{s'} x_t^{s'} / p({s}t);
    Update ρ if needed;
    for all s and t do
      w_t^s := w_t^s + ρ (x_t^s − x̄({s}t));
  until result good enough;
end;

Figure 14 Principal setup of the scenario aggregation method.

A common model for the total effect of these factors is the so-called Schaefer model, where the total change in the stock, due to the natural effects listed above, is given by

s zt (1 − zt / K),

where s is a growth ratio and K is the carrying capacity of the environment. Note that if zt = K there is no net change in the stock size. Also note that if zt > K then there is a negative net effect, decreasing the size of the stock, and if zt < K then there is a positive net effect. Hence zt = K is a stable situation (as zt = 0 is), and the fish stock will, according to the model, stabilize at z = K if no fishing takes place. If fish are caught, the catch has to be subtracted from the existing stock, giving us the following transition function:

zt+1 = zt − xt zt + s zt (1 − zt / K).

This transition function is clearly nonlinear, with both a zt xt term and a zt² term. If the goal is to catch as much as possible, we might choose to maximize
Σ_{t=0}^{∞} α^t zt xt,

where α is a discount factor. (For infinite horizons we need 0 ≤ α < 1, but for finite problems we can choose to let α = 1.) In addition to this, we have the natural constraint 0 ≤ xt ≤ 1. So far, this is a deterministic control problem.

It is known, however, that predicting the net effects of growth, natural mortality and recruitment is very difficult. In particular, the recruitment is not well understood. Therefore it seems unreasonable to use a deterministic model to describe recruitment, as we have in fact done above. Let us therefore assume that the growth ratio s is not known, but rather given by a random vector ξ̃t in time period t. To fit into the framework of scenario aggregation, let us assume that we are able to cut the problem after T periods, giving it a finite horizon. Furthermore, assume that we have found a reasonable finite discretization of ξ̃t for all t ≤ T. It can be hard to do that, but we shall offer some discussion in Section 3.4. A final issue when making an infinite horizon problem finite is to construct a function Q(zT+1) that, in a reasonable way, approximates the value of ending up in state zT+1 at time T + 1. Finding Q can be difficult. However, let us briefly show how one approximation can be found for our problem.

Let us assume that all ξ̃t are independent and identically distributed with expected value ξ̄. Furthermore, let us simply replace all random variables by their means, and assume that each year we catch exactly the net recruitment, i.e. we let

xt = ξ̄ (1 − zt / K).

But since this leaves zt = zT+1 for all t ≥ T + 1, and therefore all xt for t ≥ T + 1 equal, we can let
Q(zT+1) = Σ_{t=T+1}^{∞} α^{t−T−1} xt zt = ξ̄ zT+1 (1 − zT+1 / K) / (1 − α).

With these assumptions on the horizon, the existence of Q(zT+1) and a finite discretization of the random variables, we arrive at the following optimization problem (the objective function amounts to the expected catch, discounted over the horizon of the problem; of course, it is easy to bring this into monetary terms):

max
Σ_{s∈S} p(s) [ Σ_{t=0}^{T} α^t z_t^s x_t^s + α^{T+1} Q(z_{T+1}^s) ]

s.t. z_{t+1}^s = z_t^s − x_t^s z_t^s + ξ_t^s z_t^s (1 − z_t^s / K), with z_0^s = z0 given,
     0 ≤ x_t^s ≤ 1,
     x_t^s = Σ_{s'∈{s}t} p_{s'} x_t^{s'} / p({s}t).

We can then apply scenario aggregation as outlined before. □

2.6.1 Approximate Scenario Solutions

Consider the algorithm just presented. If the problem being solved is a genuinely stochastic problem (in the sense that the optimal decisions change compared with the optimal decisions in the deterministic, or expected value, setting), we should expect the scenario solutions x^s to be very different initially, before the dual variables w^s obtain their correct values. Therefore, particularly in early iterations, it seems a waste of energy to solve the scenario problems to optimality. What will typically happen is that we see a sort of "fight" between the scenario solutions x^s and the implementable solution x̄({s}t). The scenario solutions try to pull away from the implementable solutions, and only when the penalty (in terms of w_t^s) becomes properly adjusted will the scenario solutions agree with the implementable solutions. In fact, the convergence criterion, vaguely stated, is exactly that the scenario solutions and the implementable solutions agree.

From this observation, it seems reasonable to solve the scenario problems only approximately, but precisely enough to capture the direction in which the scenario solution moves relative to the implementable solution. Of course, as the iterations progress, and the dual variables w_t^s adjust to their correct values, the scenario solutions and the implementable solutions agree more and more. In the end, if things are properly organized, the overall setup converges. It must be noted that the convergence proof for the scenario aggregation method does indeed allow for approximate scenario solutions. From an algorithmic point of view, this would mean that we replace the solution procedure in Figure 13 by one that finds only an approximate solution.
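The pull between scenario solutions and the implementable solution is visible even on a deliberately tiny problem. The sketch below follows the loop of Figure 14 with one first-stage variable x, scenario objectives (x − a_s)², and closed-form scenario subproblems; all data are invented for illustration.

```python
# Minimal scenario-aggregation (progressive hedging) loop in the spirit
# of Figure 14. Each scenario s wants to minimize (x - a_s)^2, while
# implementability forces one common first-stage x. The subproblem
#   min_x (x - a_s)^2 + w_s x + (rho/2)(x - xbar)^2
# is solved in closed form by setting its derivative to zero.
a = [1.0, 2.0, 4.0]        # invented scenario targets
p = [0.5, 0.3, 0.2]        # scenario probabilities
rho = 2.0                  # penalty parameter, kept fixed here
w = [0.0] * len(a)         # multipliers, initialized to zero
xbar = sum(pi * ai for pi, ai in zip(p, a))   # initial implementable guess

for _ in range(200):
    # scenario subproblems: 2(x - a_s) + w_s + rho (x - xbar) = 0
    x = [(2 * ai - wi + rho * xbar) / (2 + rho) for ai, wi in zip(a, w)]
    # aggregation into the implementable solution
    xbar = sum(pi * xi for pi, xi in zip(p, x))
    # multiplier update, as in Figure 14
    w = [wi + rho * (xi - xbar) for wi, xi in zip(w, x)]

print(round(xbar, 6))  # -> 1.9, the probability-weighted target
```

At convergence the scenario solutions all agree with x̄, which here is the true optimum Σ p_s a_s of the implementable problem; the multipliers w_s carry the remaining disagreement between the scenarios.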
It has been observed that by solving the scenario problems only very approximately, instead of solving them to optimality, one obtains a method that converges much faster, also in terms of the number of outer iterations. It simply is not wise to solve scenario problems to optimality. Not only can one solve scenario problems approximately; one should solve them approximately.

2.7 Financial Models

Optimization models involving uncertainty have been used for a long time. One of the best-known models is the mean–variance model of Markowitz, for which he was later awarded the Nobel prize in economics. In this section we shall first discuss the main principles behind Markowitz's model. We shall then discuss some of the weaknesses of the model, mostly in light of the subjects of this book, before we proceed to outline later developments in financial modeling.

2.7.1 The Markowitz model

The purpose of the Markowitz model is to help investors distribute their funds in a way that does not represent a waste of money. It is quite clear that when you invest, there is a trade-off between the expected payoff from your investment and the risk associated with it. Normally, the higher the expected payoff, the higher the risk. However, for a given expected payoff you would normally want as little risk as possible, and for a given risk level you would want the expected payoff to be as large as possible. If you, for example, carry a higher risk than necessary for a given expected payoff, you are wasting money, and this is what the Markowitz model is constructed to help you avoid.

But what is risk? It clearly has something to do with the spread of the possible payoffs. A portfolio (a collection of investments) is riskier the higher the spread, all other aspects being equal. In the Markowitz model, risk is measured by the variance of the (random) payoffs from the investment.
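The point of using variance as the risk measure is that it separates payoff distributions that expectation alone cannot distinguish. The payoff distributions below are invented for illustration.

```python
# Two invented payoff distributions with equal expected payoff but very
# different spread; variance (the Markowitz risk measure) tells them apart.
def mean(outcomes):
    return sum(p * v for p, v in outcomes)

def variance(outcomes):
    m = mean(outcomes)
    return sum(p * (v - m) ** 2 for p, v in outcomes)

safe  = [(0.5, 95.0), (0.5, 105.0)]    # (probability, payoff)
risky = [(0.5, 50.0), (0.5, 150.0)]

print(mean(safe), mean(risky))          # -> 100.0 100.0
print(variance(safe), variance(risky))  # -> 25.0 2500.0
```

A risk-averse investor prefers the first portfolio, even though an expected-value criterion sees no difference between the two.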
The model will not tell us in what way we should combine expected payoff with variance, only make sure that we do not waste money. How actually to pick a portfolio is left to other theories, such as utility theory, as discussed briefly in Section 2.4. Financial instruments such as bonds, stocks, options and bank deposits all have random payoffs, although the level of uncertainty varies a lot. Furthermore, the instruments are not statistically independent, but rather strongly correlated. It is obvious that if the value of a 3-year bond increases, so, normally, will the value of, say, a 5-year bond. The correlation is almost, but not quite, perfect. In the same way, stocks from companies in similar sectors of the economy often move together. On the other hand, if energy prices rise internationally, the value of an oil company may increase, whereas the value of an aluminum producer may decrease. If interest rates increase, bonds will normally decrease in value. In other words, we must in this setting operate with dependent random variables. Assume we have n possible investment instruments. Let x_i be the proportion of our funds invested in instrument i, so that Σ_i x_i = 1. Let the payoff of instrument i be ξ̃_i (with ξ̃ = (ξ̃_1, ..., ξ̃_n)), and let V be the variance-covariance matrix of the investments, i.e.

    V_ij = E[(ξ̃_i − E ξ̃_i)(ξ̃_j − E ξ̃_j)].

The variance of a portfolio is now x^T V x, and the mean payoff is x^T E ξ̃. We now solve (letting e be a vector of 1's):
    min  x^T V x
    s.t. x^T E ξ̃ = v,
         e^T x = 1,
         x ≥ 0.                    (7.4)

By solving (7.4) parametrically in v we obtain a curve called the efficient frontier. An example is shown in Figure 15.

Figure 15  Efficient frontier generated by the Markowitz model (mean plotted against variance; points above the curve are infeasible, points below inefficient).

We note that the curve is flat on top, indicating that there is a maximal possible expected payoff, and also that there is a minimal possible variance. Also note that the curve bends backwards, showing that if you want to achieve a lower mean than the one corresponding to the minimal variance, you will have to accept an increased variance. The points below the curve are achievable, but represent a waste of money; the points above are not achievable. Hence we wish to be on the curve, the efficient frontier. There is nothing in the model that tells us where on the curve we ought to be.

2.7.2 Weak aspects of the model

The above model, despite the fact that most investors consider it advanced, has a number of shortcomings that relate to the subject of this book. The first we note is that this is a two-stage model: we make decisions under uncertainty (the investments), we then observe what happens, and finally we obtain payoffs according to what happened. An important question is therefore to what extent the problem is well modeled as a two-stage problem. More and more people, both in industry and academia, tend to think that it is not. The reasons are many; a list of some of them follows. We list them because they represent a valid way of thinking for any decision problem under uncertainty.

• A two-stage (one-period) model cannot treat instruments with different time horizons correctly. If the length of the period is chosen to be short, some instruments will look worse than they are; if it is long, others will suffer.
• A one-period model cannot correctly capture the long-term trade-offs between risk and expectation.
Stocks, for example, are very risky in the short run, and will suffer from a short time period in the model. If the time period is long, stocks will look very good because of their high expected payoffs, despite the risk. This is something we know well from daily life. If you have money and need to put it away for, say, three months, you do not buy stocks. But if you need it in ten years, stocks will be a good alternative. And this difference is not just caused by transaction costs.
• A one-period model cannot capture transaction costs properly. The way we have presented the model, there are no transaction costs at all. As a result, users observe that when the model is applied, it suggests far too much trading, because there is no penalty on changing the portfolio. We could make a new model in Markowitz' spirit with transaction costs (putting a penalty on changes from the present portfolio), but it would be difficult to formulate properly. The reason is that the transaction costs would have to be offset by a better payoff over just one period. In reality, some (but not all) reinvestments have years and years to pay off.
• A one-period model cannot (by definition) capture dynamic aspects (or trends) in the development of the payoffs over time.

In addition to these observations, the model contains a number of implicit assumptions. Let us mention just one. By construction, the model assumes that only the first two moments (mean and variance) are relevant for a portfolio. In other words, skewness and higher moments are disregarded. Most people would argue that this is unrealistic for most instruments. For some instruments, however, there can be no doubt that two moments are insufficient. Consider an option, say the (European) option to buy a share one year from now at price p. Since this is a right but not a duty, the distribution of payoffs from the option will consist of two parts: a point mass at zero and a continuous part.
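This two-part distribution is easy to see in a small simulation. The price model and all numbers below are invented for illustration (a lognormal share price and a strike of p = 100); the only claim is qualitative: a substantial fraction of the payoffs max(S − p, 0) sit exactly at zero, so no two-moment description can be adequate.

```python
import random

# European call payoff: max(S - p, 0).  Share-price model (lognormal) and
# the strike p = 100 are invented for illustration.
random.seed(1)
p = 100.0
prices = [100.0 * random.lognormvariate(0.05, 0.25) for _ in range(10000)]
payoffs = [max(s - p, 0.0) for s in prices]

# fraction of outcomes landing exactly on the point mass at zero
point_mass = sum(1 for y in payoffs if y == 0.0) / len(payoffs)
print("P(payoff = 0) ~", point_mass)
```

With these (invented) parameters roughly 40% of the mass sits at zero, while the rest spreads continuously over positive values: a strongly skewed distribution.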
The point mass comes from all cases where p is higher than the market value one year from now. If p is lower than the market value, the payoff equals the market value minus p. Hence, even if the distribution for the share were fully described by two moments, this cannot be true for the option. Similar statements can be made about other instruments such as forwards, futures and swaps. The result of using Markowitz' model is the efficient frontier. Clearly, if the assumptions behind the model are not correct, then the frontier is not so efficient after all.

2.7.3 More advanced models

In the previous subsection we discussed weak aspects of the mean-variance model. There can be no doubt that the arguments are valid, and that other arguments exist as well. However, all models are exactly that: they are models, and hence they disregard certain aspects of the real phenomenon they are made to represent. A good model is not one that captures every aspect of a problem, but rather one that captures the essential aspects. Hence, the fact that we can argue against the model is in itself not enough to say that the model is not good. More than that is needed. For example, we may demonstrate by using it (or maybe by simulation) that it gives bad results, or we may demonstrate that there is indeed a better alternative. However, weak as a model may be in some respects, it may still be the best available tool in a given situation. When we now turn to discuss new models for portfolio selection, we do so because users and scientists have observed that it is indeed possible to obtain results that, both in theory and in practice, give better decisions. An investment problem of the type discussed by Markowitz normally has either a very long (but finite) time horizon, or it is in fact an infinite time horizon problem.
From a practical point of view, we would not know how to solve it as an infinite horizon problem, so let us assume that the problem has many, but finitely many, time periods. At least in principle, decisions can be made in all periods, so the problem is both multistage and multiperiod.

2.7.3.1 A scenario tree

A possibility in this situation is to represent the uncertainty in terms of a scenario or event tree. This allows for arbitrary dependencies in the payoffs, thereby taking care of the objections we made to the Markowitz model. We can capture the fact that different instruments have different time horizons, and we can allow for trends. In fact, this is a major advantage of using event trees: we can allow for any kind of dependencies.

2.7.3.2 The individual scenario problems

A major advantage of using scenario aggregation on this problem (and many others) is that the individual scenario problems become simple to interpret and, if the underlying optimization problem has structure, that this structure is maintained. In its simplest form, the scenario problems are in this case generalized networks, a problem class for which efficient codes exist. We refer to Chapter 6 for a more detailed look at networks. Figure 16 shows an example of what a scenario problem may look like. This is one of several ways to represent the problem. Periods (stages) run
Figure 16  Network describing possible investments for a single scenario (nodes for bonds, stocks, real estate and cash, over stages 1 to 3).

horizontally. For each stage we first have a column with one node for each instrument; in the example these are four investment categories. The arcs entering from the left bring the initial portfolio into the model, measured as the amount of money held in each category. The arcs that run horizontally between nodes of the same category represent investments held from one period to the next. The node that stands alone in a column represents trading; arcs that run to or from this cash node represent the selling or buying of instruments. A stage consists of one column of four nodes plus the single cash node. We mentioned that this is a generalized network. That means that the amount of money that enters an arc is not the same as the amount that leaves it. For example, if you put money in the bank and the interest rate is 10%, then the flow into the arc is multiplied by 1.1 to produce the flow out. This parameter is called the multiplier of the arc. In this way an investment generally grows over time. For most categories this parameter is uncertain, normally with a mean greater than one. This is how we represent uncertainty in the model. For a given scenario, these multipliers are known. Arcs going to the cash trading node from all nodes except the cash node to its left have multipliers less than 1, representing variable transaction costs. The arcs that leave the cash trading node have the same multipliers as the horizontal arcs for the same investment categories, reduced (deterministically) for variable transaction costs. Fixed transaction costs are not hard to model, but they would produce models that are very difficult to solve. We can also have arcs going backwards in time. They represent borrowing.
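The arc-multiplier mechanics can be sketched in a few lines. The multipliers below (a 1% variable transaction cost and a 6% one-period return) are invented for illustration, and only one path through the network of Figure 16 is followed; in a real scenario problem every arc of the network would carry such a multiplier, with the return multipliers taking their scenario-dependent values.

```python
# A generalized-network arc: flow out = multiplier * flow in.
# Multipliers below are invented for illustration.

def push(amount, multiplier):
    """Send `amount` of money along an arc with the given multiplier."""
    return amount * multiplier

cash = 100.0
cash = push(cash, 0.99)   # cash node -> bonds: 1% variable transaction cost
cash = push(cash, 1.06)   # hold bonds one period: scenario return multiplier
cash = push(cash, 0.99)   # bonds -> cash trading node: transaction cost again
print(cash)               # final cash after the round trip (about 103.89)
```

The round trip gains less than the 6% return precisely because both trading arcs have multipliers below one, which is the mechanism that discourages excessive trading in the multistage model.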
Since you must pay interest on what you borrow (deterministic or stochastic), these arcs have multipliers less than one, meaning that if you want 100 USD now, you must pay back more than 100 USD in a later time period. If we transform all investments into cash in the last period (maybe without transaction costs), a natural objective is to maximize this final value. We have thus set the scene for using scenario aggregation on a financial model. These models appear to be very promising.

2.7.3.3 Practical considerations

First, it is important to observe that, most likely, a realistically sized model will be far too large to solve with today's algorithms and computers. Hence it must be reduced in size, or simplified in some other way. For problem-size reductions, we refer to Section 3.4, where different approaches are discussed. Another possibility is to resort to sampling schemes, ending up with statistical convergence statements. This is discussed to some extent in Sections 3.8 and 3.9. The model we have presented for the flow of funds is also, most likely, too simple. Usually, legal considerations as well as the investment policies of the company will have to be introduced. However, that will be problem dependent, and cannot be discussed here.

2.8 Hydro Power Production

The production of electric power from rivers and reservoirs represents an area where stochastic programming methodology has been used for a long time. The reason is simply that the environment in which planning must take place is very uncertain. In particular, the inflow of water to the rivers and reservoirs varies a lot, both in the short and the long term. This is caused by variation in rainfall, but even more by the uncertainty related to the time of snow melting in the spring. Furthermore, the demand for power is also random, depending on such factors as temperature, the price of oil, and general economic conditions. The actual setting for the planners will vary a lot from country to country.
Norway, with close to 100% of her electricity coming from hydro, is in a very different situation from, for example, France, with its high dependency on nuclear power, or the US, with a more mixed system. In addition, general market regulations will affect modeling. Some countries have strictly regulated markets; others have full deregulation and competition. We shall now present a very simple model for electricity production. As an example, we shall assume that electricity can be sold at fixed prices, which could be interpreted as if we were a small producer in a competitive market. It is worth noting that in many contexts it is necessary to consider price as random as well. In still other contexts, the goal is not at all to maximize profit, but rather to satisfy demand. So there are many variations of this problem. We shall present one in order to illustrate the basics.

2.8.1 A small example

Let us look at a rather simple version of the problem. Let there be two reservoirs, named A and B. The reservoirs are connected by a river, with A being upstream of B. We shall assume that the periods are long enough for water released from reservoir A in a period to reach reservoir B in the same period. This implies either that the reservoirs are close or that the time periods are long. It will be easy to change the model if it is more reasonable to let the water arrive in reservoir B in the next period. We shall also assume that both water released for production and water spilled from reservoir A (purposely or as a result of a full reservoir) will reach reservoir B. Sometimes spilled water is lost. There are three sets of variables. Let

v_ij be the volume of water in reservoir i, i ∈ {A, B}, at the beginning of period j, j ∈ {0, 1, 2, ..., T} (here v_i0 is given as the initial volume in each reservoir),
u_ij be the volume of water released from reservoir i to power station i during period j, and
r_ij be the amount of water spilled from reservoir i in period j.

There is one major set of parameters for the constraints, which we shall eventually interpret as random variables. Let q_ij be the volume of water flowing into reservoir i during period j. Bounds on the variables u_ij and v_ij are also given. They typically represent such things as reservoir size, production capacity and legal restrictions:

    u_ij ∈ [\underline{u}_i, \bar{u}_i]  and  v_ij ∈ [\underline{v}_ij, \bar{v}_ij]  for i ∈ {A, B} and j ∈ {1, 2, ..., T}.

The reservoir balance equations in terms of volumes then become, for i = 1, ..., T:

    v_Ai = v_{A,i−1} + q_Ai − u_Ai − r_Ai,
    v_Bi = v_{B,i−1} + q_Bi + u_Ai + r_Ai − u_Bi − r_Bi.

What we now lack is a description of the objective function plus the end effects. To facilitate that, let
Figure 17  Simple river system with two reservoirs and two plants (inflow enters each reservoir).

c_j denote the value of the electricity generated from one unit of water in period j,
Φ(v_AT, v_BT) denote the value function for water at the final stage, and
φ_i denote the marginal values of water at the final stage (the partial derivatives of Φ).

The objective we wish to maximize then has the form (assuming discounting is contained in c_j)
    Σ_{j=1}^{T} c_j (u_Aj + u_Bj) + Φ(v_AT, v_BT).

The function Φ is very important in this model. The reason is that a major feature of the model is that it distributes water between the periods covered by the model and all later periods. Hence, if Φ underestimates the future value of water, the model will most likely suggest an empty reservoir after stage T, and if it is set too high, the model will do almost nothing but save water. The estimation of Φ, which is normally done in a model with a very long (often infinite) time horizon, has been a subject of research for several decades. Very often it is the partial derivatives φ_i that are estimated, rather than the function itself. Now, if the inflow is random, we can set up an event tree. Most likely, the inflows to the reservoirs are dependent, and if the periods are short, there may also be dependence over time. The model we are left with can, at least in principle, be solved with scenario aggregation.

2.8.2 Further developments

The model shown above is very simplified. Modeling of real systems must take into account a number of other aspects as well. In this section we list some of them, to give you a feeling for what may happen. First, these models are traditionally set in a context where the major goal is to meet demand rather than to maximize profit. In a purely hydro-based system, the goal is then to obtain as much energy as possible from the available water (which of course is still uncertain). In a system with other sources of energy as well, we also have to take into account the cost of these sources, for example natural gas, oil or nuclear power. Obviously, in a model as simple as ours, maximizing the amount of energy obtained from the available water resources makes little sense, as we have (implicitly) assumed that the amount of energy we get from 1 m³ of water is fixed. The reality is normally different. First, the turbines are not equally efficient at all production levels.
They have optimal (below maximal) production levels where the amount of energy per m³ of water is maximized. Generally, the function describing energy production as a result of water usage in a power plant with several turbines is neither convex nor monotone. In particular, the nonconvexity is serious. It stems from physical properties of the turbines. But there is more than that. The energy production also depends on the head (hydrostatic pressure) that applies at a station during a period. It is common to measure water pressure as the height of the water column having the given pressure at its base. This is particularly complicated if the water released from one power plant is submerged in the reservoir of the downstream power plant. In this case the head of the upper station will depend on the reservoir level of the lower station, generating another source of nonconvexities. Traditionally, these models have been solved using stochastic dynamic programming (SDP). This can work reasonably well as long as the dimension of the state space is small. A requirement in stochastic dynamic programming is independence between periods. Hence, if the water inflow in one period (stage) is correlated with that of the previous period(s), the state space must be expanded to contain the inflow in those previous period(s). If this happens, SDP is soon out of business. Furthermore, in deregulated markets it may be necessary to include price as a random variable. Price is correlated with inflow in the present period, but even more with inflow in earlier periods, through the reservoir levels. This creates dependencies that are very hard to tackle in SDP. Hence researchers have turned to other methods, for example scenario aggregation, where dependencies are of no concern. So far, it is not clear how successful this will be.
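The two-reservoir balance equations of Section 2.8.1 are easy to check numerically. In the sketch below every number (inflows q, releases u, spills r, initial volumes, unit values c_j, and the marginal water values φ) is invented for illustration, and the end-of-horizon value Φ is taken, for simplicity, to be linear in the final volumes.

```python
# Simulate the two-reservoir balance equations over T periods and evaluate
# the objective sum_j c_j (u_Aj + u_Bj) + Phi(v_AT, v_BT).
# All data are invented; Phi is assumed linear: Phi(vA, vB) = phi_A vA + phi_B vB.

T = 3
q = {'A': [4.0, 6.0, 2.0], 'B': [1.0, 2.0, 1.0]}   # inflows q_ij
u = {'A': [3.0, 3.0, 3.0], 'B': [4.0, 5.0, 4.0]}   # releases u_ij
r = {'A': [0.0, 0.0, 0.0], 'B': [0.0, 0.0, 0.0]}   # spills r_ij
v = {'A': [10.0], 'B': [10.0]}                      # v_i0: initial volumes
c = [1.0, 1.2, 1.1]                                 # value of one unit of water
phi = {'A': 1.0, 'B': 0.8}                          # marginal water values

for i in range(T):
    # reservoir A receives its own inflow, loses release and spill
    v['A'].append(v['A'][i] + q['A'][i] - u['A'][i] - r['A'][i])
    # reservoir B additionally receives A's release and spill
    v['B'].append(v['B'][i] + q['B'][i] + u['A'][i] + r['A'][i]
                  - u['B'][i] - r['B'][i])

objective = sum(c[j] * (u['A'][j] + u['B'][j]) for j in range(T)) \
    + phi['A'] * v['A'][T] + phi['B'] * v['B'][T]
print(v, objective)
```

Raising φ makes the model value stored water more and would push an optimizer toward saving; lowering it empties the reservoirs, which is exactly the sensitivity of the model to Φ discussed above.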
2.9 The Value of Using a Stochastic Model

We have so far embarked on formulating and solving stochastic programming models without much concern about whether or not that is a worthwhile thing to do. Most decision problems are certainly affected by randomness, but that is not the same as saying that the randomness should be introduced into a model. We all know that the art of modelling amounts to describing the important aspects of a problem and dropping the unimportant ones. We must remember that randomness, although present in the situation, may turn out to be one of the unimportant issues. We shall now briefly outline a few approaches for evaluating the importance of randomness. We shall see that randomness can be (un)important in several different ways.

2.9.1 Comparing the Deterministic and Stochastic Objective Values

The most straightforward way to check whether randomness is unimportant is to compare the optimal objective value of the stochastic model with the corresponding optimal value of the deterministic model (probably produced by replacing all random variables by their means). When we compare the optimal objective values (and also the solutions) in these two cases, we must be aware that what we are observing is composed of several elements. First, while the deterministic solution has one decision for each time period, the stochastic solution "lives" on a tree, as we have discussed in this chapter. The major point here is that the deterministic model has lost all elements of dynamics (it has several time periods, but all decisions are made here and now). Therefore decisions that have elements of options in them will never be of any use. In a deterministic world there is never a need to do something just in case. Secondly, replacing random variables by their means will in itself have an effect, as we shall discuss in much more detail in the next chapter.
Therefore, even if these two models come out with about the same optimal objective value, one does not really know much about whether or not it is wise to work with a stochastic model. The models are simply too different to say much in most situations. From this short discussion, you may have observed that there are really two major issues when solving a model. One is the optimal objective value; the other is the optimal solution. It depends on the situation which of these is more important. Sometimes one's major concern is whether one should do something or not; in other cases the question is not whether one should do something, but what one should do. As we continue, we shall be careful to distinguish these cases.

2.9.2 Deterministic Solutions in the Event Tree

To illustrate this idea we shall use the following example.

Example 2.4 Assume that we have a container that can take up to 10 units, and that we have two possible items that can be put into the container. The items are called A and B, and some of their properties are given in Table 2.
Table 2  Properties of the two items A and B.

Item   Value   Minimum size   Maximum size
A      6       5              8
B      4       3              6

The goal is to fill the container with items as valuable as possible. However, the size of an item is uncertain. For simplicity, we assume that each item can have two different sizes, as given in Table 2. Both sizes occur with the same probability of 0.5. As is always the case with a stochastic model, we must decide how the stages are defined. We shall assume that we must pick an item before we learn its size, and that once it is picked, it must be put into the container. If the container becomes overfull, we incur a penalty of 2 per unit in excess of 10. We have the choice of picking only one item, and the items can be picked in any order. A stochastic decision tree for the problem is given in Figure 18, where we have already folded back and crossed out nonoptimal decisions. We see that the expected value is 7.5. It is obtained by first picking item A, and then, if item A turns out to be small, also picking item B. If item A turns out to be large, we choose not to pick item B. □

If we assume that the event tree (or the stochastic part of the stochastic decision tree) is a fair description of the randomness of the model, the following simple approach gives a reasonable measure of how good the deterministic model really is. Start at the root of the event tree and solve the deterministic model. (Probably this means replacing random variables by their means; however, this approach can be used for any competing deterministic model.) Take the part of the deterministic solution that corresponds to the first stage of the stochastic model, and let it represent an implementable solution in the root of the event tree. Then go to each node at level two of the event tree and repeat the process.
Taking into consideration what has happened in stage 1 (which is different for each node), solve the deterministic model from stage 2 onwards, and use the part of the solution that corresponds to stage 2 as an implementable solution. Continue until you have reached the leaves of the event tree.

Figure 18  Stochastic decision tree for the container problem.

This is a fair comparison, since even people who prefer deterministic models re-solve them as new information becomes available (represented by the event tree). In this setting we can compare both decisions and (expected) optimal objective values. What we may observe is that although the solutions are different, the optimal values are almost the same. If that is the case, we are observing flat objective functions with many (almost) optimal solutions. If we observe large differences in objective values, we have a clear indication that solving a stochastic model is important. Let us return to Example 2.4, and let the following simple deterministic algorithm be an alternative to the stochastic programming approach of Figure 18. Consider all items not yet put into the container. For each item, calculate the value of adding it to the container, given that it has its expected size. If at least one item adds a positive value to the content of the container, pick the one with the highest added value, put it in, and repeat. This is not meant to be a specially efficient algorithm; it is presented only for its simplicity, to help us make a few points. If we apply this algorithm to our case, we see that with an empty container, item A will add 6 to the value of the container and item B will add 4. Hence we pick item A. The algorithm will next determine whether B should be picked or not. However, for the comparison between the deterministic and stochastic approaches, it suffices to observe that item A is picked first. This coincides with the solution in Figure 18. Next we observe the size of A.
If it is small, there is still room for 5 units in the container. Since B has an expected size of 4.5, it will add 4 to the value of the container, and will therefore be picked. On the other hand, if A turns out to be large, there is only room for 2 more units, and B will add 4 − 2.5 × 2 = −1 to the value, so it will not be picked. Again, we get exactly the same solution as in Figure 18. So what have we found out? We have seen that for this problem, with its structure and data, the deterministic approach was as good as the stochastic approach. However, it is not possible to draw any general conclusions from this. In fact, it illustrates a very important point: it is extremely difficult to know whether randomness is important before we have solved the problem and checked the results. But in this special case, anyone claiming that using stochastic decision trees on this problem was like shooting sparrows with cannons will be proved correct.

2.9.3 Expected Value of Perfect Information

For simplicity, assume that we have a two-stage model. Now compare the optimal objective value of the stochastic model with the expected value of the wait-and-see solutions. The latter is calculated by finding the optimal solution for each possible realization of the random variables. Clearly, it is better to know the value of the random variable before making a decision than to have to make the decision before knowing. The difference between these two expected objective values is called the expected value of perfect information (EVPI), since it shows how much one could expect to gain if one were told what would happen before making one's decisions. Another interpretation is that this difference is the most one would be willing to pay for that information. What does it mean to have a large EVPI? Does it mean that it is important to solve a stochastic model? The answer is no!
It shows that randomness plays an important role in the problem, but it does not necessarily show that a deterministic model cannot function well. By resorting to the setup of the previous subsection, we may be able to find that out. We can be quite sure, however, that a small EVPI means that randomness plays a minor role in the model. In the multistage case the situation is basically the same. It is, however, possible to have a very low EVPI, but at the same time have a node far down in the tree with a very high EVPI (but low probability). Let us again turn to Example 2.4. Table 3 shows the optimal solutions for the four cases that can occur if we make the decisions after the true values have become known. Please check that you agree with the numbers.

Table 3  The four possible wait-and-see solutions for the container problem in Example 2.4.

Size of A   Size of B   Solution   Value
5           3           A, B       10
5           6           A, B       8
8           3           A, B       8
8           6           A          6

With each case in Table 3 equally probable, the expected value of the wait-and-see solution is 8, which is 0.5 more than what we found in Figure 18. Hence the EVPI equals 0.5; the value of knowing the true sizes of the items before making decisions is 0.5. This is therefore also the maximal price one would pay for this knowledge. What if we were offered the chance to pay for knowing the size of A or B before making our first pick? In other words, does it help to know the size of, for example, item B before choosing what to do? This is illustrated in Figure 19.

Figure 19  Stochastic decision tree for the container problem when we know the size of B before making decisions.

We see that the EVPI for knowing the size of item B is 0.5, which is the same as that for knowing both A and B. The calculation for item A is left as an exercise.

Example 2.5 Let us conclude this section with another similar example. You are to throw a die twice, and you win 1 if you can guess the total number of eyes from these two throws.
The optimal guess is 7 (if you did not know that already, check it!), which gives you a 1/6 chance of winning; so the expected win is also 1/6. Now you are offered the chance to pay for knowing the result of the first throw. How much would you pay (or alternatively, what is the EVPI for the first throw)? A close examination shows that knowing the result of the first throw does not help at all. Even if you knew it, guessing a total of 7 would still be optimal (though no longer the unique optimal solution), and the probability of winning would still be 1/6. Hence the EVPI for the first stage is zero. Alternatively, you are offered the chance to pay for learning the values of both throws before "guessing". In that case you will of course make a correct guess, and be certain of winning 1. The expected gain therefore increases from 1/6 to 1, so the EVPI for knowing the values of both random variables is 5/6. □

As you see, the EVPI is not a single number for a stochastic program, but can be calculated for any combination of random variables. If only one number is given, it usually means the value of learning everything, in contrast to knowing nothing.
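The numbers in Examples 2.4 and 2.5 can be checked by brute force. The sketch below enumerates all policies for the container problem (values, sizes and the overflow penalty are as given in Table 2) and all guesses for the dice game; it is a verification aid, not a solution method one would use on a real-sized tree.

```python
from itertools import product

# Example 2.4: capacity 10, penalty 2 per unit of excess; sizes each with
# probability 0.5, learned only after the item is picked.
VALUE = {'A': 6, 'B': 4}
SIZES = {'A': (5, 8), 'B': (3, 6)}

def net(chosen_sizes):
    """Total value minus overflow penalty for chosen items with known sizes."""
    total_size = sum(chosen_sizes.values())
    value = sum(VALUE[i] for i in chosen_sizes)
    return value - 2 * max(0, total_size - 10)

def here_and_now(remaining, sizes):
    """Optimal expected value when an item's size is learned after picking it."""
    best = net(sizes)                           # option: stop now
    for item in remaining:
        rest = remaining - {item}
        exp = 0.5 * sum(here_and_now(rest, {**sizes, item: sz})
                        for sz in SIZES[item])
        best = max(best, exp)
    return best

rp = here_and_now({'A', 'B'}, {})               # stochastic-model value
ws = sum(max(net({}), net({'A': a}), net({'B': b}), net({'A': a, 'B': b}))
         for a in SIZES['A'] for b in SIZES['B']) / 4   # wait-and-see value
evpi = ws - rp
print(rp, ws, evpi)                             # rp = 7.5, ws = 8.0, evpi = 0.5

# Example 2.5: best guess for the sum of two dice wins with probability 6/36.
win = max(sum(1 for a, b in product(range(1, 7), repeat=2) if a + b == g)
          for g in range(2, 13)) / 36
print(win)                                      # 1/6; EVPI for both throws is 5/6
```

The recursion reproduces the folded-back tree of Figure 18 (pick A; then pick B only if A was small), and the wait-and-see average reproduces Table 3.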
References

[1] Bellman R. (1957) Dynamic Programming. Princeton University Press, Princeton, New Jersey.
[2] Helgason T. and Wallace S. W. (1991) Approximate scenario solutions in the progressive hedging algorithm. Ann. Oper. Res. 31: 425–444.
[3] Howard R. A. (1960) Dynamic Programming and Markov Processes. MIT Press, Cambridge, Massachusetts.
[4] Nemhauser G. L. (1966) Dynamic Programming. John Wiley & Sons, New York.
[5] Rockafellar R. T. and Wets R. J.-B. (1991) Scenarios and policy aggregation in optimization under uncertainty. Math. Oper. Res. 16: 119–147.
[6] Schaefer M. B. (1954) Some aspects of the dynamics of populations important to the management of the commercial marine fisheries. Inter-Am. Trop. Tuna Comm. Bull. 1: 27–56.
[7] Wallace S. W. and Helgason T. (1991) Structural properties of the progressive hedging algorithm. Ann. Oper. Res. 31: 445–456.
[8] Watson S. R. and Buede D. M. (1987) Decision Synthesis. The Principles and Practice of Decision Analysis. Cambridge University Press, Cambridge, UK.
[9] Wets R. J.-B. (1989) The aggregation principle in scenario analysis and stochastic optimization. In Wallace S. W. (ed) Algorithms and Model Formulations in Mathematical Programming, pages 91–113. Springer-Verlag, Berlin.

3 Recourse Problems
The purpose of this chapter is to discuss principal questions of linear recourse problems. We shall cover general formulations, solution procedures, and bounds and approximations.

Figure 1 shows a simple example from the fisheries area. The assumption is that we know the position of the fishing grounds, and potential locations for plants. The cost of building a plant is known, and so are the distances between grounds and potential plants. The fleet capacity is also known, but quotas, and therefore catches, are only known in terms of distributions. Where should the plants be built, and how large should they be? This is a typical two-stage problem. In the first stage we determine which plants to build (and how big they should be), and in the second stage we catch and transport the fish when the quotas for a given year are known. Typically, quotas can vary as much as 50% from one year to the next.

Figure 1 A map showing potential plant sites and actual fishing grounds for Southern Norway and the North Sea.

3.1 Outline of Structure

Let us formulate a two-stage stochastic linear program. This formulation differs from (4.16) of Chapter 1 only in the randomness in the objective of the recourse problem.

min c^T x + Q(x)
s.t. Ax = b, x ≥ 0,

where

Q(x) = Σ_j p_j Q(x, ξ^j)  and  Q(x, ξ) = min{q(ξ)^T y | W(ξ)y = h(ξ) − T(ξ)x, y ≥ 0},

where p_j is the probability that ξ̃ = ξ^j, the jth realization of ξ̃, h(ξ) = h_0 + Hξ = h_0 + Σ_i h_i ξ_i, T(ξ) = T_0 + Σ_i T_i ξ_i and q(ξ) = q_0 + Σ_i q_i ξ_i.

The function Q(x, ξ) is called the recourse function, and Q(x) therefore the expected recourse function. In this chapter we shall look at only the case with fixed recourse, i.e. the case where W(ξ) ≡ W. Let us repeat a few terms from Section 1.4, in order to prepare for the next section. The cone pos W, mentioned in (4.17) of Chapter 1, is defined by

pos W = {t | t = Wy, y ≥ 0}.

The cone pos W is illustrated in Figure 2. Note that

Wy = h, y ≥ 0 is feasible ⟺ h ∈ pos W.

Figure 2 The cone pos W for a case where W has three rows and four columns.

Recall that a problem has complete recourse if pos W = R^m. Among other things, this implies that h(ξ) − T(ξ)x ∈ pos W for all ξ and all x. But that is definitely more than we need in most cases. Usually, it is more than enough to know that

h(ξ) − T(ξ)x ∈ pos W for all ξ and all x ≥ 0 satisfying Ax = b.

If this is true, we have relatively complete recourse. Of course, complete recourse implies relatively complete recourse.

3.2 The L-shaped Decomposition Method

This section contains a much more detailed version of the material found in Section 1.7.4. In addition to adding more details, we have now added randomness more explicitly, and have also chosen to view some of the aspects from a different perspective. It is our hope that a new perspective will increase the understanding.
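For a discrete distribution, the expected recourse function Q(x) = Σ_j p_j Q(x, ξ^j) can be evaluated by solving one linear program per scenario. The following minimal sketch does this with SciPy on a deliberately tiny fixed-recourse example (all data hypothetical): W = [1, −1] and q = (1, 1), so that Q(x, ξ^j) = |h_j − x| and the recourse is complete.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical fixed-recourse data: second stage is
#   Q(x, xi) = min{ q^T y | W y = h(xi) - T x, y >= 0 }
# with W = [1, -1], q = (1, 1), so Q(x, xi) = |h(xi) - x|.
W = np.array([[1.0, -1.0]])
q = np.array([1.0, 1.0])
T = np.array([[1.0]])
scenarios = [(0.5, 1.0), (0.5, 3.0)]   # (probability p_j, realization h(xi^j))

def Q(x):
    """Expected recourse function: sum_j p_j Q(x, xi^j), one LP per scenario."""
    total = 0.0
    for p, h in scenarios:
        rhs = np.array([h]) - T @ x
        res = linprog(q, A_eq=W, b_eq=rhs, bounds=[(0, None)] * 2)
        assert res.status == 0         # second stage feasible (complete recourse)
        total += p * res.fun
    return total

print(Q(np.array([2.0])))   # 0.5*|1-2| + 0.5*|3-2| = 1.0
```

With many scenarios, evaluating Q(x) this way is exactly what the decomposition methods of this chapter try to organize efficiently.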
3.2.1 Feasibility

The material treated here coincides with step 2(a) in the dual decomposition method of Section 1.7.4. Let the second-stage problem be given by

Q(x, ξ) = min{q(ξ)^T y | Wy = h(ξ) − T(ξ)x, y ≥ 0},

where W is fixed. Assume we are given an x̂ and should like to know if that x̂ yields a feasible second-stage problem for all possible values of ξ̃. We assume that ξ̃ has a rectangular and bounded support. Consider Figure 3. We have there drawn pos W plus a parallelogram that represents all possible values of h_0 + Hξ̃ − T_0 x̂. We have assumed that T(ξ) ≡ T_0, only to make the illustration simpler.

Figure 3 Illustration showing that if infeasibility is to occur for a fixed x̂, it must occur for an extreme point of the support of Hξ̃, and hence of ξ̃. In this example T(ξ) is assumed to be equal to T_0.

Figure 3 should be interpreted as representing a case where H is a 2 × 2 matrix, so that the extreme points of the parallelogram correspond to the extreme points of the support Ξ of ξ̃. This is a known result from linear algebra, namely that if one polyhedron is the image of another under a linear transformation, then each extreme point of the image polyhedron is the image of an extreme point of the original one. What is important to note from Figure 3 is that if the second-stage problem is to be infeasible for some realizations of ξ̃, then at least one of these realizations will correspond to an extreme point of the support. The figure shows such a case. And conversely, if all extreme points of the support produce feasible problems, then all other possible realizations of ξ̃ will also produce feasible problems. Therefore, to check feasibility, we shall in the worst case have to check all extreme points of the support. With k random variables, and Ξ a k-dimensional rectangle, we get 2^k points. Let us define A to be a set containing these points.
In Chapter 5 we shall discuss how we can often reduce the number of points in A without removing the property that if all points in A yield a feasible second-stage problem, so will all other points in the support.

We shall next turn to another aspect of feasibility, namely the question of how to decide if a given x = x̂ will yield feasible second-stage problems for all possible values of ξ̃ in a setting where we are not aware of relatively complete recourse. What we shall outline now corresponds to Farkas' lemma (Proposition 1.19, page 75). Farkas' lemma states that

{y | Wy = h, y ≥ 0} ≠ ∅ if and only if W^T u ≥ 0 implies that h^T u ≥ 0.

The first of these equivalent statements is just an alternative way of saying that h ∈ pos W, which we now know means that h represents a feasible problem. By changing the sign of u, the second of the equivalent statements can be rewritten as

W^T u ≤ 0 implies that h^T u ≤ 0,

or equivalently

h^T t ≤ 0 whenever t ∈ {u | W^T u ≤ 0}.

However, this set may be reformulated as

{u | W^T u ≤ 0} = {u | u^T Wy ≤ 0 for all y ≥ 0} = {u | u^T h ≤ 0 for all h ∈ pos W}.

The last expression defines the polar cone of pos W as

pol pos W = {u | u^T h ≤ 0 for all h ∈ pos W}.

Using Figure 4, we can now restate Farkas' lemma in the following way. The system Wy = h, y ≥ 0, is feasible if and only if the right-hand side h has a nonpositive inner product with all vectors in the cone pol pos W, in particular with its generators. Generators were discussed in Chapter 1 (see e.g. Remark 1.6, page 69). The matrix W*, containing as columns all generators of pol pos W, is called the polar matrix of W. We shall see in Chapter 5 how this understanding can be used to generate relatively complete recourse in a problem that does not possess that property.
For now, we are satisfied by understanding that if we knew all the generators of pol pos W, that is, the polar matrix W*, then we could check feasibility of a second-stage problem by performing a number of inner products (one for each generator); if at least one of them gave a positive value, then we could conclude that the problem was indeed infeasible.

If we do not know all the generators of pol pos W, and we are not aware of relatively complete recourse, we must check for feasibility for a given x̂ and all ξ ∈ A.

Figure 4 The polar of a cone.

We should like to check for feasibility in such a way that if the given problem is not feasible, we automatically come up with a generator of pol pos W. For the discussion, we shall use Figure 5. We should like to find a σ such that

σ^T t ≤ 0 for all t ∈ pos W.

This is equivalent to requiring that σ^T W ≤ 0. In other words, σ should be in the cone pol pos W. But, assuming that the right-hand side h(ξ) − T(ξ)x̂ produces an infeasible problem, we should at the same time require that

σ^T [h(ξ) − T(ξ)x̂] > 0,

because if we later add the constraint σ^T [h(ξ) − T(ξ)x] ≤ 0 to our problem, we shall exclude the infeasible right-hand side h(ξ) − T(ξ)x̂ without leaving out any feasible solutions. Hence we should like to solve

max_σ {σ^T (h(ξ) − T(ξ)x̂) | σ^T W ≤ 0, ‖σ‖ ≤ 1},

where the last constraint has been added to bound σ. We can do that, because otherwise the maximal value would be +∞, and that does not interest us since we are looking for the direction defined by σ. If we had chosen the ℓ2 norm, the maximization would have made sure that σ came as close to h(ξ) − T(ξ)x̂ as possible (see Figure 5). Computationally, however, we should not like to work with quadratic constraints. Let us therefore see what happens if we choose the ℓ1 norm. Let us write our problem differently to see the details better. To do that, we need to replace the unconstrained σ by σ^1 − σ^2, where σ^1, σ^2 ≥ 0. We then get the following:

max{(σ^1 − σ^2)^T (h(ξ) − T(ξ)x̂) | (σ^1 − σ^2)^T W ≤ 0, e^T(σ^1 + σ^2) ≤ 1, σ^1, σ^2 ≥ 0},

where e is a vector of ones. To more easily find the dual of this problem, let us write it down in a more standard format:

max (σ^1 − σ^2)^T (h(ξ) − T(ξ)x̂)          dual variables
s.t. W^T σ^1 − W^T σ^2 ≤ 0                  y
     e^T σ^1 + e^T σ^2 ≤ 1                  t
     σ^1, σ^2 ≥ 0

From this, we find the dual linear program to be

min{t | Wy + et ≥ h(ξ) − T(ξ)x̂, −Wy + et ≥ −(h(ξ) − T(ξ)x̂), y, t ≥ 0}.

Figure 5 Generation of feasibility cuts.

Note that if the optimal value in this problem is zero, we have Wy = h(ξ) − T(ξ)x̂, so that we do indeed have h(ξ) − T(ξ)x̂ ∈ pos W, contrary to our assumption. We also see that if t gets large enough, the problem is always feasible. This is what we solve for all ξ ∈ A. If for some ξ we find a positive optimal value, we have found a ξ for which h(ξ) − T(ξ)x̂ ∉ pos W, and we create the cut

σ^T (h(ξ) − T(ξ)x) ≤ 0 ⟺ σ^T T(ξ)x ≥ σ^T h(ξ).   (2.1)

The σ used here is a generator of pol pos W, but it is not in general as close to h(ξ) − T(ξ)x̂ as possible. This is in contrast to what would have happened had we used the ℓ2 norm. (See Example 3.1 below for an illustration of this point.) Note that if T(ξ) ≡ T_0, the expression σ^T T_0 x in (2.1) does not depend on ξ. Since at the same time (2.1) must be true for all ξ, we can for this special case strengthen the inequality by calculating

σ^T T_0 x ≥ σ^T h_0 + max_{t∈Ξ} σ^T H t.

Since σ^T T_0 is a vector and the right-hand side is a scalar, this can conveniently be written as −γ^T x ≥ δ. The x̂ we started out with will not satisfy this constraint.

Example 3.1 We present this little example to indicate why the ℓ1 and ℓ2 norms give different results when we generate feasibility cuts. The important point is how the two norms limit the possible σ values. The ℓ1 norm is given in the left part of Figure 6, the ℓ2 norm in the right part.

Figure 6 Illustration of the difference between the ℓ1 and ℓ2 norms when generating feasibility cuts.

For simplicity, we have assumed that pol pos W equals the positive quadrant, so that the constraints σ^T W ≤ 0 reduce to σ ≥ 0. Since at the same time ‖σ‖ ≤ 1, we get that σ must be within the shaded part of the two figures. For convenience, let us denote the right-hand side by h, and let σ = (σ_x, σ_y)^T, to reflect the x and y parts of the vector. In this example h = (4, 2)^T. For the ℓ1 norm the problem now becomes

max_σ {4σ_x + 2σ_y | σ_x + σ_y ≤ 1, σ ≥ 0}.

The optimal solution here is σ = (1, 0)^T. Graphically, this can be seen from the figure from the fact that an inner product equals the length of one vector multiplied by the length of the projection of the second vector on the first. If we take the h vector as the fixed first vector, the feasible σ vector with the largest projection on h is σ = (1, 0)^T. For the ℓ2 norm the problem becomes

max_σ {4σ_x + 2σ_y | (σ_x)^2 + (σ_y)^2 ≤ 1, σ ≥ 0}.

The optimal solution here is σ = (2, 1)^T/√5, which is a vector in the same direction as h. In this example we see that if σ is found using the ℓ1 norm, it becomes a generator of pol pos W, but it is not as close to h as possible. With the ℓ2 norm, we did not get a generator, but we got a vector as close to h as possible. □

procedure LP(W: matrix; b, q, y: vectors; feasible: boolean);
begin
  if min{q^T y | Wy = b, y ≥ 0} is feasible then begin
    feasible := true;
    y := the optimal y;
  end
  else feasible := false;
end;

Figure 7 LP solver.

3.2.2 Optimality

The material discussed here concerns step 1(b) of the dual decomposition method in Section 1.7.4. Let us first note that if we have relatively complete recourse, or if we have checked that h(ξ) − T(ξ)x ∈ pos W for all ξ ∈ A, then the second-stage problem

min{q(ξ)^T y | Wy = h(ξ) − T(ξ)x, y ≥ 0}

is feasible. Its dual formulation is given by

max{π^T (h(ξ) − T(ξ)x) | π^T W ≤ q(ξ)^T}.

As long as q(ξ) ≡ q_0, the dual is either feasible or infeasible for all x and ξ, since x and ξ do not enter the constraints. We see that this is more complicated if q is also affected by randomness. But even when ξ enters the objective function, we can at least say that if the dual is feasible for one x and a given ξ, then it is feasible for all x for that value of ξ, since x enters only the objective function. Therefore, from standard linear programming duality, since the primal is feasible, the primal is unbounded if and only if the dual is infeasible, and that would happen for all x for a given ξ if randomness affects the objective function. If q(ξ) ≡ q_0, then it would happen for all x and ξ. Therefore we can check in advance for unboundedness, and this is particularly easy if randomness does not affect the objective function. Note that this discussion relates to Proposition 1.18. Assume we know that our problem is bounded.
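The two optimization problems of Example 3.1 can be reproduced directly. The ℓ1-norm version is a linear program (solved here with SciPy), while the ℓ2-norm maximizer over the unit ball is simply h scaled to unit length:

```python
import numpy as np
from scipy.optimize import linprog

# Example 3.1 setup: pol pos W is the positive quadrant, so sigma^T W <= 0
# reduces to sigma >= 0, and the l1-norm ball gives sigma_x + sigma_y <= 1.
h = np.array([4.0, 2.0])

# l1 norm: a linear program.  linprog minimizes, so negate the objective.
res = linprog(-h, A_ub=[[1.0, 1.0]], b_ub=[1.0], bounds=[(0, None)] * 2)
sigma_l1 = res.x                      # -> (1, 0), a generator of pol pos W

# l2 norm: the maximizer is h scaled to unit length.
sigma_l2 = h / np.linalg.norm(h)      # -> (2, 1)/sqrt(5), parallel to h

print(sigma_l1, sigma_l2)
```

As in the example, the ℓ1 norm returns a vertex of the feasible region, i.e. a generator of the cone, while the ℓ2 norm returns the direction of h itself.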
procedure master(K, L: integer; x̂, θ̂: real; feasible: boolean);
begin
  if L > 0 then begin
    (* Optimality cuts exist: solve the current master with θ = θ⁺ − θ⁻
       and slacks s1, s2 on the cut constraints:
         min c^T x + θ⁺ − θ⁻
         s.t. Ax = b,
              −Γx − s1 = ∆,
              −βx + e(θ⁺ − θ⁻) − s2 = α,
              x, θ⁺, θ⁻, s1, s2 ≥ 0.
       The call to LP returns x̂ and feasible. *)
    if feasible then θ̂ := θ⁺ − θ⁻;
  end
  else begin
    (* No optimality cuts yet: solve
         min c^T x s.t. Ax = b, −Γx − s = ∆, x, s ≥ 0,
       by calling LP, returning x̂ and feasible. *)
    if feasible then θ̂ := −∞;
  end;
end;

Figure 8 Master problem solver for the L-shaped decomposition method.

Now consider

Q(x) = Σ_j p_j Q(x, ξ^j), with Q(x, ξ) = min{q(ξ)^T y | Wy = h(ξ) − T(ξ)x, y ≥ 0}.

It is clear from standard linear programming theory that Q(x, ξ) is piecewise linear and convex in x (for fixed ξ). Provided that q(ξ) ≡ q_0, Q(x, ξ) is also piecewise linear and convex in ξ (for fixed x). (Remember that T(ξ) = T_0 + Σ_i T_i ξ_i.) Similarly, if h(ξ) − T(ξ)x ≡ h_0 − T_0 x, while q(ξ) = q_0 + Σ_i q_i ξ_i, then, from duality, Q(x, ξ) is piecewise linear and concave in ξ. Each linear piece corresponds to a basis (possibly several in the case of degeneracy). Therefore Q(x), being a finite sum of such functions, will also be convex and piecewise linear in x. If, instead of minimizing, we were maximizing, convexity and concavity would change places in these statements.

In order to arrive at an algorithm for our problem, let us now reformulate the latter by introducing a new variable θ:

min c^T x + θ
s.t. Ax = b,
     θ ≥ Q(x),
     −γ_k^T x ≥ δ_k for k = 1, . . . , K,
     x ≥ 0,

procedure feascut(A: set; x̂: real; newcut: boolean; K: integer);
begin
  A′ := A; newcut := false;
  while A′ ≠ ∅ and not (newcut) do begin
    pickξ(A′, ξ); A′ := A′ \ {ξ};
    (* Solve min{t | Wy + et − s1 = h(ξ) − T(ξ)x̂,
                    −Wy + et − s2 = −h(ξ) + T(ξ)x̂,
                    y, t, s1, s2 ≥ 0}
       by calling LP, returning ŷ, t̂ and feasible. *)
    newcut := (t̂ > 0);
    if newcut then begin
      (* Create a feasibility cut—see page 161. *)
      K := K + 1;
      Construct the cut −γ_K^T x ≥ δ_K;
    end;
  end;
end;

Figure 9 Procedure used to find feasibility cuts.

where, as before, Q(x) =
Σ_j p_j Q(x, ξ^j) and Q(x, ξ) = min{q(ξ)^T y | Wy = h(ξ) − T(ξ)x, y ≥ 0}.

Of course, computationally we cannot use θ ≥ Q(x) as a constraint, since Q(x) is only defined implicitly by a large number of optimization problems. Instead, let us for the moment drop it, and solve the above problem without it, simply hoping it will be satisfied (assuming so far that all feasibility cuts −γ_k^T x ≥ δ_k are there, or that we have relatively complete recourse). We then get some x̂ and θ̂ (the first time θ̂ = −∞). Now we calculate Q(x̂), and then check if θ̂ ≥ Q(x̂). If it is, we are done. If not, our x̂ is not optimal—dropping θ ≥ Q(x) was not acceptable. Now

Q(x̂) = Σ_j p_j Q(x̂, ξ^j) = Σ_j p_j q(ξ^j)^T y^j,

where y^j is the optimal second-stage solution yielding Q(x̂, ξ^j). But, owing to linear programming duality, we also have

Σ_j p_j q(ξ^j)^T y^j = Σ_j p_j (π̂^j)^T [h(ξ^j) − T(ξ^j)x̂],

where π̂^j is the optimal dual solution yielding Q(x̂, ξ^j). The constraints in the dual problem are, as mentioned before, π^T W ≤ q(ξ^j)^T, which are independent of x. Therefore, for a general x, and corresponding optimal dual vectors π^j(x), we have

Q(x) = Σ_j p_j (π^j(x))^T [h(ξ^j) − T(ξ^j)x] ≥ Σ_j p_j (π̂^j)^T [h(ξ^j) − T(ξ^j)x],

since π̂^j is feasible but not necessarily optimal, and the dual problem is a maximization problem. Since what we dropped from the constraint set was θ ≥ Q(x), we now add in its place

θ ≥ Σ_j p_j (π̂^j)^T [h(ξ^j) − T(ξ^j)x] = α + β^T x, or −β^T x + θ ≥ α.

Since there are finitely many feasible bases coming from the matrix W, there are only finitely many such cuts.

We are now ready to present the basic setting of the L-shaped decomposition algorithm. It is shown in Figure 10. To use it, we shall need a procedure that solves LPs; it can be found in Figure 7. Also, to avoid too complicated expressions, we have defined a special procedure for solving the master problem; see Figure 8. Furthermore, we refer to procedure pickξ(A, ξ), which simply picks an element ξ from the set A, and, finally, we use procedure feascut, which is given in Figure 9. The set A was defined on page 162. In the algorithms, let −Γx ≥ ∆ represent the K feasibility cuts −γ_k^T x ≥ δ_k, and let −βx + Iθ ≥ α represent the L optimality cuts −β_l^T x + θ ≥ α_l. Furthermore, let e be a column of 1s of appropriate size.

procedure Lshaped;
begin
  K := 0; L := 0; θ̂ := −∞;
  LP(A, b, c, x̂, feasible);
  stop := not (feasible);
  while not (stop) do begin
    feascut(A, x̂, newcut, K);
    if not (newcut) then begin
      Find Q(x̂);
      stop := (θ̂ ≥ Q(x̂));
      if not (stop) then begin
        (* Create an optimality cut—see page 168. *)
        L := L + 1;
        Construct the cut −β_L^T x + θ ≥ α_L;
      end;
    end;
    if not (stop) then begin
      master(K, L, x̂, θ̂, feasible);
      stop := not (feasible);
    end;
  end;
end;

Figure 10 The L-shaped decomposition algorithm.

The example in Figure 11 can be useful in understanding the L-shaped decomposition algorithm. The first five solutions and cuts are shown. The initial x̂^1 was chosen arbitrarily. Cuts 1 and 2 are feasibility cuts, and the rest optimality cuts; θ̂^1 = θ̂^2 = θ̂^3 = −∞. To see if you understand this, try to find (x̂^6, θ̂^6), cut 6 and then the final optimal solution.

Figure 11 Example of the progress of the L-shaped decomposition algorithm.

3.3 Regularized Decomposition

As mentioned at the end of Section 1.7.4, the recourse problem (for a discrete distribution) looks like

min c^T x + Σ_{i=1}^K p_i (q^i)^T y^i
s.t. Ax = b,
     T^i x + W y^i = h^i, i = 1, · · · , K,      (3.1)
     x ≥ 0,
     y^i ≥ 0, i = 1, · · · , K.
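The optimality cut θ ≥ α + β^T x derived in Section 3.2.2 can be assembled numerically from the scenario duals. The sketch below (all data hypothetical) reuses the one-dimensional second stage Q(x, ξ^j) = |h_j − x| with T = 1, solves each scenario's dual LP explicitly, and verifies by strong duality that the cut is tight at x̂:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical one-dimensional example: the second stage is
#   Q(x, xi^j) = min{ y1 + y2 | y1 - y2 = h_j - x, y >= 0 } = |h_j - x|,
# with T(xi) = 1 fixed.  We build the cut  theta >= alpha + beta * x.
q = np.array([1.0, 1.0])
W = np.array([[1.0, -1.0]])
scenarios = [(0.5, 1.0), (0.5, 3.0)]           # (p_j, h_j)
x_hat = 0.0

alpha, beta, Q_hat = 0.0, 0.0, 0.0
for p, h in scenarios:
    rhs = np.array([h - x_hat])
    # Optimal dual pi_hat^j of the second stage: max{pi^T rhs | W^T pi <= q}.
    dual = linprog(-rhs, A_ub=W.T, b_ub=q, bounds=[(None, None)])
    pi = dual.x[0]
    alpha += p * pi * h                        # sum_j p_j (pi_hat^j)^T h_j
    beta += p * pi * (-1.0)                    # -sum_j p_j T^T pi_hat^j, T = 1
    Q_hat += p * -dual.fun                     # strong duality: dual value = Q

print(Q_hat, alpha + beta * x_hat)             # cut is tight at x_hat: both 2.0
```

Because the π̂^j remain dual feasible for every x, the resulting affine function α + β^T x underestimates Q(x) everywhere and coincides with it at x̂, which is exactly what the L-shaped algorithm exploits.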
To use the multicut method mentioned in Section 1.7.4, we simply have to introduce feasibility and optimality cuts for all the recourse functions

f_i(x) := min{(q^i)^T y^i | W y^i = h^i − T^i x, y^i ≥ 0}, i = 1, · · · , K,

until the overall procedure has converged. In general, with the notation of the previous section, these cuts have the form

γ^T x + δ ≤ 0, where γ = −T^T σ, δ = h^T σ,   (3.2)
γ^T x + δ ≤ θ, where γ = −T^T π̂, δ = f(x̂) − γ^T x̂,   (3.3)

where (3.2) denotes a feasibility cut and (3.3) denotes an optimality cut, the σ and π̂ resulting from step 2 of the dual decomposition method of Section 1.7.4, as explained further in Section 3.2. Of course, the matrix T and the right-hand-side vector h will vary, depending on the block i for which the cut is derived. One cycle of a multicut solution procedure for problem (3.1) looks as follows. Let B_1^i = {(x, θ_1, · · · , θ_K) | · · ·}, i = 1, · · · , K, be feasible for the cuts generated so far for block i (obviously for block i restricting only (x, θ_i)). Given B_0 = {(x, θ) | Ax = b, x ≥ 0, θ ∈ R^K} and the sets B_1^i, solve the master program

min{c^T x + Σ_{i=1}^K p_i θ_i | (x, θ_1, · · · , θ_K) ∈ B_0 ∩ ⋂_{i=1}^K B_1^i},   (3.4)

yielding (x̂, θ̂_1, · · · , θ̂_K) as a solution. With this solution, try to construct further cuts for the blocks.

• If there are no further cuts to generate, then stop (optimal solution);
• otherwise, repeat the cycle.

The advantage of a method like this lies in the fact that we obviously make use of the particular structure of problem (3.1), in that in the master program we have to deal with only n + K variables instead of n + Σ_i n_i, if y^i ∈ R^{n_i}. The drawback is easy to see as well: we may have to add very many cuts, and so far we have no reliable criterion to drop cuts that are obsolete for further iterations. Moreover, initial iterations are often inefficient. This is not surprising, since in the master (3.4) we deal only with

θ_i ≥ max_{j∈J_i} [(γ^{ij})^T x + δ_{ij}],

for J_i denoting the set of optimality cuts generated so far for block i with the related dual basic solutions π̂^{ij} according to (3.3), and not, as we intend to, with

θ_i ≥ f_i(x) = max_{j∈Ĵ_i} [(γ^{ij})^T x + δ_{ij}],

where Ĵ_i enumerates all dual feasible basic solutions for block i. Hence we are working in the beginning with a piecewise linear convex function (max_{j∈J_i} [(γ^{ij})^T x + δ_{ij}]) supporting f_i(x) that does not sufficiently reflect the shape of f_i (see e.g. Figure 26 of Chapter 1, page 78). The effect may be—and often is—that even if we start a cycle with an (almost) optimal first-stage solution x* of (3.1), the first-stage solution x̂ of the master (3.4) may be far away from x*, and it may take many further cycles to come back towards x*. The reason for this is now obvious: if the set of available optimality cuts, J_i, is a small subset of the collection Ĵ_i, then the piecewise linear approximation of f_i(x) may be inadequate near x*.

Therefore it seems desirable to modify the master program in such a way that, when starting with some overall feasible first-stage iterate z^k, its solution x^k does not move too far away from z^k. Thereby we can expect to improve the approximation of f_i(x) by an optimality cut for block i at x^k. This can be achieved by introducing into the objective of the master the term ‖x − z^k‖^2, yielding a so-called regularized master program

min{(1/(2ρ))‖x − z^k‖^2 + c^T x + Σ_{i=1}^K p_i θ_i | (x, θ_1, · · · , θ_K) ∈ B_0 ∩ ⋂_{i=1}^K B_1^i},   (3.5)

with a control parameter ρ > 0. To avoid too many constraints in (3.5), let us start with some z^0 ∈ B_0 such that f_i(z^0) < ∞ for all i, and with G_0 the feasible set defined by the first-stage equations Ax = b and all optimality cuts at z^0. Hence we start (for k = 0) with the reduced regularized master program

min{(1/(2ρ))‖x − z^k‖^2 + c^T x + Σ_{i=1}^K p_i θ_i | (x, θ_1, · · · , θ_K) ∈ G_k}.   (3.6)

Observe that the objective of (3.6) implicitly contains the function¹

F̂(x) = c^T x + min_θ {p^T θ | (x, θ) ∈ G_k},

which, according to the above discussion, is a piecewise linear convex function supporting from below our original piecewise linear objective

F(x) = c^T x + p^T f(x) = c^T x + Σ_i p_i f_i(x).

Excluding by assumption degeneracy in the constraints defining G_k, a point (x, θ) ∈ R^{n+K} is a vertex, i.e. a basic solution, of G_k iff (including the first-stage equations Ax = b) exactly n + K constraints are active (i.e. satisfied as equalities), owing to the simple fact that a point in R^{n+K} is uniquely determined by the intersection of n + K independent hyperplanes.² In the following we sometimes want to check whether at a certain overall feasible x̂ ∈ R^n the support function F̂ has a kink, which in turn implies that for θ̂ ∈ arg min_θ {p^T θ | (x̂, θ) ∈ G_k}, the point (x̂, θ̂) is a vertex of G_k. Hence we have to check whether at (x̂, θ̂) exactly n + K constraints are active.

Having solved (3.6) with a solution x^k, and x^k not being overall feasible, we just add the violated constraints (either x_i ≥ 0 from the first stage or the necessary feasibility cuts from the second stage) and re-solve (3.6). If x^k is overall feasible, we have to decide whether we maintain the candidate solution z^k or whether we replace it by x^k. As shown in Figure 12, there are essentially three possibilities:

• F̂(x^k) = F(x^k), i.e. the supporting function coincides at x^k with the true objective function (see x^1 in Figure 12);
• F̂(x^k) < F(x^k), but at x^k there is a kink of F̂, and the decrease of the true objective from z^k to x^k is 'substantial' as compared with the decrease F̂(x^k) − F̂(z^k) = F̂(x^k) − F(z^k) (< 0) (we have F̂(z^k) = F(z^k) in view of the overall feasibility of z^k); more precisely, for some fixed µ ∈ (0, 1), F(x^k) − F(z^k) ≤ (1 − µ)[F̂(x^k) − F(z^k)] (see x^2 in Figure 12 with µ = 0.75);
• neither of the two above situations arises (see x^3 in Figure 12).
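The pull of the proximal term in (3.6) toward z^k can be illustrated with a tiny one-dimensional sketch (all numbers hypothetical). With only a few cuts, the unregularized master runs far from the candidate, while the regularized master stays nearby:

```python
import numpy as np

# One-dimensional sketch of the regularized master (3.6), hypothetical data.
# F_hat(x) = max_j (a_j + b_j x) is built from two optimality cuts; the
# regularized version adds the proximal term (1/(2*rho)) * (x - z)^2.
cuts = [(0.0, -1.0), (-30.0, 2.0)]     # cuts: theta >= a_j + b_j * x
z, rho = 1.0, 0.5                      # current candidate and control parameter

xs = np.linspace(-10.0, 10.0, 20001)   # crude grid "solver", for illustration only
F_hat = np.max([a + b * xs for a, b in cuts], axis=0)

plain = xs[np.argmin(F_hat)]                              # unregularized master
reg = xs[np.argmin(F_hat + (xs - z) ** 2 / (2 * rho))]    # regularized master

print(plain, reg)   # plain jumps to x = 10.0; regularized stays at x = 1.5
```

With these two cuts the unregularized minimizer lies at the distant kink x = 10, whereas the proximal term keeps the regularized minimizer at x = 1.5, close to z = 1, so the next cut is generated where the approximation of F actually needs improving.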
¹ Here p = (p_1, · · · , p_K)^T.
² Recall that in R^{n+K} never more than n + K independent hyperplanes intersect at one point.

Figure 12 Keeping or changing the candidate solutions in QDECOM.

In these cases we should decide respectively:

• z^2 := x^1, observing that no cut was added, and therefore keeping z^1 unchanged would block the procedure;
• z^3 := x^2, realizing that x^2 is "substantially" better than z^2—in terms of the original objective—and that at the same time F̂ has a kink at x^2, such that we might intuitively expect—thus clearly making use of a heuristic argument—to make a good step forward towards the optimal kink of the true objective;
• z^4 := z^3, since—neither rationally nor heuristically—can we see any convincing reason to change the candidate solution. Hence it seems preferable to first improve the approximation of F by F̂ by introducing the necessary optimality cuts.

After these considerations, motivating the measures to be taken in the various steps, we want to formulate precisely one cycle of the regularized decomposition method (RD), which, with

F(x) := c^T x + Σ_{i=1}^K p_i f_i(x)

and µ ∈ (0, 1), is described as follows.

Step 1 Solve (3.6) at z^k, getting x^k as first-stage solution and θ^k = (θ_1^k, · · · , θ_K^k)^T as recourse approximates. If, for F̂_k := c^T x^k + p^T θ^k, we have F̂_k = F(z^k), then stop (z^k is an optimal solution of (3.1)). Otherwise, go to step 2.
Step 2 Delete from (3.6) some constraints that are inactive at (x^k, θ^k), such that no more than n + K constraints remain.
Step 3 If x^k satisfies the first-stage constraints (i.e. x^k ≥ 0), then go to step 4; otherwise, add to (3.6) no more than K violated (first-stage) constraints, yielding the feasible set G_{k+1}, put z^{k+1} := z^k, k := k + 1, and go to step 1.
Step 4 For i = 1, · · · , K solve the second-stage problems at x^k, and
 (a) if f_i(x^k) = ∞, then add to (3.6) a feasibility cut;
 (b) otherwise, if f_i(x^k) > θ_i^k, then add to (3.6) an optimality cut.
Step 5 If f_i(x^k) = ∞ for at least one i, then put z^{k+1} := z^k and go to step 7. Otherwise, go to step 6.
Step 6 If F(x^k) = F̂_k, or else if F(x^k) ≤ µF(z^k) + (1 − µ)F̂_k and exactly n + K constraints were active at (x^k, θ^k), then put z^{k+1} := x^k; otherwise, put z^{k+1} := z^k.
Step 7 Determine G_{k+1} as resulting from G_k after deleting and adding constraints due to step 2 and step 4 respectively. With k := k + 1, go to step 1.

It can be shown that this algorithm converges in finitely many steps. The parameter ρ can be controlled during the procedure so as to increase it whenever steps (i.e. x^k − z^k) seem too short, and to decrease it when F(x^k) > F(z^k).

3.4 Bounds

Section 3.2 was devoted to the L-shaped decomposition method. We note that the deterministic methods very quickly run into dimensionality problems with respect to the number of random variables: with much more than 10 random variables, we are in trouble. This section discusses bounds on stochastic problems. These bounds can be useful and interesting in their own right, or they can be used as subproblems in larger settings.

An example of where we might need to bound a problem, and where this problem is not a subproblem, is the following. Assume that a company is facing a decision problem. The decision itself will be made next year, and at that time all parameters describing the problem will be known. However, today a large number of relevant parameters are unknown, so it is difficult to predict how profitable the operation described by the decision problem will actually be. It is desired to know the expected profitability of the operation. The reason is that, for planning purposes, the firm needs to know the expected activities and profits for the next year. Given the large number of uncertain parameters, it is not possible to calculate the exact expected value. However, using bounding techniques it may be possible to identify an interval that contains the expected value. Technically speaking, one needs to find the expected value of the "wait-and-see" solution discussed in Chapter 1, and also in Example 2.4. Another example, which we shall see later in Section 6.6, is that of calculating the expected project duration time in a project consisting of activities with random durations.

Bounding methods are also useful if we wish to use deterministic decomposition methods (such as the L-shaped decomposition method or scenario aggregation) on problems with a large number of random variables. That will be discussed later in Section 3.5.2. One alternative to bounding involves the development of approximations using stochastic methods. We shall outline two of them later; they are called stochastic decomposition (Section 3.8) and stochastic quasi-gradient methods (Section 3.9). As discussed above, bounds can be used either to approximate the expected value of some linear program or to bound the second-stage problem in a two-stage problem. These two settings are principally the same, and we shall therefore consider the problem of finding the expected value of a linear program.
We shall discuss this in terms of a function φ(ξ), which in the two-stage case represents Q(x̂, ξ) for a fixed x̂. To illustrate, we shall look at the refinery example of Section 1.3. The problem is repeated here for convenience:

φ(ξ) = "min" {2x_raw1 + 3x_raw2}
s.t. x_raw1 + x_raw2 ≤ 100,
     2x_raw1 + 6x_raw2 ≥ 180 + ξ_1,      (4.1)
     3x_raw1 + 3x_raw2 ≥ 162 + ξ_2,
     x_raw1 ≥ 0,
     x_raw2 ≥ 0,

where both ξ_1 and ξ_2 are normally distributed with mean 0. As discussed in Section 1.3, we shall look at the 99% intervals for both (as if those were the supports). This gives us ξ_1 ∈ [−30.91, 30.91] and ξ_2 ∈ [−23.18, 23.18]. The interpretation is that 100 is the production limit of a refinery, which refines crude oil from two countries. The variable x_raw1 represents the amount of crude oil from Country 1 and x_raw2 the amount from Country 2. The qualities of the crude oils are different, so one unit of crude oil from Country 1 gives two units of Product 1 and three units of Product 2, whereas the crude oil from the second country gives 6 and 3 units of the same products. Company 1 wants at least 180 + ξ_1 units of Product 1 and Company 2 at least 162 + ξ_2 units of Product 2.

Figure 13 Two possible lower bounding functions.

The goal now is to find the expected value of φ(ξ); in other words, we seek the expected value of the "wait-and-see" solution. Note that this interpretation is not the one we adopted in Section 1.3.

3.4.1 The Jensen Lower Bound

Assume that q(ξ) ≡ q_0, so that randomness affects only the right-hand side. The purpose of this section is to find a lower bound on Q(x̂, ξ) for fixed x̂, and for that purpose we shall, as just mentioned, use φ(ξ) ≡ Q(x̂, ξ) for a fixed x̂. Since φ(ξ) is a convex function, we can bound it from below by a linear function L(ξ) = cξ + d.
Since the goal will always be to find a lower bound that is as large as possible, we shall require that the linear lower bound be tangent to φ(ξ) at some point ξ̂. Figure 13 shows two examples of such lower-bounding functions. But which one should we pick: is L1(ξ) or L2(ξ) the better? If we let the lower-bounding function L(ξ) be tangent to φ(ξ) at ξ̂, the slope must be φ'(ξ̂), and we must have

  φ(ξ̂) = φ'(ξ̂) ξ̂ + b,

since φ(ξ̂) = L(ξ̂). Hence, in total, the lower-bounding function is given by

  L(ξ) = φ(ξ̂) + φ'(ξ̂)(ξ − ξ̂).

Since this is a linear function, we easily calculate the expected value of the lower-bounding function:

  E L(ξ̃) = φ(ξ̂) + φ'(ξ̂)(E ξ̃ − ξ̂) = L(E ξ̃).

In other words, we find the expected lower bound by evaluating the lower-bounding function at E ξ̃. From this it is easy to see that we obtain the best (largest) lower bound by letting ξ̂ = E ξ̃. This can be seen not only from the fact that no linear function that supports φ(ξ) can have a value larger than φ(E ξ̃) at E ξ̃, but also from the following simple differentiation:

  d/dξ̂ L(E ξ̃) = φ'(ξ̂) − φ'(ξ̂) + φ''(ξ̂)(E ξ̃ − ξ̂).

If we set this equal to zero, we find that ξ̂ = E ξ̃. What we have developed is the so-called Jensen lower bound, or the Jensen inequality.

Proposition 3.1  If φ(ξ) is convex over the support of ξ̃, then

  E φ(ξ̃) ≥ φ(E ξ̃).

This best lower bound is illustrated in Figure 14. We can see that the Jensen lower bound can be viewed in two different ways. First, it can be seen as a bound where the distribution is replaced by its mean and the problem itself is unchanged; this is when we calculate φ(E ξ̃). Secondly, it can be viewed as a bound where the distribution is left unchanged and the function is replaced by an affine function, represented by a straight line; this is when we integrate L(ξ) over the support of ξ̃. Depending on the given situation, both these views can be useful.
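Proposition 3.1 and the supporting-line view can be checked numerically. The convex function and the discrete distribution in the sketch below are illustrative assumptions, not data from the text:

```python
# Numerical check of the Jensen inequality (Proposition 3.1).
# The convex stand-in for phi and the discrete distribution below are
# illustrative assumptions, not data from the refinery example.

def phi(xi):
    return (xi - 1.0) ** 2 + 2.0   # convex, plays the role of Q(x_hat, xi)

outcomes = [-2.0, 0.0, 1.0, 3.0]
probs = [0.1, 0.4, 0.3, 0.2]

e_xi = sum(p * o for p, o in zip(probs, outcomes))       # E xi = 0.7
e_phi = sum(p * phi(o) for p, o in zip(probs, outcomes))

jensen = phi(e_xi)   # first view: replace the distribution by its mean
assert jensen <= e_phi

# Second view: keep the distribution, replace phi by its supporting line
# at E xi; being linear, the line integrates to exactly phi(E xi).
slope = 2.0 * (e_xi - 1.0)          # phi'(E xi) for this particular phi
L = lambda xi: phi(e_xi) + slope * (xi - e_xi)
e_L = sum(p * L(o) for p, o in zip(probs, outcomes))

print(jensen, e_L, e_phi)   # jensen and e_L agree; both are below e_phi
```

Both views give the same number, as the derivation above shows, and the gap to E φ(ξ̃) is the error of the bound.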
There is even a third interpretation, which we shall see used later in the stochastic decomposition method. Assume we first solve the dual of φ(E ξ̃) to obtain an optimal basis B. Since ξ does not enter the constraints of the dual of φ, this basis is dual feasible for all possible values of ξ. Assume now that we solve the dual version of φ(ξ) for all ξ, but constrain our optimization so that we are allowed to use only the given basis B. In such a setting, we might claim that we use the correct function and the correct distribution, but optimize only in an approximate way. (In stochastic decomposition we use not one, but a finite number of bases.) The Jensen lower bound can in this setting be interpreted as representing approximate optimization using the correct problem and the correct distribution, but only one dual feasible basis. It is worth pointing out that these interpretations of the Jensen lower bound are put forward to help you see how a bound can be interpreted in different ways, and that these interpretations can lead you in different directions when trying to strengthen the bound. An interpretation is not necessarily motivated by computational efficiency. Looking back at our example in (4.1), we find the Jensen lower bound by calculating φ(E ξ̃) = φ(0). That has been solved already in Section 1.3, where we found that φ(0) = 126.

3.4.2 The Edmundson–Madansky Upper Bound

Again let ξ̃ be a random variable with support Ξ = [a, b], and assume that q(ξ) ≡ q0. As in the previous section, we define φ(ξ) = Q(x̂, ξ). (Remember that x is fixed at x̂.) Consider Figure 14, where we have drawn a linear function U(ξ) between the two points (a, φ(a)) and (b, φ(b)). The line is clearly above φ(ξ) for all ξ ∈ Ξ. This straight line also has the form cξ + d, and since we know two of its points, we can calculate

  c = [φ(b) − φ(a)] / (b − a),    d = [b φ(a) − a φ(b)] / (b − a).

We can now integrate, and find (using the linearity of U(ξ))

  E U(ξ̃) = [φ(b) − φ(a)]/(b − a) · E ξ̃ + [b φ(a) − a φ(b)]/(b − a)
          = φ(a) (b − E ξ̃)/(b − a) + φ(b) (E ξ̃ − a)/(b − a).

In other words, if we have a function that is convex in ξ over a bounded support Ξ = [a, b], it is possible to replace an arbitrary distribution by a two-point distribution such that we obtain an upper bound. The important parameter is

  p = (E ξ̃ − a) / (b − a),

so that we can replace the original distribution with

  P{ξ̃ = a} = 1 − p,   P{ξ̃ = b} = p.     (4.2)

As for the Jensen lower bound, we have now shown that the Edmundson–Madansky upper bound can be seen as either changing the distribution and keeping the problem, or changing the problem and keeping the distribution. Looking back at our example in (4.1), we have two independent random variables. Hence we have 2² = 4 LPs to solve to find the Edmundson–Madansky upper bound. Since both distributions are symmetric, the probabilities attached to these four points will all be 0.25. Calculating this, we find an upper bound of
  (1/4)(106.6825 + 129.8625 + 122.1375 + 145.3175) = 126.

Figure 14  The Jensen lower bound and the Edmundson–Madansky upper bound in a minimization problem. Note that x is fixed.

This is exactly the same as the lower bound, and hence it is the true value of E φ(ξ̃). We shall shortly comment on this situation where the bounds turn out to be equal. In higher dimensions, the Jensen lower bound corresponds to a hyperplane, while the Edmundson–Madansky bound corresponds to a more general polynomial. A two-dimensional illustration of the Edmundson–Madansky bound is given in Figure 15. Note that if we fix the value of all but one of the variables, we get a linear function; the polynomial is therefore generated by straight lines. From the viewpoint of computations, we do not have to deal with this general polynomial. Instead, we take one (independent) random variable at a time and calculate (4.2). This way we end up with 2 possible values for each random variable, and hence 2^k possible values of ξ for which we have to evaluate the recourse function. Assume that the function φ(ξ) in Figure 14 is linear. Then it appears from the figure that both the Jensen lower bound and the Edmundson–Madansky upper bound are exact. This is indeed a correct observation: both bounds are exact whenever the function is linear, and, in particular, this means that if the function is linear, the error is zero. In the example (4.1) used to illustrate the Jensen and Edmundson–Madansky bounds, we observed that the bounds were equal. This shows that the function φ(ξ) is linear over the support we used. One special use of the Jensen lower bound and the Edmundson–Madansky upper bound is worth mentioning.
Assume we have a random vector containing a number of independent random variables, and a function that is convex with respect to that random vector, but the random vector either has a continuous distribution, or a discrete distribution with a very large number of outcomes. In both cases we might have to simplify the distribution before making any attempt to attack the problem.

Figure 15  Illustration of the Edmundson–Madansky upper bound in two dimensions. The function itself is not drawn. The Jensen lower bound, which is simply a plane, is also not drawn.

The principle we are going to use is as follows. Take one random variable at a time. First partition the support of the variable into a finite number of intervals. Then apply the principle of the Edmundson–Madansky bound on one interval at a time. Since we are inside an interval, we use conditional distributions rather than the original one. This will in effect replace the distribution over the interval by a distribution that has probability mass only at the end points. This is illustrated in Figure 16, where we have shown the case of one random variable. The support of ξ̃ has been partitioned into two parts, called cells. For each of these cells, we have drawn the straight lines corresponding to the Jensen lower bound and the Edmundson–Madansky upper bound. Corresponding to each cell, there is a one-point distribution that gives a lower bound, and a two-point distribution that gives an upper bound, just as we have outlined earlier. If the random variables have continuous (but bounded) distributions, we use these conditional bounds to replace the original distribution with discrete distributions. If the distribution is already discrete, we can remove some of the outcomes by using the Edmundson–Madansky inequality conditionally on parts of the support, again pushing probability mass to the end points of the intervals.
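This conditional discretization can be sketched in a few lines. The sketch below assumes a hypothetical one-dimensional uniform distribution with arbitrary cut points; each cell contributes its conditional mean (a one-point distribution, for the lower bound) and its end points with the weights from (4.2) (a two-point distribution, for the upper bound):

```python
# Sketch of the conditional Jensen / Edmundson-Madansky discretization for
# one random variable, assumed uniform on [a, b]; the cut points are
# arbitrary illustrative choices.

def discretize(a, b, cuts):
    edges = [a] + sorted(cuts) + [b]
    lower, upper = [], []                  # lists of (outcome, probability)
    for lo, hi in zip(edges, edges[1:]):
        p_cell = (hi - lo) / (b - a)       # probability of landing in cell
        mean = 0.5 * (lo + hi)             # conditional expectation (uniform)
        lower.append((mean, p_cell))       # Jensen: mass at conditional mean
        p = (mean - lo) / (hi - lo)        # E-M weight on right end point
        upper.append((lo, p_cell * (1 - p)))
        upper.append((hi, p_cell * p))
    return lower, upper

lower, upper = discretize(-1.0, 1.0, cuts=[0.0])

# For any convex f, the "lower" distribution underestimates E f and the
# "upper" one overestimates it; check with f(x) = x**2 (true value 1/3).
f = lambda x: x * x
low = sum(p * f(x) for x, p in lower)
up = sum(p * f(x) for x, p in upper)
print(lower, upper, low, up)
```

Both simplified distributions keep the overall mean of the original one, which is exactly why they bracket the expectation of any convex function.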
Of course, the Jensen inequality can be used in the same way to construct conditional lower bounds. The point of these changes is not to create bounds per se, but to simplify distributions in such a way that we have control over what we have done to the problem when simplifying. The idea is outlined in Figure 17. Whatever the original distribution was, we now have two distributions: one giving an overall lower bound, the other an overall upper bound. Since the random variables in the vector were assumed to be independent, this operation has produced discrete distributions for the random vector as well.

Figure 16  Illustration of the effect on the Jensen lower bound and the Edmundson–Madansky upper bound of partitioning the support into two cells.

Figure 17  Simplifying distributions by using Jensen and Edmundson–Madansky on subintervals of the support. The stars represent conditional expectations, and hence a distribution resulting in a lower bound. The bars are end points of intervals, representing a distribution yielding an upper bound.

3.4.3 Combinations

If we have randomness in the objective function, but not in the right-hand side (so h(ξ) − T(ξ)x ≡ h0 − T0 x), then, by simple linear programming duality, we can obtain the dual of Q(x, ξ) with all randomness again in the right-hand side, but now in a setting of maximization. In such a setting the Jensen bound is an upper bound and the Edmundson–Madansky bound a lower bound. If we have randomness in both the objective and the right-hand side, and the random variables affecting these two positions are different and independent, then we get a lower bound by applying the Jensen rule to the right-hand-side random variables and the Edmundson–Madansky rule in the objective. If we do it the other way around, we get an overall upper bound.
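The two bounds for the refinery example (4.1) can be reproduced without an LP solver. Over the 99% box used here, the bounds coinciding at 126 indicates that one basis stays optimal (both demand constraints binding, the capacity constraint slack); the closed form for φ below rests on that assumption:

```python
# Jensen and Edmundson-Madansky bounds for the refinery example (4.1).
# Assumption: over the box xi1 in [-30.91, 30.91], xi2 in [-23.18, 23.18],
# the same basis stays optimal, giving phi in closed form.

def phi(xi1, xi2):
    x1 = 36.0 - xi1 / 4.0 + xi2 / 2.0    # x_raw1 with both demands binding
    x2 = 18.0 + xi1 / 4.0 - xi2 / 6.0    # x_raw2
    return 2.0 * x1 + 3.0 * x2           # = 126 + xi1/4 + xi2/2

# Jensen lower bound: solve once, at the mean (0, 0).
jensen = phi(0.0, 0.0)

# Edmundson-Madansky: the 2^2 corner points, each with probability 1/4
# because both distributions are symmetric about their means.
a1, b1 = -30.91, 30.91
a2, b2 = -23.18, 23.18
corners = [phi(x, y) for x in (a1, b1) for y in (a2, b2)]
em = sum(corners) / 4.0
print(jensen, corners, em)   # corner values match the text; both bounds 126
```

The four corner values are the 106.6825, 129.8625, 122.1375 and 145.3175 quoted above, and both bounds equal 126 because φ is linear over this box.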
3.4.4 A Piecewise Linear Upper Bound

Although the Edmundson–Madansky distribution is very useful, it still requires that Q(x, ξ) be evaluated at an exponential number of points: if there are k random variables, we must work with 2^k points. This means that with more than about 10 random variables we are not in business. In order to facilitate upper bounds, a number of approaches have been designed that are not of exponential complexity in the number of random variables. In what follows we shall briefly demonstrate how to obtain a piecewise linear upper bound that does not exhibit this exponential characterization.

The idea behind the development is as follows. The recourse function Q(x̂, ξ) is convex in ξ (for a fixed x̂). We might envisage it as a bowl. The Jensen lower bound represents a supporting hyperplane below the recourse function, like a table on which the bowl sits. Any supporting hyperplane would give a lower bound, but, as we have seen, the one that touches Q(x̂, ξ) at E ξ̃ gives the highest lower bound. The Edmundson–Madansky upper bound, on the other hand, is much like a lid on the bowl. They are both illustrated in Figure 14. The purpose of the piecewise linear upper bound is to find another bowl that fits inside the bowl Q(x̂, ξ), but at the same time has more curvature than the Edmundson–Madansky lid. Also, this new bowl must represent a function that is easy to integrate. The piecewise linear upper bound has exactly these properties. It should be noted that the piecewise linear upper bound and the Edmundson–Madansky bound cannot be ranked: either one can be best in a given example. In particular, the new bound may be +∞ even if the problem is feasible (meaning that Q(x̂, ξ) < ∞ for all possible ξ); this can never happen to the Edmundson–Madansky upper bound. It seems that the new bound is reasonably good on "loose" problems, i.e. problems that are very far from being infeasible, such as problems with complete recourse. The Edmundson–Madansky bound is better on "tight" problems.

Let us illustrate the method in a simplified setting. We shall consider randomness only in the right-hand side of W y = b + ξ, and leave the discussion of randomness in the upper bound c to Chapter 6. Define φ(ξ) by

  φ(ξ) = min_y { q^T y | W y = b + ξ, 0 ≤ y ≤ c },

where all components in the random vector ξ̃^T = (ξ̃1, ξ̃2, ...) are mutually independent. Furthermore, let the support be given by Ξ(ξ̃) = [A, B]. For convenience, but without any loss of generality, we shall assume that E ξ̃ = 0. The goal is to create a piecewise linear, separable and convex function in ξ:

  U(ξ) = φ(0) + Σ_i U_i(ξ_i),  where  U_i(ξ_i) = d_i^+ ξ_i  if ξ_i ≥ E ξ̃_i = 0,
                                      U_i(ξ_i) = d_i^− ξ_i  if ξ_i < E ξ̃_i = 0.     (4.3)

There is a very good reason for such a choice. Note how U(ξ) is separable in its components ξ_i. Therefore, for almost all distribution functions, U is simple to integrate. To appreciate the bound, we must understand its basic motivation. If we take some minimization problem, like the one here, and add extra constraints, the resulting problem will bound the original problem from above. What we shall do is to add restrictions with respect to the upper bounds c. We shall do this by viewing φ(ξ) as a parametric problem in ξ, and reserve portions of the upper bound c for the individual random variables ξ_i. We may, for example, end up saying that two units of c_j are reserved for variable ξ_i, meaning that these two units can be used in the parametric analysis only when we consider ξ_i. For all other variables ξ_k, these two units will be viewed as nonexisting. The clue of the bound is to introduce the best possible set of such constraints, such that the resulting problem is easy to solve (and gives a good bound). First, let us calculate φ(E ξ̃) = φ(0) by finding

  φ(0) = min_y { q^T y | W y = b, 0 ≤ y ≤ c } = q^T y^0.
This can be interpreted as the basic setting, and all other values of ξ will be seen as deviations from E ξ̃ = 0. (Of course, any other starting point will also do, for example solving Q(A), where, as stated before, A is the lowest possible value of ξ.) Note that since y^0 is "always" there, we can in the following operate with bounds −y^0 ≤ y ≤ c − y^0. For this purpose we define α^1 = −y^0 and β^1 = c − y^0. Let e_i be a unit vector of appropriate size with a +1 in position i. Next, define a counter r and let r := 1. Now check out the case ξ_r > 0 by solving (remembering that B_r is the maximal value of ξ_r)

  min_y { q^T y | W y = e_r B_r, α^r ≤ y ≤ β^r } = q^T y^{r+} = d_r^+ B_r.     (4.4)

Note that d_r^+ represents the per-unit cost of increasing the right-hand side from 0 to e_r B_r. Similarly, check out the case ξ_r < 0 by solving

  min_y { q^T y | W y = e_r A_r, α^r ≤ y ≤ β^r } = q^T y^{r−} = d_r^− A_r.     (4.5)

Now, based on y^{r±}, we shall assign portions of the bounds to the random variable ξ̃_r. These portions will be given to ξ̃_r and left unused by other random variables, even when ξ̃_r does not need them. That is done by means of the following update, where we calculate what is left for the next random variable:

  α_i^{r+1} = α_i^r − min{ y_i^{r+}, y_i^{r−}, 0 }.     (4.6)

What we are doing here is to find, for each variable, how much ξ̃_r, in the worst case, uses of the bound on variable i in the negative direction. That is then subtracted from what we had before. There are three possibilities. Both (4.4) and (4.5) may yield nonnegative values for the variable y_i; in that case nothing is used of the available "negative bound" α_i^r, and α_i^{r+1} = α_i^r. Alternatively, if (4.4) has y_i^{r+} < 0, then it will in the worst case use y_i^{r+} of the available "negative bound". Finally, if (4.5) has y_i^{r−} < 0, then in the worst case we use y_i^{r−} of the bound. Therefore α_i^{r+1} is what is left for the next random variable. Similarly, we find
  β_i^{r+1} = β_i^r − max{ y_i^{r+}, y_i^{r−}, 0 },     (4.7)

where β_i^{r+1} shows how much is still available of bound i in the forward (positive) direction. We next increase the counter r by one and repeat (4.4)–(4.7). This takes care of the piecewise linear functions in ξ. Note that it is possible to solve (4.4) and (4.5) by parametric linear programming, thereby getting not just one linear piece above E ξ̃ and one below, but rather piecewise linearity on both sides. Then (4.6) and (4.7) must be updated to a "worst case" analysis of bound usage. That is simple to do.

Let us turn to our example (4.1). Since we have developed the piecewise linear upper bound for equality constraints, we repeat the problem with slack variables added explicitly:

  φ(ξ1, ξ2) = min { 2 x_raw1 + 3 x_raw2 }
    s.t.    x_raw1 +   x_raw2 + s1           = 100,
          2 x_raw1 + 6 x_raw2      − s2      = 180 + ξ1,
          3 x_raw1 + 3 x_raw2           − s3 = 162 + ξ2,
            x_raw1 ≥ 0,  x_raw2 ≥ 0,  s1 ≥ 0,  s2 ≥ 0,  s3 ≥ 0.

In this setting, what we need to develop is the following:

  U(ξ1, ξ2) = φ(0, 0) + d_1^+ ξ1 if ξ1 ≥ 0   + d_2^+ ξ2 if ξ2 ≥ 0
                        d_1^− ξ1 if ξ1 < 0     d_2^− ξ2 if ξ2 < 0.

First, we have already calculated φ(0, 0) = 126, with x_raw1 = 36, x_raw2 = 18 and s1 = 46. Next, let us try to find d_1^±. To do that, we need α^1, which equals (−36, −18, −46, 0, 0). We must then formulate (4.4), using ξ1 ∈ [−30.91, 30.91]:

  min { 2 x_raw1 + 3 x_raw2 }
    s.t.    x_raw1 +   x_raw2 + s1           = 0,
          2 x_raw1 + 6 x_raw2      − s2      = 30.91,
          3 x_raw1 + 3 x_raw2           − s3 = 0,
            x_raw1 ≥ −36,  x_raw2 ≥ −18,  s1 ≥ −46,  s2 ≥ 0,  s3 ≥ 0.

The solution to this is y^{1+} = (−7.7275, 7.7275, 0, 0, 0)^T, with a total cost of 7.7275. This gives us

  d_1^+ = (1/30.91)(2, 3, 0, 0, 0) y^{1+} = 0.25.

Next, we solve the same problem, just with 30.91 replaced by −30.91. This amounts to problem (4.5), and gives us the solution y^{1−} = (7.7275, −7.7275, 0, 0, 0)^T, with a total cost of −7.7275. Hence we get

  d_1^− = (1/(−30.91))(2, 3, 0, 0, 0) y^{1−} = 0.25.

The next step is to update α according to (4.6), to find out how much is left of the negative bounds on the variables. For x_raw1 we get

  α_raw1^2 = −36 − min{−7.7275, 7.7275, 0} = −28.2725.

For x_raw2 we get in a similar manner

  α_raw2^2 = −18 − min{7.7275, −7.7275, 0} = −10.2725.

For the three other variables, α_i^2 equals α_i^1. We can now turn to (4.4) for random variable 2. The problem to solve is as follows, remembering that ξ2 ∈ [−23.18, 23.18]:

  min { 2 x_raw1 + 3 x_raw2 }
    s.t.    x_raw1 +   x_raw2 + s1           = 0,
          2 x_raw1 + 6 x_raw2      − s2      = 0,
          3 x_raw1 + 3 x_raw2           − s3 = 23.18,
            x_raw1 ≥ −28.2725,  x_raw2 ≥ −10.2725,  s1 ≥ −46,  s2 ≥ 0,  s3 ≥ 0.

The solution to this is y^{2+} = (11.59, −3.863, −7.727, 0, 0)^T, with a total cost of 11.59. This gives us

  d_2^+ = (1/23.18)(2, 3, 0, 0, 0) y^{2+} = 0.5.

Next, we solve the same problem, just with 23.18 replaced by −23.18. This amounts to problem (4.5), and gives us the solution y^{2−} = (−11.59, 3.863, 7.727, 0, 0)^T, with a total cost of −11.59. Hence we get

  d_2^− = (1/(−23.18))(2, 3, 0, 0, 0) y^{2−} = 0.5.

This finishes the calculation of the (piecewise) linear functions in the upper bound. What we have found is that

  U(ξ1, ξ2) = 126 + (1/4) ξ1 if ξ1 ≥ 0   + (1/2) ξ2 if ξ2 ≥ 0
                    (1/4) ξ1 if ξ1 < 0     (1/2) ξ2 if ξ2 < 0,

which we easily see can be written as

  U(ξ1, ξ2) = 126 + (1/4) ξ1 + (1/2) ξ2.

In other words, as we already knew from calculating the Edmundson–Madansky upper bound and the Jensen lower bound, the recourse function is linear in this example. Let us, for illustration, integrate with respect to ξ1:
  ∫_{−30.91}^{30.91} (1/4) ξ1 f(ξ1) dξ1 = (1/4) E ξ̃1 = 0.

This is how it should be: under linearity, the contribution from a random variable over which U (and therefore φ) is linear is zero. We should of course get the same result with respect to ξ2, and therefore the upper bound is 126, which equals the Jensen lower bound. Now that we have seen how things go in the linear case, let us try to see how the results will be when linearity is not present. Hence assume that we have developed the necessary parameters d_i^± for (4.3). Let us integrate with respect to the random variable ξ̃_i, assuming that Ξ_i = [A_i, B_i]:
  ∫_{A_i}^{0} d_i^− ξ_i f(ξ_i) dξ_i + ∫_{0}^{B_i} d_i^+ ξ_i f(ξ_i) dξ_i
    = d_i^− E{ξ̃_i | ξ̃_i ≤ 0} P{ξ̃_i ≤ 0} + d_i^+ E{ξ̃_i | ξ̃_i > 0} P{ξ̃_i > 0}.

This result should not come as much of a surprise: when one integrates a linear function, one gets the function evaluated at the expected value of the random variable. We recognize this integration from the Jensen calculations. From this we also see, as we have already claimed a few times, that if d_i^+ = d_i^− for all i, then the contribution to the upper bound from ξ̃ equals φ(E ξ̃), which equals the contribution to the Jensen lower bound.

Let us repeat why this is an upper bound. What we have done is to distribute the bounds c on the variables among the different random variables. They have been given separate pieces, which they will not share with others, even if they, for a given realization of ξ̃, do not need the capacities themselves. This partitioning of the bounds among the random variables represents a set of extra constraints on the problem, and hence, since we have a minimization problem, the extra constraints yield an upper bound. If we run out of capacities before all random variables have received their parts, we must conclude that the upper bound is +∞. This cannot happen with the Edmundson–Madansky upper bound: if φ(ξ) is feasible for all ξ, then the Edmundson–Madansky bound is always finite. However, as for the Jensen and Edmundson–Madansky bounds, the piecewise linear upper bound is also exact when the recourse function turns out to be linear. As mentioned before, we shall consider random upper bounds in Chapter 6, in the setting of networks.
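The evaluation of the piecewise linear upper bound for example (4.1) can be sketched as follows. The conditional means used below are placeholder values for an assumed symmetric distribution with mean 0; because d^+ = d^− in this example, each term collapses to d · E ξ̃ = 0 regardless of their exact values:

```python
# Evaluating the piecewise linear upper bound (4.3) for example (4.1),
# with the coefficients derived above: d1+ = d1- = 0.25, d2+ = d2- = 0.5.

def U(xi1, xi2):
    t1 = 0.25 * xi1 if xi1 >= 0 else 0.25 * xi1   # written out to mirror
    t2 = 0.50 * xi2 if xi2 >= 0 else 0.50 * xi2   # (4.3); slopes coincide
    return 126.0 + t1 + t2

# Expected value of one separable term, as in the integration above:
#   d- E{xi | xi <= 0} P{xi <= 0} + d+ E{xi | xi > 0} P{xi > 0}.
def expected_term(d_minus, d_plus, mean_neg, p_neg, mean_pos, p_pos):
    return d_minus * mean_neg * p_neg + d_plus * mean_pos * p_pos

# Placeholder conditional means for a symmetric, mean-zero distribution;
# with d+ = d-, each contribution is exactly zero.
t1 = expected_term(0.25, 0.25, -12.33, 0.5, 12.33, 0.5)
t2 = expected_term(0.50, 0.50, -9.25, 0.5, 9.25, 0.5)
e_upper = U(0.0, 0.0) + t1 + t2
print(U(0.0, 0.0), t1, t2, e_upper)
```

The expected bound is 126, agreeing with the Jensen lower bound, which is how we detected that φ is linear over this support.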
3.5 Approximations

3.5.1 Refinements of the Bounds on the "Wait-and-See" Solution
Let us, also in this section, assume that x = x̂, and as before define φ(ξ) = Q(x̂, ξ). Using any of the above (or other) methods, we can find bounds on the recourse function. Assume we have calculated L and U such that L ≤ E φ(ξ̃) ≤ U. We can now look at U − L to see if we are happy with the result or not. If we are not, there are basically two approaches that can be used. Either we might resort to a better bounding procedure (probably more expensive in terms of CPU time), or we might start using the old bounding methods on a partition of the support, thereby making the bounds tighter. Since we know only finitely many different methods, we shall eventually be left with only the second option. The setup for such an approach to bounding is as follows. First, partition the support of the random variables into an arbitrary selection of cells (possibly only one cell initially). We shall only consider cells that are rectangles, so that they can be described by intervals on the individual random variables. Figure 18 shows an example in two dimensions with five cells. Now, apply the bounding procedures on each of the cells, and add up the results.

Figure 18  Partitioning of cells.

For example, in Figure 18 we need to find five conditional expectations, so that we can calculate the Jensen lower bound on each cell. Adding these up, using as weights the probability of being in the cell, we get an overall lower bound. In the same way, an upper bound can be calculated on each cell, and these can be added up to produce an overall upper bound. If the error U − L is too large, one or more of the cells must be partitioned. It is natural to choose the cell(s) with the largest error(s), but along which coordinate(s) should it/they be partitioned, and through which point(s) in the cell(s)?
Note that this use of conditional expectations is very similar to the way we created discrete distributions towards the end of Section 3.4.2; in particular, check the discussion of Figure 16. Not much is known about what constitutes a good point for partitioning. Obvious possibilities are the middle of the support and the (conditional) mean or median. Our experience is that the middle of the support is good, so we shall use that. However, the subject is clearly open for discussion. Let us therefore turn to the problem of picking the correct coordinate (random variable). For example, if we have picked Cell 1 in Figure 18 to be partitioned, should we draw a vertical or a horizontal line? This might seem like a minor question at first sight, but that is not at all the case. To see why, assume there is a random variable that is never of any importance, such as a random upper bound on a variable that, because of its high cost, is never used. The realized value of this random variable is totally uninteresting. Assume that, for some reason, we pick this random variable for partitioning. The effect will be that when we calculate the bounds again on the two new cells and add them up, we have exactly the same error as before. But, and this is crucial, we now have two cells instead of one. From a practical point of view, these cells are exactly equal; they differ only with respect to a random variable that could as well have been dropped. Hence, in effect, we have now increased our workload. It is now harder to achieve a given error bound than it was before the partition. And note that we shall never recover from the error, in the sense that intelligent choices later on will not counteract this one bad choice. Each time we make a bad partition, the workload from there onwards basically doubles for the cell from which we started.
Since we do not want to increase the workload unnecessarily, we must be careful with how we partition. Now that we know that bad choices can increase the workload, what should we do? The first observation is that choosing at random is not a good idea, because every now and then we shall make bad choices. On the other hand, it is clear that the partitioning procedure will have to be a heuristic. Hence we must make sure that we have a heuristic rule that we hope never makes really bad choices. By knowing our problem well, we may be able to order the random variables according to their importance in the problem. Such an ordering could be used as is, or in combination with other ideas. For some network problems, such as the PERT problem (see Section 6.6), the network structure may present us with such a list. If we can compile the list, it seems reasonable to ask, from a modelling point of view, whether the random variables last on the list should really have been there in the first place; they do not appear to be important.

Over the years, some attempts to understand the problem of partitioning have been made. Most of them are based on the assumption that the Edmundson–Madansky bound was used to calculate the upper bound. The reason is that the dual variables associated with the solution of the recourse function tell us something about its curvature. With the Edmundson–Madansky bound, we solve the recourse problem at all extreme points of the support, and thus get a reasonably good idea of what the function looks like. To introduce some formality, assume we have only one random variable ξ̃, with support Ξ = [A, B]. When finding the Edmundson–Madansky upper bound, we calculated φ(A) = Q(x̂, A) and φ(B) = Q(x̂, B), obtaining dual solutions π^A and π^B. We know from duality that

  φ(A) = (π^A)^T [h(A) − T(A) x̂],
  φ(B) = (π^B)^T [h(B) − T(B) x̂].

We also know that, as long as q(ξ) ≡ q0 (which we are assuming in this section), a π that is dual feasible for one ξ is dual feasible for all ξ, since ξ does not enter the dual constraints. Hence we know that

  α = φ(A) − (π^B)^T [h(A) − T(A) x̂] ≥ 0

and

  β = φ(B) − (π^A)^T [h(B) − T(B) x̂] ≥ 0.

Figure 19  An illustration of a situation where both α and β give good information about curvature.

The parameters α and β contain information about the curvature of φ(ξ). In particular, note that if, for example, α = 0, then π^B is an optimal dual solution corresponding to ξ = A; if π^A ≠ π^B in such a case, we are simply facing dual degeneracy. In line with this argument, a small α (or β) should mean little curvature. But we may, for example, have α large and β small. So what is going on? Figure 19 shows two different cases (both in one dimension) where both α and β are good indicators of how important a random variable is. Intuitively, it seems reasonable to say that the left part of the figure indicates an important random variable, and the right part an unimportant one. And, indeed, in the left part both α and β will be large, whereas in the right part both will be small. But then consider Figure 20. Intuitively, the random variable is unimportant, but, in fact, the slopes at the end points are the same as in the left part of Figure 19, and the slopes describe how the objective changes as a function of the random variable. However, in this case α is very small, whereas β is large. What is happening is that α and β pick up two properties of the recourse function. If the function is very flat (as in the right part of Figure 19), both parameters will be small. If the function is very nonlinear (as in the left part of Figure 19), both parameters will be large.

Figure 20  An illustration of a case where α is small and β is large.

But if
we have much curvature in the sense of the slopes of φ at the end points, but still almost linearity (as in Figure 20), then the smaller of the two parameters will be small. Hence the conclusion seems to be to calculate both α and β, pick the smaller of the two, and use that as a measure of nonlinearity. Using α and β, we have a good measure of nonlinearity in one dimension. With more than one dimension, however, we must again be careful. We can certainly perform tests corresponding to those illustrated in Figures 19 and 20 for one random variable at a time, but the question is what values we should give the other random variables during the test. If we have k random variables, and have the Edmundson–Madansky calculations available, there are 2^(k−1) different ways we can fix all but one variable and then compare dual solutions. There are at least two possible approaches. A first possibility is to calculate α and β for all neighbouring pairs of extreme points in the support, and pick the variable for which the minimum of α and β is the largest. We then have a random variable for which φ is very nonlinear, at least in parts of the support. We may, of course, have picked a variable for which φ is linear most of the time (this will certainly happen once in a while), but the idea is tested and found sound. An alternative, which tries to check average rather than maximal nonlinearity, is to use all 2^(k−1) pairs of neighbouring extreme points involving variation in only one random variable, find the minimum of α and β for each such pair, and then calculate the average of these minima. We then pick the random variable for which this average is maximal. The number of pairs of neighbouring extreme points is fairly large: with k random variables, we have k 2^(k−1) pairs to compare, and each comparison requires the calculation of two inner products. We have earlier indicated that the Edmundson–Madansky upper bound cannot be used for much more than 10 random variables.
In such a case we must perform 5120 pairwise comparisons. Looking back at Figures 19 and 20, we note that an important feature of the recourse function is its slope, as a function of the random variable. We alluded to the slope when discussing the parameters α and β, but we did not really show how to find the slopes. We know from linear programming duality that the optimal value of a dual variable shows how the objective function will change (locally) as the corresponding right-hand side element increases. Given that we use the Edmundson–Madansky upper bound, these optimal dual solutions are available to us at all extreme points of the support. If ξ^j is the value of ξ̃ at such an extreme point, we have

 φ(ξ^j) = Q(x̂, ξ^j) = qT y(ξ^j) = π(ξ^j)T [h(ξ^j) − T(ξ^j) x̂].

What we need to know to utilize this information is how the right-hand side changes as a given random variable ξ̃i changes. This is easy to calculate, since all we have to do is to find the derivative of

 h(ξ) − T(ξ) x̂ = h0 + Σj hj ξj − (T0 + Σj Tj ξj) x̂

with respect to ξi. This is easily found to be

 hi − Ti x̂ ≡ δi.

Note that this expression is independent of the value of ξ̃, and hence it is the same at all extreme points of the support. Now, if π(ξ^j) is the optimal dual solution at an extreme point of the support, represented by ξ^j, then the slope of φ(ξ) = Q(x̂, ξ) with respect to ξi is given by π(ξ^j)T δi. And, more generally, if we let δT = (δ1, δ2, . . .), the vector

 π(ξ^j)T δ ≡ (π^j)T     (5.1)

characterizes how φ(ξ) = Q(x̂, ξ) changes with respect to all random variables. Since these calculations are performed at each extreme point of the support, and each extreme point has a probability according to the Edmundson–Madansky calculations, we can interpret the vectors π^j as outcomes of a random vector π̃ that has 2^k possible values and the corresponding Edmundson–Madansky probabilities. If, for example, the random variable π̃i has only one possible value, we know that φ(ξ) is linear in ξi. If π̃i has several possible values, its variance will tell us quite a bit about how the slope varies over the support. Since the random variables ξ̃i may have very different units, and the dual variables measure changes in the objective function per unit change in a right-hand side element, it seems reasonable to try to account for differences in units. A possible (heuristic) approach is to multiply the outcomes of π̃i by the length of the support of ξ̃i before calculating means and variances. (Assume, for example, that we are selling apples and bananas, and the demands are uncertain. For some reason, however, we are measuring bananas in tons and apples in kilograms. Now, if π̃1 refers to bananas and π̃2 to apples, would you see these products as equally important if π̃1 and π̃2 had the same variance?) Computationally, this is easier than the approach based on α and β, because it requires that only 2^k inner products are made.
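The support-scaled variance heuristic just described is easy to sketch in code. The function below is our own illustration (the function name and data layout are not from the book); the usage example feeds it the dual solutions that appear in Table 1 of Example 3.2 below, and it singles out the first random variable, matching the conclusion reached there.

```python
# Sketch of the slope-variance heuristic: interpret the Edmundson-Madansky
# dual solutions as outcomes of the random vector pi~, scale each component
# by the length of the support of the corresponding xi~_i, and pick the
# variable whose scaled slope has the largest variance.

def pick_partition_variable(duals, probs, deltas, support_lengths):
    """duals[j]   : dual solution pi^j at extreme point j
       probs[j]   : Edmundson-Madansky probability of extreme point j
       deltas[i]  : vector delta_i = h_i - T_i x_hat for random variable i
       support_lengths[i] : length of the support of random variable i.
       Returns (index of variable to partition, its slope variance)."""
    best_i, best_var = None, -1.0
    for i in range(len(deltas)):
        # slope of phi w.r.t. xi_i at each extreme point: pi^j . delta_i
        slopes = [sum(p * d for p, d in zip(pi, deltas[i])) * support_lengths[i]
                  for pi in duals]
        mean = sum(pr * s for pr, s in zip(probs, slopes))
        var = sum(pr * (s - mean) ** 2 for pr, s in zip(probs, slopes))
        if var > best_var:
            best_i, best_var = i, var
    return best_i, best_var

# Dual solutions of Table 1 (Example 3.2 below); rows 6 and 7 are the
# constraints x <= xi_1 and y <= xi_2, so delta_1 = e_6 and delta_2 = e_7.
duals = [[0, 0, 0, 0, 0, 1, 2],           # extreme point (L, L)
         [0, 0.5, 0, 0, 0, 0, 3.5],       # (U, L)
         [2, 0, 0, 0, 0, 3, 0],           # (L, U)
         [0, 0, 0, 0.0476, 0.476, 0, 0]]  # (U, U)
deltas = [[0, 0, 0, 0, 0, 1, 0],
          [0, 0, 0, 0, 0, 0, 1]]
i, v = pick_partition_variable(duals, [0.25] * 4, deltas, [20.0, 10.0])
# i == 0: after support scaling, xi_1 has the larger slope variance
# (600 versus about 217), as in item 3 of the discussion in Example 3.2.
```

Without the support-length scaling, the same data give variances 1.5 and about 2.17, so the unscaled heuristic would pick ξ̃2 instead; this is exactly the reversal discussed in the example.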
The distribution of π can be calculated as we visit the extreme points of the support (for ﬁnding the Edmundson–Madansky bound), and we never have to store the inner products. All the above ideas are based on information from the Edmundson– Madansky upper bound, and therefore the solution of 2k linear programs. As we have pointed out several times, for much more than 10 random variables, we are not able to ﬁnd the Edmundson–Madansky upper bound. And if so, we shall not be able to use the partitioning ideas above either. Therefore we should have ideas of how to partition that do not depend on which upper bound we use. This does not imply, though, that the ideas that are to follow cannot be used with success for the Edmundson–Madansky upper bound as well. One idea, which at ﬁrst sight looks rather stupid, is the following: perform all possible bipartitions (i.e. with k random variables, perform all k , one at a time) and pick the one that is best. By “best”, we here mean with the smallest error in the next step. More formally, let Ui − Li be the error on the “left” cell if we partition random variable i, and let Uir − Lr i be the error on the “right” cell. If pi is the probability of being in the left cell, given that we are in the original cell, when we partition coordinate i, we chose to partition the random variable i that minimizes (Ui − Li )pi + (Uir − Lr )(1 − pi ). i (5.2) In other words, we perform all possible partitions, keep the best, and discard the remaining information. If the upper bound we are using is expensive in terms of CPU time, such an idea of “lookahead” has two eﬀects, which pull in diﬀerent directions. On one hand, the information we are throwing away has cost a lot, and that seems like a waste. On the other hand, the very fact that the upper bound is costly makes it crucial to have few cells in the RECOURSE PROBLEMS 197 end. 
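The look-ahead rule can be coded directly from (5.2). The sketch below is our own; it assumes the per-cell upper and lower bounds for each trial bipartition have already been computed by whatever bounding procedure is in use.

```python
# "Look-ahead" partition selection, formula (5.2): try every coordinate
# bipartition, compute the probability-weighted error of the two resulting
# cells, and keep the coordinate leaving the smallest error.

def best_bipartition(candidates):
    """candidates[i] = (U_left, L_left, U_right, L_right, p_left) for a trial
       bipartition of random variable i; p_left is the conditional probability
       of the left cell.  Returns (best index, its weighted error)."""
    best_i, best_err = None, float("inf")
    for i, (ul, ll, ur, lr, p) in enumerate(candidates):
        err = (ul - ll) * p + (ur - lr) * (1.0 - p)   # formula (5.2)
        if err < best_err:
            best_i, best_err = i, err
    return best_i, best_err

# Hypothetical bound values for two candidate variables:
i, e = best_bipartition([(10.0, 6.0, 8.0, 7.0, 0.5),
                         (9.0, 8.0, 6.0, 5.0, 0.5)])
# i == 1, e == 1.0: splitting variable 1 leaves error 1.0 versus 2.5.
```

As the text notes, everything computed for the k − 1 rejected bipartitions is then discarded, which is only economical when each bound evaluation is cheap.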
With a cheap (in terms of CPU time) upper bound, the approach seems more reasonable, since checking all possibilities is not particularly costly, but, even so, bad partitions will still double the work load locally. Numerical tests indicate that this approach is very good even with the Edmundson–Madansky upper bound, and the reason seems to be that it produces so few cells. Of course, without Edmundson–Madansky, we cannot calculate α, β and π , so if ˜ we do not like the lookahead, we are in need of a new heuristic. We have pointed out before that the piecewise linear upper bound can obtain the value +∞. That happens if one of the problems (4.4) or (4.5) becomes infeasible. If that takes place, the random variable being treated when it happens is clearly a candidate for partitioning. So far we have not really deﬁned what constitutes a good partition. We shall return to that after the next subsection. But ﬁrst let us look at an example illustrating the partitioning ideas. Example 3.2 Consider the following function: φ(ξ1 , ξ2 ) = max{x + 2y } s.t. −x + y ≤ 6, 2x − 3y ≤ 21, −3x + 7y ≤ 49, x + 12y ≤ 120, 2x + 3y ≤ 45, x ≤ ξ1 , y ≤ ξ2 . Let us assume that Ξ1 = [0, 20] and Ξ2 = [0, 10]. For simplicity, we shall assume uniform and independent distributions. We do that because the form of the distribution is rather unimportant for the heuristics we are to explain. The feasible set for the problem, except the upper bounds, is given in Figure 21. The circled numbers refer to the numbering of the inequalities. For all problems we have to solve (for varying values of ξ ), it is reasonably easy to read the solution directly from the ﬁgure. Since we are maximizing, the Jensen bound is an upper bound, and the Edmundson–Madansky bound a lower bound. We easily ﬁnd the Jensen upper bound from φ(10, 5) = 20. To ﬁnd a lower bound, and also to calculate some of the information needed to use the heuristics, we ﬁrst calculate φ at all extreme points of the support. 
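Since the example is two-dimensional, φ can be evaluated by brute force. The following sketch is our own (vertex enumeration is used only because the problem is tiny; it does not scale); nonnegativity of x and y, implicit in the example, is added explicitly. It reproduces the values quoted in the text and in Table 1.

```python
# Evaluate phi(xi1, xi2) of Example 3.2 by enumerating all intersections of
# constraint pairs, keeping the feasible ones, and maximizing x + 2y.
from itertools import combinations

def phi(xi1, xi2):
    # constraints a*x + b*y <= c, including x <= xi1, y <= xi2, x >= 0, y >= 0
    cons = [(-1, 1, 6), (2, -3, 21), (-3, 7, 49), (1, 12, 120), (2, 3, 45),
            (1, 0, xi1), (0, 1, xi2), (-1, 0, 0), (0, -1, 0)]
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                      # parallel pair: no vertex
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            val = x + 2 * y
            if best is None or val > best:
                best = val
    return best
```

For instance, phi(10, 5) returns 20, the Jensen bound above, and phi(20, 10) returns 190/7 ≈ 27.143, the value at extreme point (U, U) in Table 1.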
Note that in what follows we view the upper bounds on the variables as ordinary constraints. The results for the extremepoint calculations are summed up in Table 1. 198 STOCHASTIC PROGRAMMING Figure 21 Set of feasible solutions for Example 3.2. Table 1 Important characteristics of the solution of φ(ξ1 , ξ2 ) at the four extreme points of the support. (ξ1 , ξ2 ) (0, 0) = (L, L) (20, 0) = (U, L) (0, 10) = (L, U ) (20, 10) = (U, U ) x 0.000 10.500 0.000 8.571 y 0.000 0.000 6.000 9.286 φ 0.000 10.500 12.000 27.143 Optimal dual solution (π ) (0, 0, 0, 0, 0, 1, 2) (0, 1 , 0, 0, 0, 0, 7 ) 2 2 (2, 0, 0, 0, 0, 3, 0) (0, 0, 0, 0.0476, 0.476, 0, 0) RECOURSE PROBLEMS 199 The ﬁrst idea we wish to test is based on comparing pairs of extreme points, to see how well the optimal dual solution (which is dual feasible for all righthand sides) at one extremepoint works at a neighbouring extreme point. We use the indexing L and U to indicate Low and Up of the support. LL:UL We ﬁrst must test the optimal dual solution π LL together with the righthand side bUL . We get α = (π LL )T bUL − φ(U, L) = (0, 0, 0, 0, 0, 1, 2)(6, 21, 49, 120, 45, 20, 0)T − φ(U, L) = 20 − 10.5 = 9.5. We then do the opposite, to ﬁnd β = (π UL )T bLL − φ(L, L) = (0, 1 , 0, 0, 0, 0, 7 )(6, 21, 49, 120, 45, 0, 0)T − φ(L, L) 2 2 = 10.5 − 0 = 10.5. The minimum is therefore 9.5 for the pair LL:UL. LL:LU Following a similar logic, we get the following: α = (π LL )T bLU − φ(L, U ) = (0, 0, 0, 0, 0, 1, 2)(6, 21, 49, 120, 45, 0, 10)T − φ(L, U ) = 20 − 12 = 8, β = (π LU )T bLL − φ(L, L) = (2, 0, 0, 0, 0, 3, 0)(6, 21, 49, 120, 45, 0, 0)T − φ(L, L) = 12 − 0 = 12. The minimal value for the pair LL:LU is therefore 8. LU:UU For this pair we get the following: α = (π UU )T bLU − φ(L, U ) = (0, 0, 0, 0.0476, 0.476, 0, 0)(6, 21, 49, 120, 45, 0, 10)T − φ(L, U ) = 27.143 − 12 = 15.143 β = (π LU )T bUU − φ(L, L) = (2, 0, 0, 0, 0, 3, 0)(6, 21, 49, 120, 45, 20, 10)T − φ(U, U ) = 72 − 27.143 = 44.857. 
The minimal value for the pair LU:UU is therefore 15.143. UL:UU For the ﬁnal pair the results are given by α = (π UU )T bUL − φ(U, L) = (0, 0, 0, 0.0476, 0.476, 0, 0)(6, 21, 49, 120, 45, 20, 0)T − φ(U, L) = 27.143 − 10.5 = 16.643, β = (π UL )T bUU − φ(U, U ) = (0, 1 , 0, 0, 0, 0, 7 )(6, 21, 49, 120, 45, 20, 10)T − φ(U, U ) 2 2 = 46.5 − 27.143 = 18.357. 200 STOCHASTIC PROGRAMMING The minimal value for the pair UL:UU is therefore 16.643. If we were to pick the pair with the largest minimum of α and β , we should pick the pair UL:UU, over which it is ξ2 that varies. In such a case we have tried to ﬁnd that part of the function that is the most nonlinear. When we look at Figure 21, we see that as ξ2 increases (with ξ1 = 20), the optimal solution moves from F to E and then to D, where it stays when ξ2 comes above the y coordinate in D. It is perhaps not so surprising that this is the most serious nonlinearity in φ. If we try to ﬁnd the random variable with the highest average nonlinearity, by summing the errors over those pairs for which the given random variable ˜ ˜ varies, we ﬁnd that for ξ1 the sum is 9.5 + 15.143 = 24.643, and for ξ2 it is 8 + 16.643, which also equals 24.643. In other words, we have no conclusion. The next approach we suggested was to look at the dual variables as in (5.1). The righthand side structure is very simple in our example, so it is easy to ﬁnd the connections. We deﬁne two random variables: π1 for the row ˜ constraining x, and π2 for the row constraining y . With the simple kind of ˜ uniform distributions we have assumed, each of the four values for π1 and π2 ˜ ˜ will have probability 0.25. Using Table 1, we see that the possible values for π1 are 0, 1 and 3 (with 0 appearing twice), while for π2 they are 0, 2 and 3.5 ˜ ˜ (also with 0 appearing twice). There are diﬀerent ideas we can follow. 1. We can ﬁnd out how the dual variables vary between the extreme points. 
The largest individual change is that π2 falls from 3.5 to 0 as we go from UL ˜ ˜ to UU. This should again conﬁrm that ξ2 is a candidate for partitioning. 2. We can calculate E π = (1, 11 ), and the individual variances to 1.5 and ˜ 8 ˜ 2.17. If we choose based on variance, we pick ξ2 . 3. We also argued earlier that the size of the support was of some importance. A way of accommodating that is to multiply all outcomes with the length of the support. (That way, all dual variables are, in a sense, a measure of change per total support.) That should make the dual variables comparable. The calculations are left to the reader. We now end up with π1 having the ˜ largest variance. (And if we now look at the biggest change in dual variable ˜ over pairs of neighboring extreme points, ξ1 will be the one to partition.) No conclusions should be made based on these numbers in terms of what is a good heuristic. We have presented these numbers to illustrate the computations and to indicate how it is possible to make arguments about partitioning. Before we conclude, let us consider the “lookahead” strategy (5.2). In this case there are two possibilities: either we split at ξ1 = 10 or we split at ξ2 = 5. If we check what we need to compute in this case, we will ﬁnd that some calculations are required in addition to those in Table 1, and RECOURSE PROBLEMS Table 2 Function values needed for the “lookahead”strategy. 201 (5,5) (15,5) (10,10) (20,5) 15 25 27.143 25 (10,7.5) (10,2.5) (0,5) (10,0) 25 15 10 10 ˜ φ(E ξ ) = φ(10, 5) = 20, which we have already found. The additional numbers are presented in Table 2. Based on this, we can ﬁnd the total error after splitting to be about 4.5 ˜ ˜ both for ξ1 and for ξ2 . Therefore, based on “lookahead”, we cannot decide what to do. 2 3.5.2 Using the Lshaped Method within Approximation Schemes We have now investigated how to bound Q(x) for a ﬁxed x. 
We have done that by combining upper and lower bounding procedures with partitioning of ˜ the support of ξ . On the other hand, we have earlier discussed (exact) solution procedures, such as the Lshaped decomposition method (Section 3.2) and the scenario aggregation (Section 2.6). These methods take a full event/scenario tree as input and solve this (at least in principle) to optimality. We shall now see how these methods can be combined. The starting point is a setup like Figure 18. We set up an initial partition of the support, possibly containing only one cell. We then ﬁnd all conditional expectations (in the example there are ﬁve), and give each of them a probability equal to that of being in their cell, and we view this as our “true” distribution. The Lshaped method is then applied. Let ξ i denote the ˜ ˜ conditional expectation of ξ , given that ξ is contained in the ith cell. Then the partition gives us the support {ξ 1 , . . . ξ }. We then solve ⎫ min cT x + L(x) ⎬ s.t. Ax = b, ⎭ x ≥ 0, where L(x) =
Σ_{j=1}^{ℓ} pj Q(x, ξ^j),     (5.3)
Figure 22 Example illustrating the use of bounds in the Lshaped decomposition method. An initial partition corresponds to the lower bounding function L1 (x) and the upper bounding function U1 (x). For all x we have L1 (x) ≤ Q(x) ≤ U1 (x). We minimize cx + L1 (x) to obtain x. We ﬁnd the error ˆ U1 (ˆ) − L1 (ˆ), and we decide to reﬁne the partition. This will cause L1 to be x x replaced by L2 and U1 by U2 . Then the process can be repeated. with pj being the probability of being in cell j . Let x be the optimal solution ˆ to (5.3). Clearly if x is the optimal solution to the original problem then cT x + L(ˆ) ≤ cT x + L(x) ≤ cT x + Q(x), ˆ x so that the optimal value we found by solving (5.3) is really a lower bound ˆ on min cT x + Q(x). The ﬁrst inequality follows from the observation that x minimizes cT x + L(x). The second inequality holds because L(x) ≤ Q(x) for all x (Jensen’s inequality). Next, we use some method to calculate U (ˆ), for x example the Edmundson–Madansky or piecewise linear upper bound. Note that ˆ x ˆ x cT x + Q(x) ≤ cT x + Q(ˆ) ≤ cT x + U (ˆ), ˆ x so cT x + U (ˆ) is indeed an upper bound on cT x + Q(x). Here the ﬁrst inequality holds because x minimizes cT x + Q(x), and the second because, for all x, Q(x) ≤ U (x). We then have a solution x and an error U (ˆ) − L(ˆ). If we are not satisﬁed ˆ x x with the precision, we reﬁne the partition of the support, and repeat the use of RECOURSE PROBLEMS 203 the Lshaped method. It is worth noting that the old optimality cuts generated in the Lshaped method are still valid, but generally not tight. The reason is that, with more cells, and hence a larger , the function L(x) is now closer to Q(x). Feasibility cuts are still valid and tight. Figure 22 illustrates how the approximating functions L(x) and U (x) change as the partition is reﬁned. In total, this gives us the procedure in Figure 23. The procedure reﬁne(Ξ) will not be detailed, since there are so many options. 
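The interplay between refinement and the two bounding functions can be illustrated on a one-dimensional toy of our own (not from the book): a convex cost, a uniform random variable, Jensen on each cell as lower bound and Edmundson–Madansky as upper bound.

```python
# How refining the partition tightens the sandwich L <= Q <= U for a fixed
# first-stage decision.  Here Q = E phi(xi) with phi convex and xi uniform
# on [0, 10].

def phi(xi):                  # a convex recourse-like cost
    return abs(xi - 5.0)

def bounds(cells):
    """cells: list of (a, b) intervals partitioning [0, 10] (uniform xi).
       Returns (Jensen lower bound, Edmundson-Madansky upper bound)."""
    lo = hi = 0.0
    for a, b in cells:
        p = (b - a) / 10.0                # probability of the cell
        lo += p * phi((a + b) / 2.0)      # Jensen: phi at conditional mean
        hi += p * (phi(a) + phi(b)) / 2   # E-M: endpoint average (uniform)
    return lo, hi

l1, u1 = bounds([(0, 10)])                # one cell:      L1 = 0.0, U1 = 5.0
l2, u2 = bounds([(0, 5), (5, 10)])        # refined:       L2 = U2 = 2.5
```

Here the refined error drops all the way to zero only because φ is linear on each of the two cells; in general refinement merely shrinks the gap U(x̂) − L(x̂), as in Figure 22.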
We refer to our earlier discussion of the subject in Section 3.5.1. Note that, for simplicity, we have assumed that, after a partitioning, the procedure starts all over again in the repeat loop. That is of course not needed, since we already have checked the present x for feasibility. If we replace the set A by Ξ in the call to procedure ˆ feascut, the procedure Bounding Lshaped must stay as it is. In many cases this may be a useful change, since A might be very large. (In this case old feasibility cuts might no longer be tight.) 3.5.3 What is a Good Partition? We have now seen partitioning used in two diﬀerent settings. In the ﬁrst we just wanted to bound a onestage stochastic program, while in the second we used it in combination with the Lshaped decomposition method. The major diﬀerence is that in the latter case we solve a twostage stochastic program between each time we partition. Therefore, in contrast to the onestage setting, the same partition (more and more reﬁned) is used over and over again. In the twostage setting a new question arises. How many partitions should we make between each new call to the Lshaped decomposition method? If we make only one, the overall CPU time will probably be very large because a new LP (only slightly changed from last time) must be solved each time we make a new cell. On the other hand, if we make many partitions per call to Lshaped, we might partition extensively in an area where it later turns out that partitioning is not needed (remember that x enters the righthand side of the secondstage constraints, moving the set of possible righthand sides around). We must therefore strike a balance between getting enough cells and not getting them in the wrong places. This brings us to the question of what is a good partitioning strategy. It should clearly be one that minimizes CPU time for solving the problem at hand. 
Tests indicate that for the onestage setting, using the idea of the variance of the (random) dual variables on page 195, is a good idea. It creates quite a number of cells, but because it is cheap (given that we already use the Edmundson–Madansky upper bound) it is quite good overall. But, in the setting of the Lshaped decomposition method, this large number of cells become something of a problem. We have to carry them along from iteration to iteration, repeatedly ﬁnding upper and lower bounds on each of them. Here it is much more important to have few cells for a given error level. And that 204 STOCHASTIC PROGRAMMING procedure Bounding Lshaped( 1 , 2 :real); begin ˜ ˆ K := 0, L := 0; Ξ := {E ξ }; ˆ θ := −∞, LP(A, b, c, x, feasible); ˆ stop := not (feasible); while not (stop) do begin feascut(A, x,newcut); ˆ if not (newcut) then begin Find L(ˆ); x ˆ newcut := (L(ˆ) − θ > 1 ); x if newcut then begin (* Create an optimality cut—see page 168 *) L := L + 1; T Construct the cut −βL x + θ ≥ αL ; end; end; if newcut then begin master(K, L, x, θ,feasible); ˆˆ stop := not (feasible); end else begin Find U (ˆ); x stop := (U (ˆ) − L(ˆ) ≤ 2 ); x x ˆ if not (stop) then reﬁne(Ξ); end; end; end; Figure 23 The Lshaped decomposition algorithm in a setting of approximations and bounds. The procedures that we refer to start on page 168, and the set A was deﬁned on page 162. RECOURSE PROBLEMS 205 is best achieved by looking ahead using (5.2). Our general advice is therefore that in the setting of two (or more) stages one should seek a strategy that minimizes the ﬁnal number of cells, and that it is worthwhile to pay quite a lot per iteration to achieve this goal. 3.6 Simple Recourse Let us consider the particular simple recourse problem ˜ min{cT x + Eξ Q(x, ξ )  Ax = b, x ≥ 0}, ˜ where Q(x, ξ ) = min{q +T y + + q −T y −  y + − y − = ξ − T x, y + ≥ 0, y − ≥ 0}. Hence we assume W = (I, −I ), T (ξ ) ≡ T (constant), h(ξ ) ≡ ξ, q = q + + q − ≥ 0. 
(6.1)

In other words, we consider the case where only the right-hand side is random, and we shall see that in this case, using our former presentation h(ξ) = h0 + Σi hi ξi, we only need to know the marginal distributions of the components hj(ξ) of h(ξ). However, stochastic dependence or independence of these components does not matter at all. This justifies the above setting h(ξ) ≡ ξ. By linear programming duality, we have for the recourse function

 Q(x, ξ) = min{q+T y+ + q−T y− | y+ − y− = ξ − T x, y+ ≥ 0, y− ≥ 0}     (6.2)
     = max{(ξ − T x)T π | −q− ≤ π ≤ q+}.

Observe that our assumption q ≥ 0 is equivalent to solvability of the second-stage problem. Defining χ := T x, the dual solution π of (6.2) is obvious: πi =
qi+  if ξi − χi > 0,  and  πi = −qi−  if ξi − χi ≤ 0.

Hence, with

 Q̂i(χi, ξi) = (ξi − χi) qi+   if χi < ξi,
 Q̂i(χi, ξi) = −(ξi − χi) qi−  if χi ≥ ξi,

we have Q(x, ξ) = Σi Q̂i(χi, ξi) with χ = T x. The expected recourse follows immediately:

 Eξ̃ Q(x, ξ̃) = ∫Ξ Q(x, ξ) Pξ̃(dξ)
     = Σi ∫Ξ Q̂i(χi, ξi) Pξ̃(dξ)
     = Σi [ qi+ ∫{ξi>χi} (ξi − χi) Pξ̃(dξ) − qi− ∫{ξi≤χi} (ξi − χi) Pξ̃(dξ) ].

The last expression shows that knowledge of the marginal distributions of the ξ̃i is sufficient to evaluate the expected recourse. Moreover, Eξ̃ Q(x, ξ̃) is a so-called separable function in (χ1, · · ·, χm1), i.e. Eξ̃ Q(x, ξ̃) = Σ_{i=1}^{m1} Qi(χi), where, owing to q+ + q− = q,

 Qi(χi) = qi+ ∫{ξi>χi} (ξi − χi) Pξ̃(dξ) − qi− ∫{ξi≤χi} (ξi − χi) Pξ̃(dξ)
     = qi+ ∫Ξ (ξi − χi) Pξ̃(dξ) − (qi+ + qi−) ∫{ξi≤χi} (ξi − χi) Pξ̃(dξ)     (6.3)
     = qi+ ξ̄i − qi+ χi − qi ∫{ξi≤χi} (ξi − χi) Pξ̃(dξ)

with ξ̄i = Eξ̃ ξi. The reformulation (6.3) reveals the shape of the functions Qi(χi). Assume that Ξ is bounded such that αi < ξi ≤ βi for all i and all ξ ∈ Ξ. Then we have

 Qi(χi) = qi+ ξ̄i − qi+ χi   if χi ≤ αi,
 Qi(χi) = qi+ ξ̄i − qi+ χi − qi ∫{ξi≤χi} (ξi − χi) Pξ̃(dξ)   if αi < χi < βi,     (6.4)
 Qi(χi) = −qi− ξ̄i + qi− χi  if χi ≥ βi,

showing that for χi < αi and χi > βi the functions Qi(χi) are linear (see Figure 24). In particular, we have

 Qi(χi) = Q̂i(χi, ξ̄i)  if χi ≤ αi or χi ≥ βi.     (6.5)

Following the approximation scheme described in Section 3.5.1, the relation (6.5) allows us to determine an error bound without computing the E–M bound.³ To see this, consider any fixed χ̂i. If χ̂i ≤ αi or χ̂i ≥ βi then, by (6.5), Qi(χ̂i) = Q̂i(χ̂i, ξ̄i).
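The exactness argument based on (6.5) can be checked numerically. In the sketch below (our own numbers: ξi uniform on [0, 10], qi+ = 2, qi− = 1, χ̂i = 4), splitting the support at χ̂i and evaluating Q̂i at the two conditional means reproduces the exact value Qi(χ̂i), which for the uniform distribution is also available in closed form.

```python
# Simple recourse in one component:
# Q_hat(chi, xi) = qp*(xi - chi) if chi < xi, else qm*(chi - xi).
a, b = 0.0, 10.0          # support of xi (uniform)
qp, qm = 2.0, 1.0
chi = 4.0                 # the fixed chi_hat

def Q_hat(chi, xi):
    return qp * (xi - chi) if chi < xi else qm * (chi - xi)

# Exact expected recourse for the uniform distribution (closed form):
# E(xi - chi)^+ = (b - chi)^2 / (2(b - a)),  E(chi - xi)^+ = (chi - a)^2 / (2(b - a))
exact = (qp * (b - chi) ** 2 + qm * (chi - a) ** 2) / (2 * (b - a))

# Split the support at chi and evaluate Q_hat at the two conditional means:
p1, p2 = (chi - a) / (b - a), (b - chi) / (b - a)
xbar1, xbar2 = (a + chi) / 2.0, (chi + b) / 2.0   # uniform conditional means
approx = p1 * Q_hat(chi, xbar1) + p2 * Q_hat(chi, xbar2)
# exact and approx both equal 4.4: on each side of chi the cost is linear
# in xi, so the Jensen-type evaluation is not just a lower bound but exact.
```

This is precisely why, for simple recourse, refining at χ̂i gives the exact expected recourse without any Edmundson–Madansky computation.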
³ By “E–M” we mean the Edmundson–Madansky bound described in Section 3.4.2.

Figure 24  Simple recourse: supporting Qi(χi) by Q̂i(χi, ξ̄i).

If, on the other hand, αi < χ̂i < βi, we partition the interval (αi, βi] into the two subintervals (αi, χ̂i] and (χ̂i, βi], with the conditional expectations

 ξ̂i1 = Eξ̃(ξi | ξi ∈ (αi, χ̂i]),   ξ̂i2 = Eξ̃(ξi | ξi ∈ (χ̂i, βi]).

Obviously relation (6.5) also applies analogously to the conditional expectations

 Qi1(χ̂i) = Eξ̃(Q̂i(χ̂i, ξi) | ξi ∈ (αi, χ̂i])  and  Qi2(χ̂i) = Eξ̃(Q̂i(χ̂i, ξi) | ξi ∈ (χ̂i, βi]).

Therefore

 Qi1(χ̂i) = Q̂i(χ̂i, ξ̂i1),   Qi2(χ̂i) = Q̂i(χ̂i, ξ̂i2),

and, with pi1 = P(ξi ∈ (αi, χ̂i]) and pi2 = P(ξi ∈ (χ̂i, βi]),

 Qi(χ̂i) = pi1 Qi1(χ̂i) + pi2 Qi2(χ̂i) = pi1 Q̂i(χ̂i, ξ̂i1) + pi2 Q̂i(χ̂i, ξ̂i2).

Hence, instead of using the E–M upper bound, we can easily determine the exact value Qi(χ̂i). With Q̂i(χ̂i, ξ̂i1, ξ̂i2) := pi1 Q̂i(χ̂i, ξ̂i1) + pi2 Q̂i(χ̂i, ξ̂i2), the resulting situation is demonstrated in Figure 25.

Figure 25  Simple recourse: supporting Qi(χi) by Q̂i(χi, ξ̂i1, ξ̂i2).

Assume now that for a partition of the intervals (αi, βi] into subintervals Iiν := (δiν, δiν+1], ν = 0, · · ·, Ni − 1, with αi = δi0 < δi1 < · · · < δiNi = βi, we have minimized the Jensen lower bound (see Section 3.4.1), letting piν = P(ξi ∈ Iiν) and ξ̄iν = Eξ̃(ξi | ξi ∈ Iiν):

 min_{x,χ} cT x + Σ_{i=1}^{k} Σ_{ν=0}^{Ni−1} piν Q̂i(χi, ξ̄iν)
 s.t. Ax = b,
    Tx − χ = 0,
    x ≥ 0,

yielding the solution x̂ and χ̂ = T x̂. Obviously relation (6.5) holds for the conditional expectations Q̂iν(χ̂i) (with respect to Iiν) as well. Then for each component of χ̂ there are three possibilities.

(a) If χ̂i ≤ αi, then Q̂i(χ̂i, ξ̄iν) = Q̂iν(χ̂i) = Eξ̃(Q̂i(χ̂i, ξi) | ξi ∈ Iiν), ν = 0, · · ·, Ni − 1, and hence

 Qi(χ̂i) = Σ_{ν=0}^{Ni−1} piν Q̂i(χ̂i, ξ̄iν),

i.e. there is no error with respect to this component.

(b) If χ̂i ≥ βi, then it again follows from (6.5) that

 Qi(χ̂i) = Σ_{ν=0}^{Ni−1} piν Q̂i(χ̂i, ξ̄iν).

(c) If χ̂i ∈ Iiµ for exactly one µ, with 0 ≤ µ < Ni, then there are two cases. First, if δiµ < χ̂i < δiµ+1, partition Iiµ = (δiµ, δiµ+1] into Jiµ1 = (δiµ, χ̂i] and Jiµ2 = (χ̂i, δiµ+1]. Now, again owing to (6.5), it follows that

 Qi(χ̂i) = Σ_{ν≠µ} piν Q̂i(χ̂i, ξ̄iν) + Σ_{ρ=1}^{2} piµρ Q̂i(χ̂i, ξ̄iµρ),

where piµρ = P(ξi ∈ Jiµρ) and ξ̄iµρ = Eξ̃(ξi | ξi ∈ Jiµρ), ρ = 1, 2. If, on the other hand, χ̂i = δiµ+1, we again have

 Qi(χ̂i) = Σ_{ν=0}^{Ni−1} piν Q̂i(χ̂i, ξ̄iν).

In conclusion, having determined the minimal point χ̂ for the Jensen lower bound, we immediately get the exact expected recourse at this point and decide whether for all components the relative error fits into a prescribed tolerance, or in which component the refinement (partitioning the subinterval containing χ̂i by dividing it exactly at χ̂i) seems appropriate for a further improvement of the approximate solution of (6.1). Many empirical tests have shown this approach to be very efficient. In particular, for this special problem type higher dimensions of ξ̃ do not cause severe computational difficulties, as they did for general stochastic programs with recourse, as discussed in Section 3.5.

3.7 Integer First Stage

This book deals almost exclusively with convex problems. The only exception is this section, where we discuss, very briefly, some aspects of integer programming. The main reason for doing so is that some solution procedures for integer programming fit very well with some decomposition procedures for (continuous) stochastic programming. Because of that we can achieve two goals: we can explain some connections between stochastic and integer programming, and we can combine the two subject areas. This allows us to arrive at a method for stochastic integer programming. Note that talking about stochastic and integer programming as two distinct areas is really meaningless, since stochastic programs can contain integrality constraints, and integer programs can be stochastic. But we still do it, with some hesitation, since the splitting is fairly common within the mathematical programming community. To get started, let us first formulate a deterministic integer programming problem in a very simple format, and then outline a common solution procedure, namely branch-and-bound. An integer program can be formulated as

 min cT x
 s.t. Ax = b,     (7.1)
   xi ∈ {ai, ai + 1, . . .
, bi − 1, bi} for all i, where {ai, ai + 1, . . . , bi − 1, bi} is the set of all integers from ai to bi. The branch-and-bound procedure is based on replacing xi ∈ {ai, ai + 1, . . . , bi − 1, bi} by ai ≤ xi ≤ bi for all i, and solving the corresponding relaxed linear program to obtain x̂. If x̂ happens to be integral, we are done, since integrality is satisfied without being enforced. If x̂ is not integral, we have obtained a lower bound z = cT x̂ on the true optimal objective, since dropping constraints in a minimization problem yields a lower bound. To continue from here, we pick one variable xj, called the branching variable, and one integer dj. Normally dj is chosen as the largest integer less than the value of xj in the LP solution, and xj is normally a variable that was non-integral in the LP solution. We then replace our original problem (7.1) by two similar problems:

 min cT x
 s.t. Ax = b,
   xi ∈ {ai, . . . , bi} for all i ≠ j,
   xj ∈ {aj, . . . , dj},

and     (7.2)

 min cT x
 s.t. Ax = b,
   xi ∈ {ai, . . . , bi} for all i ≠ j,
   xj ∈ {dj + 1, . . . , bj}.

What we have done is to branch. We have replaced the original problem by two similar problems that each investigate their part of the solution space. The two problems are now put into a collection of waiting nodes. The term “waiting node” is used because the branching can be seen as building up a tree, where the original problem sits in the root and the new problems are stored in child nodes. Waiting nodes are then leaves in the tree, waiting to be analysed. Leaves can also be fathomed or bounded, as we shall see shortly. We next continue to work with the problem in one of these waiting nodes. We shall call this problem the present problem. When doing so, a number of different situations can occur.

Figure 26  The situation after three branchings in a branch-and-bound tree. One waiting node is left.

1.
The present problem may be infeasible, in which case it is simply dropped, or fathomed. 2. The present problem might turn out to have an integral optimal solution x, in other words a solution that is truly feasible. If so, we compare cT x ˆ ˆ with the bestsofar objective value z (we initiate z at +∞). If the new objective value is better, we keep x and update z so that z = cT x. We then ˆ ˆ fathom the present problem. 3. The present problem might have a nonintegral solution x with cT x ≥ z . In ˆ ˆ this case the present problem cannot possibly contain an optimal integral solution, and it is therefore dropped, or bounded. (This is the process that gives half of the name of the method.) 4. The present problem has a solution x that does not satisfy any of the above ˆ criteria. If so, we branch as we did in (7.2), creating two child nodes. We then add them to the tree, making them waiting nodes. An example of an intermediate stage for a branchandbound tree can be found in Figure 26. Three branchings have taken place, and we are left with two fathomed, one bounded and one waiting node. The next step will now be to branch on the waiting node. Note that as branching proceeds, the interval over which we solve the continuous version must eventually contain only one point. Therefore, sooner 212 STOCHASTIC PROGRAMMING or later, we come to a situation where the problem is either infeasible, or we are faced with an integral solution. We cannot go on branching forever. Hence the algorithm will eventually stop, either telling us that no integral solution exists, or giving us such a solution. Much research in integer programming concerns how to pick the correct variable xj for branching, how to pick the branching value dj , how to formulate the problem so that branching becomes simpler, and how to obtain a good initial (integer) solution so as to have a z < ∞ to start out with. To have a good integer programming algorithm, these subjects are crucial. 
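The four cases above translate into a compact algorithm. The sketch below is our own toy (a two-variable *maximization*, with the LP relaxations solved by brute-force vertex enumeration rather than a real LP solver), but it exercises exactly the fathom/bound/branch logic described in the text.

```python
# Branch-and-bound for  max 5x + 4y  s.t. 6x + 4y <= 24, x + 2y <= 6,
# x, y >= 0 integer.  Constraints are triples (a, b, c) meaning a*x + b*y <= c.
from itertools import combinations

def lp_max(cons, obj):
    """Maximize obj over the polygon {a*x + b*y <= c} by vertex enumeration;
       returns (value, (x, y)) or None if no feasible vertex exists."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue
        x, y = (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            v = obj[0] * x + obj[1] * y
            if best is None or v > best[0]:
                best = (v, (x, y))
    return best

def branch_and_bound(cons, obj):
    z, incumbent = float("-inf"), None
    waiting = [cons]                       # collection of waiting nodes
    while waiting:
        node = waiting.pop()
        sol = lp_max(node, obj)
        if sol is None:
            continue                       # case 1: infeasible -> fathom
        v, (x, y) = sol
        if v <= z + 1e-9:
            continue                       # case 3: cannot beat z -> bound
        frac = [(i, t) for i, t in enumerate((x, y))
                if abs(t - round(t)) > 1e-6]
        if not frac:
            z, incumbent = v, (round(x), round(y))   # case 2: integral
            continue
        j, t = frac[0]                     # case 4: branch on variable j
        d = int(t)                         # largest integer below the LP value
        unit = [(1, 0), (0, 1)][j]
        waiting.append(node + [(unit[0], unit[1], d)])            # x_j <= d
        waiting.append(node + [(-unit[0], -unit[1], -(d + 1))])   # x_j >= d+1
    return z, incumbent

cons0 = [(6, 4, 24), (1, 2, 6), (-1, 0, 0), (0, -1, 0)]
z, pt = branch_and_bound(cons0, (5, 4))
# z == 20.0 at pt == (4, 0): the root relaxation peaks at (3, 1.5) with value
# 21, and branching on y then x reaches the integer optimum.
```

Since this toy maximizes, z is kept as the best-so-far *lower* bound on the optimum and relaxations give upper bounds; in the book's minimization setting the roles are mirrored.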
We shall not, however, discuss those subjects here. Instead, we should like to draw attention to some analogies between the branchandbound algorithm for integer programs and the problem of bounding a stochastic (continuous) program. • In the integer case we partition the solution space, and in the stochastic case the input data (support of random variables). • In the integer case we must ﬁnd a branching variable, and in the stochastic case a random variable for partitioning. • In the integer case we must ﬁnd a value dj of xj (see (7.2)) for the branching, and in the stochastic case we must determine a point dj in the support through which we want to partition. • Both methods therefore operate with a situation as depicted in Figure 18, but in one case the rectangle is the solution space, while in the other it is the support of the random variables. • Both problems can be seen as building up a tree. For integer programming we build a branchandbound tree. For stochastic programming we build a splitting tree. The branchandboundtree in Figure 26 could have been a splitting tree as well. In that case we should store the error rather than the objective value. • In the integer case we fathom a problem (corresponding to a cell in Figure 18, or a leaf in the tree) when it has nothing more to tell us, in the stochastic case we do this when the bounds (in the cell or leaf) are close enough. From this, it should be obvious that anyone who understands the ins and outs of integer programming, will also have a lot to say about bounding stochastic programs. Of course there are diﬀerences, but they are smaller than one might think. So far, what we have compared is really the problem of bounding the recourse function with the problem of solving an integer program by branchandbound. Next, let us consider the cuttingplane methods for integer programs, and compare them with methods like the Lshaped decomposition RECOURSE PROBLEMS 213 method for (continuous) stochastic programs. 
It must be noted that cuttingplane methods are hardly ever used in their pure form for solving integer programs. They are usually combined with other methods. For the sake of exposition, however, we shall bieﬂy sketch some of the ideas. When we solve the relaxed linear programming version of (7.1), we have diﬃculties because we have increased the solution space. However, all points that we have added are nonintegral. In principle, it is possible to add extra constraints to the linear programming relaxation to cut oﬀ some of these noninteger solutions, namely those that are not convex combinations of feasible integral points. These cuts will normally be added in an iterative manner, very similarly to the way we added cuts in the Lshaped decomposition method. In fact, the Lshaped decomposition method is known as Benders’ decomposition in other areas of mathematical programming, and its original goal was to solve (mixed) integer programming problems. However, it was not cast in the way we are presenting cuts below. So, in all its simplicity, a cuttingplane method will run through two major steps. The ﬁrst is to solve a relaxed linear program; the second is to evaluate the solution, and if it is not integral, add cuts that cut away nonintegral points (including the present solution). These cuts are then added to the relaxed linear program, and the cycle is repeated. Cuts can be of diﬀerent types. Some come from straightforward arithmetic operations based on the LP solution and the LP constraints. These are not necessarily very tight. Others are based on structure. For a growing number of problems, knowledge about some or all facets of the (integer) solution space is becoming available. By a facet in this case, we understand the following. The solution space of the relaxed linear program contains all integral feasible points, and none extra. 
If we add a minimal number of new inequalities, such that no integral points are cut off, and such that all extreme points of the new feasible set are integral, then the intersection between a hyperplane representing such an inequality and the new set of feasible solutions is called a facet. Facets are sometimes added as they are found to be violated, and sometimes before the procedure is started. How does this relate to the L-shaped decomposition procedure? Let us be a bit formal. If all costs in a recourse problem are zero, and we choose to use the L-shaped decomposition method, there will be no optimality cuts, only feasibility cuts. Such a stochastic linear program could be written as

min c^T x
s.t. Ax = b, x ≥ 0,
     W y(ξ) = h(ξ) − T(ξ)x, y(ξ) ≥ 0.     (7.3)

To use the L-shaped method to solve (7.3), we should begin by solving the problem

min c^T x
s.t. Ax = b, x ≥ 0,

i.e. (7.3) without the last set of constraints. Then, if the resulting x̂ makes the last set of constraints in (7.3) feasible for all ξ, we are done. If not, an implied feasibility cut is added. An integer program, on the other hand, could be written as

min c^T x
s.t. Ax = b,
     x_i ∈ {a_i, . . . , b_i} for all i.     (7.4)

A cutting-plane procedure for (7.4) will solve the problem with the constraints a ≤ x ≤ b, so that the integrality requirement is relaxed. Then, if the resulting x̂ is integral in all its elements, we are done. If not, an integrality cut is added. This cut will, if possible, be a facet of the solution space with all extreme points integral. By now, realizing that integrality cuts are also feasibility cuts, the connection should be clear. Integrality cuts in integer programming are just a special type of feasibility cuts. For the bounding version of the L-shaped decomposition method we combined bounding (with partitioning of the support) with cuts.
In the same way, we can combine branching and cuts in the branch-and-cut algorithm for integer programs (still deterministic). The idea is fairly simple (but requires a lot of details to be efficient). For all waiting nodes, before or after we have solved the relaxed LP, we add an appropriate number of cuts before we (re)solve the LP. How many cuts we add will often depend on how well we know the facets of the (integer) solution space. This new LP will have a smaller (continuous) solution space, and is therefore likely to give a better result: either a nonintegral optimal solution with a higher objective value (increasing the probability of bounding), or an integer solution. So, finally, we have reached the ultimate question. How can all of this be used to solve integer stochastic programs? Given the simplification that we have integrality only in the first-stage problem, the procedure is given in Figure 27. In the procedure we operate with a set of waiting nodes P. These are nodes in the cut-and-branch tree that are not yet fathomed or bounded. The procedure feascut was presented earlier in Figure 9, whereas the new procedure intcut is outlined in Figure 28. Let us try to compare the L-shaped integer programming method with the continuous one presented in Figure 10.
procedure L-shaped Integer;
begin
  Let z := ∞, the best solution so far;
  Let P := {initial problem with K := L := 0};
  while P ≠ ∅ do
  begin
    Pickproblem(P, P);
    P := P \ {P};
    repeat (* for problem P *)
      master(K, L, x̂, θ̂, feasible);
      fathom := not (feasible) or (c^T x̂ + θ̂ > z);
      if not (fathom) then
      begin
        feascut(A, x̂, newcut);
        if not (newcut) then intcut(x̂, newcut);
        if not (newcut) then
        begin
          if x̂ integral then
          begin
            Find Q(x̂);
            z := min{z, c^T x̂ + Q(x̂)};
            fathom := (θ̂ ≥ Q(x̂));
            if not (fathom) then
            begin
              L := L + 1;
              Create the cut −β_L^T x + θ ≥ α_L;
            end;
          end
          else
          begin
            Use branching to create 2 new problems P1 and P2;
            Let P := P ∪ {P1, P2};
          end;
        end;
      end;
    until fathom;
  end; (* while *)
end;

Figure 27  The L-shaped decomposition method when the first-stage problem contains integers.

procedure intcut(x̂ : real; newcut : boolean);
begin
  if violated integrality constraints found then
  begin
    K := K + 1;
    Create a cut −γ_K^T x + θ ≥ δ_K;
    newcut := true;
  end
  else newcut := false;
end;

Figure 28  Procedure for generating cuts based on integrality.

3.7.1 Initialization

In the continuous case we started by assuming the existence of an x̂ feasible in the first stage. It can be found, for example, by solving the expected value problem. This is not how we start in the integer case. The reason is partly that finding a feasible solution is more complicated in that setting. On the other hand, it might be argued that if we hope to solve the integer stochastic problem, we should be able to solve the expected value problem (or at least find a feasible solution to the master problem), thereby being able to start out with a feasible solution (and a z better than ∞). But even in this case we shall not normally be calling procedure master with a feasible solution at hand. If we have just created a feasibility cut, the present x̂ is not feasible. Therefore the difference in initialization is natural.
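The control flow of Figure 27 (a set of waiting problems, bounding, fathoming against an incumbent, branching) is easiest to see in a purely deterministic setting. The sketch below applies the same skeleton to a 0/1 knapsack problem, where the LP relaxation happens to be solvable greedily; the knapsack setting and all function names are our own illustration, not part of the integer L-shaped method itself.

```python
def knapsack_bb(values, weights, capacity):
    """Plain branch-and-bound for a 0/1 knapsack (maximization),
    mirroring the skeleton of Figure 27: a set P of waiting problems,
    a bound from an LP relaxation, fathoming against the incumbent,
    and branching on a fractional variable."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def lp_bound(fixed):
        # Greedy fractional relaxation: exact LP bound for a knapsack.
        cap = capacity - sum(weights[i] for i in fixed if fixed[i] == 1)
        if cap < 0:
            return None, None          # node infeasible
        val = sum(values[i] for i in fixed if fixed[i] == 1)
        for i in order:
            if i in fixed:
                continue
            if weights[i] <= cap:
                cap -= weights[i]
                val += values[i]
            else:
                # item i only fits fractionally: branch on it
                return val + values[i] * cap / weights[i], i
        return val, None               # relaxation already integral

    best = 0                           # incumbent value ("z" in Figure 27)
    P = [{}]                           # waiting problems: partial fixings
    while P:
        fixed = P.pop()
        bound, frac = lp_bound(fixed)
        if bound is None or bound <= best:
            continue                   # fathom: infeasible or bounded
        if frac is None:
            best = max(best, bound)    # integral solution: new incumbent
        else:                          # branch on the fractional item
            P.append({**fixed, frac: 0})
            P.append({**fixed, frac: 1})
    return best
```

Here `lp_bound` plays the role of the master problem: a node is fathomed, exactly as in Figure 27, when it is infeasible or when its bound cannot beat the incumbent.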
This also affects the generation of feasibility cuts.

3.7.2 Feasibility Cuts

Both approaches operate with feasibility cuts. In the continuous case these are all implied constraints, needed to make the second-stage problem feasible for all possible realizations of the random variables. For the integer case we still use these, and we add any cuts that are commonly used in branch-and-cut procedures in integer programming, preferably facets of the solution space with integral extreme points. To reflect all possible kinds of such cuts (some concerning second-stage feasibility, some integrality), we use a call to procedure feascut plus the new procedure intcut. Typically, implied constraints are based on an x̂ that is nonintegral, and therefore infeasible. In the end, though, integrality will be there, based on the branching part of the algorithm, and then the cuts will indeed be based on a feasible (integral) solution.

3.7.3 Optimality Cuts

The creation of optimality cuts is the same in both cases, since in the integer case we create such cuts only for feasible (integer) solutions.

3.7.4 Stopping Criteria

The stopping criteria are basically the same, except that what halts the whole procedure in the continuous case just fathoms a node in the integer case.

3.8 Stochastic Decomposition

Throughout this book we are trying to reserve superscripts on variables and parameters for outcomes/realizations, and subscripts for time and components of vectors. This creates difficulties in this section. Since whatever we do will be wrong compared with our general rules, we have chosen to use the indexing of the original authors of papers on stochastic decomposition. The L-shaped decomposition method, outlined in Section 3.2, is a deterministic method. By that, we mean that if the algorithm is repeated with the same input data, it will give the same results each time. In contrast to this, we have what are called stochastic methods.
These are methods that ideally will not give the same results in two runs, even with the same input data. We say "ideally" because it is impossible in the real world to create truly random numbers, and hence, in practice, it is possible to repeat a run. Furthermore, these methods have stopping criteria that are statistical in nature. Normally, they converge with probability 1. The reason for calling these methods random is that they are guided by some random effects, for example samples. In this section we present the method called stochastic decomposition (SD). The approach, as we present it, requires relatively complete recourse. We have until now described the part of the right-hand side in the recourse problem that does not depend on x by h_0 + Hξ. This was done to combine two different effects, namely to allow certain right-hand-side elements to be dependent, but at the same time to be allowed to work on independent random variables. SD does not require independence, and hence we shall replace h_0 + Hξ by just ξ, since we no longer make any assumptions about independence between the components of ξ. We do assume, however, that q(ξ) ≡ q_0, so all randomness is in the right-hand side. The problem to solve is therefore the following:

min{φ(x) ≡ c^T x + Q(x)}
s.t. Ax = b, x ≥ 0,

where

Q(x) = ∫ Q(x, ξ) f(ξ) dξ,

with f being the density function for ξ̃, and
Q(x, ξ) = min{q_0^T y | W y = ξ − T(ξ)x, y ≥ 0}.

Using duality, we get the following alternative formulation of Q(x, ξ):
Q(x, ξ) = max{π^T [ξ − T(ξ)x] | π^T W ≤ q_0^T}.

Again we note that ξ and x do not enter the constraints of the dual formulation, so that if a given ξ and x produce a solvable problem, the problem is dual feasible for all ξ and x. Furthermore, if π^0 is a dual feasible solution, then

Q(x, ξ) ≥ (π^0)^T [ξ − T(ξ)x]

for any ξ and x, since π^0 is feasible but not necessarily optimal in a maximization problem. This observation is a central part of SD. Refer back to our discussion of how to interpret the Jensen lower bound in Section 3.4.1, where we gave three different interpretations, one of which was approximate optimization using a finite number of dual feasible bases rather than all possible dual feasible bases. In SD we shall build up a collection of dual feasible bases, and in some of the optimizations use this subset rather than all possible bases. In itself, this will produce a lower-bounding solution. But SD is also a sampling technique. By ξ_k we shall understand the sample made in iteration k. At the same time, x^k will refer to the iterate (i.e. the presently best guess of the optimal solution) in iteration k. The first thing to do after a new sample has been made available is to evaluate Q(x^k, ξ_j) for the new iterate and all samples ξ_j found so far. First we solve for the newest sample ξ_k,
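The observation that any dual feasible π^0 yields a lower bound can be made concrete on a tiny recourse function. Below we take, as our own toy example (not from the text), W = (1, −1) and q_0 = (1, 1)^T, so that Q reduces to the absolute value of the right-hand side t = ξ − T(ξ)x, and the dual feasible set is −1 ≤ π ≤ 1:

```python
# Dual vertices of {pi : pi^T W <= q0^T} for W = (1, -1), q0 = (1, 1)^T:
# the feasible set is -1 <= pi <= 1, with vertices -1 and +1.
DUAL_VERTICES = [-1.0, 1.0]

def Q_exact(t):
    # Exact recourse value: max over ALL dual vertices of pi * t, i.e. |t|.
    return max(pi * t for pi in DUAL_VERTICES)

def Q_lower(t, V):
    # Inexact optimization over a subset V of dual feasible solutions:
    # always a lower bound on Q_exact(t), exact once V holds the optimizer.
    return max(pi * t for pi in V)
```

With V = {1} only, `Q_lower(-2, V)` returns −2, a valid but loose lower bound on `Q_exact(-2)` = 2; adding the second vertex makes the approximate optimization exact.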
Q(x^k, ξ_k) = max{π^T [ξ_k − T(ξ_k)x^k] | π^T W ≤ q_0^T},

to obtain an optimal dual solution π_k. Note that this optimization, being the first involving ξ_k, is exact. If we let V be the collection of all dual feasible solutions obtained so far, we now add π_k to V. Next, instead of evaluating Q(x^k, ξ_j) for j = 1, . . . , k − 1 (i.e. for the old samples) exactly, we simply solve

max{π^T (ξ_j − T(ξ_j)x^k) | π ∈ V}
to obtain π_j^k. Since V contains a finite number of vectors, this operation is very simple. Note that for all samples but the new one we perform approximate optimization using a limited set of dual feasible bases. The situation is illustrated in Figure 29.

Figure 29  Illustration of how stochastic decomposition performs exact optimization for the latest (third) sample point, but inexact optimization for the two old points.

There we see the situation for the third sample point. We first make an exact optimization for the new sample point, ξ_3, obtaining a true optimal dual solution π_3. This is represented in Figure 29 by the supporting hyperplane through (ξ_3, Q(x^3, ξ_3)). Afterwards, we solve inexactly for the two old sample points. There are three bases available for the inexact optimization. These bases are represented by the three thin lines. As we see, neither of the two old sample points finds its true optimal basis. If Ξ(ξ̃) = {ξ_1, ξ_2, ξ_3}, with each outcome having the same probability 1/3, we could now calculate a lower bound on Q(x^3) by computing
L(x^3) = (1/3) Σ_{j=1}^{3} (π_j^3)^T (ξ_j − T(ξ_j)x^3).

This would be a lower bound because of the inexact optimization performed for the old sample points. However, the three sample points probably do not represent the true distribution well, and hence what we have is only something that in expectation is a lower bound. Since, eventually, this term will converge towards Q(x), we shall in what follows write
Q(x^k) = (1/k) Σ_{j=1}^{k} (π_j^k)^T (ξ_j − T(ξ_j)x^k).

Remember, however, that this is not the true value of Q(x^k), just an estimate. In other words, we have now observed two major differences from the exact L-shaped method (page 171). First, we operate on a sample rather than on all outcomes, and, secondly, what we calculate is an estimate of a lower bound on Q(x^k) rather than Q(x^k) itself. Hence, since we have a lower bound, what we are doing is more similar to what we did when we used the L-shaped decomposition method within approximation schemes (see page 204). However, the reason for the lower bound is somewhat different. In the bounding version of L-shaped, the lower bound was based on conditional expectations, whereas here it is based on inexact optimization. On the other hand, we have earlier pointed out that the Jensen lower bound has three different interpretations, one of which is to use conditional expectations (as in procedure Bounding L-shaped) and another of which is inexact optimization (as in SD). So what is actually the principal difference? For the three interpretations of the Jensen bound to be equivalent, the limited set of bases must come from solving the recourse problem in the points of conditional expectations. That is not the case in SD. Here the points are random (according to the sample ξ_j). Using a limited number of bases still produces a lower bound, but not the Jensen lower bound. Therefore SD and the bounding version of L-shaped are really quite different. The reason for the lower bound is different, and the objective value in SD is only a lower bound in terms of expectations (due to sampling). One method picks the limited number of points in a very careful way, the other at random. One method has an exact stopping criterion (error bound), the other has a statistically based stopping rule. So, more than anything else, they are alternative approaches.
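The estimate above can be sketched in a few lines, reusing the same toy recourse Q(x, ξ) = |ξ − x| with dual vertices {−1, +1} (our assumption, not part of SD itself): averaging the inexact optimizations over the samples under-estimates the exact average, as claimed.

```python
import random

def sd_estimate(x, samples, V):
    """Estimate of Q(x): for each sample xi, inexact optimization over
    the finite dual collection V instead of the full dual polyhedron."""
    return sum(max(pi * (xi - x) for pi in V) for xi in samples) / len(samples)

random.seed(1)
samples = [random.uniform(-1.0, 3.0) for _ in range(200)]

est_small = sd_estimate(0.0, samples, [1.0])        # one dual vertex so far
est_full = sd_estimate(0.0, samples, [-1.0, 1.0])   # full dual vertex set
assert est_small <= est_full   # inexact optimization under-estimates
```

With the full vertex set the estimate equals the sample average of |ξ_j − x|; with a partial set it can only be smaller.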
If one cannot solve the exact problem, one resorts either to bounds or to sample-based methods. In the L-shaped method we demonstrated how to find optimality cuts. We can now find a cut corresponding to x^k (which is not binding, and might not even be a lower bound, although it represents an estimate of a lower bound). As for the L-shaped method, we shall replace Q(x) in the objective by θ, and then add constraints. The cut generated in iteration k is given by
θ ≥ (1/k) Σ_{j=1}^{k} (π_j^k)^T [ξ_j − T(ξ_j)x] = α_k^k + (β_k^k)^T x.

The double set of indices on α and β indicates that the cut was generated in iteration k (the subscript) and that it has been updated in iteration k (the superscript). In contrast to the L-shaped decomposition method, we must now also look at the old cuts. The reason is that, although we expect these cuts to be loose (since we use inexact optimization), they may in fact be far too tight (since they are based on a sample). Also, being old, they are based on a sample that is smaller than the present one, and hence probably not too good. We shall therefore want to phase them out, but not by throwing them away. Assume that there exists a lower bound on Q(x, ξ) such that Q(x, ξ) ≥ Q for all x and ξ. Then the old cuts
θ ≥ α_j^{k−1} + (β_j^{k−1})^T x for j = 1, . . . , k − 1

will be replaced by

θ ≥ ((k−1)/k) [α_j^{k−1} + (β_j^{k−1})^T x] + (1/k) Q
  = α_j^k + (β_j^k)^T x for j = 1, . . . , k − 1.     (8.1)

For technical reasons, Q = 0 is to be preferred. This inequality is looser than the previous one, since Q ≤ Q(x, ξ). The master problem now becomes

min c^T x + θ
s.t. Ax = b,
     −(β_j^k)^T x + θ ≥ α_j^k for j = 1, . . . , k,     (8.2)
     x ≥ 0,

yielding the next iterate x^{k+1}. Note that, since we assume relatively complete recourse, there are no feasibility cuts. The above format is the one to be used for computations. To understand the method better, however, let us show an alternative version of (8.2) that is less useful computationally but more illustrative (see Figure 30 for an illustration):
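The update (8.1) is a one-line computation per cut. A direct transcription follows, with `Q_low` standing for the lower bound Q (and `Q_low = 0` as the preferred choice); the function name is ours:

```python
def update_cut(alpha, beta, k, Q_low=0.0):
    """Update an old SD cut theta >= alpha + beta^T x from iteration
    k-1 to iteration k, following (8.1): scale the old cut by (k-1)/k
    and mix in the lower bound Q_low on Q(x, xi) with weight 1/k."""
    s = (k - 1) / k
    return s * alpha + Q_low / k, [s * b for b in beta]

alpha_new, beta_new = update_cut(2.0, [1.0, -1.0], 4)  # -> (1.5, [0.75, -0.75])
```

Repeated application shrinks an old cut geometrically toward the trivial bound θ ≥ Q, which is exactly the intended phasing out.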
min φ_k(x) ≡ c^T x + max_{j∈{1,...,k}} [α_j^k + (β_j^k)^T x]
s.t. Ax = b, x ≥ 0.

This defines the function φ_k(x) and shows more clearly than (8.2) that we do indeed have a function in x that we are minimizing. Also, φ_k(x) is the present estimate of φ(x) = c^T x + Q(x). The above setup has one major shortcoming: it might be difficult to extract a converging subsequence from the sequence x^k. A number of changes therefore have to be made. These make the algorithm look more messy, but the principles are not lost. To make it simpler (empirically) to extract a converging subsequence, we shall introduce a sequence of incumbent solutions x̄^k. Following the incumbent, there will be an index i_k that shows in which iteration the current x̄^k was found. We initiate the method by setting the counter k := 0, choosing an r ∈ (0, 1) (to be explained later), and letting ξ_0 := E ξ̃. Thus we solve
min c^T x + q_0^T y
s.t. Ax = b,
     W y = ξ_0 − T(ξ_0)x,
     x, y ≥ 0,

Figure 30  Representation of c^T x + Q(x) by a piecewise linear function.

to obtain an initial x^1. We initiate the incumbent x̄^0 := x^1 and show that it was found in iteration 1 by letting i_0 := 1. Next, let us see what is done in a general iteration of the algorithm. First the counter is increased by letting k := k + 1, and a sample ξ_k is found. We now need to find a new cut k as outlined before, and we need to update the cut that corresponds to the current incumbent. First, we solve
max{π^T [ξ_k − T(ξ_k)x^k] | π^T W ≤ q_0^T}

to obtain π_k. Next, we solve
max{π^T (ξ_k − T(ξ_k)x̄^{k−1}) | π^T W ≤ q_0^T}

to obtain π̄_k. As before, we then update the set of dual feasible bases by letting V := V ∪ {π_k, π̄_k}. We then need to make one new cut and update the old cuts. First, the new cut is made exactly as before. We solve

max{π^T [ξ_j − T(ξ_j)x^k] | π ∈ V}

to obtain π_j^k for j = 1, . . . , k − 1, and then create the kth cut as
θ ≥ (1/k) Σ_{j=1}^{k} (π_j^k)^T [ξ_j − T(ξ_j)x] = α_k^k + (β_k^k)^T x.

In addition, we need to update the incumbent cut i_{k−1}. This is done just the way we found cut k. We solve

max{π^T [ξ_j − T(ξ_j)x̄^{k−1}] | π ∈ V}

to obtain π̄_j^k, and replace the old cut i_{k−1} by
θ ≥ (1/k) Σ_{j=1}^{k} (π̄_j^k)^T [ξ_j − T(ξ_j)x] = α_{i_{k−1}}^k + (β_{i_{k−1}}^k)^T x.

The remaining cuts are updated as before by letting

θ ≥ ((k−1)/k) [α_j^{k−1} + (β_j^{k−1})^T x] + (1/k) Q = α_j^k + (β_j^k)^T x for j = 1, . . . , k − 1, j ≠ i_{k−1}.

Now it is time to check if the incumbent should be changed. We shall use Figure 31 for illustration, and we shall use the function φ_k(x) defined earlier. In the figure we have k = 3. When we entered iteration k, our approximation of φ(x) was given by φ_{k−1}(x). Our incumbent solution was x̄^{k−1} and our iterate was x^k. We show this in the top part of Figure 31 as x̄^2 and x^3. The position of x̄^2 is somewhat arbitrary, since we cannot know how things looked in the previous iteration. Therefore φ_{k−1}(x^k) − φ_{k−1}(x̄^{k−1}) ≤ 0 was our approximation of how much we should gain by making x^k our new incumbent. However, x^k might be in an area where φ_{k−1}(x) is a bad approximation of φ(x). The function φ_k(x), on the other hand, was developed around x^k, and should therefore be good in that area (in addition to being approximately as good as φ_{k−1}(x) around x̄^{k−1}). This can be seen in the bottom part of Figure 31, where φ_3(x) is given. The function φ_3(x) is based on three cuts. One is new, the other two are updates of the two cuts in the top part of the figure, according to (8.1). Hence φ_k(x^k) − φ_k(x̄^{k−1}) (a negative number) is a measure of how much we actually gained. If

φ_k(x^k) − φ_k(x̄^{k−1}) < r [φ_{k−1}(x^k) − φ_{k−1}(x̄^{k−1})],

we gained at least a portion r of what we hoped for, and we let x̄^k := x^k and i_k := k. If not, we were not happy with the change, and we let x̄^k := x̄^{k−1} and i_k := i_{k−1}. When we have updated the incumbent, we solve a new master problem to obtain x^{k+1} and repeat the process. The stopping criterion for SD is of a statistical nature, and its complexity is beyond the scope of this book. For a reference, see the end of the chapter.
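The incumbent test can be sketched as follows; `phi` evaluates the cutting-plane model φ_k from a list of (α_j, β_j) pairs, and the function names and parameters are our own illustration, not SD's official interface:

```python
def phi(x, c, cuts):
    """phi_k(x) = c^T x + max_j (alpha_j + beta_j^T x) over current cuts."""
    lin = sum(ci * xi for ci, xi in zip(c, x))
    return lin + max(a + sum(bi * xi for bi, xi in zip(b, x)) for a, b in cuts)

def accept_incumbent(x_new, x_inc, c, cuts_k, cuts_km1, r=0.5):
    """Accept x_new as incumbent if the decrease realized under the new
    model phi_k is at least a fraction r of the decrease predicted by
    the old model phi_{k-1} (both differences are negative numbers)."""
    predicted = phi(x_new, c, cuts_km1) - phi(x_inc, c, cuts_km1)
    realized = phi(x_new, c, cuts_k) - phi(x_inc, c, cuts_k)
    return realized < r * predicted
```

If the new model, which is trustworthy around x_new, confirms enough of the promised gain, the incumbent moves; otherwise it stays where the model is already known to be good.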
Figure 31  Calculations to find out if the incumbent should be changed.

3.9 Stochastic Quasi-Gradient Methods

We are still dealing with recourse problems stated in the somewhat more general form
min_{x∈X} { f(x) + ∫_Ξ Q(x, ξ) P_ξ̃(dξ) }.     (9.1)

This formulation also includes the stochastic linear program with recourse, letting

X = {x | Ax = b, x ≥ 0},
f(x) = c^T x,
Q(x, ξ) = min{(q(ξ))^T y | W y = h(ξ) − T(ξ)x, y ≥ 0}.

To describe the so-called stochastic quasi-gradient method (SQG), we simplify the notation by defining F(x, ξ) := f(x) + Q(x, ξ) and hence considering the problem
min_{x∈X} E_ξ̃ F(x, ξ̃),     (9.2)

for which we assume that

E_ξ̃ F(x, ξ̃) is finite and convex in x,     (9.3 i)
X is convex and compact.     (9.3 ii)

Observe that for stochastic linear programs with recourse the assumptions (9.3) are satisfied if, for instance,

• we have relatively complete recourse, the recourse function Q(x, ξ) is a.s. finite for all x, and the components of ξ̃ are square-integrable (i.e. their second moments exist);
• X = {x | Ax = b, x ≥ 0} is bounded.

Then, starting from some feasible point x^0 ∈ X, we may define an iterative process by

x^{ν+1} = Π_X(x^ν − ρ_ν v^ν),     (9.4)

where v^ν is a random vector, ρ_ν ≥ 0 is some step size, and Π_X is the projection onto X, i.e. for y ∈ IR^n, with ‖·‖ the Euclidean norm,

Π_X(y) = arg min_{x∈X} ‖y − x‖.     (9.5)
By assumption (9.3 i), ϕ(x) := E_ξ̃ F(x, ξ̃) is convex in x. If this function is also differentiable with respect to x at an arbitrary point z, with gradient g := ∇ϕ(z) = ∇_x E_ξ̃ F(z, ξ̃), then −g is the direction of steepest descent of ϕ(x) = E_ξ̃ F(x, ξ̃) in z, and we should probably like to choose −g as the search direction to decrease our objective. However, this does not seem to be a practical approach, since, as we know already, evaluating ϕ(x) = E_ξ̃ F(x, ξ̃), as well as ∇ϕ(z) = ∇_x E_ξ̃ F(z, ξ̃), is a rather cumbersome task. In the differentiable case we know from Proposition 1.21 on page 81 that, for a convex function ϕ,

(x − z)^T ∇ϕ(z) ≤ ϕ(x) − ϕ(z)     (9.6)

has to hold for all x, z (see Figure 27 in Chapter 1). But even if the convex function ϕ is not differentiable at some point z, e.g. if it has a kink there, it is shown in convex analysis that there exists at least one vector g such that

(x − z)^T g ≤ ϕ(x) − ϕ(z) for all x.     (9.7)

Any vector g satisfying (9.7) is called a subgradient of ϕ at z, and the set of all vectors satisfying (9.7) is called the subdifferential of ϕ at z, denoted by ∂ϕ(z). If ϕ is differentiable at z then ∂ϕ(z) = {∇ϕ(z)}; otherwise, i.e. in the nondifferentiable case, ∂ϕ(z) may contain more than one element, as shown for instance in Figure 32. Furthermore, in view of (9.7), it is easily seen that ∂ϕ(z) is a convex set. If ϕ is convex and g ≠ 0 is a subgradient of ϕ at z then, by (9.7), for λ > 0 it follows that

ϕ(z + λg) ≥ ϕ(z) + g^T (λg) = ϕ(z) + λ‖g‖² > ϕ(z).

Hence any subgradient g ∈ ∂ϕ(z) with g ≠ 0 is a direction of ascent, although not necessarily the direction of steepest ascent, as the gradient would be if ϕ were differentiable in z. However, in contrast to the differentiable case, −g need not be a direction of strict descent for ϕ in z. Consider for example the convex function in two variables

ψ(u, v) := |u| + |v|.
Then for ẑ = (0, 3)^T we have g = (1, 1)^T ∈ ∂ψ(ẑ), since for all ε > 0 the gradient ∇ψ(ε, 3) exists and is equal to g. Hence, by (9.6), we have, for all (u, v),

[(u, v)^T − (ε, 3)^T]^T g = (u − ε) + (v − 3) ≤ |u| + |v| − ε − 3,
Figure 32  Nondifferentiable convex function: subgradients.

which is obviously true for all ε ≥ 0, such that g is a subgradient of ψ in (0, 3)^T. Then for 0 < λ < 3 and ẑ − λg = (−λ, 3 − λ)^T it follows that

ψ(ẑ − λg) = 3 = ψ(ẑ),

and therefore, in this particular case, −g is not a strict descent direction for ψ in ẑ. Nevertheless, as we see in Figure 33, moving from ẑ along the ray ẑ − λg, λ > 0, for any λ < 3 we would come closer, with respect to the Euclidean norm, to arg min ψ = {(0, 0)^T} than we are at ẑ. It is worth noting that this property of a subgradient of a convex function holds in general, and not only for our particular example. Let ϕ be a convex function and assume that g ∈ ∂ϕ(z), g ≠ 0. Assume further that z ∉ arg min ϕ and x* ∈ arg min ϕ. Then we have, for ρ > 0, with the Euclidean norm, using (9.7),
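The claims in this example are easy to verify numerically: the subgradient inequality (9.7) for g = (1, 1)^T at ẑ = (0, 3)^T, the absence of strict descent along −g, and the decrease in distance to arg min ψ. A small check, entirely our own, over a grid of test points:

```python
from math import hypot

def psi(u, v):
    # The example function psi(u, v) = |u| + |v|.
    return abs(u) + abs(v)

g = (1.0, 1.0)     # claimed subgradient of psi at z = (0, 3)
z = (0.0, 3.0)

# Subgradient inequality (9.7): (x - z)^T g <= psi(x) - psi(z).
pts = [(u / 2, v / 2) for u in range(-8, 9) for v in range(-8, 9)]
assert all((u - z[0]) * g[0] + (v - z[1]) * g[1] <= psi(u, v) - psi(*z)
           for u, v in pts)

# Moving along -g: no descent in psi, yet the Euclidean distance to
# arg min psi = {(0, 0)} shrinks.
lam = 1.0
step = (z[0] - lam * g[0], z[1] - lam * g[1])   # the point (-1, 2)
assert psi(*step) == psi(*z) == 3.0
assert hypot(*step) < hypot(*z)
```

The inequality reduces here to u + v ≤ |u| + |v|, which holds everywhere, confirming that g is a subgradient despite the kink at u = 0.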
‖(z − ρg) − x*‖² = ‖(z − x*) − ρg‖²
  = ‖z − x*‖² + ρ²‖g‖² − 2ρ g^T (z − x*)
  ≤ ‖z − x*‖² + ρ²‖g‖² − 2ρ [ϕ(z) − ϕ(x*)].

Since, by our assumption, ϕ(z) − ϕ(x*) > 0, we may choose a step size ρ = ρ̄ > 0 such that
ρ̄²‖g‖² − 2ρ̄ [ϕ(z) − ϕ(x*)] < 0,

implying that z − ρ̄g is closer to x* ∈ arg min ϕ than z. This property provides the motivation for the iterative procedures known as subgradient methods, which minimize convex functions even in the nondifferentiable case.

Figure 33  Decreasing the distance to arg min ψ using a subgradient.

Obviously, for the above procedure (9.4) we may not expect any reasonable convergence statement without further assumptions on the search direction v^ν and on the step size ρ_ν. Therefore let v^ν be a so-called stochastic quasi-gradient, i.e. assume that

E(v^ν | x^0, · · · , x^ν) ∈ ∂_x E_ξ̃ F(x^ν, ξ̃) + b^ν,     (9.8)

where ∂_x denotes the subdifferential with respect to x, as mentioned above, coinciding with the gradient in the differentiable case. Let us recall what we are doing here. Starting with some x^ν, we choose for (9.4) a random vector v^ν. It seems plausible to assume that v^ν depends in some way on ξ̃ (e.g. on an observation ξ^ν or on a sample {ξ^{ν1}, · · · , ξ^{νN_ν}} of ξ̃) and on x^ν. Then, after the choice of the step size ρ_ν, by (9.4) the next iterate x^{ν+1} depends on x^ν. It follows that v^ν is itself random. This implies that the tuples (x^0, x^1, · · · , x^ν) are random for all ν ≥ 1. Hence (9.8) is not yet much of a requirement. It just says that the expected value of v^ν, under the condition of the path of iterates generated so far, (x^0, · · · , x^ν), is to be written as the sum of a subgradient g^ν ∈ ∂_x E_ξ̃ F(x^ν, ξ̃) and some vector b^ν. Since, by the convexity according to (9.3 i) and applying (9.7),

E_ξ̃ F(x*, ξ̃) − E_ξ̃ F(x^ν, ξ̃) ≥ (g^ν)^T (x* − x^ν)     (9.9)

for any solution x* of (9.2) and any g^ν ∈ ∂_x E_ξ̃ F(x^ν, ξ̃), we have from (9.8) that

0 ≥ E_ξ̃ F(x*, ξ̃) − E_ξ̃ F(x^ν, ξ̃) ≥ E(v^ν | x^0, · · · , x^ν)^T (x* − x^ν) + γ_ν,     (9.10)

where

γ_ν = −(b^ν)^T (x* − x^ν).     (9.11)

Intuitively, if we assume that {x^ν} converges to x* and all v^ν are uniformly bounded, i.e.
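A minimal projected subgradient iteration in the spirit of (9.4) can be sketched as follows, using the deterministic subgradient of ψ(u, v) = |u| + |v| (so b^ν = 0) and the step sizes ρ_ν = 1/(ν + 1), which satisfy (9.15) below; the box X and all function names are our own sketch, not the book's notation:

```python
def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n: a simple Pi_X.
    return [min(max(xi, lo), hi) for xi in x]

def subgradient_method(x0, subgrad, lo, hi, iters):
    """Projected subgradient iteration x <- Pi_X(x - rho_nu * v_nu),
    with step sizes rho_nu = 1/(nu + 1)."""
    x = list(x0)
    for nu in range(iters):
        rho = 1.0 / (nu + 1)
        g = subgrad(x)
        x = project_box([xi - rho * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# Minimize psi(u, v) = |u| + |v| over the box [-5, 5]^2; a coordinate-wise
# sign vector is a subgradient of psi everywhere.
sign = lambda t: (t > 0) - (t < 0)
x = subgradient_method([4.0, 3.0], lambda x: [sign(xi) for xi in x],
                       -5.0, 5.0, 2000)
assert all(abs(xi) < 0.01 for xi in x)
```

Since Σρ_ν diverges while Σρ_ν² converges, the iterates reach the minimizer (0, 0) and then oscillate around it with ever smaller amplitude, even though −g is not always a strict descent direction.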
‖v^ν‖ ≤ α for some constant α, we should require that b^ν → 0 as ν → ∞, implying γ_ν → 0 as well. Observe that the particular choice of a stochastic subgradient

v^ν ∈ ∂_x F(x^ν, ξ^ν),     (9.12)

or more generally
v^ν = (1/N_ν) Σ_{μ=1}^{N_ν} w^μ, with w^μ ∈ ∂_x F(x^ν, ξ^{νμ}),     (9.13)
where the ξ^ν or ξ^{νμ} are independent samples of ξ̃, would yield b^ν = 0, γ_ν = 0 for all ν, provided that the operations of integration and differentiation may be exchanged, as asserted for example by Proposition 1.2 for the differentiable case. Finally, assume that for the step size ρ_ν, together with v^ν and γ_ν, we have
ρ_ν ≥ 0, Σ_{ν=0}^{∞} ρ_ν = ∞, Σ_{ν=0}^{∞} E_ξ̃(ρ_ν |γ_ν| + ρ_ν² ‖v^ν‖²) < ∞.     (9.14)

With the choices (9.12) or (9.13), for uniformly bounded v^ν this assumption could obviously be replaced by the step size assumption
ρ_ν ≥ 0, Σ_{ν=0}^{∞} ρ_ν = ∞, Σ_{ν=0}^{∞} ρ_ν² < ∞.     (9.15)

With these prerequisites, it can be shown that, under the assumptions (9.3), (9.8) and (9.14) (or (9.3), (9.12) or (9.13), and (9.15)), the iterative method (9.4) converges almost surely (a.s.) to a solution of (9.2).

3.10 Solving Many Similar Linear Programs

In both the L-shaped (continuous and integer) and stochastic decomposition methods we are faced with the problem of solving many similar LPs. This is most obvious in the L-shaped method: cut formation requires the solution of many LPs that differ only in the right-hand side and objective. This amount of work, which is typically enormous, must be performed in each major iteration. For stochastic decomposition it is perhaps less obvious that we are facing such a large workload, but, added over all iterations, we still end up with a large number of similar LPs. The problem of solving a large number of similar LPs has attracted attention for quite a while, in particular when there is only right-hand-side randomness. Therefore let us proceed under the assumption that q(ξ) ≡ q_0. The major idea is that of bunching. This is a simple idea. If we refer back to the discussion of the L-shaped decomposition method, we observed that the dual formulation of the recourse problem was given by
max_π {π^T (h(ξ) − T(ξ)x) | π^T W ≤ q_0^T}.     (10.1)

What we observe here is that the part that varies, h(ξ) − T(ξ)x, appears only in the objective. As a consequence, if (10.1) is feasible for one value of x and ξ, it is feasible for all values of x and ξ. Of course, the problem might be unbounded (meaning that the primal is infeasible) for some x and ξ. For the moment we shall assume that this does not occur. (But if it does, it simply shows that we need a feasibility cut, not an optimality cut.) In a given iteration of the L-shaped decomposition method, x will be fixed, and all we are interested in is the selection of right-hand sides resulting from all possible values of ξ. Let us therefore simplify notation, and assume that we have a selection of right-hand sides ℬ, so that, instead of (10.1), we solve
max_π {π^T h | π^T W ≤ q_0^T}     (10.2)

for all h ∈ ℬ. Assume (10.2) is solved for one value of h ∈ ℬ with optimal basis B. Then B is a dual feasible basis for all h ∈ ℬ. Therefore, for all h ∈ ℬ for which B^{−1}h ≥ 0, the basis B is also primal feasible, and hence optimal. The idea behind bunching is simply to start out with some h ∈ ℬ, find the optimal basis B, and then check B^{−1}h for all other h ∈ ℬ. Whenever B^{−1}h ≥ 0, we have found the optimal solution for that h, and these right-hand sides are bunched together. We then remove these right-hand sides from ℬ, and repeat the process, of course with a warm start from B, using the dual simplex method, for one of the remaining right-hand sides in ℬ. We continue until all right-hand sides are bunched. That gives us all the information needed to find Q and the necessary optimality cut. This procedure has been followed up in several directions. An important one is called trickling down. Again, we start out with ℬ, and we solve (10.2) for some right-hand side to obtain a dual feasible basis B. This basis is stored in the root of a search tree that we are about to make. Now, for one h ∈ ℬ at a time, do the following. Start in the root of the tree, and calculate B^{−1}h. If B^{−1}h ≥ 0, register that this right-hand side belongs to the bunch associated
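Bunching itself is a few lines once a routine for B^{−1}h is available. The sketch below is our own construction: it uses the recourse matrix W = (I, −I), for which the four dual feasible bases are the sign matrices listed, one per quadrant of right-hand sides; a real implementation would discover new bases by dual simplex re-optimization instead of scanning a fixed list.

```python
def solve2(B, h):
    # Solve B y = h for a 2x2 basis matrix B (Cramer's rule),
    # i.e. compute B^{-1} h.
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    y0 = (h[0] * B[1][1] - B[0][1] * h[1]) / det
    y1 = (B[0][0] * h[1] - h[0] * B[1][0]) / det
    return [y0, y1]

def bunch(bases, rhs_list):
    """Group right-hand sides by the first dual feasible basis that is
    also primal feasible for them (B^{-1} h >= 0)."""
    bunches = {i: [] for i in range(len(bases))}
    for h in rhs_list:
        for i, B in enumerate(bases):
            if all(c >= 0 for c in solve2(B, h)):
                bunches[i].append(h)
                break
    return bunches

# Recourse matrix W = (I, -I): the dual feasible bases are the four
# sign combinations below, one per quadrant of the right-hand side.
bases = [[[1, 0], [0, 1]], [[-1, 0], [0, -1]],
         [[1, 0], [0, -1]], [[-1, 0], [0, 1]]]
res = bunch(bases, [[2, 3], [-1, -4], [5, -2], [-3, 1]])
```

Each right-hand side lands in exactly one bunch, and the per-bunch work is a single back-substitution rather than a full LP solve, which is the whole point of the technique.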
with B, and go to the next h ∈ ℬ. If B^{−1}h ≱ 0, pick a row for which primal feasibility is not satisfied. Perform a dual pivot step to obtain a new basis B (still dual feasible). Create a new node in the search tree associated with this new B. If the pivot was made in row i, we let the new node be the ith child of the node containing the previous basis. Continue until optimality is found.

Figure 34  Example of a bunching tree.

This situation is illustrated in Figure 34, where a total of seven bases are stored. The numbers on the arcs refer to the rows where pivoting took place; the B in the nodes illustrates that there is a basis stored in each node. This might not seem efficient. However, the real purpose comes after some iterations. If a right-hand side h is such that B^{−1}h ≱ 0, and one of the negative primal variables corresponds to a row index i such that the ith child of the given node in the search tree already exists, we simply move to that child without having to price. This is why we use the term trickling down. We try to trickle a given h as far down in the tree as possible, and only when there is no negative primal variable that corresponds to a child node of the present node do we price and pivot explicitly, thereby creating a new branch in the tree. Attempts have been made to first create the tree, and then trickle down the right-hand sides in the finished tree. This was not successful, for two reasons. If we try to enumerate all dual feasible bases, then the tree grows out of hand (this corresponds to extreme-point enumeration), and if we try to find the correct selection of such bases, then that in itself becomes an overwhelming problem. Therefore a predefined tree does not seem to be a good idea. It is worth noting that the idea of storing a selection of dual feasible bases, as was done in the stochastic decomposition method, is also related to the above approach. In that case the result is a lower bound on Q(x).
A variant of these methods is as follows. Start out with one dual feasible basis B as in the trickling down procedure. Pick a leading right-hand side. Now solve the problem corresponding to this leading right-hand side using the dual simplex method. While pivoting along, create a branch of the search tree just as for trickling down. The difference is as follows. For each basis B encountered, check B^{-1}h for all h ∈ B. Then split the right-hand sides remaining in B into three sets. Those that have B^{-1}h ≥ 0 are bunched with that B, and removed from B. Those that have a primal infeasibility in the same row as the one chosen to be the pivot row for the leading problem are kept in B, and hence carried along at least one more dual pivot step. The remaining right-hand sides are left behind in the given node, to be picked up later on.

When the leading problem has been solved to optimality, and bunching has been performed with respect to its optimal basis, check if there are any right-hand sides left in B. If there are, let one of them be the leading right-hand side, and continue the process. Eventually, when a leading problem has been solved to optimality, B = ∅. At that time, start backtracking the search tree. Whenever a selection of right-hand sides left behind is encountered, pick one of them as the leading problem, and repeat the process. On returning to the root and finding no right-hand sides left behind there, the process is finished: all right-hand sides are bunched. Technically, what has now been done is to traverse the search tree in preorder.

What remains to be discussed is what to store in the search tree. We have already seen that the minimal amount to store at any arc in the tree is the index of the leaving basic column (represented by the numbering of the children), and the entering column. If that is all we store, we have to pivot, but not price out, in each step of the trickling down.
If we have enough storage, it is more efficient to store, for example, the eta-vector (from the revised simplex method) or the Schur complement (it is not important here if you do not know what the eta-vector or the Schur complement is). Of course, we could in principle store B^{-1}, but for all practical problems that is too much to store.

3.10.1 Randomness in the Objective

The discussion of trickling down etc. was carried out in a setting of right-hand side randomness only. However, as with many other problems we have faced in this book, pure objective function randomness can be changed into pure right-hand side randomness by using linear programming duality. Therefore the discussions of right-hand side randomness apply to objective function randomness as well. One may then ask what happens if there is randomness in both the objective and the right-hand side. Trickling down cannot be performed the way we have outlined it in that case. This is because a basis that was optimal for one ξ will, in general, be neither primal nor dual feasible for some other ξ. On the other hand, the basis may be good, not far from the optimal one. Hence warm starts based on an old basis, performing a combination of primal and dual simplex steps, will almost surely be better than solving the individual LPs from scratch.

3.11 Bibliographical Notes

Benders' [1] decomposition is the basis for all decomposition methods in this chapter. In stochastic programming, as we have seen, it is more common to refer to Benders decomposition as the L-shaped decomposition method. That approach is outlined in detail in Van Slyke and Wets [63]. An implementation of the L-shaped decomposition method, called MSLiP, is presented in Gassmann [31]. It solves multistage problems based on nested decomposition. Alternative computational methods are also discussed in Kall [44]. The regularized decomposition method has been implemented under the name QDECOM.
For further details on the method and QDECOM, in particular for a special technique to solve the master (3.6), we refer to the original publication of Ruszczyński [61]; the presentation in this chapter is close to the description in his recent paper [62]. Some attempts have also been made to use interior point methods. As examples consider Birge and Qi [7], Birge and Holmes [6], Mulvey and Ruszczyński [60] and Lustig, Mulvey and Carpenter [55]. The latter two combine interior point methods with parallel processing. Parallel techniques have been tried by others as well; see e.g. Berland [2] and Jessup, Yang and Zenios [42]. We shall mention some others in Chapter 6.

The idea of combining branch-and-cut from integer programming with primal decomposition in stochastic programming was developed by Laporte and Louveaux [53]. Although the method is set in a strict setting of integrality only in the first stage, it can be expanded to cover (via a reformulation) multistage problems that possess the so-called block-separable recourse property; see Louveaux [54] for details.

Stochastic quasigradient methods were developed by Ermoliev [20, 21], and implemented by, among others, Gaivoronski [27, 28]. Besides stochastic quasigradients, several other possibilities for constructing stochastic descent directions have been investigated, e.g. in Marti [57] and in Marti and Fuchs [58, 59].

The Jensen lower bound was developed in 1906 [41]. The Edmundson–Madansky upper bound is based on work by Edmundson [19] and Madansky [56]. It has been extended to the multidimensional case by Gassmann and Ziemba [33]; see also Hausch and Ziemba [36] and Edirisinghe and Ziemba [17, 18]. Other references in this area include Huang, Vertinsky and Ziemba [39] and Huang, Ziemba and Ben-Tal [40]. The Edmundson–Madansky bound was generalized to the case of stochastically dependent components by Frauendorfer [23].
The piecewise linear upper bound is based on two independent approaches, namely those of Birge and Wets [11] and Wallace [66]. These were later combined and strengthened in Birge and Wallace [8]. There is a large collection of bounds based on extreme measures (see e.g. Dulá [12, 13], Hausch and Ziemba [36], Huang, Ziemba and Ben-Tal [40] and Kall [48]). Both the Jensen and Edmundson–Madansky bounds can be put into this category. For a fuller description of these methods, consult Birge and Wets [10], Dupačová [14, 15, 16] and Kall [47]; more on extreme measures may be found in Karr [51] and Kemperman [52]. Bounds can also be found when limited information is available; consult e.g. Birge and Dulá [5]. An upper bound based on structure can be found in Wallace and Yan [68].

Stochastic decomposition was developed by Higle and Sen [37, 38]. The ideas presented about trickling down and similar methods come from different authors, in particular Wets [70, 72], Haugland and Wallace [35], Wallace [65, 64] and Gassmann and Wallace [32]. A related approach is that of Gartska and Rutenberg [29], which is based on parametric optimization.

Partitioning has been discussed several times over the years. Some general ideas are presented in Birge and Wets [9]. More detailed discussions (with numerical results), on which the discussions in this book are based, can be found in Frauendorfer and Kall [26] and Berland and Wallace [3, 4]. Other texts about approximation by discretization include for example those of Kall [43, 45, 46] and Kall, Ruszczyński and Frauendorfer [49]. When partitioning the support to tighten bounds, it is possible to use more complicated cells than we have done. For example, Frauendorfer [24, 25] uses simplices. It is also possible to use more general polyhedra. For simple recourse, the separability of the objective, which facilitates computations substantially, was discovered by Wets [69].
The ability to replace the Edmundson–Madansky upper bound by the true objective's value was discussed in Kall and Stoyan [50]. Wets [71] has derived a special pivoting scheme that avoids the tremendous increase of the problem size known from general recourse problems according to the number of blocks (i.e. realizations). See also discussions by Everitt and Ziemba [22] and Hansotia [34].

The fisheries example in the beginning of the chapter comes from Wallace [67]. Another application concerning natural resources is presented by Gassmann [30].

Exercises
1. The second-stage constraints of a two-stage problem look as follows:

( 1  2  3 −1 )       ( −6 )       ( 5  0 −1 )
( 0 −1  2  1 ) y  =  ( −4 ) ξ  +  ( 0  2  4 ) x̃,    y ≥ 0,

where ξ̃ is a random variable with support Ξ = [0, 1]. Write down the LP (both primal and dual formulation) needed to check if a given x̂ produces a feasible second-stage problem. Do it in such a way that if the problem is not feasible, you obtain an inequality in x that cuts off the given x̂. If you have access to an LP code, perform the computations, and find the inequality explicitly for x̂ = (1, 1, 1)^T.

2. Look back at problem (4.1) we used to illustrate the bounds. Add one extra constraint, namely x_raw1 ≤ 40.
(a) Find the Jensen lower bound after this constraint has been added.
(b) Find the Edmundson–Madansky upper bound.
(c) Find the piecewise linear upper bound.
(d) Try to find a good variable for partitioning.

3. Assume that you are facing a decision problem where randomness is involved. You have no idea about the distribution of the random variables involved. However, you can obtain samples from the distribution by running an expensive experiment. You have decided to use stochastic decomposition to solve the problem, but are concerned that you may not be able to perform enough experiments for convergence to take place. The cost of a single experiment is much higher than the costs involved in the arithmetic operations of the algorithm.
(a) Argue why (or why not) it is reasonable to use stochastic decomposition under the assumptions given. (You can assume that all necessary convexity is there.)
(b) What changes could you suggest in stochastic decomposition in order to (at least partially) overcome the fact that samples are so expensive?

4. Let ϕ be a convex function. Show that x∗ ∈ arg min ϕ iff 0 ∈ ∂ϕ(x∗). (See the definition following (9.7).)

5. Show that for a convex function ϕ and any arbitrary z the subdifferential ∂ϕ(z) is a convex set. [Hint: For any subgradient (9.7) has to hold.]

6.
Assume that you are faced with a large number of linear programs that you need to solve. They represent all recourse problems in a two-stage stochastic program. There is randomness in both the objective function and the right-hand side, but the random variables affecting the objective are different from, and independent of, the random variables affecting the right-hand side.
(a) Argue why (or why not) it is a good idea to use some version of bunching or trickling down to solve the linear programs.
(b) Given that you must use bunching or trickling down in some version, how would you organize the computations?

7. First consider the following integer programming problem:

min_x {cx | Ax ≤ h, x_i ∈ {0, . . . , b_i} ∀i}.

Next, consider the problem of finding Eφ(x̃), with

φ(x) = min_y {cy | Ay ≤ h, 0 ≤ y ≤ x}.

(a) Assume that you solve the integer program with branch-and-bound. Your first step is then to solve the integer program above, but with x_i ∈ {0, . . . , b_i} ∀i replaced by 0 ≤ x ≤ b. Assume that you get x̂. Explain why x̂ can be a good partitioning point if you wanted to find Eφ(x̃) by repeatedly partitioning the support, and finding bounds on each cell. [Hint: It may help to draw a little picture.]
(b) We have earlier referred to Figure 18, stating that it can be seen as both the partitioning of the support for the stochastic program, and partitioning the solution space for the integer program. Will the number of cells be largest for the integer or the stochastic program above? Note that there is not necessarily a clear answer here, but you should be able to make arguments on the subject. Question (a) may be of some help.

8. Look back at Figure 17. There we replaced one distribution by two others: one yielding an upper bound, and one a lower bound. The possible values for these two new distributions were not the same. How would you use the ideas of Jensen and Edmundson–Madansky to achieve, as far as possible, the same points? You can assume that the distribution is bounded. [Hint: The Edmundson–Madansky distribution will have two more points than the Jensen distribution.]

References
[1] Benders J. F. (1962) Partitioning procedures for solving mixed-variables programming problems. Numer. Math. 4: 238–252.
[2] Berland N. J. (1993) Stochastic optimization and parallel processing. PhD thesis, Department of Informatics, University of Bergen.
[3] Berland N. J. and Wallace S. W. (1993) Partitioning of the support to tighten bounds on stochastic PERT problems. Working paper, Department of Managerial Economics and Operations Research, Norwegian Institute of Technology, Trondheim.
[4] Berland N. J. and Wallace S. W. (1993) Partitioning the support to tighten bounds on stochastic linear programs. Working paper, Department of Managerial Economics and Operations Research, Norwegian Institute of Technology, Trondheim.
[5] Birge J. R. and Dulá J. H. (1991) Bounding separable recourse functions with limited distribution information. Ann. Oper. Res. 30: 277–298.
[6] Birge J. R. and Holmes D. (1992) Efficient solution of two stage stochastic linear programs using interior point methods. Comp. Opt. Appl. 1: 245–276.
[7] Birge J. R. and Qi L. (1988) Computing block-angular Karmarkar projections with applications to stochastic programming. Management Sci., pages 1472–1479.
[8] Birge J. R. and Wallace S. W. (1988) A separable piecewise linear upper bound for stochastic linear programs. SIAM J. Control and Optimization 26: 725–739.
[9] Birge J. R. and Wets R. J.-B. (1986) Designing approximation schemes for stochastic optimization problems, in particular for stochastic programs with recourse. Math. Prog. Study 27: 54–102.
[10] Birge J. R. and Wets R. J.-B. (1987) Computing bounds for stochastic programming problems by means of a generalized moment problem. Math. Oper. Res. 12: 149–162.
[11] Birge J. R. and Wets R. J.-B. (1989) Sublinear upper bounds for stochastic programs with recourse. Math. Prog. 43: 131–149.
[12] Dulá J. H. (1987) An upper bound on the expectation of sublinear functions of multivariate random variables. Preprint, CORE.
[13] Dulá J. H. (1992) An upper bound on the expectation of simplicial functions of multivariate random variables. Math. Prog. 55: 69–80.
[14] Dupačová J. (1976) Minimax stochastic programs with nonconvex nonseparable penalty functions. In Prékopa A. (ed) Progress in Operations Research, pages 303–316. North-Holland, Amsterdam.
[15] Dupačová J. (1980) Minimax stochastic programs with nonseparable penalties. In Iracki K., Malanowski K., and Walukiewicz S. (eds) Optimization Techniques, Part I, volume 22 of Lecture Notes in Contr. Inf. Sci., pages 157–163. Springer-Verlag, Berlin.
[16] Dupačová J. (1987) The minimax approach to stochastic programming and an illustrative application. Stochastics 20: 73–88.
[17] Edirisinghe N. C. P. and Ziemba W. T. (1994) Bounding the expectation of a saddle function, with application to stochastic programming. Math. Oper. Res. 19: 314–340.
[18] Edirisinghe N. C. P. and Ziemba W. T. (1994) Bounds for two-stage stochastic programs with fixed recourse. Math. Oper. Res. 19: 292–313.
[19] Edmundson H. P. (1956) Bounds on the expectation of a convex function of a random variable. Technical Report Paper 982, The RAND Corporation.
[20] Ermoliev Y. (1983) Stochastic quasigradient methods and their application to systems optimization. Stochastics 9: 1–36.
[21] Ermoliev Y. (1988) Stochastic quasigradient methods. In Ermoliev Y. and Wets R. J.-B. (eds) Numerical Techniques for Stochastic Optimization, pages 143–185. Springer-Verlag.
[22] Everitt R. and Ziemba W. T. (1979) Two-period stochastic programs with simple recourse. Oper. Res. 27: 485–502.
[23] Frauendorfer K. (1988) Solving SLP recourse problems with arbitrary multivariate distributions—the dependent case. Math. Oper. Res. 13: 377–394.
[24] Frauendorfer K. (1989) A simplicial approximation scheme for convex two-stage stochastic programming problems. Manuscript, Inst. Oper. Res., University of Zurich.
[25] Frauendorfer K. (1992) Stochastic Two-Stage Programming, volume 392 of Lecture Notes in Econ. Math. Syst. Springer-Verlag, Berlin.
[26] Frauendorfer K. and Kall P. (1988) A solution method for SLP recourse problems with arbitrary multivariate distributions – the independent case. Probl. Contr. Inf. Theory 17: 177–205.
[27] Gaivoronski A. (1988) Interactive program SQG-PC for solving stochastic programming problems on IBM PC/XT/AT compatibles—user guide. Working Paper WP-88-11, IIASA, Laxenburg.
[28] Gaivoronski A. (1988) Stochastic quasigradient methods and their implementation. In Ermoliev Y. and Wets R. J.-B. (eds) Numerical Techniques for Stochastic Optimization, pages 313–351. Springer-Verlag.
[29] Gartska S. J. and Rutenberg D. P. (1973) Computation in discrete stochastic programs with recourse. Oper. Res. 21: 112–122.
[30] Gassmann H. I. (1989) Optimal harvest of a forest in the presence of uncertainty. Can. J. Forest Res. 19: 1267–1274.
[31] Gassmann H. I. (1990) MSLiP: A computer code for the multistage stochastic linear programming problem. Math. Prog. 47: 407–423.
[32] Gassmann H. I. and Wallace S. W. (1993) Solving linear programs with multiple right-hand sides: Pivoting and ordering schemes. Working paper, Department of Economics, Norwegian Institute of Technology, Trondheim.
[33] Gassmann H. and Ziemba W. T. (1986) A tight upper bound for the expectation of a convex function of a multivariate random variable. Math. Prog. Study 27: 39–53.
[34] Hansotia B. J. (1980) Stochastic linear programs with simple recourse: The equivalent deterministic convex program for the normal, exponential and Erlang cases. Naval. Res. Logist. Quart. 27: 257–272.
[35] Haugland D. and Wallace S. W. (1988) Solving many linear programs that differ only in the right-hand side. Eur. J. Oper. Res. 37: 318–324.
[36] Hausch D. B. and Ziemba W. T. (1983) Bounds on the value of information in uncertain decision problems, II. Stochastics 10: 181–217.
[37] Higle J. L. and Sen S. (1991) Stochastic decomposition: An algorithm for two stage stochastic linear programs with recourse. Math. Oper. Res. 16: 650–669.
[38] Higle J. L. and Sen S. (1991) Statistical verification of optimality conditions for stochastic programs with recourse. Ann. Oper. Res. 30: 215–240.
[39] Huang C. C., Vertinsky I., and Ziemba W. T. (1977) Sharp bounds on the value of perfect information. Oper. Res. 25: 128–139.
[40] Huang C. C., Ziemba W. T., and Ben-Tal A. (1977) Bounds on the expectation of a convex function of a random variable: With applications to stochastic programming. Oper. Res. 25: 315–325.
[41] Jensen J. L. (1906) Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Math. 30: 173–177.
[42] Jessup E. R., Yang D., and Zenios S. A. (1993) Parallel factorization of structured matrices arising in stochastic programming. Report 9302, Department of Public and Business Administration, University of Cyprus, Nicosia, Cyprus.
[43] Kall P. (1974) Approximations to stochastic programs with complete fixed recourse. Numer. Math. 22: 333–339.
[44] Kall P. (1979) Computational methods for solving two-stage stochastic linear programming problems. Z. Angew. Math. Phys. 30: 261–271.
[45] Kall P. (1986) Approximation to optimization problems: An elementary review. Math. Oper. Res. 11: 9–18.
[46] Kall P. (1987) On approximations and stability in stochastic programming. In Guddat J., Jongen H. T., Kummer B., and Nožička F. (eds) Parametric Optimization and Related Topics, pages 387–407. Akademie-Verlag, Berlin.
[47] Kall P. (1988) Stochastic programming with recourse: Upper bounds and moment problems—a review. In Guddat J., Bank B., Hollatz H., Kall P., Klatte D., Kummer B., Lommatzsch K., Tammer K., Vlach M., and Zimmermann K. (eds) Advances in Mathematical Optimization (Dedicated to Prof. Dr. Dr. hc. F. Nožička), pages 86–103. Akademie-Verlag, Berlin.
[48] Kall P. (1991) An upper bound for SLP using first and total second moments. Ann. Oper. Res. 30: 267–276.
[49] Kall P., Ruszczyński A., and Frauendorfer K. (1988) Approximation techniques in stochastic programming. In Ermoliev Y. M. and Wets R. J.-B. (eds) Numerical Techniques for Stochastic Optimization, pages 33–64. Springer-Verlag, Berlin.
[50] Kall P. and Stoyan D. (1982) Solving stochastic programming problems with recourse including error bounds. Math. Operationsforsch. Statist., Ser. Opt. 13: 431–447.
[51] Karr A. F. (1983) Extreme points of certain sets of probability measures, with applications. Math. Oper. Res. 8: 74–85.
[52] Kemperman J. M. B. (1968) The general moment problem, a geometric approach. Ann. Math. Statist. 39: 93–122.
[53] Laporte G. and Louveaux F. V. (1993) The integer L-shaped method for stochastic integer programs. Oper. Res. Lett. 13: 133–142.
[54] Louveaux F. V. (1986) Multistage stochastic linear programs with block separable recourse. Math. Prog. Study 28: 48–62.
[55] Lustig I. J., Mulvey J. M., and Carpenter T. J. (1991) Formulating two-stage stochastic programs for interior point methods. Oper. Res. 39: 757–770.
[56] Madansky A. (1959) Bounds on the expectation of a convex function of a multivariate random variable. Ann. Math. Statist. 30: 743–746.
[57] Marti K. (1988) Descent Directions and Efficient Solutions in Discretely Distributed Stochastic Programs, volume 299 of Lecture Notes in Econ. Math. Syst. Springer-Verlag, Berlin.
[58] Marti K. and Fuchs E. (1986) Computation of descent directions and efficient points in stochastic optimization problems without using derivatives. Math. Prog. Study 28: 132–156.
[59] Marti K. and Fuchs E. (1986) Rates of convergence of semi-stochastic approximation procedures for solving stochastic optimization problems. Optimization 17: 243–265.
[60] Mulvey J. M. and Ruszczyński A. (1992) A new scenario decomposition method for large-scale stochastic optimization. Technical Report SOR-91-19, Princeton University, Princeton, New Jersey.
[61] Ruszczyński A. (1986) A regularized decomposition method for minimizing a sum of polyhedral functions. Math. Prog. 35: 309–333.
[62] Ruszczyński A. (1993) Regularized decomposition of stochastic programs: Algorithmic techniques and numerical results. Working Paper WP-93-21, IIASA, Laxenburg.
[63] Van Slyke R. and Wets R. J.-B. (1969) L-shaped linear programs with applications to optimal control and stochastic linear programs. SIAM J. Appl. Math. 17: 638–663.
[64] Wallace S. W. (1986) Decomposing the requirement space of a transportation problem into polyhedral cones. Math. Prog. Study 28: 29–47.
[65] Wallace S. W. (1986) Solving stochastic programs with network recourse. Networks 16: 295–317.
[66] Wallace S. W. (1987) A piecewise linear upper bound on the network recourse function. Math. Prog. 38: 133–146.
[67] Wallace S. W. (1988) A two-stage stochastic facility location problem with time-dependent supply. In Ermoliev Y. and Wets R. J.-B. (eds) Numerical Techniques in Stochastic Optimization, pages 489–514. Springer-Verlag, Berlin.
[68] Wallace S. W. and Yan T. (1993) Bounding multistage stochastic linear programs from above. Math. Prog. 61: 111–130.
[69] Wets R. (1966) Programming under uncertainty: The complete problem. Z. Wahrsch. theorie u. verw. Geb. 4: 316–339.
[70] Wets R. (1983) Stochastic programming: Solution techniques and approximation schemes. In Bachem A., Grötschel M., and Korte B. (eds) Mathematical Programming: The State-of-the-Art, Bonn 1982, pages 566–603. Springer-Verlag, Berlin.
[71] Wets R. J.-B. (1983) Solving stochastic programs with simple recourse. Stochastics 10: 219–242.
[72] Wets R. J.-B. (1988) Large scale linear programming techniques. In Ermoliev Y. and Wets R. J.-B. (eds) Numerical Techniques for Stochastic Optimization, pages 65–93. Springer-Verlag.

4 Probabilistic Constraints
As we have seen in Sections 1.5 and 1.6, at least under appropriate assumptions, chance-constrained problems such as (4.21), or particularly (4.23), as well as recourse problems such as (4.11), or particularly (4.16) (all from Chapter 1), appear as ordinary convex smooth mathematical programming problems. This might suggest that these problems may be solved using known nonlinear programming methods. However, this viewpoint disregards the fact that in the direct application of those methods to problems like

min_{x∈X} E_ξ̃ c^T(ξ̃)x
s.t. P({ξ | T(ξ)x ≥ h(ξ)}) ≥ α

or
min_{x∈X} E_ξ̃ {c^T x + Q(x, ξ̃)},
where Q(x, ξ) = min{q^T y | W y ≥ h(ξ) − T(ξ)x, y ∈ Y},

we had repeatedly to obtain gradients and evaluations for functions like

P({ξ | T(ξ)x ≥ h(ξ)})  or  E_ξ̃ {c^T x + Q(x, ξ̃)}.

Each of these evaluations requires multivariate numerical integration, so that up to now this seems to be outside of the set of efficiently solvable problems. Hence we may try to follow the basic ideas of some of the known nonlinear programming methods, but at the same time we have to find ways to evade the exact evaluation of the integral functions contained in these problems. On the other hand, we also know from the example illustrated in Figure 18 of Chapter 1 that chance constraints may easily define nonconvex feasible sets. This leads to severe computational problems if we intend to find a global optimum. There is one exception to this general problem worth mentioning.

Proposition 4.1  The feasible set B(1) := {x | P({ξ | T(ξ)x ≥ h(ξ)}) ≥ 1} is convex.

Proof  Assume that x, y ∈ B(1) and that λ ∈ (0, 1). Then for Ξ_x := {ξ | T(ξ)x ≥ h(ξ)} and Ξ_y := {ξ | T(ξ)y ≥ h(ξ)} we have P(Ξ_x) = P(Ξ_y) = 1. As is easily shown, this implies for Ξ_∩ := Ξ_x ∩ Ξ_y that P(Ξ_∩) = 1. Obviously, for z := λx + (1 − λ)y we have T(ξ)z ≥ h(ξ) ∀ξ ∈ Ξ_∩, such that {ξ | T(ξ)z ≥ h(ξ)} ⊃ Ξ_∩. Hence we have z ∈ B(1). □

Considering once again the example illustrated in Figure 18 in Section 1.6, we observe that if we had required a reliability α > 93%, the feasible set would have been convex. This is a consequence of Proposition 4.1 for discrete distributions, and may be stated as follows.

Proposition 4.2  Let ξ̃ have a finite discrete distribution described by P(ξ̃ = ξ^j) = p_j, j = 1, ..., r (p_j > 0 ∀j). Then for α > 1 − min_{j∈{1,...,r}} p_j the feasible set B(α) := {x | P({ξ | T(ξ)x ≥ h(ξ)}) ≥ α} is convex.

Proof  The assumption on α implies that B(α) = B(1) (see the exercises at the end of this chapter).
□

In conclusion, for discrete distributions and reliability levels chosen "high enough" we have a convex problem. Replacing E_ξ̃ c(ξ̃) by c, we then simply have to solve the linear program (provided that X is convex polyhedral)

min_{x∈X} c^T x
s.t. T(ξ^j)x ≥ h(ξ^j), j = 1, ..., r.

This observation may be helpful for some particular chance-constrained problems with discrete distributions. However, it also tells us that for chance-constrained problems stated with continuous-type distributions and requiring a reliability level α < 1, we cannot expect—as discussed in Section 3.5 for the recourse problem—approximating the continuous distribution by successively refined discrete ones to be a successful approach. The reason should now be obvious: refining the discrete (approximating) distributions would imply at some stage that min_j p_j < 1 − α, such that the "approximating" problems were likely to become nonconvex—even if the original problem with its continuous distribution were convex. And approximating convex problems by nonconvex ones should certainly not be our aim!

In the next two sections we shall describe, under special assumptions (multivariate normal distributions), how chance-constrained programs can be treated computationally. In particular, we shall verify that, under our assumptions, a program with joint chance constraints becomes a convex program, and that programs with separate chance constraints may be reformulated to become deterministic convex programs amenable to standard nonlinear programming algorithms.

4.1 Joint Chance Constrained Problems

Let us concentrate on the particular stochastic linear program

min c^T x
s.t. P({ξ | T x ≥ ξ}) ≥ α,
     Dx = d,
     x ≥ 0.        (1.1)

For this problem we know from Propositions 1.5–1.7 in Section 1.6 that if the distribution function F is quasiconcave then the feasible set B(α) is a closed convex set.
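As a numerical illustration of Proposition 4.2 and the scenario LP above, consider the following short Python check (the data are made up: three scenarios with probabilities 0.5, 0.3, 0.2, so any α > 0.8 forces every scenario constraint to hold):

```python
import numpy as np

# Three scenarios (T_j, h_j) with probabilities p_j; min_j p_j = 0.2,
# so by Proposition 4.2 any alpha > 0.8 gives B(alpha) = B(1): the
# chance constraint reduces to T_j x >= h_j for every scenario j.
p = np.array([0.5, 0.3, 0.2])
T = [np.array([[1.0, 0.0]]),
     np.array([[0.0, 1.0]]),
     np.array([[1.0, 1.0]])]
h = [np.array([1.0]), np.array([1.0]), np.array([3.0])]

def prob(x):
    """P({xi | T(xi) x >= h(xi)}) for the discrete distribution above."""
    return sum(pj for pj, Tj, hj in zip(p, T, h) if np.all(Tj @ x >= hj))

def in_B(x, alpha):
    """Membership test for the feasible set B(alpha)."""
    return prob(x) >= alpha

x_bad = np.array([1.0, 1.0])   # satisfies scenarios 1 and 2 only: prob 0.8
x_good = np.array([1.0, 2.0])  # satisfies all three scenarios: prob 1.0
```

Here x_bad is cut off as soon as α exceeds 1 − min_j p_j = 0.8, while x_good, which satisfies all scenario constraints, stays feasible for every α, as Proposition 4.2 predicts.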
Under the assumption that ξ̃ has a (multivariate) normal distribution, we know that F is even logconcave. We therefore have a smooth convex program. For this particular case there have been attempts to adapt penalty and cutting-plane methods to solve (1.1). Further, variants of the reduced gradient method as sketched in Section 1.8.2 have been designed. These approaches all attempt to avoid the "exact" numerical integration associated with the evaluation of F(Tx) = P({ξ | Tx ≥ ξ}) and its gradient ∇_x F(Tx) by relaxing the probabilistic constraint P({ξ | Tx ≥ ξ}) ≥ α. To see how this may be realized, let us briefly sketch one iteration of the reduced gradient method variant implemented in PROCON, a computer program for minimizing a function under PRObabilistic CONstraints. With the notation G(x) := P({ξ | Tx ≥ ξ}), let x be feasible in

min c^T x
s.t. G(x) ≥ α,
     Dx = d,
     x ≥ 0,        (1.2)

and—assuming D to have full row rank—let D be partitioned as D = (B, N) into basic and nonbasic parts, and accordingly partition x^T = (y^T, z^T), c^T = (f^T, g^T) and a descent direction w^T = (u^T, v^T). Assume further that for some tolerance ε > 0,

y_j > ε ∀j (strict nondegeneracy).
(1.3)

Then the search direction w^T = (u^T, v^T) is determined by the linear program

max τ
s.t. f^T u + g^T v ≤ −τ,
     ∇_y G(x)^T u + ∇_z G(x)^T v ≥ θτ  if G(x) ≤ α + ε,
     Bu + Nv = 0,
     v_j ≥ 0  if z_j ≤ ε,
     ‖v‖_∞ ≤ 1,        (1.4)

where θ > 0 is a fixed parameter as a weight for the directional derivatives of G and ‖v‖_∞ = max_j |v_j|. According to the above assumption, we have from (1.4) u = −B^{-1}Nv, which renders (1.4) into the linear program

max τ
s.t. r^T v ≤ −τ,
     s^T v ≥ θτ  if G(x) ≤ α + ε,
     v_j ≥ 0  if z_j ≤ ε,
     ‖v‖_∞ ≤ 1,        (1.5)

where obviously

r^T = g^T − f^T B^{-1} N,  s^T = ∇_z G(x)^T − ∇_y G(x)^T B^{-1} N

are the reduced gradients of the objective and the probabilistic constraint function. Problem (1.5)—and hence (1.4)—is always solvable owing to its nonempty and bounded feasible set. Depending on the obtained solution (τ∗, u∗^T, v∗^T), the method proceeds as follows.

Case 1  When τ∗ = 0, ε is replaced by 0 and (1.5) is solved again. If τ∗ = 0 again, the feasible solution x^T = (y^T, z^T) is obviously optimal. Otherwise the steps of case 2 below are carried out, starting with the original ε > 0.

Case 2  When 0 < τ∗ ≤ ε, the following cycle is entered:
Step 1  Set ε := 0.5ε.
Step 2  Solve (1.5). If still τ∗ ≤ ε, go to step 1; otherwise, case 3 applies.

Case 3  When τ∗ > ε, w∗^T = (u∗^T, v∗^T) is accepted as search direction.

If a search direction w∗^T = (u∗^T, v∗^T) has been found, a line search follows using bisection. Since the line search in this case amounts to determining the intersection of the ray x + μw∗, μ ≥ 0, with the boundary bd B(α) within the tolerance ε, the evaluation of G(x) becomes important. For this purpose a special Monte Carlo technique is used, which allows efficient computation of upper and lower bounds of G(x) as well as the gradient ∇G(x).
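Problem (1.5) is a small LP and can be handed to any LP solver. Below is a sketch using scipy's linprog, with illustrative values for the reduced gradients r and s, the weight θ, and the bound pattern; these numbers are invented for the example and are not taken from PROCON:

```python
import numpy as np
from scipy.optimize import linprog

# Direction-finding LP (1.5):  max tau
#   s.t. r^T v <= -tau,  s^T v >= theta*tau  (prob. constraint active),
#        v_j >= 0 where z_j <= eps,  ||v||_inf <= 1.
# Illustrative data: two nonbasic variables, z_2 <= eps so v_2 >= 0.
r = np.array([1.0, -1.0])      # reduced gradient of the objective
s = np.array([0.5, 0.5])       # reduced gradient of G
theta = 0.1

# variables ordered (tau, v1, v2); linprog minimizes, so minimize -tau
c = np.array([-1.0, 0.0, 0.0])
A_ub = np.array([[1.0, r[0], r[1]],        # tau + r^T v <= 0
                 [theta, -s[0], -s[1]]])   # theta*tau - s^T v <= 0
b_ub = np.zeros(2)
bounds = [(None, None), (-1.0, 1.0), (0.0, 1.0)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
tau_star, v_star = res.x[0], res.x[1:]
```

With these data the optimum is τ∗ = 5/3 at v = (−2/3, 1): a strictly positive τ∗ certifies a direction of objective descent along which the probabilistic constraint function still increases at the prescribed rate θ.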
If the next iterate x̌, resulting from the line search, still satisfies strict nondegeneracy, the whole step is repeated with the same partition of D into basic and nonbasic parts; otherwise, a basis exchange is attempted to reinstall strict nondegeneracy for a new basis.

4.2 Separate Chance Constraints

Let us now consider stochastic linear programs with separate (or single) chance constraints as introduced at the end of Section 1.4. Using the formulation given there, we are dealing with the problem

min_{x∈X} E_ξ̃ c^T(ξ̃)x
s.t. P({ξ | T_i(ξ)x ≥ h_i(ξ)}) ≥ α_i, i = 1, ..., m,        (2.1)

where T_i(ξ) is the ith row of T(ξ). The main question is whether, or under what assumptions, the feasibility set defined by any one of the constraints in (2.1),

{x | P({ξ | T_i(ξ)x ≥ h_i(ξ)}) ≥ α_i},

is convex. As we know from Section 1.6, this question is very simple to answer for the special case where T_i(ξ̃) ≡ T_i, i.e. where only the right-hand side h_i(ξ̃) is random. That is, with F_i the distribution function of h_i(ξ̃),

{x | P({ξ | T_i x ≥ h_i(ξ)}) ≥ α_i} = {x | F_i(T_i x) ≥ α_i} = {x | T_i x ≥ F_i^{-1}(α_i)}.

It follows that the feasibility set for this particular chance constraint is just the feasibility set of an ordinary linear constraint. For the general case, let us first simplify the notation as follows. Let

B_i(α_i) := {x | P({(t^T, h)^T | t^T x ≥ h}) ≥ α_i},

where (t̃^T, h̃)^T is a random vector. Assume now that (t̃^T, h̃)^T has a joint normal distribution with expectation μ ∈ IR^{n+1} and (n+1)×(n+1) covariance matrix S. For any fixed x, let ζ̃(x) := x^T t̃ − h̃. It follows that our feasible set may be rewritten in terms of the random variable ζ̃(x) as

B_i(α_i) = {x | P(ζ̃(x) ≥ 0) ≥ α_i}.
From probability theory we know that, since ζ̃(x) is a linear combination of jointly normally distributed random variables, it has a (one-dimensional) normal distribution function F_ζ̃ with expectation

  m_ζ̃(x) = Σ_{j=1}^n µ_j x_j − µ_{n+1}

and, using the (n+1)-vector z(x) := (x_1,···,x_n,−1)^T, the variance

  σ²_ζ̃(x) = z(x)^T S z(x).

Since the covariance matrix S of a (nondegenerate) multivariate normal distribution is positive definite, it follows that the variance σ²_ζ̃(x) and, as can easily be shown, the standard deviation σ_ζ̃(x) are convex in x (and σ_ζ̃(x) > 0 ∀x, in view of z_{n+1}(x) = −1). Hence we have

  B_i(α_i) = {x | P(ζ(x) ≥ 0) ≥ α_i}
           = {x | P( [ζ(x) − m_ζ̃(x)]/σ_ζ̃(x) ≥ −m_ζ̃(x)/σ_ζ̃(x) ) ≥ α_i}.

Observing that for the normally distributed random variable ζ̃(x) the random variable [ζ̃(x) − m_ζ̃(x)]/σ_ζ̃(x) has the standard normal distribution function Φ, it follows that

  B_i(α_i) = {x | 1 − Φ(−m_ζ̃(x)/σ_ζ̃(x)) ≥ α_i}.

Hence

  B_i(α_i) = {x | 1 − Φ(−m_ζ̃(x)/σ_ζ̃(x)) ≥ α_i}
           = {x | Φ(−m_ζ̃(x)/σ_ζ̃(x)) ≤ 1 − α_i}
           = {x | −m_ζ̃(x)/σ_ζ̃(x) ≤ Φ⁻¹(1 − α_i)}
           = {x | −Φ⁻¹(1 − α_i)σ_ζ̃(x) − m_ζ̃(x) ≤ 0}.

Here m_ζ̃(x) is affine linear in x and σ_ζ̃(x) is convex in x. Therefore the left-hand side of the constraint

  −Φ⁻¹(1 − α_i)σ_ζ̃(x) − m_ζ̃(x) ≤ 0

is convex iff Φ⁻¹(1 − α_i) ≤ 0, which is exactly the case iff α_i ≥ 0.5. Hence, under the assumption of normal distributions and α_i ≥ 0.5, we have instead of (2.1) a deterministic convex program with constraints of the type

  −Φ⁻¹(1 − α_i)σ_ζ̃(x) − m_ζ̃(x) ≤ 0,

which can be solved with standard tools of nonlinear programming.

4.3 Bounding Distribution Functions

In Section 4.1 we mentioned that particular methods have been developed to compute lower and upper bounds for the function

  G(x) := P({ξ | Tx ≥ ξ}) = F_ξ̃(Tx)

contained in the constraints of problem (1.1). Here F_ξ̃(·) denotes the distribution function of the random vector ξ̃.
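Before turning to these bounds, the deterministic equivalent derived in Section 4.2 is easy to check numerically. The sketch below uses illustrative numbers of our own (Python's statistics.NormalDist standing in for Φ) and verifies that the algebraic test −Φ⁻¹(1−α_i)σ_ζ̃(x) − m_ζ̃(x) ≤ 0 agrees with evaluating P(ζ̃(x) ≥ 0) directly:

```python
from statistics import NormalDist
from math import sqrt

Phi = NormalDist()  # standard normal distribution function

def chance_feasible(x, mu, S, alpha):
    """Test -Phi^{-1}(1-alpha)*sigma(x) - m(x) <= 0 for the constraint
    P(t~^T x >= h~) >= alpha, where (t~, h~) ~ N(mu, S) in R^{n+1}."""
    z = list(x) + [-1.0]                                 # z(x) = (x, -1)
    m = sum(mu_j * z_j for mu_j, z_j in zip(mu, z))      # m(x) = mu^T z(x)
    var = sum(z[i] * S[i][j] * z[j]
              for i in range(len(z)) for j in range(len(z)))
    return -Phi.inv_cdf(1 - alpha) * sqrt(var) - m <= 0

def chance_prob(x, mu, S):
    """P(zeta(x) >= 0) for the normal random variable zeta(x) = x^T t~ - h~."""
    z = list(x) + [-1.0]
    m = sum(mu_j * z_j for mu_j, z_j in zip(mu, z))
    var = sum(z[i] * S[i][j] * z[j]
              for i in range(len(z)) for j in range(len(z)))
    return 1 - NormalDist(0.0, sqrt(var)).cdf(-m)

# one random coefficient t~ ~ N(1, 0.2^2) and rhs h~ ~ N(0.5, 0.2^2), independent
mu = [1.0, 0.5]
S = [[0.04, 0.0], [0.0, 0.04]]
alpha = 0.9
# x = 1 keeps the constraint with probability about 0.96 >= 0.9
assert chance_feasible([1.0], mu, S, alpha) == (chance_prob([1.0], mu, S) >= alpha) == True
# x = 0.3 gives m(x) < 0, hence infeasible for any alpha >= 0.5
assert chance_feasible([0.3], mu, S, alpha) == (chance_prob([0.3], mu, S) >= alpha) == False
```

The two tests must agree for every x, since the last chain of set equalities above is an equivalence, not merely an implication.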
In the following we sketch some of the ideas underlying these bounding methods. For a more technical presentation, the reader should consult the references provided below.

To simplify the notation, let us assume that ξ̃ is a random vector with support Ξ ⊂ IR^n. For any z ∈ IR^n we have

  F_ξ̃(z) = P({ξ | ξ_1 ≤ z_1, ···, ξ_n ≤ z_n}).

Defining the events A_i := {ξ | ξ_i ≤ z_i}, i = 1,···,n, it follows that F_ξ̃(z) = P(A_1 ∩ ··· ∩ A_n). Denoting the complements of the events A_i by B_i := A_i^c = {ξ | ξ_i > z_i}, we know from elementary probability theory that A_1 ∩ ··· ∩ A_n = (B_1 ∪ ··· ∪ B_n)^c, and consequently

  F_ξ̃(z) = P(A_1 ∩ ··· ∩ A_n) = P((B_1 ∪ ··· ∪ B_n)^c) = 1 − P(B_1 ∪ ··· ∪ B_n).

Therefore asking for the value of F_ξ̃(z) is equivalent to asking for the probability that at least one of the events B_1,···,B_n occurs. Defining the counter ν̃ : Ξ → IN by

  ν(ξ) := {number of events out of B_1,···,B_n that occur at ξ},

ν̃ is clearly a random variable having the range of integers {0,1,···,n}. Observing that P(B_1 ∪ ··· ∪ B_n) = P(ν̃ ≥ 1), we have

  F_ξ̃(z) = 1 − P(ν̃ ≥ 1).

Hence finding a good approximation for P(ν̃ ≥ 1) yields at the same time a satisfactory approximation of F_ξ̃(z).

With the binomial coefficients for µ, k ∈ IN defined for µ ≥ k as

  C(µ, k) = µ!/(k!(µ−k)!)

(where 0! = 1 and C(µ, k) = 0 for µ < k), the binomial moments of ν̃ are introduced as

  S_{k,n} := E_ξ̃ C(ν̃, k) = Σ_{i=0}^n C(i, k) P({ξ | ν(ξ) = i}),  k = 0,1,···,n.      (3.1)

Since C(i, 0) = 1, i = 0,1,···,n, it follows that S_{0,n} = 1. Furthermore, choosing v ∈ IR^{n+1} according to v_i := P({ξ | ν(ξ) = i}), i = 0,1,···,n, it is obvious from (3.1) that v solves the system of linear equations

  v_0 + v_1 +  v_2 + ··· +          v_n = S_{0,n},
        v_1 + 2v_2 + ··· +        n·v_n = S_{1,n},
               v_2 + ··· + C(n,2)·v_n = S_{2,n},
                     ⋱                  ⋮
                                    v_n = S_{n,n}.                                    (3.2)

The coefficient matrix of (3.2) is upper triangular, with all main diagonal elements equal to 1, and hence with a determinant of 1, so that v_i = P({ξ | ν(ξ) = i}), i = 0,1,···,n, is the unique solution of this system of linear equations. However, solving the complete system (3.2) to get P(ν̃ ≥ 1) = Σ_{i=1}^n v_i would require the computation of all binomial moments, which again would be a cumbersome task. Instead, we could proceed as follows. Observing that our unique solution, representing probabilities, is nonnegative, it is no restriction to add the conditions v_i ≥ 0 ∀i to (3.2). In turn, we relax the system by dropping some of the equations (including the first one), in that way getting rid of the need to determine the corresponding binomial moments. Obviously, the above (formerly unique) solution is still feasible for the relaxed system, but in general no longer unique. Hence we get a lower or upper bound on P(ν̃ ≥ 1) by minimizing or maximizing, respectively, the objective Σ_{i=1}^n v_i under the relaxed constraints. To be more specific, let us consider the following relaxation as an example. For the lower bound we choose

  min{v_1 + v_2 + ··· + v_n}
  s.t.  v_1 + 2v_2 + ··· +       n·v_n = S_{1,n},
             v_2 + ··· + C(n,2)·v_n = S_{2,n},                                        (3.3)
        v_i ≥ 0, i = 1,···,n,

and correspondingly for the upper bound we formulate

  max{v_1 + v_2 + ··· + v_n}
  s.t.  v_1 + 2v_2 + ··· +       n·v_n = S_{1,n},
             v_2 + ··· + C(n,2)·v_n = S_{2,n},                                        (3.4)
        v_i ≥ 0, i = 1,···,n.

These linear programs are feasible and bounded, and therefore solvable. So there exist optimal feasible 2 × 2 bases B. Consider an arbitrary 2 × 2 matrix of the form

  B = (   i        i+r
        C(i,2)  C(i+r,2) ),

where 1 ≤ i < n and 1 ≤ r ≤ n − i. Computing the determinant of B, we get

  det B = i·C(i+r, 2) − (i+r)·C(i, 2)
        = ½[i(i+r)(i+r−1) − (i+r)i(i−1)]
        = ½ i(i+r)r > 0

for all i and r such that 1 ≤ i < n and 1 ≤ r ≤ n − i. Hence any two columns of the coefficient matrix of (3.3) (or, equivalently, of (3.4)) form a basis. The question is which one is feasible and optimal. Let us consider the second property first. According to Proposition 1.15, Section 1.7 (page 65), a basis B of (3.3) satisfies the optimality condition if

  1 − e^T B⁻¹ N_j ≥ 0  ∀j ≠ i, i+r,

where e^T = (1, 1) and N_j is the jth column of the coefficient matrix of (3.3). Obviously, for (3.4) we have the reverse inequality as optimality condition:

  1 − e^T B⁻¹ N_j ≤ 0  ∀j ≠ i, i+r.

It is straightforward to check¹ that

  B⁻¹ = (  (i+r−1)/(ir)        −2/(ir)
          −(i−1)/((i+r)r)    2/((i+r)r) ).

¹ BB⁻¹ = I, the identity matrix!

For N_j = (j, C(j, 2))^T we get

  e^T B⁻¹ N_j = j(2i + r − j)/(i(i + r)).                                             (3.5)

Proposition 4.3  The basis

  B = (   i        i+r
        C(i,2)  C(i+r,2) )

satisfies the optimality condition
(a) for (3.3) if and only if r = 1 (i arbitrary);
(b) for (3.4) if and only if i = 1 and i + r = n.

Proof  (a) If r ≥ 2, we get from (3.5) for j = i + 1

  e^T B⁻¹ N_{i+1} = (i+1)(i + r − 1)/(i(i+r)) = [i(i+r) + r − 1]/(i(i+r)) > 1,

so that the optimality condition for (3.3) is not satisfied for r > 1, showing that r = 1 is necessary. Now let r = 1. Then for j < i we have, according to (3.5),

  e^T B⁻¹ N_j = j(2i + 1 − j)/(i(i+1)) = [j + i² − (j−i)²]/(i(i+1))
              < [i(i+1) − (j−i)²]/(i(i+1)) < 1,

whereas for j > i + 1 we get

  e^T B⁻¹ N_j = j(2i + 1 − j)/(i(i+1)) = [j(i+1) + j(i−j)]/(i(i+1)) < 1,

the last inequality resulting from the fact that subtracting the denominator from the numerator yields

  j(i+1) + j(i−j) − i(i+1) = (j − i)[(i+1) − j] < 0,

since j − i > 1 and (i+1) − j < 0. Hence in both cases the optimality condition for (3.3) is strictly satisfied.

(b) If i + r < n, then we get from (3.5) for j = n

  e^T B⁻¹ N_n = [n(i+r) + n(i−n)]/(i(i+r)) < 1,

since {numerator} − {denominator} = n(i+r) + n(i−n) − i(i+r) = (n−i)(i+r−n) < 0. Finally, if i > 1 then, with (3.5), we have for j = 1

  e^T B⁻¹ N_1 = (2i + r − 1)/(i(i+r)) = [(i−1) + (i+r)]/(i(i+r))
              = (i−1)/(i(i+r)) + 1/i < 1/3 + 1/2 < 1.

Hence the only possible choice for a basis satisfying the optimality condition for problem (3.4) is i = 1, r = n − 1.  □

As can be seen from the simplex method, a basis that satisfies the optimality condition strictly does determine a unique optimal solution if it is feasible. Hence we now have to find, among the optimal bases

  B = (   i        i+1
        C(i,2)  C(i+1,2) ),

the one that is feasible for (3.3). Such a basis B is feasible for (3.3) if and only if

  B⁻¹ (S_{1,n}, S_{2,n})^T = (  S_{1,n} − (2/i)S_{2,n}
                               −((i−1)/(i+1))S_{1,n} + (2/(i+1))S_{2,n} ) ≥ 0,

or, equivalently, if

  (i − 1)S_{1,n} ≤ 2S_{2,n} ≤ iS_{1,n}.

Hence we have to choose i such that i − 1 = ⌊2S_{2,n}/S_{1,n}⌋, where ⌊α⌋ denotes the integer part of α (i.e. the greatest integer less than or equal to α). With this particular i the optimal value of (3.3) amounts to

  [S_{1,n} − (2/i)S_{2,n}] + [−((i−1)/(i+1))S_{1,n} + (2/(i+1))S_{2,n}]
    = (2/(i+1))S_{1,n} − (2/(i(i+1)))S_{2,n}.

Thus we have found a lower bound for P(ν̃ ≥ 1):

  P(ν̃ ≥ 1) ≥ (2/(i+1))S_{1,n} − (2/(i(i+1)))S_{2,n},  with i − 1 = ⌊2S_{2,n}/S_{1,n}⌋.  (3.6)

For the optimal basis of (3.4),

  B = ( 1     n
        0  C(n,2) ),

we have

  B⁻¹ = ( 1   −2/(n−1)
          0   2/(n(n−1)) )

and hence

  B⁻¹ (S_{1,n}, S_{2,n})^T = ( S_{1,n} − (2/(n−1))S_{2,n}
                               (2/(n(n−1)))S_{2,n} ).

The last vector is nonnegative, since the definition of the binomial moments implies (n−1)S_{1,n} − 2S_{2,n} ≥ 0 and S_{2,n} ≥ 0. This yields for (3.4) the optimal value S_{1,n} − (2/n)S_{2,n}. Therefore we finally get an upper bound for P(ν̃ ≥ 1):

  P(ν̃ ≥ 1) ≤ S_{1,n} − (2/n)S_{2,n}.                                                 (3.7)

In conclusion, recalling that F_ξ̃(z) = 1 − P(ν̃ ≥ 1), we have shown the following.

Proposition 4.4  The distribution function F_ξ̃(z) is bounded according to

  F_ξ̃(z) ≥ 1 − S_{1,n} + (2/n)S_{2,n}

and

  F_ξ̃(z) ≤ 1 − (2/(i+1))S_{1,n} + (2/(i(i+1)))S_{2,n},  with i − 1 = ⌊2S_{2,n}/S_{1,n}⌋.

Example 4.1  We defined in (3.1) the binomial moments of ν̃ as

  S_{k,n} := E_ξ̃ C(ν̃, k) = Σ_{i=0}^n C(i, k) P({ξ | ν(ξ) = i}),  k = 0,1,···,n.

Another way to introduce these moments is the following. With the same notation as at the beginning of this section, let us define new random variables χ̃_i : Ξ → IR, i = 1,···,n, as the indicator functions

  χ_i(ξ) := 1 if ξ ∈ B_i,  0 otherwise.

Then clearly ν̃ = χ̃_1 + ··· + χ̃_n, and

  C(ν̃, k) = C(χ̃_1 + ··· + χ̃_n, k) = Σ_{1≤i_1<···<i_k≤n} χ̃_{i_1} χ̃_{i_2} ··· χ̃_{i_k}.

Taking the expectation on both sides yields for the binomial moments

  S_{k,n} = E_ξ̃ C(ν̃, k) = Σ_{1≤i_1<···<i_k≤n} E_ξ̃(χ̃_{i_1} χ̃_{i_2} ··· χ̃_{i_k})
          = Σ_{1≤i_1<···<i_k≤n} P(B_{i_1} ∩ ··· ∩ B_{i_k}).

This formulation indicates the possibility of estimating the binomial moments from large samples through the relation

  S_{k,n} = Σ_{1≤i_1<···<i_k≤n} E_ξ̃(χ̃_{i_1} χ̃_{i_2} ··· χ̃_{i_k})

if they are difficult to compute directly.

Consider now the following example. Assume that we have a four-dimensional random vector ξ̃ with mutually independent components. Let z ∈ IR⁴ be chosen such that, with p_i = P(A_i), i = 1,2,3,4, we have

  p^T = (0.9, 0.95, 0.99, 0.92).

Consequently, for q_i = P(B_i) = 1 − p_i we get q^T = (0.1, 0.05, 0.01, 0.08). Obviously we get F_ξ̃(z) = Π_{i=1}^4 p_i = 0.778734. From the above representation of the binomial moments, we have

  S_{1,n} = Σ_{i=1}^4 q_i = 0.24,
  S_{2,n} = Σ_{i=1}^3 Σ_{j=i+1}^4 q_i q_j = 0.0193,

such that we get from (3.7) for P(ν̃ ≥ 1) the upper bound

  P_U = 0.24 − (2/4) × 0.0193 = 0.23035.

According to (3.6), we find i − 1 = ⌊2 × 0.0193/0.24⌋ = 0 and hence i = 1, so that (3.6) yields the lower bound

  P_L = (2/2) × 0.24 − (2/2) × 0.0193 = 0.2207.

In conclusion, we get for F_ξ̃(z) = 0.778734 the bounds 1 − P_U ≤ F_ξ̃(z) ≤ 1 − P_L, and hence

  0.76965 ≤ F_ξ̃(z) ≤ 0.7793.

Observe that these bounds could be derived without any specific information about the type of the underlying probability distribution (except the assumption of independent components, made only for the sake of a simple presentation).  □

Further bounds have been derived for P(ν̃ ≥ 1) using binomial moments up to order m, 2 < m < n, as well as for P(ν̃ ≥ r), r > 1. For some of them explicit formulae could also be derived, while others require the computational solution of optimization problems with algorithms especially designed for the particular problem structures.

4.4 Bibliographical Notes

One of the first attempts to state deterministic equivalent formulations for chance-constrained programs can be found in Charnes and Cooper [4]. The discussion of convexity of joint chance constraints with stochastic right-hand sides was initiated by Prékopa [7, 8, 11], investigating logconcave measures, and could be extended to quasi-concave measures through the results of Borell [1], Brascamp and Lieb [3] and Rinott [16]. Marti [5] derived convexity statements in particular for separate chance constraints, for various distribution functions and probability levels, including the one mentioned first by van de Panne and Popp [19] for the multivariate normal distribution and described in Section 4.2.

Prékopa [9] proposed an extension of Zoutendijk's method of feasible directions for the solution of (1.1), which was implemented under the name STABIL by Prékopa et al. [15]. For more general types of chance-constrained problems, solution approaches have also been considered by Prékopa [10, 12]. After all, the case of joint chance constraints with a nondiscrete random matrix is considered to be a hard problem.
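The computation in Example 4.1 can be replayed in a few lines. In the sketch below, the helper name and the use of independence to evaluate S_{2,n} as a sum of pairwise products are ours; the bound formulas are exactly (3.6) and (3.7):

```python
from itertools import combinations
from math import floor, prod

def binomial_bounds(q):
    """Lower/upper bounds (3.6)/(3.7) on P(nu >= 1) from the first two
    binomial moments, for independent events with P(B_i) = q[i]."""
    n = len(q)
    S1 = sum(q)                                          # S_{1,n}
    S2 = sum(qi * qj for qi, qj in combinations(q, 2))   # S_{2,n} (independence)
    i = floor(2 * S2 / S1) + 1                           # i - 1 = integer part of 2*S2/S1
    lower = 2 / (i + 1) * S1 - 2 / (i * (i + 1)) * S2    # (3.6)
    upper = S1 - 2 / n * S2                              # (3.7)
    return lower, upper

q = [0.1, 0.05, 0.01, 0.08]            # q_i = 1 - p_i from Example 4.1
lo, up = binomial_bounds(q)
exact = 1 - prod(1 - qi for qi in q)   # exact P(nu >= 1) = 1 - prod p_i
assert lo <= exact <= up
# bounds on the distribution function F(z) = 1 - P(nu >= 1)
print(1 - up, 1 - exact, 1 - lo)       # approximately 0.76965, 0.778734, 0.7793
```

The same function applies unchanged to any q vector, since (3.6) and (3.7) use only S_{1,n} and S_{2,n}; only the shortcut used here to compute S_{2,n} relies on independence.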
As described in Section 4.1, Mayer developed a special reduced gradient method for (1.1) and implemented it as PROCON [6]. For the evaluation of the probability function G(x) = P({ξ | Tx ≥ ξ}) and its gradient ∇G(x), an efficient Monte Carlo technique due to Szántai [17] was used. An alternative method, following the lines of Veinott's supporting hyperplane algorithm, was implemented by Szántai [18].

There has been for some time great interest in getting (sharp) bounds for distribution functions and, more generally, for probabilities of certain events in complex systems (e.g. reliabilities of special technical installations). In Section 4.3 we only sketch the direction of thought in this field. Among the wide range of literature on the subject, we just refer to the more recent papers of Prékopa [13, 14] and of Boros and Prékopa [2], from which the interested reader may trace back to earlier original work.

Exercises
1. Given a random vector ξ̃ with support Ξ in IR^k, assume that for A ⊂ Ξ and B ⊂ Ξ we have P(A) = P(B) = 1. Show that then also P(A ∩ B) = 1.

2. Under the assumptions of Proposition 4.2, the support of the distribution is Ξ = {ξ¹,···,ξʳ}, with P(ξ̃ = ξʲ) = p_j > 0 ∀j. Show that for α > 1 − min_{j∈{1,···,r}} p_j the only event A ⊂ Ξ satisfying P(A) ≥ α is A = Ξ.

3. Show that for the random variable ζ̃(x) introduced in Section 4.2, with variance σ²_ζ̃(x), the standard deviation σ_ζ̃(x) is also a convex function in x.

4. In Section 4.3 we saw that the binomial moments S_{0,n}, S_{1,n},···,S_{n,n} determine uniquely the probabilities v_i = P(ν̃ = i), i = 0,1,···,n, as the solution of (3.2). From the first equation it follows, owing to S_{0,n} = 1, that Σ_{i=0}^n v_i = 1. To get lower and upper bounds for P(ν̃ ≥ 1), we derived the linear programs (3.3) and (3.4) by omitting, among others, the first equation.
(a) Show that in any case (provided that S_{1,n} and S_{2,n} are binomial moments) the optimal solution v̂ of (3.3) satisfies Σ_{i=1}^n v̂_i ≤ 1.
(b) If for the optimal solution v̂ of (3.3) we have Σ_{i=1}^n v̂_i < 1, then v̂_0 = 1 − Σ_{i=1}^n v̂_i > 0. What does this mean with respect to F_ξ̃(z)?
(c) Solving (3.4) can result in Σ_{i=1}^n v̂_i > 1. To what extent does this result improve your knowledge about F_ξ̃(z)?

References
[1] Borell C. (1975) Convex set functions in d-space. Period. Math. Hungar. 6: 111–136.
[2] Boros E. and Prékopa A. (1989) Closed form two-sided bounds for probabilities that at least r and exactly r out of n events occur. Math. Oper. Res. 14: 317–342.
[3] Brascamp H. J. and Lieb E. H. (1976) On extensions of the Brunn–Minkowski and Prékopa–Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. J. Funct. Anal. 22: 366–389.
[4] Charnes A. and Cooper W. W. (1959) Chance-constrained programming. Management Sci. 5: 73–79.
[5] Marti K. (1971) Konvexitätsaussagen zum linearen stochastischen Optimierungsproblem. Z. Wahrsch.theorie u. verw. Geb. 18: 159–166.
[6] Mayer J. (1988) Probabilistic constrained programming: A reduced gradient algorithm implemented on PC. Working Paper WP-88-39, IIASA, Laxenburg.
[7] Prékopa A. (1970) On probabilistic constrained programming. In Kuhn H. W. (ed) Proc. of the Princeton Symposium on Math. Programming, pages 113–138. Princeton University Press, Princeton, New Jersey.
[8] Prékopa A. (1971) Logarithmic concave measures with applications to stochastic programming. Acta Sci. Math. (Szeged) 32: 301–316.
[9] Prékopa A. (1974) Eine Erweiterung der sogenannten Methode der zulässigen Richtungen der nichtlinearen Optimierung auf den Fall quasikonkaver Restriktionen. Math. Operationsforsch. Statist., Ser. Opt. 5: 281–293.
[10] Prékopa A. (1974) Programming under probabilistic constraints with a random technology matrix. Math. Operationsforsch. Statist., Ser. Opt. 5: 109–116.
[11] Prékopa A. (1980) Logarithmic concave measures and related topics. In Dempster M. A. H. (ed) Stochastic Programming, pages 63–82. Academic Press, London.
[12] Prékopa A. (1988) Numerical solution of probabilistic constrained programming problems. In Ermoliev Y. and Wets R. J.-B. (eds) Numerical Techniques for Stochastic Optimization, pages 123–139. Springer-Verlag, Berlin.
[13] Prékopa A. (1988) Boole-Bonferroni inequalities and linear programming. Oper. Res. 36: 145–162.
[14] Prékopa A. (1990) Sharp bounds on probabilities using linear programming. Oper. Res. 38: 227–239.
[15] Prékopa A., Ganczer S., Deák I., and Patyi K. (1980) The STABIL stochastic programming model and its experimental application to the electricity production in Hungary. In Dempster M. A. H. (ed) Stochastic Programming, pages 369–385. Academic Press, London.
[16] Rinott Y. (1976) On convexity of measures. Ann. Prob. 4: 1020–1026.
[17] Szántai T. (1987) Calculation of the multivariate probability distribution function values and their gradient vectors. Working Paper WP-87-82, IIASA, Laxenburg.
[18] Szántai T. (1988) A computer code for solution of probabilistic constrained stochastic programming problems. In Ermoliev Y. M. and Wets R. J.-B. (eds) Numerical Techniques for Stochastic Optimization, pages 229–235. Springer-Verlag, Berlin.
[19] van de Panne C. and Popp W. (1963) Minimum cost cattle feed under probabilistic problem constraint. Management Sci. 9: 405–430.

5 Preprocessing
The purpose of this chapter is to discuss diﬀerent aspects of preprocessing the data associated with a stochastic program. The term “preprocessing” is rather vague, but whatever it could possibly mean, our intention here is to discuss anything that will enhance the model understanding and/or simplify the solution procedures. Thus “preprocessing” refers to any analysis of a problem that takes place before the ﬁnal solution of the problem. Some tools will focus on the issue of model understanding, while others will focus on issues related to choice of solution procedures. For example, if it can be shown that a problem has (relatively) complete recourse, we can apply solution procedures where that is required. At the same time, the fact that a problem has complete recourse is of value to the modeller, since it says something about the underlying problem (or at least the model of the underlying problem). 5.1 Problem Reduction Reducing the problem size can be of importance in a setting of stochastic programming. Of course, it is always useful to remove unnecessary rows and columns. In the setting of a single deterministic linear programming problem it may not pay oﬀ to remove rows and columns. That is, it may cost more to ﬁgure out which columns and rows are not needed than it costs to solve the overall problem with the extra data in it. In the stochastic setting, the same coeﬃcient matrix is used again and again, so it deﬁnitely pays to reduce the problem size. The problem itself becomes smaller, and, even more importantly, the number of possible bases can be substantially reduced (especially if we are able to remove rows). This can be particularly important when using the stochastic decomposition method (where we build up a collection of dual feasible bases) and in trickling down within the setting of the Lshaped decomposition method. Let us start by deﬁning a frame and showing how to compute it. 
procedure framebylp(W : (m × n) matrix);
begin
  n1 := n; q := 0;
  for i := n1 downto 1 do begin
    LP(W \ Wi, Wi, q, y, feasible);
    if feasible then begin
      Wi := Wn; n := n − 1;
    end;
  end;
end;
Figure 1 Finding a frame.

5.1.1 Finding a Frame

Let us repeat the definition of pos W:

  pos W = {t | t = Wy, y ≥ 0}.

In words, pos W is the set of all positive (nonnegative) linear combinations of columns of the matrix W. A subset of the columns, determining a matrix W̄, is called a frame if pos W̄ = pos W, and equality is not preserved if any one column is removed from W̄. So, by finding a frame of a given matrix, we remove all columns that are not needed to describe the pointed cone pos W. As an example, if we use a two-phase simplex method to solve a linear programming problem, only the columns of W̄ are needed in phase 1.

If W is a matrix and j is an index, let W \ Wj denote the matrix W with column j removed. A simple approach for finding a frame is outlined in Figure 1. To do that, we need a procedure that solves LPs; it can be found in Figure 7 in Chapter 3. The matrix W in procedure framebylp in Figure 1 is both input and output. On entry it contains the matrix for which we seek a frame; on exit it contains those columns that were in the frame. The number of columns, n, is changed accordingly. To summarize, the effect of the frame algorithm is that as many columns as possible are removed from a matrix W without changing the pointed cone spanned by the columns. We have earlier discussed generators of cones. In this case we may say that the columns in W, after the application of procedure framebylp, are generators of pos W. Let us now turn to the use of this algorithm.
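The effect of framebylp can be imitated in plain Python. As a sketch we specialize to two dimensions, where the LP test "is column Wi a nonnegative combination of the remaining columns?" reduces to solving 2 × 2 systems for every pair of remaining columns; the helper names are ours, and the four test columns are taken (up to extraction noise) from Example 5.1 below:

```python
def in_cone_2d(w, gens, tol=1e-9):
    """Is the 2-vector w a nonnegative combination of the vectors in gens?"""
    for a in range(len(gens)):
        ga = gens[a]
        # w a nonnegative multiple of a single generator?
        cross = ga[0] * w[1] - ga[1] * w[0]
        dot = ga[0] * w[0] + ga[1] * w[1]
        if abs(cross) <= tol and dot >= -tol:
            return True
        for b in range(a + 1, len(gens)):
            gb = gens[b]
            det = ga[0] * gb[1] - ga[1] * gb[0]
            if abs(det) <= tol:
                continue
            # solve w = lam*ga + mu*gb by Cramer's rule
            lam = (w[0] * gb[1] - w[1] * gb[0]) / det
            mu = (ga[0] * w[1] - ga[1] * w[0]) / det
            if lam >= -tol and mu >= -tol:
                return True
    return False

def frame_2d(cols):
    """Drop columns that are nonnegative combinations of the others,
    scanning from the last column, as framebylp does."""
    cols = list(cols)
    for i in range(len(cols) - 1, -1, -1):
        others = cols[:i] + cols[i + 1:]
        if others and in_cone_2d(cols[i], others):
            del cols[i]
    return cols

# the two middle columns lie inside the cone of the outer two
assert frame_2d([(3, 1), (1, 1), (-1, 2), (-2, 1)]) == [(3, 1), (-2, 1)]
```

In higher dimensions the pairwise test no longer suffices, which is exactly why framebylp calls a general LP routine for each column.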
Figure 2 Illustration of the frame algorithm.

5.1.2 Removing Unnecessary Columns

This can be useful in a couple of different settings. Let us first see what happens if we simply apply the frame algorithm to the recourse matrix W. We shall then remove columns that are not needed to describe feasibility. This is illustrated in Figure 2. Given the matrix W = (W1, W2, W3, W4), we find that the shaded region represents pos W, and the output of a frame algorithm is either W̄ = (W1, W2, W4) or W̄ = (W1, W3, W4). The procedure framebylp will produce the first of these two cases.

Removing columns not needed for feasibility can be of use when verifying feasibility in the L-shaped decomposition method (see page 171). We are there to solve a given LP for all ξ ∈ A. If we apply frame to W before checking feasibility, we get a simpler problem to look at, without losing information, since the removed columns add nothing in terms of feasibility. If we are willing to live with two versions of the recourse matrix, we can therefore reduce work while computing. From the modelling perspective, note that the columns thrown out are needed only if the cost of the corresponding linear combination is higher than that of the column itself. The variable represented by the column does not add to our production possibilities; at most it lowers our costs.

In what follows in this subsection let us assume that we have only right-hand-side randomness, and let us, for simplicity, denote the cost vector by q. To see if a column can reduce our costs, we define

  W̄ := ( q^T  1
          W   0 ),

that is, a matrix containing the coefficient matrix, the cost vector and an extra column. To see the importance of the extra column, consider the following interpretation of pos W̄ (remember that pos W̄ equals the set of all positive linear combinations of columns from W̄):

  pos W̄ = pos ( q_1 ··· q_n  1        { ( q )                                        }
                W_1 ··· W_n  0 )  =   { ( W ) | W = Σ_k λ_k W_k, q ≥ Σ_k λ_k q_k, λ ≥ 0 }.

In other words, finding a frame of W̄ means removing all columns (q_j, W_j)^T with

  W_j = Σ_{λ_k≥0} λ_k W_k  and  q_j ≥ Σ_{λ_k≥0} λ_k q_k

in a sequential manner until we are left with a minimal (but not necessarily unique) set of columns. A column thrown out in this process will never be part of an optimal solution, and is hence not needed. It can be dropped. From a modelling point of view, this means that the modeller has added an activity that is clearly inferior. Knowing that it is inferior should add to the modeller's understanding of his model. A column that is not a part of the frame of pos W, but is a part of the frame of pos W̄, is one that does not add to our production possibilities, but whose existence might add to our profit.

5.1.3 Removing Unnecessary Rows

There is a large amount of research on the topic of eliminating redundant constraints. In this section we shall focus on the use of frames in removing unnecessary rows. Not very surprisingly, this problem has a dual relationship to that of removing columns. Let us first look at it from a general point of view, and then see how we can apply the results in stochastic programming. Assume we have the system Wy ≤ h, y ≥ 0. Let W^j be the jth row of W, so that the jth inequality is given by W^j y ≤ h_j. Row j is not needed if there exists a vector α ≥ 0 such that

  Σ_{i≠j} α_i W^i = W^j  and  Σ_{i≠j} α_i h_i ≤ h_j.

Finding which rows satisfy this is equivalent to finding the frame of

  pos ( h^T  1
        W^T  0 ),

where T indicates the transpose. Of course, if we have ≥ or = in the original setting, we can easily transform that into a setting with only ≤.

The next question is where we can use this in our setting. The first, and obvious, answer is to apply it to the first-stage (deterministic) set of constraints Ax = b. (On the other hand, note that we may not apply frame to the first-stage coefficient matrix in order to remove unnecessary columns; these columns may be necessary after feasibility and optimality cuts have been added.) It is more difficult to apply these results to the recourse problem. In principle, we have to check whether a given row is unnecessary for all possible combinations of x and ξ̃. This may happen with inequality constraints, but it is not very likely with equality constraints. With inequalities, we should have to check whether an inequality W^j y ≤ h_j is implied by the others even when the jth inequality is at its tightest and the others are as loose as possible. This is possible, but not within the frame setting.

We have now discussed how the problem can be reduced in size. Let us now assume that all possible reductions have been performed, and let us start discussing feasibility. This will clearly be related to topics we have seen in earlier chapters, but our focus will now be more specifically directed towards preprocessing.

5.2 Feasibility in Linear Programs

The tool for understanding feasibility in linear programs is the cone pol pos W. We have discussed it before, and it is illustrated in Figure 3. The important aspect of Figure 3 is that a right-hand side h represents a feasible recourse problem if and only if h ∈ pos W. But this is equivalent to requiring that h^T y ≤ 0 for all y ∈ pol pos W. In particular, it is equivalent to requiring that h^T y ≤ 0 for all y that are generators of pol pos W. In the figure there are two generators.
You should convince yourself that a vector is in pos W if and only if it has a nonpositive inner product with the two generators of pol pos W. Therefore what we shall need to find is a matrix W*, to be referred to as the polar matrix of W, whose columns are the generators of pol pos W, so that we get

  pos W* = pol pos W.

Assume that we know a column w* from W*. For h to represent a feasible recourse problem, it must satisfy h^T w* ≤ 0.

Figure 3 Finding the generators of pol pos W.

There is another important aspect of the polar cone pos W* that we have not yet discussed. It is indicated in Figure 3 by showing that the generators are pairwise normals. However, that is slightly misleading, so we have to turn to a three-dimensional figure to understand it better. We shall also need the term facet. Let a cone pos W have dimension k. Then every cone K positively spanned by k − 1 generators from pos W, such that K belongs to the boundary of pos W, is called a facet. Consider Figure 4. What we note in Figure 4 is that the generators are not pairwise normals, but that the facets of one cone have the generators of the other as normals. This goes in both directions. Therefore, when we state that h ∈ pos W if and only if h^T y ≤ 0 for all generators y of pol pos W, we are in fact saying that h represents a feasible problem either because it is a linear combination of columns in W or because it satisfies the inequalities implied by the facets of pos W. In still other words, the point of finding W* is not so much to describe a new cone as to replace the description of pos W in terms of generators with another in terms of inequalities. This is useful if the number of facets is not too large. Generally speaking, performing an inner product of the form b^T y is very cheap. In parallel processing, an inner product can be pipelined on a vector processor, and the different inner products can be done in parallel.
And, of course, as soon as we find one positive inner product, we can stop: the given recourse problem is infeasible.

Figure 4 Three-dimensional picture of pos W and pol pos W = pos W*.

Readers familiar with extreme point enumeration will see that going from a generator to a facet representation of pos W is indeed extreme point enumeration. As such, it is a problem with exponential complexity. Therefore we cannot in general expect to find W* in reasonable time. However, taking a practical view of the matter, it is our suggestion that an attempt is made. The results are generally only interesting if there are relatively few facets, and those cases are the easiest. Figure 5 presents a procedure for finding the facets. It is called procedure support because it finds a minimal selection of supporting hyperplanes (not necessarily unique) of pos W, such that pos W is fully described. In practice, it has been shown to possess the desired property that it solves quickly if there are few facets. An example is presented shortly to help in understanding this procedure support.

The procedure support finds the polar matrix W*, and thereby the support of pos W. The matrix W is reduced by the application of procedure framebylp, but is otherwise unchanged on exit. The process is initialized with a matrix W* that spans the entire column (range) space. We typically do this by letting

procedure support(W, W* : matrices);
begin
  framebylp(W);
  done := false;
  for i := 1 to n do
    if not done then begin
      α := Wi^T W*;
      I+ := {k | α[k] > 0}; I− := {k | α[k] < 0}; I0 := {k | α[k] = 0};
      done := (I− ∪ I0 = ∅);
      if done then W* := 0;
      if I+ ≠ ∅ and not done then begin
        if I− = ∅ then W* := W*_{I0};
        else begin
          for all k ∈ I+ do
            for all j ∈ I− do
              Ckj := W*_k − (α[k]/α[j])W*_j;
          W* := W*_{I0} ∪ W*_{I−} ∪_{kj} Ckj;
          framebylp(W*);
        end; (* else *)
      end; (* if *)
    end; (* for *)
end;
Figure 5 Finding the support.

  W* := (I  −e),

i.e. the n × n identity matrix I extended by the single column −e = (−1,···,−1)^T, or

  W* := (I  −I),

the identity matrix extended by its negative. On exit W* is the polar matrix. We initiate support by a call to framebylp in order to remove all columns from W that are not needed to describe pos W.

Example 5.1 Let us turn to a small example to see how procedure support progresses. Since pos W and pol pos W live in the same dimension, we can draw them side by side.

Figure 6 The cones pos W and pol pos W before any column has been added to W.

Let us initially assume that

  W = ( 3  1  −1  −2
        1  1   2   1 ).

The first thing to do, according to procedure support, is to subject W to a frame-finding algorithm, to see if some columns are not needed. If we do that (check it, to see that you understand frames), we end up with

  W = ( 3  −2
        1   1 ).

Having reduced W, we then initialize W* to span the whole space. Consult Figure 6 for details. We see there that

  W* = ( 1  0  −1   0
         0  1   0  −1 ).

Consult procedure support. From there it can be seen that the approach is to take one column from W at a time, and with it perform some calculations. Figure 6 shows the situation before we consider the first column of W. Calling it pos W is therefore not quite correct. The main point, however, is that the left and right parts correspond. If W has no columns, then pol pos W spans the whole space.

Figure 7 The cones pos W and pol pos W after one column has been added to W.

Now let us take the first column from W. It is given by W1 = (3, 1)^T. We next find the inner products between W1 and all four columns of W*. We get α = (3, 1, −3, −1)^T. In other words, the sets I+ = {1, 2} and I− = {3, 4} have two members each, while I0 = ∅. What this means is that two of the columns must be removed, namely those in I+, and two kept, namely those in I−. But to avoid losing parts of the space, we now calculate four columns Ckj. First, we get C13 = C24 = 0.
They are not interesting. But the other two are useful:
$$C_{14} = \begin{pmatrix}1\\0\end{pmatrix} + 3\begin{pmatrix}0\\-1\end{pmatrix} = \begin{pmatrix}1\\-3\end{pmatrix}, \qquad C_{23} = \begin{pmatrix}0\\1\end{pmatrix} + \frac{1}{3}\begin{pmatrix}-1\\0\end{pmatrix} = \begin{pmatrix}-\frac{1}{3}\\[2pt]1\end{pmatrix}.$$
Since our only interest is directions, we scale the latter to (−1, 3)ᵀ. This brings us into Figure 7. Note that one of the columns in pos W∗ is drawn with dots. This is done to indicate that if procedure framebylp is applied to W∗, that column will disappear. (However, that is not a unique choice.) Note that if W had had only this one column, then W∗, as it appears in Figure 7, would be the polar matrix of that one-column W. This is a general property of procedure support: at any iteration, the present W∗ is the polar matrix of the matrix containing those columns we have looked at so far.

Now let us turn to the second column of W. We find
$$\alpha^T = (-2, 1)\,W^* = (-2, 1)\begin{pmatrix} -1 & 1 & -1\\ 3 & -3 & 0 \end{pmatrix} = (5, -5, 2).$$
We must now calculate two extra columns, namely C₁₂ and C₃₂. The first gives 0, so it is not of interest. For the latter we get
$$C_{32} = \begin{pmatrix}-1\\0\end{pmatrix} + \frac{2}{5}\begin{pmatrix}1\\-3\end{pmatrix} = \begin{pmatrix}-\frac{3}{5}\\[2pt]-\frac{6}{5}\end{pmatrix},$$
which we scale to (−1, −2)ᵀ. This gives us Figure 8. To the left we have pos W, with W being the matrix we started out with, and to the right its polar cone. A column represents a feasible problem if it is inside pos W, or equivalently, if it has a nonpositive inner product with all generators of pos W∗ = pol pos W. □

Figure 8 The cones pos W and pol pos W after two columns have been added to W.

Assume we could indeed find W∗ using procedure support. Let w∗ be some column of W∗. For feasibility, we must have
$$(w^*)^T[h_0 + H\xi - T(\xi)x] \le 0 \quad\text{for all } \xi.$$
Hence
$$(w^*)^T T(\xi)x \ge (w^*)^T(h_0 + H\xi) \quad\text{for all } \xi.$$
If randomness affects both h and T, as indicated above, we must, at least in principle, create one inequality per ξ for each column from W∗. However, if T(ξ) ≡ T₀, we get a much easier setup by calculating
$$(w^*)^T T_0 x \ge (w^*)^T h_0 + \max_{t\in\Xi}\,(w^*)^T H t,$$
where Ξ is the support of ξ̃. If we do this for all columns of W∗ and add the resulting inequalities in terms of x to Ax = b, we achieve relatively complete recourse. Hence we see that relatively complete recourse can be generated; this is why the term is useful. It is very hard to test for relatively complete recourse, but once we have generated it we never have to worry about feasibility. Since the inequalities resulting from the columns of W∗ can be dominated by others (in particular, if T(ξ) is truly random), the new rows, together with those in Ax = b, should be subjected to row removal, as outlined earlier in this chapter.

5.2.1 A Small Example

Let us return to the example we discussed in Section 1.3. We have now named the right-hand-side elements b₁, b₂ and b₃, since they are the focus of the discussion here (in the numerical example they had the values 100, 180 and 162):
$$\begin{array}{rl} \min & 2x_{raw1} + 3x_{raw2}\\ \text{s.t.} & x_{raw1} + x_{raw2} \le b_1,\\ & 2x_{raw1} + 6x_{raw2} \ge b_2,\\ & 3x_{raw1} + 3x_{raw2} \ge b_3,\\ & x_{raw1} \ge 0,\quad x_{raw2} \ge 0. \end{array}$$
The interpretation is that b₁ is the production limit of a refinery, which refines crude oil from two countries. The variable x_raw1 represents the amount of crude oil from Country 1 and x_raw2 the amount from Country 2. The quality of the crudes is different, so one unit of crude from Country 1 gives two units of Product 1 and three units of Product 2, whereas one unit of crude from the second country gives 6 and 3 units of the same products. Company 1 wants at least b₂ units of Product 1 and Company 2 at least b₃ units of Product 2. If we now calculate the inequalities describing pos W, or alternatively the generators of pol pos W, we find that there are three of them:
$$\begin{array}{rcl} b_1 &\ge& 0,\\ 6b_1 - b_2 &\ge& 0,\\ 3b_1 - b_3 &\ge& 0. \end{array}$$
The first should be easy to interpret, and it says something that is not very surprising: the production capacity must not be negative. That we already knew. The second one is more informative.
Given appropriate units on crudes and products, it says that the demand of Company 1 must not exceed six times the production capacity of the refinery. Similarly, the third inequality says that the demand of Company 2 must not exceed three times the production capacity of the refinery. (The inequalities are not as meaningless as they might appear at first sight: remember that the units for refinery capacity and finished products are not the same.) These three inequalities, one of which was obvious, are examples of constraints that are not explicitly written down by the modeller, but still are implied by him or her. And they should give the modeller extra information about the problem. In case you wonder where the feasibility constraints are: what we have just discussed was a one-stage deterministic model, and what we obtained was three inequalities that can be used to check feasibility of certain instances of that model. For example, the numbers used in Section 1.3 satisfy all three constraints, and hence that problem was feasible. (In the example b₁ = 100, b₂ = 180 and b₃ = 162.)

Figure 9 Illustration of feasibility.

5.3 Reducing the Complexity of Feasibility Tests

In Chapter 3 (page 162) we discussed the set A, a set of ξ values such that if h₀ + Hξ − T(ξ)x produces a feasible second-stage problem for all ξ ∈ A, then the problem will be feasible for all possible values of ξ̃. We pointed out that in the worst case A had to contain all extreme points in the support of ξ̃. Assume that the second stage is given by
$$Q(x, \xi) = \min\{q(\xi)^T y \mid Wy = h_0 + H\xi - T_0 x,\ y \ge 0\},$$
where W is fixed and T(ξ) ≡ T₀. This covers many situations. In R² consider the example in Figure 9, where ξ̃ = (ξ̃₁, ξ̃₂, ξ̃₃).

Since h¹ ∈ pos W, we can safely fix ξ̃₁ at its lowest possible value ξ₁ᵐⁱⁿ: if things are going to go wrong, then they must go wrong for ξ₁ᵐⁱⁿ. Or, in other words, if h₀ + Hξ̂ − T₀x ∈ pos W for ξ̂ = (ξ₁ᵐⁱⁿ, ξ̂₂, ξ̂₃), then so is any other vector with ξ̃₂ = ξ̂₂ and ξ̃₃ = ξ̂₃, regardless of the value of ξ̃₁. Similarly, since −h² ∈ pos W, we can fix ξ̃₂ at its largest possible value ξ₂ᵐᵃˣ. Neither h³ nor −h³ is in pos W, so there is nothing to do with ξ̃₃. Hence, to check if x yields a feasible solution, we must check if
$$h_0 + H\xi - T_0 x \in \text{pos } W \quad\text{for } \xi = (\xi_1^{\min}, \xi_2^{\max}, \xi_3^{\min})^T \text{ and } \xi = (\xi_1^{\min}, \xi_2^{\max}, \xi_3^{\max})^T.$$
Hence in this case A will contain only two points instead of 2³ = 8. In general, we see that whenever a column from H, in either its positive or negative direction, is found to be in pos W, we can halve the number of points in A. In some cases we may therefore reduce the testing to one single problem.

It is of importance to understand that the reduction in the size of A has two positive aspects. First, if we do not have (or do not know that we have) relatively complete recourse, the test for feasibility, and therefore the generation of feasibility cuts, becomes much easier. But equally important is the fact that it tells us something about our problem. If a column from H is in pos W, we have found a direction in which we can move as far as we want without running into feasibility problems. This will, in a real setting, say something about the random effect we have modelled using that column.

5.4 Bibliographical Notes

Preprocessing and similar procedures have been used in contexts totally different from ours. This is natural, since questions of model formulations and infeasibilities are equally important in all areas of mathematical programming. For further reading, consult e.g. Roodman [7], Greenberg [3, 4, 5] or Chinneck and Dravnieks [1]. An advanced algorithm for finding frames can be found in Wets and Witzgall [12]. Later developments include the work of Rosen et al. [8] and Dulá et al. [2]. The algorithm for finding a support was described by Tschernikow [9], and later also by Wets [11]. For computational tests using the procedure, see Wallace and Wets [10]. Similar procedures for networks will be discussed in Chapter 6. For an overview of methods for extreme point enumeration, see e.g. Mattheiss and Rubin [6].

Exercises
1. Let W be the coefficient matrix for the following set of linear constraints:
$$\begin{array}{rcl} x + \tfrac{1}{2}y - z + s_1 &=& 0,\\ 2x + z + s_2 &=& 0,\\ x, y, z, s_1, s_2 &\ge& 0. \end{array}$$
(a) Find a frame of pos W.
(b) Draw a picture of pos W, and find the generators of pol pos W by simple geometric arguments.
(c) Find the generators of pol pos W by using procedure support in Figure 5. Make sure you draw the cones pos W and pol pos W after each iteration of the algorithm, so that you see how it proceeds.

2. Let the following set of constraints be given:
$$\begin{array}{rcl} x + y + z &\le& 4,\\ 2x + z &\le& 5,\\ y + z &\le& 8,\\ x, y, z &\ge& 0. \end{array}$$
(a) Are there any columns that are not needed for feasibility? (Remember the slack variables!)
(b) Let W contain the columns that were needed from question (a), including the slacks. Try to find the generators of pol pos W by geometric arguments, i.e. draw a picture.

3. Consider the following recourse problem constraints:
$$\begin{pmatrix} 1 & 3\\ 3 & 1 \end{pmatrix} y = \begin{pmatrix} 2\\ 7 \end{pmatrix} + \begin{pmatrix} 2 & 2 & -1 & 0 & -1\\ -4 & -2 & -1 & 1 & -1 \end{pmatrix}\xi + \begin{pmatrix} 5 & 3\\ 1 & 2 \end{pmatrix}x,$$
with y ≥ 0. Assume that all random variables are independent, with support [0, 1]. Look back at Section 5.3, where we discussed how we could simplify the feasibility test if we were not aware of relatively complete recourse. We there defined a set A such that if the recourse problem was feasible for all ξ ∈ A, then it was feasible for all ξ. In the worst case A has, in our case, 2⁵ = 32 elements. By whatever method you find useful (what about a picture?), reduce this number to six, and list the six elements.

References
[1] Chinneck J. W. and Dravnieks E. W. (1991) Locating minimal infeasible constraint sets in linear programs. ORSA J. Comp. 3: 157–168.
[2] Dulá J. H., Helgason R. V., and Hickman B. L. (1992) Preprocessing schemes and a solution method for the convex hull problem in multidimensional space. In Balci O. (ed) Computer Science and Operations Research: New Developments in their Interfaces, pages 59–70. Pergamon Press, Oxford.
[3] Greenberg H. J. (1982) A tutorial on computer-assisted analysis. In Greenberg H. J., Murphy F. H., and Shaw S. H. (eds) Advanced Techniques in the Practice of Operations Research. Elsevier, New York.
[4] Greenberg H. J. (1983) A functional description of ANALYZE: A computer-assisted analysis. ACM Trans. Math. Software 9: 18–56.
[5] Greenberg H. J. (1987) Computer-assisted analysis for diagnosing infeasible or unbounded linear programs. Math. Prog. Study 31: 79–97.
[6] Mattheiss T. H. and Rubin D. S. (1980) A survey and comparison of methods for finding all vertices of convex polyhedral sets. Math. Oper. Res. 5: 167–185.
[7] Roodman G. M. (1979) Post-infeasibility analysis in linear programming. Management Sci. 9: 916–922.
[8] Rosen J. B., Xue G. L., and Phillips A. T. (1992) Efficient computation of extreme points of convex hulls in ℝᵈ. In Pardalos P. M. (ed) Advances in Optimization and Parallel Computing, pages 267–292. North-Holland, Amsterdam.
[9] Tschernikow S. N. (1971) Lineare Ungleichungen. VEB Deutscher Verlag der Wissenschaften, Berlin. (Translated from Russian.)
[10] Wallace S. W. and Wets R. J.-B. (1992) Preprocessing in stochastic programming: The case of linear programs. ORSA Journal on Computing 4: 45–59.
[11] Wets R. J.-B. (1990) Elementary, constructive proofs of the theorems of Farkas, Minkowski and Weyl. In Gabszewicz J., Richard J.-F., and Wolsey L. (eds) Economic Decision Making: Games, Econometrics and Optimization: Contributions in Honour of Jacques Drèze, pages 427–432. North-Holland, Amsterdam.
[12] Wets R. J.-B. and Witzgall C. (1967) Algorithms for frames and lineality spaces of cones. J. Res. Nat. Bur. Stand. 71B: 1–7.

6 Network Problems
The purpose of this chapter is to look more specifically at networks. There are several reasons for doing this. First, networks are often easier to understand. Some of the results we have outlined earlier will be repeated here in a network setting, and that might add to the understanding of the results. Secondly, some results that are stronger than the corresponding LP results can be obtained by utilizing the network structure. Finally, some results can be obtained that do not have corresponding LP results to go with them. For example, we shall spend a section on PERT problems, since they provide us with the possibility of discussing many important issues.

The overall setting will be as before. We shall be interested in two- or multistage problems, and the overall solution procedures will be the same. Since network flow problems are nothing but specially structured LPs, everything we have said before about LPs still holds. The bounds we have outlined can be used, and the L-shaped decomposition method, with and without bounds, can be applied as before. We should like to point out, though, that there exists one special case where scenario aggregation looks more promising for networks than for general LPs: the situation where the overall problem is a network. This may require some more explanation.

When we discuss networks in this chapter, we refer to a situation in which the second stage (or the last stage in a multistage setting) is a network. We shall mostly allow the first stage to be a general linear program. This rather limited view of a network problem is caused by properties of the L-shaped decomposition method (see page 171). The computational burden in that algorithm is the calculation of Q(x̂), the expected recourse cost, and to some extent the check of feasibility. Both of those calculations concern only the recourse problem. Therefore, if that problem is a network, network algorithms can be used to speed up the L-shaped algorithm.
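The last point can be made concrete with a small sketch. Below, the second stage is a min-cost flow problem solved by a generic successive-shortest-path routine (standing in for the specialized network codes the text has in mind), and the expected recourse cost is evaluated by solving one flow problem per scenario. The three-node network, the first-stage capacities and the scenario data are all invented for illustration.

```python
# A sketch of evaluating the expected recourse cost Q(xhat) when the
# second stage is a min-cost network flow problem.  The network and the
# scenario data are made up; they do not come from the book.

def min_cost_flow(n, arcs, supply):
    """Min-cost flow on nodes 0..n-1.  arcs: (tail, head, capacity, cost);
    supply[i] > 0 is supply, < 0 demand.  Returns the optimal cost, or
    None if some demand cannot be met (an infeasible second stage)."""
    to, cap, cost = [], [], []
    graph = [[] for _ in range(n + 2)]
    def add(u, v, c, w):                        # arc and its residual twin
        graph[u].append(len(to)); to.append(v); cap.append(c); cost.append(w)
        graph[v].append(len(to)); to.append(u); cap.append(0); cost.append(-w)
    for u, v, c, w in arcs:
        add(u, v, c, w)
    s, t = n, n + 1                             # super-source and super-sink
    need = 0
    for i, b in enumerate(supply):
        if b > 0:
            add(s, i, b, 0); need += b
        elif b < 0:
            add(i, t, -b, 0)
    total = 0
    while need > 0:
        inf = float("inf")
        dist = [inf] * (n + 2); pred = [-1] * (n + 2)
        dist[s] = 0
        for _ in range(n + 1):                  # Bellman-Ford on the residual graph
            for u in range(n + 2):
                if dist[u] < inf:
                    for e in graph[u]:
                        if cap[e] > 0 and dist[u] + cost[e] < dist[to[e]]:
                            dist[to[e]] = dist[u] + cost[e]; pred[to[e]] = e
        if dist[t] == inf:
            return None
        f, v = need, t                          # bottleneck on the cheapest path
        while v != s:
            f = min(f, cap[pred[v]]); v = to[pred[v] ^ 1]
        v = t
        while v != s:
            e = pred[v]; cap[e] -= f; cap[e ^ 1] += f; v = to[e ^ 1]
        total += f * dist[t]; need -= f
    return total

# First stage: xhat fixes the three arc capacities; each scenario fixes demands.
SCENARIOS = [(0.5, (3, 4)), (0.5, (5, 2))]      # (probability, demands at nodes 1, 2)

def expected_recourse(xhat):
    val = 0.0
    for p, (d1, d2) in SCENARIOS:
        arcs = [(0, 1, xhat[0], 1), (0, 2, xhat[1], 2), (1, 2, xhat[2], 1)]
        val += p * min_cost_flow(3, arcs, [d1 + d2, -d1, -d2])
    return val

print(expected_recourse((5, 5, 5)))             # 10.0
```

The point of the sketch is only the division of labour: the L-shaped master would call `expected_recourse` at each candidate x̂, and the per-scenario subproblems are pure network problems.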
What if the first-stage problem is also a network? Example 2.2 (page 117) was such an example. If we apply the L-shaped decomposition method to that problem, the network structure of the master problem is lost as soon as feasibility and optimality cuts are added. This is where scenario aggregation, outlined in Section 2.6, can be of some use. The reason is that, throughout the calculations, individual scenarios remain unchanged in terms of constraints, so that structure is not lost. A nonlinear term is added to the objective function, however, so if the original problem was linear, we are now in a setting of quadratic objectives and linear (network) constraints. If the original problem was a nonlinear network, the added terms will not increase complexity at all.

6.1 Terminology

Consider a network with arcs E = {1, …, m} and nodes N = {1, …, n}. An arc k ∈ E will be denoted by k ∼ (i, j), indicating that it starts at i and ends at j. The capacity of k will be denoted by γ(k) and the cost by q(k). For each node i ∈ N, let β(i) be the external flow. We let β(i) > 0 denote supply and β(i) < 0 demand. We say that a network flow problem is capacitated if all arcs k have γ(k) < ∞. If all arcs are uncapacitated (meaning that γ(k) = ∞), we say that the network is uncapacitated. Most networks have arcs of both types, and their properties will then be mixtures of what we discuss for the two cases in this chapter.

By G(Y) we understand the network consisting of the nodes in Y ⊆ N and all arcs in E connecting nodes in Y. Of course, G(N) is the original network. For two arbitrary sets Y, Y′ ⊂ N, let {k ∼ (i, j) | i ∈ Y, j ∈ Y′} ⊆ E be denoted by [Y, Y′]⁺ and let {k ∼ (i, j) | j ∈ Y, i ∈ Y′} ⊆ E be denoted by [Y, Y′]⁻. For Y ⊂ N define Q⁺ = [Y, N \ Y]⁺ and Q⁻ = [Y, N \ Y]⁻. We call Q = Q⁺ ∪ Q⁻ = [Y, N \ Y] a cut. Whenever we refer to Y and Q without stating their relationship, we are assuming that Q = [Y, N \ Y].
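In code, the cut notation amounts to a couple of set comprehensions. The sketch below uses the arc list of the network in Figure 1, as it can be read off Example 6.1 below (e.g. arc 5 ∼ (2, 3) starts at node 2 and ends at node 3):

```python
# Arc list of the network in Figure 1 (read off Example 6.1).
ARCS = {1: (1, 2), 2: (1, 3), 3: (2, 4), 4: (3, 4), 5: (2, 3)}
NODES = {1, 2, 3, 4}

def between(Y, Z):
    """[Y, Z]+ : the arcs starting in Y and ending in Z."""
    return {k for k, (i, j) in ARCS.items() if i in Y and j in Z}

def cut(Y):
    """Q+, Q- and the cut Q = [Y, N \\ Y] for a node set Y."""
    Qplus = between(Y, NODES - Y)
    Qminus = between(NODES - Y, Y)
    return Qplus, Qminus, Qplus | Qminus

print(cut({1, 3}))   # ({1, 4}, {5}, {1, 4, 5})
```

The output for Y = {1, 3} reproduces the sets Q⁺ = {1, 4} and Q⁻ = {5} computed by hand in Example 6.1.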
For each Y ⊆ N, let b(Y) ∈ {0, 1}ⁿ be an index vector for the set Y, i.e. b(Y, i) = 1 if i ∈ Y, and 0 otherwise. Similarly, for each Q ⊆ E, let a(Q⁺) ∈ {0, 1}ᵐ be an index vector for the set Q⁺, i.e. a(Q⁺, k) = 1 if k ∈ Q⁺, and 0 otherwise. The node–arc incidence matrix for a network will be denoted by W, and is defined by
$$W(i, k) = \begin{cases} \hphantom{-}1 & \text{if } k \sim (i, j) \text{ for some } j,\\ -1 & \text{if } k \sim (j, i) \text{ for some } j,\\ \hphantom{-}0 & \text{otherwise.} \end{cases}$$
The rows in the node–arc incidence matrix are linearly dependent. For the system Wy = b to have a solution, we know from Chapter 1 that rk W = rk(W | b). In a network this requirement means that there must be one node where the external flow equals exactly the negative sum of the external flows in the other nodes. This node is called the slack node. It is customary not to include a row for that node in W. Hence W has only n − 1 rows, and it has full rank provided the network is connected. A network is connected if for all Y ⊂ N we have Q = [Y, N \ Y] ≠ ∅.

Figure 1 Network used to demonstrate definitions.

We shall also need the following sets:
$$\begin{array}{rcl} F^+(Y) &=& \{\text{nodes } j \mid k \sim (i, j) \text{ for } i \in Y\} \cup Y,\\ B^+(Y) &=& \{\text{nodes } j \mid k \sim (j, i) \text{ for } i \in Y\} \cup Y. \end{array}$$
The set F⁺(Y) contains Y itself plus all nodes that can be reached directly (i.e. in one step) from a node in Y. Similarly, B⁺(Y) contains Y and all nodes from which Y can be reached directly. Two other sets that are very similar to F⁺(Y) and B⁺(Y) are
$$\begin{array}{rcl} F^*(Y) &=& \{\text{nodes } j \mid \exists \text{ a directed path from some node } i \in Y \text{ to node } j\} \cup Y,\\ B^*(Y) &=& \{\text{nodes } j \mid \exists \text{ a directed path from node } j \text{ to some node } i \in Y\} \cup Y. \end{array}$$
Thus the sets F⁺ and B⁺ pick up immediate successors and predecessors, whereas F∗ and B∗ pick up all successors and predecessors.

Example 6.1 Let us consider Figure 1 to briefly illustrate most of the concepts we have introduced. The node set is N = {1, 2, 3, 4} and the arc set is E = {1, 2, 3, 4, 5}. An example of an arc is 5 ∼ (2, 3), since arc 5 starts at node 2 and ends at node 3.
Let Y = {1, 3} and Y′ = {2}. The network G(Y) consists of nodes 1 and 3, and arc 2, since that is the only arc connecting nodes in Y. Furthermore, for the same Y and Y′, we have [Y, Y′]⁺ = {1}, since arc 1 is the only arc going from node 1 or 3 to node 2. Similarly, [Y, Y′]⁻ = {5}. If we define Q = [Y, N \ Y], then Q⁺ = {1, 4} and Q⁻ = {5}. Therefore Q = {1, 4, 5} is a cut. Again, with the same definition of Y, we have
$$b(Y) = (1, 0, 1, 0)^T, \qquad a(Q^+) = (1, 0, 0, 1, 0)^T.$$
Furthermore, we have
$$F^+(\{1\}) = \{1, 2, 3\}, \qquad F^*(\{1\}) = \{1, 2, 3, 4\},$$
since we can reach nodes 2 and 3 in one step, but we need two steps to reach node 4. Node 1 itself is in both sets by definition. Two examples of predecessor sets are
$$B^+(\{1\}) = \{1\}, \qquad B^*(\{2, 3\}) = \{1, 2, 3\},$$
since node 1 has no predecessors, and nodes 2 and 3 can be reached from node 1.

A common problem in network flows is the min cost network flow problem. It is given as follows:
$$\begin{array}{rl} \min & q(1)y(1) + q(2)y(2) + q(3)y(3) + q(4)y(4) + q(5)y(5)\\ \text{s.t.} & y(1) + y(2) = \beta(1),\\ & -y(1) + y(3) + y(5) = \beta(2),\\ & -y(2) + y(4) - y(5) = \beta(3),\\ & -y(3) - y(4) = \beta(4),\\ & y(k) \le \gamma(k),\ k = 1, \ldots, 5,\\ & y(k) \ge 0,\ k = 1, \ldots, 5. \end{array}$$
The coefficient matrix for this problem has rank 3. Therefore the node–arc incidence matrix has three rows, and is given by
$$W = \begin{pmatrix} 1 & 1 & 0 & 0 & 0\\ -1 & 0 & 1 & 0 & 1\\ 0 & -1 & 0 & 1 & -1 \end{pmatrix}. \qquad\square$$

6.2 Feasibility in Networks

In Section 3.2 and Chapter 5 we discussed feasibility in linear programs. As will become apparent shortly, it is easier to obtain feasibility results for networks than for LPs. Let us first run through the development, and then later see how this fits in with the LP results. A well-known result concerning feasibility in networks states that if the net flow across every cut in a network is less than or equal to the capacity of that cut, then the problem is feasible. More formally, this can be stated as follows, using βᵀ = (β(1), …, β(n)) and γᵀ = (γ(1), …, γ(m)).
Proposition 6.1 A capacitated network flow problem with total supply equal to total demand is feasible iff for every cut Q = [Y, N \ Y], b(Y)ᵀβ ≤ a(Q⁺)ᵀγ.

function Connected(W : set of nodes) : boolean;
begin
    PickNode(i, W);
    Qlist := {i};
    Visited := {i};
    while Qlist ≠ ∅ do begin
        PickNode(i, Qlist);
        Qlist := Qlist \ {i};
        s := (B∗(i) ∪ F∗(i)) ∩ (W \ Visited);
        Qlist := Qlist ∪ s;
        Visited := Visited ∪ s;
    end;
    Connected := (Visited = W);
end;
Figure 2 Function checking network connectedness.

The above proposition is very simple in nature. However, from a computational point of view, it is not very useful. It requires that we look at all subsets Y of N, in other words 2ⁿ subsets. For reasonably large n it is not computationally feasible to try to enumerate subsets this way. Another problem, which might not be that obvious when reading the proposition, is that it is not an "if and only if" statement in a very useful sense. There is no guarantee that inequalities arising from the proposition are indeed needed. We might—and most probably will—end up with inequalities that are implied by other inequalities.

A key issue in this respect is the connectedness of a network. We defined earlier that a network is connected if for all Y ⊂ N we have Q = [Y, N \ Y] ≠ ∅. It is reasonably easy to check connectedness of a network. Details are given in function Connected in Figure 2. Note that we use F∗ and B∗. If they are not available, we can also use F⁺ and B⁺, or calculate F∗ and B∗, which is quite simple. Using the property of connectedness, it is possible to prove the following stronger result.

Proposition 6.2 Let Q = [Y, N \ Y]. For capacitated networks the inequalities
$$b(Y)^T\beta \le a(Q^+)^T\gamma, \qquad b(N\setminus Y)^T\beta \le a(Q^-)^T\gamma$$
are both needed if and only if G(Y) and G(N \ Y) are both connected. Otherwise, neither of the inequalities is needed.

Example 6.2 Let us look at the small example network in Figure 3 to at least partially see the relevance of the last proposition.

Figure 3 Example network 1.

The following three inequalities are examples of inequalities describing feasibility for the example network:
$$\begin{array}{rcl} \beta(2) &\le& \gamma(d) + \gamma(f),\\ \beta(3) &\le& \gamma(e),\\ \beta(2) + \beta(3) &\le& \gamma(d) + \gamma(e) + \gamma(f). \end{array}$$
Proposition 6.2 states that the latter inequality is not needed, because G({2, 3}) is not connected. From the inequalities themselves, we easily see that if the first two are satisfied, then the third is automatically true. It is perhaps slightly less obvious that, for the very same reason, the inequality
$$\beta(1) + \beta(4) + \beta(5) \le \gamma(a) + \gamma(c)$$
is also not needed. It is implied by the requirement that total supply must equal total demand, together with the companions of the first two inequalities above. (Remember that each node set gives rise to two inequalities.) More specifically, the inequality can be obtained by adding the following two inequalities and one equality (representing supply equals demand):
$$\begin{array}{rcl} \beta(1) + \beta(2) + \beta(4) + \beta(5) &\le& \gamma(c),\\ \beta(1) + \beta(3) + \beta(4) + \beta(5) &\le& \gamma(a),\\ -\beta(1) - \beta(2) - \beta(3) - \beta(4) - \beta(5) &=& 0. \end{array} \qquad\square$$

Once you have looked at this for a while, you will probably realize that the part of Proposition 6.2 that says that if G(Y) or G(N \ Y) is disconnected then we do not need any of the inequalities is fairly obvious. The other part of the proposition is much harder to prove, namely that if G(Y) and G(N \ Y) are both connected, then the inequalities corresponding to Y and N \ Y are both needed. We shall not try to outline the proof here.

Proposition 6.2 might not seem very useful. A straightforward use could still require the enumeration of all subsets of N, and for each such subset a check to see if G(Y) and G(N \ Y) are both connected. However, we can obtain more than that. The first important observation is that the result refers to the connectedness of two networks—both the one generated by Y and the one generated by N \ Y. Let Y₁ = N \ Y.
If both networks are connected, we have two inequalities that we need, namely
$$b(Y)^T\beta \le a(Q^+)^T\gamma \quad\text{and}\quad b(Y_1)^T\beta = b(N\setminus Y)^T\beta \le a(Q^-)^T\gamma.$$
On the other hand, if at least one of the networks is disconnected, neither inequality will be needed. Therefore, checking each subset of N means doing twice as much work as needed. If we are considering Y and discover that both G(Y) and G(Y₁ = N \ Y) are connected, we write down both inequalities at the same time. An easy way to achieve this is to disregard some node (say node n) from consideration in a full enumeration. This way, we will achieve n ∈ N \ Y for all Y we investigate. Then, for each cut where the connectedness requirement is satisfied, we write down two inequalities. This will halve the number of subsets to be checked.

In some cases it is possible to reduce the complexity of a calculation by collapsing nodes. By this, we understand the process of replacing a set of nodes by one new node. Any other node that had an arc to or from one of the collapsed nodes will afterwards have an arc to or from the new node: one for each original arc. If Y is a set of nodes, we let A(Y) be the set of original nodes represented by the present node set Y as a result of collapsing. To simplify statements later on, we shall also need a way to state simply which inequalities we want to write down. An algorithm for the capacitated case is given in Figure 4. Note that we allow the procedure to be called with Y = ∅. This is a technical device to ensure consistent results, but you should not let that confuse you at the present time.

Based on Proposition 6.2, it is possible to develop procedures that in some cases circumvent the exponential complexity arising from checking all subsets of N. We shall use Figure 5 to illustrate some of our points.

Proposition 6.3 If B⁺(i) ∪ F⁺(i) = {i, j} then nodes i and j can be collapsed after the inequalities generated by CreateIneq({i}) have been created.
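Proposition 6.3 translates directly into a scan over the nodes. The following sketch uses a made-up five-node network (not the one in Figure 5) and finds every node i whose only neighbour, in either direction, is a single node j:

```python
# Hypothetical network used only to illustrate Proposition 6.3;
# it is NOT the network of Figure 5.
def collapsible(nodes, arcs):
    """Return {i: j} for every node i with B+(i) ∪ F+(i) = {i, j}:
    such an i can be collapsed into j once CreateIneq({i}) has been called."""
    nbrs = {i: {i} for i in nodes}          # i belongs to both B+(i) and F+(i)
    for (u, v) in arcs:
        nbrs[u].add(v)                      # v ∈ F+(u)
        nbrs[v].add(u)                      # u ∈ B+(v)
    result = {}
    for i, s in nbrs.items():
        if len(s) == 2:                     # s == {i, j}
            (j,) = s - {i}
            result[i] = j
    return result

print(collapsible({1, 2, 3, 4, 5},
                  [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5)]))   # {5: 4}
```

As the text notes, after each collapse the neighbour sets must be recomputed, so in practice the scan is repeated until no node qualifies; this is what strips all trees off the network.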
procedure CreateIneq(Y : set of nodes);
begin
    if A(Y) ≠ ∅ then begin
        create the inequality b(A(Y))ᵀβ ≤ a(Q⁺)ᵀγ;
        create the inequality b(A(N \ Y))ᵀβ ≤ a(Q⁻)ᵀγ;
    end
    else begin
        create the inequality b(A(N))ᵀβ ≤ 0;
        create the inequality −b(A(N))ᵀβ ≤ 0;
    end;
end;
Figure 4 Algorithm for generating inequalities—capacitated case.

Figure 5 Example network used to illustrate Proposition 6.3.

The only set Y with i ∈ Y but j ∉ Y, such that both G(Y) and G(N \ Y) are connected, is the set Y = {i}. The reason is that node j blocks node i's connections to all other nodes. Therefore, after calling CreateIneq({i}), we can safely collapse node i into node j. Examples of this can be found in Figure 5 (see e.g. nodes 4 and 5). This result is easy to implement, since all we have to do is run through all nodes, one at a time, and look for nodes satisfying B⁺(i) ∪ F⁺(i) = {i, j}. Whenever collapses take place, F⁺ and B⁺ (or, alternatively, F∗ and B∗) must be updated for the remaining nodes. By repeatedly using this proposition, we can remove from the network all trees (and trees include "double arcs" like those between nodes 2 and 5). We are then left with circuits and paths connecting circuits. The circuits can be both directed and undirected. In the example in Figure 5 we are left with

procedure AllFacets;
begin
    TreeRemoval;
    CreateIneq(∅);
    Y := ∅;
    W := N \ {n};
    Facets(Y, W);
end;
Figure 6 Main program for full enumeration of inequalities satisfying Proposition 6.2.

procedure Facets(Y, W : set of nodes);
begin
    PickNode(Y, W, i);
    if i ≠ 0 then begin
        W := W \ {i};
        Facets(Y, W);
        Y := Y ∪ {i};
        Facets(Y, W);
        if Connected(N \ Y) then CreateIneq(Y);
    end;
end;
Figure 7 Recursive algorithm for generating facets.

nodes 1, 2 and 3. We shall assume that there is a procedure TreeRemoval that takes care of this reduction.

There is one final remark to be made based on Proposition 6.2. For each set Y we must check the connectedness of both G(Y) and G(N \ Y). We can skip the first if we simply make sure that G(Y) is always connected. This can easily be achieved by building up Y (in the enumeration) such that it is always connected. We shall assume that we have available a procedure PickNode(Y, W, i) that picks a node i from W provided that node is reachable from Y in one step. Otherwise, it returns i := 0. We now present a main program and a main procedure for the full enumeration. They are listed in Figures 6 and 7.

6.2.1 The uncapacitated case

The corresponding results for uncapacitated networks can be found by checking what happens when we put γ(k) = ∞ in all previous results. The result corresponding to Proposition 6.1 is as follows.

Proposition 6.4 An uncapacitated network flow problem with total supply equal to total demand is feasible iff for every cut Q = [Y, N \ Y] with Q⁺ = ∅, b(Y)ᵀβ ≤ 0.

This result is developed by observing that the inequality in Proposition 6.1 becomes b(Y)ᵀβ ≤ ∞ for all cuts but those with Q⁺ = ∅. And this is, of course, always true. Similarly, a connectedness result can be obtained that corresponds to Proposition 6.2.

Proposition 6.5 For an uncapacitated network, a cut Q = [Y, N \ Y] with Q⁺ = ∅ is needed if and only if G(Y) and G(N \ Y) are both connected.

Collapsing nodes was discussed for the capacitated case. Those results apply here as well, in particular Proposition 6.3. But for the uncapacitated case we can make a few extra observations.

Proposition 6.6 For an uncapacitated network, if Q⁺ = [Y, N \ Y]⁺ = ∅ then F∗(i) ⊆ Y for i ∈ Y.

From this the following easily follows.

Proposition 6.7 If j₁, j₂, …, j_K is a set of arcs in an uncapacitated network such that j_k ∼ (i_k, i_{k+1}) and i₁ = i_{K+1}, then the nodes i₁, …, i_K will always be on the same side of a cut Q if Q⁺ = ∅.

We utilize this by collapsing all directed circuits in the network. As an example, consider Figure 8, which is almost like Figure 3, except that arc b has been turned around. Since arcs a, d and b, as well as arcs b, c and e, constitute directed circuits, we can collapse these circuits and arrive at the network in Figure 9. Of course, it is now much easier to investigate all possible subsets of N. If a network has both capacitated and uncapacitated arcs, we must apply the results for capacitated networks, but drop any inequality that corresponds to a cut where Q⁺ contains an uncapacitated arc.
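Collapsing all directed circuits is exactly the computation of the strongly connected components of the digraph, followed by its condensation. The following sketch does this with Kosaraju's two-pass method on a hypothetical four-node digraph (not the network of Figure 8, whose arc endpoints are only given graphically):

```python
def collapse_circuits(nodes, arcs):
    """Collapse every directed circuit into a single node (cf. Proposition 6.7):
    compute the strongly connected components (Kosaraju) and the condensation."""
    adj = {u: [] for u in nodes}
    radj = {u: [] for u in nodes}
    for u, v in arcs:
        adj[u].append(v)
        radj[v].append(u)

    seen = set()
    def dfs(u, graph, acc):                 # iterative DFS, post-order in acc
        stack = [u]
        seen.add(u)
        while stack:
            x = stack[-1]
            rest = [y for y in graph[x] if y not in seen]
            if rest:
                seen.add(rest[0])
                stack.append(rest[0])
            else:
                stack.pop()
                acc.append(x)

    order = []
    for u in nodes:                         # pass 1: finish order on G
        if u not in seen:
            dfs(u, adj, order)

    seen = set()                            # pass 2: components on reversed G
    comp = {}
    for u in reversed(order):               # decreasing finish time
        if u not in seen:
            members = []
            dfs(u, radj, members)
            for x in members:
                comp[x] = u                 # label the component by its root
    new_arcs = {(comp[u], comp[v]) for u, v in arcs if comp[u] != comp[v]}
    return comp, new_arcs

comp, condensation = collapse_circuits([1, 2, 3, 4],
                                       [(1, 2), (2, 3), (3, 1), (3, 4)])
print(len(set(comp.values())), condensation)   # 2 {(1, 4)}
```

Here the circuit 1 → 2 → 3 → 1 collapses into one node, leaving a two-node network, which is the kind of reduction Figures 8 and 9 illustrate.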
Figure 8 Example network 2, assumed to be uncapacitated.
Figure 9 Example network 2 after collapsing the directed circuits.

6.2.2 Comparing the LP and Network Cases

We used Section 5.2 to discuss feasibility in linear programs. Since network flow problems are just special cases of linear programs, those results apply here as well, of course. On the other hand, we have just discussed feasibility in networks more specifically, and apparently the setting was very different. The purpose of this section is to show in some detail how these results relate to each other.

Let us first repeat the major discussions from Section 5.2. Using the cone pos W = {t | t = Wy, y ≥ 0}, we defined the polar cone pos W∗ = pol pos W = {t | tᵀy ≤ 0 for all y ∈ pos W}. The interesting property of the cone pos W∗ is that the recourse problem is feasible if and only if a given right-hand side has a nonpositive inner product with all generators of the cone. And if there are not too many generators, it is much easier to perform inner products than to check if a linear program is feasible. Refer to Figure 4 for an illustration in three dimensions.¹ To find the polar cone, we used procedure support in Figure 5. The major computational burden in that procedure is the call to procedure framebylp, outlined in Figure 1. In principle, to determine if a column is part of the frame, we must remove the column from the matrix, put it as a right-hand side, and see if the corresponding system of linear equations has a solution or not. If it has
a solution, the column is not part of the frame, and can be removed. An important property of this procedure is that to determine if a column can be discarded, we have to use all other columns in the test. This is a major reason why procedure framebylp is so slow when the number of columns gets very large.

¹ Figures and procedures referred to in this subsection are contained in Chapter 5.

So, a generator w∗ of the cone pos W∗ has the property that a right-hand side h must satisfy hᵀw∗ ≤ 0 to be feasible. In the uncapacitated network case we saw that a right-hand side β had to satisfy b(Y)ᵀβ ≤ 0 to represent a feasible problem. Therefore the index vector b(Y) corresponds exactly to the column w∗. And calling procedure framebylp to remove those columns that are not in the frame of the cone pos W∗ corresponds to using Proposition 6.5. Therefore the index vectors of node sets from Proposition 6.5 correspond to the columns in W∗.

Computationally there are major differences, though. First, to find a candidate for W∗, we had to start out with W and use procedure support, which is an iterative procedure. The network inequalities, on the other hand, are produced more directly by looking at all subsets of nodes. But the most important difference is that, while the use of procedure framebylp, as just explained, requires all columns to be available in order to determine if one should be discarded, Proposition 6.5 is totally local. We can pick up an inequality and determine if it is needed without looking at any other inequalities. With possibly millions of candidates, this difference is crucial.

We did not develop the LP case for explicit bounds on variables. If such bounds exist, they can, however, be put in as explicit constraints. If so, a column w∗ from W∗ corresponds to the index vector
( b(Y)   )
( −a(Q+) ).

6.3 Generating Relatively Complete Recourse

Let us now discuss how the results obtained in the previous section can help us, and how they can be used in a setting that deserves the term preprocessing. Let us first repeat some of our terminology, in order to see how this fits in with our discussions in the LP setting. A two-stage stochastic linear programming problem where the second-stage problem is a directed capacitated network flow problem can be formulated as follows:

min_x c^T x + Q(x)
s.t. Ax = b, x ≥ 0,

where

Q(x) = Σ_j Q(x, ξ^j) p_j

and

Q(x, ξ) = min_{y^1} {(q^1)^T y^1 | W_0 y^1 = h^1_0 + H^1 ξ − T^1(ξ) x,
                                  0 ≤ y^1 ≤ h^2_0 + H^2 ξ − T^2(ξ) x},

where W_0 is the node–arc incidence matrix for the network. To fit into a more general setting, let

W = ( W_0  0 )
    (  I   I )

so that Q(x, ξ) can also be written as

Q(x, ξ) = min_y {q^T y | W y = h_0 + H ξ − T(ξ) x, y ≥ 0},
where

y = ( y^1 ),
    ( y^2 )

with y^2 the slack of y^1, and

q = ( q^1 ),  h_0 = ( h^1_0 ),  T(ξ) = ( T^1(ξ) ),  H = ( H^1 ).
    (  0  )        ( h^2_0 )          ( T^2(ξ) )      ( H^2 )

Given our definition of β and γ, we have, for a given x̂,

( β ) = h_0 + H ξ − T(ξ) x̂ = h_0 + Σ_i h_i ξ_i − T(ξ) x̂.
( γ )

Using the inequalities derived in the previous section, we can proceed to transform these inequalities into inequalities in terms of x. By adding these inequalities to the first-stage constraints Ax = b, we get relatively complete recourse, i.e. we guarantee that any x satisfying the (expanded) first-stage constraints will yield a feasible second-stage problem for any value of ξ. An inequality has the form

b[A(Y)]^T β = Σ_{i ∈ A(Y)} β(i) ≤ Σ_{k ∈ Q+} γ(k) = a(Q+)^T γ.

Let us replace β and γ with their expressions in terms of x and ξ. An inequality then says that the following must be true for all values of x and all realizations ξ of the random vector:

b[A(Y)]^T ( h^1_0 + Σ_i h^1_i ξ_i − T^1(ξ) x ) ≤ a(Q+)^T ( h^2_0 + Σ_i h^2_i ξ_i − T^2(ξ) x ).

Collecting all x terms on the left-hand side and all other terms on the right-hand side, we get the following expression:

( −b[A(Y)]^T ( T^1_0 + Σ_j T^1_j ξ_j ) + a(Q+)^T ( T^2_0 + Σ_j T^2_j ξ_j ) ) x
    ≤ Σ_i ( −b[A(Y)]^T h^1_i + a(Q+)^T h^2_i ) ξ_i − b[A(Y)]^T h^1_0 + a(Q+)^T h^2_0.

Since this must be true for all possible values of ξ, we get one such inequality for each ξ. If T(ξ) ≡ T_0, we can make this more efficient by calculating only one cut, given by the following inequality:

( −b[A(Y)]^T T^1_0 + a(Q+)^T T^2_0 ) x
    ≤ min_{ξ ∈ Ξ} Σ_i ( −b[A(Y)]^T h^1_i + a(Q+)^T h^2_i ) ξ_i − b[A(Y)]^T h^1_0 + a(Q+)^T h^2_0.

The minimization is of course very simple in the independent case, since the minimization can be moved inside the sum.

When facets have been transformed into inequalities in terms of x, we might find that they are linearly dependent. We should therefore subject them, together with the constraints Ax = b, to a procedure that removes redundant constraints. We have discussed this subject in Chapter 5.

The above results have two applications. Both are related to preprocessing. Let us first repeat the one we briefly mentioned above, namely that, after the inequalities have been added to Ax = b, we have relatively complete recourse, i.e. any x satisfying the (expanded) first-stage constraints will automatically produce a feasible recourse problem for all values of ξ. This opens up the avenue to methods that require this property, and it can help in others where the property is not strictly needed. For example, we can use the L-shaped decomposition method (page 171) without concern about feasibility cuts, or apply the stochastic decomposition method as outlined in Section 3.8.

Another, and in our view more important, use of these inequalities is in model understanding. As expressions in x, they represent implicit assumptions made by the modeller in terms of the first-stage decisions. They are implicit because they were never written down, but they are there because otherwise the recourse problem can become infeasible. And, as part of the model, the modeller has made the requirements expressed in these implicit constraints. If there are not too many implicit assumptions, the modeller can relate to them, and either learn about his or her own model, or decide that he or she did not want to make these assumptions. If so, there is need for a revision of the model. It is worth noting that the inequalities in terms of β and γ are also interesting in their own right.
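As a small illustration of the single-cut construction for the case T(ξ) ≡ T_0, the following sketch assembles the coefficient vector and right-hand side of one cut. All function names and the toy data are our own illustration, not the book's; the matrices are passed as plain lists of rows.

```python
# Sketch of assembling one first-stage cut for the case T(xi) == T0.
# All names and the toy data below are our own illustration.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def first_stage_cut(b, a, T1, T2, h1_0, h2_0, H1, H2, xi_box):
    """Return (coeffs, rhs) such that  sum_j coeffs[j] * x_j <= rhs
    holds for every xi in the box xi_box (independent components).

    b, a       : index vectors b(A(Y)) and a(Q+)
    T1, T2     : the two row blocks of T0, as lists of rows
    h1_0, h2_0 : deterministic right-hand-side parts
    H1, H2     : H1[i], H2[i] are the columns multiplying xi_i
    xi_box     : list of (low, high) supports of the xi_i
    """
    n_x = len(T1[0])
    # x coefficients: -b(A(Y))^T T0^1 + a(Q+)^T T0^2
    coeffs = [-sum(b[r] * T1[r][j] for r in range(len(b)))
              + sum(a[r] * T2[r][j] for r in range(len(a)))
              for j in range(n_x)]
    # constant part of the right-hand side
    rhs = -dot(b, h1_0) + dot(a, h2_0)
    # worst case over xi: with independent components the minimization
    # moves inside the sum, one term per random variable
    for (lo, hi), c1, c2 in zip(xi_box, H1, H2):
        c = -dot(b, c1) + dot(a, c2)
        rhs += min(c * lo, c * hi)
    return coeffs, rhs
```

The returned pair can be appended directly to the first-stage constraints Ax = b as one more linear inequality.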
These inequalities show the modeller how the external flows and arc capacities must combine in order to produce a feasible recourse problem. This, too, can lead to understanding and/or model reformulation.

6.4 An Investment Example

Consider the simple network in Figure 10. It represents the flow of sewage (or some other waste) from three cities, represented by nodes 1, 2 and 3.

[Figure 10: Transportation network for sewage, used for the example in Section 6.4.]

All three cities produce sewage, and they have local treatment plants to take care of some of it. Both the amount of sewage from a city and its treatment capacity vary, and the net variation for a city is given next to the node representing the city. For example, City 1 always produces more than it can treat, and the surplus varies between 10 and 20 units per unit time. City 2, on the other hand, sometimes can treat up to 5 units of sewage from other cities, but at other times has as much as 15 units it cannot itself treat. City 3 always has extra capacity, and that varies between 5 and 15 units per unit time.

The solid lines in Figure 10 represent pipes through which sewage can be pumped (at a cost). Assume all pipes have a capacity of up to 5 units per unit time. Node 4 is a common treatment site for the whole area, and its capacity is so large that for practical purposes we can view it as being infinite. Until now, whenever a city had sewage that it could not treat itself, it first tried to send it to other cities, or to site 4, but if that was not possible, the sewage was simply dumped in the ocean. (It is easy to see that this can happen: when City 1 has more than 10 units of untreated sewage, it must dump some of it.) New rules are being introduced, and within a short period of time dumping sewage will not be allowed. Four projects have been suggested.

• Increase the capacity of the pipe from City 1 (via City 2) to site 4 by x1 units (per unit time).
• Increase the capacity of the pipe from City 2 to City 3 by x2 units (per unit time).
• Increase the capacity of the pipe from City 1 (via City 3) to site 4 by x3 units (per unit time).
• Build a new treatment plant in City 1 with a capacity of x4 units (per unit time).

It is not quite clear whether capacity increases can take on any values, or just some predefined ones. Also, the cost structure of the possible investments is not yet clear. Even so, we are asked to analyse the problem and create a better basis for decisions.

The first thing we must do, to use the procedures of this chapter, is to make sure that, technically speaking, we have a network (as defined at the start of the chapter). A close look will reveal that a network must have equality constraints at the nodes, i.e. flow in must equal flow out. That is not the case in our little network. If City 3 has spare capacity, we do not have to send extra sewage to the city; we simply leave the capacity unused if we do not need it. The simplest way to take care of this is to introduce some new arcs in the network. They are shown with dotted lines in Figure 10. Finally, to have supply equal to demand in the network (remember from Proposition 6.1 that this is needed for feasibility), we let the external flow in node 4 be the negative of the sum of the external flows in the other three nodes.

You may wonder if this rewriting makes sense. What does it mean when "sewage" is sent along a dotted line in the figure? The simple answer is that the amount exactly equals the unused capacity in the city to which the arc goes. (Of course, with the given numbers, we realize that no arc will be needed from node 4 to node 1, but we have chosen to add it for completeness.)

Now, to learn something about our problem, let us apply Proposition 6.2 to arrive at a number of inequalities. You may find it useful to try to write them down. We shall write down only some of them.
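Inequalities of this type can also be generated mechanically: enumerate the node sets Y ⊆ {1, 2, 3} and, for each, collect the capacity of the arcs leaving Y. The sketch below does this for our reading of Figure 10, with each investment x_k attached to the arcs it widens and the new plant in City 1 modelled as an arc from node 1 to site 4 of capacity x4; the arc list is an assumption reconstructed from the description, not the book's own data.

```python
# Enumerate the node-set inequalities sum_{i in Y} beta_i <= cap(out of Y)
# for the sewage network. The arc encoding is our reading of Figure 10.
from itertools import combinations

# (tail, head, base capacity, investment index or None)
arcs = [(1, 2, 5, 1), (2, 4, 5, 1),   # pipe 1 -> 2 -> 4, widened by x1
        (2, 3, 5, 2),                 # pipe 2 -> 3, widened by x2
        (1, 3, 5, 3), (3, 4, 5, 3),   # pipe 1 -> 3 -> 4, widened by x3
        (1, 4, 0, 4)]                 # assumed: new plant in City 1 as arc 1 -> 4

def cut_inequalities():
    """Return {Y: (constant, investment indices)} meaning
    sum_{i in Y} beta_i <= constant + sum of the listed x's.
    (Here each investment crosses each cut at most once.)"""
    out = {}
    for r in (1, 2, 3):
        for Y in combinations((1, 2, 3), r):
            const, xs = 0, set()
            for tail, head, cap, k in arcs:
                if tail in Y and head not in Y:   # arc crosses the cut
                    const += cap
                    if k is not None:
                        xs.add(k)
            out[Y] = (const, sorted(xs))
    return out
```

For example, Y = {2} gives β2 ≤ 10 + x1 + x2, since the arcs leaving node 2 are the pipe to City 3 (5 + x2) and the pipe to site 4 (5 + x1). Dotted arcs have infinite capacity and are omitted, since node sets whose cut contains one yield no information.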
The reason for leaving out some is the following observation: any node set Y such that Q+ contains a dotted arc from Figure 10 will be uninteresting, because then a(Q+)^T γ = ∞, so that the inequality says nothing. The remaining inequalities are as follows (where we have used that all existing pipes have a capacity of 5 per unit time):

β1           ≤ 10 + x1 + x3 + x4,
β2           ≤ 10 + x1 + x2,
β3           ≤  5 + x3,
β1 + β2 + β3 ≤ 10 + x1 + x3 + x4,            (4.1)
β1 + β2      ≤ 15 + x1 + x2 + x3 + x4,
β1 + β3      ≤ 10 + x1 + x3 + x4,
β2 + β3      ≤ 10 + x1 + x3.

Let us first note that if we set all xi = 0 in (4.1), we end up with a number of constraints that are not satisfied for all possible values of β. Hence, as we already know, there is presently a chance that sewage will be dumped. However, our interest is mainly to find out which investments to make. Let us therefore rewrite (4.1) in terms of the xi rather than the βi:

x1 + x3 + x4      ≥ β1 − 10           ≥ 10,
x1 + x2           ≥ β2 − 10           ≥  5,
x3                ≥ β3 − 5            ≥ −10,
x1 + x3 + x4      ≥ β1 + β2 + β3 − 10 ≥ 20,  (4.2)
x1 + x2 + x3 + x4 ≥ β1 + β2 − 15      ≥ 20,
x1 + x3 + x4      ≥ β1 + β3 − 10      ≥  5,
x1 + x3           ≥ β2 + β3 − 10      ≥  0.

The last inequality in each row of (4.2) is obtained by simply maximizing over the possible values of β, since what is written down must be true for all values of β. We can now start to remove some of the constraints because they do not say anything, or because they are implied by others. When this cannot be done manually, we can use the methods outlined in Section 5.1.3. In the arguments that follow, remember that xi ≥ 0. First, we can remove inequalities 1 and 6, because they are weaker than inequality 4. But inequality 4, having the same right-hand side as number 5, but fewer variables on the left-hand side, implies number 5, and the latter can therefore be dropped.
Inequality number 3 is uninteresting, and so is number 7 (since we clearly do not plan to make negative investments). This leaves us with only two inequalities, which we repeat:

x1 + x2      ≥  5,
x1 + x3 + x4 ≥ 20.    (4.3)

Even though we know nothing so far about investment costs and pumping costs through the pipes, we know a lot about what limits the options. Investments of at least five units must be made on a combination of x1 and x2. What this seems to say is that the capacity out of City 2 must be increased by at least 5 units. It is slightly more difficult to interpret the second inequality. If we see both building pipes and a new plant in City 1 as increases in treatment capacity (although they are of different types), the second inequality seems to say that a total of 20 units must be built to facilitate City 1. However, a closer look at which cut generated the inequality reveals that a more appropriate interpretation is to say that the three cities, seen as a whole, must obtain extra capacity of 20 units. It was the node set Y = {1, 2, 3} that generated the cut.

The two constraints (4.3) are all we need to pass on to the planners. If these two very simple constraints are taken care of, sewage will never have to be dumped. Of course, if the investment problem is later formulated as a linear program, the two constraints can be added, thereby guaranteeing feasibility and, from a technical point of view, relatively complete recourse.
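The manual reduction just carried out can also be checked mechanically. In the sketch below, each inequality of (4.2) is stored as its x-coefficients together with the maximized right-hand side; since x ≥ 0, an inequality c^T x ≥ r says nothing when r ≤ 0, and it is implied by another inequality c'^T x ≥ r' when c ≥ c' componentwise and r ≤ r'. Names and layout are ours.

```python
# The seven inequalities of (4.2), as c^T x >= r with c the coefficients
# of (x1, x2, x3, x4) and r the right-hand side after maximizing over beta.
ineqs = [((1, 0, 1, 1), 10),   # 1
         ((1, 1, 0, 0), 5),    # 2
         ((0, 0, 1, 0), -10),  # 3
         ((1, 0, 1, 1), 20),   # 4
         ((1, 1, 1, 1), 20),   # 5
         ((1, 0, 1, 1), 5),    # 6
         ((1, 0, 1, 0), 0)]    # 7

def reduce_ineqs(ineqs):
    """Drop rows with r <= 0 (trivial for x >= 0) and rows implied by a
    strictly stronger row; ties between identical rows keep one copy."""
    keep = []
    for i, (c, r) in enumerate(ineqs):
        if r <= 0:
            continue
        implied = any(j != i
                      and rj >= r
                      and all(ck >= cjk for ck, cjk in zip(c, cj))
                      and (rj > r
                           or any(ck > cjk for ck, cjk in zip(c, cj))
                           or j < i)
                      for j, (cj, rj) in enumerate(ineqs))
        if not implied:
            keep.append((c, r))
    return keep
```

Running the reduction leaves exactly the two constraints of (4.3): x1 + x2 ≥ 5 and x1 + x3 + x4 ≥ 20.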
[Figure 11: Network illustrating the different bounds. Nodes carry supports [a, b] of the random external flows; arcs carry (c, [a, b]) for the flow lower bound c and the random upper capacity, together with a circled unit cost and an arc number; node 5 is the slack node.]

6.5 Bounds

We discussed some bounds for general LPs in Chapter 3. These of course also apply to networks, since networks are nothing but special cases of linear programs. The Jensen lower bound can be found by replacing each random variable (external flow or arc capacity) by its mean and solving the resulting deterministic network flow problem. The Edmundson–Madansky upper bound is found by evaluating the network flow problem at all extreme points of the support. (If the randomness sits in the objective function, the methods give opposite bounds, just as we discussed for the LP case.)

Figure 11 shows an example that will be used in this section to illustrate bounds. The terminology is as follows. Square brackets, for example [a, b], are used to denote supports of random variables. Placed next to a node, they show the size of the random external flow. Placed in a setting like (c, [a, b]), the square bracket shows the support of the upper bound on the arc flow for the arc next to which it is placed. In this setting, c is the lower bound on the flow; it can become negative in some of the methods. The circled number next to an arc is the unit arc cost, and the number in a square on the arc is the arc number. For simplicity, we shall assume that all random variables are independent and uniformly distributed.

Figure 12 shows the setup for the Jensen lower bound for the example from Figure 11. We have now replaced each random variable by its mean, assuming that the distributions are symmetric. The optimal flow is f = (2, 0, 0, 0, 2, 2, 1, 1)^T, with a cost of 18.

[Figure 12: Example network with arc capacities and external flows corresponding to the Jensen lower bound.]

Although the Edmundson–Madansky distribution is very useful, it still has the problem that the objective function must be evaluated in an exponential number of points. If there are k random variables, we must work with 2^k points. This means that with more than about 10 random variables we are not in business. Thus, since there are 11 random variables in the example, we would have to solve 2^11 problems to find the upper bound. We have not done that here. In what follows, we shall demonstrate how to obtain a piecewise linear upper bound that does not exhibit this exponential characteristic. A weakness of this bound is that it may be +∞ even if the problem is feasible. That cannot happen with the Edmundson–Madansky upper bound. We shall continue to use the network in Figure 11 to illustrate the ideas.

6.5.1 Piecewise Linear Upper Bounds

Let us illustrate the method in a simplified setting. Define φ(ξ, η) by

φ(ξ, η) = min_y {q^T y | W y = b + ξ, 0 ≤ y ≤ c + η},
where all elements of the random vectors ξ = (ξ_1, ξ_2, ...)^T and η = (η_1, η_2, ...)^T are mutually independent. Furthermore, let the supports be given by Ξ(ξ) = [A, B] and Ξ(η) = [0, C]. The matrix W is the node–arc incidence matrix for a network, with one row removed. That row represents the slack node. The external flow in the slack node equals the negative sum of the external flows in the other nodes. The goal is to create an upper bounding function U(ξ, η) that is piecewise linear, separable and convex in ξ, as well as easily integrable in η:

U(ξ, η) = φ(Eξ, 0) + H(η) + Σ_i { d_i^+ (ξ_i − Eξ_i)  if ξ_i ≥ Eξ_i,
                                  { d_i^- (ξ_i − Eξ_i)  if ξ_i < Eξ_i,

for some parameters d_i^±. The principles of the ξ part of this bound were outlined in Section 3.4.4 and will not be repeated in all details here. We shall use the developments from that section, simply by letting η = 0 while developing the ξ part. Because this is a restriction (constraint) on the original problem, it produces an upper bound. Afterwards, we shall develop H(η). In Section 3.4.4 we assumed that Eξ = 0. We shall now drop that assumption, just to illustrate that it was not needed, and to show how many parameters can be varied in this method.

Let us first see how we can find the ξ part of the function, leaving η = 0. First, let us calculate

φ(Eξ, 0) = min_y {q^T y | W y = b + Eξ, 0 ≤ y ≤ c} = q^T y^0.

This is our basic setting, and all other values of ξ will be seen as deviations from Eξ. Note that since y^0 is "always" there, we shall update the arc capacities to become −y^0 ≤ y ≤ c − y^0. For this purpose, we define α^1 = −y^0 and β^1 = c − y^0. Let e_i be a unit vector of appropriate dimension with a +1 in position i. Next, define a counter r and let r := 1. Now, check out the case when ξ_r > Eξ_r by solving

min_y {q^T y | W y = e_r (B_r − Eξ_r), α^r ≤ y ≤ β^r} = q^T y^{r+} = d_r^+ (B_r − Eξ_r).    (5.1)

Similarly, check out the case with ξ_r < Eξ_r by solving

min_y {q^T y | W y = e_r (A_r − Eξ_r), α^r ≤ y ≤ β^r} = q^T y^{r−} = d_r^- (A_r − Eξ_r).    (5.2)

Now, based on y^{r±}, we shall assign portions of the arc capacities to the random variable ξ_r. These portions will be given to ξ_r and left unused by other random variables, even when ξ_r does not need them. The portions will correspond to paths in the network connecting node r to the slack node (node 5 in the example). That is done by means of the following calculation, where we determine what is left for the next random variable:

α_i^{r+1} = α_i^r − min{y_i^{r+}, y_i^{r−}, 0}.    (5.3)

What we are doing here is to find, for each variable, how much ξ_r in the worst case uses of arc i in the negative direction. That is then subtracted from what we had before. There are three possibilities. We may have both (5.1) and (5.2) yielding nonnegative values for variable i. Then nothing is used of the available "negative capacity" α_i^r, and α_i^{r+1} = α_i^r. Alternatively, when (5.1) has y_i^{r+} < 0, it will in the worst case use y_i^{r+} of the available "negative capacity". Finally, when (5.2) has y_i^{r−} < 0, in the worst case we use y_i^{r−} of the capacity. Therefore α^{r+1} is what is left for the next random variable. Similarly, we find

β_i^{r+1} = β_i^r − max{y_i^{r+}, y_i^{r−}, 0},    (5.4)

where β_i^{r+1} shows how much is still available of the capacity on arc i in the forward (positive) direction. We next increase the counter r by one and repeat (5.1)–(5.4).

This takes care of the piecewise linear functions in ξ. Let us now look at our example in Figure 11. To calculate the ξ part of the bound, we put all arc capacities at their lowest possible value and external flows at their means. This is shown in Figure 13.

[Figure 13: Network needed to calculate φ(Eξ, 0) for the network in Figure 11.]

The optimal solution in Figure 13 is given by

y^0 = (2, 0, 0, 0, 3, 2, 2, 0)^T,

with a cost of 22. The next step is to update the arc capacities in Figure 13 to account for this solution. The result is shown in Figure 14. Since the external flow in node 1 varies between 1 and 3, and we have so far solved the problem for a supply of 2, we must now find the cost associated with a supply of 1 and a demand of 1 in node 1. For a supply of 1 we get the solution

y^{1+} = (0, 1, 0, 0, 0, 0, 1, 0)^T,

with a cost of 5. Hence d_1^+ = 5. For a demand of 1 we get

y^{1−} = (−1, 0, 0, 0, 0, −1, 0, 0)^T,
with a cost of −3, so that d_1^- = 3. Hence we have used one unit of the forward capacity of arcs 2 and 7, and one unit of the reverse capacity of arcs 1 and 6. Note that both solutions correspond to paths between node 1 and node 5 (the slack node). We update to get Figure 15.

[Figure 14: Arc capacities after the update based on φ(Eξ, 0).]

[Figure 15: Arc capacities after the update based on φ(Eξ, 0) and node 1.]

For node 2 the external flow varies between −1 and 1, so we shall now check the supply of 1 and demand of 1 based on the arc capacities of Figure 15. For supply we get

y^{2+} = (0, 0, 0, 1, 0, 0, 1, 0)^T,

with a cost of 3. For the demand of 1 we obtain

y^{2−} = (0, 0, 0, 0, 0, −1, 0, 0)^T,

with a cost of −1. Hence d_2^+ = 3 and d_2^- = 1. Node 3 had deterministic external flow, so we turn to node 4. Node 4 had a demand between 0 and 2
units, and we have so far solved for a demand of 1. Therefore we must now look at a demand of 1 and a supply of 1 in node 4, based on the arc capacities in Figure 16. In that figure we have updated the capacities from Figure 15 based on the solutions for node 2.

[Figure 16: Arc capacities after the update based on φ(Eξ, 0) and nodes 1 and 2.]

A supply in node 4 gives us the solution

y^{4+} = (0, 0, 0, 0, 0, 0, 1, 0)^T,

with a cost of 2. One unit of demand, on the other hand, gives us

y^{4−} = (0, 0, 0, 0, 0, 0, −1, 0)^T,

with a cost of −2. The parameters are therefore d_4^+ = 2 = d_4^-. This leaves the arc capacities in Figure 17.

[Figure 17: Arc capacities after the update based on φ(Eξ, 0) and the external flow in all nodes.]

What we have found so far is as follows:

U(ξ, η) = 22 + H(η) + { 5(ξ_1 − 2)  if ξ_1 ≥ 2,   + { 3ξ_2  if ξ_2 ≥ 0,   + { 2(ξ_4 + 1)  if ξ_4 ≥ −1,
                      { 3(ξ_1 − 2)  if ξ_1 < 2,     { ξ_2   if ξ_2 < 0,     { 2(ξ_4 + 1)  if ξ_4 < −1.

If, for simplicity, we assume that all distributions are uniform, we can easily integrate the upper bounding function U(ξ, η) over ξ to obtain

U = 22 + H(η) + ∫_1^2 3(ξ_1 − 2) (1/2) dξ_1 + ∫_2^3 5(ξ_1 − 2) (1/2) dξ_1
              + ∫_{−1}^{0} ξ_2 (1/2) dξ_2 + ∫_0^1 3ξ_2 (1/2) dξ_2
              + ∫_{−2}^{−1} 2(ξ_4 + 1) (1/2) dξ_4 + ∫_{−1}^{0} 2(ξ_4 + 1) (1/2) dξ_4
  = 22 + H(η) − 3 × 1/4 + 5 × 1/4 − 1 × 1/4 + 3 × 1/4 − 2 × 1/4 + 2 × 1/4
  = 23 + H(η).

Note that there is no contribution from ξ_4 to the upper bound. The reason is that the recourse function φ(ξ, η) is linear in ξ_4. This property of discovering that the recourse function is linear in some random variable is shared with the Jensen and Edmundson–Madansky bounds.

We then turn to the η part of the bound. Note that if (5.3) and (5.4) are calculated after the final y^{r±} has been found, the resulting α and β show what is left of the deterministic arc capacities after all random variables ξ_i have received their shares. Let us call these α* and β*. If we add to each upper bound in Figure 17 the value C (remember that the support of the upper arc capacities was Ξ(η) = [0, C]), we get the arc capacities of Figure 18. Now we solve the problem

min_y {q^T y | W y = 0, α* ≤ y ≤ β* + C} = q^T y*.    (5.5)

With zero external flow in Figure 18, we get the optimal solution

y* = (0, 0, 0, 0, −1, 0, −1, 1)^T,
with a cost of −4. This represents cycle flow with negative cost. The cycle became available as a result of arc 8 getting a positive arc capacity.

[Figure 18: Arc capacities used to calculate H(η) for the example in Figure 11.]

If, again for simplicity, we assume that η_8 is uniformly distributed over [0, 2], we find that the capacity of that cycle equals 1 with probability 0.5, with the remaining probability mass uniformly distributed over [0, 1]. We therefore get

EH(η) = −4 × 1 × (1/2) − 4 ∫_0^1 (1/2) x dx = −2 − 1 = −3.

The total upper bound for this example is thus 23 − 3 = 20, compared with the Jensen lower bound of 18.

In this example the solution y* of (5.5) contained only one cycle. In general, y* may consist of several cycles, possibly sharing arcs. It is then necessary to pick y* apart into individual cycles. This can be done in such a way that all cycles have nonpositive costs (those with zero cost can then be discarded), and such that all cycles that use a common arc use it in the same direction. We shall not go into the details of that here.

6.6 Project Scheduling

We shall spend a whole section on the subject of project scheduling, and we shall do so in a setting of PERT (project evaluation and review technique) networks. There are several reasons for looking specifically at this class of problems. First, project scheduling is widely used, and therefore known to many people. Even though it seems that the setting of CPM (critical path method) is more popular among industrial users, the difference is not important in principle. Secondly, PERT provides us with a genuine opportunity to discuss some modelling issues related to the relationship between time periods and stages. We shall see that PERT has sometimes been cast in a two-stage setting, but that it can be hard to interpret that in a useful way. Thirdly, the more structure a problem has, the better the bounds that can often be found; PERT networks give us an opportunity to show how extra structure can be used to obtain tighter bounds.

Before we continue, we should like to point out a possible confusion in terms. When PERT was introduced in 1959, it was seen as a method for analysing projects with stochastic activity durations. However, the way in which randomness was treated was quite primitive (in fact, it is closely related to the Jensen lower bound that we discussed in Section 3.4.1).
Therefore, despite the historical setting, many people today view PERT as a deterministic approach, simply disregarding what the original authors said about randomness. When we use the term PERT in the following, we shall refer to the mathematical formulation with its corresponding deterministic solution procedure, and not to its original random setting.

There are many ways to formulate the PERT problem. For our purpose, the following will do. A PERT network is a network where arcs correspond to activities and nodes to events. If arc k ∼ (i, j), then activity k can start when event i has taken place, and event j can take place when all activities k′, with k′ ∼ (i′, j), have finished. A PERT network must be acyclic, since otherwise an activity would have to finish before it could start, a meaningless situation. Because of acyclicity, we can number the nodes such that if k ∼ (i, j) then i < j. As a consequence, node 1 represents the event "We are ready to start" and node n the event "The project is finished".

Let π_i be the time at which event i takes place, and let us define π_1 := 0. Furthermore, let q_k be the duration of activity k. Since an event can take place only after all activities preceding it have finished, we must have π_j ≥ π_i + q_k for all k ∼ (i, j). Since π_n is the time at which the project finishes, we can calculate the minimal project completion time by solving

min π_n
s.t. π_j − π_i ≥ q_k  for all k ∼ (i, j),    (6.1)
     π_1 = 0.

It is worth noting that (6.1) is not really a decision problem: there are no decisions. We are only calculating the consequences of an existing setting of relations and durations.

6.6.1 PERT as a Decision Problem

As pointed out, (6.1) is not a decision problem, since there are no decisions to be made. Very often, activity durations are not given by nature, but can be affected by how much in the way of resources we put into them. For example, it takes longer to build a house with one carpenter than with two.
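Before decisions enter, note that (6.1) needs no LP solver: because the nodes are numbered so that every activity runs from a lower to a higher node, one sweep in node order yields the earliest event times, and hence the minimal completion time that (6.2) will later try to reduce. The sketch below shows this; the small activity list is a made-up illustration.

```python
# A minimal sweep for (6.1): with nodes numbered so that every activity
# k ~ (i, j) has i < j, one pass in node order yields the earliest event
# times pi_i and the minimal project completion time pi_n.

def earliest_times(n, activities):
    """activities: list of (i, j, q_k) with 1 <= i < j <= n; every node
    except node 1 is assumed to have at least one incoming activity."""
    pi = {1: 0.0}                     # pi_1 := 0
    for node in range(2, n + 1):
        pi[node] = max(pi[i] + q for i, j, q in activities if j == node)
    return pi

acts = [(1, 2, 3.0), (1, 3, 2.0), (2, 3, 2.0), (2, 4, 4.0), (3, 4, 3.0)]
pi = earliest_times(4, acts)          # pi = {1: 0.0, 2: 3.0, 3: 5.0, 4: 8.0}
```

Here π_3 = max(2, 3 + 2) = 5 and π_4 = max(3 + 4, 5 + 3) = 8, so the project finishes at time 8.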
Assume that we have available a budget of B units of resources, and that if we spend one unit on activity k, its duration will decrease by a_k time units. A possible decision problem is then to spend the budget in such a way that the project duration is minimized. This can be achieved by solving the following problem:

min π_n
s.t. π_j − π_i ≥ q_k − a_k x_k  for all k ∼ (i, j),
     Σ_k x_k ≤ B,                                      (6.2)
     π_1 = 0,
     x_k ≥ 0.

Of course, there might be other constraints, such as x_k ≤ c_k, but they can be added to (6.2) as needed.

6.6.2 Introduction of Randomness

It seems natural to assume that activity durations are random. If so, the project duration is also random, and we can no longer talk about finding the minimal project duration. However, a natural alternative seems to be to look for the expected (minimal) project duration. In (6.1) and (6.2) the goal would then be to minimize Eπ_n. However, we must now be careful about how we interpret the problems.

Problem (6.1) is simple enough. There are still no decisions, so we are only trying to calculate when, in expectation, the project will finish if all activities start as soon as they can. But when we turn to (6.2), we must be careful. In what order do things happen? Do we first decide on x, and then simply sit back (as we did with (6.1)) and observe what happens? Or do we first observe what happens, and then make decisions on x? These are substantially different situations. It is important that you understand the modelling aspects of this difference. (There are solution differences as well, but they are less interesting now.) In a sense, the two interpretations bound the correct problem from above and below.

If we interpret (6.2) as a problem where x is determined before the activity durations are known, we have in fact a standard two-stage stochastic program. The first-stage decision is to find x, and the second-stage "decision" is to find the project duration given x and a realization of q(ξ).
(We write q(ξ) to show that q is indeed a random variable.) But, and this is perhaps the most important question to ask in this section, is this a good model? What does it mean?

First, it is implicit in the model that, while the original activity durations are random, the changes a_k x_k are not. In terms of probability distributions, what we have done is therefore to reduce the means of the distributions describing activity durations, but without altering the variances. This might or might not be a reasonable model. Clearly, if we find this unreasonable, we could perhaps let a_k be a random variable as well, thereby making the effect of the investment x_k uncertain too.

The above discussion is more than anything a warning that whenever we introduce randomness in a model, we must make sure we know what the randomness means. But there is a much more serious issue of interpretation if we see (6.2) as a two-stage problem. It means that we think we are facing a project where, before it is started, we can make investments, but where afterwards, however badly things go, we shall never interfere in order to fix shortcomings. Also, even if we are far ahead of schedule, we shall not cut back on investments to save money. We may ask whether such projects exist: projects where we are free to invest initially, but where afterwards we just sit back and watch, whatever happens.

From this discussion you may realize (as you have before, we hope) that the definition of stages is important when making models with stochasticity. In our view, project scheduling with uncertainty is a multistage problem, where decisions are made each time new information becomes available. This makes the problem extremely hard to solve (and even to formulate; just try!). But this complexity cannot prevent us from pointing out the difficulties facing anyone trying to formulate PERT problems with only two stages.
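To make the two-stage reading concrete: under "decide x first", evaluating a candidate budget allocation amounts to computing the completion time scenario by scenario and averaging. The sketch below does this for a made-up three-activity project; all names and data are our own illustration, not the book's.

```python
# Two-stage reading of (6.2): choose x first, then average the completion
# time over duration scenarios. Nodes are assumed topologically numbered.

def completion_time(n, acts, dur):
    """acts: list of (i, j) arcs, activity k identified by list position;
    dur: realized durations after the investment effect."""
    pi = {1: 0.0}
    for node in range(2, n + 1):
        pi[node] = max(pi[i] + dur[k]
                       for k, (i, j) in enumerate(acts) if j == node)
    return pi[n]

def expected_completion(n, acts, scenarios, a, x):
    # each scenario duration is shortened by a_k x_k, never below zero
    return sum(p * completion_time(n, acts,
                                   [max(q - ak * xk, 0.0)
                                    for q, ak, xk in zip(qs, a, x)])
               for p, qs in scenarios)

acts = [(1, 2), (1, 3), (2, 3)]
scenarios = [(0.5, [2.0, 6.0, 3.0]),   # (probability, durations q_k)
             (0.5, [4.0, 4.0, 3.0])]
a = [1.0, 1.0, 1.0]
```

For this toy data, spending nothing gives an expected completion time of 6.5, while spending one unit each on activities 2 and 3 gives 5.5; the fixed x is evaluated against every scenario, which is exactly the modelling assumption questioned in the text.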
We said earlier that there were two ways of interpreting (6.2) in a setting of uncertainty. We have just discussed one. The other is different, but has similar problems. We could interpret (6.2) with uncertainties as if we first observed the values of q and then made investments. This is the "wait-and-see solution". It represents a situation where we presently face uncertainty, but where all uncertainty will be resolved before decisions have to be made. What does that mean in our context? It means that before the project starts, all uncertainty related to activities disappears, everything becomes known, and we are faced with investments of the type (6.2). If the previous interpretation of our problem was odd, this one is probably even worse. In what sort of project will we have initial uncertainty, but then, before the first activity starts, have everything, up to the finish of the project, become known? This seems almost as unrealistic as having a deterministic model of the project in the first place.

6.6.3 Bounds on the Expected Project Duration

Despite our own warnings in the previous subsection, we shall now show how the extra structure of PERT problems allows us to find bounds on the expected project duration when activity durations are random. Technically
6.6.3.2 Parallel reductions

If two arcs run in parallel with durations ξ̃_1 and ξ̃_2, then they are replaced with one arc having duration max{ξ̃_1, ξ̃_2}. This is also an exact reformulation.

6.6.3.3 Disregarding path dependences

Let π̃_i be a random variable describing when event i takes place. Then we can calculate

    π̃_j = max_{i ∈ B⁺(j)\{j}} {π̃_i + q̃_k(ξ̃)},   with k ∼ (i, j),

as if all these random variables were independent. However, in a PERT network the π̃'s will normally be dependent (even if the q̃'s are independent), since the paths leading up to the nodes usually share some arcs. Not only will they be dependent, but the correlation will always be positive, never negative. Hence viewing the random variables as independent will result in an upper bound on the project duration. The reason is that E max{ξ̃_1, ξ̃_2} is smaller if the nonnegative ξ̃_1 and ξ̃_2 are (positively) correlated than if they are not correlated. A small example illustrates this.

Example 6.3 Assume we have two random variables ξ̃_1 and ξ̃_2, with joint distribution as in Table 1. Note that both random variables have the same marginal distributions; namely, each of them can take on the values 1 or 2, each with probability 0.5. Therefore E max{ξ̃_1, ξ̃_2} = 1.7 from Table 1, but 0.25(1 + 2 + 2 + 2) = 1.75 if we use the marginal distributions as independent distributions. Therefore, if ξ̃_1 and ξ̃_2 represent two paths with some joint arc, disregarding the dependences will create an upper bound. □

Table 1   Joint distribution for ξ̃_1 and ξ̃_2, plus the calculation of max{ξ̃_1, ξ̃_2}.

    ξ̃_1   ξ̃_2   Prob.   max{ξ̃_1, ξ̃_2}
     1     1    0.3        1
     1     2    0.2        2
     2     1    0.2        2
     2     2    0.3        2

Figure 19   Arc duplication.

6.6.3.4 Arc duplications

If there is a node i′ with B⁺(i′) = {i′, i}, so that the node has only one incoming arc k ∼ (i, i′), remove node i′, and for each j ∈ F⁺(i′) \ {i′} replace k′ ∼ (i′, j) by k″ ∼ (i, j).
The new arc has associated with it the random duration q̃_k″(ξ̃) := q̃_k(ξ̃) + q̃_k′(ξ̃). If arc k had a deterministic duration, this is an exact reformulation. If not, we get an upper bound based on the previous principle of disregarding path dependences. (This method is called arc duplication because we duplicate arc k and use one copy for each arc k′.) An exactly equal result applies if there is only one outgoing arc. This result is illustrated in Figure 19, where F⁺(i′) = {i′, 1, 2, 3}. If there are several incoming and several outgoing arcs, we may pair up all incoming arcs with all outgoing arcs. This always produces an upper bound based on the principle of disregarding path dependences.

6.6.3.5 Using Jensen's inequality

Since our problem is convex in ξ̃, we get a lower bound whenever a q̃_k(ξ̃) or a π̃_i (as defined above) is replaced by its mean.

Note that if we have a node and choose to apply arc duplication, we get an exact reformulation if all incoming arcs and all outgoing arcs have deterministic durations, an upper bound if they do not, and a lower bound if we first replace the random variables on the incoming and outgoing arcs by their means and then apply arc duplication. If there is only one arc in or one arc out, we take the expectation for that arc, and then apply arc duplication, observing an overall lower bound.

6.7 Bibliographical Notes

The vocabulary in this chapter is mostly taken from Rockafellar [25], which also contains an extremely good overview of deterministic network problems. A detailed look at the network recourse problem is found in Wallace [28]. The original feasibility results for networks were developed by Gale [10] and Hoffman [13]. The stronger versions using connectedness were developed by Wallace and Wets. The uncapacitated case is given in [31], while the capacitated case is outlined in [33] (with a proof in [32]). More details of the algorithms in Figures 6 and 7 can also be found in these papers.
Similar results were developed by Prékopa and Boros [23]. See also Kall and Prékopa [14]. As for the LP case, model formulations and infeasibility tests have of course been performed in many contexts apart from ours. In addition to the references given in Chapter 5, we refer to Greenberg [11, 12] and Chinneck [3].

The piecewise linear upper bound is taken from Wallace [30]. At the very end of our discussion of the piecewise linear upper bound, we pointed out that the solution y* to (5.5) could consist of several cycles sharing arcs. A detailed discussion of how to pick y* apart, to obtain a conformal realization, can be found in Rockafellar [25], page 476. How to use it in the bound is detailed in [30]. The bound has been strengthened for pure arc capacity uncertainty by Frantzeskakis and Powell [8]. Special algorithms for stochastic network problems have also been developed; see e.g. Qi [24] and Sun et al. [27].

We pointed out at the beginning of this chapter that scenario aggregation (Section 2.6) could be particularly well suited to problems that have network structure in all periods. This has been utilized by Mulvey and Vladimirou for financial problems, which can be formulated in a setting of generalized networks. For details see [19, 20]. For a selection of papers on financial problems (not all utilizing network structures), consult Zenios [36, 37], and, for a specific application, see Dempster and Ireland [5]. The above methods are well suited for parallel processing. This has been done in Mulvey and Vladimirou [18] and Nielsen and Zenios [21]. Another use of network structure to achieve efficient methods is described in Powell [22] for the vehicle routing problem.

The PERT formulation was introduced by Malcolm et al. [17]. An overview of project scheduling methods can be found in Elmaghraby [7]. A selection of bounding procedures based on the different ideas listed above can be found in the following: Fulkerson [9], Kleindorfer [16], Shogan [26], Kamburowski [15] and Dodin [6]. The PERT problem as an investment problem is discussed in Wollmer [34].

The max flow problem is another special network flow problem that is much studied in terms of randomness. We refer to the following papers, which discuss both bounds and a two-stage setting: Cleef and Gaul [4], Wollmer [35], Aneja and Nair [1], Carey and Hendrickson [2] and Wallace [29].

Figure 20   Example network for calculating bounds.

Exercises
1. Consider the network in Figure 20. The interpretation is as for Figure 11 regarding parameters, except that for the arc capacities we have simply written a number next to the arc. All lower bounds on flow are zero. Calculate the Jensen lower bound, the Edmundson–Madansky upper bound, and the piecewise linear upper bound for the expected minimal cost in the network.

2. When outlining the piecewise linear upper bound, we found a function that was linear both above and below the expected value of the random variable. Show how (5.1) and (5.2) can be replaced by a parametric linear program to get not just one linear piece above the expectation and one below, but rather piecewise linearity on both sides. Also, show how (5.3) and (5.4) must then be updated to account for the change.

3. The max flow problem is the problem of finding the maximal amount of flow that can be sent from node 1 to node n in a capacitated network. This problem is very similar to the PERT problem, in that paths in the latter correspond to cuts in the max flow problem. Use the bounding ideas listed in Section 6.6.3 to find bounds on the expected max flow in a network with random arc capacities.

4. In our example about sewage treatment in Section 6.4 we introduced four investment options.
(a) Assume that a fifth investment is suggested, namely to build a pipe with capacity x_5 directly from City 1 to site 4. What are the constraints on x_i for i = 1, ..., 5 that must now be satisfied for the problem to be feasible?
(b) Disregard the suggestion in question (a). Instead, it is suggested to view the earlier investment 1, i.e. increasing the pipe capacity from City 1 to site 4 via City 2, as two different investments. Now let x_1 be the increased capacity from City 1 to City 2, and x_5 the increased capacity from City 2 to site 4 (the dump). What are now the constraints on x_i for i = 1, ..., 5 that must be satisfied for the problem to be feasible? Make sure you interpret the constraints.

5. Develop procedures for uncapacitated networks corresponding to those in Figures 4, 6 and 7.

References
[1] Aneja Y. P. and Nair K. P. K. (1980) Maximal expected flow in a network subject to arc failures. Networks 10: 45–57.
[2] Carey M. and Hendrickson C. (1984) Bounds on expected performance of networks with links subject to failure. Networks 14: 439–456.
[3] Chinneck J. W. (1990) Localizing and diagnosing infeasibilities in networks. Working paper, Systems and Computer Engineering, Carleton University, Ottawa, Ontario.
[4] Cleef H. J. and Gaul W. (1980) A stochastic flow problem. J. Inf. Opt. Sci. 1: 229–270.
[5] Dempster M. A. H. and Ireland A. M. (1988) A financial expert decision support system. In Mitra G. (ed) Mathematical Methods for Decision Support, pages 415–440. Springer-Verlag, Berlin.
[6] Dodin B. (1985) Reducability of stochastic networks. OMEGA Int. J. Management 13: 223–232.
[7] Elmaghraby S. (1977) Activity Networks: Project Planning and Control by Network Models. John Wiley & Sons, New York.
[8] Frantzeskakis L. F. and Powell W. B. (1989) An improved polynomial bound for the expected network recourse function. Technical report SOR-89-23, Statistics and Operations Research Series, Princeton University, Princeton, New Jersey.
[9] Fulkerson D. R. (1962) Expected critical path lengths in PERT networks. Oper. Res. 10: 808–817.
[10] Gale D. (1957) A theorem on flows in networks. Pac. J. Math. 7: 1073–1082.
[11] Greenberg H. J. (1987) Diagnosing infeasibility in min-cost network flow problems. Part I: Dual infeasibility. IMA J. Math. in Management 1: 99–109.
[12] Greenberg H. J. (1988/9) Diagnosing infeasibility in min-cost network flow problems. Part II: Primal infeasibility. IMA J. Math. in Management 2: 39–50.
[13] Hoffman A. J. (1960) Some recent applications of the theory of linear inequalities to extremal combinatorial analysis. Proc. Symp. Appl. Math. 10: 113–128.
[14] Kall P. and Prékopa A. (eds) (1980) Recent Results in Stochastic Programming, volume 179 of Lecture Notes in Econ. Math. Syst. Springer-Verlag, Berlin.
[15] Kamburowski J. (1985) Bounds in temporal analysis of stochastic networks. Found. Contr. Eng. 10: 177–185.
[16] Kleindorfer G. B. (1971) Bounding distributions for a stochastic acyclic network. Oper. Res. 19: 1586–1601.
[17] Malcolm D. G., Roseboom J. H., Clark C. E., and Fazar W. (1959) Applications of a technique for R&D program evaluation. Oper. Res. 7: 646–696.
[18] Mulvey J. M. and Vladimirou H. (1989) Evaluation of a parallel hedging algorithm for stochastic network programming. In Sharda R., Golden B. L., Wasil E., Balci O., and Stewart W. (eds) Impact of Recent Computer Advances on Operations Research, pages 106–119. North-Holland, New York.
[19] Mulvey J. M. and Vladimirou H. (1989) Stochastic network optimization models for investment planning. Ann. Oper. Res. 20: 187–217.
[20] Mulvey J. M. and Vladimirou H. (1991) Applying the progressive hedging algorithm to stochastic generalized networks. Ann. Oper. Res. 31: 399–424.
[21] Nielsen S. and Zenios S. A. (1993) A massively parallel algorithm for nonlinear stochastic network problems. Oper. Res. 41: 319–337.
[22] Powell W. B. (1988) A comparative review of alternative algorithms for the dynamic vehicle allocation problem. In Golden B. and Assad A. (eds) Vehicle Routing: Methods and Studies, pages 249–291. North-Holland, Amsterdam.
[23] Prékopa A. and Boros E. (1991) On the existence of a feasible flow in a stochastic transportation network. Oper. Res. 39: 119–129.
[24] Qi L. (1985) Forest iteration method for stochastic transportation problem. Math. Prog. Study 25: 142–163.
[25] Rockafellar R. T. (1984) Network Flows and Monotropic Optimization. John Wiley & Sons, New York.
[26] Shogan A. W. (1977) Bounding distributions for a stochastic PERT network. Networks 7: 359–381.
[27] Sun J., Tsai K. H., and Qi L. (1993) A simplex method for network programs with convex separable piecewise linear costs and its application to stochastic transshipment problems. In Du D. Z. and Pardalos P. M. (eds) Network Optimization Problems: Algorithms, Applications and Complexity, pages 283–300. World Scientific, Singapore.
[28] Wallace S. W. (1986) Solving stochastic programs with network recourse. Networks 16: 295–317.
[29] Wallace S. W. (1987) Investing in arcs in a network to maximize the expected max flow. Networks 17: 87–103.
[30] Wallace S. W. (1987) A piecewise linear upper bound on the network recourse function. Math. Prog. 38: 133–146.
[31] Wallace S. W. and Wets R. J.-B. (1989) Preprocessing in stochastic programming: The case of uncapacitated networks. ORSA J. Comp. 1: 252–270.
[32] Wallace S. W. and Wets R. J.-B. (1993) The facets of the polyhedral set determined by the Gale–Hoffman inequalities. Math. Prog. 62: 215–222.
[33] Wallace S. W. and Wets R. J.-B. (1995) Preprocessing in stochastic programming: The case of capacitated networks. ORSA J. Comp. 7: 44–62.
[34] Wollmer R. D. (1985) Critical path planning under uncertainty. Math. Prog. Study 25: 164–171.
[35] Wollmer R. D. (1991) Investments in stochastic maximum flow problems. Ann. Oper. Res. 31: 459–467.
[36] Zenios S. A. (ed) (1993) Financial Optimization. Cambridge University Press, Cambridge, UK.
[37] Zenios S. A. (1993) A model for portfolio management with mortgage-backed securities. Ann. Oper. Res. 43: 337–356.
This note was uploaded on 11/23/2009 for the course FIN 5208 taught by Professor Murphy during the Spring '09 term at Temple.