OMS 301 Notecard


1 Linear Programming: Basic Concepts and Graphical Solutions

1. Elements of linear programming

(a) Decision variables: X_j, j = 1, ..., n

(b) Objective function: Max (or Min) Z = Σ_{j=1}^n c_j X_j = c_1 X_1 + c_2 X_2 + ... + c_n X_n

(c) Constraints: represent physical (or financial) restrictions on decisions.
i-th constraint: Σ_{j=1}^n a_{ij} X_j = a_{i1} X_1 + a_{i2} X_2 + ... + a_{in} X_n (≤, ≥, or =) b_i

2. Solution concepts

Feasible solution: A specification of the decision variables that satisfies all constraints.

Feasible region: The set of all feasible solutions; its boundary consists of straight lines and corner points.

Extreme-point (corner-point) feasible solution: A solution that lies at a corner of the feasible region.

Binding constraint: A constraint that is satisfied with "=". (Usage (L.H.S.) = Limit (R.H.S.))

Fundamental Theorem of Linear Programming: If there is exactly one optimal solution, it must be a corner-point feasible solution.

Four scenarios: (a) unique optimal solution, (b) multiple optimal solutions, (c) unbounded, (d) infeasible. Every LP falls into exactly one of these four categories.

3. Graphical method of solving LPs: problems with 2 (or 3) variables

Step 1: Find the range of the decision variables for which each constraint is met.
Step 2: Determine the feasible region: the intersection of the feasible regions of all constraints.
Step 3: Define the slope of the iso-profit (iso-cost) function.
Step 4: Increase (decrease) the intercept of the iso-profit (iso-cost) function until it is farthest from (closest to) the origin while still intersecting the feasible region.

2 LP: Excel Outputs, Sensitivity Analysis, and Applications

1. Excel Solver.

2. Answer and sensitivity reports:

Answer report: Displays the optimal solution (X*_j, j = 1, ..., n), the value of the objective function (Z*), and the binding constraints.

Sensitivity report: Allows us to investigate the effect of changing objective-function coefficients (e.g.
unit profit contribution) and constraint levels. The Solver sensitivity report has two panels:

(a) "Adjustable Cells": contains information on sensitivity to changes in objective-function coefficients.

• Allowable increase/decrease: the allowable increase and decrease in an objective-function coefficient over which the optimal decision-variable levels do not change. The objective-function value (e.g., profit, cost) changes by an amount equal to the change in the coefficient times the optimal decision-variable level. For any change outside the allowable increase/decrease, the LP must be re-solved.

• Reduced cost: for X_j = 0, the additional decrease needed in X_j's objective-function coefficient in order to make X_j > 0. Alternatively, if X_j is changed from zero to one, the objective function changes by an amount equal to the reduced cost. If X_j ≠ 0, then the reduced cost is zero.

(b) "Constraints": contains information on sensitivity to changes in RHS constraint levels.

• Allowable increase/decrease: the allowable increase and decrease in the RHS of a constraint over which the shadow price is applicable and the binding constraints stay the same. For any change outside the allowable increase/decrease, the LP must be re-solved.

• Shadow price: the change in the objective function per unit increase of the RHS of the constraint. Note that for changes in constraints with non-zero shadow prices, the optimal decision-variable levels change as well.

3 Basic Probability Concepts and Bayes' Rule

1. Basic concepts

Sample space: The set of all possible outcomes (sample points).

Sample point (elementary event): A unique individual outcome. The set of sample points is mutually exclusive and collectively exhaustive.

Event: A collection of outcomes of interest.

Probability: The likelihood of an event; maps outcome(s) to a number between 0 and 1.

Random variable: Maps outcomes to numbers; continuous or discrete.

2. Finding the probability of an event or compound event

Prob.
of an event E: If E = {e_1, ..., e_n}, then P(E) = P(e_1) + P(e_2) + ... + P(e_n).

• If all outcomes are equally likely: P(E) = (# of ways the event occurs) / (# of outcomes).

Probability rules:

(d) Conditional probability and multiplication rule:
P(A|B) = P(A ∩ B) / P(B)  ⟺  P(A ∩ B) = P(A) P(B|A) = P(B) P(A|B)

(e) Independence: P(A ∩ B) = P(A) P(B), or equivalently P(A|B) = P(A) and P(B|A) = P(B).

(f) Total probability rule:
• (Two events, A and Ā): P(B) = P(B ∩ A) + P(B ∩ Ā) = P(A) P(B|A) + P(Ā) P(B|Ā)
• (n mutually exclusive and collectively exhaustive events E_1, ..., E_n): if E_i ∩ E_j = ∅ for i ≠ j and P(E_1 ∪ ... ∪ E_n) = 1, then
P(B) = Σ_{i=1}^n P(B ∩ E_i) = P(E_1) P(B|E_1) + P(E_2) P(B|E_2) + ... + P(E_n) P(B|E_n)

Bayes' rule (multiplication rule + total probability rule): revises the probability of an event after learning new information.
P(A|B) = P(A ∩ B) / P(B) = P(A ∩ B) / (P(A ∩ B) + P(Ā ∩ B)) = P(A) P(B|A) / (P(A) P(B|A) + P(Ā) P(B|Ā))

3. Tables (and figures) for summarizing probability information. Two tables and one probability tree, filled in for the "tale of two suppliers" in-class example (A and Ā = the two suppliers; B = part is bad, B̄ = part is not bad):

Joint-probability table:

         B                            B̄                            Total
A        P(A ∩ B) = (.6)(.04) = .024   P(A ∩ B̄) = (.6)(.96) = .576   P(A) = .024 + .576 = .60
Ā        P(Ā ∩ B) = (.4)(.02) = .008   P(Ā ∩ B̄) = (.4)(.98) = .392   P(Ā) = .008 + .392 = .40
Total    P(B) = .024 + .008 = .032     P(B̄) = .576 + .392 = .968     P(A) + P(Ā) = 1.00

Prior/posterior table:

Prior        Conditional     Joint                            Posterior
P(A) = .60   P(B|A) = .04    P(A ∩ B) = (.6)(.04) = .024      P(A|B) = P(A ∩ B)/P(B) = .024/.032 = .75
P(Ā) = .40   P(B|Ā) = .02    P(Ā ∩ B) = (.4)(.02) = .008      P(Ā|B) = P(Ā ∩ B)/P(B) = .008/.032 = .25
                             P(B) = .024 + .008 = .032

Probability tree: branch first on the supplier (A: .6, Ā: .4), then on part quality (B|A: .04, B̄|A: .96; B|Ā: .02, B̄|Ā: .98); the joint probabilities .024, .576, .008, .392 match the table above.

4 Decision Analysis

1. One-stage decision analysis: How to solve?
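For a problem this small, the Max-EMV computation can be sketched directly in a few lines of code. The sketch below uses the payoffs and probabilities from the in-class real-estate tree (P(Good) = 0.6, P(Bad) = 0.4) and also computes EVPI (defined under Perfect information below); the alternative labels and dictionary layout are illustrative assumptions, not part of the original notes.

```python
# Max-EMV and EVPI for the one-stage real-estate example.
# Payoffs come from the in-class decision tree; the alternative
# names ("Apartment", "Office", "Warehouse") are assumptions.
probs = {"Good": 0.6, "Bad": 0.4}                 # P(state of nature)
payoffs = {
    "Apartment": {"Good": 50,  "Bad": 30},
    "Office":    {"Good": 100, "Bad": -40},
    "Warehouse": {"Good": 30,  "Bad": 10},
}

def emv(a):
    """EMV[a] = sum over states s of P(s) * payoff(a, s)."""
    return sum(probs[s] * payoffs[a][s] for s in probs)

best = max(payoffs, key=emv)          # Max-EMV rule: pick the best alternative
emv_no_info = emv(best)

# With perfect information: pick the best alternative for each state first,
# then average over states.
emv_with_pi = sum(probs[s] * max(p[s] for p in payoffs.values())
                  for s in probs)
evpi = emv_with_pi - emv_no_info

# Max-EMV choice is Office with EMV 44; EMV[with PI] = 72, so EVPI = 28.
```

The same roll-back logic (expected value at event nodes, maximum at decision nodes) is what TreePlan automates for larger trees.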
Payoff table; decision tree (TreePlan).

Sequence of moves: the DM chooses an alternative a_i ⇒ Nature chooses a state of nature s_j ⇒ the payoff is realized based on the combination of a_i and s_j.

Max-EMV rule: choose the alternative a_i that maximizes EMV, where
EMV[a_i] = Σ_{j=1}^n P(s.o.n. j) × payoff(a_i, s_j)

Decision tree (figure), real-estate example: Apartment (Good economy, p = 0.6: 50; Bad economy, p = 0.4: 30); Office building (Good: 100; Bad: −40); Warehouse (Good: 30; Bad: 10).

2. Multi-stage decision analysis

How to solve? Decision tree (TreePlan). The DM selects a series of decisions from a set of choices over time, in the face of uncertainty:
DM's move (immediate payoff) ⇒ Nature's move ⇒ ... ⇒ Nature's move ⇒ game over (terminal payoff)

Step 1: Calculate the terminal payoffs: sum the immediate payoffs (IPs) on the path to each terminal node.
Step 2: Roll back as before, starting from the terminal nodes and using the terminal payoffs:
At event nodes: compute the expected value of the final payoffs of all branches.
At decision nodes: select the maximum (or minimum, if minimizing costs or EOL) final payoff among the branches.
Repeat the process until reaching the beginning of the tree.

3. Perfect information

Complete foresight; information given by an oracle. With PI, the decision maker chooses a_i after learning the state of nature s_j.

Decision tree with PI (figure): Nature moves first, then the DM picks the best alternative for each state (Good economy: 100; Bad economy: 30).

EVPI (expected value of perfect information): the maximum amount to pay for the perfect information.
EMV[with PI] = Σ_{j=1}^n P(s.o.n. j) × payoff(best a_i for s_j)
EVPI = EMV[with free PI] − EMV[no info]

4. Using sample (imperfect) information

(a) Revised probabilities of the states of nature using Bayes' rule.
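Step (a) is the same arithmetic as the two-supplier tables in Section 3. A minimal sketch of revising a prior with Bayes' rule, reusing the numbers from that example (prior P(A) = .60, P(Ā) = .40; likelihoods P(B|A) = .04, P(B|Ā) = .02):

```python
# Revise prior probabilities with Bayes' rule (two-supplier example).
prior = {"A": 0.60, "A_bar": 0.40}          # P(supplier)
likelihood_bad = {"A": 0.04, "A_bar": 0.02}  # P(bad part B | supplier)

# Total probability rule: marginal P(B) = sum of P(E_i) * P(B | E_i).
p_b = sum(prior[e] * likelihood_bad[e] for e in prior)

# Bayes' rule: posterior P(E_i | B) = P(E_i) * P(B | E_i) / P(B).
posterior = {e: prior[e] * likelihood_bad[e] / p_b for e in prior}

# p_b is about .032 and the posteriors are about .75 / .25,
# matching the joint-probability and prior/posterior tables in Section 3.
```

In a multi-stage tree, these posteriors replace the priors on the branches that follow the sample-information event.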
This changes the sequence of events and the decision tree. Prior & likelihood (conditional) are what we have ⇒ marginal & posterior are what we need.

Figure 1: Two states of nature; two information types (Pos, Neg).

(b) EVSI (expected value of sample information): the maximum amount to pay for the sample information.
EVSI = EMV[free sample info] − EMV[no info]   (EVSI ≤ EVPI)

(c) ENGS (expected net gain from sampling): the average gain from using sample information at cost c.
ENGS = EMV[sample info at cost c] − EMV[no info] = EVSI − c

5 Discrete Random Variables, Binomial and Poisson RVs

1. Discrete random variable: assumes finitely or countably many values (e.g., # of bottles of beer consumed at a party).

PMF (probability mass function): p(x) = P(X = x).
CDF (cumulative distribution function): F(x) = P(X ≤ x) = Σ_{z ≤ x} p(z). If X is a non-negative, integer-valued r.v., then F(x) = P(X ≤ x) = p(0) + p(1) + ... + p(x).
Upper-tail distribution: P(X > x) = 1 − P(X ≤ x) = 1 − F(x).

2. Mean (expectation): the weighted average of the possible values (weight: p(x)):
μ = E[X] = Σ_x x p(x)
• For any number c, E[cX] = c E[X] = cμ.
• For any two X_1 and X_2, E[X_1 + X_2] = E[X_1] + E[X_2] = μ_1 + μ_2.

3. Variance: the probability-weighted sum of squared differences from the mean μ:
Var[X] = σ² = E[(X − μ)²] = Σ_x (x − μ)² p(x) = E[X²] − μ²
• For any number c, Var[cX] = c² Var[X] = c² σ².
• Standard deviation: σ = √Var[X].

4. Binomial distribution, X ~ binom(n, p)
• Used to count the number of successes out of n i.i.d. trials, where each trial has success probability p (i.e., X = X_1 + X_2 + ... + X_n).
P(X = x) = C(n, x) p^x (1 − p)^(n−x),  x = 0, ..., n,
where C(n, x) = n! / (x! (n − x)!) and n! = 1 × 2 × ... × n, with 0! = 1.
EXCEL: BINOMDIST.
Mean: E[X] = np; Variance: Var[X] = np(1 − p).

5. Poisson distribution, X ~ Poisson(λ, t)
• Used to count the number of events (e.g., arrivals) occurring over time or space, where λ is the mean number of arrivals per unit (time or space) and t is the number of units.
Mean = Variance: E[X] = Var[X] = λt.
P(X = x) = e^(−λt) (λt)^x / x!,  x = 0, 1, ...
EXCEL: POISSON ...
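Both PMFs can be evaluated with nothing beyond the standard library. A sketch (the Excel functions named above compute the same quantities; the parameter values n = 10, p = 0.3, λt = 2 are illustrative):

```python
import math

def binom_pmf(x, n, p):
    """P(X = x) = C(n, x) * p**x * (1 - p)**(n - x) for X ~ binom(n, p)."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam, t=1.0):
    """P(X = x) = exp(-lam*t) * (lam*t)**x / x! for X ~ Poisson(lam, t)."""
    mu = lam * t  # mean (and variance) of the count over t units
    return math.exp(-mu) * mu**x / math.factorial(x)

# Sanity checks against the formulas above:
mean_binom = sum(x * binom_pmf(x, 10, 0.3) for x in range(11))  # equals n*p = 3
upper_tail = 1 - sum(poisson_pmf(x, 2.0) for x in range(4))     # P(X > 3) = 1 - F(3)
```

The last two lines check the mean formula E[X] = np and the upper-tail identity P(X > x) = 1 − F(x) numerically.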