
# QABE Lecture 10: Markov Chains (School of Economics, UNSW)


School of Economics, UNSW, 2011

## Contents

1. Introduction
2. The Basics
    2.1 Why Markov Chains?
    2.2 An Illustrative Example
3. The Fundamentals of Markov Chains
    3.1 Transition Probabilities and Matrix
    3.2 Regular Markov Chains
    3.3 The State Vector and Steady States
4. Markov Chains in Game Theory
    4.1 Repeated Games

## 1 Introduction

An important part of the study of probabilities concerns independent trials processes. These processes form the basis of classical probability theory and much of statistics. We know that when a sequence of experiments forms an independent trials process, the possible outcomes for each experiment are the same and occur with the same probability. Moreover, knowing the outcomes of previous experiments has no effect on our predictions for the outcomes of the next experiment.

Modern probability theory studies experiments for which knowing the previous outcomes has a direct impact on the predictions for future experiments. In principle, for a given sequence of experiments, all of the past outcomes could influence the predictions for the next experiment. For example, this should be the case in predicting a student's grades on a sequence of exams in a course.

**Agenda**

1. Markov chains: definition and characteristics.
2. Markov chains and game theory.

*ECON 1202/ECON 2291: QABE, School of Economics, UNSW*

## 2 The Basics (HPW 9.3)

In 1907, Andrei Markov began studying a very important new type of experiment. In these processes, the outcome of a given experiment can affect the outcome of the next experiment. This type of stochastic process is called a Markov chain.

### 2.1 Why Markov Chains?

**Calculating probabilities.** We have been thinking about calculating the probability of events

- involving large numbers of possibilities (permutations and combinations);
- conditional on other events having occurred (conditional probability, Bayes' Rule).

We now focus on problems involving calculating the probability that an event occurs at some point in the future.

**A simple (but boring) example.** Toss a coin N times. What is the probability of H on the n-th trial? The outcomes are all independent events; that is, "history does not matter", so

P(H on the n-th trial) = 1/2 for all n.

**But many outcomes of interest are dependent!**

- Employment: more likely to be employed tomorrow if employed today.
- Demand: more likely to be strong tomorrow if strong today.
- Basketball: more likely to hit the next shot if you hit this shot?
- Government: a party is more likely to win the next election if it won the previous one.

We use Markov chains to analyse such cases.
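The dependence idea can be sketched with a small two-state chain. In the sketch below the transition probabilities are illustrative assumptions, not figures from the lecture: the chain is written as a 2×2 transition matrix, and repeatedly applying it to a state (probability) vector shows the distribution settling to a steady state.

```python
# Two-state Markov chain over Employed (E) and Unemployed (U).
# The transition probabilities below are illustrative assumptions only.
P = [
    [0.9, 0.1],  # P(E -> E), P(E -> U)
    [0.4, 0.6],  # P(U -> E), P(U -> U)
]

def step(state, matrix):
    """Advance a probability vector one period: state' = state @ matrix."""
    return [
        sum(state[i] * matrix[i][j] for i in range(len(state)))
        for j in range(len(matrix[0]))
    ]

# Start certainly Unemployed: [P(E), P(U)] = [0, 1].
v = [0.0, 1.0]
for _ in range(50):  # iterate until the distribution settles
    v = step(v, P)

print(v)  # approaches the steady state [0.8, 0.2] for this matrix
```

For this particular matrix the steady state can be checked by hand: π_E = 0.9·π_E + 0.4·π_U together with π_E + π_U = 1 gives π_E = 0.8, π_U = 0.2, regardless of the starting vector.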
*(Figure: transition diagram for employment states, with branches from E_0 and U_0 to E_1 and U_1, labelled with the probabilities P(E_0), P(U_0), P(E_1 | E_0), P(U_1 | E_0), P(E_1 | U_0), P(U_1 | U_0).)*

### 2.2 An Illustrative Example

**Example: Employment States.** At each date n = 0, 1, 2, ..., T a worker is either Employed or Unemployed. The probability of being Employed at date n + 1
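The employment-states setup can also be simulated directly. A minimal Monte Carlo sketch, using placeholder transition probabilities (these numbers are assumptions, not values from the notes), estimates P(Employed at n+1 | Employed at n) by drawing many one-step transitions:

```python
import random

random.seed(0)

# Hypothetical transition probabilities (placeholders, not from the notes):
P_E_GIVEN_E = 0.9   # P(Employed at n+1 | Employed at n)
P_E_GIVEN_U = 0.4   # P(Employed at n+1 | Unemployed at n)

def next_state(employed):
    """Draw next period's state given the current one (True = Employed)."""
    p = P_E_GIVEN_E if employed else P_E_GIVEN_U
    return random.random() < p

# Simulate many one-step transitions starting from Employed.
trials = 100_000
stays_employed = sum(next_state(True) for _ in range(trials))
print(stays_employed / trials)  # close to P_E_GIVEN_E = 0.9
```

Note that `next_state` only looks at the current state, never at earlier history; that restriction is exactly the Markov property the diagram above encodes.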


*This note was uploaded on 09/23/2011 for the course ECON 1202, taught by Professor Loretti Isabella Dobrescu during the One '11 term at the University of New South Wales.*



