Winter 2010                    Pstat 160A                    Handout: Random Walks

Random walks

I. Definition and basic properties

1. Definition

A random walk is a sequence X_0, X_1, ..., X_n, ... of random variables (a stochastic process; think of n as time) with the property that at each step n, X_{n+1} = X_n ± 1. The probability of moving up (by 1) is denoted by p, and of moving down (by 1) by 1 − p = q. For simplicity, we will assume that the initial position X_0 is a fixed number, but in general it could be random. A Matlab simulation of random walks with p = 0.4 and p = 0.6 is shown in Figure 1.

[Figure 1. Two random walk simulations, with p = 0.4 and p = 0.6.]

This means:

    P(X_{n+1} = j | X_n = i) =
        p        if j = i + 1,
        1 − p    if j = i − 1,
        0        otherwise.

2. Random walks as a model for stock prices

Random walks arise in many areas of science and engineering. Random walks are also used to model stock prices, as we will see later in this class. In that context, suppose that at each step the price S_n of a stock is multiplied by a factor r > 1 with probability p, or divided by r with probability 1 − p (i.e., the price can increase by (r − 1) × 100% or decrease by (1 − 1/r) × 100%). Then, taking X_0 = 0,

    S_n = S_0 r^{#up} r^{−#down} = S_0 r^{#up − #down} = S_0 r^{X_n},

where X_n is a random walk. Since the function x ↦ r^x is increasing and one-to-one, anything we want to know about S_n can be inferred from X_n. A simulation of this model is shown in Figure 2.

3. Random walks as a sum of i.i.d. random variables

It is easy to see that a random walk {X_n}_{n ≥ 0} can be represented as follows:

    X_n = X_0 + \sum_{i=1}^{n} e_i,    n = 1, 2, ...,

where the e_i, i = 1, 2, ..., are independent random variables such that

    e_i =
        +1    with probability p,
        −1    with probability q = 1 − p.

[Figure 2. Two simulations of a stock price evolution using the model described in this section, with p = 0.51 and r = 1.1.]
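The construction above (i.i.d. ±1 steps, and the stock price S_n = S_0 r^{X_n}) can be sketched in code. The handout's own figures were produced in Matlab; the sketch below uses Python instead, and all function and variable names (random_walk, stock_prices, etc.) are my own choices for illustration, not from the handout.

```python
import random

def random_walk(n, p, x0=0, seed=None):
    """Simulate X_0, X_1, ..., X_n: at each step, move up 1 with
    probability p, down 1 with probability 1 - p."""
    rng = random.Random(seed)
    x = [x0]
    for _ in range(n):
        step = 1 if rng.random() < p else -1
        x.append(x[-1] + step)
    return x

def stock_prices(walk, s0, r):
    """Map a walk started at X_0 = 0 to prices S_n = S_0 * r**X_n."""
    return [s0 * r**xn for xn in walk]

# One sample path with the parameters used in Figure 2.
walk = random_walk(50, p=0.51, seed=0)
prices = stock_prices(walk, s0=100.0, r=1.1)
```

Note that because r^x is increasing and one-to-one, the price path is just a deterministic transformation of the walk, which is why the code only ever simulates the walk itself.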
We can easily get that

    E(e_i) = 2p − 1 = μ,
    E(e_i^2) = 1,
    Var(e_i) = E(e_i^2) − (E(e_i))^2 = 1 − (2p − 1)^2 = 4p(1 − p) = σ^2.

4. Moments

From the representation above we get

    E(X_n) = X_0 + \sum_{i=1}^{n} E(e_i) = X_0 + n(2p − 1) = X_0 + nμ,

    Var(X_n) = Var(X_0) + Var(\sum_{i=1}^{n} e_i) = n Var(e_i) = nσ^2,

and, for k ≥ 0,

    Cov(X_n, X_{n+k}) = Cov(\sum_{i=1}^{n} e_i, \sum_{i=1}^{n+k} e_i)
                      = Var(\sum_{i=1}^{n} e_i) + Cov(\sum_{i=1}^{n} e_i, \sum_{i=n+1}^{n+k} e_i)
                      = nσ^2    (independent of k ≥ 0),

or, equivalently, Cov(X_n, X_m) = σ^2 min(n, m).

Remark: In the case of a random initial position X_0, we would need to assume that the e_i are independent of X_0, and we would get instead E(X_n) = E(X_0) + nμ and Var(X_n) = Var(X_0) + nσ^2.
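The moment formulas can be verified exactly for small n by enumerating all 2^n step sequences (e_1, ..., e_n) ∈ {+1, −1}^n and weighting each path by its probability. The sketch below does this in Python (the handout itself contains no such code, and the function names are mine).

```python
from itertools import product

def mean_Xn(x0, n, p):
    """Theoretical mean: E(X_n) = X_0 + n*(2p - 1)."""
    return x0 + n * (2 * p - 1)

def var_Xn(n, p):
    """Theoretical variance: Var(X_n) = n * 4p(1 - p) = n*sigma^2."""
    return n * 4 * p * (1 - p)

def cov_X(n, m, p):
    """Theoretical covariance: Cov(X_n, X_m) = sigma^2 * min(n, m)."""
    return 4 * p * (1 - p) * min(n, m)

def exact_mean_var(x0, n, p):
    """Exact E(X_n) and Var(X_n) by enumerating all 2^n paths."""
    mean = 0.0
    second = 0.0
    for steps in product((1, -1), repeat=n):
        prob = 1.0
        for s in steps:
            prob *= p if s == 1 else (1 - p)
        xn = x0 + sum(steps)
        mean += prob * xn
        second += prob * xn * xn
    return mean, second - mean * mean

m, v = exact_mean_var(x0=0, n=8, p=0.4)
```

Enumeration is only feasible for small n (it costs 2^n work), but it confirms the closed forms without any Monte Carlo error.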
This note was uploaded on 01/10/2011 for the course STAT 160A taught by Professor Bonnet during the Winter '10 term at UCSB.