
Statistics 150: Spring 2007
February 7, 2007

1 Markov Chains
Let {X0, X1, . . .} be a sequence of random variables which take values in some countable set S, called the state space. Each Xn is a discrete random variable that takes one of N possible values, where N = |S|; it may be the case that N = ∞.

Definition. The process X is a Markov chain if it satisfies the Markov condition:

    P(Xn = s | X0 = x0, X1 = x1, . . . , Xn-1 = xn-1) = P(Xn = s | Xn-1 = xn-1)

for all n ≥ 1 and all s, x0, x1, . . . , xn-1 ∈ S.
Definition. The chain X is called homogeneous if

    P(Xn+1 = j | Xn = i) = P(X1 = j | X0 = i)

for all n, i, j. The transition matrix P = (pij) is the |S| × |S| matrix of transition probabilities

    pij = P(Xn+1 = j | Xn = i).

Henceforth, all Markov chains are assumed homogeneous unless otherwise specified.
Theorem. The transition matrix P is a stochastic matrix, which is to say that:
(a) P has non-negative entries: pij ≥ 0 for all i, j;
(b) P has row sums equal to one: Σj pij = 1 for all i.
Proof. An easy exercise.

Definition. The n-step transition matrix P(m, m + n) = (pij(m, m + n)) is the matrix of n-step transition probabilities

    pij(m, m + n) = P(Xm+n = j | Xm = i).
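As a quick sanity check, the two defining properties of a stochastic matrix are easy to verify numerically. The following sketch is not part of the notes: the two-state chain and the use of numpy are illustrative assumptions.

```python
import numpy as np

# Hypothetical two-state chain; any rows of nonnegative
# numbers summing to one would serve equally well.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Condition (a): all entries are non-negative.
nonnegative = bool((P >= 0).all())
# Condition (b): every row sums to one.
rows_sum_to_one = bool(np.allclose(P.sum(axis=1), 1.0))

print(nonnegative, rows_sum_to_one)  # both should be True
```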
Theorem (Chapman-Kolmogorov equations).

    pij(m, m + n + r) = Σk pik(m, m + n) pkj(m + n, m + n + r).

Therefore, P(m, m + n + r) = P(m, m + n) P(m + n, m + n + r), and P(m, m + n) = P^n, the nth power of P.

Proof. We have, as required,

    pij(m, m + n + r) = P(Xm+n+r = j | Xm = i)
                      = Σk P(Xm+n+r = j, Xm+n = k | Xm = i)
                      = Σk P(Xm+n+r = j | Xm+n = k, Xm = i) P(Xm+n = k | Xm = i)
                      = Σk P(Xm+n+r = j | Xm+n = k) P(Xm+n = k | Xm = i),

where we have used the fact that P(A ∩ B | C) = P(A | B ∩ C) P(B | C), together with the Markov property. The established equation may be written in matrix form as P(m, m + n + r) = P(m, m + n) P(m + n, m + n + r), and it follows by iteration that P(m, m + n) = P^n.
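For a homogeneous chain the Chapman-Kolmogorov equations reduce to matrix multiplication, so they can be checked numerically. A minimal sketch; the 3-state matrix is an illustrative assumption, not from the notes.

```python
import numpy as np

# Illustrative 3-state transition matrix.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

n, r = 2, 3
# Left side: the (n + r)-step transition matrix P^(n+r).
lhs = np.linalg.matrix_power(P, n + r)
# Right side: the n-step matrix composed with the r-step matrix,
# exactly as the Chapman-Kolmogorov equations prescribe.
rhs = np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, r)

ck_holds = bool(np.allclose(lhs, rhs))
```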
Let μi(n) = P(Xn = i) be the mass function of Xn, and write μ(n) for the row vector with entries (μi(n) : i ∈ S).

Lemma. μ(m+n) = μ(m) P^n, and hence μ(n) = μ(0) P^n.
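The lemma says the marginal distributions evolve by right multiplication with P. A small sketch comparing the step-by-step update with the closed form μ(n) = μ(0) P^n; the chain and starting vector are illustrative assumptions.

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.2, 0.8]])
mu0 = np.array([1.0, 0.0])     # start in state 0 with probability one

# Update the row vector one step at a time: mu(k+1) = mu(k) P.
mu = mu0.copy()
for _ in range(5):
    mu = mu @ P

# Closed form from the lemma: mu(5) = mu(0) P^5.
closed_form = mu0 @ np.linalg.matrix_power(P, 5)
lemma_holds = bool(np.allclose(mu, closed_form))
```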
2 Exercises
1) A die is rolled repeatedly. Which of the following are Markov chains? For those that are, supply the transition matrix.
(a) The largest number Xn shown up to the nth roll.
(b) The number Nn of sixes in n rolls.
(c) At time r, the time Cr since the most recent six.
(d) At time r, the time Br until the next six.
2) Let X be a Markov chain on S, and let T be a random variable taking values in {0, 1, 2, . . .} with the property that the indicator function 1{T = n} of the event that T = n is a function of the variables X0, X1, . . . , Xn. Such a random variable T is called a stopping time, and the above definition requires that it is decidable whether or not T = n with a knowledge only of the past and present, X0, X1, . . . , Xn, and with no further information about the future. Show that

    P(XT+m = j | Xk = xk for 0 ≤ k < T, XT = i) = P(XT+m = j | XT = i)

for m ≥ 0, i, j ∈ S, and all sequences (xk) of states.
3) Let X be a Markov chain with state space S, and suppose that h : S → T is one-to-one. Show that Yn = h(Xn) defines a Markov chain on T. Must this be so if h is not one-to-one?
3 Examples of Markov Chains
A. Spatially Homogeneous Markov Chains. Let ξ denote a discrete-valued random variable whose possible values are the nonnegative integers, with P(ξ = i) = ai, where ai ≥ 0 and Σi ai = 1. Let ξ1, ξ2, . . . , ξn, . . . represent independent observations of ξ. We shall now describe two different Markov chains connected with the set of nonnegative integers.
(i) Consider the process Xn, n = 0, 1, 2, . . . , defined by Xn = ξn (X0 = ξ0 prescribed). Its Markov matrix has the form

        | a0  a1  a2  a3  ... |
    P = | a0  a1  a2  a3  ... |
        | a0  a1  a2  a3  ... |
        | ...                 |

That each row is identical plainly expresses the fact that the random variable Xn+1 is independent of Xn.
(ii) Another important class of Markov chains arises from consideration of the successive partial sums ηn of the ξi, i.e.,

    ηn = ξ1 + ξ2 + · · · + ξn,    n = 1, 2, . . . ,

and, by definition, η0 = 0. The process Xn = ηn is readily seen to be a Markov chain. We can easily compute its transition probability matrix as follows:

    P(Xn+1 = j | Xn = i) = P(ξ1 + · · · + ξn+1 = j | ξ1 + · · · + ξn = i)
                         = P(ξn+1 = j - i)
                         = { a_{j-i},  if j ≥ i,
                           { 0,        if j < i,

where we have used the assumed independence of the ξi.
Schematically, we have

        | a0  a1  a2  a3  a4  ... |
    P = | 0   a0  a1  a2  a3  ... |
        | 0   0   a0  a1  a2  ... |
        | ...                     |
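The band structure above can be generated mechanically: entry (i, j) is a_{j-i} when j ≥ i and 0 otherwise. A sketch, assuming a hypothetical increment law (a0, a1, a2) and looking only at a finite top-left corner of the infinite matrix:

```python
a = [0.5, 0.3, 0.2]   # hypothetical increment distribution: P(xi = k) = a[k]

def partial_sum_corner(a, size):
    """size x size top-left corner of the partial-sum chain's matrix:
    P[i][j] = a_{j-i} for j >= i (within the support of a), else 0."""
    return [[a[j - i] if 0 <= j - i < len(a) else 0.0
             for j in range(size)]
            for i in range(size)]

P = partial_sum_corner(a, 5)
```

Only the rows whose support fits inside the corner sum to one; the last rows are truncated by the finite window.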
If the possible values of the random variable ξ are permitted to be the positive and negative integers, then the possible values of ηn for each n will be contained among the totality of all integers. Instead of labeling the states conventionally by means of the nonnegative integers, it is more convenient to identify the state space with the totality of integers, since the probability transition matrix will then appear in a more symmetric form. The state space then consists of the values . . . , -2, -1, 0, 1, 2, . . . . The transition probability matrix becomes
        | ...                             |
    P = | ... a-1  a0   a1   a2   a3  ... |
        | ... a-2  a-1  a0   a1   a2  ... |
        | ... a-3  a-2  a-1  a0   a1  ... |
        | ...                             |

where P(ξ = k) = ak for k ∈ Z, with ak ≥ 0 and Σ∞k=-∞ ak = 1.
B. One-Dimensional Random Walks. In discussing random walks it is an aid to intuition to speak of the state of the system as the position of a moving "particle". A one-dimensional random walk is a Markov chain whose state space is a finite or infinite subset a, a + 1, . . . , b of the integers, in which the particle, if it is in state i, can in a single transition either stay in i or move to one of the adjacent states i - 1, i + 1. If the state space is taken as the nonnegative integers, the transition matrix of a random walk has the form
        | r0  p0  0   0   ...      |
        | q1  r1  p1  0   ...      |
    P = | 0   q2  r2  p2  ...      |
        |       ...                |
        | ...  qi  ri  pi  ...     |
        |       ...                |

where pi, qi, ri ≥ 0 and qi + ri + pi = 1 for i = 1, 2, . . . , while p0, r0 ≥ 0 and r0 + p0 = 1. Specifically, if Xn = i then, for i ≥ 1,

    P(Xn+1 = i + 1 | Xn = i) = pi,
    P(Xn+1 = i - 1 | Xn = i) = qi,
    P(Xn+1 = i | Xn = i) = ri,

with the obvious modifications holding for i = 0.
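The tridiagonal pattern can be assembled directly from the sequences (pi), (qi), (ri). A sketch for a finite state space; the particular probabilities are illustrative, and the caller is responsible for the boundary conditions (q0 = 0, and p = 0 at the right edge).

```python
def random_walk_matrix(p, q, r):
    """Tridiagonal transition matrix on states 0..n-1, where p[i], q[i],
    r[i] are the probabilities of moving up, moving down, and staying."""
    n = len(p)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        P[i][i] = r[i]
        if i + 1 < n:
            P[i][i + 1] = p[i]
        if i > 0:
            P[i][i - 1] = q[i]
    return P

# Illustrative three-state walk with holding probability at both ends.
P = random_walk_matrix(p=[0.5, 0.3, 0.0],
                       q=[0.0, 0.3, 0.5],
                       r=[0.5, 0.4, 0.5])
```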
A classical discrete version of Brownian motion is provided by the symmetric random walk. By a symmetric random walk on the integers (say all integers) we mean a Markov chain with state space the totality of all integers and whose transition probability matrix has the elements

    Pij = { p,  if j = i + 1,
          { p,  if j = i - 1,
          { r,  if j = i,
          { 0,  otherwise,        i, j = 0, ±1, ±2, . . . ,

where p ≥ 0, r ≥ 0, and 2p + r = 1. Conventionally, "symmetric random walk" refers only to the case r = 0, p = 1/2.
A classical mathematical model of diffusion through a membrane is the famous Ehrenfest model, namely, a random walk on a finite set of states whereby the boundary states are reflecting. The random walk is restricted to the states i = -a, -a + 1, . . . , -1, 0, 1, . . . , a with transition probability matrix

    Pij = { (a - i)/2a,  if j = i + 1,
          { (a + i)/2a,  if j = i - 1,
          { 0,           otherwise.
The physical interpretation of this model is as follows. Imagine two containers holding a total of 2a balls. Suppose the first container, labeled A, holds k balls and the second container, B, holds 2a - k balls. A ball is selected at random (all selections are equally likely) from among the 2a balls and moved to the other container. The state of the system is i = k - a, so a ball is drawn from the first container with probability (a + i)/2a and from the second with probability (a - i)/2a. Clearly the balls fluctuate between the two containers, with a drift from the container with the larger concentration of balls to the one with the smaller concentration.
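The ball-swapping dynamics are straightforward to simulate, and the drift toward the balanced state i = 0 shows up quickly. A sketch; the container size, step count, and seed are arbitrary choices, not from the notes.

```python
import random

random.seed(1)
a = 10                               # 2a = 20 balls; states i = -a, ..., a

def ehrenfest_step(i):
    """One transition: draw one of the 2a balls uniformly at random.
    With probability (a + i)/2a it comes from container A (i -> i - 1);
    otherwise it comes from B (i -> i + 1)."""
    if random.random() < (a + i) / (2 * a):
        return i - 1
    return i + 1

state, total, steps = a, 0, 20000    # start with container A full
for _ in range(steps):
    state = ehrenfest_step(state)
    total += state
mean_state = total / steps           # long-run average hovers near 0
```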
The classical symmetric random walk in n dimensions admits the following formulation. The state space is identified with the set of all integral lattice points in E^n (Euclidean n-space); that is, a state is an n-tuple k = (k1, k2, . . . , kn) of integers. The transition probability matrix is defined by

    Pkl = { 1/(2n),  if |l1 - k1| + · · · + |ln - kn| = 1,
          { 0,       otherwise,

where l = (l1, l2, . . . , ln). The symmetric random walk in E^n represents a discrete version of n-dimensional Brownian motion.
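Each transition of this walk changes exactly one coordinate by ±1, each of the 2n possibilities having probability 1/(2n). A minimal simulation sketch; the dimension, step count, and seed are arbitrary choices.

```python
import random

random.seed(0)

def walk_step(k, rng=random):
    """One step of the symmetric random walk on the integer lattice:
    choose one of the n coordinates uniformly, then move it by +1 or -1,
    so each of the 2n neighbours is reached with probability 1/(2n)."""
    k = list(k)
    i = rng.randrange(len(k))
    k[i] += rng.choice((-1, 1))
    return tuple(k)

pos = (0, 0, 0)                      # start at the origin of E^3
for _ in range(1000):
    nxt = walk_step(pos)
    # sanity check: every move is to a nearest lattice neighbour
    assert sum(abs(u - v) for u, v in zip(nxt, pos)) == 1
    pos = nxt
```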
C. A Discrete Queueing Markov Chain. Customers arrive for service and take their place in a waiting line. During each period of time a single customer is served, provided that at least one customer is present. If no customer awaits service then during this period no service is performed. (We can imagine, for example, a taxi stand at which a cab arrives at fixed time intervals to give service. If no one is present the cab immediately departs.) During a service period new customers may arrive. We suppose the actual number of arrivals in the nth period is a random variable ξn whose distribution function is independent of the period and is given by

    P(k customers arrive in a service period) = P(ξn = k) = ak,    k = 0, 1, . . . ,

where ak ≥ 0 and Σ∞k=0 ak = 1.
We also assume the r.v.'s ξn are independent. The state of the system at the start of each period is defined to be the number of customers waiting in line for service. If the present state is i then after the lapse of one period the state is

    j = { i - 1 + ξ,  if i ≥ 1,
        { ξ,          if i = 0,

where ξ is the number of new customers having arrived in this period while a single customer was served. In terms of the random variables of the process we can express this formally as

    Xn+1 = (Xn - 1)+ + ξn,

where Y+ = max(Y, 0). The transition probability matrix may thus be calculated easily, and we obtain
        | a0  a1  a2  a3  a4  ... |
        | a0  a1  a2  a3  a4  ... |
    P = | 0   a0  a1  a2  a3  ... |
        | 0   0   a0  a1  a2  ... |
        | 0   0   0   a0  a1  ... |
        | ...                     |

It is intuitively clear that if the expected number of new customers, Σ∞k=0 k ak, that arrive during a service period exceeds 1, then with the passage of time the length of the waiting line increases without limit. On the other hand, if Σ∞k=0 k ak < 1 then we shall see that the length of the waiting line approaches an equilibrium (stationary state). If Σk k ak = 1, a situation of gross instability develops.
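The recursion Xn+1 = (Xn - 1)+ + ξn is all that is needed to simulate the queue. A sketch with a hypothetical arrival law whose mean, 0.7, is below one, so the waiting line should remain stable; the law, step count, and seed are illustrative assumptions.

```python
import random

random.seed(2)
a = [0.5, 0.3, 0.2]                  # hypothetical arrival law; mean 0.7 < 1

def arrivals():
    """Draw xi_n from (a_k) by inverting the cumulative distribution."""
    u, c = random.random(), 0.0
    for k, ak in enumerate(a):
        c += ak
        if u < c:
            return k
    return len(a) - 1

x, total, steps = 0, 0, 10000
for _ in range(steps):
    x = max(x - 1, 0) + arrivals()   # X_{n+1} = (X_n - 1)^+ + xi_n
    total += x
mean_queue = total / steps           # stays bounded since mean arrivals < 1
```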
D. Inventory Model. Consider a situation in which a commodity is stocked in order to satisfy a continuing demand. We assume that the replenishing of stock takes place at successive times t1, t2, . . . , and we assume that the cumulative demand for the commodity over the interval (tn-1, tn) is a random variable ξn whose distribution function is independent of the time period,

    P(ξn = k) = ak,    k = 0, 1, 2, . . . ,

where ak ≥ 0 and Σ∞k=0 ak = 1.
The stock level is examined at the start of each period. An inventory policy is prescribed by specifying two nonnegative critical values s and S > s. The implementation of the inventory policy is as follows: if the available stock quantity is not greater than s then immediate procurement is done so as to bring the quantity of stock on hand up to the level S. If, however, the available stock is in excess of s then no replenishment of stock is undertaken. Let Xn denote the stock on hand just prior to restocking at tn.
The states of the process {Xn} consist of the possible values of the stock size

    S, S - 1, . . . , +1, 0, -1, -2, . . . ,

where a negative value is interpreted as an unfulfilled demand for stock, which will be satisfied immediately upon restocking. According to the rules of the inventory policy, the stock levels at two consecutive periods are connected by the relation

    Xn+1 = { Xn - ξn+1,  if s < Xn ≤ S,
           { S - ξn+1,   if Xn ≤ s,
where ξn is the quantity of demand that arises in the nth period, following the probability law above. If we assume the ξn to be mutually independent, then the stock values X0, X1, X2, . . . plainly constitute a Markov chain whose transition probability matrix can be calculated from the preceding relation.
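The (s, S) policy and the recursion above translate directly into a simulation. A sketch with hypothetical critical levels and demand law; with this bounded demand the stock never goes negative here, although the model allows it in general.

```python
import random

random.seed(3)
s, S = 2, 5                          # hypothetical critical levels, S > s
demand_law = [0.4, 0.3, 0.2, 0.1]    # hypothetical P(xi = k), k = 0..3

def demand():
    """Draw one period's demand from (a_k) by inversion."""
    u, c = random.random(), 0.0
    for k, ak in enumerate(demand_law):
        c += ak
        if u < c:
            return k
    return len(demand_law) - 1

x, levels = S, []
for _ in range(1000):
    start = S if x <= s else x       # restock up to S when X_n <= s
    x = start - demand()             # then the period's demand draws it down
    levels.append(x)
```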
E. "Video Games". Consider a Markov chain on the nonnegative integers with transition probability matrix of the form

        | p0  q0  0   0   ... |
    P = | p1  0   q1  0   ... |
        | p2  0   0   q2  ... |
        | p3  0   0   0   ... |
        | ...                 |

where qi ≥ 0, pi ≥ 0, and pi + qi = 1, i = 0, 1, 2, . . . .
A special case of this transition matrix arises when one is dealing with success runs resulting from repeated trials, each of which admits two possible outcomes, success (S) or failure (F). More explicitly, consider a sequence of trials with the two possible outcomes (S) or (F), and suppose that in each trial the probability of (S) is α and the probability of (F) is β = 1 - α.
We say a success run of length r happens at trial n if the outcomes in the preceding r + 1 trials, including the present trial, were, in order, F, S, S, . . . , S. We label the state of the process by the length of the success run currently under way. In particular, if the last trial resulted in a failure then the state is zero; similarly, when the preceding r + 1 trials in order had the outcomes F, S, S, . . . , S, the state variable carries the label r. The process is clearly Markovian (since the individual trials are independent of each other) and its transition matrix has the form given above, with pn = β and qn = α, n = 0, 1, 2, . . . .
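Simulating the Bernoulli trials and tracking the current run length reproduces the chain directly. A sketch; the success probability α = 0.6, trial count, and seed are arbitrary choices.

```python
import random

random.seed(4)
alpha = 0.6                          # illustrative P(success) per trial

states = [0]                         # run length after each trial; start at 0
for _ in range(200):
    if random.random() < alpha:      # success: run extends (prob alpha = q_n)
        states.append(states[-1] + 1)
    else:                            # failure: run resets (prob beta = p_n)
        states.append(0)

# every transition either increments the state or resets it to zero
valid = all(b == prev + 1 or b == 0 for prev, b in zip(states, states[1:]))
```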
F. Branching Processes. Suppose an organism at the end of its lifetime produces a random number ξ of offspring with probability distribution

    P(ξ = k) = ak,    k = 0, 1, 2, . . . ,

where, as usual, ak ≥ 0 and Σ∞k=0 ak = 1. We assume that all offspring act independently of each other and at the end of their lifetimes (for simplicity, the lifespans of all organisms are assumed to be the same) individually have progeny in accordance with the same probability distribution, thus propagating their species. The process {Xn}, where Xn is the population size at the nth generation, is a Markov chain.
In fact, the only relevant knowledge regarding the distribution of Xn given Xn1, Xn2, . . . , Xnr (n1 < n2 < · · · < nr < n) is the last known population count, since the number of offspring is a function merely of the present population size. The transition matrix is given by

    Pij = P(Xn+1 = j | Xn = i) = P(ξ1 + · · · + ξi = j),

where the ξ's are independent observations of a random variable with probability law P(ξ = k) = ak.
The last formula may be reasoned simply as follows. In the nth generation the i individuals independently give rise to numbers of offspring ξ1, ξ2, . . . , ξi, and hence the cumulative number produced is ξ1 + ξ2 + · · · + ξi. If we use generating functions, then the generating function of ξ1 + ξ2 + · · · + ξi is [g(s)]^i, where g is the generating function associated with ξ. Hence, Pij is simply the jth coefficient in the power series expansion of [g(s)]^i.
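The observation that Pij is the jth coefficient of [g(s)]^i can be exploited computationally: raising g to the ith power is just repeated discrete convolution of the offspring law with itself. A sketch with a hypothetical offspring distribution:

```python
a = [0.25, 0.5, 0.25]                # hypothetical offspring law P(xi = k)

def convolve(p, q):
    """Coefficients of the product of two probability generating functions."""
    out = [0.0] * (len(p) + len(q) - 1)
    for m, pm in enumerate(p):
        for n, qn in enumerate(q):
            out[m + n] += pm * qn
    return out

def transition_row(i):
    """Row i of the branching chain: the coefficients of [g(s)]^i, i.e.
    the distribution of xi_1 + ... + xi_i."""
    row = [1.0]                      # [g(s)]^0 = 1: zero parents, zero offspring
    for _ in range(i):
        row = convolve(row, a)
    return row

row2 = transition_row(2)             # next-generation law given X_n = 2
```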
G. Markov Chains in Genetics. The following idealized genetics model was introduced by S. Wright to investigate the fluctuation of gene frequency under the influence of mutation and selection. We begin by describing a so-called simple haploid model of random reproduction, disregarding mutation pressures and selective forces. We assume that we are dealing with a fixed population size of 2N genes composed of type-a and type-A individuals. The make-up of the next generation is determined by 2N independent binomial trials as follows: if the parent population consists of j a-genes and 2N - j A-genes, then each trial results in a or A with probabilities

    pj = j/2N,    qj = 1 - j/2N,

respectively. Repeated selections are done with replacement. By this procedure we generate a Markov chain {Xn}, where Xn is the number of a-genes in the nth generation among a constant population size of 2N elements. The state space contains the 2N + 1 values {0, 1, 2, . . . , 2N}. The transition probability matrix is computed according to the binomial distribution as

    P(Xn+1 = k | Xn = j) = Pjk = (2N choose k) pj^k qj^(2N-k),    j, k = 0, 1, 2, . . . , 2N.
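The binomial rows can be generated with math.comb, and the boundary states exhibit the chain's key feature: 0 and 2N are absorbing, so without mutation one gene type eventually fixes. A sketch with a deliberately small population; N = 3 is an illustrative choice.

```python
from math import comb

N = 3                                # 2N = 6 genes, states 0..6

def wright_row(j):
    """Row j of the haploid model: P_jk = C(2N, k) p_j^k q_j^(2N - k)."""
    p = j / (2 * N)
    q = 1 - p
    return [comb(2 * N, k) * p ** k * q ** (2 * N - k)
            for k in range(2 * N + 1)]

P = [wright_row(j) for j in range(2 * N + 1)]

rows_ok = all(abs(sum(row) - 1.0) < 1e-12 for row in P)
# without mutation the boundary states are absorbing
absorbing = P[0][0] == 1.0 and P[2 * N][2 * N] == 1.0
```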
A more realistic model takes account of mutation pressures. We assume that prior to the formation of the new generation each gene has the possibility to mutate, that is, to change into a gene of the other kind. Specifically, we assume that for each gene the mutation a → A occurs with probability α1, and A → a occurs with probability α2. Again we assume that the composition of the next generation is determined by 2N independent binomial trials. The relevant values of pj and qj when the parent population consists of j a-genes are now taken to be
    pj = (j/2N)(1 - α1) + (1 - j/2N)α2,
    qj = (j/2N)α1 + (1 - j/2N)(1 - α2).

The rationale is as follows: we assume that the mutation pressures operate first, after which a new gene is determined by selecting at random from the population. Now the probability of selecting an a-gene after the mutation forces have acted is just 1/2N times the number of a-genes after mutation, and this expected number is clearly j(1 - α1) + (2N - j)α2, which leads at once to the two formulas above.
The transition probabilities of the associated Markov chain are calculated by

    P(Xn+1 = k | Xn = j) = Pjk = (2N choose k) pj^k qj^(2N-k),

using the values of pj and qj above. If α1, α2 > 0 then fixation will not occur in any state. Instead, as n → ∞, the distribution function of Xn will approach a steady-state distribution of a random variable ξ where P(ξ = k) = πk, k = 0, 1, 2, . . . , 2N. The distribution function of ξ is called the steady-state gene frequency distribution.
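With mutation the chain has no absorbing states, and the steady-state gene frequency distribution can be approximated by iterating μ → μP. A sketch; the population size and mutation rates α1 = α2 = 0.1 are illustrative assumptions.

```python
from math import comb

N, alpha1, alpha2 = 3, 0.1, 0.1      # hypothetical rates for a -> A, A -> a

def p_mut(j):
    """Probability of drawing an a-gene when mutation acts first."""
    x = j / (2 * N)
    return x * (1 - alpha1) + (1 - x) * alpha2

def row(j):
    p = p_mut(j)
    return [comb(2 * N, k) * p ** k * (1 - p) ** (2 * N - k)
            for k in range(2 * N + 1)]

P = [row(j) for j in range(2 * N + 1)]

# power iteration mu <- mu P converges to the steady-state distribution
mu = [1.0] + [0.0] * (2 * N)         # start fixed at state 0
for _ in range(500):
    mu = [sum(mu[j] * P[j][k] for j in range(2 * N + 1))
          for k in range(2 * N + 1)]

all_positive = min(mu) > 0           # no fixation: every state keeps mass
```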
4 Exercises
1) Let X1, X2, . . . be independent random variables such that P(Xi = j) = αj, j ≥ 0. Say that a record occurs at time n if Xn > max(X1, . . . , Xn-1), where X0 = -∞, and if a record does occur at time n, call Xn the record value. Let Ri denote the ith record value.
(a) Argue that {Ri, i ≥ 1} is a Markov chain and compute its transition probabilities.
(b) Let Ti denote the time between the ith and (i + 1)st records. Is {Ti, i ≥ 1} a Markov chain? What about {(Ri, Ti), i ≥ 1}? Compute transition probabilities where appropriate.
(c) Let Sn = T1 + · · · + Tn, n ≥ 1. Argue that {Sn, n ≥ 1} is a Markov chain and find its transition probabilities.
2) At the beginning of every time period, each of N individuals is in one of three possible conditions with regard to a particular disease: infectious, immune, or noninfected. If a noninfected individual catches the disease during a time period then he or she will be in the infectious condition during the following time period, and from then on will be immune. During every time period each of the (N choose 2) pairs of individuals is independently in contact with probability p. If a pair is in contact and one member of the pair is infectious while the other is noninfected, then the noninfected person catches the disease (and is thus in the infectious condition at the beginning of the next period). Immune individuals can neither catch the disease again nor pass it on to noninfected individuals.
Let Xn and Yn denote the number of infectious and the number of noninfected individuals, respectively, at the beginning of time period n.
(a) If there are i infectious individuals at the beginning of a time period, what is the probability that a specified noninfected individual will become infected in that period?
(b) Is {Xn, n ≥ 0} a Markov chain? If so, give its transition probabilities.
(c) Is {Yn, n ≥ 0} a Markov chain? If so, give its transition probabilities.
(d) Is {(Xn, Yn), n ≥ 0} a Markov chain? If so, give its transition probabilities.
