

lecture-26

Course: STAT 36-754, Spring 2006
School: Michigan
Chapter 26
Decomposition of Stationary Processes into Ergodic Components

This chapter is concerned with the decomposition of asymptotically-mean-stationary processes into ergodic components.

Section 26.1 shows how to write the stationary distribution as a mixture of distributions, each of which is stationary and ergodic, and each of which is supported on a distinct part of the state space. This is connected to ideas in nonlinear dynamics, each ergodic component being a different basin of attraction.

Section 26.2 lays out some connections to statistical inference: ergodic components can be seen as minimal sufficient statistics, and lead to powerful tests.

26.1 Construction of the Ergodic Decomposition

In the last lecture, we saw that the stationary distributions of a given dynamical system form a convex set, with the ergodic distributions as the extremal points. A standard result in convex analysis is that any point in a convex set can be represented as a convex combination of the extremal points. Thus, any stationary distribution can be represented as a mixture of stationary and ergodic distributions. We would like to be able to determine the weights used in the mixture, and, even more, to give them some meaningful stochastic interpretation.

Let's begin by thinking about the effective distribution we get from taking time-averages starting from a given point. For every measurable set B, and every finite t, A_t 1_B(x) is a well-defined measurable function. As B ranges over the σ-field 𝒳, holding x and t fixed, we get a set function, and one which, moreover, meets the requirements for being a probability measure. Suppose we go further and pass to the limit.

Definition 316 (Long-Run Distribution) The long-run distribution starting from the point x is the set function λ(x), defined through

    λ(x, B) = lim_{t→∞} A_t 1_B(x)

when the limit exists for all B ∈ 𝒳. If λ(x) exists, x is an ergodic point. The set of all ergodic points is E.
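The time average in Definition 316 is easy to probe numerically. The sketch below is an illustration, not part of the notes; the map and all names are my own choices. It estimates A_t 1_B(x) for the irrational rotation T(x) = (x + α) mod 1, which is ergodic with respect to Lebesgue measure on [0, 1), so the time average should approach the length of B from any starting point:

```python
import math

def time_average_indicator(x, B_lo, B_hi, t):
    """A_t 1_B(x): fraction of the first t iterates of x landing in B = [B_lo, B_hi)."""
    alpha = math.sqrt(2) - 1  # irrational rotation step (illustrative choice)
    hits = 0
    for _ in range(t):
        if B_lo <= x < B_hi:
            hits += 1
        x = (x + alpha) % 1.0  # apply the map T: rotate by alpha on the circle [0, 1)
    return hits / t

# For this ergodic map, A_t 1_B(x) approaches the Lebesgue measure of B for every x.
est = time_average_indicator(x=0.123, B_lo=0.2, B_hi=0.5, t=100_000)
print(est)  # close to len(B) = 0.3
```

Here the whole space is a single ergodic component, so the long-run distribution does not depend on the starting point; the point of the chapter is what happens when it does.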
Notice that whether or not λ(x) exists depends only on x (and T and 𝒳); the initial distribution has nothing to do with it. Let's look at some properties of the long-run distributions. (The name "ergodic point" is justified by one of them, Proposition 318.)

Proposition 317 If x ∈ E, then λ(x) is a probability distribution.

Proof: For every t, the set function given by A_t 1_B(x) is clearly a probability measure. Since λ(x) is defined by passage to the limit, the Vitali-Hahn Theorem (285) says λ(x) must be one as well. ∎

Proposition 318 If x ∈ E, then λ(x) is ergodic.

Proof: For every invariant set I, 1_I(T^n x) = 1_I(x) for all n. Hence A1_I(x) exists and is either 0 or 1. This means λ(x) assigns every invariant set either probability 0 or probability 1, so by Definition 300 it is ergodic. ∎

Proposition 319 If x ∈ E, then λ(x) is an invariant function of x, i.e., λ(x) = λ(Tx).

Proof: By Lemma 275, A1_B(x) = A1_B(Tx), when the appropriate limit exists. Since, by assumption, it does in this case, λ(x, B) = λ(Tx, B) for every measurable set B, and the set functions are thus equal. ∎

Proposition 320 If x ∈ E, then λ(x) is a stationary distribution.

Proof: For all B and x, 1_{T^{-1}B}(x) = 1_B(Tx). So λ(x, T^{-1}B) = λ(Tx, B). Since, by Proposition 319, λ(Tx, B) = λ(x, B), it finally follows that λ(x, B) = λ(x, T^{-1}B), which proves that λ(x) is an invariant distribution. ∎

Proposition 321 If x ∈ E and f ∈ L_1(λ(x)), then lim_{t→∞} A_t f(x) exists, and is equal to E_{λ(x)}[f].

Proof: This is true, by the definition of λ(x), for the indicator functions of all measurable sets. Thus, by linearity of A_t and of expectation, it is true for all simple functions. Standard arguments then let us pass to all the functions integrable with respect to the long-run distribution. ∎

At this point, you should be tempted to argue as follows. If μ is an AMS distribution with stationary mean m, then Af(x) = E_m[f | 𝓘] for almost all x.
So it's reasonable to hope that m is a combination of the λ(x), and, yet further, that Af(x) = E_{λ(x)}[f] for μ-almost-all x. This is basically true, but will take some extra assumptions to make it work.

Definition 322 (Ergodic Component) Two ergodic points x, y ∈ E belong to the same ergodic component when λ(x) = λ(y). We will write the ergodic components as C_i, and the function mapping x to its ergodic component as φ(x). φ(x) is not defined if x is not an ergodic point. By a slight abuse of notation, we will write λ(C_i, B) for the common long-run distribution of all points in C_i.

Obviously, the ergodic components partition the set of ergodic points. (The partition is not necessarily countable, and in some important cases, such as that of Hamiltonian dynamical systems in statistical mechanics, it must be uncountable (Khinchin, 1949).) Intuitively, they form the coarsest partition which is still fully informative about the long-run distribution. It's also pretty clear that the partition is left alone by the dynamics.

Proposition 323 For all ergodic points x, φ(x) = φ(Tx).

Proof: By Proposition 319, λ(x) = λ(Tx), and the result follows. ∎

Notice that I have been careful not to say that the ergodic components are invariant sets. We have been using "invariant set" to mean sets which are both left alone by the dynamics and measurable, i.e., members of the σ-field 𝒳, and we have not established that any ergodic component is measurable, which in turn is because we have not established that φ(x) is a measurable function.

Let's look a little more closely at the difficulty. If B is a measurable set, then A_t 1_B(x) is a measurable function. If the limit exists, then A1_B(x) is also a measurable function, and consequently the set {y : A1_B(y) = A1_B(x)} is a measurable set. Then

    φ(x) = ⋂_{B ∈ 𝒳} {y : A1_B(x) = A1_B(y)}        (26.1)

gives the ergodic component to which x belongs.
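To make ergodic components concrete, here is a toy sketch (illustrative only, not from the notes): a map acting as an independent irrational rotation on each half of [0, 1). The two halves are distinct ergodic components, so trajectories started in the same half share a long-run average, while trajectories started in different halves do not:

```python
import math

ALPHA = (math.sqrt(2) - 1) / 2  # rotation step; illustrative choice

def T(x):
    """Toy map with two ergodic components: an independent rotation on each half of [0, 1)."""
    if x < 0.5:
        return (x + ALPHA) % 0.5
    return 0.5 + ((x - 0.5 + ALPHA) % 0.5)

def time_average(f, x, t):
    """A_t f(x): average of f over the first t points of the trajectory started at x."""
    total = 0.0
    for _ in range(t):
        total += f(x)
        x = T(x)
    return total / t

# Same component: same long-run average. Different components: different averages.
a = time_average(lambda x: x, 0.10, 200_000)  # starts in [0, 0.5): limit is 0.25
b = time_average(lambda x: x, 0.37, 200_000)  # same component, same limit
c = time_average(lambda x: x, 0.90, 200_000)  # starts in [0.5, 1): limit is 0.75
print(a, b, c)
```

The map φ of Definition 322 here simply reports which half the trajectory lives in; the long-run distribution λ(x) is uniform on that half.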
The difficulty is that the intersection is over all measurable sets B, and there are generally an uncountable number of them (even if Ξ is countable!), so we have no guarantee that the intersection of uncountably many measurable sets is measurable. Consequently, we can't say that any of the ergodic components is measurable.

The way out, as so often in mathematics, is to cheat; or, more politely, to make an assumption which is strong enough to force open an exit, but not so strong that we can't support or verify it.¹ What we will assume is that there is a countable collection of sets 𝒢 such that λ(x) = λ(y) if and only if λ(x, G) = λ(y, G) for every G ∈ 𝒢. Then the intersection in Eq. 26.1 need only run over the countable class 𝒢, rather than all of 𝒳, which will be enough to reassure us that the ergodic component φ(x) is a measurable set.

¹ For instance, we could just assume that uncountable intersections of measurable sets are measurable, but you will find it instructive to try to work out the consequences of this assumption, and to examine whether it holds for the Borel σ-field ℬ, say on the unit interval, to keep things easy.

Definition 324 (Countable Extension Space) A measurable space Ω, ℱ is a countable extension space when there is a countable field 𝒢 of sets in Ω such that ℱ = σ(𝒢), i.e., 𝒢 is the generating field of the σ-field, and any normalized, non-negative, finitely-additive set function on 𝒢 has a unique extension to a probability measure on ℱ.

The reason the countable extension property is important is that it lets us get away with just checking properties of measures on a countable class, the generating field 𝒢. Here are a few important facts about countable extension spaces; proofs, along with a much more detailed treatment of the general theory, are given by Gray (1988, chs. 2 and 3), who, however, calls them "standard spaces."

Proposition 325 Every countable space is a countable extension space.
Proposition 326 Every Borel space is a countable extension space.

Remember that finite-dimensional Euclidean spaces are Borel spaces.

Proposition 327 A countable product of countable extension spaces is a countable extension space.

The last proposition is important for us: if Ξ is a countable extension space, it means that Ξ^ℕ is too. So if we have a discrete- or Euclidean-valued random sequence, we can switch to the sequence space, and still appeal to generating-class arguments based on countable fields. Without further ado, then, let's assume that Ξ, the state space of our dynamical system, is a countable extension space, with countable generating field 𝒢.

Lemma 328 x ∈ E iff lim_{t→∞} A_t 1_G(x) converges for every G ∈ 𝒢.

Proof: "If": a direct consequence of Definition 324, since the set function A1_G(x) extends to a unique measure. "Only if": a direct consequence of Definition 316, since every member of the generating field is a measurable set. ∎

Lemma 329 The set of ergodic points is measurable: E ∈ 𝒳.

Proof: For each G ∈ 𝒢, the set of x where A_t 1_G(x) converges is measurable, because G is a measurable set. The set where those relative frequencies converge for all G ∈ 𝒢 is the intersection of countably many measurable sets, hence itself measurable. This set is, exactly, the set of ergodic points (Lemma 328). ∎

Lemma 330 All the ergodic components are measurable sets, and φ(x) is a measurable function. Thus, all C_i ∈ 𝓘.

Proof: For each G, the set {y : λ(y, G) = λ(x, G)} is measurable. So their intersection over all G ∈ 𝒢 is also measurable. But, by the countable extension property, this intersection is precisely the set {y : λ(y) = λ(x)}. So the ergodic components are measurable sets, and, since φ^{-1}(C_i) = C_i, φ is measurable. Since we have already seen that T^{-1}C_i = C_i, and now that C_i ∈ 𝒳, we may say that C_i ∈ 𝓘. ∎

Remark: Because C_i is a (measurable) invariant set, λ(x, C_i) = 1 for every x ∈ C_i.
However, it does not follow that there might not be a smaller set, also with long-run measure 1; i.e., there might be a B ⊂ C_i such that λ(x, B) = 1. For an extreme example, consider the uniform contraction on ℝ, with Tx = ax for some 0 < a < 1. Every trajectory converges on the origin. The only ergodic invariant measure is the Dirac delta function at the origin. Every point belongs to a single ergodic component.

More generally, if a little roughly², the ergodic components correspond to the dynamical-systems idea of basins of attraction, while the supports of the long-run distributions correspond to the actual attractors. Basins of attraction typically contain points which are not actually parts of the attractor.

Theorem 331 (Ergodic Decomposition of AMS Processes) Suppose Ξ, 𝒳 is a countable extension space. If μ is an asymptotically mean stationary measure on Ξ, with stationary mean m, then μ(E) = m(E) = 1, and, for any f ∈ L_1(m), and μ- and m-almost all x,

    Af(x) = E_{λ(x)}[f] = E_m[f | 𝓘]        (26.2)

so that

    m(B) = ∫ λ(x, B) dμ(x)        (26.3)

Proof: For every set G ∈ 𝒢, A_t 1_G(x) converges for μ- and m-almost all x (Theorem 298). Since there are only countably many G, the set on which they all converge also has probability 1; this set is E. Since (Proposition 321) Af(x) = E_{λ(x)}[f], and (Theorem 298 again) Af(x) = E_m[f | 𝓘] a.s., we have that E_{λ(x)}[f] = E_m[f | 𝓘] a.s. Now let f = 1_B. As we know (Lemma 289), E_μ[A1_B(X)] = E_m[1_B(X)] = m(B). But, for each x, A1_B(x) = λ(x, B), so m(B) = E_μ[λ(X, B)]. ∎

In words, we have decomposed the stationary mean m into the long-run distributions of the ergodic components, with weights given by the fraction of the initial measure falling into each component. Because of Propositions 313 and 315, we may be sure that by mixing stationary ergodic measures we obtain a stationary measure, and that our decomposition is unique.
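The decomposition m(B) = ∫ λ(x, B) dμ(x) of Theorem 331 can be checked numerically on a toy system (again an illustrative sketch of my own, not a construction from the notes): for a piecewise rotation whose ergodic components are the two halves of [0, 1), a uniform initial measure μ gives each component weight 1/2, and averaging the long-run frequencies λ(x, B) over starting points recovers m(B):

```python
import math

ALPHA = (math.sqrt(2) - 1) / 2  # rotation step; illustrative choice

def T(x):
    """Toy map whose ergodic components are the two halves of [0, 1)."""
    if x < 0.5:
        return (x + ALPHA) % 0.5
    return 0.5 + ((x - 0.5 + ALPHA) % 0.5)

def lam(x, B_lo, B_hi, t=20_000):
    """Estimate lambda(x, B) = lim_t A_t 1_B(x) by a finite time average."""
    hits = 0
    for _ in range(t):
        if B_lo <= x < B_hi:
            hits += 1
        x = T(x)
    return hits / t

# mu = uniform on [0, 1), approximated by an evenly spaced grid of starting points;
# it puts weight 1/2 on each ergodic component.
starts = [(j + 0.5) / 100 for j in range(100)]
# For B = [0.2, 0.6), lambda(x, B) is 0.6 on one component and 0.2 on the other,
# so the mixture m(B) = 0.5 * 0.6 + 0.5 * 0.2 = 0.4.
m_B = sum(lam(x, 0.2, 0.6) for x in starts) / len(starts)
print(m_B)  # close to 0.4
```

The grid of starting points stands in for integrating against μ; the time average along each trajectory stands in for λ(x, B).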
² I don't want to get into subtleties arising from the dynamicists' tendency to define things topologically, rather than measure-theoretically.

26.2 Statistical Aspects

26.2.1 Ergodic Components as Minimal Sufficient Statistics

The connection between sufficient statistics and ergodic decompositions is a very pretty one. First, recall the idea of parametric statistical sufficiency.³

Definition 332 (Sufficiency, Necessity) Let P be a class of probability measures on a common measurable space Ω, ℱ, indexed by a parameter θ. A σ-field 𝒮 ⊆ ℱ is parametrically sufficient for θ, or just sufficient, when P_{θ_1}(A | 𝒮) = P_{θ_2}(A | 𝒮) for all θ_1, θ_2. That is, all the distributions in P have the same distribution, conditional on 𝒮. A random variable S such that 𝒮 = σ(S) is called a sufficient statistic. A σ-field is necessary (for the parameter θ) if it is a sub-σ-field of every sufficient σ-field; a necessary statistic is defined similarly. A σ-field which is both necessary and sufficient is minimal sufficient.

Remark: The idea of sufficiency originates with Fisher; that of necessity, so far as I can work out, with Dynkin. This definition (after Dynkin (1978)) is based on what ordinary theoretical statistics texts call the "Neyman factorization criterion" for sufficiency. We will see all these concepts again when we do information theory.

Lemma 333 𝒮 is sufficient for θ if and only if there exists an ℱ-measurable function ν(ω, A) such that

    P_θ(A | 𝒮) = ν(ω, A)        (26.4)

almost surely, for all θ.

Proof: Nearly obvious. "Only if": since the conditional probability exists, there must be some such function (it's a version of the conditional probability), and since all the conditional probabilities are versions of one another, the function cannot depend on θ. "If": in this case, we have a single function which is a version of all the conditional probabilities, so it must be true that P_{θ_1}(A | 𝒮) = P_{θ_2}(A | 𝒮). ∎
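Definition 332 can be checked directly in the simplest parametric family (a standard textbook example, supplied here as an illustration rather than taken from the notes): for IID Bernoulli(θ) bits, the sample sum is a sufficient statistic, and the conditional distribution of the full sequence given the sum is uniform over the compatible arrangements, free of θ:

```python
from itertools import product
from math import comb

def conditional_given_sum(theta, n, s):
    """P(X = x | X_1 + ... + X_n = s) for IID Bernoulli(theta) bits."""
    seqs = [x for x in product((0, 1), repeat=n) if sum(x) == s]
    # Every sequence with sum s has the same probability theta^s (1 - theta)^(n - s),
    # so theta cancels when we normalize.
    probs = [theta ** s * (1 - theta) ** (n - s) for _ in seqs]
    z = sum(probs)
    return {x: p / z for x, p in zip(seqs, probs)}

d1 = conditional_given_sum(0.3, n=5, s=2)
d2 = conditional_given_sum(0.8, n=5, s=2)
# The conditional law is uniform over the C(5, 2) = 10 arrangements, whatever theta is.
uniform_prob = 1 / comb(5, 2)
print(max(abs(d1[x] - uniform_prob) for x in d1) < 1e-9,
      max(abs(d1[x] - d2[x]) for x in d1) < 1e-9)
```

This is exactly the situation of Lemma 333: one function of the data serves as a version of the conditional probability for every θ at once.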
Theorem 334 If a process on a countable extension space is asymptotically mean stationary, then φ is a minimal sufficient statistic for its long-run distribution.

Proof: The set of distributions P is now the set of all long-run distributions generated by the dynamics, and φ is an index which tracks them all unambiguously. We need to show both sufficiency and necessity.

Sufficiency: The σ-field generated by φ is the one generated by the ergodic components, σ({C_i}). (Because the C_i are mutually exclusive, this is a particularly simple σ-field.) Clearly, P_θ(A | σ({C_i})) = λ(φ(x), A) for all x and θ, so (Lemma 333) φ is a sufficient statistic.

Necessity: Follows from the fact that a given ergodic component contains all the points with a given long-run distribution. Coarser σ-fields will not, therefore, preserve conditional probabilities. ∎

³ There is also a related idea of predictive statistical sufficiency, which we unfortunately will not be able to get to. Also, note that most textbooks on theoretical statistics state things in terms of random variables and measurable functions thereof, rather than σ-fields, but this is the more general case (Blackwell and Girshick, 1954).

This theorem may not seem particularly exciting, because there isn't, necessarily, anything whose distribution matches the long-run distribution. However, it has deeper meaning under two circumstances, when λ(x) really is the asymptotic distribution of random variables.

1. If Ξ is really a sequence space, so that X = S_1, S_2, S_3, . . ., then λ(x) really is the asymptotic marginal distribution of the S_t, conditional on the starting point.

2.
Even if Ξ is not a sequence space, if stronger conditions than ergodicity, known as "mixing," "asymptotic stability," etc., hold, there are reasonable senses in which ℒ(X_t) does converge, and converges on the long-run distribution.⁴

In both these cases, knowing the ergodic component thus turns out to be necessary and sufficient for knowing the asymptotic distribution of the observables. (Cf. Corollary 337 below.)

26.2.2 Testing Ergodic Hypotheses

Finally, let's close with an application to hypothesis testing, inspired by Badino (2004).

Theorem 335 Let Ξ, 𝒳 be a measurable space, and let μ_0 and μ_1 be two infinite-dimensional distributions of one-sided, discrete-parameter, strictly-stationary Ξ-valued stochastic processes; i.e., μ_0 and μ_1 are distributions on Ξ^ℕ, 𝒳^ℕ, and they are invariant under the shift operator. If they are also ergodic under the shift, then there exists a sequence of sets R_t ∈ 𝒳^t such that μ_0(R_t) → 0 while μ_1(R_t) → 1.

Proof: By Proposition 314, there exists a set R ∈ 𝒳^ℕ such that μ_0(R) = 0, μ_1(R) = 1. So we just need to approximate R by sets which are defined on the first t observations, in such a way that μ_i(R_t) → μ_i(R). If R_t ↓ R, then monotone convergence will give us the necessary convergence of probabilities. Here is a construction with cylinder sets⁵ that gives us the necessary sequence of approximations.

⁴ Lemma 305 already gave us a kind of distributional convergence, but it is of a very weak sort, known as "convergence in Cesàro mean," which was specially invented to handle sequences which are not convergent in normal senses! We will see that there is a direct correspondence between levels of distributional convergence and levels of decay of correlations.

⁵ Introduced in Chapters 2 and 3. It's possible to give an alternative construction using the Hilbert space of all square-integrable random variables, and then projecting onto the subspace of those which are 𝒳^t-measurable.
Let

    R_t ≡ π_{1:t}(R) × ∏_{n=t+1}^∞ Ξ        (26.5)

where π_{1:t} is projection onto the first t coordinates. Clearly, R_t forms a non-increasing sequence, so it converges to a limit, which equally clearly must be R. Hence μ_i(R_t) → μ_i(R) = i. ∎

Remark: "R" is for "rejection." Notice that the regions R_t will in general depend on the actual sequence X_1^t = X_1, X_2, . . . , X_t, and not necessarily be permutation-invariant. When we come to the asymptotic equipartition theorem in information theory, we will see a more explicit way of constructing such tests.

Corollary 336 Let H_0 be "the X_i are IID with distribution p_0" and H_1 be "the X_i are IID with distribution p_1." Then, as t → ∞, there exists a sequence of tests of H_0 against H_1 whose size goes to 0 while their power goes to 1.

Proof: Let μ_0 be the product measure induced by p_0, and μ_1 the product measure induced by p_1, and apply the previous theorem. ∎

Corollary 337 If X is a strictly stationary (one-sided) random sequence whose shift representation has countably many ergodic components, then there exists a sequence of functions φ_t, each 𝒳^t-measurable, such that φ_t(X_1^t) converges on the ergodic component with probability 1.

Proof: From Theorem 52, we can write X_1^t = π_{1:t}U, for a sequence-valued random variable U, using the projection operators of Chapter 2. For each ergodic component C_i, by Theorem 335, there exists a sequence of sets R_{t,i} such that P(X_1^t ∈ R_{t,i}) → 1 if U ∈ C_i, and goes to zero otherwise. Let φ_t(X_1^t) be the set of all C_i for which X_1^t ∈ R_{t,i}. By Theorem 331, U is in some component with probability 1, and, since there are only countably many ergodic components, with probability 1, X_1^t will eventually leave all but one of the R_{t,i}. The remaining one is the ergodic component. ∎
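Corollary 336 can be illustrated with a concrete sequence of tests (a sketch under toy distributions of my own choosing, not the construction in the proof): to test H_0: X_i IID Bernoulli(0.5) against H_1: X_i IID Bernoulli(0.7), reject when the empirical mean of the first t observations exceeds 0.6. By the law of large numbers, the size goes to 0 and the power goes to 1 as t grows:

```python
import random

def rejects(sample, threshold=0.6):
    """Rejection region R_t: empirical mean of the first t observations exceeds threshold."""
    return sum(sample) / len(sample) > threshold

def error_rates(p0, p1, t, trials=1000, seed=42):
    """Monte Carlo estimates of size (rejecting under H0) and power (rejecting under H1)."""
    rng = random.Random(seed)
    draw = lambda p: [1 if rng.random() < p else 0 for _ in range(t)]
    size = sum(rejects(draw(p0)) for _ in range(trials)) / trials
    power = sum(rejects(draw(p1)) for _ in range(trials)) / trials
    return size, power

for t in (10, 100, 1000):
    size, power = error_rates(p0=0.5, p1=0.7, t=t)
    print(t, size, power)  # size shrinks toward 0, power climbs toward 1
```

Any threshold strictly between the two means would do; the simple-mean statistic here plays the role of the cylinder-set approximations R_t in Theorem 335.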
Grand Canyon - FIN - 650
4/16/2010Chapter 15. Ch 15-12 Build a ModelReacher Technology has consulted with investment bankers and determined the interest rate it would payfor different capital structures, as shown below. Data for the risk-free rate, the market risk premium, an
Grand Canyon - FIN - 650
Chapter 22Qifeng (Danny) GuoP22-6McDowell Industries sells on terms of 3/10, net 30. Total sales for the year are \$912,500.Forty percent of the customers pay on the 10th day and take discounts: while the other 60%pay, on average, 40 days after their
Grand Canyon - FIN - 650
4/16/2010Chapter 13. Ch 13-11 Build a ModelThe Henley Corporation is a privately held company specializing in lawn care products and services. The most recentfinancial statements are shown below.Income Statement for the Year Ending December 31 (Millio
Grand Canyon - FIN - 650
Chapter 18Qifeng (Danny) GuoP18-1Axel Telecommunications has a target capital structure that consists of 70% debtand 30% equity. The company anticipates that its capital budget for the upcomingyear will be \$3,000,000. If Axel reports net income of \$2
Grand Canyon - FIN - 650
4/16/2010Chapter 11. Ch 11-18 Build a ModelWebmasters.com has developed a powerful new server that would be used for corporations Internet activities. It wouldcost \$10 million at Year 0 to buy the equipment necessary to manufacture the server. The proj
Grand Canyon - FIN - 650
Chapter 14Qifeng(Danny) GuoP14-1Baxter Video Products sales are expected to increase from \$5 million in 2007 to \$6million in 2008 or by 20%. Its assets totaled \$3 million at the end of 2007. Baxter is atfull capacity, so its assets must grow at the s
Grand Canyon - FIN - 650
Chapter 10Qifeng (Danny) GuoP10-2LL Incorporated's currently outstanding 11% coupon bonds have ayield to maturity of 8%. LL believes it could issue at par new bondsthat would provide a similar yield to maturity. If its marginal tax rate is35%, what
Grand Canyon - FIN - 650
Week 2 HomeworkQifeng(Danny) GuoChapter 2P2-1An investor recently purchased a corporatebond that yields 9%. The investor is in the 36%combined federal and state tax bracket. What isthe bonds after-tax yield?Yield before TaxTax RateYield after Ta
Grand Canyon - FIN - 650
Chapter 4Qifeng(Danny) GuoPVInterestYearFV\$10,00010%5\$16,105.10FVYearsInterestPV\$5,000207%\$1,292.10PMTYearsInterestFVFvdue\$30057%\$1,725.22\$1,845.99a.PVYearsInterestFV\$50016%\$530.00b.PVYearsInterestFV\$50026%\$561
Grand Canyon - FIN - 650
Chapter 1 Mini CaseQifeng (Danny) GuoAssume that you recently graduated and have just reported to work as an investmentadvisor at the brokerage firm of Balik and Kiefer Inc. One of the firms clients isMichelle DellaTorre, a professional tennis player