

### lecture-29

Course: STAT 36-754, Spring 2006
School: Michigan

Word Count: 2504


## Chapter 29: Entropy Rates and Asymptotic Equipartition

Section 29.1 introduces the entropy rate (the asymptotic entropy per time-step of a stochastic process) and shows that it is well-defined; and similarly for information, divergence, etc. rates. Section 29.2 proves the Shannon-McMillan-Breiman theorem, a.k.a. the asymptotic equipartition property, a.k.a. the entropy ergodic theorem: asymptotically, almost all sample paths of a stationary ergodic process have the same log-probability per time-step, namely the entropy rate. This leads to the idea of typical sequences, in Section 29.2.1. Section 29.3 discusses some aspects of asymptotic likelihood, using the asymptotic equipartition property, and allied results for the divergence rate.

### 29.1 Information-Theoretic Rates

**Definition 376 (Entropy Rate)** The entropy rate of a random sequence $X$ is

$$h(X) \equiv \lim_n \frac{1}{n} H[X_1^n] \tag{29.1}$$

when the limit exists.

**Definition 377 (Limiting Conditional Entropy)** The limiting conditional entropy of a random sequence $X$ is

$$h'(X) \equiv \lim_n H[X_n \mid X_1^{n-1}] \tag{29.2}$$

when the limit exists.

**Lemma 378** For a stationary sequence, $H[X_n \mid X_1^{n-1}]$ is non-increasing in $n$. Moreover, its limit exists if $X$ takes values in a discrete space.

*Proof:* Because conditioning reduces entropy, $H[X_{n+1} \mid X_1^n] \le H[X_{n+1} \mid X_2^n]$. By stationarity, $H[X_{n+1} \mid X_2^n] = H[X_n \mid X_1^{n-1}]$. If $X$ takes discrete values, then conditional entropy is non-negative, and a non-increasing sequence of non-negative real numbers always has a limit. ∎

*Remark:* Discrete values are a sufficient condition for the existence of the limit, not a necessary one.

We now need a natural-looking, but slightly technical, result from real analysis.

**Theorem 379 (Cesàro)** For any sequence of real numbers $a_n \to a$, the sequence $b_n = \frac{1}{n}\sum_{i=1}^n a_i$ also converges to $a$.

*Proof:* For every $\epsilon > 0$, there is an $N(\epsilon)$ such that $|a_n - a| < \epsilon$ whenever $n > N(\epsilon)$. Now take $b_n$ and break it up into two parts, one summing the terms below $N(\epsilon)$, and the other the terms above:

$$\lim_n |b_n - a| = \lim_n \left| \frac{1}{n}\sum_{i=1}^n a_i - a \right| \tag{29.3}$$

$$\le \lim_n \frac{1}{n}\sum_{i=1}^n |a_i - a| \tag{29.4}$$

$$= \lim_n \frac{1}{n}\left[ \sum_{i=1}^{N(\epsilon)} |a_i - a| + \sum_{i=N(\epsilon)+1}^{n} |a_i - a| \right] \tag{29.5}$$

$$\le \lim_n \frac{1}{n}\left[ \sum_{i=1}^{N(\epsilon)} |a_i - a| + (n - N(\epsilon))\epsilon \right] \tag{29.6}$$

$$= \lim_n \frac{1}{n}\sum_{i=1}^{N(\epsilon)} |a_i - a| + \lim_n \frac{n - N(\epsilon)}{n}\epsilon \tag{29.7}$$

$$= \epsilon \tag{29.8}$$

Since $\epsilon$ was arbitrary, $\lim_n b_n = a$. ∎

**Theorem 380 (Entropy Rate)** For a stationary sequence, if the limiting conditional entropy exists, then it is equal to the entropy rate, $h(X) = h'(X)$.

*Proof:* Start with the chain rule to break the joint entropy into a sum of conditional entropies, use Lemma 378 to identify their limit as $h'(X)$, and then use Cesàro's theorem:

$$h(X) = \lim_n \frac{1}{n} H[X_1^n] \tag{29.9}$$

$$= \lim_n \frac{1}{n} \sum_{i=1}^n H[X_i \mid X_1^{i-1}] \tag{29.10}$$

$$= h'(X) \tag{29.11}$$

as required. ∎

Because $h(X) = h'(X)$ for stationary processes (when both limits exist), it is not uncommon to find what I've called the limiting conditional entropy referred to as the entropy rate.

**Lemma 381** For a stationary sequence, $h(X) \le H[X_1]$, with equality iff the sequence is IID.

*Proof:* Conditioning reduces entropy, unless the variables are independent, so $H[X_n \mid X_1^{n-1}] < H[X_n]$, unless $X_n \perp X_1^{n-1}$. For this to be true of all $n$, which is what's needed for $h(X) = H[X_1]$, all the values of the sequence must be independent of each other; since the sequence is stationary, this would imply that it's IID. ∎

**Example 382 (Markov Sequences)** If $X$ is a stationary Markov sequence, then $h(X) = H[X_2 \mid X_1]$, because, by the chain rule, $H[X_1^n] = H[X_1] + \sum_{t=2}^n H[X_t \mid X_1^{t-1}]$. By the Markov property, however, $H[X_t \mid X_1^{t-1}] = H[X_t \mid X_{t-1}]$, which by stationarity is $H[X_2 \mid X_1]$. Thus, $H[X_1^n] = H[X_1] + (n-1) H[X_2 \mid X_1]$. Dividing by $n$ and taking the limit, we get $h(X) = H[X_2 \mid X_1]$.

**Example 383 (Higher-Order Markov Sequences)** If $X$ is a $k^{\text{th}}$-order Markov sequence, then the same reasoning as before shows that $h(X) = H[X_{k+1} \mid X_1^k]$ when $X$ is stationary.
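Example 382 can be checked numerically. The sketch below is an illustration, not part of the original text; the two-state transition matrix is made up for the demo. It computes $h(X) = H[X_2 \mid X_1]$ for a stationary two-state Markov chain and confirms that the per-symbol block entropy $H[X_1^n]/n$, evaluated exactly via the chain rule, approaches it:

```python
import numpy as np

# Transition matrix of a 2-state stationary Markov chain (rows sum to 1).
T = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stationary distribution pi solves pi T = pi (left eigenvector for eigenvalue 1).
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

def H(p):
    """Shannon entropy in bits, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Example 382: h(X) = H[X_2 | X_1] = sum_i pi_i H(row i of T).
h = sum(pi[i] * H(T[i]) for i in range(2))

# Chain rule for a stationary Markov chain: H[X_1^n] = H[X_1] + (n-1) h,
# so the block entropy per symbol H[X_1^n]/n should approach h.
for n in (10, 100, 1000):
    block_entropy_rate = (H(pi) + (n - 1) * h) / n
    print(n, block_entropy_rate)

print("entropy rate h:", h)
```

Note that $H[X_1^n]/n$ approaches $h$ from above, consistent with Lemma 381's $h(X) \le H[X_1]$.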
**Definition 384 (Divergence Rate)** The divergence rate or relative entropy rate of the infinite-dimensional distribution $Q$ from the infinite-dimensional distribution $P$, $d(P\|Q)$, is

$$d(P\|Q) = \lim_n \frac{1}{n}\, \mathbf{E}_P\left[ \log \frac{dP}{dQ}(X_{-n}^0) \right] \tag{29.12}$$

if all the finite-dimensional distributions of $Q$ dominate all the finite-dimensional distributions of $P$. If $P$ and $Q$ have densities, respectively $p$ and $q$, with respect to a common reference measure, then

$$d(P\|Q) = \lim_n \mathbf{E}_P\left[ \log \frac{p(X_0 \mid X_{-n}^{-1})}{q(X_0 \mid X_{-n}^{-1})} \right] \tag{29.13}$$

### 29.2 The Shannon-McMillan-Breiman Theorem, or Asymptotic Equipartition Property

This is a central result in information theory, acting as a kind of ergodic theorem for the entropy. That is, we want to say that, for almost all $\omega$,

$$-\frac{1}{n} \log P(X_1^n(\omega)) \to \lim_n -\frac{1}{n}\, \mathbf{E}[\log P(X_1^n)] = h(X)$$

At first, it looks like we should be able to make a nice time-averaging argument. We can always factor the joint probability,

$$-\frac{1}{n} \log P(X_1^n) = -\frac{1}{n} \sum_{t=1}^n \log P(X_t \mid X_1^{t-1})$$

with the understanding that $P(X_1 \mid X_1^0) = P(X_1)$. This looks rather like the sort of Cesàro average that we became familiar with in ergodic theory. The problem is, there we were averaging $f(T^t \omega)$ for a fixed function $f$. This is not the case here, because we are conditioning on longer and longer stretches of the past. There's no problem if the sequence is Markovian, because then the remote past is irrelevant, by the Markov property, and we can just condition on a fixed-length stretch of the past, so we're averaging a fixed function shifted in time. (This is why Shannon's original argument was for Markov chains.) The result nonetheless holds more broadly, but requires more subtlety than might otherwise be thought. Breiman's original proof of the general case was fairly involved¹, requiring both martingale theory, and a sort of dominated convergence theorem for ergodic time averages. (You can find a simplified version of his argument in Kallenberg, at the end of chapter 11.)
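For two stationary first-order Markov chains on a finite alphabet, the limit in Definition 384 reduces to a single-step average over transitions, $d(P\|Q) = \sum_i \pi_i \sum_j P_{ij} \log (P_{ij}/Q_{ij})$, and the chain-rule factorization of the log-likelihood lets us check it by simulation. A sketch (the two chains below are arbitrary examples invented for the demo, not from the text):

```python
import numpy as np

P = np.array([[0.9, 0.1],    # data-generating chain
              [0.4, 0.6]])
Q = np.array([[0.5, 0.5],    # model chain (here an IID fair coin)
              [0.5, 0.5]])

# Stationary distribution of P.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Divergence rate in bits: d(P||Q) = sum_i pi_i sum_j P_ij log2(P_ij / Q_ij).
d_rate = sum(pi[i] * P[i, j] * np.log2(P[i, j] / Q[i, j])
             for i in range(2) for j in range(2))

# Check against a long sample path: (1/n) log2 of the likelihood ratio,
# accumulated transition by transition via the chain rule.
rng = np.random.default_rng(0)
n = 100_000
state, log_ratio = 0, 0.0
for _ in range(n):
    nxt = int(rng.random() < P[state, 1])
    log_ratio += np.log2(P[state, nxt] / Q[state, nxt])
    state = nxt

print(d_rate, log_ratio / n)
```

The empirical per-step log-likelihood ratio settles near the divergence rate, which is the almost-sure behavior the divergence version of the AEP (Theorem 395 below, in the ergodic case) describes.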
We will go over the "sandwiching" argument of Algoet and Cover (1988), which is, to my mind, more transparent.

¹ Notoriously, the proof in his original paper was actually invalid, forcing him to publish a correction.

The idea of the sandwich argument is to show that, for large $n$, $-\frac{1}{n}\log P(X_1^n)$ must lie between an upper bound, $h_k$, obtained by approximating the sequence by a Markov process of order $k$, and a lower bound, which will be shown to be $h$. Once we establish that $h_k \downarrow h$, we will be done.

**Definition 385 (Markov Approximation)** For each $k$, define the order-$k$ Markov approximation to $X$ by

$$\mu_k(X_1^n) = P(X_1^k) \prod_{t=k+1}^{n} P(X_t \mid X_{t-k}^{t-1}) \tag{29.14}$$

$\mu_k$ is the distribution of a stationary Markov process of order $k$, where the distribution of $X_1^{k+1}$ matches that of the original process.

**Lemma 386** For each $k$, the entropy rate of the order-$k$ Markov approximation $\mu_k$ is equal to $H[X_{k+1} \mid X_1^k]$.

*Proof:* Under the approximation (but not under the original distribution of $X$), $H[X_1^n] = H[X_1^k] + (n-k) H[X_{k+1} \mid X_1^k]$, by the Markov property and stationarity (as in Examples 382 and 383). Dividing by $n$ and taking the limit as $n \to \infty$ gives the result. ∎

**Lemma 387** If $X$ is a stationary two-sided sequence, then $Y_t = f(X_{-\infty}^t)$ defines a stationary sequence, for any measurable $f$. If $X$ is also ergodic, then $Y$ is ergodic too.

*Proof:* Because $X$ is stationary, it can be represented as a measure-preserving shift on sequence space. Because the shift is measure-preserving, $X^t \stackrel{d}{=} X^{t+1}$, so $Y(t) \stackrel{d}{=} Y(t+1)$, and similarly for all finite-length blocks of $Y$. Thus, all of the finite-dimensional distributions of $Y$ are shift-invariant, and these determine the infinite-dimensional distribution, so $Y$ itself must be stationary.

To see that $Y$ must be ergodic if $X$ is ergodic, recall that a random sequence is ergodic iff its corresponding shift dynamical system is ergodic, and that a dynamical system is ergodic iff all invariant functions are a.e. constant (Theorem 304). Because the $Y$ sequence is obtained by applying a measurable function to the $X$ sequence, a shift-invariant function of the $Y$ sequence is a shift-invariant function of the $X$ sequence. Since the latter are all constant a.e., the former are too, and $Y$ is ergodic. ∎

**Lemma 388** If $X$ is stationary and ergodic, then, for every $k$,

$$P\left( \lim_n -\frac{1}{n} \log \mu_k(X_1^n(\omega)) = h_k \right) = 1 \tag{29.15}$$

i.e., $-\frac{1}{n} \log \mu_k(X_1^n(\omega))$ converges a.s. to $h_k$.

*Proof:* Start by factoring the approximating Markov measure in the way suggested by its definition:

$$-\frac{1}{n} \log \mu_k(X_1^n) = -\frac{1}{n} \log P(X_1^k) - \frac{1}{n} \sum_{t=k+1}^{n} \log P(X_t \mid X_{t-k}^{t-1}) \tag{29.16}$$

As $n$ grows, $\frac{1}{n} \log P(X_1^k) \to 0$, for every fixed $k$. On the other hand, $\log P(X_t \mid X_{t-k}^{t-1})$ is a measurable function of the past of the process, and since $X$ is stationary and ergodic, it, too, is stationary and ergodic (Lemma 387). So, by Theorem 312,

$$-\frac{1}{n} \log \mu_k(X_1^n) \to \lim_n -\frac{1}{n} \sum_{t=k+1}^{n} \log P(X_t \mid X_{t-k}^{t-1}) \tag{29.17}$$

$$= \mathbf{E}\left[ -\log P(X_{k+1} \mid X_1^k) \right] \tag{29.18}$$

$$= h_k \tag{29.19}$$

almost surely. ∎

**Definition 389** The infinite-order approximation to the entropy rate of a discrete-valued stationary process $X$ is

$$h_\infty(X) \equiv \mathbf{E}\left[ -\log P(X_0 \mid X_{-\infty}^{-1}) \right] \tag{29.20}$$

**Lemma 390** If $X$ is stationary and ergodic, then

$$\lim_n -\frac{1}{n} \log P(X_1^n \mid X_{-\infty}^0) = h_\infty \tag{29.21}$$

almost surely.

*Proof:* Via Theorem 312 again, as in Lemma 388. ∎

**Lemma 391** For a stationary, ergodic, finite-valued random sequence, $h_k(X) \downarrow h_\infty(X)$.

*Proof:* By the martingale convergence theorem, for every $x_0$,

$$P(X_0 = x_0 \mid X_{-k}^{-1}) \to P(X_0 = x_0 \mid X_{-\infty}^{-1}) \quad \text{a.s.} \tag{29.22}$$

Since the alphabet is finite, the probability of any point in it is between 0 and 1 inclusive, and $-p \log p$ is bounded and continuous. So we can apply bounded convergence to get that

$$h_k = \mathbf{E}\left[ -\sum_{x_0} P(X_0 = x_0 \mid X_{-k}^{-1}) \log P(X_0 = x_0 \mid X_{-k}^{-1}) \right] \tag{29.23}$$

$$\to \mathbf{E}\left[ -\sum_{x_0} P(X_0 = x_0 \mid X_{-\infty}^{-1}) \log P(X_0 = x_0 \mid X_{-\infty}^{-1}) \right] \tag{29.24}$$

$$= h_\infty \tag{29.25}$$

∎

**Lemma 392** $h_\infty(X)$ is the entropy rate of $X$, i.e. $h_\infty(X) = h(X)$.

*Proof:* Clear from Theorem 380 and the definition of conditional entropy. ∎

We are almost ready for the proof, but need one technical lemma first.
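The monotone decrease of $h_k$ (Lemma 391) can be seen empirically. The sketch below is an illustration invented for the demo, not part of the text: a binary process of Markov order exactly 2 and a plug-in estimator of $H[X_0 \mid X_{-k}^{-1}]$ from context counts along a long sample path. The estimates fall as $k$ grows and level off once $k$ reaches the true order:

```python
import random
from collections import Counter
from math import log2

# A binary process of Markov order exactly 2: P(X_t = 1 | X_{t-2}, X_{t-1}).
p1 = {(0, 0): 0.1, (0, 1): 0.8, (1, 0): 0.3, (1, 1): 0.6}

random.seed(1)
n = 200_000
x = [0, 0]
for _ in range(2, n):
    x.append(int(random.random() < p1[(x[-2], x[-1])]))

def h_k_estimate(x, k):
    """Empirical H[X_0 | X_{-k}^{-1}] in bits, from context/symbol counts."""
    ctx, joint = Counter(), Counter()
    for t in range(k, len(x)):
        c = tuple(x[t-k:t])
        ctx[c] += 1
        joint[c + (x[t],)] += 1
    n_eff = len(x) - k
    return -sum((cnt / n_eff) * log2(cnt / ctx[cs[:-1]])
                for cs, cnt in joint.items())

hks = [h_k_estimate(x, k) for k in range(4)]
print([round(v, 4) for v in hks])  # non-increasing, flat from k = 2 on
```

Here $h_0 = H[X_0]$ is just the marginal entropy, and $h_2 = h_3 = \dots = h$ because the process really is second-order Markov; for a process with infinite-range dependence the decrease would continue, with limit $h_\infty$.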
**Lemma 393** If $R_n \ge 0$ and $\mathbf{E}[R_n] \le 1$ for all $n$, then

$$\limsup_n \frac{1}{n} \log R_n \le 0 \tag{29.26}$$

almost surely.

*Proof:* Pick any $\epsilon > 0$. By Markov's inequality,

$$P\left( \frac{1}{n} \log R_n \ge \epsilon \right) = P\left( R_n \ge e^{n\epsilon} \right) \tag{29.27}$$

$$\le \frac{\mathbf{E}[R_n]}{e^{n\epsilon}} \tag{29.28}$$

$$\le e^{-n\epsilon} \tag{29.29}$$

Since $\sum_n e^{-n\epsilon} < \infty$, by the Borel-Cantelli lemma, $\limsup_n \frac{1}{n} \log R_n \le \epsilon$ a.s. Since $\epsilon$ was arbitrary, this concludes the proof. ∎

**Theorem 394 (Asymptotic Equipartition Property)** For a stationary, ergodic, finite-valued random sequence $X$,

$$-\frac{1}{n} \log P(X_1^n) \to h(X) \quad \text{a.s.} \tag{29.30}$$

*Proof:* For every $k$, $\mu_k(X_1^n)/P(X_1^n) \ge 0$, and $\mathbf{E}[\mu_k(X_1^n)/P(X_1^n)] \le 1$. Hence, by Lemma 393,

$$\limsup_n \frac{1}{n} \log \frac{\mu_k(X_1^n)}{P(X_1^n)} \le 0 \tag{29.31}$$

a.s. Manipulating the logarithm,

$$\limsup_n -\frac{1}{n} \log P(X_1^n) \le \limsup_n -\frac{1}{n} \log \mu_k(X_1^n) \tag{29.32}$$

From Lemma 388, $\limsup_n -\frac{1}{n} \log \mu_k(X_1^n) = \lim_n -\frac{1}{n} \log \mu_k(X_1^n) = h_k(X)$, a.s. Hence, for each $k$,

$$\limsup_n -\frac{1}{n} \log P(X_1^n) \le h_k(X) \tag{29.33}$$

almost surely. A similar manipulation of $P(X_1^n)/P(X_1^n \mid X_{-\infty}^0)$ gives

$$h_\infty(X) \le \liminf_n -\frac{1}{n} \log P(X_1^n) \tag{29.34}$$

a.s. As $h_k \downarrow h_\infty$, it follows that the liminf and the limsup of the normalized log-likelihood must be equal almost surely, and so equal to $h_\infty$, which is to say to $h(X)$. ∎

Why is this called the AEP? Because, to within an $o(n)$ term, all sequences of length $n$ have the same log-likelihood, if they have positive probability at all. In this sense, the likelihood is equally partitioned over those sequences.

#### 29.2.1 Typical Sequences

Let's turn the result of the AEP around. For large $n$, the probability of a given sequence is either approximately $2^{-nh}$ or approximately zero². To get the total probability to sum up to one, there need to be about $2^{nh}$ sequences with positive probability. If the size of the alphabet is $s$, then the fraction of sequences which are actually exhibited is $2^{n(h - \log s)}$, an increasingly small fraction (as long as $h < \log s$). Roughly speaking, these are the typical sequences, any one of which, via ergodicity, can act as a representative of the complete process.
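For an IID binary source the counting behind typical sequences can be done exactly. The sketch below is an illustration with made-up parameters: it groups the $2^n$ length-$n$ sequences by their number of ones (all sequences in a group share one probability), collects those whose per-symbol log-probability is within $\epsilon$ of $h$, and confirms that this typical set is a small fraction of all sequences yet carries almost all of the probability:

```python
from math import comb, log2

p, n, eps = 0.7, 100, 0.1
h = -(p*log2(p) + (1-p)*log2(1-p))   # entropy rate of the IID source, bits/symbol

typical_mass, typical_count = 0.0, 0
for k in range(n + 1):
    # All comb(n, k) sequences with k ones share the same probability.
    per_symbol_surprise = -(k*log2(p) + (n-k)*log2(1-p)) / n
    if abs(per_symbol_surprise - h) <= eps:            # inside the typical set
        typical_count += comb(n, k)
        typical_mass += comb(n, k) * p**k * (1-p)**(n-k)

print(f"h = {h:.4f} bits/symbol")
print(f"typical set: {typical_count:.3g} of 2^{n} sequences "
      f"({typical_count / 2**n:.2e}), carrying mass {typical_mass:.3f}")
```

The standard bounds hold exactly here: the typical set has at most $2^{n(h+\epsilon)}$ members, at least $(\text{mass}) \cdot 2^{n(h-\epsilon)}$ members, and its share of the $2^n$ possible sequences shrinks exponentially in $n$.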
² Of course that assumes using base-2 logarithms in the definition of entropy.

### 29.3 Asymptotic Likelihood

#### 29.3.1 Asymptotic Equipartition for Divergence

Using methods analogous to those we employed on the AEP for entropy, it is possible to prove the following.

**Theorem 395** Let $P$ be an asymptotically mean-stationary distribution, with stationary mean $\bar{P}$, with ergodic component function $\phi$. Let $M$ be a homogeneous finite-order Markov process, whose finite-dimensional distributions dominate those of $P$ and $\bar{P}$; denote the densities with respect to $M$ by $p$ and $\bar{p}$, respectively. If $\lim_n -\frac{1}{n} \log p(X_1^n)$ is an invariant function $P$-a.e., then

$$-\frac{1}{n} \log p(X_1^n(\omega)) \to d\left( \bar{P}_{\phi(\omega)} \,\middle\|\, M \right) \quad \text{a.s.} \tag{29.35}$$

where $\bar{P}_{\phi(\omega)}$ is the stationary, ergodic distribution of the ergodic component.

*Proof:* See Algoet and Cover (1988, theorem 4), Gray (1990, corollary 8.4.1). ∎

*Remark:* The usual AEP is in fact a consequence of this result, with the appropriate reference measure. (Which?)

#### 29.3.2 Likelihood Results

It is left as an exercise for you to obtain the following result, from the AEP for relative entropy, Lemma 367 and the chain rules.

**Theorem 396** Let $P$ be a stationary and ergodic data-generating process, whose entropy rate, with respect to some reference measure $\rho$, is $h$. Further let $M$ be a finite-order Markov process which dominates $P$, whose density, with respect to the reference measure, is $m$. Then

$$-\frac{1}{n} \log m(X_1^n) \to h + d(P\|M) \tag{29.36}$$

$P$-almost surely.

### 29.4 Exercises

**Exercise 29.1** Markov approximations are maximum-entropy approximations. (You may assume that the process $X$ takes values in a finite set.)

a. Prove that $\mu_k$, as defined in Definition 385, gets the distribution of sequences of length $k+1$ correct, i.e., for any set $A \in \mathcal{X}^{k+1}$, $\mu_k(A) = P(X_1^{k+1} \in A)$.

b. Prove that $\mu_{k'}$, for any $k' > k$, also gets the distribution of length-$(k+1)$ sequences right.

c. In a slight abuse of notation, let $H[\nu(X_1^n)]$ stand for the entropy of a sequence of length $n$ when distributed according to $\nu$. Show that $H[\mu_k(X_1^n)] \ge H[\mu_{k'}(X_1^n)]$ if $k' > k$. (Note that the $n \le k$ case is easy!)

d. Is it true that if $\nu$ is any other measure which gets the distribution of sequences of length $k+1$ right, then $H[\mu_k(X_1^n)] \ge H[\nu(X_1^n)]$? If yes, prove it; if not, find a counter-example.

**Exercise 29.2** Prove Theorem 396.
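Theorem 396 is easy to check in the simplest setting: an IID data-generating process $P$ and an IID "model" $M$ on a finite alphabet, with counting reference measure, so that the entropy rate and divergence rate reduce to single-letter quantities. A sketch (the two distributions below are made up for the demo):

```python
import numpy as np

# IID data-generating process P and a (wrong) IID model M on {0, 1, 2};
# the reference measure is counting measure, so densities are just pmfs.
p = np.array([0.5, 0.3, 0.2])
m = np.array([0.2, 0.4, 0.4])

h = -np.sum(p * np.log2(p))        # entropy rate of P, bits per symbol
d = np.sum(p * np.log2(p / m))     # divergence rate d(P || M)

rng = np.random.default_rng(2)
n = 200_000
x = rng.choice(3, size=n, p=p)
neg_loglik_rate = -np.sum(np.log2(m[x])) / n   # -(1/n) log2 m(X_1^n)

print(neg_loglik_rate, "vs", h + d)
```

The per-symbol negative log-likelihood of the model settles at $h + d(P\|M)$: the unavoidable cost $h$ of describing the source, plus the extra cost $d(P\|M)$ of using the wrong model.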