

### 11L

Course: CSI 9723, Fall 2009
School: George Mason


#### A Decision-Theoretic Approach to Estimation

In a decision-theoretic approach to statistical inference, we seek a method that minimizes the risk no matter what is the true state of nature. In a problem of point estimation, for example, we seek an estimator $T(X)$ which for a given loss function $L(g(\theta), T(X))$ yields a minimum of $E_\theta(L(g(\theta), T(X)))$. For some specific value of $\theta$, say $\theta_1$, one particular estimator, say $T_1$, may have the smallest expected loss, while for another value of $\theta$, say $\theta_2$, another estimator, say $T_2$, may have a smaller expected loss.

#### Uniformly Best Estimators

What we would like is an estimator with least expected loss no matter what the value of $\theta$ is; that is, we would like an estimator with uniformly minimum risk. Because the risk depends on the value of $\theta$, however, we see that we cannot devise such an estimator: the optimal estimator would somehow involve $\theta$. We would prefer a procedure that does not depend on the unknown quantity we are trying to estimate.

#### Uniformly Unbiased Estimators

A property of a statistic that relates to a parameter, but does not depend on the value of the parameter, is unbiasedness. This leads us to require that the estimator of a given estimand, $g(\theta)$, be unbiased with respect to $\theta$; that is,

$$E_\theta(T(X)) = g(\theta) \quad \text{for all } \theta.$$

"Unbiased" always means uniformly unbiased.

#### Unbiased Estimators

The requirement of unbiasedness cannot always be met. An estimand for which there is an unbiased estimator is said to be U-estimable. Consider, for example, the problem of estimating $1/\pi$ in binomial$(n, \pi)$ for $\pi \in (0, 1)$. Suppose $T(X)$ is an unbiased estimator of $1/\pi$. Then

$$\sum_{x=0}^{n} T(x) \binom{n}{x} \pi^x (1-\pi)^{n-x} = 1/\pi.$$

Now as $\pi \to 0$, the left side tends to $T(0)$, which is finite, but the right side tends to $\infty$; hence, $1/\pi$ is not U-estimable. Put another way, if $1/\pi$ were U-estimable, the equation above would say that some polynomial in $\pi$ is equal to $1/\pi$ for all $\pi \in (0, 1)$, and that clearly cannot be.
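The argument above can be illustrated numerically: the expectation of any estimator is a polynomial in $\pi$, hence bounded on $(0,1)$, while $1/\pi$ is not. The sketch below is a minimal check in plain Python; the particular estimator `T` is a hypothetical choice for illustration, not one from the notes.

```python
from math import comb

def expected_value(T, n, p):
    """E_pi[T(X)] for X ~ binomial(n, pi): a polynomial in pi of degree <= n."""
    return sum(T(x) * comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1))

n = 10
# A hypothetical plug-in-style estimator of 1/pi (any T would make the point):
T = lambda x: n / max(x, 1)

for p in [0.5, 0.1, 0.01]:
    # E_pi[T(X)] stays bounded (it tends to T(0) as pi -> 0),
    # while the target 1/pi grows without bound.
    print(p, expected_value(T, n, p), 1 / p)
```

Whatever `T` is chosen, `expected_value` is bounded by the largest value of `T`, so it cannot track $1/\pi$ near 0.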
#### Unbiased Estimators

Another related example of something that is not U-estimable, but one that corresponds to a common parameter, is the odds, $\pi/(1-\pi)$. We see that no unbiased estimator exists, for the same reason as in the previous example.

#### Unbiasedness and Squared-Error Loss

A squared-error loss function is particularly nice for an unbiased estimator, because in that case the expected loss is just the variance; that is, an unbiased estimator with minimum risk is an unbiased estimator with minimum variance. If the unbiased estimator has minimum variance among all unbiased estimators at every point in the parameter space, we say that such an estimator is a uniformly (for all values of $\theta$) minimum variance unbiased estimator, that is, a UMVUE. (An unbiased estimator that has minimum variance among all unbiased estimators within a subspace of the parameter space is called a locally minimum variance unbiased estimator, or LMVUE.)

#### UMVU

UMVU is a special case of uniform minimum risk unbiasedness (UMRU), which generally applies only to convex loss functions. In general, no UMRUE exists for bounded loss functions, since such loss functions cannot be (strictly) convex. Uniformity (the first "U") means the MVU property holds for every value of $\theta$. "Unbiasedness" is itself a uniform property, because it is defined in terms of an expectation for any distribution in the given family. UMVU is closely related to complete sufficiency, which means that it is related to exponential families.

#### How to Find a UMVUE

We generally find a UMVUE by beginning with a "good" estimator and manipulating it to make it UMVUE. It might be unbiased to begin with, and we reduce its variance while keeping it unbiased. It might not be unbiased to begin with, but it might have some other desirable property, and we manipulate it to be unbiased.
One of the most useful facts is the Lehmann–Scheffé theorem, which says that if there is a complete sufficient statistic $T$ for $\theta$, and if $g(\theta)$ is U-estimable, then there is a unique UMVUE of $g(\theta)$ of the form $h(T)$, where $h$ is a Borel function. (Notice that this follows from the Rao–Blackwell theorem. The uniqueness comes from the completeness, and of course means unique a.e.)

This fact leads to two methods:

1. Find the UMVUE directly by finding $h(T)$ such that $E_\theta(h(T)) = g(\theta)$.
2. If $T_0$ is unbiased, find the UMVUE as $h(T) = E(T_0(X) \mid T)$. (This process is sometimes called "Rao–Blackwellization".)

#### Finding a UMVUE: Method 1

Example (Lehmann): Given a random sample of size $n$ from Bernoulli$(\pi)$, we want to estimate $g(\pi) = \pi(1-\pi)$. $T = \sum X_i$ is complete sufficient. The unbiasedness condition is

$$\sum_{t=0}^{n} h(t) \binom{n}{t} \pi^t (1-\pi)^{n-t} = \pi(1-\pi).$$

Rewriting this in terms of the odds $\rho = \pi/(1-\pi)$, for all $\rho \in (0, \infty)$, this equation is

$$\sum_{t=0}^{n} h(t) \binom{n}{t} \rho^t = \rho(1+\rho)^{n-2} = \sum_{t=1}^{n-1} \binom{n-2}{t-1} \rho^t.$$

Now since for each $t$ the coefficient of $\rho^t$ must be the same on both sides of the equation, we have

$$h(t) = \frac{t(n-t)}{n(n-1)}.$$

Note that the constant term and the coefficient of $\rho^n$ must both be 0. Also see Examples 3.1 and 3.2 in Shao. For Method 2, see Example 3.3 in Shao.

#### Finding a UMVUE: Unbiased Estimators of Zero

An important property of unbiased estimators is the following. If $T_0(X)$ is an unbiased estimator of $g(\theta)$, all unbiased estimators of $g(\theta)$ belong to an equivalence class defined as $\{T_0(X) - U(X)\}$, where $E_\theta(U(X)) = 0$. Hence, unbiased estimators of 0 play a useful role in UMVUE problems. We also see that useful estimators must have finite second moment; otherwise, we cannot minimize a variance by combining the estimators.

This leads to two further methods when we have $U$ such that $E_\theta(U) = 0$ and $E_\theta(U^2) < \infty$:

- Find the UMVUE by finding $U$ to minimize $E((T_0 - U)^2)$.
- If $T$ is unbiased and has finite second moment, it is UMVU iff $E(TU) = 0$ for all $\theta$ and all $U$ with $E_\theta(U) = 0$ and $E_\theta(U^2) < \infty$. (This is Theorem 3.2(i) in Shao.)
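The Method 1 result can be verified numerically. The sketch below (plain Python, assuming only the formulas above) checks that $h(T) = T(n-T)/(n(n-1))$ with $T = \sum X_i$ is exactly unbiased for $\pi(1-\pi)$:

```python
from math import comb

def h(t, n):
    """Candidate UMVUE kernel h(t) = t(n - t) / (n(n - 1))."""
    return t * (n - t) / (n * (n - 1))

def expectation(n, p):
    """E_pi[h(T)] where T ~ binomial(n, pi) is the sum of n Bernoulli trials."""
    return sum(h(t, n) * comb(n, t) * p**t * (1 - p)**(n - t) for t in range(n + 1))

# Unbiasedness holds for every pi (exactly, up to floating point):
for p in [0.1, 0.25, 0.5, 0.9]:
    assert abs(expectation(8, p) - p * (1 - p)) < 1e-12
```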
Theorem 3.2(ii) in Shao is similar to Theorem 3.2(i), but applies to functions of a sufficient statistic $T$.

#### Regularity Conditions

"Regularity conditions" apply to a family of distributions, $\mathcal{P} = \{P_\theta; \theta \in \Theta\}$, that have densities $p_\theta$. There are generally three conditions that together are called the regularity conditions:

- The parameter space $\Theta$ is an open interval (in one dimension; a cross product of open intervals in multiple dimensions).
- The support is independent of $\theta$; that is, all $P_\theta$ have a common support.
- For any $x$ in the support and $\theta \in \Theta$, $\partial p_\theta(x)/\partial\theta$ exists and is finite.

The latter two conditions ensure that the operations of integration and differentiation can be interchanged.

#### Fisher Information

A fundamental question is how much information a realization of the random variable $X$ contains about the scalar parameter $\theta$. For a random variable $X$ with PDF $p(x; \theta)$, we define the "information" (or "Fisher information") that $X$ contains about $\theta$ as

$$I(\theta) = E_\theta\!\left[\left(\frac{\partial \log p(X;\theta)}{\partial\theta}\right)\left(\frac{\partial \log p(X;\theta)}{\partial\theta}\right)^{\mathrm{T}}\right].$$

(Why does this definition make sense?) This is called Fisher information. Another type of information is Shannon information, which for an event is the negative of the log of the probability of the event.

Our information comes through the estimator $T(X)$; we are interested in the maximum information we can get. Information is larger when there is larger relative variation in the density as the parameter changes, but the information available from an estimator is less when the estimator exhibits large variation (i.e., has large variance), so we want smaller variance.

There are several simple facts to know about $\log p(X;\theta)$. First,

$$E_\theta\!\left[\frac{\partial}{\partial\theta}\log p(X;\theta)\right] = \int \frac{1}{p(x;\theta)}\,\frac{\partial p(x;\theta)}{\partial\theta}\,p(x;\theta)\,dx = \frac{\partial}{\partial\theta}\int p(x;\theta)\,dx = 0;$$

therefore, the expectation in the information definition is also the variance:

$$E\!\left[\left(\frac{\partial \log p(X;\theta)}{\partial\theta}\right)\left(\frac{\partial \log p(X;\theta)}{\partial\theta}\right)^{\mathrm{T}}\right] = V\!\left(\frac{\partial}{\partial\theta}\log p(X;\theta)\right).$$

We also have a relationship with the second derivative:

$$E\!\left[\left(\frac{\partial \log p(X;\theta)}{\partial\theta}\right)\left(\frac{\partial \log p(X;\theta)}{\partial\theta}\right)^{\mathrm{T}}\right] = -\,E\!\left[\frac{\partial^2 \log p(X;\theta)}{\partial\theta\,\partial\theta^{\mathrm{T}}}\right].$$
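Both identities (score has mean zero; variance of the score equals minus the expected second derivative) can be checked directly for a family where the sums are easy. The sketch below uses Poisson$(\lambda)$, an illustrative choice not taken from the notes, for which $I(\lambda) = 1/\lambda$:

```python
from math import exp, factorial

lam = 3.0

def pmf(x):
    """Poisson(lam) probability mass function."""
    return exp(-lam) * lam**x / factorial(x)

score = lambda x: x / lam - 1     # d/dlam log p(x; lam)
d2 = lambda x: -x / lam**2        # d^2/dlam^2 log p(x; lam)

# Truncate the sums; the tail beyond x = 60 is negligible for lam = 3.
xs = range(60)
mean_score = sum(score(x) * pmf(x) for x in xs)         # should be ~0
info_as_var = sum(score(x) ** 2 * pmf(x) for x in xs)   # should be ~1/lam
info_as_d2 = -sum(d2(x) * pmf(x) for x in xs)           # should be ~1/lam
```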
#### Fisher Information: Example

Consider the N$(\mu, \sigma^2)$ distribution with $\theta = (\mu, \sigma)$ (which is simpler than for $\theta = (\mu, \sigma^2)$):

$$\log p_{(\mu,\sigma)}(x) = c - \log(\sigma) - (x-\mu)^2/(2\sigma^2).$$

We have

$$\frac{\partial}{\partial\mu}\log p_{(\mu,\sigma)}(x) = \frac{x-\mu}{\sigma^2}$$

and

$$\frac{\partial}{\partial\sigma}\log p_{(\mu,\sigma)}(x) = -\frac{1}{\sigma} + \frac{(x-\mu)^2}{\sigma^3},$$

so

$$I(\theta) = E_{(\mu,\sigma^2)}\!\left[\begin{pmatrix} \frac{X-\mu}{\sigma^2} \\[4pt] -\frac{1}{\sigma} + \frac{(X-\mu)^2}{\sigma^3} \end{pmatrix}\begin{pmatrix} \frac{X-\mu}{\sigma^2} & -\frac{1}{\sigma} + \frac{(X-\mu)^2}{\sigma^3} \end{pmatrix}\right] = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\[4pt] 0 & \frac{2}{\sigma^2} \end{pmatrix}.$$

Notice that the Fisher information matrix is dependent on the parametrization. This parametrization of the normal is rather unusual among common multiparameter distributions in that the information matrix is diagonal. The parametrization of the normal distribution in either the canonical exponential form or even $\theta = (\mu, \sigma^2)$ would result in a different Fisher information matrix.

#### Fisher Information in Exponential Families

If $\mu$ is the mean-value parameter, that is, where $\mu(\theta) = E(T(X))$, then $I(\mu) = V^{-1}$, where $V = V(T(X))$.

#### Fisher Information in Group Families

The Fisher information for the two parameters $\theta = (\mu, \sigma)$ in a location-scale family with Lebesgue PDF $\frac{1}{\sigma}f\!\left(\frac{x-\mu}{\sigma}\right)$ has a particularly simple form:

$$I(\theta) = \frac{n}{\sigma^2}\begin{pmatrix} \displaystyle\int \left(\frac{f'(x)}{f(x)}\right)^2 f(x)\,dx & \displaystyle\int \frac{f'(x)}{f(x)}\left(\frac{x f'(x)}{f(x)} + 1\right) f(x)\,dx \\[8pt] \displaystyle\int \frac{f'(x)}{f(x)}\left(\frac{x f'(x)}{f(x)} + 1\right) f(x)\,dx & \displaystyle\int \left(\frac{x f'(x)}{f(x)} + 1\right)^2 f(x)\,dx \end{pmatrix}.$$

The prime on $f'(x)$ indicates differentiation with respect to $x$, of course. (The information matrix is defined in terms of differentiation with respect to the parameters, followed by an expectation.)

Another expression for the information matrix for a location-scale family is

$$I(\theta) = \frac{n}{\sigma^2}\begin{pmatrix} \displaystyle\int \frac{(f'(x))^2}{f(x)}\,dx & \displaystyle\int \frac{f'(x)\,(x f'(x) + f(x))}{f(x)}\,dx \\[8pt] \displaystyle\int \frac{f'(x)\,(x f'(x) + f(x))}{f(x)}\,dx & \displaystyle\int \frac{(x f'(x) + f(x))^2}{f(x)}\,dx \end{pmatrix}.$$

This is given in a slightly different form in Example 3.9 of Shao, which is Exercise 3.34, solved in his Solutions using the form above. That form is the more straightforward expression from the derivation that begins by defining the function $g(\mu, \sigma, x) = \log(f((x-\mu)/\sigma)/\sigma)$ and then proceeding with the definition of the information matrix.
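The diagonal form $I(\theta) = \mathrm{diag}(1/\sigma^2,\, 2/\sigma^2)$ can be checked by Monte Carlo, averaging the outer product of the score over simulated draws. This is a sketch; the sample size, seed, and parameter values are arbitrary choices:

```python
import random
from statistics import fmean

random.seed(0)
mu, sigma = 2.0, 1.5
xs = [random.gauss(mu, sigma) for _ in range(200_000)]

# Score components for theta = (mu, sigma):
s_mu = [(x - mu) / sigma**2 for x in xs]
s_sigma = [-1 / sigma + (x - mu) ** 2 / sigma**3 for x in xs]

# Empirical E[score scoreT]; should approximate diag(1/sigma^2, 2/sigma^2).
I = [[fmean(a * b for a, b in zip(u, v)) for v in (s_mu, s_sigma)]
     for u in (s_mu, s_sigma)]
```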
Also we can see that in the location-scale family, if it is symmetric about the origin (that is, about $\mu$), the covariance term is 0.

#### Fisher Information in the Gamma Family

Consider the gamma$(\alpha, \beta)$ distribution. We have for $x > 0$

$$\log p_{(\alpha,\beta)}(x) = -\log(\Gamma(\alpha)) - \alpha\log(\beta) + (\alpha - 1)\log(x) - x/\beta.$$

This yields the Fisher information matrix

$$I(\theta) = \begin{pmatrix} \psi'(\alpha) & \frac{1}{\beta} \\[4pt] \frac{1}{\beta} & \frac{\alpha}{\beta^2} \end{pmatrix},$$

where $\psi(\alpha)$ is the digamma function, $d\log(\Gamma(\alpha))/d\alpha$, and $\psi'(\alpha)$ is the trigamma function, $d\psi(\alpha)/d\alpha$. In the natural parameters, $\alpha - 1$ and $1/\beta$, the Fisher information would obviously be different. (Remember, derivatives are involved, so we cannot just substitute the transformed parameters into the information matrix.)

You should have in your repertoire of easy pieces the problem of working out the information matrix for $\theta = (\mu, \sigma)$ in the N$(\mu, \sigma^2)$ distribution using all three methods; that is, (1) the expectation of the product of first derivatives with respect to the parameters, (2) the expectation of the second derivatives with respect to the parameters, and (3) the integrals of the derivatives with respect to the variable (which, in the first form above, is an expectation).

The three regularity conditions mentioned earlier play a major role in UMVUE theory. Fisher information is so important in minimum variance considerations that these are sometimes called the Fisher information, or FI, regularity conditions.

#### A Lower Bound for Unbiased Estimators

We want an unbiased estimator with small variance. How small can we get? For an unbiased estimator $T$ of $g(\theta)$ in a family of densities satisfying the regularity conditions and such that $T$ has a finite second moment, we have the matrix relationship

$$V(T(X)) \succeq \left(\frac{\partial}{\partial\theta}g(\theta)\right)^{\mathrm{T}} (I(\theta))^{-1}\, \frac{\partial}{\partial\theta}g(\theta),$$

where we assume the existence of all quantities in the expression. Note the meaning of this relationship in the multiparameter case: it says that the matrix

$$V(T(X)) - \left(\frac{\partial}{\partial\theta}g(\theta)\right)^{\mathrm{T}} (I(\theta))^{-1}\, \frac{\partial}{\partial\theta}g(\theta)$$

is nonnegative definite. (The zero matrix is nonnegative definite.)
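The gamma information matrix can also be checked by Monte Carlo. A sketch follows; note that the digamma and trigamma values come from finite differences of `math.lgamma` (a numerical approximation, since the standard library has no `psi` function), and that `random.gammavariate(alpha, beta)` uses the same shape-scale parametrization as the density above:

```python
import random
from math import lgamma, log
from statistics import fmean

random.seed(1)
alpha, beta = 2.0, 3.0

# psi(alpha) and psi'(alpha) via central differences of log-gamma:
h = 1e-4
digamma = (lgamma(alpha + h) - lgamma(alpha - h)) / (2 * h)
trigamma = (lgamma(alpha + h) - 2 * lgamma(alpha) + lgamma(alpha - h)) / h**2

xs = [random.gammavariate(alpha, beta) for _ in range(200_000)]
s_alpha = [log(x) - log(beta) - digamma for x in xs]   # d/dalpha log p
s_beta = [x / beta**2 - alpha / beta for x in xs]      # d/dbeta log p

# Empirical information entries; should approximate
# psi'(alpha), 1/beta, and alpha/beta^2 respectively.
i_aa = fmean(s * s for s in s_alpha)
i_ab = fmean(a * b for a, b in zip(s_alpha, s_beta))
i_bb = fmean(s * s for s in s_beta)
```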
#### The Information Inequality (CRLB) for Unbiased Estimators

This inequality is called the information inequality or the Cramér–Rao lower bound (CRLB). The CRLB results from the covariance inequality. The proof of the CRLB is an "easy piece" that every student should be able to provide quickly.

#### CRLB Example

Consider a random sample $X_1, \ldots, X_n$, $n > 1$, from the N$(\mu, \sigma^2)$ distribution. In this case, let's use the parametrization $\theta = (\mu, \sigma^2)$. The joint log density is

$$\log p_\theta(x) = c - \frac{n}{2}\log(\sigma^2) - \sum_i (x_i - \mu)^2/(2\sigma^2).$$

The information matrix is diagonal, so the inverse of the information matrix is particularly simple:

$$I(\theta)^{-1} = \begin{pmatrix} \frac{\sigma^2}{n} & 0 \\[4pt] 0 & \frac{2\sigma^4}{n} \end{pmatrix}.$$

For the simple case of $g(\theta) = (\mu, \sigma^2)$, we have the unbiased estimator $T(X) = \left(\bar{X},\ \sum_{i=1}^n (X_i - \bar{X})^2/(n-1)\right)$, and

$$V(T(X)) = \begin{pmatrix} \frac{\sigma^2}{n} & 0 \\[4pt] 0 & \frac{2\sigma^4}{n-1} \end{pmatrix}.$$

Comparing the two matrices, $\bar{X}$ attains the bound $\sigma^2/n$, so it is Fisher efficient; the variance of the sample variance, $2\sigma^4/(n-1)$, is strictly larger than the bound $2\sigma^4/n$.

#### A Lower Bound for Estimators

A more general information inequality (that is, without reference to unbiasedness) is

$$V(T(X)) \succeq \left(\frac{\partial}{\partial\theta}E(T(X))\right)^{\mathrm{T}} (I(\theta))^{-1}\, \frac{\partial}{\partial\theta}E(T(X)).$$

#### Fisher Efficient Estimators

It is important to know in what situations an unbiased estimator can achieve the CRLB. Notice this would depend on both the density $p(X, \theta)$ and the estimand $g(\theta)$. The necessary and sufficient condition that an estimator $T$ of $g(\theta)$ attain the CRLB is that $(T - g(\theta))$ be proportional to $\partial\log(p(X, \theta))/\partial\theta$ a.e.; that is, for some $a(\theta)$ that does not depend on $X$,

$$\frac{\partial\log p(X, \theta)}{\partial\theta} = a(\theta)\,(T - g(\theta)) \quad \text{a.e.}$$

For example, there are unbiased estimators of the mean in the normal, Poisson, and binomial families that attain the CRLB. There is no unbiased estimator of $\theta$ that attains the CRLB in the family of distributions with densities proportional to $(1 + (x - \theta)^2)^{-1}$ (this is the Cauchy family).

If the CRLB is attained for an estimator of $g(\theta)$, it cannot be attained for any other (independent) function of $\theta$. For example, there is no unbiased estimator of $\sigma^2$ in the normal distribution that achieves the CRLB.
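The CRLB example can be illustrated by simulation: the variance of $\bar{X}$ matches the bound $\sigma^2/n$, while the variance of $S^2$ sits at $2\sigma^4/(n-1)$, above the bound $2\sigma^4/n$. This is a sketch; the replication count and seed are arbitrary:

```python
import random
from statistics import fmean, variance

random.seed(2)
mu, sigma, n = 0.0, 1.0, 5
crlb_mean, crlb_var = sigma**2 / n, 2 * sigma**4 / n

xbars, s2s = [], []
for _ in range(100_000):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xbars.append(fmean(x))
    s2s.append(variance(x))   # unbiased sample variance (divisor n - 1)

# Var(Xbar) attains the CRLB sigma^2/n; Var(S^2) is 2 sigma^4/(n-1),
# strictly above the CRLB 2 sigma^4/n.
print(variance(xbars), crlb_mean)
print(variance(s2s), 2 * sigma**4 / (n - 1), crlb_var)
```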
If the CRLB is not sharp, that is, if it cannot be attained, there may be other (larger) bounds, for example the Bhattacharyya bound. These sharper bounds are usually based on higher-order derivatives.

#### Unbiased Statistics

In estimation problems, as we have seen, it is often fruitful to represent the estimand as some functional $\Theta$ of the CDF, $P$. The mean, for example, if it exists, is

$$\mu(P) = \int x\, dP.$$

Given a random sample $X_1, \ldots, X_n$, we can form a plug-in estimator of $\Theta(P)$ by applying the functional to the ECDF. In more complicated cases, the property of interest may be the quantile associated with $\pi$, that is, the unique value $y_\pi$ defined by

$$\Xi_\pi(P) = \inf_y\,\{y : P(y) \ge \pi\}.$$

There is a basic difference between the functionals in these equations. The first is an expected value, $E(X_i)$ for each $i$. The second functional, however, cannot be written as an expectation. (Bickel and Lehmann, 1969, showed this.)

#### Expectation Functionals

In the following, we will consider the class of statistical functions that can be written as an expectation of a function $h$ of some subsample, $X_{i_1}, \ldots, X_{i_m}$, where $i_1, \ldots, i_m$ are distinct elements of $\{1, \ldots, n\}$:

$$\theta = \Theta(P) = E(h(X_{i_1}, \ldots, X_{i_m})).$$

Such $\Theta$s are called expectation functionals. In the case of $\mu$, $h$ is the identity and $m = 1$. Expectation functionals that relate to some parameter of interest are often easy to define. The simplest is just $E(h(X_i))$. The utility of expectation functionals lies in the ease of working with them, coupled with some useful general properties.

Note that without loss of generality we can assume that $h$ is symmetric in its arguments, because the $X_i$s are i.i.d.: even if $h$ is not symmetric, any permutation $(i_1, \ldots, i_m)$ of the indexes has the same expectation, so we could form a function that is symmetric in the arguments and has the same expectation:

$$\bar{h}(x_1, \ldots, x_m) = \frac{1}{m!}\sum_{\text{all permutations}} h(x_{i_1}, \ldots, x_{i_m}).$$

Because of this, we will just need to consider $h$ evaluated over the possible combinations of $m$ items from the sample of size $n$. Furthermore, because the $X_{i_j}$ are i.i.d., the properties of $h(X_{i_1}, \ldots, X_{i_m})$ are the same as the properties of $h(X_1, \ldots, X_m)$. Now consider the estimation of an expectation functional $\theta$, given a random sample $X_1, \ldots, X_n$, where $n \ge m$.

#### U Statistics

Clearly $h(X_1, \ldots, X_m)$ is an unbiased estimator of $\theta$, and so is $h(X_{i_1}, \ldots, X_{i_m})$ for any $m$-tuple, $1 \le i_1 < \cdots < i_m \le n$; hence, we have that

$$U = \frac{1}{\binom{n}{m}}\sum_{\text{all combinations}} h(X_{i_1}, \ldots, X_{i_m}) \tag{1}$$

is unbiased for $\theta$. A statistic of this form is called a U-statistic. The U-statistic is a function of all $n$ items in the sample. The function $h$, which is called the kernel of the U-statistic, is a function of $m$ arguments. The number of arguments of the kernel is called the order of the kernel. We also refer to the order of the kernel as the order of the U-statistic.

#### U Statistics: The Sample Mean

In the simplest U-statistic, the kernel is of order 1 and $h$ is the identity, $h(x_i) = x_i$. This is just the sample mean, which we can immediately generalize by defining $h_r(x_i) = x_i^r$, yielding the first-order U-statistic

$$U(X_1, \ldots, X_n) = \frac{1}{n}\sum_{i=1}^n X_i^r,$$

the sample $r$th moment.

#### U Statistics: The Sample Variance

Another simple U-statistic has the kernel of order 2

$$h(x_1, x_2) = \frac{1}{2}(x_1 - x_2)^2,$$

and is

$$U(X_1, \ldots, X_n) = \frac{2}{n(n-1)}\sum_{i<j} h(X_i, X_j).$$

This U-statistic is the sample variance $S^2$, which is unbiased for the population variance if it exists.

#### U Statistics: Sample Quantiles

The quantile problem is related to an inverse problem in which the property of interest is the $\pi$; that is, given a value $a$, estimate $P(a)$. We can write an expectation functional and arrive at the U-statistic

$$U(X_1, \ldots, X_n) = \frac{1}{n}\sum_{i=1}^n I_{(-\infty,a]}(X_i) = P_n(a),$$

where $P_n$ is the ECDF.
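A general U-statistic of the kind defined above can be sketched directly from the definition. The brute-force enumeration below is only practical for small $n$ and $m$, but it makes the identities with the sample mean and sample variance concrete:

```python
from itertools import combinations
from statistics import variance

def u_statistic(kernel, data, m):
    """Average of the kernel over all size-m subsets of the sample."""
    combos = list(combinations(data, m))
    return sum(kernel(*c) for c in combos) / len(combos)

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# Order-2 kernel h(x1, x2) = (x1 - x2)^2 / 2 recovers the sample variance S^2:
h = lambda a, b: (a - b) ** 2 / 2
assert abs(u_statistic(h, x, 2) - variance(x)) < 1e-12

# Order-1 identity kernel recovers the sample mean:
assert abs(u_statistic(lambda a: a, x, 1) - sum(x) / len(x)) < 1e-12
```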
#### U Statistics with Kernels Involving the Full Sample

Occasionally, the kernel will include some argument computed from the full sample; that is, an $m$th-order kernel involves more than $m$ items from the sample. An example of such a kernel is $h(X_i, \bar{X}) = (X_i - \bar{X})^2$. The U-statistic with this kernel is $\sum (X_i - \bar{X})^2/n = (n-1)S^2/n$. If the population mean is $\mu$, the expected value of $(X_i - \mu)^2$ is the population variance, say $\sigma^2$, so at first glance we might think that the expected value of this kernel is $\sigma^2$.

Because $X_i$ is included in $\bar{X}$, however, we have

$$E\big(h(X_i, \bar{X})\big) = E\!\left(\Big((n-1)X_i/n - \sum_{j\ne i} X_j/n\Big)^2\right)$$
$$= E\!\left((n-1)^2 X_i^2/n^2 - 2(n-1)X_i\sum_{j\ne i} X_j/n^2 + \sum_{j\ne i}\sum_{\substack{k\ne i \\ k\ne j}} X_j X_k/n^2 + \sum_{j\ne i} X_j^2/n^2\right)$$
$$= (n-1)^2\sigma^2/n^2 + (n-1)^2\mu^2/n^2 - 2(n-1)(n-1)\mu^2/n^2 + (n-1)(n-2)\mu^2/n^2 + (n-1)\sigma^2/n^2 + (n-1)\mu^2/n^2$$
$$= \frac{n-1}{n}\,\sigma^2.$$

We would, of course, expect this expectation to be less than $\sigma^2$, because the expectation of $(X_i - \mu)^2$, which does not have $(n-1)X_i/n$ subtracted out, is $\sigma^2$. If instead of the kernel $h$ above we used the kernel

$$g(X_i, \bar{X}) = \frac{n}{n-1}(X_i - \bar{X})^2,$$

we would have an expectation functional of interest; that is, one such that $E(g(\cdot))$ is something of interest, namely $\sigma^2$.

#### Some Second-Order U Statistics

A familiar second-order U-statistic is Gini's mean difference, in which $h(x_1, x_2) = |x_2 - x_1|$; for $n \ge 2$,

$$U = \frac{1}{\binom{n}{2}}\sum_{i<j} |X_j - X_i|.$$

Another common second-order U-statistic is the one-sample Wilcoxon statistic, in which $h(x_1, x_2) = I_{(-\infty,0]}(x_1 + x_2)$; for $n \ge 2$,

$$U = \frac{1}{\binom{n}{2}}\sum_{i<j} I_{(-\infty,0]}(X_i + X_j).$$

This is an unbiased estimator of $\Pr(X_1 + X_2 \le 0)$.

#### U Statistics over Multiple Populations

We can generalize U-statistics in an obvious way to independent random samples from more than one population. We do not require that the number of elements used as arguments to the kernel be the same; hence, the order of the kernel is a vector whose number of elements is the same as the number of populations. A common U-statistic involving two populations is the two-sample Wilcoxon statistic. For this, we assume that we have two samples $X_{11}, \ldots, X_{1n_1}$ and $X_{21}, \ldots, X_{2n_2}$. The kernel is $h(x_{1i}, x_{2j}) = I_{(-\infty,0]}(x_{2j} - x_{1i})$. The two-sample Wilcoxon statistic is

$$U = \frac{1}{n_1 n_2}\sum_{i=1}^{n_1}\sum_{j=1}^{n_2} I_{(-\infty,0]}(X_{2j} - X_{1i}).$$

This is an unbiased estimator of $\Pr(X_{11} \ge X_{21})$.

#### Properties of U Statistics

U-statistics have a number of interesting properties:

- They are useful in nonparametric inference because, among other reasons, they are asymptotically the same as the plug-in estimator that is based on the empirical CDF. Some of the important statistics used in modern computational statistical methods are U-statistics.
- By conditioning on the order statistics, we can show that U-statistics are UMVUE for their expectations.
- A sequence of adjusted kernels forms a martingale.
- If $E\big((h(X_1, \ldots, X_m))^2\big) < \infty$, it is a simple matter to work out the variance of the corresponding U-statistic.

#### V Statistics

As we have seen, a U-statistic is an unbiased estimator of an expectation functional; specifically, if $\Theta(P) = E(h(X_1, \ldots, X_m))$, the U-statistic with kernel $h$ is unbiased for $\Theta(P)$. Applying the functional to the ECDF $P_n$, we have

$$\Theta(P_n) = \frac{1}{n^m}\sum_{i_1=1}^n \cdots \sum_{i_m=1}^n h(X_{i_1}, \ldots, X_{i_m}) = V \text{ (say)},$$

which we call the V-statistic associated with the kernel $h$, or equivalently associated with the U-statistic with kernel $h$.

Recalling that $\Theta(P_n)$ in general is not unbiased for $\Theta(P)$, we do not expect a V-statistic to be unbiased in general. However, in view of the asymptotic properties of $P_n$, we might expect V-statistics to have good asymptotic properties. A simple example is the variance, for which the U-statistic given earlier is unbiased. The V-statistic with the same kernel is

$$V = \frac{1}{2n^2}\sum_{i=1}^n\sum_{j=1}^n (X_i - X_j)^2 = \frac{1}{2n^2}\sum_{i=1}^n\sum_{j=1}^n (X_i^2 + X_j^2 - 2X_iX_j) = \frac{n-1}{n}\,S^2,$$

where $S^2$ is the sample variance. The V-statistic is biased for the population variance. We have considered this estimator before; it has a smaller MSE than the unbiased U-statistic.
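The relation $V = \frac{n-1}{n}S^2$ for the variance kernel can be checked exactly. The sketch below enumerates all $n^m$ index tuples (with repeats), which is the defining difference between a V-statistic and a U-statistic:

```python
from itertools import product
from statistics import variance

def v_statistic(kernel, data, m):
    """Average of the kernel over all n^m tuples of sample items (with repeats)."""
    tuples = list(product(data, repeat=m))
    return sum(kernel(*t) for t in tuples) / len(tuples)

x = [1.0, 3.0, 3.0, 6.0, 10.0]
n = len(x)
h = lambda a, b: (a - b) ** 2 / 2   # variance kernel

v = v_statistic(h, x, 2)
# The V-statistic equals (n - 1)/n times the unbiased sample variance S^2,
# so it is biased downward for the population variance.
assert abs(v - (n - 1) / n * variance(x)) < 1e-12
```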
Texas A&M - ECEN - 455
Seat NumberName _ or ID Number _ ECEN 455 - Digital Communications Spring 2007 Midterm Exam #1Instructions: This exam is closed book. However, you may bring in one sheet of notes which is limited to one side of an 8.5x11 (inch) sheet of paper. Yo
Texas A&M - ECEN - 303
ECEN 303 - Random Signals and Systems Fall 2008 Practice Problems for Midterm Exam #2 Here are some practice problems from previous exams that are relevant to the upcoming midterm. Please contact me if you need help figuring out how to work these pro
Texas A&M - ELEN - 646
ECEN 646 - Statistical Communication Theory Problem Set #7, Date Assigned: 10/9/06 These problems are not to be turned in. However, you will be given a short quiz based on this problem set on 10/16/06. 1. Let X and Y be zero-mean Gaussian random vari
Texas A&M - STAT - 211
An Aggie does not lie, cheat, or steal or tolerate those who do.Homework 4 (Due date: February 28, 2005, Monday in class) 1. Make sure to write your name and section number on each page clearly. Do not write your SSN or ID. Staple everything t
Virginia Tech - CS - 5504
Introduction of Real-Time Embedded System DesignDepartment of Computer Science &amp; Engineering University of South Carolina Spring, 2002OutlinenIntroductionq qReal-time embedded systems System-level design methodologyn n nReal time schedul
Cornell - CS - 665
Lecture 9: Monte Carlo RenderingChapters 4 and 5 in Advanced GIHomework HW 1out, due Oct 5 Assignments done separately Might revisit this policy for later assignmentsFall 2004 Kavita Bala Computer Science Cornell University Kavita Bala, Comp
RIT - SMAM - 314
Information about SMAM 314-Dr. Grubers Section Instructor: Dr. M. Gruber office 08-3250 Phone 475-2541 email mjgsma@rit.edu Web page http:/www.rit.edu/~mjgsma/smam314winter02/hw.html Textbook:Basic Engineering Data Data Collection and Analysis -Steph
RIT - SMAM - 351
SMAM 351 HW#6 Due 4/16/04 1.Consider the continuous function k ,x 2 x5 A. Find k so that f(x) is a probability density function. f ( x )=2k k k dx = lim - 4 = =1 5 A 4x x 64 2Ak = 64B. Find (1)P(X&lt;4) 64 16 16 15 P(X &lt; 4) = 2 5 dx = - 4
RIT - SMAM - 351
Exam 2d Solution1. 7.5k = 60 k=8 1 1 63 = 82 642. A. P(X 7) = 1 P(X 6) = 1 .4862 = .5138 B. P(X 2) = .2616 C. = 10(.65) = 6.5 = 6.5(.35) = 1.508 D. p=.05 P(X 1) = 1 P(X = 0) = 1 .599 = .4013. A. P(X 2) = .9595 =3 B. P(X = 6) = P(X 6) P(
RIT - SMAM - 351
SMAM 351Quiz 6 dName_1. The exponential probability density function below of random variable T represents the time until a machine part fails in years. .2e -.2t g(t) = 0 t&gt;0 elsewhereA. What is the mean time before failure? (2 points) 1/.2 =
Cornell - CS - 172
CS/ENGRI 172, Fall 2003: Computation, Information, and Intelligence 10/3/03: Turing Machine Computability Turing Machine Universality Data/Program duality for Turing machines: Turing machine control tables can be written on a Turing machines tape as
Cornell - CS - 412
Variables vs. Registers/MemoryCS412/CS413 Introduction to Compilers Tim Teitelbaum Lecture 33: Register Allocation 18 Apr 07 Difference between IR and assembly code: IR (and abstract assembly) manipulate data in local and temporary variables Asse
Cornell - GEO - 326
326 - S TRUCTURAL G EOLOGY JOINTS &quot;The study of geologic history of fractures is notoriously difficult.&quot; Four general categories of observations: 1) 2) 3) 4) the distribution and geometry of the fracture system the surface features of the fractures t
Virginia Tech - ETD - 03102001
Cornell - CS - 172
Computation, Information, and Intelligence (ENGRI/CS/INFO/COGST 172), Spring 2007 5/2/07: Lecture 41 aid A zero-knowledge protocol; a look back before the end Topics: a zero-knowledge protocol (really, this time I swear); course review in preparatio
Virginia Tech - CS - 1044
CS 1044 Program 3Fall 2003Putting the basics together:Billing for VT Long DistanceIt's finally time to write a complete program. This project will use many of the C+ features that were illustrated in the source code you were given for the fir
Cornell - M - 171
CLT Simulation NotesStart by realizing 500 trials from a uniform [0,1] distribution. (Mean=.5, Standard Deviation=.sqrt(1/12)=.289) Now square each value to get the simulation of a new distribution. (Mean=.333, Standard Deviation=.298) Co