

### EE 527, Detection and Estimation Theory, Handout #5c

Course: EE 527, Fall 2009
School: Iowa State
Outline: Neyman-Pearson test for simple binary hypotheses; receiver operating characteristic (ROC); an introduction to classical composite hypothesis testing.

Reading: Chapter 3 in Kay-II; (part of) Chapter 5 in Levy.

### False-Alarm and Detection Probabilities for Binary Hypothesis Tests: A Reminder (see handout #5)

In binary hypothesis testing, we wish to identify which hypothesis is true (i.e. make the appropriate decision):

$$H_0: \theta \in \Theta^{(0)} \ \text{(null hypothesis)} \qquad \text{versus} \qquad H_1: \theta \in \Theta^{(1)} \ \text{(alternative hypothesis)}$$

where $\Theta^{(0)} \cup \Theta^{(1)} = \Theta$ and $\Theta^{(0)} \cap \Theta^{(1)} = \emptyset$.

Recall that a binary decision rule $\phi(x)$ maps the data space $\mathcal{X}$ to $\{0, 1\}$:

$$\phi(x) = \begin{cases} 1, & \text{decide } H_1, \\ 0, & \text{decide } H_0, \end{cases}$$

which partitions the data space $\mathcal{X}$ [i.e. the support of $f_{X\,|\,\theta}(x \,|\, \theta)$] into two regions:

$$\mathcal{X}_0 = \{x : \phi(x) = 0\} \qquad \text{and} \qquad \mathcal{X}_1 = \{x : \phi(x) = 1\}.$$

Recall the probabilities of false alarm and miss:

$$P_{\rm FA}(\phi(X), \theta) = E_{X\,|\,\theta}[\phi(X) \,|\, \theta] = \int_{\mathcal{X}_1} f_{X\,|\,\theta}(x \,|\, \theta)\, dx \quad \text{for } \theta \in \Theta^{(0)} \tag{1}$$

$$P_{\rm M}(\phi(X), \theta) = E_{X\,|\,\theta}[1 - \phi(X) \,|\, \theta] = 1 - \underbrace{\int_{\mathcal{X}_1} f_{X\,|\,\theta}(x \,|\, \theta)\, dx}_{P_{\rm D}(\phi(X),\,\theta)} = \int_{\mathcal{X}_0} f_{X\,|\,\theta}(x \,|\, \theta)\, dx \quad \text{for } \theta \in \Theta^{(1)} \tag{2}$$

and the probability of detection (correctly deciding $H_1$):

$$P_{\rm D}(\phi(X), \theta) = E_{X\,|\,\theta}[\phi(X) \,|\, \theta] = \int_{\mathcal{X}_1} f_{X\,|\,\theta}(x \,|\, \theta)\, dx \quad \text{for } \theta \in \Theta^{(1)}.$$

For simple hypotheses, $\Theta^{(0)} = \{\theta_0\}$, $\Theta^{(1)} = \{\theta_1\}$, and $\Theta = \{\theta_0, \theta_1\}$, the above expressions simplify, as shown next.

### Probabilities of False Alarm ($P_{\rm FA}$) and Detection ($P_{\rm D}$) for Simple Hypotheses

$$P_{\rm FA}(\phi(X), \theta_0) = \int_{\mathcal{X}_1} f_{X\,|\,\theta}(x \,|\, \theta_0)\, dx = \Pr\{\text{test statistic (ts)} > \gamma \,|\, \theta_0\} \tag{3}$$

$$P_{\rm D}(\phi(X), \theta_1) = \int_{\mathcal{X}_1} f_{X\,|\,\theta}(x \,|\, \theta_1)\, dx = \Pr\{\text{ts} > \gamma \,|\, \theta_1\}. \tag{4}$$

Comments:

(i) As the region $\mathcal{X}_1$ shrinks (i.e. $\gamma \nearrow \infty$), both of the above probabilities shrink toward zero.

(ii) As the region $\mathcal{X}_1$ grows (i.e. $\gamma \searrow 0$), both probabilities grow toward unity.

(iii) Observations (i) and (ii) do not imply equality between $P_{\rm FA}$ and $P_{\rm D}$; in most cases, as $\mathcal{X}_1$ grows, $P_{\rm D}$ grows more rapidly than $P_{\rm FA}$ (i.e. we had better be right more often than we are wrong).

(iv) However, the perfect case, where our rule is always right and never wrong ($P_{\rm D} = 1$ and $P_{\rm FA} = 0$), cannot occur when the conditional pdfs/pmfs $f_{X\,|\,\theta}(x \,|\, \theta_0)$ and $f_{X\,|\,\theta}(x \,|\, \theta_1)$ overlap.

(v) Thus, to increase the detection probability $P_{\rm D}$, we must also allow the false-alarm probability $P_{\rm FA}$ to increase. This behavior represents the fundamental tradeoff in hypothesis testing and detection theory, and motivates a (classical) approach to testing simple hypotheses, pioneered by Neyman and Pearson, discussed next.

The receiver operating characteristic (ROC) allows us to visualize the realm of achievable $P_{\rm FA}(\phi(X), \theta_0)$ and $P_{\rm D}(\phi(X), \theta_1)$: a point $(P_{\rm FA}, P_{\rm D})$ lies in the achievable (shaded) region of the ROC figure (not reproduced in this excerpt) if we can find a rule $\phi(X)$ such that $P_{\rm FA}(\phi(X), \theta_0) = P_{\rm FA}$ and $P_{\rm D}(\phi(X), \theta_1) = P_{\rm D}$.

### Neyman-Pearson Test for Simple Hypotheses

Bayesian tests are criticized because they require specification of a prior distribution (pmf or, in the composite-testing case, pdf) and the cost-function parameters $L(i \,|\, j)$. An alternative classical solution for simple hypotheses was developed by Neyman and Pearson: select the decision rule $\phi(X)$ that maximizes $P_{\rm D}(\phi(X), \theta_1)$ while ensuring that the probability of false alarm $P_{\rm FA}(\phi(X), \theta_0)$ is less than or equal to a specified level $\alpha$.

Setup:

- Simple hypothesis testing: $H_0: \theta = \theta_0$ versus $H_1: \theta = \theta_1$.
- Parametric data models $f_{X\,|\,\theta}(x \,|\, \theta_0)$, $f_{X\,|\,\theta}(x \,|\, \theta_1)$.
- No prior pdf/pmf on $\theta$ is available.

Define the set of all rules $\phi(X)$ whose probability of false alarm is less than or equal to a specified level $\alpha$:

$$\mathcal{D}_\alpha = \{\phi(X) : P_{\rm FA}(\phi(X), \theta_0) \le \alpha\}$$

see also (3). A Neyman-Pearson test $\phi_{\rm NP}(x)$ solves the constrained optimization problem

$$\phi_{\rm NP}(x) = \arg\max_{\phi(x) \in \mathcal{D}_\alpha} P_{\rm D}(\phi(x), \theta_1).$$

We apply Lagrange multipliers to solve this optimization problem; consider the Lagrangian

$$L(\phi(x), \lambda) = P_{\rm D}(\phi(x), \theta_1) + \lambda\, [\alpha - P_{\rm FA}(\phi(x), \theta_0)] \qquad \text{with } \lambda \ge 0.$$
A decision rule $\phi(x)$ will be optimal if it maximizes $L(\phi(x), \lambda)$ and satisfies the Karush-Kuhn-Tucker (KKT) condition

$$\lambda\, [\alpha - P_{\rm FA}(\phi(x), \theta_0)] = 0. \tag{5}$$

Upon using (3) and (4), the Lagrangian can be written as

$$L(\phi(x), \lambda) = \lambda \alpha + \int_{\mathcal{X}_1} \big[f_{X\,|\,\theta}(x \,|\, \theta_1) - \lambda\, f_{X\,|\,\theta}(x \,|\, \theta_0)\big]\, dx.$$

Consider maximizing $L(\phi(x), \lambda)$ with respect to $\phi(x)$ for a given $\lambda$: a point $x$ should be placed in $\mathcal{X}_1$ exactly when the integrand is positive. Then $\phi(x)$ must satisfy

$$\phi(x) = \begin{cases} 1, & \Lambda(x) > \lambda, \\ 0, & \Lambda(x) < \lambda, \end{cases} \tag{6}$$

where

$$\Lambda(x) = \frac{f_{X\,|\,\theta}(x \,|\, \theta_1)}{f_{X\,|\,\theta}(x \,|\, \theta_0)}$$

is the likelihood ratio. The values $x$ that satisfy $\Lambda(x) = \lambda$ can be allocated to either $\mathcal{X}_1$ or $\mathcal{X}_0$. To completely specify the optimal test, we need to select a $\lambda$ such that the KKT condition (5) holds, plus an allocation rule for those $x$ that satisfy $\Lambda(x) = \lambda$.

Now, consider two versions of (6) for a fixed threshold $\tau$:

$$\phi_{U,\tau}(x) = \begin{cases} 1, & \Lambda(x) \ge \tau, \\ 0, & \Lambda(x) < \tau, \end{cases} \qquad \text{and} \qquad \phi_{L,\tau}(x) = \begin{cases} 1, & \Lambda(x) > \tau, \\ 0, & \Lambda(x) \le \tau. \end{cases}$$

In the first case, all observations $x$ for which $\Lambda(x) = \tau$ are allocated to $\mathcal{X}_1$; in the second case, these observations are allocated to $\mathcal{X}_0$.

Consider the cumulative distribution function (cdf) of $\Lambda(X)$ under $H_0$:

$$F_{\Lambda\,|\,\theta}(l \,|\, \theta_0) = \Pr\{\Lambda \le l \,|\, \theta_0\}$$

and define $f_0 = F_{\Lambda\,|\,\theta}(0 \,|\, \theta_0) = \Pr\{\Lambda \le 0 \,|\, \theta_0\}$. Recall that the cdf $F_{\Lambda\,|\,\theta}(l \,|\, \theta_0)$ must be nondecreasing and right-continuous, but may have discontinuities.

Consider three cases, depending on $\alpha$:

(i) When $1 - \alpha < f_0$, i.e.

$$1 - f_0 < \alpha \tag{7}$$

we select the threshold $\tau = 0$ and apply the rule

$$\phi_{L,0}(x) = \begin{cases} 1, & \Lambda(x) > 0, \\ 0, & \Lambda(x) = 0. \end{cases} \tag{8}$$

In this case, the KKT condition (5) holds (with $\lambda = 0$) and, therefore, the test (8) is optimal; its probability of false alarm is

$$P_{\rm FA}(\phi_{L,0}(x), \theta_0) = 1 - f_0 < \alpha$$

see (7). An example of this case corresponds to $\tau_1 = 0$ and $1 - \alpha_1$ in the accompanying cdf figure (not reproduced in this excerpt).

(ii) Suppose that $1 - \alpha \ge f_0$, i.e.

$$1 - f_0 \ge \alpha \tag{9}$$

and there exists a $\tau$ such that

$$F_{\Lambda\,|\,\theta}(\tau \,|\, \theta_0) = 1 - \alpha. \tag{10}$$

Then, by selecting this $\tau$ as the threshold and using

$$\phi_{L,\tau}(x) = \begin{cases} 1, & \Lambda(x) > \tau, \\ 0, & \Lambda(x) \le \tau, \end{cases} \tag{11}$$

we obtain a test with false-alarm probability

$$P_{\rm FA}(\phi_{L,\tau}(x), \theta_0) = 1 - F_{\Lambda\,|\,\theta}(\tau \,|\, \theta_0) = \alpha$$

see (10); the KKT condition (5) holds, and the test (11) is optimal. An example of this case corresponds to $\tau_2$ and $1 - \alpha_2$ in the same figure.

(iii) Suppose that $1 - f_0 \ge \alpha$ as in (ii), but the cdf $F_{\Lambda\,|\,\theta}(l \,|\, \theta_0)$ has a discontinuity point $\tau > 0$ such that

$$F_{\Lambda\,|\,\theta}(\tau^- \,|\, \theta_0) < 1 - \alpha < F_{\Lambda\,|\,\theta}(\tau^+ \,|\, \theta_0)$$

where $F_{\Lambda\,|\,\theta}(\tau^- \,|\, \theta_0)$ and $F_{\Lambda\,|\,\theta}(\tau^+ \,|\, \theta_0)$ denote the left and right limits of $F_{\Lambda\,|\,\theta}(l \,|\, \theta_0)$ at $l = \tau$. If this case happens in practice, we can try to avoid the problem by changing our specified $\alpha$, which is anyway not God-given but chosen rather arbitrarily; we should pick a value of $\alpha$ that satisfies the KKT condition. Suppose, however, that we are not allowed to change $\alpha$; this gives us a chance to practice some basic probability. First, note that

- $\phi_{L,\tau}(x)$ has false-alarm probability $P_{\rm FA}(\phi_{L,\tau}(x), \theta_0) = 1 - F_{\Lambda\,|\,\theta}(\tau^+ \,|\, \theta_0) < \alpha$,
- $\phi_{U,\tau}(x)$ has false-alarm probability $P_{\rm FA}(\phi_{U,\tau}(x), \theta_0) = 1 - F_{\Lambda\,|\,\theta}(\tau^- \,|\, \theta_0) > \alpha$,

and the KKT optimality condition (5) requires that $P_{\rm FA}(\phi(x), \theta_0) = \alpha$. We focus on tests of the form (6) and construct the optimal test via randomization. Define the probability

$$p = \frac{\alpha - P_{\rm FA}(\phi_{L,\tau}(x), \theta_0)}{P_{\rm FA}(\phi_{U,\tau}(x), \theta_0) - P_{\rm FA}(\phi_{L,\tau}(x), \theta_0)}$$

which clearly satisfies $0 < p < 1$. Select $\phi_{U,\tau}(x)$ with probability $p$ and $\phi_{L,\tau}(x)$ with probability $1 - p$. This test indeed has the form (6); its probability of false alarm is

$$P_{\rm FA}(\phi(x), \theta_0) = P_{\rm FA}(\phi_{L,\tau}(x), \theta_0) + p\, \big[P_{\rm FA}(\phi_{U,\tau}(x), \theta_0) - P_{\rm FA}(\phi_{L,\tau}(x), \theta_0)\big] = \alpha.$$

Since the KKT condition (5) is satisfied, the randomized test is optimal:

$$\phi(x) = \begin{cases} 1, & \Lambda(x) > \tau, \\ 1 \text{ w.p. } p \ \text{ and } \ 0 \text{ w.p. } 1 - p, & \Lambda(x) = \tau, \\ 0, & \Lambda(x) < \tau. \end{cases}$$
### ROC Properties when the Likelihood Ratio is a Continuous Random Variable Given $\theta$

Based on the Neyman-Pearson theory, if we set $P_{\rm FA} = \alpha$, then the test that maximizes $P_{\rm D}$ must be a likelihood-ratio test of the form (6). Thus, the ROC curve separating achievable and non-achievable pairs $(P_{\rm FA}, P_{\rm D})$ corresponds to the family of likelihood-ratio tests. For simplicity, we focus here on the case where the likelihood ratio is a continuous random variable given $\theta$. First, note that, for the likelihood-ratio test with threshold $\tau$,

$$P_{\rm FA}(\tau) = \int_{\mathcal{X}_1} f_{X\,|\,\theta}(x \,|\, \theta_0)\, dx = \Pr\{\Lambda(X) > \tau \,|\, \theta_0\} = \int_\tau^{+\infty} f_{\Lambda\,|\,\theta}(l \,|\, \theta_0)\, dl \tag{12}$$

$$P_{\rm D}(\tau) = \int_{\mathcal{X}_1} f_{X\,|\,\theta}(x \,|\, \theta_1)\, dx = \Pr\{\Lambda(X) > \tau \,|\, \theta_1\} = \int_\tau^{+\infty} f_{\Lambda\,|\,\theta}(l \,|\, \theta_1)\, dl. \tag{13}$$

Under the continuity assumption for the likelihood ratio, as we vary $\tau$ between $0$ and $+\infty$, the point $\big(P_{\rm FA}(\tau), P_{\rm D}(\tau)\big)$ moves continuously along the ROC curve. If we set $\tau = 0$, we always select $H_1$ and, therefore, $P_{\rm FA}(0) = P_{\rm D}(0) = 1$. Conversely, if we set $\tau = +\infty$, we always select $H_0$ and, therefore, $P_{\rm FA}(+\infty) = P_{\rm D}(+\infty) = 0$. In summary,

**ROC Property 1.** If the likelihood ratio is a continuous random variable given $\theta$, the points $(0, 0)$ and $(1, 1)$ belong to the ROC.

Now, differentiate (12) and (13) with respect to $\tau$:

$$\frac{dP_{\rm FA}(\tau)}{d\tau} = -f_{\Lambda\,|\,\theta}(\tau \,|\, \theta_0), \qquad \frac{dP_{\rm D}(\tau)}{d\tau} = -f_{\Lambda\,|\,\theta}(\tau \,|\, \theta_1)$$

implying

$$\frac{dP_{\rm D}(\tau)}{dP_{\rm FA}(\tau)} = \frac{f_{\Lambda\,|\,\theta}(\tau \,|\, \theta_1)}{f_{\Lambda\,|\,\theta}(\tau \,|\, \theta_0)} = \tau$$

(the last equality holds because the likelihood ratio of the random variable $\Lambda(X)$ is $\Lambda$ itself, so the ratio of its conditional pdfs at $l = \tau$ equals $\tau$). In summary,

**ROC Property 2.** If the likelihood ratio is a continuous random variable given $\theta$, the slope of the ROC at the point $(P_{\rm FA}(\tau), P_{\rm D}(\tau))$ is equal to the threshold $\tau$ of the corresponding likelihood-ratio test. In particular, this result implies that the slope of the ROC is $\tau = +\infty$ at $(0, 0)$ and $\tau = 0$ at $(1, 1)$.

**ROC Property 3.** The domain of achievable pairs $(P_{\rm FA}, P_{\rm D})$ is convex and the ROC curve is concave. This property holds in general, including the case where the likelihood ratio is a mixed or discrete random variable given $\theta$.

HW: Prove ROC Property 3.
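The case-(iii) randomized Neyman-Pearson construction above can be made concrete for a discrete observation, where the cdf of the likelihood ratio has jumps. A minimal sketch in Python (standard library only; the binomial setting $n = 5$, $p_0 = 0.3$, $p_1 = 0.7$ and level $\alpha = 0.1$ are arbitrary illustrative choices, not from the notes):

```python
import math

n, p0, p1, alpha = 5, 0.3, 0.7, 0.1   # hypothetical binomial setting

def pmf(x, p):
    """Binomial(n, p) pmf."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

# Lambda(x) = pmf(x, p1) / pmf(x, p0) is increasing in x here, so thresholding
# the likelihood ratio is equivalent to thresholding x itself.
def tail0(k):
    """Pr{X > k | H0}."""
    return sum(pmf(x, p0) for x in range(k + 1, n + 1))

# Smallest k whose strict-exceedance probability under H0 is <= alpha.
k = min(k for k in range(n + 1) if tail0(k) <= alpha)

# Randomize on the boundary {X = k} so that the size is exactly alpha (KKT).
p_rand = (alpha - tail0(k)) / pmf(k, p0)

pfa = tail0(k) + p_rand * pmf(k, p0)   # achieved false-alarm probability
pd = sum(pmf(x, p1) for x in range(k + 1, n + 1)) + p_rand * pmf(k, p1)
print(k, round(p_rand, 4), pfa, round(pd, 4))
```

Without randomization, the achievable sizes would be restricted to the discrete set of binomial tail probabilities; randomizing on the boundary point fills the gap and meets the KKT condition exactly.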
**ROC Property 4.** All points on the ROC curve satisfy $P_{\rm D}(\tau) \ge P_{\rm FA}(\tau)$. This property holds in general, including the case where the likelihood ratio is a mixed or discrete random variable given $\theta$.

### Example: Simple Hypotheses, Coherent Detection in Gaussian Noise with Known Covariance Matrix

Simple hypotheses: the parameter space and its partitions are $\Theta = \{\theta_0, \theta_1\}$, $\Theta^{(0)} = \{\theta_0\}$, $\Theta^{(1)} = \{\theta_1\}$. The measurement vector $X$ given $\theta$ is modeled using

$$f_{X\,|\,\theta}(x \,|\, \theta) = \mathcal{N}(x \,|\, \theta, C) = \frac{1}{\sqrt{|2\pi C|}} \exp\!\big[-\tfrac{1}{2}(x - \theta)^T C^{-1} (x - \theta)\big]$$

where $C$ is a known positive definite covariance matrix. Our likelihood-ratio test is

$$\Lambda(x) = \frac{f_{X\,|\,\theta}(x \,|\, \theta_1)}{f_{X\,|\,\theta}(x \,|\, \theta_0)} = \frac{\exp[-\tfrac{1}{2}(x - \theta_1)^T C^{-1} (x - \theta_1)]}{\exp[-\tfrac{1}{2}(x - \theta_0)^T C^{-1} (x - \theta_0)]} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \tau.$$

Therefore,

$$-\tfrac{1}{2}(x - \theta_1)^T C^{-1} (x - \theta_1) + \tfrac{1}{2}(x - \theta_0)^T C^{-1} (x - \theta_0) \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \ln\tau$$

i.e.

$$(\theta_1 - \theta_0)^T C^{-1} \big[x - \tfrac{1}{2}(\theta_0 + \theta_1)\big] \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \ln\tau$$

and, finally,

$$T(x) = s^T C^{-1} x \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma = \ln\tau + \tfrac{1}{2}(\theta_1 - \theta_0)^T C^{-1} (\theta_1 + \theta_0)$$

where we have defined $s = \theta_1 - \theta_0$.

**False-alarm and detection/miss probabilities.** Given $\theta$, $T(X)$ is a linear combination of Gaussian random variables, implying that it is also Gaussian, with mean and variance

$$E_{X\,|\,\theta}[T(X) \,|\, \theta] = s^T C^{-1} \theta, \qquad {\rm var}_{X\,|\,\theta}[T(X) \,|\, \theta] = s^T C^{-1} s \quad \text{(not a function of } \theta\text{)}.$$

Now, standardizing $T(X)$,

$$P_{\rm FA} = \Pr\{T(X) > \gamma \,|\, \theta_0\} = \Pr\Big\{\frac{T(X) - s^T C^{-1}\theta_0}{\sqrt{s^T C^{-1} s}} > \frac{\gamma - s^T C^{-1}\theta_0}{\sqrt{s^T C^{-1} s}} \,\Big|\, \theta_0\Big\} = Q\Big(\frac{\gamma - s^T C^{-1}\theta_0}{\sqrt{s^T C^{-1} s}}\Big) \tag{14}$$

and

$$P_{\rm D} = 1 - P_{\rm M} = \Pr\{T(X) > \gamma \,|\, \theta_1\} = Q\Big(\frac{\gamma - s^T C^{-1}\theta_1}{\sqrt{s^T C^{-1} s}}\Big).$$

We use (14) to obtain a $\gamma$ that satisfies the specified $P_{\rm FA}$:

$$\gamma = \sqrt{s^T C^{-1} s}\; Q^{-1}(P_{\rm FA}) + s^T C^{-1}\theta_0$$

implying

$$P_{\rm D} = Q\big(Q^{-1}(P_{\rm FA}) - \sqrt{s^T C^{-1} s}\big) = Q\big(Q^{-1}(P_{\rm FA}) - d\big) \tag{15}$$

where

$$d = \sqrt{s^T C^{-1} s} = \sqrt{(\theta_1 - \theta_0)^T C^{-1} (\theta_1 - \theta_0)}$$

is the deflection coefficient.

### Decentralized Detection for Simple Hypotheses

Consider a decentralized detection scenario. Assumptions: the observations $X[n]$, $n = 0, 1, \ldots, N-1$, made at $N$ spatially distributed sensors (nodes), follow the same marginal probabilistic model

$$f_{X\,|\,\theta}(x[n] \,|\, \theta) \tag{16}$$

and are conditionally independent given $\theta$, which may not always be reasonable, but leads to an easy solution. We wish to test

$$H_0: \theta = \theta_0 \qquad \text{versus} \qquad H_1: \theta = \theta_1.$$

Each node $n$ makes a hard local decision $d[n]$ based on its local observation $x[n]$ and sends it to the headquarters (fusion center), which collects all the local decisions and makes the final global decision, $H_0$ versus $H_1$. This structure is clearly suboptimal: it is easy to construct a better decision strategy in which each node sends its (quantized, in practice) likelihood ratio to the fusion center, rather than the decision only. However, such a strategy would have a higher communication (energy) cost.

The false-alarm and detection probabilities of each node's local decision rule can be computed using (16). Suppose that we have obtained them for each $n$: $P_{{\rm FA},n}$, $P_{{\rm D},n}$, $n = 0, 1, \ldots, N-1$. Note that

$$p_{D(n)\,|\,\theta}(d_n \,|\, \theta_1) = P_{{\rm D},n}^{d_n}\, (1 - P_{{\rm D},n})^{1 - d_n} \quad \text{(Bernoulli pmf)}$$

and, similarly,

$$p_{D(n)\,|\,\theta}(d_n \,|\, \theta_0) = P_{{\rm FA},n}^{d_n}\, (1 - P_{{\rm FA},n})^{1 - d_n} \quad \text{(Bernoulli pmf)}$$

where $P_{{\rm FA},n}$ is the $n$th sensor's local false-alarm probability. Now,

$$\ln \Lambda(d) = \sum_{n=0}^{N-1} \ln \frac{p_{D(n)\,|\,\theta}(d_n \,|\, \theta_1)}{p_{D(n)\,|\,\theta}(d_n \,|\, \theta_0)} = \sum_{n=0}^{N-1} \ln \frac{P_{{\rm D},n}^{d_n}\, (1 - P_{{\rm D},n})^{1 - d_n}}{P_{{\rm FA},n}^{d_n}\, (1 - P_{{\rm FA},n})^{1 - d_n}} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \ln\tau.$$
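As a quick numerical sketch of the detection-probability formula $P_{\rm D} = Q(Q^{-1}(P_{\rm FA}) - d)$ from (15), the following Python snippet (standard library only) uses a toy two-dimensional problem with a diagonal $C$ so that $C^{-1}$ is trivial; the means, covariance, and $\alpha$ are arbitrary illustrative values:

```python
import math, random
from statistics import NormalDist

nd = NormalDist()
Q = lambda z: 1.0 - nd.cdf(z)           # Gaussian tail function Q(z)
Qinv = lambda p: nd.inv_cdf(1.0 - p)    # its inverse

theta0, theta1 = [0.0, 0.0], [1.0, 0.5]          # hypothetical mean vectors
c = [1.0, 0.25]                                  # diagonal of C, so C^-1 = diag(1/c)
s = [t1 - t0 for t0, t1 in zip(theta0, theta1)]  # s = theta1 - theta0

v = sum(si * si / ci for si, ci in zip(s, c))    # s^T C^-1 s = var of T(X)
d = math.sqrt(v)                                 # deflection coefficient
alpha = 0.1
gamma = d * Qinv(alpha) + sum(si * t0i / ci for si, t0i, ci in zip(s, theta0, c))

pd_theory = Q(Qinv(alpha) - d)                   # equation (15)

# Monte Carlo check: draw X ~ N(theta1, C), decide H1 when T(x) = s^T C^-1 x > gamma.
random.seed(1)
trials = 100_000
hits = 0
for _ in range(trials):
    x = [random.gauss(mu, math.sqrt(ci)) for mu, ci in zip(theta1, c)]
    T = sum(si * xi / ci for si, xi, ci in zip(s, x, c))
    hits += T > gamma
pd_mc = hits / trials
print(round(d, 3), round(pd_theory, 4), round(pd_mc, 4))
```

For a general (non-diagonal) positive definite $C$, one would solve $C y = s$ for $y = C^{-1}s$ instead; the diagonal case keeps the sketch dependency-free.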
To simplify the above expression further, we now focus on the case where all sensors have identical performance:

$$P_{{\rm D},n} = P_{\rm D}, \qquad P_{{\rm FA},n} = P_{\rm FA}$$

i.e. all local decision thresholds at the nodes are identical. Define the number of sensors deciding locally to support $H_1$:

$$u_1 = \sum_{n=0}^{N-1} d[n].$$

Then, the log-likelihood ratio becomes

$$\ln \Lambda(d) = u_1 \ln \frac{P_{\rm D}}{P_{\rm FA}} + (N - u_1) \ln \frac{1 - P_{\rm D}}{1 - P_{\rm FA}} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \ln\tau$$

or

$$u_1 \ln \frac{P_{\rm D}\,(1 - P_{\rm FA})}{P_{\rm FA}\,(1 - P_{\rm D})} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \ln\tau + N \ln \frac{1 - P_{\rm FA}}{1 - P_{\rm D}}. \tag{17}$$

Clearly, each node's local decision is meaningful only if $P_{\rm D} > P_{\rm FA}$, which implies

$$\frac{P_{\rm D}\,(1 - P_{\rm FA})}{P_{\rm FA}\,(1 - P_{\rm D})} > 1$$

the logarithm of which is therefore positive, and the decision rule (17) further simplifies to the counting rule

$$u_1 \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma.$$

The Neyman-Pearson performance analysis of this detector is easy: the random variable $U_1$ is binomial given $\theta$ (i.e. conditional on the hypothesis) and, therefore,

$$\Pr\{U_1 = u_1 \,|\, \theta\} = \binom{N}{u_1} p^{u_1} (1 - p)^{N - u_1}$$

where $p = P_{\rm FA}$ under $H_0$ and $p = P_{\rm D}$ under $H_1$. Hence, the "global" false-alarm probability is

$$P_{\rm FA,global} = \Pr\{U_1 > \gamma \,|\, \theta_0\} = \sum_{u_1 > \gamma} \binom{N}{u_1} P_{\rm FA}^{u_1} (1 - P_{\rm FA})^{N - u_1}.$$

### An Introduction to Classical Composite Hypothesis Testing

First, recall that, in composite testing of two hypotheses, we have $\Theta^{(0)}$ and $\Theta^{(1)}$ that form a partition of the parameter space $\Theta$:

$$\Theta^{(0)} \cup \Theta^{(1)} = \Theta, \qquad \Theta^{(0)} \cap \Theta^{(1)} = \emptyset$$

and that we wish to identify which of the two hypotheses is true:

$$H_0: \theta \in \Theta^{(0)} \ \text{(null hypothesis)} \qquad \text{versus} \qquad H_1: \theta \in \Theta^{(1)} \ \text{(alternative hypothesis)}.$$

Here, we adopt the classical Neyman-Pearson approach: given an upper bound $\alpha$ on the false-alarm probability, maximize the detection probability. The fact that $H_0$ is composite means that the false-alarm probability for a rule $\phi(X)$ is a function of $\theta \in \Theta^{(0)}$: $P_{\rm FA}(\phi(X), \theta)$. Therefore, to satisfy the upper bound $\alpha$, we consider all tests $\phi(X)$ such that

$$\max_{\theta \in \Theta^{(0)}} P_{\rm FA}(\phi(X), \theta) \le \alpha. \tag{18}$$

In this context,

$$\max_{\theta \in \Theta^{(0)}} P_{\rm FA}(\phi(X), \theta) \tag{19}$$

is typically referred to as the *size* of the test $\phi(X)$. Therefore, condition (18) states that we focus on tests whose size is upper-bounded by $\alpha$.

**Definition.** Among all tests $\phi(X)$ whose size is upper-bounded by $\alpha$ [i.e. (18) holds], we say that $\phi_{\rm UMP}(X)$ is a *uniformly most powerful* (UMP) test if it satisfies

$$P_{\rm D}(\phi_{\rm UMP}(X), \theta) \ge P_{\rm D}(\phi(X), \theta) \quad \text{for all } \theta \in \Theta^{(1)}.$$

This is a very strong statement, and very few hypothesis-testing problems have UMP tests. Note that Neyman-Pearson tests for simple hypotheses are UMP.

Hence, to find a UMP test for composite hypotheses, we need to first write a likelihood ratio for the simple hypothesis test with $\Theta^{(0)} = \{\theta_0\}$, $\Theta^{(1)} = \{\theta_1\}$, and $\Theta = \{\theta_0, \theta_1\}$, and then transform this likelihood ratio in such a way that the unknown quantities (e.g. $\theta_0$ and $\theta_1$) disappear from the test statistic.

(1) If such a transformation can be found, there is hope that a UMP test exists.

(2) However, we still need to figure out how to set a decision threshold ($\gamma$, say) such that the upper bound (18) is satisfied.

### Example 1: Detecting a Positive DC Level in AWGN (versus zero DC level)

Consider the following composite hypothesis-testing problem:

$$H_0: \theta = 0 \ \ \text{i.e. } \Theta^{(0)} = \{0\} \qquad \text{versus} \qquad H_1: \theta > 0 \ \ \text{i.e. } \Theta^{(1)} = (0, +\infty)$$

where the measurements $X[0], X[1], \ldots, X[N-1]$ are conditionally independent, identically distributed (i.i.d.) given $\theta = \theta$, modeled as

$$\{X[n] \,|\, \theta = \theta\} = \theta + W[n], \qquad n = 0, 1, \ldots, N-1$$

with $W[n]$ zero-mean white Gaussian noise with known variance $\sigma^2$, i.e. $W[n] \sim \mathcal{N}(0, \sigma^2)$, implying

$$f_{X\,|\,\theta}(x \,|\, \theta) = \frac{1}{\sqrt{(2\pi\sigma^2)^N}} \exp\Big[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n] - \theta)^2\Big] \tag{20}$$

where $x = [x[0], x[1], \ldots, x[N-1]]^T$. A sufficient statistic for $\theta$ is the sample mean

$$\bar{x} = \frac{1}{N} \sum_{n=0}^{N-1} x[n].$$

Now, find the pdf of $\bar{x}$ given $\theta = \theta$:

$$f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, \theta) = \mathcal{N}(\bar{x} \,|\, \theta, \sigma^2/N). \tag{21}$$

We start by writing the classical Neyman-Pearson test for the simple hypotheses with $\Theta^{(0)}_{\rm simple} = \{0\}$ and $\Theta^{(1)}_{\rm simple} = \{\theta_1\}$, $\theta_1 \in (0, +\infty)$:

$$\frac{f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, \theta_1)}{f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, 0)} = \frac{(2\pi\sigma^2/N)^{-1/2} \exp[-\frac{1}{2\sigma^2/N}(\bar{x} - \theta_1)^2]}{(2\pi\sigma^2/N)^{-1/2} \exp[-\frac{1}{2\sigma^2/N}\bar{x}^2]} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \tau.$$

Taking the log etc. leads to

$$\theta_1\, \bar{x} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma'.$$

Since we know that $\theta_1 > 0$, we can divide both sides by $\theta_1$ and accept $H_1$ if

$$T(x) = \bar{x} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma.$$

Hence, we transformed our likelihood ratio in such a way that $\theta_1$ disappears from the test statistic, i.e. we accomplished step (1) above. Now, on to step (2): how do we determine the threshold $\gamma$ such that the upper bound (18) is satisfied? Based on (21), we know $f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, 0) = \mathcal{N}(\bar{x} \,|\, 0, \sigma^2/N)$ and, therefore, standardizing $\bar{X}$,

$$P_{\rm FA}(\phi(X), 0) = \Pr\{\bar{X} > \gamma \,|\, \theta = 0\} = \Pr\Big\{\frac{\bar{X} - 0}{\sqrt{\sigma^2/N}} > \frac{\gamma - 0}{\sqrt{\sigma^2/N}}\Big\} = Q\Big(\frac{\gamma}{\sqrt{\sigma^2/N}}\Big).$$

Note that

$$\max_{\theta \in \Theta^{(0)}} P_{\rm FA}(\phi(X), \theta) = P_{\rm FA}(\phi(X), 0) = Q\Big(\frac{\gamma}{\sqrt{\sigma^2/N}}\Big) = \alpha$$

see (18) and (19). The most powerful test is achieved if the upper bound in (18) is reached with equality:

$$\gamma = \sqrt{\frac{\sigma^2}{N}}\; Q^{-1}(\alpha). \tag{22}$$

Hence, we have accomplished step (2), since this yields exactly size $\alpha$ for our test $\phi(X)$. To study the performance of the above test, we substitute (22) into the power function:

$$\Pr\{\bar{X} > \gamma \,|\, \theta\} = \Pr\Big\{\frac{\bar{X} - \theta}{\sqrt{\sigma^2/N}} > \frac{\gamma - \theta}{\sqrt{\sigma^2/N}}\Big\} = Q\Big(\frac{\gamma - \theta}{\sqrt{\sigma^2/N}}\Big) = Q\Big(Q^{-1}(\alpha) - \frac{\theta}{\sqrt{\sigma^2/N}}\Big). \tag{23}$$

### Example 2: Detecting a Positive DC Level in AWGN (versus nonpositive DC level)

Consider the following composite hypothesis-testing problem:

$$H_0: \theta \le 0 \ \ \text{i.e. } \Theta^{(0)} = (-\infty, 0] \qquad \text{versus} \qquad H_1: \theta > 0 \ \ \text{i.e. } \Theta^{(1)} = (0, +\infty)$$

where the measurements $X[0], X[1], \ldots, X[N-1]$ are conditionally i.i.d. given $\theta = \theta$, modeled as

$$\{X[n] \,|\, \theta = \theta\} = \theta + W[n], \qquad n = 0, 1, \ldots, N-1$$

with $W[n]$ zero-mean white Gaussian noise with known variance $\sigma^2$, i.e. $W[n] \sim \mathcal{N}(0, \sigma^2)$, implying

$$f_{X\,|\,\theta}(x \,|\, \theta) = \frac{1}{\sqrt{(2\pi\sigma^2)^N}} \exp\Big[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n] - \theta)^2\Big] \tag{24}$$

where $x = [x[0], x[1], \ldots, x[N-1]]^T$. A sufficient statistic for $\theta$ is the sample mean $\bar{x} = \frac{1}{N}\sum_{n=0}^{N-1} x[n]$, and

$$f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, \theta) = \mathcal{N}(\bar{x} \,|\, \theta, \sigma^2/N). \tag{25}$$

We start by writing the classical Neyman-Pearson test for the simple hypotheses with $\Theta^{(0)}_{\rm simple} = \{\theta_0\}$ and $\Theta^{(1)}_{\rm simple} = \{\theta_1\}$, where $\theta_0 \in (-\infty, 0]$ and $\theta_1 \in (0, +\infty)$, implying $\theta_0 < \theta_1$:

$$\frac{f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, \theta_1)}{f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, \theta_0)} = \frac{(2\pi\sigma^2/N)^{-1/2} \exp[-\frac{1}{2\sigma^2/N}(\bar{x} - \theta_1)^2]}{(2\pi\sigma^2/N)^{-1/2} \exp[-\frac{1}{2\sigma^2/N}(\bar{x} - \theta_0)^2]} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \tau.$$

Taking the log etc. leads to

$$(\theta_1 - \theta_0)\, \bar{x} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma'$$

and, since $\theta_0 < \theta_1$, to

$$T(x) = \bar{x} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma.$$

Hence, we transformed our likelihood ratio in such a way that $\theta_0$ and $\theta_1$ disappear from the test statistic, i.e. we accomplished step (1) above. The power function of this test is

$$\Pr\{\bar{X} > \gamma \,|\, \theta\} = \Pr\Big\{\frac{\bar{X} - \theta}{\sigma/\sqrt{N}} > \frac{\gamma - \theta}{\sigma/\sqrt{N}}\Big\} = Q\Big(\frac{\gamma - \theta}{\sigma/\sqrt{N}}\Big)$$

which is an increasing function of $\theta$. Recall the definition (19) of test size:

$$\max_{\theta \in \Theta^{(0)}} P_{\rm FA}(\phi(X), \theta) = \max_{\theta \in (-\infty, 0]} Q\Big(\frac{\gamma - \theta}{\sigma/\sqrt{N}}\Big) = Q\Big(\frac{\gamma}{\sigma/\sqrt{N}}\Big)$$

attained at $\theta = 0$. The most powerful test is achieved if the upper bound in (18) is reached with equality:

$$\gamma = \frac{\sigma}{\sqrt{N}}\; Q^{-1}(\alpha).$$

Hence, we have accomplished step (2), since this yields exactly size $\alpha$ for our test $\phi(X)$.

### Example 3: Detecting a Completely Unknown DC Level in AWGN

Consider now the composite hypothesis-testing problem:

$$H_0: \theta = 0 \ \ \text{i.e. } \Theta^{(0)} = \{0\} \qquad \text{versus} \qquad H_1: \theta \ne 0 \ \ \text{i.e. } \Theta^{(1)} = (-\infty, +\infty) \setminus \{0\}$$

where the measurements $X[0], X[1], \ldots, X[N-1]$ are conditionally i.i.d. given $\theta = \theta$, following

$$f_{X\,|\,\theta}(x \,|\, \theta) = \frac{1}{\sqrt{(2\pi\sigma^2)^N}} \exp\Big[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n] - \theta)^2\Big]$$

and $x = [x[0], x[1], \ldots, x[N-1]]^T$. A sufficient statistic for $\theta$ is $\bar{x} = \frac{1}{N}\sum_{n=0}^{N-1} x[n]$, and the pdf of $\bar{x}$ given $\theta = \theta$ is

$$f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, \theta) = \mathcal{N}(\bar{x} \,|\, \theta, \sigma^2/N). \tag{26}$$

We start by writing the classical Neyman-Pearson test for the simple hypotheses with $\Theta^{(0)} = \{0\}$ and $\Theta^{(1)} = \{\theta_1 \ne 0\}$:

$$\theta_1\, \bar{x} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma'.$$

We cannot accomplish step (1), since $\theta_1$ (in particular, its sign) cannot be removed from the test statistic; therefore, a UMP test does not exist for the above problem.

### Monotone Likelihood Ratio (MLR) Criterion

Consider a scalar parameter $\theta$.
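As a numerical aside before developing the MLR criterion: the threshold (22) and power (23) of Example 1 are easy to verify by simulation. A minimal sketch in Python (standard library only; the values $N = 16$, $\sigma = 1$, $\alpha = 0.1$, and the tested $\theta = 0.5$ are arbitrary illustrative choices):

```python
import math, random
from statistics import NormalDist

nd = NormalDist()
Q = lambda z: 1.0 - nd.cdf(z)           # Gaussian tail function
Qinv = lambda p: nd.inv_cdf(1.0 - p)    # its inverse

N, sigma, alpha, theta = 16, 1.0, 0.1, 0.5   # illustrative values
se = sigma / math.sqrt(N)                    # std of the sample mean
gamma = se * Qinv(alpha)                     # threshold, equation (22)
power_theory = Q(Qinv(alpha) - theta / se)   # power, equation (23)

# Monte Carlo: xbar | theta ~ N(theta, sigma^2 / N); decide H1 when xbar > gamma.
random.seed(2)
trials = 200_000
pfa_mc = sum(random.gauss(0.0, se) > gamma for _ in range(trials)) / trials
pow_mc = sum(random.gauss(theta, se) > gamma for _ in range(trials)) / trials
print(round(gamma, 4), round(pfa_mc, 3), round(pow_mc, 4), round(power_theory, 4))
```

Since $\bar{x}$ is sufficient, simulating $\bar{x}$ directly (rather than all $N$ samples) gives the same check at a fraction of the cost.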
We say that $f_{X\,|\,\theta}(x \,|\, \theta)$ belongs to the *monotone likelihood ratio* (MLR) family if the pdfs (or pmfs) from this family satisfy the identifiability condition for $\theta$ (i.e. these pdfs are distinct for different values of $\theta$) and there is a scalar statistic $T(x)$ such that, for $\theta_0 < \theta_1$, the likelihood ratio

$$\Lambda(x; \theta_0, \theta_1) = \frac{f_{X\,|\,\theta}(x \,|\, \theta_1)}{f_{X\,|\,\theta}(x \,|\, \theta_0)}$$

is a monotonically increasing function of $T(x)$. If $f_{X\,|\,\theta}(x \,|\, \theta)$ belongs to the MLR family, then use the following test:

$$\phi(x) = \begin{cases} 1, & T(x) \ge \gamma, \\ 0, & T(x) < \gamma \end{cases}$$

and set

$$\alpha = P_{\rm FA}(\phi(X), \theta_0) = \Pr\{T(X) \ge \gamma \,|\, \theta_0\} \tag{27}$$

i.e. use this condition to find the threshold $\gamma$. This test has the following properties:

(i) With $\gamma$ given by (27), $\phi(x)$ is the UMP test of size $\alpha$ for testing

$$H_0: \theta \le \theta_0 \qquad \text{versus} \qquad H_1: \theta > \theta_0. \tag{28}$$

(ii) For each $\gamma$, the power function $\Pr\{T(X) \ge \gamma \,|\, \theta\}$ is a monotonically increasing function of $\theta$.

Note: Consider the one-parameter exponential family

$$f_{X\,|\,\theta}(x \,|\, \theta) = h(x) \exp[\eta(\theta)\, T(x) - B(\theta)]. \tag{29}$$

Then, if $\eta(\theta)$ is a monotonically increasing function of $\theta$, the class of pdfs (pmfs) (29) satisfies the MLR conditions.

### Example: Detection for Exponential Random Variables

Consider conditionally i.i.d. measurements $X[0], X[1], \ldots, X[N-1]$ given the parameter $\theta > 0$, following the exponential pdf

$$f_{X\,|\,\theta}(x[n] \,|\, \theta) = {\rm Expon}(x[n] \,|\, 1/\theta) = \frac{1}{\theta} \exp(-\theta^{-1} x[n])\, i_{(0,+\infty)}(x[n]).$$

The likelihood function of $\theta$ for all observations $x = [x[0], x[1], \ldots, x[N-1]]^T$ is

$$f_{X\,|\,\theta}(x \,|\, \theta) = \frac{1}{\theta^N} \exp[-\theta^{-1} T(x)] \prod_{n=0}^{N-1} i_{(0,+\infty)}(x[n])$$

where

$$T(x) = \sum_{n=0}^{N-1} x[n].$$

Thus $f_{X\,|\,\theta}(x \,|\, \theta)$ belongs to the one-parameter exponential family (29), and $\eta(\theta) = -\theta^{-1}$ is a monotonically increasing function of $\theta$. Therefore, the test

$$\phi(x) = \begin{cases} 1, & T(x) \ge \gamma, \\ 0, & T(x) < \gamma \end{cases}$$

is UMP for testing $H_0: \theta \le \theta_0$ versus $H_1: \theta > \theta_0$. The sum of i.i.d. exponential random variables follows the Erlang pdf (a special case of the gamma pdf):

$$f_{T\,|\,\theta}(t \,|\, \theta) = \frac{t^{N-1} \exp(-t/\theta)}{\theta^N (N-1)!}\, i_{(0,+\infty)}(t) = {\rm Gamma}(t \,|\, N, \theta^{-1}).$$

Therefore, the size of the test can be written as

$$\alpha = \Pr\{T(X) \ge \gamma \,|\, \theta_0\} = \int_\gamma^{+\infty} \frac{t^{N-1} \exp(-t/\theta_0)}{\theta_0^N (N-1)!}\, dt = \Big[1 + \frac{\gamma}{\theta_0} + \cdots + \frac{(\gamma/\theta_0)^{N-1}}{(N-1)!}\Big] \exp(-\gamma/\theta_0)$$

where the integral is evaluated using integration by parts. For $N = 1$, we have $\gamma = \theta_0 \ln(1/\alpha)$.

### Generalized Likelihood Ratio (GLR) Test

Recall again that, in composite testing of two hypotheses, we have $\Theta^{(0)}$ and $\Theta^{(1)}$ that form a partition of the parameter space $\Theta$:

$$\Theta^{(0)} \cup \Theta^{(1)} = \Theta, \qquad \Theta^{(0)} \cap \Theta^{(1)} = \emptyset$$

and that we wish to identify which of the two hypotheses is true:

$$H_0: \theta \in \Theta^{(0)} \ \text{(null hypothesis)} \qquad \text{versus} \qquad H_1: \theta \in \Theta^{(1)} \ \text{(alternative hypothesis)}.$$

In GLR tests, we replace the unknown parameters by their maximum-likelihood (ML) estimates under the two hypotheses. Hence, accept $H_1$ if

$$\Lambda_{\rm GLR}(x) = \frac{\max_{\theta \in \Theta^{(1)}} f_{X\,|\,\theta}(x \,|\, \theta)}{\max_{\theta \in \Theta^{(0)}} f_{X\,|\,\theta}(x \,|\, \theta)} > \tau.$$

This test has no UMP optimality properties, but often works well in practice.

### Example: Detecting a Completely Unknown DC Level in AWGN

Consider again the composite hypothesis-testing problem from Example 3:

$$H_0: \theta = 0 \ \ \text{i.e. } \Theta^{(0)} = \{0\} \qquad \text{versus} \qquad H_1: \theta \ne 0 \ \ \text{i.e. } \Theta^{(1)} = (-\infty, +\infty) \setminus \{0\}$$

where the measurements $X[0], X[1], \ldots, X[N-1]$ are conditionally i.i.d. given $\theta = \theta$, following

$$f_{X\,|\,\theta}(x \,|\, \theta) = \frac{1}{\sqrt{(2\pi\sigma^2)^N}} \exp\Big[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n] - \theta)^2\Big]$$

and $x = [x[0], x[1], \ldots, x[N-1]]^T$. A sufficient statistic for $\theta$ is $\bar{x} = \frac{1}{N}\sum_{n=0}^{N-1} x[n]$, and the pdf of $\bar{x}$ given $\theta = \theta$ is $f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, \theta) = \mathcal{N}(\bar{x} \,|\, \theta, \sigma^2/N)$. Our GLR test accepts $H_1$ if

$$\Lambda_{\rm GLR}(x) = \frac{\max_{\theta \in \Theta^{(1)}} f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, \theta)}{f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, 0)} > \tau.$$

Now,

$$\hat{\theta} = \bar{x} = \arg\max_{\theta \in \Theta^{(1)}} f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, \theta)$$

and

$$f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, 0) = \frac{1}{\sqrt{2\pi\sigma^2/N}} \exp\Big(-\frac{1}{2}\frac{\bar{x}^2}{\sigma^2/N}\Big), \qquad f_{\bar{X}\,|\,\theta}(\bar{x} \,|\, \bar{x}) = \frac{1}{\sqrt{2\pi\sigma^2/N}}$$

yielding

$$\ln \Lambda_{\rm GLR}(x) = \frac{N \bar{x}^2}{2\sigma^2}.$$

Therefore, we accept $H_1$ if $\bar{x}^2 > \gamma'$ or, equivalently, $|\bar{x}| > \gamma$. We compare this detector with the (not realizable, also called *clairvoyant*) UMP detector that assumes knowledge of the sign of $\theta$ under $H_1$.
Assuming that the sign of $\theta$ under $H_1$ is known, we can construct the UMP detector, whose ROC curve is given by

$$P_{\rm D} = Q\big(Q^{-1}(P_{\rm FA}) - d\big), \qquad d = \sqrt{\frac{N\theta^2}{\sigma^2}}$$

where $\theta$ is the value of the parameter under $H_1$; see (23) for the case where $\theta > 0$ under $H_1$. All other detectors have $P_{\rm D}$ below this upper bound.

GLR test: decide $H_1$ if $|\bar{x}| > \gamma$. To make sure that the GLR test is implementable, we must be able to specify a threshold $\gamma$ so that the false-alarm probability is upper-bounded by a given size $\alpha$. This is possible in our example:

$$P_{\rm FA}(\phi(x), 0) = \Pr\{|\bar{X}| > \gamma \,|\, \theta = 0\} \overset{\rm symmetry}{=} 2 \Pr\{\bar{X} > \gamma \,|\, 0\} = 2\, Q\Big(\frac{\gamma}{\sqrt{\sigma^2/N}}\Big)$$

see (26), and

$$P_{\rm D}(\phi(x), \theta) = \Pr\{|\bar{X}| > \gamma \,|\, \theta\} = \Pr\{\bar{X} > \gamma \,|\, \theta\} + \Pr\{\bar{X} < -\gamma \,|\, \theta\}$$
$$= Q\Big(\frac{\gamma - \theta}{\sqrt{\sigma^2/N}}\Big) + Q\Big(\frac{\gamma + \theta}{\sqrt{\sigma^2/N}}\Big) = Q\Big(Q^{-1}\big(\tfrac{P_{\rm FA}}{2}\big) - \frac{\theta}{\sqrt{\sigma^2/N}}\Big) + Q\Big(Q^{-1}\big(\tfrac{P_{\rm FA}}{2}\big) + \frac{\theta}{\sqrt{\sigma^2/N}}\Big).$$

In this case, the GLR test is only slightly worse than the clairvoyant detector (Figure 6.4 in Kay-II).

### Example: DC Level in WGN with $\theta$ and $\sigma^2$ Both Unknown

Recall that $\sigma^2$ is called a *nuisance parameter*, since we care exclusively about $\theta$. Here, the GLR test for

$$H_0: \theta = 0 \ \ \text{i.e. } \Theta^{(0)} = \{0\} \qquad \text{versus} \qquad H_1: \theta \ne 0 \ \ \text{i.e. } \Theta^{(1)} = (-\infty, +\infty) \setminus \{0\}$$

accepts $H_1$ if

$$\Lambda_{\rm GLR}(x) = \frac{\max_{\theta, \sigma^2} f_{X\,|\,\theta,\sigma^2}(x \,|\, \theta, \sigma^2)}{\max_{\sigma^2} f_{X\,|\,\theta,\sigma^2}(x \,|\, 0, \sigma^2)} > \tau$$

where

$$f_{X\,|\,\theta,\sigma^2}(x \,|\, \theta, \sigma^2) = \frac{1}{\sqrt{(2\pi\sigma^2)^N}} \exp\Big[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n] - \theta)^2\Big]. \tag{30}$$

Here,

$$\max_{\theta, \sigma^2} f_{X\,|\,\theta,\sigma^2}(x \,|\, \theta, \sigma^2) = \frac{e^{-N/2}}{[2\pi \hat{\sigma}_1^2(x)]^{N/2}}, \qquad \max_{\sigma^2} f_{X\,|\,\theta,\sigma^2}(x \,|\, 0, \sigma^2) = \frac{e^{-N/2}}{[2\pi \hat{\sigma}_0^2(x)]^{N/2}}$$

where

$$\hat{\sigma}_0^2(x) = \frac{1}{N} \sum_{n=0}^{N-1} x^2[n], \qquad \hat{\sigma}_1^2(x) = \frac{1}{N} \sum_{n=0}^{N-1} (x[n] - \bar{x})^2.$$

Hence,

$$\Lambda_{\rm GLR}(x) = \Big[\frac{\hat{\sigma}_0^2(x)}{\hat{\sigma}_1^2(x)}\Big]^{N/2}$$

i.e. the GLR test fits the data with the "best" DC-level signal $\hat{\theta}_{\rm ML} = \bar{x}$, finds the residual variance estimate $\hat{\sigma}_1^2$, and compares this estimate with the variance estimate $\hat{\sigma}_0^2$ under the null case (i.e. for $\theta = 0$). When a sufficiently strong signal is present, $\hat{\sigma}_1^2 \ll \hat{\sigma}_0^2$ and $\Lambda_{\rm GLR}(x) \gg 1$. Note that

$$\hat{\sigma}_1^2(x) = \frac{1}{N} \sum_{n=0}^{N-1} (x[n] - \bar{x})^2 = \frac{1}{N} \sum_{n=0}^{N-1} (x^2[n] - 2\bar{x}\,x[n] + \bar{x}^2) = \frac{1}{N} \sum_{n=0}^{N-1} x^2[n] - 2\bar{x}^2 + \bar{x}^2 = \hat{\sigma}_0^2(x) - \bar{x}^2.$$

Hence,

$$\ln \Lambda_{\rm GLR}(x) = \frac{N}{2} \ln \frac{\hat{\sigma}_0^2(x)}{\hat{\sigma}_0^2(x) - \bar{x}^2} = \frac{N}{2} \ln \frac{1}{1 - \bar{x}^2/\hat{\sigma}_0^2(x)}$$

and $\ln[1/(1 - z)]$ is monotonically increasing on $z \in (0, 1)$. Therefore, an equivalent test can be constructed as follows:

$$T(x) = \frac{\bar{x}^2}{\hat{\sigma}_0^2(x)} > \gamma.$$

Note that the pdf of $T(X)$ given $\theta = 0$ does not depend on $\sigma^2$ and, therefore, the GLR test can be implemented, i.e. it is CFAR.

**Definition.** A test is *constant false alarm rate* (CFAR) if we can find a threshold $\gamma$ that yields a test whose size is equal to $\alpha$. In other words, we should be able to set the threshold independently of the unknown parameters, i.e. the distribution of the test statistic under $H_0$ does not depend on the unknown parameters.
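The CFAR claim is easy to verify empirically: under $H_0$ the statistic $T(x) = \bar{x}^2/\hat{\sigma}_0^2(x)$ is invariant to scaling of the data, so its null distribution cannot depend on $\sigma^2$. A minimal sketch in Python (standard library only; $N = 10$, the two $\sigma$ values, and the threshold $0.1$ are arbitrary illustrative choices):

```python
import random

def T(x):
    """GLR-equivalent statistic T(x) = xbar^2 / sigma0hat^2."""
    N = len(x)
    xbar = sum(x) / N
    s0 = sum(xi * xi for xi in x) / N   # sigma0hat^2 = (1/N) sum x[n]^2
    return xbar * xbar / s0

# Exact scale invariance: multiplying the data by a constant leaves T unchanged.
random.seed(3)
x = [random.gauss(0.0, 1.0) for _ in range(10)]
assert abs(T(x) - T([5.0 * xi for xi in x])) < 1e-12

# Empirically, the H0 exceedance probability of a fixed threshold is the same
# for sigma = 1 and sigma = 5 -- the test is CFAR.
def exceed(sigma, thresh=0.1, trials=50_000, N=10):
    count = 0
    for _ in range(trials):
        count += T([random.gauss(0.0, sigma) for _ in range(N)]) > thresh
    return count / trials

p1, p5 = exceed(1.0), exceed(5.0)
print(round(p1, 3), round(p5, 3))
```

The same empirical check fails for the known-variance statistic $|\bar{x}|$, whose null exceedance probability scales with $\sigma$; that contrast is exactly what the nuisance-parameter discussion above is about.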
E. Kentucky - ITTC - 753
Structured DesignDavid Andrews dandrews@eecs.ukans.eduWhat We Will Cover Today Typical Design Flow Top Down Design Approach Understanding Requirements Functional Behavioral Timing Physical Perform Requirements Analysis on Specific Examp
Rose-Hulman - CH - 211
211 (non-past form) + ex activity A activity B subordinate clause activity B (main clause) Older (1st) activity activity A (subordinate clause) TIME Newer (2nd) activityNON-past form a) is ALWAYS preceded by a NON-PAST form, regardless
Rose-Hulman - CH - 211
Useful expressions are highlighted. Politer than Set phrase. Set phrase () politer than How many nights. Set phrase. 9800 (name) xxx-xxxx We'll bewaiting for you. U (room charge)9800
Rose-Hulman - CH - 211
You and your partner will take 2 day trip to Chicago and has made a detailed plan. However, you two didn't write down some information, and your schedule is not complete. Ask your partner about the schedule by using ~ Ask about , (after ), and (after
Rose-Hulman - CH - 211
/(plain) / /(plain) /(plain)1. Discuss what you will do before, during, and after a trip to a foreign country, using and . Fixed questions are highlighted. e.g., A: B: A: B: A: B: A: (comment) B: BF/GF 2. Next, do the same conversation
Rose-Hulman - CH - 211
(to book a room at a hotel) A: B: Polite way of A: Politer than B: (Polite way of ) A: B: nights: 1()2()3()4()5() room type: (to book a room at a hotel) A: B: Polite way of A: Politer than B: (Polite way of ) A: B: nights:
Rose-Hulman - CH - 211
Let's do something! Volitional form (plain) e.g., () A: Fixed question B: Fixed A: B: A: Suggest some activity B: (game)Examples of activities: Chicago going to the museum; Going to see the game; Going shopping; NY Seeing musicals; Going to th
Rose-Hulman - CH - 211
II: Particles 1) Location: 2) Time: 3) Destination (goal): 4) Purpose: as far as, up to 4) place: 5) time: Place noun + : on the way to; transfer-One more step! -The use of place noun + does not indicate that the place is the desti
Rose-Hulman - CH - 211
Rose-Hulman - CH - 211
(A. acquaintance &amp; B. close friend) A:(acquaintance) : Your opinion. Reasons B: (close friend) Step 1) : Step 2)
Rose-Hulman - CH - 211
Reporting speech, using A. X (call XY) comics B. X (Y called X) N1(Proper noun) N2(Category Noun): N2 called N1 eg. vs. When you talk about things/people which you think your listener might not know well, mention its category so that
Rose-Hulman - CH - 211
First, write your travel plans in the left column. Then, find out your partner's travel plans. A: B: / A: B: A: B: /(volitional) A: () B:
Drexel - INFO - 684
SS ObyA Handbook for Survivors of SuicideJeffrey JacksonAbout this bookThis is a book for people who have lost a loved one to suicide, written by someone who has suffered the same loss.I lost my wife, Gail, to suicide several years ago.
Drexel - INFO - 684
1a time to grieve,By Susan Gottshall Brandella timeKidsPeace Student Assistance Program helps school children mourn, move onto growReprinted from the Spring/Summer `98 Issue ofA publication of KidsPeace In times of crisis, call: 1-800-8KID-
Central Washington University - OSC - 323
IDENT 323-6592 323-3408 323-9352 323-2542 323-0090 323-7128 323-2932 323-0539 323-7767 323-8062 HIGH LOW AVERAGE STD DEVEXAM 1 PERCENT 105 100% 101 96.2% 86.5 82.4% 81.5 77.6% 81 77.1% 78.5 74.8% 78 74.3% 76.5 72.9% 75 71.4% 67 63.8% 64.5 61.4% 101
Central Washington University - OSC - 323
IDENT 323-6592 323-7128 323-0539 323-2542 323-3408 323-9352 323-7767 323-0090 323-2932 323-8062 HIGH LOW AVERAGE STD DEVEXAM 2 PERCENT 92 100% 87.5 95.1% 80.5 87.5% 74.5 81.0% 74 80.4% 72 78.3% 70.5 76.6% 69 75.0% 65.5 71.2% 64.5 70.1% 61 66.3% 87.
Central Washington University - OSC - 323
IDENT 323-6592 323-7128 323-3408 323-0539 323-2542 323-0090 323-9352 323-2932 323-7767 323-8062 HIGH LOW AVERAGE STD DEVEXAM 1 EXAM 2 EXAM 3 GROUP TOTAL PERCENT 105 92 92 100 389 100% 101 87.5 88 95 371.5 95.5% 78 80.5 69.5 90 318 81.7% 86.5 72 69.
Central Washington University - OSC - 323
Brown,Vanver William Francis,Cory Lynn Golden,Andrew D Goldie,Kyle Douglas Gould,Courtney Dawn Gran,Jason M Schultz,Ben J Tonge,Michael W Wadkins, Jenna Walters,Sommer J22377767 22218062 10520539 22022932 20500090 22167128 22022542 23086592 2312340
Central Washington University - OSC - 323
IDENT 323-0090 323-0539 323-2542 323-2932 323-3408 323-6592 323-7128 323-7767 323-8062 323-9352 HIGH LOW AVERAGE STD DEVEXAM 1 EXAM 2 EXAM 3* GROUPATTEN-PART TOTAL PERCENT FINAL 105 92 92 100 0 389 100% QTR GRADE 78.5 65.5 70 89 -15 288 74% C 75 74
Central Washington University - OSC - 323
DEPARTMENT OF BUSINESS ADMINISTRATION SYLLABUS for OSC 323-OPERATIONS MANAGEMENT SUMMER QUARTER 2007 PERSONAL INFORMATION PROFESSOR: OFFICE: OFFICE HOURS: PHONE: Bill Turnquist Shaw-Smyser 312 7:30 9 am, M &amp; W; 7:30-10:30 am, Tu (or by appointment)
Central Washington University - OSC - 323
Central Washington University - OSC - 323
IDENT 323-6592 323-7128 323-3408 323-0539 323-2542 323-0090 323-9352 323-2932 323-7767 323-8062 HIGH LOW AVERAGE STD DEVEXAM 1 EXAM 2 EXAM 3 GROUP TOTAL PERCENT 105 92 92 100 389 100% 101 87.5 88 95 371.5 95.5% 78 80.5 69.5 90 318 81.7% 86.5 72 69.
Central Washington University - OSC - 323
IDENT 323-6592 323-3408 323-0539 323-0090 323-7128 323-2542 323-8062 323-2932 323-9352 323-7767 HIGH LOW AVERAGE STD DEVEXAM 3 PERCENT 92 100% 88 95.7% 73.5 79.9% 72.5 78.8% 70 76.1% 69.5 75.5% 65 70.7% 56.5 61.4% 56.5 61.4% 56.5 61.4% 51 55.4% 88.
Central Washington University - OSC - 323
IDENT 323-6592 323-7128 323-3408 323-2542 323-9352 323-0539 323-0090 323-2932 323-7767 323-8062 HIGH LOW AVERAGE STD DEVEXAM 1 EXAM 2 TOTAL PERCENT 105 92 197 100% 101 87.5 188.5 95.7% 78 80.5 158.5 80.5% 86.5 72 158.5 80.5% 81 74 155 78.7% 81.5 70
Central Washington University - OSC - 323
Central Washington University - OSC - 323
Sheet1 vti_encoding:SR|utf8-nl vti_timelastmodified:TR|16 Jul 2007 14:35:14 -0000 vti_extenderversion:SR|6.0.2.6551 vti_backlinkinfo:VX| vti_author:SR|PC86798\Turnquis vti_modifiedby:SR|PC86798\Turnquis vti_timecreated:TR|16 Jul 2007 14:35:14 -0000 v
Central Washington University - OSC - 323
Sheet1 vti_encoding:SR|utf8-nl vti_timelastmodified:TR|02 Aug 2007 02:11:18 -0000 vti_extenderversion:SR|6.0.2.6551 vti_author:SR|PC86798\Turnquis vti_modifiedby:SR|PC86798\Turnquis vti_timecreated:TR|02 Aug 2007 02:11:18 -0000 vti_cacheddtm:TX|02 Au
Central Washington University - OSC - 323
Sheet1 vti_encoding:SR|utf8-nl vti_timelastmodified:TR|16 Aug 2007 00:21:34 -0000 vti_extenderversion:SR|6.0.2.6551 vti_author:SR|PC86798\Turnquis vti_modifiedby:SR|PC86798\Turnquis vti_timecreated:TR|16 Aug 2007 00:21:34 -0000 vti_cacheddtm:TX|16 Au
Central Washington University - OSC - 323
Sheet1 vti_encoding:SR|utf8-nl vti_timelastmodified:TR|16 Aug 2007 19:40:46 -0000 vti_extenderversion:SR|6.0.2.6551 vti_author:SR|PC86798\Turnquis vti_modifiedby:SR|PC86798\Turnquis vti_timecreated:TR|15 Aug 2007 18:07:48 -0000 vti_backlinkinfo:VX|OS
Central Washington University - OSC - 323
Sheet1 vti_encoding:SR|utf8-nl vti_timelastmodified:TR|16 Aug 2007 20:42:52 -0000 vti_extenderversion:SR|6.0.2.6551 vti_author:SR|PC86798\Turnquis vti_modifiedby:SR|PC86798\Turnquis vti_timecreated:TR|16 Aug 2007 20:42:52 -0000 vti_cacheddtm:TX|16 Au
Central Washington University - OSC - 323
vti_encoding:SR|utf8-nl vti_timelastmodified:TR|18 Jun 2007 21:47:40 -0000 vti_extenderversion:SR|6.0.2.6551 vti_author:SR|PC86798\Turnquis vti_modifiedby:SR|PC86798\Turnquis vti_timecreated:TR|18 Jun 2007 21:47:40 -0000 vti_cacheddtm:TX|18 Jun 2007
Central Washington University - OSC - 323
Sheet1 vti_encoding:SR|utf8-nl vti_timelastmodified:TR|16 Aug 2007 00:21:22 -0000 vti_extenderversion:SR|6.0.2.6551 vti_author:SR|PC86798\Turnquis vti_modifiedby:SR|PC86798\Turnquis vti_timecreated:TR|16 Aug 2007 00:21:22 -0000 vti_cacheddtm:TX|16 Au
Central Washington University - OSC - 323
Sheet1 vti_encoding:SR|utf8-nl vti_timelastmodified:TR|16 Aug 2007 02:39:48 -0000 vti_extenderversion:SR|6.0.2.6551 vti_backlinkinfo:VX|OSC_323/323main.html vti_author:SR|PC86798\Turnquis vti_modifiedby:SR|PC86798\Turnquis vti_timecreated:TR|16 Aug 2
Central Washington University - OSC - 323
Sheet1 vti_encoding:SR|utf8-nl vti_timelastmodified:TR|02 Aug 2007 02:12:56 -0000 vti_extenderversion:SR|6.0.2.6551 vti_author:SR|PC86798\Turnquis vti_modifiedby:SR|PC86798\Turnquis vti_timecreated:TR|02 Aug 2007 02:12:56 -0000 vti_cacheddtm:TX|02 Au
Central Washington University - OSC - 323
vti_encoding:SR|utf8-nl vti_timelastmodified:TR|25 Jun 2007 14:48:06 -0000 vti_extenderversion:SR|6.0.2.6551 vti_author:SR|PC77887\Turnquis vti_modifiedby:SR|PC86798\Turnquis vti_timecreated:TR|18 Jan 2007 18:56:12 -0000 vti_title:SR|OMIS 435 &quot;CONTRI
Central Washington University - OSC - 323
CHAPTER3FORECASTING a Times Mirror Higher Education Group, Inc. company, 1996IRWIN1ForecastsStatements about the future x Estimate levels of activity (demand) x Basis for planning/input for operations decisions - design and use x Blend o
Central Washington University - OSC - 323
CHAPTE R12 Inventory Management a Times Mirror Higher Education Group, Inc. company, 1996IRWIN1What is Inventory?Stock (or any resource) held for future use x &quot;Insurance&quot; Policy/Buffer - Just-in-Case x Critical resource x &quot;Dead&quot; resource x
Central Washington University - OSC - 323
12s-1Purchasing and Supplier ManagementChapter 12 SupplementPurchasing and Supplier ManagementMcGraw-Hill/IrwinOperations Management, Seventh Edition, by William J. Stevenson Copyright 2002 by The McGraw-Hill Companies, Inc. All rights res
Central Washington University - OSC - 323
16-1Supply Chain ManagementChapter 16Supply Chain ManagementOperations Management, Seventh Edition, by William J. Stevenson Copyright 2002 by The McGraw-Hill Companies, Inc. All rights reserved.McGraw-Hill/Irwin16-2Supply Chain Manageme
Central Washington University - OSC - 323
16-1Supply Chain ManagementChapter 11Supply Chain ManagementOperations Management, Seventh Edition, by William J. Stevenson Copyright 2002 by The McGraw-Hill Companies, Inc. All rights reserved.McGraw-Hill/Irwin16-2Supply Chain Manageme
Texas A&M - ODP - 1064
Sheet1 SiteHoleCoreTypeSectionInt1Int2mbsfRaw ngOffsetmcd 1064A1H110100.140.500.1 1064A1H160600.636.700.6 1064A1H11101101.150.401.1 1064A1H210101.65101.6 1064A1H260602.141.302.1 1064A1H21101102.64702.6 1064A1H310103.153.203.1 1064A1H360603.653.603.6
Texas A&M - ODP - 1055
Sheet1 SiteHoleCoreTypeSectionInt1Int2mbsfRaw L*Sm L*Offsetmcd 1055C1H199.10.0948.5848.5800.09 1055C1H11414.10.1447.1647.1600.14 1055C1H11919.10.1946.5346.5300.19 1055C1H12424.10.2447.6447.0900.24 1055C1H12929.10.2946.2347.2700.29 1055C1H13434.10.344