Ch.4 Multivariate Variables and Their Distribution

1 Introduction

In Chapter 3, we defined a univariate random variable as a rule that assigns a number to each outcome of a random experiment. When several different rules each assign a (different) number to each outcome of a random experiment, we have a multivariate random variable. In Chapter 1, we saw several examples of studies where more than one variable is observed from each population unit. Some additional examples are

1. The X, Y and Z components of wind velocity can be measured in studies of atmospheric turbulence.

2. The velocity X and stopping distance Y of an automobile can be studied in an automobile safety study.

3. The diameter at breast height (DBH) and the age of a tree can be measured in a study aimed at developing a method for predicting age from diameter.

In such studies, it is interesting not only to investigate the behavior of each variable individually, but also to investigate the degree of relationship between them.

We say that we know the joint distribution of a bivariate variable (X, Y) if we know all probabilities of the form

P(a < X ≤ b, c < Y ≤ d), with a < b, c < d,

where P(a < X ≤ b, c < Y ≤ d) is to be interpreted as P(a < X ≤ b and c < Y ≤ d), or as P([a < X ≤ b] ∩ [c < Y ≤ d]). Similarly, we say that we know the joint distribution of a multivariate variable X1, X2, ..., Xm if we know all probabilities of the form

P(a1 < X1 ≤ b1, a2 < X2 ≤ b2, ..., am < Xm ≤ bm), with ak < bk, k = 1, ..., m.

As in the univariate case, which was considered in Chapter 3, a concise description of the joint probability distribution of any multivariate random variable can be achieved through its cumulative distribution function (cdf).

Definition 1.1. The joint or bivariate cdf of two random variables X, Y is defined by

F(x, y) = P(X ≤ x, Y ≤ y).
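As a quick numerical illustration of Definition 1.1 (a sketch of my own, not part of the text: the function name `empirical_joint_cdf` is hypothetical), the joint cdf F(x, y) = P(X ≤ x, Y ≤ y) can be estimated by simulation: draw many (X, Y) pairs and count the fraction that falls in the quadrant at or below (x, y).

```python
import random

def empirical_joint_cdf(samples, x, y):
    """Estimate F(x, y) = P(X <= x, Y <= y) as the fraction of sample
    pairs (u, v) with u <= x and v <= y."""
    hits = sum(1 for (u, v) in samples if u <= x and v <= y)
    return hits / len(samples)

random.seed(0)
# (X, Y) uniform on the unit square; the true cdf at (0.5, 0.5) is
# 0.5 * 0.5 = 0.25, and the estimate should be close to that.
samples = [(random.random(), random.random()) for _ in range(100_000)]
print(empirical_joint_cdf(samples, 0.5, 0.5))
```

With 100,000 samples the estimate typically agrees with the true value 0.25 to about two decimal places.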
The joint or multivariate cdf of several random variables X1, X2, ..., Xm is defined by

F(x1, x2, ..., xm) = P(X1 ≤ x1, X2 ≤ x2, ..., Xm ≤ xm).

The cdf is convenient for calculating the probability that (X, Y) will lie in a rectangle. In the bivariate case the formula is

P(x1 < X ≤ x2, y1 < Y ≤ y2) = F(x2, y2) − F(x2, y1) − F(x1, y2) + F(x1, y1).   (1.1)

Example 1.1. Let (X, Y) be uniformly distributed on the unit rectangle [0, 1] × [0, 1]. This means that the probability that (X, Y) lies in a subset A of the unit rectangle equals the area of A. Thus, the bivariate cdf of (X, Y) is

F(x, y) = xy,  for 0 ≤ x, y ≤ 1,
F(x, y) = x,   for 0 ≤ x ≤ 1, y ≥ 1,
F(x, y) = y,   for 0 ≤ y ≤ 1, x ≥ 1,
F(x, y) = 1,   for x, y ≥ 1, and
F(x, y) = 0,   if either x ≤ 0 or y ≤ 0.
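Formula (1.1) can be checked numerically against the uniform cdf of Example 1.1. The sketch below is my own illustration; the function names `uniform_cdf` and `rect_prob` are hypothetical, not from the text.

```python
def uniform_cdf(x, y):
    """Joint cdf of (X, Y) uniform on the unit square [0,1] x [0,1],
    following the piecewise formula of Example 1.1 (clamping x and y
    to [0, 1] reproduces all five cases)."""
    cx = min(max(x, 0.0), 1.0)
    cy = min(max(y, 0.0), 1.0)
    return cx * cy

def rect_prob(F, x1, x2, y1, y2):
    """P(x1 < X <= x2, y1 < Y <= y2) via formula (1.1)."""
    return F(x2, y2) - F(x2, y1) - F(x1, y2) + F(x1, y1)

# P(0.2 < X <= 0.5, 0.1 < Y <= 0.4): for the uniform distribution this
# probability equals the rectangle's area, 0.3 * 0.3 = 0.09.
print(rect_prob(uniform_cdf, 0.2, 0.5, 0.1, 0.4))
```

Note that (1.1) is an inclusion-exclusion argument: F(x2, y2) covers the whole quadrant below (x2, y2), the two subtracted terms remove the strips left of x1 and below y1, and F(x1, y1) adds back the corner removed twice.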