Transforms of Probability Density Functions

For the remainder of the course we will be interested in sums of random variables. In general, given a finite sequence of random variables $\{X_n\}_{n=1}^{N}$, we define the random variable $Z$ by

$$Z = X_1 + X_2 + \cdots + X_N.$$

We will want to understand how $Z$ is distributed, particularly in terms of the random variables that make up the sum. We will be especially interested in computations involving sums of independent random variables, or sums of random variables that are both independent and identically distributed. We consider now the transform of a probability density function as a means of simplifying these computations. There are three motivations for doing this.

1. First, when computing the probability density function of the sum of a pair of random variables, we found that when the variables are independent, the density of the sum is the convolution of the individual densities. Thus, setting $Z = X + Y$, we have

$$f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z - x)\, dx.$$

2. We will find that there is a very simple relationship between the transform and the moments of a distribution. Recall that the moments of a distribution are the expectations $E(X^n)$, where $n = 1, 2, 3, \ldots$

3. Transforms will give us a fairly straightforward means of proving the Central Limit Theorem.

Definition of the Transform

There are several ways to define the transform, all sharing the common trait of being defined in terms of an expectation. Your text chose to compute the quantity $E(e^{sX})$, calling it a Laplace-like transform, but without paying any attention to the details of integration. Here lies a serious problem: unless the variable $s$ is purely imaginary, so that $|e^{sX}| = 1$, the integrals will not converge without invoking a considerable body of theory that is well beyond the scope of the course. We can avoid this complication by defining the transform in terms of a Fourier-like integral, computing $E(e^{i\omega X})$. Note that this expectation is always defined, since the probability density functions all have finite integrals over the real line (by definition!). To see this we need the following facts:

1. $|e^{i\omega X}| = 1$. (Recall that the magnitude of a complex number is the square root of the product of the number and its complex conjugate.)

2. Using this fact, we have

$$\left| \int_{-\infty}^{\infty} e^{i\omega x} f_X(x)\, dx \right| \le \int_{-\infty}^{\infty} \left| e^{i\omega x} \right| f_X(x)\, dx = 1.$$

We don't need absolute values around $f_X(x)$, since it is a non-negative function, and once we use the fact that the magnitude of the exponential term is 1, the integral is just that of the probability density function over $\mathbb{R}$.

We could choose to ignore these problems and adopt the text's approach simply as a bookkeeping exercise in symbol manipulation, but we lose almost nothing in switching to the Fourier approach. As you read the text, you will need to substitute $i\omega$ for $s$, and remember that $i^2 = -1$.
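The convolution formula in motivation 1 can be checked numerically. Below is a minimal sketch in Python; the language, the NumPy discretization, the step size, and the Uniform(0, 1) example are my choices, not part of the notes. The sum of two independent Uniform(0, 1) variables has the triangular density on $[0, 2]$, peaking at $z = 1$.

```python
import numpy as np

# Numerical check of f_Z(z) = integral of f_X(x) f_Y(z - x) dx for independent
# X and Y. Example: X, Y ~ Uniform(0, 1), so Z = X + Y is triangular on [0, 2].
dx = 0.001
x = np.arange(0.0, 1.0, dx)      # grid covering the support of f_X and f_Y
f_X = np.ones_like(x)            # the Uniform(0, 1) density is 1 on [0, 1)
f_Y = np.ones_like(x)

# A discrete convolution scaled by dx approximates the convolution integral.
f_Z = np.convolve(f_X, f_Y) * dx
z = dx * np.arange(f_Z.size)     # grid on which the convolved density lives

print(f_Z.sum() * dx)     # ~1.0: f_Z integrates to one, as a density must
print(z[np.argmax(f_Z)])  # ~1.0: the triangular density peaks at z = 1
```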
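The two facts above say that $E(e^{i\omega X})$ always exists and has magnitude at most 1. Here is a sketch verifying this for $X \sim$ Exponential(1), whose characteristic function has the standard closed form $1/(1 - i\omega)$; the midpoint Riemann sum, the truncation of the integral at $x = 50$, and the step size are approximation choices of mine, not the course's method.

```python
import numpy as np

# Numerical E(e^{i omega X}) for X ~ Exponential(1), density e^{-x} on [0, inf).
# Midpoint Riemann sum; truncating at x = 50 discards only about e^{-50} of mass.
dx = 0.001
x = np.arange(0.0, 50.0, dx) + dx / 2
f_X = np.exp(-x)

def phi(omega):
    """Approximate E(e^{i omega X}) = integral of e^{i omega x} f_X(x) dx."""
    return np.sum(np.exp(1j * omega * x) * f_X) * dx

for omega in (0.0, 0.5, 2.0, 10.0):
    approx = phi(omega)
    exact = 1.0 / (1.0 - 1j * omega)  # standard closed form for Exponential(1)
    # Fact 2 above: |E(e^{i omega X})| <= 1 for every omega, with equality at 0.
    print(f"omega={omega}: |phi|={abs(approx):.4f}, error={abs(approx - exact):.2e}")
```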
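Motivation 2 alludes to the standard relationship $E(X^n) = \varphi^{(n)}(0)/i^n$, where $\varphi(\omega) = E(e^{i\omega X})$. The notes do not state it explicitly here, so take the following as an illustration rather than the course's derivation: it recovers the first two moments of an Exponential(1) variable by finite differences of the known transform, with the step size $h$ my own choice.

```python
# Finite-difference check of E(X^n) = phi^(n)(0) / i^n, using the known
# characteristic function phi(omega) = 1 / (1 - i omega) of Exponential(1).
def phi(omega):
    return 1.0 / (1.0 - 1j * omega)

h = 1e-5  # finite-difference step size (an arbitrary choice)

# Central differences for the derivatives of phi at omega = 0:
# phi'(0) = i E(X)  and  phi''(0) = i^2 E(X^2) = -E(X^2).
EX = ((phi(h) - phi(-h)) / (2 * h) / 1j).real
EX2 = (-(phi(h) - 2 * phi(0.0) + phi(-h)) / h**2).real

print(EX)   # ~1.0, matching E(X) = 1 for Exponential(1)
print(EX2)  # ~2.0, matching E(X^2) = 2
```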