6 Functions of Random Variables

6.1 Introduction
6.2 Functions of One Random Variable
6.3 Expectation of a Function of One Random Variable
6.4 Sums of Independent Random Variables
6.5 Minimum of Two Independent Random Variables
6.6 Maximum of Two Independent Random Variables
6.7 Comparison of the Interconnection Models
6.8 Two Functions of Two Random Variables
6.9 Laws of Large Numbers
6.10 The Central Limit Theorem
6.11 Order Statistics
6.12 Chapter Summary
6.13 Problems

6.1 Introduction

The previous chapters discussed basic properties of events defined in a given sample space and the random variables used to represent those events. The fundamental assumption made in those chapters is that events can always be defined by random variables. In many applications, however, events are functions of other events. For example, the time until a complex system fails is a function of the times to failure of the individual components that make up the system. This means that the random variable used to represent the time to failure of the complex system is a function of the random variables used to represent the times to failure of the component parts of the system. This chapter deals with functions of random variables. Because of the complexity involved in computing the CDFs and PDFs of functions of multiple random variables, we restrict our discussion to functions of at most two random variables.

6.2 Functions of One Random Variable

Let X be a random variable, and let Y be a new random variable that is a function of X; that is,

$$Y = g(X)$$

We are interested in computing the PDF or PMF of Y when the PDF or PMF of X is given. For example, let g(X) = X + 5. Then

$$F_Y(y) = P[Y \le y] = P[X + 5 \le y] = P[X \le y - 5] = F_X(y - 5)$$

6.2.1 Linear Functions

Consider the function g(X) = aX + b, where a and b are constants.
The CDF of Y is given by

$$F_Y(y) = P[Y \le y] = P[aX + b \le y] = P\left[X \le \frac{y - b}{a}\right] = F_X\left(\frac{y - b}{a}\right)$$

where a is positive. The PDF of Y is given by

$$f_Y(y) = \frac{dF_Y(y)}{dy} = \frac{d}{dy}\,F_X\left(\frac{y - b}{a}\right) = \left(\frac{dF_X(u)}{du}\right)\left(\frac{du}{dy}\right)$$

where $u = \frac{y - b}{a}$ and $\frac{du}{dy} = \frac{1}{a}$. Thus,

$$f_Y(y) = f_X(u)\left(\frac{1}{a}\right) = \left(\frac{1}{a}\right) f_X\left(\frac{y - b}{a}\right)$$

If a < 0, we have that

$$F_Y(y) = P[Y \le y] = P[aX + b \le y] = P[aX \le y - b] = P\left[X \ge \frac{y - b}{a}\right] = 1 - F_X\left(\frac{y - b}{a}\right)$$
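Combining the two cases above gives the usual change-of-variable result $f_Y(y) = \frac{1}{|a|} f_X\!\left(\frac{y-b}{a}\right)$ for $Y = aX + b$. The following is a minimal numerical sketch of that formula; the choice of X as a standard normal and the particular values a = 2, b = 3 are my own illustration, not from the text.

```python
# Sketch: check the linear-transformation PDF formula
#   f_Y(y) = (1/|a|) * f_X((y - b) / a)   for Y = aX + b,
# taking X ~ N(0, 1) as an assumed example distribution.
import math

def phi(x):
    """Standard normal PDF of X (assumed for this illustration)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f_Y(y, a, b):
    """PDF of Y = aX + b via the change-of-variable formula."""
    return phi((y - b) / a) / abs(a)

# Y = 2X + 3 is N(3, 4), so f_Y at y = 3 (the mean) should equal the
# N(3, 4) density at its mean, 1 / sqrt(2 * pi * 4).
expected = 1 / math.sqrt(2 * math.pi * 4)
print(abs(f_Y(3, a=2, b=3) - expected) < 1e-12)  # True
```

Note that the absolute value |a| is what makes the formula valid for a < 0 as well: Y = -2X + 3 has the same density at y = 3, since the normal PDF is symmetric.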
Fall '07
Carlton
