Lecture 2. Review of Basic Math, Probability, and Statistics

The Summation Operator

$$\sum_{i=1}^{n} x_i \equiv x_1 + x_2 + \cdots + x_n.$$

Basic summation properties:

$$\sum_{i=1}^{n} c = nc; \qquad \sum_{i=1}^{n} c x_i = c \sum_{i=1}^{n} x_i; \qquad \sum_{i=1}^{n} (x_i + y_i) = \sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i.$$

In general, however, the sum of ratios is not the ratio of sums:

$$\sum_{i=1}^{n} (x_i / y_i) \neq \left( \sum_{i=1}^{n} x_i \right) \bigg/ \left( \sum_{i=1}^{n} y_i \right).$$

Useful identities involving deviations from the sample mean $\bar{x} = (1/n) \sum_{i=1}^{n} x_i$:

$$\sum_{i=1}^{n} (x_i - \bar{x}) = \sum_{i=1}^{n} x_i - n\bar{x} = n\bar{x} - n\bar{x} = 0;$$

$$\sum_{i=1}^{n} (x_i - \bar{x})^2 = \sum_{i=1}^{n} x_i^2 - n\bar{x}^2;$$

$$\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) = \sum_{i=1}^{n} x_i y_i - n\bar{x}\bar{y}.$$

Linear Functions

$$Y = \alpha + \beta X.$$

The change in Y is always $\beta$ times the change in X: $\Delta Y = \beta \Delta X$. When $\Delta X = 1$ (unit), $\Delta Y = \beta$, so $\beta$ means how much Y changes when X increases by 1 unit. So, the marginal effect of X on Y, or the slope of X, is constant and equals $\beta$.

The Natural Logarithm
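The deviation-from-the-mean identities above can be checked numerically. A minimal sketch, using made-up sample values (not from the notes):

```python
# Hypothetical sample data used only to verify the summation identities.
n = 5
x = [2.0, 4.0, 6.0, 8.0, 10.0]
y = [1.0, 3.0, 2.0, 5.0, 4.0]
xbar = sum(x) / n  # sample mean of x
ybar = sum(y) / n  # sample mean of y

# Identity 1: deviations from the sample mean sum to zero.
assert abs(sum(xi - xbar for xi in x)) < 1e-12

# Identity 2: sum of squared deviations = sum of squares - n * xbar^2.
lhs = sum((xi - xbar) ** 2 for xi in x)
rhs = sum(xi ** 2 for xi in x) - n * xbar ** 2
assert abs(lhs - rhs) < 1e-9

# Identity 3: sum of cross-deviations = sum of cross-products - n * xbar * ybar.
lhs = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
rhs = sum(xi * yi for xi, yi in zip(x, y)) - n * xbar * ybar
assert abs(lhs - rhs) < 1e-9

print("all three identities hold")
```

Any numeric sample works here; the identities are algebraic, so the assertions pass for every choice of x and y.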
$$Y = \log(X).$$

A change in logs approximates a percentage change:

$$\log(Y_1) - \log(Y_0) \approx (Y_1 - Y_0)/Y_0 = \Delta Y / Y_0.$$

So, for example, in $\log(Y) = \alpha + \beta X$, $\beta$ represents the proportionate change in Y when X increases by 1 unit (so $100\beta$ is the percentage change). In $\log(Y) = \alpha + \beta \log(X)$, $\beta$ represents an elasticity, i.e., the percentage change in Y when X increases by 1%.

$$\log(AB) = \log A + \log B.$$

Random Variable (r.v. for short)

A r.v. takes on different numerical values, each with probability less than 1, according to chance.

Discrete random variables can take on a finite or countably infinite number of values.

Continuous random variables can take on any value on one or more intervals of the real line, including the whole real line.

The probability density function (pdf) summarizes all the possible outcomes and the associated probabilities for a r.v.

[Figure: the pdf for a continuous r.v.]

Expected Value of a Random Variable

It is called the Population Mean to emphasize that it is a summary measure of a r.v., not of a sample.
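As an aside, the claim that log differences approximate percentage changes can be checked numerically; the approximation is good for small changes and deteriorates for large ones. A sketch with hypothetical numbers (not from the notes):

```python
import math

# Compare the exact proportionate change (Y1 - Y0) / Y0 with the
# log difference log(Y1) - log(Y0) for increasingly large changes.
Y0 = 100.0
for Y1 in (101.0, 103.0, 130.0):
    pct = (Y1 - Y0) / Y0
    log_diff = math.log(Y1) - math.log(Y0)
    print(f"pct change = {pct:.4f}, log diff = {log_diff:.4f}")
```

A 1% change gives a log difference of about 0.00995; a 30% change gives about 0.2624, so the gap widens as the change grows.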
It is a weighted average of all possible outcomes, where the weights are the probabilities of the outcomes.

Discrete case: Suppose X can take n possible values, denoted $x_1, x_2, \dots, x_n$ (keep in mind the notational convention). Let $p_i = \Pr(X = x_i)$, with $\sum_{i=1}^{n} p_i = 1$ and $p_i \geq 0$ for every $i$. Then

$$\mu \equiv E(X) = \sum_{i=1}^{n} p_i x_i = p_1 x_1 + p_2 x_2 + \cdots + p_n x_n,$$

where $E(\cdot)$ is the expectation operator and $\mu$ is used to denote a mean.

Example: Flip a coin; if heads you get 1 dollar, and if tails you get nothing. If it is a fair coin, you are expected to get 0.5 dollars, that is, the expected value of the flip is $0.5 \times 1 + 0.5 \times 0 = 0.5$ dollars, although any single flip yields either 1 dollar or nothing.

Facts about expected values

1. For any constant a, E(a) = a. A constant does not have any variation! Example: E(3) = 3.

2. For any constants a and b, E(a + bX) = a + bE(X). Example: In the previous GPA and study-time example, the estimated regression line is Y = 1 + 0.1X, where Y = GPA and X = study time per week measured in hours. Suppose the population mean of study time is 15 hours per week, that is, E(X) = 15; then GPA is expected to be E(Y) = 1 + 0.1 E(X) = 2.5.

3. For any constants $a_1, a_2, \dots, a_n$ and r.v.'s $X_1, X_2, \dots, X_n$,

$$E\left( \sum_{i=1}^{n} a_i X_i \right) = \sum_{i=1}^{n} a_i E(X_i).$$
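The discrete expected value and the linearity fact E(a + bX) = a + bE(X) can be illustrated in a few lines. A minimal sketch; the helper function `expected_value` is mine, not from the notes:

```python
# Expected value of a discrete r.v.: weighted average of outcomes,
# where the weights are the outcome probabilities.
def expected_value(outcomes, probs):
    assert abs(sum(probs) - 1.0) < 1e-12  # probabilities must sum to 1
    return sum(p * x for p, x in zip(probs, outcomes))

# Fair-coin example from the notes: heads pays 1 dollar, tails pays 0.
coin = expected_value([1.0, 0.0], [0.5, 0.5])
print(coin)  # 0.5

# Fact 2, E(a + bX) = a + bE(X), with the GPA example: Y = 1 + 0.1 X.
# Hypothetical distribution for study time X chosen so that E(X) = 15.
outcomes = [10.0, 15.0, 20.0]
probs = [0.25, 0.5, 0.25]
EX = expected_value(outcomes, probs)  # 15.0
E_gpa = expected_value([1 + 0.1 * x for x in outcomes], probs)
print(E_gpa)  # 2.5, which equals 1 + 0.1 * E(X)
```

Computing E(Y) directly from the transformed outcomes gives the same answer as plugging E(X) into the line, which is exactly what linearity of expectation guarantees.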