Lecture 4-Probability Slides

...random variables. A common case is when we have different probability densities for different classes (ωj):

P(ωj | x) = p(x | ωj) P(ωj) / p(x)

• P(ωj) = prior probability of ωj
• p(x) = evidence
• P(ωj | x) = posterior probability of ωj
• p(x | ωj) = likelihood of ωj with respect to x

Expected Value

Expected value or mean (discrete case): E(X) = Σ x p(x)

What is the expected value of playing a game where there is a 10% chance of winning $10, an 80% chance of losing $1, and a 10% chance of losing $2?

E(X) = 0.1(10) − 0.8(1) − 0.1(2) = 0

The expected value (expected payoff) is $0.

For a continuous random variable the sum becomes an integral: E(X) = ∫ x p(x) dx

What is the expected value (mean) of a uniform distribution from 0 to 2? First note that a uniform distribution from 0 to 2 has p(x) = 0.5 for x ∈ [0, 2], thus

E(X) = ∫_{0}^{2} x (0.5) dx = 0.25 x² |_{0}^{2} = 1

Variance

Var(X) = E[(X − E(X))²] = Σ P(x) (x − E(X))²  (discrete case)
Var(X) = ∫ (x − E(X))² p(x) dx  (continuous case)

The Normal Density

Pattern densities are commonly modeled by normal densities for several reasons:

• It has the maximum entropy of all distributions with a given mean and variance
• It's been well studied
• It's analytically tractable
• Central Limit Theorem: the sum of a large number of independent random variables is approximately normally distributed (applet)

Univariate normal density

p(x) = (1 / (√(2π) σ)) e^(−(x − µ)² / (2σ²))

• has mean = µ, variance = σ²
• has roughly 95% of its area within 2 standard deviations on either side of the mean (this is relevant for t-tests)

Standard Normal

Gaussian with mean 0 and variance 1 (µ = 0, σ² = 1):

p(x) = (1 / √(2π)) e^(−x² / 2)

Univariate normal/Gaussian density

What is

∫_{−∞}^{∞} (1 / (√(2π) σ)) e^(−(x−...
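As a concrete illustration of the Bayes-rule slide above, here is a minimal Python sketch that turns priors and class-conditional (likelihood) densities into posteriors P(ωj | x). The two-class setup, the Gaussian likelihoods, and every numeric value below are hypothetical choices for illustration; they are not from the original slides.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate normal density: 1/(sqrt(2*pi)*sigma) * exp(-(x-mu)^2 / (2*sigma^2))."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Hypothetical two-class problem: priors and class-conditional density
# parameters are made-up numbers, chosen only for illustration.
priors = [0.6, 0.4]                      # P(w1), P(w2)
class_params = [(0.0, 1.0), (2.0, 1.0)]  # (mu, sigma) for p(x|w1), p(x|w2)

x = 0.5
likelihoods = [gaussian_pdf(x, mu, sigma) for mu, sigma in class_params]  # p(x|wj)
evidence = sum(p * l for p, l in zip(priors, likelihoods))                # p(x)
posteriors = [p * l / evidence for p, l in zip(priors, likelihoods)]      # P(wj|x)

print(posteriors, sum(posteriors))  # the posteriors sum to 1
```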
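The following sketch (not from the slides) checks the expected-value and variance formulas numerically. It reuses the $10 / $1 / $2 game and the uniform-on-[0, 2] example from the Expected Value slides; the Riemann-sum approximation of the integral is an assumed implementation detail.

```python
# Discrete expected value and variance, using the game from the slides:
# win $10 with prob 0.10, lose $1 with prob 0.80, lose $2 with prob 0.10.
outcomes = [10.0, -1.0, -2.0]
probs = [0.10, 0.80, 0.10]

mean = sum(x * p for x, p in zip(outcomes, probs))               # E(X) = 0
var = sum(p * (x - mean) ** 2 for x, p in zip(outcomes, probs))  # Var(X) = E[(X - E(X))^2]
print(mean, var)

# Continuous case: E(X) for the uniform density on [0, 2] with p(x) = 0.5,
# approximated with a midpoint Riemann sum (should come out very close to 1).
n = 100_000
dx = 2.0 / n
mean_uniform = sum((i + 0.5) * dx * 0.5 * dx for i in range(n))  # sum of x * p(x) * dx
print(mean_uniform)  # ~1.0
```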
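A short sketch, assuming only the Python standard library, that verifies the "roughly 95% within 2 standard deviations" claim from the univariate normal slide by numerically integrating the standard normal, and illustrates the Central Limit Theorem bullet by summing independent uniform variables. The grid size, sample counts, and random seed are arbitrary assumptions.

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    # p(x) = 1/(sqrt(2*pi)*sigma) * exp(-(x-mu)^2 / (2*sigma^2))
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Numerically integrate the standard normal over [-2, 2]: the area is ~0.954,
# i.e. roughly 95% of the density lies within 2 standard deviations of the mean.
n, lo, hi = 10_000, -2.0, 2.0
dx = (hi - lo) / n
area = sum(normal_pdf(lo + (i + 0.5) * dx) * dx for i in range(n))
print(round(area, 3))  # ~0.954

# Central Limit Theorem illustration (assumed setup): sums of 30 independent
# uniform(0, 1) variables are approximately normal with mean 15 and variance 2.5.
random.seed(0)
sums = [sum(random.random() for _ in range(30)) for _ in range(10_000)]
m = sum(sums) / len(sums)                          # ~15  (= 30 * 0.5)
s2 = sum((s - m) ** 2 for s in sums) / len(sums)   # ~2.5 (= 30 * 1/12)
print(round(m, 2), round(s2, 2))
```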

