
# EE 238: Multimedia Communications and Networking (Lecture 3)


## Slide 1: EE 238: Multimedia Communications and Networking

Prof. Mihaela van der Schaar
Email: [email protected]
Web page: medianetlab.ee.ucla.edu

## Slide 2: Communication under Dynamics: Principles and Formalisms
## Slide 3: Outline

- A characterization of application-layer dynamics
- Markov Decision Processes (MDP): an introduction
- Dynamic programming: an introduction
- Online learning concepts
- Illustrative example: rate-distortion optimized scheduling

Questions/comments/observations are always encouraged, at any point during the lecture!

## Slide 4: Stochastic Process

Quick definition: a random process is often viewed as a collection of indexed random variables

$$\{X_t : t \in T\}$$

where $X_t$ is the state of the system, $t$ is the index (or parameter), $T$ is the index (or parameter) set, and the states take values in a state set $S$.

This is useful for characterizing "environment" dynamics: a set of states with a probability law governing the evolution of the states over time.

We will focus on discrete-time stochastic chains:
- the index (or parameter) set is discrete (discrete-time);
- the state set is discrete (stochastic chain).
## Slide 5: Stochastic Process - Example

Classic: the random walk.

- Start at state $X_0$ at time $t_0$
- At time $t_i$, move a step $Z_i$, where $P(Z_i = -1) = p$ and $P(Z_i = +1) = 1 - p$
- At time $t_i$, the state is $X_i = X_0 + Z_1 + \dots + Z_i$

Examples for source coding?
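The random walk above is easy to simulate. A minimal sketch in Python (the function name, parameters, and seed are my own choices, not from the lecture):

```python
import random

def random_walk(x0, steps, p, seed=0):
    """Simulate X_i = X_0 + Z_1 + ... + Z_i, with P(Z_i = -1) = p."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        z = -1 if rng.random() < p else +1  # P(Z = -1) = p, P(Z = +1) = 1 - p
        x += z
        path.append(x)
    return path

path = random_walk(x0=0, steps=10, p=0.5)
print(path)  # the 11 states X_0 through X_10
```

Each step changes the state by exactly one, so consecutive entries of `path` always differ by 1.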

## Slide 6: Markov Property

An environment satisfies the Markov property if its state signal compactly summarizes the past without degrading the ability to predict the future.

A stochastic process is said to have the Markov property if the probability of state $X_{n+1}$ having any given value depends only upon state $X_n$:

$$p(X_{n+1} = x_{n+1} \mid X_n = x_n, \dots, X_0 = x_0) = p(X_{n+1} = x_{n+1} \mid X_n = x_n)$$

for each $n \ge 0$ and states $x_0, \dots, x_{n+1} \in S$.
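As a quick empirical illustration (a sketch using a made-up two-state chain, not from the slides), we can simulate a chain and check that conditioning on an earlier state leaves the estimated one-step transition frequencies essentially unchanged:

```python
import random

# Made-up two-state chain: P[x][y] = p(X_{n+1} = y | X_n = x).
P = {0: [0.7, 0.3], 1: [0.4, 0.6]}

def simulate(steps, seed=1):
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(steps):
        x = 0 if rng.random() < P[x][0] else 1
        path.append(x)
    return path

path = simulate(100_000)
pairs = list(zip(path, path[1:]))
triples = list(zip(path, path[1:], path[2:]))

# p(X_{n+1} = 1 | X_n = 0), estimated from all visits to state 0 ...
p_uncond = sum(b == 1 for a, b in pairs if a == 0) / sum(a == 0 for a, b in pairs)
# ... and only from visits to state 0 that were preceded by state 1.
p_cond = (sum(c == 1 for a, b, c in triples if a == 1 and b == 0)
          / sum(a == 1 and b == 0 for a, b, c in triples))
print(p_uncond, p_cond)  # both estimates should be close to P[0][1] = 0.3
```

The extra condition on $X_{n-1}$ carries no additional predictive information, which is exactly the Markov property.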
## Slide 7: Markov Property

Why is it useful?
- It is a simple model of the temporal correlation of environment dynamics
- The current state contains all information needed to predict the distribution of future state(s)

Does it hold in the real world?
- It is an idealization
- It will allow us to prove properties of algorithms
- Algorithms developed under these ideal assumptions will often still work in other environments

## Slide 8: Markov Chain

Let $X = \{X_t : t = 0, 1, 2, \dots\}$ be a Markov chain.

The transition probability function of a Markov chain is defined as

$$p(X_{n+1} = y \mid X_n = x), \quad n \ge 0, \quad x, y \in S.$$

A Markov chain has a stationary (i.e., time-homogeneous) transition probability function when

$$p(X_{n+1} = y \mid X_n = x) = p(X_1 = y \mid X_0 = x) \quad \text{for all } n \ge 0 \text{ and } x, y \in S.$$
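With a stationary transition function, multi-step behavior follows from repeatedly applying the same transition matrix to the state distribution. A small sketch (the matrix values are my own toy example):

```python
# Toy stationary transition matrix: P[i][j] = p(X_{n+1} = j | X_n = i).
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(dist, P):
    """Propagate a state distribution one step: (dist @ P) in matrix terms."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]  # start in state 0 with probability 1
for _ in range(50):
    dist = step(dist, P)  # the same P at every step: stationary transitions
print(dist)  # converges toward the stationary distribution [5/6, 1/6]
```

For this matrix the stationary distribution solves $\pi = \pi P$, giving $\pi = (5/6, 1/6)$.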
## Slide 9: Markov Chain - Example

Modeling of variable bit rate (VBR) video sources: models for VBR video traffic that allow for the different frame types present in the video, different activity levels of different frames, and a variable group of pictures (GOP) structure.

How can we model this as a Markov chain?
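One simple hypothetical way to set this up: let the chain states be the frame types (I, P, B), with transition probabilities shaping the GOP structure and a per-type bit-rate distribution for frame sizes. All probabilities and mean sizes below are invented for illustration; the actual model in the Turaga and Chen paper cited on the next slide is hierarchical and considerably richer.

```python
import random

# Hypothetical frame-type chain (all numbers invented for illustration).
TRANSITIONS = {
    "I": {"P": 0.8, "B": 0.2},
    "P": {"P": 0.5, "B": 0.4, "I": 0.1},
    "B": {"B": 0.5, "P": 0.4, "I": 0.1},
}
MEAN_BITS = {"I": 120_000, "P": 40_000, "B": 15_000}  # mean frame sizes (bits)

def generate_frames(n, seed=0):
    """Generate n (frame_type, frame_size_bits) pairs from the chain."""
    rng = random.Random(seed)
    state, frames = "I", []
    for _ in range(n):
        # Frame size drawn from an exponential with the type's mean (a modeling choice).
        frames.append((state, rng.expovariate(1.0 / MEAN_BITS[state])))
        r, acc = rng.random(), 0.0
        for nxt, prob in TRANSITIONS[state].items():
            acc += prob
            if r < acc:
                state = nxt
                break
    return frames

for ftype, bits in generate_frames(8):
    print(ftype, round(bits))
```

Allowing transitions such as P→P or B→B with varying probability is what makes the GOP length variable rather than fixed.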

## Slide 10: Variable Bit-Rate Video Source Models

D. S. Turaga and T. Chen, "Hierarchical Modeling of Variable Bit Rate Video Sources," Packet Video 2001. URL: citeseer.ist.psu.edu/turaga01hierarchical.html
## Slide 11: Markov Decision Process (MDP)

- A discrete-time stochastic control process
- An extension of Markov chains: at each step a controller chooses an action, which influences both the immediate reward and the transition to the next state
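The outline also lists dynamic programming; as a preview, here is a toy sketch (states, actions, transition probabilities, and rewards all invented) of value iteration on a two-state MDP, repeatedly applying the Bellman optimality update $V(s) \leftarrow \max_a [R(s,a) + \gamma \sum_{s'} p(s' \mid s, a) V(s')]$:

```python
# Toy MDP: P[s][a] = list of (next_state, probability); R[s][a] = reward.
P = {
    0: {"stay": [(0, 1.0)], "go": [(1, 0.9), (0, 0.1)]},
    1: {"stay": [(1, 1.0)], "go": [(0, 1.0)]},
}
R = {0: {"stay": 0.0, "go": -1.0}, 1: {"stay": 2.0, "go": 0.0}}
gamma = 0.9  # discount factor

V = {0: 0.0, 1: 0.0}
for _ in range(200):  # iterate the Bellman optimality operator to convergence
    V = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in P[s])
         for s in P}
print(V)  # V[1] should approach 2 / (1 - gamma) = 20
```

Since staying in state 1 earns reward 2 forever, its value converges to $2/(1-\gamma) = 20$; state 0's value reflects the cost and risk of getting there.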
