100218-lecture5 - EE522 Communication Theory, Spring 2010
1 Title

EE522 Communication Theory, Spring 2010
Instructor: Hwang Soo Lee
Lecture #5: The Lloyd-Max Algorithm & Speech Coding
2 Announcements

- HW #2 assigned.
  - Due next Thursday, February 25.
- Handouts:
  - Course Notes #5
  - HW #2
  - Project Handout #1
- Project proposals due March 9.
3 Quantization of Data with Unknown Distribution

- The pdf could be estimated from a set of sample "training" data.
- It is possible to bypass this estimation step and design a quantizer directly from the training sequence.
  - The "K-Means Algorithm" or "Generalized Lloyd Algorithm".
- K-Means can be thought of as a discrete Lloyd-Max.
  - Training sequence samples are assigned to "clusters" of points that are closest to a particular quantization level.
  - Each quantization level is then moved to the centroid of its cluster.
4 Quantization of Data with Unknown Distribution - Notation

- Let $\{y_1, y_2, \ldots, y_n\}$ be a sequence of $n$ identically distributed sample points.
- The average distortion for a quantizer is given by the first formula below.
- If the sample points are from an ergodic random process, then the limit below holds.
- If the statistics of $\{y_1, y_2, \ldots, y_n\}$ are unknown, we can construct a quantizer using a training sequence.
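The two formulas on this slide survive only as page images; given the notation above and the distortion measure $d(\cdot,\cdot)$ used on the next slide, they are presumably the standard definitions:

$$
D_n = \frac{1}{n} \sum_{j=1}^{n} d\big(y_j, Q(y_j)\big),
\qquad
\lim_{n \to \infty} D_n = E\big[d\big(Y, Q(Y)\big)\big],
$$

where $Q(\cdot)$ denotes the quantizer mapping each sample to its quantization level.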
5 K-Means (Generalized Lloyd) Algorithm

- Set the iteration number to $i = 0$.
- Select initial quantization levels $\{\tilde{x}_1(0), \tilde{x}_2(0), \ldots, \tilde{x}_L(0)\}$.
- Assign each of the $n$ training sequence points to one of $L$ sets or "clusters" $\{C_k(i),\ 1 \le k \le L\}$.
  - A cluster consists of all points closest to a quantization level: $y_j \in C_k(i)$ if $d\big(y_j, \tilde{x}_k(i)\big) \le d\big(y_j, \tilde{x}_m(i)\big)$ for all $m$.
- Recompute each quantization level so that it sits at the centroid of its cluster: $\tilde{x}_k(i+1) = \sum_{y_j \in C_k(i)} y_j \,/\, |C_k(i)|$.
- Repeat the iterative process until the clusters no longer change.
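A minimal 1-D sketch of this procedure in Python (function and variable names are my own, and squared-error distortion is assumed, matching the usual Lloyd-Max setting):

```python
import numpy as np

def k_means_quantizer(y, levels, max_iter=100):
    """Generalized Lloyd (K-means) design of an L-level scalar quantizer.

    y      : 1-D array of training samples
    levels : initial quantization levels x~_k(0)
    Returns the final levels and the cluster index of each sample.
    """
    y = np.asarray(y, dtype=float)
    levels = np.asarray(levels, dtype=float).copy()
    assign = None
    for _ in range(max_iter):
        # Step 1: assign each sample to the nearest quantization level
        # under squared-error distortion d(y, x) = (y - x)^2.
        new_assign = np.argmin((y[:, None] - levels[None, :]) ** 2, axis=1)
        # Stop when the clusters no longer change between iterations.
        if assign is not None and np.array_equal(assign, new_assign):
            break
        assign = new_assign
        # Step 2: move each level to the centroid of its cluster
        # (an empty cluster keeps its previous level).
        for k in range(len(levels)):
            members = y[assign == k]
            if members.size > 0:
                levels[k] = members.mean()
    return levels, assign
```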
6 Example of K-Means Algorithm

- Consider a training sequence of length 10:
  - 0.38, 1.13, 0.73, -2.38, -0.15, -0.32, 0.32, -0.39, 0.01, 1.61
  - Samples were Gaussian RVs with mean 0, variance 1.
- Let $i = 0$. Initial quantization levels:
- Initial clusters:
7 Example of K-Means Algorithm (continued)

- Let $i = 1$.
- Find new centroids of each cluster:
- Find new clusters:
8 Example of K-Means Algorithm (continued)

- Let $i = 2$.
- Find new centroids of each cluster:
- Find new clusters:
- The clusters did not change between iterations 1 and 2, so the procedure is complete.
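The slide's initial levels and intermediate clusters are in the page images. Purely as an illustration, the sketch above can be run on the same ten samples with assumed starting values (the four levels chosen here are mine and need not match the lecture's):

```python
y = [0.38, 1.13, 0.73, -2.38, -0.15, -0.32, 0.32, -0.39, 0.01, 1.61]

# Assumed initial levels; the lecture's actual x~_k(0) are in the slide image.
init_levels = [-2.0, -0.5, 0.5, 1.5]

levels, assign = k_means_quantizer(y, init_levels)
print("final levels:", levels)
print("cluster of each sample:", assign)
```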
9 Example of K-Means Algorithm - Performance

- MSE for training sequence = 0.031
  - Average over the 10-sample training sequence with this quantizer.
  - "Internal" distortion.
- MSE for long Gaussian data sequence = 0.21
  - Average taken over a long sequence of new Gaussian RVs.
  - "External" distortion.
- Internal versus external distortion:
  - The internal distortion understates the distortion seen on new data, since the levels are tuned to the particular training samples; with a training sequence this short, the gap can be large.
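A sketch of how these two figures can be measured, continuing from the example above (the nearest-level quantization rule, the RNG seed, and the sequence length are my assumptions; the exact numbers depend on the training run):

```python
import numpy as np

def quantize(samples, levels):
    """Map each sample to its nearest quantization level."""
    samples = np.asarray(samples, dtype=float)
    levels = np.asarray(levels, dtype=float)
    return levels[np.argmin((samples[:, None] - levels[None, :]) ** 2, axis=1)]

# Internal distortion: MSE on the training sequence itself.
internal_mse = np.mean((np.asarray(y) - quantize(y, levels)) ** 2)

# External distortion: MSE on a long, fresh Gaussian sequence.
rng = np.random.default_rng(0)
fresh = rng.standard_normal(100_000)
external_mse = np.mean((fresh - quantize(fresh, levels)) ** 2)

print(f"internal MSE = {internal_mse:.3f}, external MSE = {external_mse:.3f}")
```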