Note that a change in the variance simply scales the signal, so adaptive quantization is useful in a variety of other coding schemes.

- Advantage: no need to transmit extra side information (the quantized variance).
- Disadvantage: additional sensitivity to transmission errors in codewords.

February 11, 2012    Veton Këpuska    60

Differential and Residual Quantization

- The methods presented so far are examples of instantaneous quantization. Those approaches do not take advantage of the fact that speech is a highly correlated signal, both short-time (10-15 samples) and long-time (over a pitch period).
- In this section, methods that exploit short-time correlation are investigated.

- Short-time correlation: neighboring samples are "self-similar", that is, they do not change too rapidly from one to the next.
- The difference of adjacent samples should therefore have a lower variance than the variance of the signal itself. This difference makes more effective use of the quantization levels: higher SNR for a fixed number of quantization levels.
- More generally, the next sample can be predicted from previous ones by finding the prediction coefficients that yield the minimum mean-squared prediction error (the same methodology as in Linear Prediction Coefficients, LPC).
- Two approaches:
  1. Use a fixed prediction filter that reflects the average local correlation of the signal.
  2. Allow the predictor to adapt short-time to the signal's local correlation. This requires transmission of the quantized prediction coefficients as well as the prediction error.

- An illustration of a particular error-encoding scheme is presented in Figure 12.12 on the next slide. In this scheme the following sequences are required:
  - x̃[n]: prediction of the input sample x[n]; this is the output of the predictor P(z), whose input is the quantized version of the input signal, x̂[n].
  - r[n]: prediction error (residual) signal.
  - r̂[n]: quantized prediction error signal.
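The fixed-predictor variant (approach 1 above) can be sketched in code. This is a minimal illustration, not the scheme of Figure 12.12 verbatim: the predictor x̃[n] = a·x̂[n-1] with a = 0.9, the uniform mid-tread quantizer with step 0.05, and the lowpass-filtered-noise test signal are all assumed values chosen for the demonstration.

```python
import numpy as np

def uniform_quantize(r, step):
    """Mid-tread uniform quantizer: round to the nearest multiple of step."""
    return step * np.round(r / step)

def dpcm_encode(x, a=0.9, step=0.05):
    """Differential quantization with a fixed first-order predictor.

    The predictor x~[n] = a * x^[n-1] operates on the *quantized* signal,
    so the decoder can form the identical prediction from r^[n] alone.
    """
    x_hat_prev = 0.0                 # previously reconstructed sample x^[n-1]
    r_q = np.empty_like(x)
    for n, xn in enumerate(x):
        x_tilde = a * x_hat_prev             # prediction x~[n]
        r = xn - x_tilde                     # residual r[n]
        r_q[n] = uniform_quantize(r, step)   # quantized residual r^[n]
        x_hat_prev = x_tilde + r_q[n]        # reconstruction x^[n]
    return r_q

def dpcm_decode(r_q, a=0.9):
    """Decoder: the same predictor loop, driven by the quantized residual."""
    x_hat = np.empty_like(r_q)
    prev = 0.0
    for n, rn in enumerate(r_q):
        x_hat[n] = a * prev + rn
        prev = x_hat[n]
    return x_hat

# A correlated toy signal (lowpass-filtered noise) standing in for speech.
rng = np.random.default_rng(0)
x = np.convolve(rng.standard_normal(2000), np.ones(8) / 8, mode="same")

r_q = dpcm_encode(x)
x_hat = dpcm_decode(r_q)

# The residual has much lower variance than the signal itself, and the
# reconstruction error never exceeds half the quantizer step.
print(np.var(x), np.var(x[1:] - 0.9 * x[:-1]), np.abs(x - x_hat).max())
```

Because the predictor runs on x̂[n] rather than on x[n], encoder and decoder stay in lockstep without any side information, which is exactly the advantage noted above; the corresponding disadvantage is that a corrupted residual codeword propagates through the decoder's prediction loop.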
This approach is sometimes referred to as residual coding.

[Slide 64: Figure 12.12, block diagram of the differential quantization scheme; the figure image is not preserved in this text extraction.]

- The quantizer in this scheme can be of any type:
  - fixed or adaptive,
  - uniform or nonuniform.
  Whatever the case, the parameters of the quantizer are determined to match the variance of r[n].
- Differential quantization can also be applied to:
  - the speech signal itself, or
  - parameters that represent speech: LPC (linear prediction) coefficients, cepstral coefficients obtained from homomorphic filtering, sinewave parameters, etc.

- Consider the quantization error of the quantized residual:

  r̂[n] = r[n] + e_r[n]

- From Figure 12.12, the quantized input x̂[n] can be expressed as:

  x̂[n] = x̃[n] + r̂[n]
       = x̃[n] + r[n] + e_r[n]
       = x̃[n] + (x[n] − x̃[n]) + e_r[n]
       = x[n] + e_r[n]

- The quantized signal samples thus differ from the input only by the quantization error e_r[n]. Since...
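The identity x̂[n] = x[n] + e_r[n] says that the overall reconstruction error equals the residual quantizer error alone, sample by sample, with no accumulation through the prediction loop, precisely because the predictor is driven by the quantized signal x̂[n]. A minimal numeric check of this (the predictor coefficient 0.9, step size 0.1, and test signal are assumed values, not from the text):

```python
import numpy as np

a, step = 0.9, 0.1                    # assumed predictor coeff. and step size
rng = np.random.default_rng(1)
x = 0.1 * np.cumsum(rng.standard_normal(500))   # slowly varying test signal

x_hat_prev = 0.0
max_dev = 0.0
for xn in x:
    x_tilde = a * x_hat_prev          # prediction x~[n] from the quantized past
    r = xn - x_tilde                  # residual r[n]
    r_hat = step * np.round(r / step) # quantized residual r^[n]
    e_r = r_hat - r                   # quantizer error e_r[n]
    x_hat = x_tilde + r_hat           # reconstructed sample x^[n]
    # Check the identity x^[n] = x[n] + e_r[n] on every sample.
    max_dev = max(max_dev, abs(x_hat - (xn + e_r)))
    x_hat_prev = x_hat

print(max_dev)   # zero up to floating-point rounding
```

If the predictor were instead driven by the unquantized input x[n], the decoder (which only sees r̂[n]) would form a different prediction and the quantization errors would accumulate; the feedback-around-the-quantizer structure is what makes the identity hold.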