Speech Processing
Speech Coding

Definition:
- Speech coding is the process of representing analog speech waveforms by sequences of binary digits.
- Even though the availability of high-bandwidth communication channels has increased, speech coding for bit-rate reduction has retained its importance.
- Reduced bit-rate transmission is required for cellular networks and Voice over IP; a typical scenario is depicted in Figure 12.1.
- Coded speech is less sensitive than analog signals to transmission noise, and is easier to protect against bit errors, to encrypt, to multiplex, and to packetize.

February 11, 2012 Veton Kpuska

Digital Telephone Communication System (Figure 12.1)

Categorization of Speech Coders
- Waveform coders: quantize speech samples directly and operate at high bit rates, in the range 16-64 kbps (bps = bits per second).
- Hybrid coders: partly waveform coders and partly speech-model-based coders; operate in the mid bit-rate range of 2.4-16 kbps.
- Vocoders: largely model-based; operate in the low bit-rate range of 1.2-4.8 kbps and tend to be of lower quality than waveform and hybrid coders.

Quality Measurements
- The quality of coding can be viewed as the closeness of the processed speech to the original speech, or to some other desired speech waveform: naturalness, degree of background artifacts, intelligibility, speaker identifiability, etc.
Quality Measurements (cont.)
- Subjective measurement:
  - The Diagnostic Rhyme Test (DRT) measures intelligibility.
  - The Diagnostic Acceptability Measure (DAM) and the Mean Opinion Score (MOS) test provide a more complete quality judgment.
- Objective measurement:
  - Segmental signal-to-noise ratio (SNR): the SNR averaged over short-time segments.
  - Articulation index: relies on an average SNR across frequency bands.
- A more complete list and definition of subjective and objective measures can be found in:
  - J.R. Deller, J.G. Proakis, and J.H.L. Hansen, "Discrete-Time Processing of Speech Signals", Macmillan Publishing Co., New York, NY, 1993.
  - S.R. Quackenbush, T.P. Barnwell, and M.A. Clements, "Objective Measures of Speech Quality", Prentice Hall, Englewood Cliffs, NJ, 1988.

Statistical Models
- The speech waveform is viewed as a random process. Various estimates are important from this statistical perspective: probability density, mean, variance, and autocorrelation.
- One approach to estimating the probability density function (pdf) of x[n] is the histogram:
  - Count the number of occurrences of speech-sample values in each range (x_o − Δ/2, x_o + Δ/2] for many speech samples over a long time duration.
  - Normalize the area of the resulting curve to unity.
- The histogram of speech (Davenport; Paez and Glisson) was shown to be approximated by a gamma density:

  p_x(x) = ( sqrt(3) / (8π σ_x |x|) )^(1/2) · e^( −sqrt(3)|x| / (2σ_x) )

where σ_x is the standard deviation of the pdf.
- A simpler approximation is given by the Laplacian pdf of the form:

  p_x(x) = ( 1 / (sqrt(2) σ_x) ) · e^( −sqrt(2)|x| / σ_x )
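The histogram construction above can be sketched in a few lines. This is a minimal illustration, not the book's code: since no waveform file is at hand, synthetic Laplacian noise stands in for speech samples (its long-term amplitude statistics are a reasonable match, per the approximation above).

```python
import numpy as np

def estimate_pdf(samples, n_bins=101):
    # Histogram pdf estimate: count samples per amplitude bin, then
    # normalize so the area under the curve is unity.
    counts, edges = np.histogram(samples, bins=n_bins)
    widths = np.diff(edges)
    centers = edges[:-1] + widths / 2.0
    pdf = counts / (counts.sum() * widths)   # unit area
    return centers, pdf, widths

def laplacian_pdf(x, sigma):
    # Laplacian model of the long-term speech-amplitude distribution.
    return np.exp(-np.sqrt(2.0) * np.abs(x) / sigma) / (np.sqrt(2.0) * sigma)

# Stand-in "speech": Laplacian noise with sigma_x = 1 (real speech would
# normally be read from a waveform file).
rng = np.random.default_rng(0)
sigma_x = 1.0
x = rng.laplace(scale=sigma_x / np.sqrt(2.0), size=200_000)

centers, pdf_est, widths = estimate_pdf(x)
model = laplacian_pdf(centers, sigma_x)   # analytic curve for comparison
```

The estimated curve tracks the analytic Laplacian closely near the origin and in the tails.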
PDF of Speech (figure)

PDF Models of Speech (figure)

Scalar Quantization
- Assume a sequence x[n] obtained from a speech waveform that has been lowpass-filtered and sampled at a suitable rate, with infinite amplitude precision.
- The samples x[n] are quantized to a finite set of amplitudes denoted by x̂[n]. Associated with the quantizer is a quantization step size Δ.
- Quantization allows the amplitudes to be represented by a finite set of bit patterns (symbols).
- Encoding: the mapping of x̂[n] to a finite set of symbols. This mapping yields a sequence of codewords denoted by c[n] (Figure 12.3a).
- Decoding: the inverse process, whereby the transmitted sequence of codewords c'[n] is transformed back to a sequence of quantized samples (Figure 12.3b).

Fundamentals
- Assume the signal amplitude is quantized into M reconstruction levels.
- The quantizer operator is denoted by Q(x); thus

  x̂[n] = x̂_i = Q(x[n]),  for x_{i−1} < x[n] ≤ x_i

where x̂_i, 1 ≤ i ≤ M, denotes the M possible reconstruction (quantization) levels, and x_i, 0 ≤ i ≤ M, denotes the M+1 possible decision levels.
- If x_{i−1} < x[n] ≤ x_i, then x[n] is quantized to the reconstruction level x̂_i; x̂[n] is the quantized sample of x[n].

Scalar Quantization Example
- Assume there are M = 4 reconstruction levels and that the amplitude of the input signal x[n] falls in the range [0, 1].
- Decision and reconstruction levels are equally spaced (Figure 12.4 in the next slide):
  - decision levels: [0, 1/4, 1/2, 3/4, 1]
  - reconstruction levels: [1/8, 3/8, 5/8, 7/8]

Example of Uniform 2-bit Quantizer (Figure 12.4)

Uniform Quantizer
- A uniform quantizer is one whose decision and reconstruction levels are uniformly spaced.
- Specifically:

  x_i − x_{i−1} = Δ,              1 ≤ i ≤ M
  x̂_i = (x_i + x_{i−1}) / 2,     1 ≤ i ≤ M

- Δ is the step size: the spacing between two consecutive decision levels, which is the same as the spacing between two consecutive reconstruction levels (Exercise 12.1).
- Each reconstruction level is assigned a symbol, the codeword. Binary numbers are typically used to represent the quantized samples (Figure 12.4).

Uniform Quantizer (cont.)
- Codebook: the collection of codewords. In general, with a B-bit binary codebook there are 2^B different quantization (reconstruction) levels.
- The bit rate is defined as the number of bits B per sample multiplied by the sampling rate f_s:  I = B · f_s.
- The decoder inverts the coder operation, taking the codeword back to a quantized amplitude value (e.g., 01 → x̂_2).
- Often the goal of speech coding/decoding is to keep the bit rate as low as possible while maintaining a required level of quality. Because the sampling rate is fixed for most applications, this goal implies reducing the number of bits per sample.
- Designing a uniform scalar quantizer requires knowledge of the maximum value of the sequence. Typically the range of the speech signal is expressed in terms of the standard deviation of the signal: specifically, it is often assumed that −4σ_x ≤ x[n] ≤ 4σ_x, where σ_x is the signal's standard deviation.
- Under the assumption that speech samples obey a Laplacian pdf, approximately 0.35% of the samples fall outside this range.
- Assume a B-bit binary codebook (2^B levels) and maximum signal value x_max = 4σ_x. The uniform quantization step size is then

  Δ = 2 x_max / 2^B

- The quantization step size relates directly to the notion of quantization noise.
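The uniform quantizer above can be sketched directly; the values below reproduce the 2-bit example of Figure 12.4 (range [0, 1], reconstruction levels 1/8, 3/8, 5/8, 7/8). This is an illustrative sketch, not standard-conformant coder code.

```python
import numpy as np

def uniform_quantize(x, n_bits, lo, hi):
    # B-bit uniform quantizer on [lo, hi]: 2^B equal cells of width delta,
    # reconstruction level at the midpoint of each cell.
    delta = (hi - lo) / 2 ** n_bits
    idx = np.floor((np.asarray(x, dtype=float) - lo) / delta)
    idx = np.clip(idx, 0, 2 ** n_bits - 1).astype(int)   # codeword indices
    xhat = lo + (idx + 0.5) * delta                      # reconstruction
    return idx, xhat

# 2-bit example over [0, 1]: decision levels 0, 1/4, 1/2, 3/4, 1
idx, xhat = uniform_quantize([0.1, 0.3, 0.6, 0.9], 2, 0.0, 1.0)
```

Here `idx` is the transmitted codeword index and `xhat` the decoded amplitude; the magnitude of the error never exceeds Δ/2 inside the range.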
Quantization Noise
- Two classes of quantization noise: granular distortion and overload distortion.
- Granular distortion:

  x̂[n] = x[n] + e[n]

where x[n] is the unquantized signal and e[n] = x̂[n] − x[n] is the quantization noise.
- For a given step size Δ, the magnitude of the quantization noise can be no greater than Δ/2, that is:

  −Δ/2 ≤ e[n] ≤ Δ/2

Figure 12.5 depicts this property.
Quantization Noise (cont.)
- Overload distortion:
  - Maximum-value constant: x_max = 4σ_x (i.e., −4σ_x ≤ x[n] ≤ 4σ_x).
  - For a Laplacian pdf, 0.35% of the speech samples fall outside the range of the quantizer.
  - Clipped samples incur a quantization error in excess of Δ/2. Due to the small number of clipped samples, it is common to neglect these infrequent large errors in theoretical calculations.

Statistical Model of Quantization Noise
- The desired approach for analyzing the quantization error in numerous applications.
- The quantization error is considered an ergodic white-noise random process, with autocorrelation

  r_e[m] = E( e[n] e[n+m] ) = σ_e²  for m = 0
  r_e[m] = 0                        for m ≠ 0

- This expression states that the process is uncorrelated with itself. It is further assumed that the quantization noise and the input signal are uncorrelated:

  E( x[n] e[n+m] ) = 0,  for all m

- The final assumption is that the pdf of the quantization noise is uniform over the quantization interval:

  p_e(e) = 1/Δ   for −Δ/2 ≤ e ≤ Δ/2
  p_e(e) = 0     otherwise

Quantization Error
- The stated assumptions are not always valid. For a slowly varying (e.g., linearly varying) signal, e[n] also changes linearly and is signal-dependent (see Figure 12.5). Correlated quantization noise can be annoying.
- When the quantization step is small, the assumptions that the noise is uncorrelated with itself and with the signal are roughly valid, provided the signal fluctuates rapidly among all quantization levels. The quantization error then approaches a white-noise process with an impulsive autocorrelation and a flat spectrum.
- One can force e[n] to be white noise, uncorrelated with x[n], by adding white noise to x[n] prior to quantization. This process is known as dithering. This decorrelation technique was shown to be useful not only in improving the perceptual quality of the quantization noise for speech but also for image signals.

Signal-to-Noise Ratio (SNR)
- A measure used to quantify the severity of the quantization noise; it relates the strength of the signal to the strength of the quantization noise.
- The SNR is defined as:

  SNR = σ_x² / σ_e² = E(x²[n]) / E(e²[n]) ≈ [ (1/N) Σ_{n=0}^{N−1} x²[n] ] / [ (1/N) Σ_{n=0}^{N−1} e²[n] ]

- Given the assumptions of quantizer range 2 x_max, quantization interval Δ = 2 x_max / 2^B for a B-bit quantizer, and a uniform error pdf, it can be shown (Exercise 12.2) that:

  σ_e² = Δ² / 12 = (2 x_max / 2^B)² / 12 = x_max² / (3 · 2^{2B})

- Thus the SNR can be expressed as:

  SNR = σ_x² / σ_e² = 3 · 2^{2B} · (σ_x / x_max)²

or, in decibels (dB):

  SNR(dB) = 10 log10( σ_x² / σ_e² ) = 10 log10 3 + 2B · 10 log10 2 − 20 log10( x_max / σ_x )
          ≈ 6B + 4.77 − 20 log10( x_max / σ_x )

- Because x_max = 4σ_x, this gives SNR(dB) ≈ 6B − 7.2: each added bit buys about 6 dB of SNR.
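The σ_e² = Δ²/12 result and the 6-dB-per-bit rule can be checked empirically. The sketch below uses a uniformly distributed test signal that fully loads the quantizer (so there is no overload and the rule holds almost exactly); for a speech-like Laplacian signal loaded at 4σ_x, overload errors can pull the measured SNR somewhat below 6B − 7.2, which is why the rule is approximate.

```python
import numpy as np

def quantize_uniform(x, n_bits, x_max):
    # Uniform quantizer over [-x_max, x_max] with step 2*x_max/2^B.
    delta = 2.0 * x_max / 2 ** n_bits
    idx = np.clip(np.floor((x + x_max) / delta), 0, 2 ** n_bits - 1)
    return -x_max + (idx + 0.5) * delta

def snr_db(x, xhat):
    # Empirical SNR in dB between a signal and its quantized version.
    e = xhat - x
    return 10.0 * np.log10(np.mean(x ** 2) / np.mean(e ** 2))

rng = np.random.default_rng(0)
# Uniform signal on [-1, 1]: fully loads the quantizer, no overload.
x = rng.uniform(-1.0, 1.0, size=500_000)
for b in (4, 6, 8):
    print(b, round(snr_db(x, quantize_uniform(x, b, 1.0)), 2))
```

The printed SNR grows by very nearly 6.02 dB for each additional bit, and the measured noise variance matches Δ²/12.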
Pulse Code Modulation (PCM)
- The quantization scheme presented so far is called pulse code modulation (PCM): B bits per sample are transmitted as a codeword.
- Advantages:
  - It is instantaneous (no coding delay).
  - It is independent of the signal content (voice, music, etc.).
- Disadvantages:
  - It requires a minimum of 11 bits per sample to achieve "toll quality" (equivalent to typical telephone quality).
  - For a 10000 Hz sampling rate, the required bit rate is (11 bits/sample) × (10000 samples/s) = 110,000 bps = 110 kbps.
  - For a CD-quality signal with a 20000 Hz sampling rate and 16 bits/sample, SNR(dB) = 96 − 7.2 = 88.8 dB and the bit rate is 320 kbps.

Nonuniform Quantization
- Uniform quantization may not be optimal: the quantization error may not be as small as possible for a given number of decision and reconstruction levels.
- Consider, for example, a speech signal for which x[n] is much more likely to be in one particular region than another (low values occurring much more often than high values). This implies that decision and reconstruction levels are not being utilized effectively with uniform intervals over ±x_max.
- A nonuniform quantizer that is optimal (in a least-squared-error sense) for a particular pdf is referred to as the Max quantizer. An example of a nonuniform quantizer is given in the figure on the next slide.

Max Quantizer
- Problem definition: for a random variable x with a known pdf, find the set of M quantizer levels that minimizes the quantization error; that is, find the decision and reconstruction levels x_k and x̂_k that minimize the mean-squared error (MSE) distortion measure

  D = E[ (x̂ − x)² ]

where E denotes expected value and x̂ is the quantized version of x.
- It turns out that the optimal decision level x_k is given by:

  x_k = ( x̂_{k+1} + x̂_k ) / 2,   1 ≤ k ≤ M−1
Max Quantizer (cont.)
- The optimal reconstruction level x̂_k is the centroid of p_x(x) over the interval x_{k−1} ≤ x ≤ x_k:

  x̂_k = [ ∫_{x_{k−1}}^{x_k} x p_x(x) dx ] / [ ∫_{x_{k−1}}^{x_k} p_x(x) dx ] = ∫_{x_{k−1}}^{x_k} x p̃_x(x) dx

- It is interpreted as the mean value of x over the interval x_{k−1} ≤ x ≤ x_k for the normalized pdf p̃_x(x).
- Solving the last two equations for x_k and x̂_k is a nonlinear problem in these two variables. An iterative solution is used, which requires obtaining the pdf (this can be difficult).
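The iterative solution can be sketched with a training set standing in for the pdf (as the histogram approximates it). This Lloyd-Max sketch simply alternates the two optimality conditions; initialization by sample quantiles is an assumption of this sketch, not part of the original formulation.

```python
import numpy as np

def lloyd_max(samples, m_levels, n_iter=50):
    # Iterative Max-quantizer design using training samples in place of
    # the pdf.  Alternates the two optimality conditions:
    #   (a) decision levels are midpoints of adjacent reconstruction levels
    #   (b) reconstruction levels are the centroid (mean) of their cell
    rec = np.quantile(samples, (np.arange(m_levels) + 0.5) / m_levels)
    dec = (rec[:-1] + rec[1:]) / 2.0
    for _ in range(n_iter):
        dec = (rec[:-1] + rec[1:]) / 2.0            # condition (a)
        cell = np.searchsorted(dec, samples)        # assign samples to cells
        for i in range(m_levels):
            members = samples[cell == i]
            if members.size:                        # condition (b)
                rec[i] = members.mean()
    return dec, rec

rng = np.random.default_rng(0)
train = rng.laplace(scale=1.0, size=100_000)   # Laplacian-like training data
dec, rec = lloyd_max(train, 4)
```

For a peaked pdf such as the Laplacian, the resulting nonuniform levels give a clearly lower MSE than a 4-level uniform quantizer loaded at 4σ_x.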
Nonuniform Quantization (figure)

Companding
- An alternative to the nonuniform quantizer is companding. It is based on the fact that a uniform quantizer is optimal for a uniform pdf.
- Thus, if a nonlinearity is applied to the waveform x[n] to form a new sequence g[n] whose pdf is uniform, then a uniform quantizer can be applied to g[n] to obtain ĝ[n], as depicted in Figure 12.10 on the next slide.
- In practice, a number of nonlinear approximations of the transformation that achieves a uniform density are used which do not require a pdf measurement: specifically, A-law and μ-law companding. μ-law coding is given by:

  T(x[n]) = x_max · [ log(1 + μ|x[n]|/x_max) / log(1 + μ) ] · sign(x[n])

- The CCITT international standard coder at 64 kbps is an example application of μ-law coding: a μ-law transformation followed by 7-bit uniform quantization, giving toll-quality speech. Equivalent quality with straight uniform quantization would require 11 bits.

Adaptive Coding
- Nonuniform quantizers are optimal for the long-term pdf of the speech signal. However, considering that speech is a highly time-varying signal, one has to question whether a single pdf derived from a long-time speech waveform is a reasonable assumption.
- Changes in the speech waveform: temporal and spectral variations due to transitions from unvoiced to voiced speech, and rapid volume changes.
- Approach: estimate a short-time pdf derived over 20-40 ms intervals. Short-time pdf estimates are more accurately described by a Gaussian pdf, regardless of the speech class.
- A pdf derived from a short-time speech segment more accurately represents the speech nonstationarity. One approach is to assume a pdf of a specific shape, in particular a Gaussian, with unknown variance σ².
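Before continuing, the μ-law transformation above and its inverse (the expander) can be sketched as follows. The value μ = 255 is the one commonly used in 64 kbps telephony; the source gives only the general formula, so treat it as an assumed parameter here.

```python
import numpy as np

MU = 255.0  # mu value commonly used in 64 kbps telephony (assumed here)

def mu_compress(x, x_max=1.0, mu=MU):
    # T(x) = x_max * log(1 + mu*|x|/x_max) / log(1 + mu) * sign(x)
    return x_max * np.log1p(mu * np.abs(x) / x_max) / np.log1p(mu) * np.sign(x)

def mu_expand(y, x_max=1.0, mu=MU):
    # Exact inverse of the compressor.
    return x_max * np.expm1(np.abs(y) / x_max * np.log1p(mu)) / mu * np.sign(y)
```

Small amplitudes are strongly expanded before uniform quantization (e.g., |x| = 0.01 maps to roughly 0.23 of full scale), which is what equalizes the quantization SNR across levels.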
- For a Gaussian we have:

  p_x(x) = ( 1 / sqrt(2π σ_x²) ) · e^( −x² / (2σ_x²) )

- Measure the local variance, then adapt a nonuniform quantizer to the resulting local pdf. This approach is referred to as adaptive quantization.

Adaptive Coding (cont.)
- Measure the variance σ_x² of the sequence x[n] and use the resulting pdf to design an optimal Max quantizer. Note that a change in variance simply scales the time signal:
  1. If E(x²[n]) = σ_x², then E[(β x[n])²] = β² σ_x².
  2. One therefore needs to design only one nonuniform quantizer with unity variance, and scale its decision and reconstruction levels according to the particular variance; equivalently, fix the quantizer and apply a time-varying gain to the signal according to the estimated variance (scale the signal to match the quantizer).
- There are two possible approaches for estimating the time-varying variance σ²[n]:
  - the feed-forward method (shown in Figure 12.11), where the variance (or gain) estimate is obtained from the input, and
  - the feedback method, where the estimate is obtained from the quantizer output.
- Adaptive quantizers can achieve higher SNR than μ-law companding. μ-law companding is nonetheless generally preferred for high-rate waveform coding because of its lower background noise when the transmission channel is idle.
- Adaptive quantization is useful in a variety of other coding schemes.
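The feed-forward variant can be sketched as follows. Block length, bit depth, and loading factor are illustrative assumptions, and a plain uniform quantizer stands in for the unit-variance Max quantizer; the per-block gain would be quantized and transmitted as side information.

```python
import numpy as np

def adaptive_quantize(x, block=160, n_bits=4, load=4.0):
    # Feed-forward adaptive quantization sketch: the gain is estimated
    # from the *input* block (so it must be sent as side information).
    xhat = np.empty_like(x)
    gains = []
    delta = 2.0 * load / 2 ** n_bits
    for s in range(0, len(x), block):
        seg = x[s:s + block]
        g = np.sqrt(np.mean(seg ** 2)) + 1e-12      # local rms gain estimate
        gains.append(g)
        idx = np.clip(np.floor((seg / g + load) / delta), 0, 2 ** n_bits - 1)
        xhat[s:s + block] = (-load + (idx + 0.5) * delta) * g
    return xhat, np.array(gains)

def seg_snr_db(x, xhat, block=160):
    # Segmental SNR: average of per-block SNRs (cf. the quality measures above).
    vals = []
    for s in range(0, len(x), block):
        xs = x[s:s + block]
        es = xhat[s:s + block] - xs
        vals.append(10.0 * np.log10(np.mean(xs ** 2) / np.mean(es ** 2)))
    return float(np.mean(vals))

# Test signal with a quiet half and a loud half (strongly time-varying variance).
rng = np.random.default_rng(2)
x = np.concatenate([0.01 * rng.standard_normal(1600), rng.standard_normal(1600)])
xhat, gains = adaptive_quantize(x)
```

On such a signal the adaptive scheme gives a far higher segmental SNR than the same quantizer scaled once to the global signal level, because the quiet blocks are no longer drowned by a step size sized for the loud ones.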
- Feedback method: the advantage is that no extra side information (the quantized variance) needs to be transmitted; the disadvantage is additional sensitivity to transmission errors in the codewords.

Differential and Residual Quantization
- The methods presented so far are examples of instantaneous quantization. Those approaches do not take advantage of the fact that speech, music, and similar signals are highly correlated, over:
  - short time (10-15 samples), as well as
  - long time (over a pitch period).
- In this section, methods that exploit the short-time correlation are investigated.
- Short-time correlation: neighboring samples are "self-similar", that is, they do not change too rapidly from one another. The difference of adjacent samples should therefore have a lower variance than the variance of the signal itself, and would thus make more effective use of the quantization levels: higher SNR for a fixed number of quantization levels.
- The next sample is predicted from previous ones (finding the best prediction coefficients to yield a minimum mean-squared prediction error, the same methodology as in the linear prediction analysis of Chapter 5). Two approaches:
  1. Use a fixed prediction filter that reflects the average local correlation of the signal.
  2. Allow the predictor to adapt, short-time, to the signal's local correlation (this requires transmission of the quantized prediction coefficients as well as the prediction error).
- A particular error-encoding scheme is illustrated in Figure 12.12 of the next slide. The following sequences appear in this scheme:
  - x̃[n]: the prediction of the input sample x[n]; this is the output of the predictor P(z), whose input is the quantized version x̂[n] of the input signal.
  - r[n]: the prediction-error (residual) signal.
  - r̂[n]: the quantized prediction-error signal.
- This approach is sometimes referred to as residual coding.
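The residual-coding loop just described can be sketched as follows, assuming a fixed first-order predictor P(z) = a z^{-1} with a = 0.9 (an illustrative value) and a uniform quantizer sized to the residual range rather than the full signal range, which is where the coding gain comes from.

```python
import numpy as np

def dpcm(x, a=0.9, n_bits=3, r_max=0.25):
    # Differential quantization per Figure 12.12 with a fixed first-order
    # predictor xtilde[n] = a * xhat[n-1].  The predictor is driven by the
    # *quantized* signal, keeping encoder and decoder in lockstep.
    delta = 2.0 * r_max / 2 ** n_bits
    xhat = np.zeros_like(x)
    prev = 0.0
    for n in range(len(x)):
        xtilde = a * prev                       # predict from quantized past
        r = x[n] - xtilde                       # residual r[n]
        idx = min(2 ** n_bits - 1, max(0, int(np.floor((r + r_max) / delta))))
        rhat = -r_max + (idx + 0.5) * delta     # quantized residual
        prev = xtilde + rhat                    # xhat[n] = x[n] + e_r[n]
        xhat[n] = prev
    return xhat

# Slowly varying, highly correlated test signal
x = np.sin(2.0 * np.pi * np.arange(2000) / 200.0)
xhat = dpcm(x)
```

Because the residual occupies a much smaller range than the signal, the 3-bit residual quantizer yields a much smaller error than 3-bit direct quantization of x[n] itself.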
Differential and Residual Quantization (cont.)
- The quantizer in this scheme can be of any type: fixed or adaptive, uniform or nonuniform. Whatever the case, the parameters of the quantizer are determined so as to match the variance of r[n].
- Differential quantization can also be applied to the parameters that represent speech, music, and other signals:
  - linear prediction (LPC) coefficients,
  - cepstral coefficients obtained from homomorphic filtering,
  - sinewave parameters, etc.
- Consider the quantization error of the quantized residual:

  r̂[n] = r[n] + e_r[n]

From Figure 12.12, the quantized input x̂[n] can be expressed as:

  x̂[n] = x̃[n] + r̂[n]
        = x̃[n] + r[n] + e_r[n]
        = x̃[n] + (x[n] − x̃[n]) + e_r[n]
        = x[n] + e_r[n]

- The quantized signal samples thus differ from the input only by the quantization error of the residual, e_r[n]. If the prediction of the signal is accurate, the variance of r[n] will be smaller than the variance of x[n], so a quantizer with a given number of levels can be adjusted to give a smaller quantization error than would be possible when quantizing the signal directly.
- The differential coder of Figure 12.12 is referred to as:
  - Differential PCM (DPCM) when used with a fixed predictor and fixed quantization;
  - Adaptive Differential PCM (ADPCM) when used with adaptive prediction (adapting the predictor to the local correlation) and adaptive quantization (adapting the quantizer to the local variance of r[n]). ADPCM yields the greatest gains in SNR for a fixed bit rate.
- The international coding standard CCITT G.721, with toll-quality speech at 32 kbps (8000 samples/s × 4 bits/sample), has been designed based on ADPCM techniques.
- To achieve higher quality at lower rates, it is necessary to rely on speech model-based techniques and on the exploitation of long-time as well as short-time prediction.
- An important variation of the differential quantization scheme of Figure 12.12: the prediction has so far assumed an all-pole (autoregressive) model, in which a signal value is predicted from its past samples. An alternative approach is to use a finite-order moving-average predictor derived from the residual; one common form is illustrated in Figure 12.13 of the next slide.
- With the all-pole predictor, any error in a codeword (due, for example, to bit errors over a degraded channel) propagates over considerable time during decoding.
- Such error propagation is especially severe when the signal values represent speech model parameters computed frame-by-frame (as opposed to sample-by-sample).
- Coder stage of the system in Figure 12.13: the residual is the difference between the true value and the value predicted from a moving average of K past quantized residuals, where the p[k] are the coefficients of P(z):

  r[n] = a[n] − Σ_{k=1}^{K} p[k] r̂[n−k]

- Decoder stage: the predicted value is added back to the quantized residual:

  â[n] = r̂[n] + Σ_{k=1}^{K} p[k] r̂[n−k]

- Error propagation is thus limited to only K samples (or K analysis frames in the case of model parameters).
Vector Quantization (VQ)
- Scalar quantization techniques were the topic of the previous sections. This section investigates a generalization of scalar quantization referred to as vector quantization, in which a block of scalars is coded as a vector rather than individually.
- An optimal quantization strategy can be derived based on a mean-squared error distortion metric, as with scalar quantization.

Motivation
- Assume the vocal tract transfer function is characterized by only two resonances, thus requiring four reflection coefficients. Furthermore, suppose that the vocal tract can take on only one of four possible shapes. This implies that there exist only four possible sets of the four reflection coefficients, as illustrated in Figure 12.14 of the next slide.
- Scalar quantization considers each reflection coefficient individually: each coefficient can take on 4 different values, so 2 bits are required per coefficient, and 4 × 2 = 8 bits per analysis frame are required to code the vocal tract transfer function.
- Vector quantization exploits the fact that there are only four possible vocal-tract shapes, corresponding to only four possible vectors of reflection coefficients; the scalar values within each vector are highly correlated. Thus only 2 bits are required to encode all 4 reflection coefficients.
- Note: if the scalars were independent of each other, treating them together as a vector would have no advantage over treating them individually.

Vector Quantization (VQ) (cont.)
- Consider a vector of N continuous scalars:

  x = [x_1, x_2, x_3, ..., x_N]^T

- With VQ, the vector x is mapped into another N-dimensional vector x̂:

  x̂ = [x̂_1, x̂_2, x̂_3, ..., x̂_N]^T

- The vector x̂ is chosen from M possible reconstruction (quantization) levels:

  x̂ = VQ[x] = r_i,  for x ∈ C_i

where:
  - VQ is the vector quantization operator,
  - r_i, 1 ≤ i ≤ M, are the M possible reconstruction levels (codewords),
  - C_i is the i-th "cell" (cell boundary); if x is in C_i, then x is mapped to r_i,
  - {r_i} is the set of all codewords: the codebook.

Properties of VQ
- P1: In vector quantization a cell can have an arbitrary size and shape. In scalar quantization a "cell" (the region between two decision levels) can have an arbitrary size, but its shape is fixed.
- P2: As in scalar quantization, a distortion measure D(x, x̂) measures the dissimilarity or error between x and x̂.

VQ Distortion Measure
- Vector quantization noise is represented by the vector e = x̂ − x. The distortion is the average of the sum of squares of its scalar components, D = E[e^T e]. For the multidimensional pdf p_x(x):
  D = E[ (x̂ − x)^T (x̂ − x) ] = ∫ (x̂ − x)^T (x̂ − x) p_x(x) dx
    = Σ_{i=1}^{M} ∫_{x ∈ C_i} (r_i − x)^T (r_i − x) p_x(x) dx

VQ Distortion Measure (cont.)
- The goal is to minimize D = E[(x̂ − x)^T (x̂ − x)]. Two conditions formulated by Lim:
  - C1: A vector x must be quantized to the reconstruction level r_i that gives the smallest distortion between x and r_i.
  - C2: Each reconstruction level r_i must be the centroid of the corresponding decision region (cell C_i).
- Condition C1 implies that, given the reconstruction levels, we can quantize without explicit need for the cell boundaries: to quantize a given vector, find the reconstruction level that minimizes its distortion. This requires a large search, an active area of research.
- Condition C2 specifies how to obtain a reconstruction level from the selected cell.
- The two stated conditions provide the basis for an iterative solution for obtaining the VQ codebook:
  1. Start with an initial estimate of the r_i.
  2. Apply condition C1 to determine all the vectors of a set that are quantized to each r_i.
  3. Apply condition C2 to obtain a new estimate of the reconstruction levels (the centroid of each cell).
- The problem with this approach is that it requires an estimate of the joint pdf of x in order to compute the distortion measure and the multidimensional centroid. Solution: the k-means algorithm (due to Lloyd for 1-D and Forgy for multiple dimensions).

k-Means Algorithm
- Define the ensemble average distortion over the training set:

  D = (1/N) Σ_{k=0}^{N−1} (x_k − x̂_k)^T (x_k − x̂_k)

where the x_k are the training vectors and the x̂_k are the quantized vectors.
1. Pick an initial guess at the reconstruction levels {r_i}.
2. For each x_k, select the closest r_i. The set of all x_k nearest to r_i forms a cluster (see Figure 12.16); hence "clustering algorithm".
3. Compute the mean of the x_k in each cluster, which gives the new r_i's.
4. Calculate D.
5. Stop when the change in D over two consecutive iterations is insignificant.
- This algorithm converges to a local minimum of D.

Neural-Network-Based Clustering Algorithms
- Kohonen's self-organizing feature map (SOFM); the topological ordering of the SOFM offers the potential for further reduction in bit rate.

Use of VQ in Speech Transmission
- Obtain the VQ codebook from the training vectors; all transmitters and receivers must have identical copies of the VQ codebook.
- The analysis procedure generates a vector x_i; the transmitter sends the index of the centroid r_i of the closest cluster for the given vector x_i. This step involves a search.
- The receiving end decodes the information by accessing the codeword of the received index and performing the synthesis operation.
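The steps above can be sketched as follows. The greedy-spread initialization is an assumption of this sketch (the text only says "pick an initial guess"), and the toy training set echoes the four-vocal-tract-shapes motivation: four well-separated 4-D clusters, each coded by a single 2-bit index.

```python
import numpy as np

def kmeans_vq(train, m, n_iter=30):
    # k-means VQ codebook design on a training set of row vectors.
    # Initialization (assumed here): start from the first training vector,
    # then repeatedly add the vector farthest from the codewords so far.
    codebook = [train[0].astype(float)]
    for _ in range(m - 1):
        d2 = np.min([np.sum((train - c) ** 2, axis=1) for c in codebook], axis=0)
        codebook.append(train[int(np.argmax(d2))].astype(float))
    codebook = np.array(codebook)
    for _ in range(n_iter):
        # condition C1: quantize each training vector to its nearest codeword
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d2.argmin(axis=1)
        # condition C2: move each codeword to the centroid of its cluster
        for i in range(m):
            members = train[nearest == i]
            if len(members):
                codebook[i] = members.mean(axis=0)
    return codebook

def vq_encode(x, codebook):
    # The transmitter sends only this index; the receiver looks up
    # codebook[index] and synthesizes from it.
    return int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))

# Toy training set: four "vocal-tract shapes" in 4-D, observed with small
# perturbations (cf. the four-shape motivation above; values are made up).
rng = np.random.default_rng(1)
shapes = np.array([[0.9, -0.5, 0.3, -0.1],
                   [0.2,  0.7, -0.4, 0.5],
                   [-0.6, 0.1, 0.8, -0.3],
                   [-0.2, -0.8, -0.5, 0.6]])
train = np.vstack([s + 0.02 * rng.standard_normal((200, 4)) for s in shapes])
codebook = kmeans_vq(train, 4)
```

After training, each learned codeword sits at the centroid of one cluster, and `vq_encode` maps each shape to a distinct 2-bit index.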
Model-Based Coding
- The purpose of model-based speech coding is to increase bit efficiency, to achieve either higher quality at the same bit rate or a lower bit rate at the same quality.
- A chronological perspective of model-based coding:
  - All-pole speech representation used for coding, first with scalar quantization, then with vector quantization.
  - Mixed Excitation Linear Prediction (MELP) coder: removes deficiencies of the binary (impulse/noise) source representation.
  - Code-Excited Linear Prediction (CELP) coder: does not require the explicit multiband decision and source characterization of MELP.

Basic Linear Prediction Coder (LPC)
- Recall the basic speech production model of the form:

  H(z) = A / A(z) = A / (1 − P(z))

where the predictor polynomial is given by:

  P(z) = Σ_{k=1}^{p} a_k z^{−k}

- Suppose linear prediction analysis is performed at 100 frames/s and 13 parameters are used per frame:
  - 10 all-pole spectrum parameters,
  - pitch,
  - a voicing decision, and
  - gain.
This results in 1300 parameters/s, compared with 8000 samples/s (8 bits per sample) for a telephone-quality signal of 4000 Hz bandwidth: 1300 parameters/s is far less than 8000 samples/s.
- Instead of the prediction coefficients a_i, use:
  - the corresponding poles b_i,
  - partial correlation (PARCOR) coefficients k_i,
  - reflection coefficients r_i, or
  - another equivalent representation.
- The behavior of the prediction coefficients is difficult to characterize: they have a large dynamic range (large variance), and quantization errors can lead to an unstable system function at synthesis (poles may move outside the unit circle).
- The alternative equivalent representations have a limited dynamic range, and stability can easily be enforced because |b_i| < 1 and |k_i| < 1.
- There are many ways to code the linear prediction parameters. Ideally, optimal quantization uses the Max quantizer based on known or estimated pdf's of each parameter. Example of 7200 bps coding:
1. Voiced/unvoiced decision: 1 bit (on or off).
2. Pitch (if voiced): 6 bits (uniform).
3. Gain: 5 bits (nonuniform).
4. Each pole b_i: 10 bits (nonuniform):
   5 bits for bandwidth and 5 bits for center frequency, for a total of 6 poles.
- At 100 frames/s: 1 + 6 + 5 + 6 × 10 = 72 bits per frame, i.e., 7200 bps. Quality is limited by the simple impulse/noise excitation model.

Basic LPC: Improvements
- Improvements are possible by replacing the poles with PARCOR coefficients. Higher-order PARCOR coefficients have pdf's closer to Gaussian, centered around zero, motivating nonuniform quantization.
- Companding is effective with PARCOR: the transformed pdf's are close to uniform.
- The original PARCOR coefficients, however, do not have good spectral sensitivity (the change in spectrum with a change in a spectral parameter, which we want to minimize). An empirical finding is that a more desirable transformation in this sense is the logarithm of the vocal tract area function ratio, where A_{i+1}/A_i = (1 − k_i)/(1 + k_i):

  g_i = T[k_i] = log[ (1 − k_i) / (1 + k_i) ] = log( A_{i+1} / A_i )

- The parameters g_i:
  - have a pdf close to uniform;
  - have smaller spectral sensitivity than PARCOR: the all-pole spectrum changes less with a change in g_i than with a change in k_i (and, likewise, less with a change in k_i than with a change in pole positions);
  - can typically be coded at 5-6 bits each (a significant improvement over 10 bits).
- At 100 frames/s with a predictor of order 6 (6 poles): (1 + 6 + 5 + 6 × 6) × 100 bps = 4800 bps, with the same quality as the 7200 bps obtained by coding pole positions, for telephone-bandwidth speech.
- A government standard for secure communications at 2.4 kbps used this basic LPC scheme at 50 frames/s for about a decade.
- Demand for higher-quality standards opened up research on two primary problems of speech coders based on all-pole linear prediction analysis.
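The log-area-ratio transformation above and its inverse can be sketched as follows; note that the inverse maps any finite g_i back to |k_i| < 1, so a synthesis filter rebuilt from quantized g_i values is guaranteed stable.

```python
import numpy as np

def parcor_to_lar(k):
    # g_i = log((1 - k_i) / (1 + k_i)) = log(A_{i+1} / A_i)
    k = np.asarray(k, dtype=float)
    return np.log((1.0 - k) / (1.0 + k))

def lar_to_parcor(g):
    # Inverse transform; |k_i| < 1 holds for any finite g_i, so stability
    # of the decoded synthesis filter is guaranteed.
    g = np.asarray(g, dtype=float)
    return (1.0 - np.exp(g)) / (1.0 + np.exp(g))
```

A crude uniform quantization of the g_i (e.g., rounding to a fixed step) therefore can never push a reflection coefficient outside the unit interval, unlike direct quantization of poles.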
1. Inadequacy of the basic source/filter speech production model.
2. Restrictions of one-dimensional scalar quantization techniques, which cannot account for possible correlation among parameters.

A VQ LPC Coder
- A VQ-based LPC PARCOR coder, with the codebook trained by the k-means algorithm (figure on next slide).
- Use a VQ LPC coder to achieve the same quality of speech at a lower bit rate: 800 bps instead of the 2400 bps of scalar quantization:
  - a 10-bit codebook (1024 codewords) at 44.4 frames/s, i.e., 440 bits per second to code the PARCOR coefficients;
  - 8 bits per frame for pitch, gain, and voicing;
  - 1 bit per second for frame synchronization.
- Alternatively, maintain the 2400 bps bit rate with higher-quality speech coding (early 1980s): a 22-bit codebook, 2^22 ≈ 4,200,000 codewords. Problems:
  1. The solution is intractable due to computational requirements (large VQ search) and memory (large codebook size).
  2. The VQ-based spectrum is characterized by a "wobble", due to the LPC-based spectrum being quantized: a spectral representation near a cell boundary "wobbles" to and from neighboring cells because of an insufficient number of codewords.
- Emphasis then changed from improved VQ of the spectrum to better excitation models, and ultimately to a return to VQ, applied to the excitation.

Mixed Excitation LPC (MELP)
- Uses a multiband voicing decision (introduced as a concept in Section 12.5.2; not covered in these slides).
- Addresses shortcomings of conventional linear prediction analysis/synthesis:
  - a more realistic excitation signal,
  - time-varying vocal tract formant bandwidths,
  - production principles of the "anomalous" voice.

MELP Model: unique components:
- Model: different mixtures of impulses and noise are generated in different frequency bands (4-10 bands); the impulse train and noise are each passed through time-varying spectral shaping filters and added together to form a fullband signal.
- MELP's unique components:
  1. An auditory-based approach to multiband voicing estimation for the mixed impulse/noise excitation.
  2. Aperiodic impulses due to pitch jitter, the creaky voice, and the diplophonic voice.
  3. Time-varying resonance bandwidth within a pitch period, accounting for nonlinear source/system interaction and introducing truncation effects.
  4. A more accurate shape of the glottal flow velocity source.

Mixed Excitation LPC (MELP)
- A 2.4 kbps coder based on the MELP model has been implemented and selected as the government standard for secure telephone communications.
- The original version of MELP uses:
  - 34 bits for scalar quantization of the LPC coefficients (specifically, the line spectral frequencies, LSFs).
  - 8 bits for gain.
  - 7 bits for pitch and overall voicing.
  - 5 bits for multiband voicing.
  - 1 bit for the jittery-state (aperiodic) flag.
  - In total, 54 bits per 22.5 ms frame, i.e. 2.4 kbps.
- Uses the autocorrelation technique on the lowpass-filtered LPC residual.
- In the actual 2.4 kbps standard, greater efficiency is achieved with vector quantization of the LSF coefficients.

Mixed Excitation LPC (MELP): Line Spectral Frequencies (LSFs)
- A more efficient parameter set for coding the all-pole model of linear prediction.
- The LSFs for a pth-order all-pole model are defined as follows: two polynomials of order p+1 are created from the pth-order inverse filter A(z); the first is

  P(z) = A(z) + z^-(p+1) A(z^-1)

- LSFs can be coded efficiently, and stability of the resulting synthesis filter can be guaranteed when they are quantized.
- Better quantization and interpolation properties than the corresponding PARCOR coefficients.
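To make the definition concrete, here is a hypothetical worked example in pure Python for p = 2. With A(z) = 1 + a1 z^-1 + a2 z^-2, the sum polynomial P(z) = A(z) + z^-3 A(z^-1) always has a root at z = -1 and the difference polynomial Q(z) = A(z) - z^-3 A(z^-1) a root at z = +1; dividing these out leaves symmetric quadratics whose unit-circle root angles are the LSFs (the example filter coefficients are arbitrary):

```python
import math

def lsf_order2(a1, a2):
    """LSFs of the inverse filter A(z) = 1 + a1*z^-1 + a2*z^-2.

    P(z) has coefficients [1, a1+a2, a1+a2, 1] with a fixed root at z = -1;
    Q(z) has coefficients [1, a1-a2, a2-a1, -1] with a fixed root at z = +1.
    Dividing out those roots leaves 1 + b*z^-1 + z^-2, whose roots lie on
    the unit circle at angles +-arccos(-b/2).
    """
    b_p = a1 + a2 - 1.0          # quotient coefficient from P(z) / (1 + z^-1)
    b_q = a1 - a2 + 1.0          # quotient coefficient from Q(z) / (1 - z^-1)
    w_p = math.acos(-b_p / 2.0)
    w_q = math.acos(-b_q / 2.0)
    return sorted([w_p, w_q])    # LSFs in radians, inside (0, pi)

# Stable example: A(z) = 1 - 0.9*z^-1 + 0.2*z^-2 (roots at z = 0.4 and 0.5).
w1, w2 = lsf_order2(-0.9, 0.2)
```

For a stable A(z) the roots of P(z) and Q(z) interlace on the unit circle, so checking that the quantized LSFs remain distinct and ordered in (0, pi) is exactly what guarantees a stable synthesis filter.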
The second polynomial is

  Q(z) = A(z) - z^-(p+1) A(z^-1)

- Disadvantage: solving for the roots of P(z) and Q(z) can be more computationally intensive than computing the PARCOR coefficients.
- The polynomial A(z) is easily recovered from the LSFs (Exercise 12.18).

Code-Excited Linear Prediction (CELP)
Concept:
- The basic idea in CELP is to represent the residual from long-term prediction on each frame by codewords from a VQ-generated codebook (as opposed to multipulse excitation).
- On each frame, a codeword is chosen from a codebook of residuals so as to minimize the mean-squared error between the synthesized and original speech waveforms.
- The length of a codeword sequence is determined by the analysis frame length: for a 10 ms frame interval split into two inner frames of 5 ms each, a codeword sequence is 40 samples long at an 8000 Hz sampling rate.
- The residual and the long-term predictor are estimated with twice the time resolution (a 5 ms frame) of the short-term predictor (10 ms frame), since the excitation is more nonstationary than the vocal tract.

Code-Excited Linear Prediction (CELP)
Two approaches to forming the codebook, deterministic and stochastic:
- Deterministic codebook:
  - Formed by applying the k-means clustering algorithm to a large set of residual training vectors.
  - Subject to channel mismatch.
- Stochastic codebook:
  - The histogram of the residual from the long-term predictor roughly follows a Gaussian pdf, a valid assumption except for plosives and voiced/unvoiced transitions.
  - The cumulative distributions are nearly identical to those of white Gaussian random variables.
  - An alternative codebook can therefore be constructed from white Gaussian random variables with unit variance.

CELP Coders
- A variety of government and international standard coders; current international standards use CELP-based coding.
- The 1990s government standard for secure communications at 4.8 kbps and 4000 Hz bandwidth (FED-STD-1016) uses a CELP coder. Three bit rates are used:
  - 9.6 kbps (multipulse)
  - 4.8 kbps (CELP)
  - 2.4 kbps (LPC)

FED-STD-1016 details:
- Short-time predictor: 30 ms frame interval, coded with 34 bits per frame.
- 10th-order vocal tract spectrum from the prediction coefficients, transformed to LSFs and coded with nonuniform quantization.
- The short-term and long-term predictors are estimated in open loop; the residual codewords are determined in closed loop.
- Examples of current international CELP-based standards: G.729, G.723.1.
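The closed-loop (analysis-by-synthesis) codeword search described above can be sketched as follows; the stochastic Gaussian codebook, filter coefficients, and frame length are toy illustrative values, not taken from FED-STD-1016:

```python
import random

def synthesize(excitation, a):
    """All-pole synthesis 1/A(z): s[n] = e[n] - a1*s[n-1] - ... - ap*s[n-p]."""
    s = []
    for n, e in enumerate(excitation):
        acc = e
        for k, ak in enumerate(a, start=1):
            if n - k >= 0:
                acc -= ak * s[n - k]
        s.append(acc)
    return s

def search_codebook(target, codebook, a):
    """Pick the codeword (and its optimal gain) minimizing squared error."""
    best_err, best_index, best_gain = float("inf"), -1, 0.0
    for i, codeword in enumerate(codebook):
        y = synthesize(codeword, a)
        yy = sum(v * v for v in y)
        gain = sum(t * v for t, v in zip(target, y)) / yy  # least-squares gain
        err = sum((t - gain * v) ** 2 for t, v in zip(target, y))
        if err < best_err:
            best_err, best_index, best_gain = err, i, gain
    return best_index, best_gain, best_err

# 5-bit stochastic codebook of white Gaussian codewords, 40 samples each
# (5 ms at an 8000 Hz sampling rate, as in the text).
rng = random.Random(0)
codebook = [[rng.gauss(0.0, 1.0) for _ in range(40)] for _ in range(32)]
a = [-0.9, 0.2]  # A(z) = 1 - 0.9*z^-1 + 0.2*z^-2

# Build a target from a known codeword; the search should recover it.
target = [2.0 * v for v in synthesize(codebook[7], a)]
index, gain, err = search_codebook(target, codebook, a)
```

A real CELP coder performs this search per inner frame after removing the long-term (pitch) prediction, and transmits only the winning index and quantized gain.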
This note was uploaded on 02/10/2012 for the course ECE 3552 taught by Professor Staff during the Fall '10 term at FIT.