compressed video bit stream is corrupted due to transmission? (List at least one cause.) Describe two encoding tools that can be used to suppress error propagation.
a) This is because the video coding standard specifies only the syntax that a bitstream must follow, not the method used to generate that syntax. For example, the standard specifies that each macroblock should have a certain header, which tells which mode is used to code the block. The meaning of the remaining bits depends on the mode. If it is an inter mode, the remaining bits specify the binary codewords corresponding to the quantized DCT coefficients of the inter-prediction error. The standard does not specify how the encoder should determine the mode, nor how it should determine the motion vector. Mode decision and motion estimation are two components that can be optimized by vendors.
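As a concrete illustration of one of these encoder-side freedoms, the sketch below shows full-search block-matching motion estimation using the sum of absolute differences (SAD). The frame layout, block size, and search range are toy assumptions for illustration only; real encoders use far larger frames, faster search strategies, and rate-distortion-based mode decision.

```python
# Toy full-search block-matching motion estimation (SAD criterion).
# Frames are small 2D lists of pixel values; all sizes are illustrative.

def sad(ref, cur, rx, ry, cx, cy, bsize):
    """SAD between the block at (cy, cx) in cur and (ry, rx) in ref."""
    total = 0
    for dy in range(bsize):
        for dx in range(bsize):
            total += abs(cur[cy + dy][cx + dx] - ref[ry + dy][rx + dx])
    return total

def full_search(ref, cur, cx, cy, bsize=2, search=1):
    """Return ((mvx, mvy), cost) minimizing SAD over the search window."""
    best = (None, float("inf"))
    for mvy in range(-search, search + 1):
        for mvx in range(-search, search + 1):
            ry, rx = cy + mvy, cx + mvx
            # Skip candidates that fall outside the reference frame.
            if 0 <= ry and 0 <= rx and ry + bsize <= len(ref) and rx + bsize <= len(ref[0]):
                cost = sad(ref, cur, rx, ry, cx, cy, bsize)
                if cost < best[1]:
                    best = ((mvx, mvy), cost)
    return best

# Example: the 2x2 block at (1, 1) in cur is the block at (0, 0) in ref,
# shifted down-right by one pixel, so the best motion vector is (-1, -1).
ref = [[5, 6, 0, 0],
       [7, 8, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
cur = [[0, 0, 0, 0],
       [0, 5, 6, 0],
       [0, 7, 8, 0],
       [0, 0, 0, 0]]
mv, cost = full_search(ref, cur, cx=1, cy=1)
print(mv, cost)  # → (-1, -1) 0
```

Because the standard constrains only the bitstream syntax, a vendor could replace this exhaustive search with any faster heuristic (e.g., a diamond search) and still produce a fully compliant stream.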
b) Scalable coding enables the same video bit stream to be accessed by receivers with different bandwidths. There are spatial, temporal, and SNR scalability. One way to generate temporal scalability is to apply a standard video coder to a low-frame-rate video (down-sampled from the original, say, every other frame).
The enhancement layer corresponds to the remaining frames. Each frame in the enhancement layer, e.g., f(n), can be predicted either from the previous frame f(n-1) (in the base layer) or from f(n-2) (the previous frame in the enhancement layer). For the base-layer prediction, if one uses the previous frame f(n-1) (in the enhancement layer) to predict f(n) in the base layer, the encoder is likely to have a lower prediction error and hence higher coding efficiency, but there will be a mismatch at the decoder if the decoder receives only the base layer. If one uses f(n-2) (the previous frame in the base layer) to predict f(n), the prediction is likely to be less...
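The layering and prediction trade-off described above can be sketched with toy scalar "frames" (all values are illustrative assumptions, not real video data): even-indexed frames form the base layer, odd-indexed frames the enhancement layer, and for each base-layer frame we compare the two candidate references.

```python
# Toy temporal-scalability sketch: frames are scalars, and "prediction
# error" is just the absolute difference from the reference frame.

frames = [10, 12, 14, 17, 20, 22]   # f(0)..f(5), illustrative values

base = frames[0::2]          # f(0), f(2), f(4): the low-frame-rate video
enhancement = frames[1::2]   # f(1), f(3), f(5): the remaining frames

# For each base-layer frame f(n), compare the two reference choices:
#   f(n-2): previous base-layer frame, always available at the decoder
#   f(n-1): previous enhancement-layer frame, absent for a base-only decoder
for n in range(2, len(frames), 2):
    err_base_ref = abs(frames[n] - frames[n - 2])   # drift-free choice
    err_enh_ref = abs(frames[n] - frames[n - 1])    # lower error, but mismatch
    print(f"f({n}): |f(n)-f(n-2)| = {err_base_ref}, |f(n)-f(n-1)| = {err_enh_ref}")
```

In this toy sequence the enhancement-layer reference f(n-1) always gives the smaller prediction error, mirroring the text's point: referencing the closer frame improves coding efficiency, at the cost of decoder mismatch (drift) whenever the enhancement layer is not received.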
- Spring '14