Multiple-Microphone Robust Speech Recognition Using Decoder-Based Channel Selection

Yasunari Obuchi
Advanced Research Laboratory, Hitachi Ltd., Tokyo, Japan
[email protected]

Abstract

In this paper, we focus on speech recognition using multiple microphones of varying quality. The quality of one channel may be much better than that of the other channels, and even better than the output of standard microphone-array techniques such as the delay-and-sum beamformer. It is therefore important to find a good indicator for selecting a channel for recognition. This paper introduces Decoder-Based Channel Selection (DBCS), which provides a criterion for evaluating the quality of each channel by comparing the speech recognition hypotheses obtained from compensated and uncompensated feature vectors. We evaluate the performance of DBCS using speech data recorded with a PDA-like mockup. DBCS with Delta-Cepstrum Normalization for single-channel compensation provides a significant improvement over the delay-and-sum beamformer. In addition, the concept of DBCS is extended to the delay-and-sum beamformer outputs of various subsets of microphones; this extension yields a further improvement in speech recognition accuracy.

1. Introduction

It is well known that the performance of automatic speech recognition systems degrades when they are used in noisy environments. Considerable effort has been devoted to this problem, including single-channel feature compensation and microphone-array processing. In the single-channel case, input feature vectors are normalized using statistical assumptions about the speech or noise model. Recently we proposed a novel algorithm called Delta-Cepstrum Normalization (DCN) [1], which is an extension of Histogram Equalization (HEQ) [2] to the cepstral time-derivative domain. It was shown that DCN provides better compensation than Cepstral Mean Normalization (CMN) [3] and HEQ, especially in highly noisy environments.

If the system has more than one microphone, geometric separation of the speech and the noise is possible. A typical approach is the delay-and-sum beamformer [4], in which the speech from a specific direction is enhanced by adding phase-matched signals, while the noise is reduced by averaging phase-mismatched signals. The delay-and-sum beamformer rests on the assumption that the microphones are homogeneous: the only difference among them is their geometric position, which introduces a small phase difference in the input signals, while all other conditions are equal. This assumption may hold if the microphones are mounted firmly and kept in good condition. However, there are cases where it does not. If a user holds a PDA with a microphone at each of its four corners and speaks to it, one microphone can be much closer to the speaker's mouth than the others. In addition, one of the speaker's fingers may cover a microphone. A similar problem can occur in an automobile, if microphones
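The following minimal sketch illustrates the delay-and-sum beamformer described above, under simplifying assumptions not taken from the paper: integer-sample steering delays for the desired speech direction are already known, and all channels share the same sampling rate.

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Align each channel by its steering delay (in samples) and average them.

    signals : list of 1-D numpy arrays, one per microphone
    delays  : list of non-negative integer delays that phase-match the speech direction
    """
    # Common usable length after applying each channel's delay.
    length = min(len(s) - d for s, d in zip(signals, delays))
    aligned = [s[d:d + length] for s, d in zip(signals, delays)]
    # Phase-matched speech adds coherently; uncorrelated noise tends to average out.
    return np.mean(aligned, axis=0)
```

With delays chosen to phase-match the speaker's direction, the output emphasizes the speech component. In the inhomogeneous situations described above (a covered microphone, or one much closer to the mouth), this averaging can instead degrade the best channel, which is what motivates channel selection.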
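The abstract only outlines the DBCS criterion, so the sketch below is a hypothetical reading rather than the paper's exact method: each channel is decoded twice, with and without single-channel compensation (e.g., DCN), and the agreement between the two hypotheses is used as a channel-quality score. The `decode` and `compensate` callables are placeholders for a recognizer and a compensation front end, not an API from the paper.

```python
from difflib import SequenceMatcher

def hypothesis_agreement(hyp_a, hyp_b):
    # Word-level similarity between two recognition hypotheses, in [0, 1].
    return SequenceMatcher(None, hyp_a.split(), hyp_b.split()).ratio()

def select_channel(channel_features, decode, compensate):
    """Return the index of the channel whose compensated and uncompensated
    hypotheses agree most, used here as a proxy for channel quality."""
    scores = []
    for feats in channel_features:
        hyp_raw = decode(feats)               # hypothesis from uncompensated features
        hyp_comp = decode(compensate(feats))  # hypothesis from compensated features
        scores.append(hypothesis_agreement(hyp_raw, hyp_comp))
    return max(range(len(scores)), key=scores.__getitem__)
```

The same idea extends naturally to the subset-beamformer variant mentioned in the abstract: each delay-and-sum output over a microphone subset is treated as one more candidate channel and scored in the same way.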