Myoelectric signal classification for phoneme-based speech recognition.
Traditional acoustic speech recognition accuracies have been shown to deteriorate in highly noisy environments. This work exploits a secondary information source: surface myoelectric signals (MES) collected from facial articulatory muscles during speech. Words are classified at the phoneme level using a hidden Markov model (HMM) classifier. Acoustic and MES data were collected while the words "zero" through "nine" were spoken. An acoustic expert classified the 18 formative phonemes with 99% accuracy at low noise levels [signal-to-noise ratio (SNR) of 17.5 dB], but its accuracy deteriorated to approximately 38% in simulations with SNR approaching 0 dB. A fused acoustic-myoelectric multiexpert system, operating without knowledge of the SNR, improved on the acoustic classification results at all noise levels. A multiexpert system incorporating SNR information achieved 99% accuracy at low noise levels while maintaining accuracies above 94% in low-SNR (0 dB) simulations. These results improve on previous full-word MES speech recognition accuracies by almost 10%.
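To illustrate the idea of SNR-aware expert fusion, the sketch below blends phoneme posteriors from an acoustic expert and an MES expert, weighting the acoustic expert more heavily as SNR rises. This is a minimal illustration under assumed conventions: the linear weighting ramp, the function names, and the random example posteriors are hypothetical, not the paper's actual fusion rule.

```python
import numpy as np

N_PHONEMES = 18  # the 18 formative phonemes from the abstract

def fuse_experts(p_acoustic, p_mes, snr_db, snr_low=0.0, snr_high=17.5):
    """Blend acoustic and MES phoneme posteriors by SNR (illustrative).

    The acoustic expert is trusted more at high SNR; the MES expert,
    being unaffected by acoustic noise, dominates as SNR falls.
    """
    # Map SNR onto a [0, 1] acoustic weight (assumed linear ramp between
    # the abstract's 0 dB and 17.5 dB operating points).
    w = np.clip((snr_db - snr_low) / (snr_high - snr_low), 0.0, 1.0)
    fused = w * p_acoustic + (1.0 - w) * p_mes
    return fused / fused.sum()  # renormalize to a valid distribution

# Example: synthetic posteriors over the 18 phoneme classes.
rng = np.random.default_rng(0)
p_acoustic = rng.dirichlet(np.ones(N_PHONEMES))
p_mes = rng.dirichlet(np.ones(N_PHONEMES))

for snr in (17.5, 9.0, 0.0):
    fused = fuse_experts(p_acoustic, p_mes, snr)
    print(f"SNR {snr:5.1f} dB -> predicted phoneme class {fused.argmax()}")
```

At 17.5 dB the decision follows the acoustic expert; at 0 dB it follows the MES expert, mirroring how the SNR-informed multiexpert system can hold accuracy above 94% when the acoustic channel alone drops to roughly 38%.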