Abstract
Speech communication is one of the simplest and most reliable forms of communication between humans, and it is affected by the behaviour and emotions of the speaker. The electrical activity generated in the facial muscles during speech production is captured by a form of biomedical signal known as the electromyogram (EMG). These signals result from the contraction and relaxation of muscles, which are controlled by the nervous system. EMG signals from specific facial muscles, which play a prominent role in the expression of elementary emotions and in speech generation, are recorded for speech recognition and system automation, generally using small surface electrodes placed close to each other. The present paper investigates the EMG patterns generated during the utterance of unvoiced consonants. Six subjects (three males and three females) aged 20-25 years participated. Thirty-eight vowel-consonant-vowel (VCV) syllables in Hindi were recorded along with the corresponding facial EMG signals. For each speaker, the mean log-spectral distance (LSD) between the EMG signal of each VCV and a reference EMG signal was computed. Analysis of the spectrograms and LSDs showed that the EMG signals generated in the muscles vary with both the subject and the VCV. Hence, for automatic decoding of EMG signals, the system should be trained on both sources of variation.
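As a minimal sketch of the distance measure mentioned above, the log-spectral distance between two signals can be computed from their FFT power spectra. This is a generic illustration, not the paper's exact pipeline: the FFT length, the spectral floor `eps`, and the function name are assumptions, and the reference EMG signal here is just a second input array.

```python
import numpy as np

def log_spectral_distance(x, y, n_fft=256):
    """Root-mean-square log-spectral distance (in dB) between two signals.

    x, y  : 1-D arrays (e.g. an EMG segment and a reference EMG segment)
    n_fft : FFT length used to estimate the power spectra (assumed value)
    """
    # Power spectra over the positive frequencies
    px = np.abs(np.fft.rfft(x, n_fft)) ** 2
    py = np.abs(np.fft.rfft(y, n_fft)) ** 2
    eps = 1e-12  # spectral floor to avoid log of zero (assumption)
    # Per-frequency difference of the log power spectra, in dB
    diff = 10.0 * np.log10((px + eps) / (py + eps))
    # RMS over frequency bins gives a single distance value
    return float(np.sqrt(np.mean(diff ** 2)))
```

For identical inputs the spectra coincide and the distance is zero; the more the spectral envelopes of the two signals differ, the larger the LSD, which is why per-speaker means of this quantity can expose subject- and syllable-dependent variation.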