
Viseme

From Wikipedia, the free encyclopedia

A viseme is any of several speech sounds that look the same on the face, for example when lip reading (Fisher 1968).

Visemes and phonemes do not share a one-to-one correspondence. Often several phonemes correspond to a single viseme, because they look the same on the face when produced: for example /k, ɡ, ŋ/; /t, d, n, l/; and /p, b, m/. Thus words such as pet, bell, and men are difficult for lip-readers to distinguish, as all look alike. On one account, visemes offer (phonetic) information about place of articulation, while manner of articulation requires auditory input.[1]
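This many-to-one grouping can be illustrated with a small lookup table. The following sketch (in Python) uses only the consonant groupings named above, with invented class labels and a simplified phoneme spelling; it is an illustration, not a standard viseme inventory.

    # Illustrative phoneme-to-viseme grouping (not a standard inventory).
    VISEME_CLASSES = {
        "bilabial": {"p", "b", "m"},       # /p, b, m/
        "alveolar": {"t", "d", "n", "l"},  # /t, d, n, l/
        "velar":    {"k", "g", "ng"},      # /k, ɡ, ŋ/
    }

    # Invert the grouping so each phoneme maps to its viseme class.
    PHONEME_TO_VISEME = {
        phoneme: viseme
        for viseme, phonemes in VISEME_CLASSES.items()
        for phoneme in phonemes
    }

    def to_visemes(phonemes):
        """Map a phoneme sequence to a viseme sequence; unlisted phonemes pass through."""
        return [PHONEME_TO_VISEME.get(p, p) for p in phonemes]

    # "pet", "bell", and "men" collapse to the same visual pattern:
    print(to_visemes(["p", "eh", "t"]))  # ['bilabial', 'eh', 'alveolar']
    print(to_visemes(["b", "eh", "l"]))  # ['bilabial', 'eh', 'alveolar']
    print(to_visemes(["m", "eh", "n"]))  # ['bilabial', 'eh', 'alveolar']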

However, in natural speech the visual "signature" of a given gesture varies in timing and duration in ways that cannot be captured by simply concatenating still images of each mouth pattern in sequence.[2] Conversely, some sounds that are hard to distinguish acoustically are clearly distinguished by the face. For example, in spoken English /l/ and /r/ can often sound quite similar (especially in clusters, such as 'grass' vs. 'glass'), yet the visual information can disambiguate them. Some linguists have argued that speech is best understood as bimodal (aural and visual), and that comprehension can be compromised if either of these two modalities is absent (McGurk and MacDonald 1976).

Viseme ambiguity can be humorous, as in the phrase "elephant juice", which when lip-read appears identical to "I love you".

Applications for the study of visemes include speech processing, speech recognition, and computer facial animation.
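
In facial animation, for instance, one use of such a phoneme-to-viseme mapping is to drive mouth shapes from a timed phoneme transcription. The sketch below is hypothetical: the class labels, phoneme spellings, and timings are invented for illustration.

    # Hypothetical lip-sync sketch: convert timed phonemes to viseme keyframes,
    # merging consecutive phonemes that share the same mouth shape.
    PHONEME_TO_VISEME = {
        "p": "bilabial", "b": "bilabial", "m": "bilabial",
        "t": "alveolar", "d": "alveolar", "n": "alveolar", "l": "alveolar",
        "eh": "open_mid",
    }

    def lipsync_keyframes(timed_phonemes):
        """Collapse (phoneme, start_seconds) pairs into (viseme, start_seconds) keyframes."""
        keyframes = []
        for phoneme, start in timed_phonemes:
            viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
            if not keyframes or keyframes[-1][0] != viseme:
                keyframes.append((viseme, start))
        return keyframes

    # "pet" spoken over ~0.3 s yields three mouth-shape keyframes:
    print(lipsync_keyframes([("p", 0.00), ("eh", 0.08), ("t", 0.21)]))
    # [('bilabial', 0.0), ('open_mid', 0.08), ('alveolar', 0.21)]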


References

  • Chen, T. and Rao, R. R. (1998, May). "Audio-visual integration in multi-modal communication". Proceedings of the IEEE 86, 837–852. doi:10.1109/5.664274.
  • Chen, T. (2001). "Audiovisual speech processing". IEEE Signal Processing Magazine 18, 9–21. doi:10.1109/79.911195.
  • Fisher, C. G. (1968). "Confusions among visually perceived consonants". Journal of Speech and Hearing Research 11(4), 796–804. doi:10.1044/jshr.1104.796.
  • McGurk, H. and MacDonald, J. (1976, December). "Hearing lips and seeing voices". Nature 264, 746–748. doi:10.1038/264746a0.
  • Lucey, P., Martin, T. and Sridharan, S. (2004). "Confusability of Phonemes Grouped According to their Viseme Classes in Noisy Environments". Presented at the Tenth Australian International Conference on Speech Science & Technology, Macquarie University, Sydney, 8–10 December 2004.


  1. ^ Summerfield, Q. (1992). "Lipreading and audio-visual speech perception". Philosophical Transactions of the Royal Society of London B: Biological Sciences 335(1273), 71–78. doi:10.1098/rstb.1992.0009. PMID 1348140.
  2. ^ Calvert, G. A. and Campbell, R. (2003). "Reading Speech from Still and Moving Faces: The Neural Substrates of Visible Speech". Journal of Cognitive Neuroscience 15, 57–70. https://api.semanticscholar.org/CorpusID:14153329.