
zbMATH — the first resource for mathematics

Speech structure and its application to robust speech processing. (English) Zbl 1205.68349
Summary: Speech communication consists of three steps: production, transmission, and hearing. Every step inevitably introduces acoustic distortions due to gender differences, age, microphone- and room-related factors, and so on. Despite these variations, listeners extract linguistic information from speech as easily as if the communication had not been affected by them at all. One hypothesis is that listeners modify their internal acoustic models whenever extralinguistic factors change. Another is that the linguistic information in speech can be represented separately from the extralinguistic factors. In this study, inspired by studies of humans and animals, a novel solution to the problem of these intrinsic variations is proposed. Speech structures invariant to the variations are derived as transform-invariant features, and their linguistic validity is discussed. Their high robustness is demonstrated by applying the speech structures to automatic speech recognition and pronunciation proficiency estimation. This paper also discusses the immaturity of the current implementation and application of speech structures.
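The invariance underlying these "speech structures" rests on the result of Qiao and Minematsu [25] that an f-divergence between two distributions is unchanged by any invertible transform applied to both. A minimal numerical sketch of this property, assuming Gaussian speech-event models and modeling a speaker/channel distortion as a common invertible affine map (the specific Gaussians and transform below are illustrative, not taken from the paper):

```python
# Sketch: the KL divergence (an f-divergence) between two Gaussian
# "speech event" distributions is invariant when both undergo the same
# invertible affine distortion x -> A x + b, which maps a Gaussian
# N(mu, S) to N(A mu + b, A S A^T). All concrete values are illustrative.
import numpy as np

def kl_gauss(m1, S1, m2, S2):
    """KL divergence KL(N(m1,S1) || N(m2,S2)) for multivariate Gaussians."""
    d = len(m1)
    iS2 = np.linalg.inv(S2)
    diff = m2 - m1
    return 0.5 * (np.trace(iS2 @ S1) + diff @ iS2 @ diff - d
                  + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

# Two toy "speech event" Gaussians in a 2-D feature space
m1, S1 = np.array([0.0, 1.0]), np.eye(2)
m2, S2 = np.array([2.0, -1.0]), np.diag([2.0, 0.5])

# A common invertible affine distortion (e.g., a speaker difference)
A = np.array([[1.5, 0.3], [0.1, 0.8]])
b = np.array([1.0, -2.0])

before = kl_gauss(m1, S1, m2, S2)
after = kl_gauss(A @ m1 + b, A @ S1 @ A.T, A @ m2 + b, A @ S2 @ A.T)
print(before, after)  # the two divergences coincide
```

Because every pairwise divergence in a set of speech events is preserved, the whole distance matrix ("structure") is a transform-invariant feature, which is what the paper exploits for robust recognition.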
MSC:
68T10 Pattern recognition, speech recognition
68T50 Natural language processing
Software:
rastamat
References:
[1] Kuhl, P. K., ”Early language acquisition: Cracking the speech code,” Nature Reviews Neuroscience, 5, pp.831–843, 2004. · doi:10.1038/nrn1533
[2] Benzeghiba, M., De Mori, R., Deroo, O., Dupont, S., Erbes, T., Jouvet, D., Fissore, L., Laface, P., Mertins, A., Ris, C., Rose, R., Tyagi, V. and Wellekens, C., ”Automatic speech recognition and speech variability: A review,” Speech Communication, 49, pp.763–786, 2007. · Zbl 05185397 · doi:10.1016/j.specom.2007.02.006
[3] Lotto, R. B. and Purves, D., ”An empirical explanation of color contrast,” in Proc. of the National Academy of Sciences USA, 97, pp.12834–12839, 2000.
[4] Lotto, R. B. and Purves, D., ”The effects of color on brightness,” Nature neuroscience, 2, 11, pp.1010–1014, 1999. · doi:10.1038/14808
[5] Taniguchi, T., Sounds become music in mind: introduction to music psychology, Kitaoji Pub., 2000.
[6] http://www.lottolab.org/illusiondemos/Demo%2012.html
[7] Briscoe, A. D. and Chittka, L., ”The evolution of color vision in insects,” Annual review of entomology, 46, pp.471–510, 2001. · doi:10.1146/annurev.ento.46.1.471
[8] Hauser, M. D. and McDermott, J., ”The evolution of the music faculty: a comparative perspective,” Nature Neuroscience, 6, pp.663–668, 2003. · doi:10.1038/nn1080
[9] Acquisition of Communication and Recognition Skills Project (ACORNS). http://www.acorns-project.org/
[10] Human Speechome Project, http://www.media.mit.edu/press/speechome/
[11] Infants’ Commonsense Knowledge Project, http://minny.cs.inf.shizuoka.ac.jp/SIG-ICK/
[12] Kato, M., ”Phonological development and its disorders,” Journal of Communication Disorders, 20, 2, pp.84–85, 2003.
[13] Shaywitz, S. E., Overcoming dyslexia, Random House, 2005.
[14] Hayakawa, M., ”Language acquisition and motherese,” Language, 35, 9, Taishukan pub., pp.62–67, 2006.
[15] Lieberman, P., ”On the development of vowel production in young children,” Child Phonology vol.1, (Yeni-Komshian, G. H., Kavanagh, J. F. and Ferguson, C. A. eds.), Academic Press, 1980.
[16] Okanoya, K., ”Birdsongs and human language: common evolutionary mechanisms,” in Proc. Spring Meet. Acoust. Soc. Jpn., 1-17-5, pp.1555–1556, 2008 (including Q&A after his presentation).
[17] Gruhn, W., ”The audio-vocal system in sound perception and learning of language and music,” in Proc. Int. Conf. on language and music as cognitive systems, 2006.
[18] Umesh, S., Cohen, L., Marinovic, N. and Nelson, D. J., ”Scale transform in speech analysis,” IEEE Trans. Speech and Audio Processing, 7, 1, pp.40–45, 1999. · doi:10.1109/89.736329
[19] Irino, T. and Patterson, R. D., ”Segregating information about the size and shape of the vocal tract using a time-domain auditory model: the stabilised wavelet-Mellin transform”, Speech Communication, 36, pp.181–203, 2002. · Zbl 0987.68818 · doi:10.1016/S0167-6393(00)00085-6
[20] Mertins, A. and Rademacher, J., ”Vocal tract length invariant features for automatic speech recognition,” in Proc. IEEE Workshop on Automatic Speech Recognition and Understanding, pp.308–312, 2005.
[21] Jakobson, R. and Waugh, L. R., The sound shape of language, Mouton De Gruyter, 1987.
[22] Ladefoged, P. and Broadbent, D. E., ”Information conveyed by vowels,” J. Acoust. Soc. Am., 29, 1, pp.98–104, 1957. · doi:10.1121/1.1908694
[23] Nearey, T. M., ”Static, dynamic, and relational properties in vowel perception,” J. Acoust. Soc. Am., 85, 5, pp.2088–2113, 1989. · doi:10.1121/1.397861
[24] Hawkins, J. and Blakeslee, S., On intelligence, Henry Holt, 2004.
[25] Qiao, Y. and Minematsu, N., ”A study on invariance of f-divergence and its application to speech recognition,” IEEE Transactions on Signal Processing, 58, 7, pp.3884–3890, 2010. · Zbl 1392.94412 · doi:10.1109/TSP.2010.2047340
[26] Csiszar, I., ”Information-type measures of difference of probability distributions and indirect observations,” Stud. Sci. Math. Hung., 2, pp.299–318, 1967. · Zbl 0157.25802
[27] Minematsu, N., ”Mathematical evidence of the acoustic universal structure in speech,” in Proc. Int. Conf. Acoustics, Speech, & Signal Processing, pp.889–892, 2005.
[28] Minematsu, N., Nishimura, T., Nishinari, K. and Sakuraba, K., ”Theorem of the invariant structure and its derivation of speech Gestalt,” in Proc. Int. Workshop on Speech Recognition and Intrinsic Variations, pp.47–52, 2006.
[29] Minematsu, N., ”Pronunciation assessment based upon the phonological distortions observed in language learners’ utterances,” in Proc. Int. Conf. Spoken Language Processing, pp.1669–1672, 2004.
[30] Saito, D., Matsuura, R., Asakawa, S., Minematsu, N. and Hirose, K., ”Directional dependency of cepstrum on vocal tract length,” in Proc. Int. Conf. Acoustics, Speech, & Signal Processing, pp.4485–4488, 2008.
[31] Eidhammer, I., ”Structure comparison and structure patterns,” Journal of Computational Biology, 7, 5, pp.685–716, 2000. · doi:10.1089/106652701446152
[32] Pitz, M. and Ney, H., ”Vocal tract normalization equals linear transformation in cepstral space,” IEEE Trans. Speech and Audio Processing, 13, 5, pp.930–944, 2005. · doi:10.1109/TSA.2005.848881
[33] Emori, T. and Shinoda, K., ”Rapid vocal tract length normalization using maximum likelihood estimation,” in Proc. EUROSPEECH, pp.1649–1652, 2001.
[34] Naito, M., Deng, L. and Sagisaka, Y., ”Model based speaker normalization methods for speech recognition,” IEICE Trans. J83-D-II, 11, pp.2360–2369, 2000.
[35] Tohoku University - Matsushita isolated Word database (TMW), http://research.nii.ac.jp/src/eng/list/detail.html#TMW
[36] Kawahara, T., Lee, A., Takeda, K., Itou, K. and Shikano, K., ”Recent progress of open-source LVCSR engine Julius and Japanese model repository,” in Proc. Int. Conf. on Spoken Language Processing, pp.3069–3072, 2004.
[37] Qiao, Y., Suzuki, M. and Minematsu, N., ”A study of Hidden Structure Model and its application to labeling sequences,” in Proc. IEEE Workshop on Automatic Speech Recognition and Understanding, pp.118–123, 2009.
[38] Greenberg, S. and Kingsbury, B., ”The modulation spectrogram: in pursuit of an invariant representation of speech,” in Proc. Int. Conf. Acoustics, Speech, & Signal Processing, pp.1647–1650, 1997.
[39] Hermansky, H. and Morgan, N., ”RASTA processing of speech,” IEEE Trans. Speech and Audio Processing, 2, 4, pp.578–589, 1994. · doi:10.1109/89.326616
[40] Eskenazi, M., ”An overview of spoken language technology for education,” Speech Communication, 51, 10, pp.832–844, 2009. · Zbl 05771133 · doi:10.1016/j.specom.2009.04.005
[41] Witt, S. M. and Young, S. J., ”Phone-level pronunciation scoring and assessment for interactive language learning,” Speech Communication, 30, pp.95–108, 2000. · doi:10.1016/S0167-6393(99)00044-8
[42] Minematsu, N., Asakawa, S. and Hirose, K., ”Structural representation of the pronunciation and its use for CALL,” in Proc. IEEE Int. Workshop on Spoken Language Technology, pp.126–129, 2006.
[43] Minematsu, N., ”Training of pronunciation as learning of the sound system embedded in the target language,” in Proc. Int. Symposium on Phonetic Frontiers, CD-ROM, 2008.
[44] Minematsu, N., et al., ”Development of English speech database read by Japanese to support CALL research,” in Proc. Int. Conf. Acoustics, pp.577–560, 2004.
[45] Frith, U., Autism: explaining the enigma, Wiley-Blackwell, 2003.
[46] Willey, L. H. and Attwood, T., Pretending to be normal: living with Asperger’s syndrome, Jessica Kingsley Publishers, 1999.
[47] Grandin, T. and Johnson, C., Animals in translation: using the mysteries of autism to decode animal behavior, Scribner, 2004.
[48] Higashida, N. and Higashida, M., Messages to all my colleagues living on the planet, Escor Pub., Chiba, 2005.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.