
Conformer

swMATH ID: 35794
Software Authors: Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, Ruoming Pang
Description: Conformer: Convolution-augmented Transformer for Speech Recognition. Recently, Transformer- and convolutional neural network (CNN)-based models have shown promising results in automatic speech recognition (ASR), outperforming recurrent neural networks (RNNs). Transformer models are good at capturing content-based global interactions, while CNNs exploit local features effectively. In this work, we achieve the best of both worlds by studying how to combine convolutional neural networks and Transformers to model both the local and global dependencies of an audio sequence in a parameter-efficient way. To this end, we propose the convolution-augmented Transformer for speech recognition, named Conformer. Conformer significantly outperforms previous Transformer- and CNN-based models, achieving state-of-the-art accuracies. On the widely used LibriSpeech benchmark, the model achieves a WER of 2.1%/4.3% on test/test-other without a language model, and 1.9%/3.9% with an external language model.
Homepage: https://arxiv.org/abs/2005.08100
Source Code: https://github.com/lucidrains/conformer
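
The block structure summarized in the description can be made concrete with a small sketch. The following is a minimal, illustrative PyTorch rendering of a single Conformer block (half-step feed-forward module, multi-head self-attention, convolution module, second half-step feed-forward module, each wrapped in a residual connection). It is not the linked repository's implementation; the model dimension, head count, and kernel size are illustrative, and plain multi-head attention stands in for the paper's relative-positional-encoding attention.

# Minimal sketch of a Conformer block (Gulati et al., 2020); sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeedForwardModule(nn.Module):
    """Macaron-style feed-forward module, applied with a half-step residual."""
    def __init__(self, dim, expansion=4, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, dim * expansion),
            nn.SiLU(),                          # Swish activation
            nn.Dropout(dropout),
            nn.Linear(dim * expansion, dim),
            nn.Dropout(dropout),
        )

    def forward(self, x):
        return self.net(x)


class ConvolutionModule(nn.Module):
    """Pointwise conv -> GLU -> depthwise conv -> BatchNorm -> Swish -> pointwise conv."""
    def __init__(self, dim, kernel_size=31, dropout=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.pointwise1 = nn.Conv1d(dim, 2 * dim, kernel_size=1)
        self.depthwise = nn.Conv1d(dim, dim, kernel_size,
                                   padding=kernel_size // 2, groups=dim)
        self.bn = nn.BatchNorm1d(dim)
        self.pointwise2 = nn.Conv1d(dim, dim, kernel_size=1)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):                        # x: (batch, time, dim)
        y = self.norm(x).transpose(1, 2)         # -> (batch, dim, time) for Conv1d
        y = F.glu(self.pointwise1(y), dim=1)     # gated linear unit halves the channels
        y = F.silu(self.bn(self.depthwise(y)))   # depthwise conv captures local context
        y = self.pointwise2(y).transpose(1, 2)
        return self.dropout(y)


class ConformerBlock(nn.Module):
    """FFN(1/2) -> self-attention -> convolution -> FFN(1/2), each with a residual."""
    def __init__(self, dim=256, num_heads=4, kernel_size=31, dropout=0.1):
        super().__init__()
        self.ff1 = FeedForwardModule(dim, dropout=dropout)
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads,
                                          dropout=dropout, batch_first=True)
        self.conv = ConvolutionModule(dim, kernel_size, dropout)
        self.ff2 = FeedForwardModule(dim, dropout=dropout)
        self.final_norm = nn.LayerNorm(dim)

    def forward(self, x):                        # x: (batch, time, dim)
        x = x + 0.5 * self.ff1(x)                # half-step feed-forward (Macaron style)
        a = self.attn_norm(x)
        x = x + self.attn(a, a, a, need_weights=False)[0]
        x = x + self.conv(x)
        x = x + 0.5 * self.ff2(x)
        return self.final_norm(x)


if __name__ == "__main__":
    block = ConformerBlock(dim=256, num_heads=4)
    features = torch.randn(8, 100, 256)          # (batch, frames, feature dim)
    print(block(features).shape)                 # torch.Size([8, 100, 256])

In the full model, a stack of such blocks sits on top of a convolutional subsampling front end; the sketch above only illustrates how the convolution and self-attention modules are interleaved within one block.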
Related Software: ESPnet; Kaldi; SentencePiece; Athena; PIKA; SPGISpeech; LibriSpeech; GigaSpeech; Jasper; CSS10; RuLS; AISHELL; num2words; QuartzNet; MLS; Gentle; aeneas; NeMo; TensorRT; SpecAugment
Cited in: 0 Documents