ViLBERT

swMATH ID: 42498
Software Authors: Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee
Description: ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks – visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval – by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models – achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.
Homepage: https://arxiv.org/abs/1908.02265
Source Code: https://github.com/facebookresearch/vilbert-multi-task
Related Software: Adam; VideoBERT; Flickr30K; ImageNet; BLEU; S4L; VQA; BERT; Rouge; GloVe; LXMERT; VisualBERT; PointNet; MNIST; CamNet; MVSNet; Fashion-MNIST; SynSin; Make3D; PIFuHD
Cited in: 2 Publications

Cited by 7 Authors:
1 Francis, Jonathan
1 Kitamura, Nariaki
1 Labelle, Felix
1 Lu, Xiaopeng
1 Navarro, Ingrid
1 Oh, Jean
1 Szeliski, Richard

Cited in 2 Serials:
1 The Journal of Artificial Intelligence Research (JAIR)
1 Texts in Computer Science

Cited in 1 Field:
2 Computer science (68-XX)
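The description's central mechanism is the co-attentional transformer layer, in which each stream's queries attend over the other stream's keys and values. Below is a minimal PyTorch sketch of that idea, written only to illustrate the exchange between the two streams; the class name CoAttentionBlock, the dimensions, and the sublayer arrangement are illustrative assumptions, not the released ViLBERT implementation (see the Source Code link for the real one).

```python
import torch
import torch.nn as nn


class CoAttentionBlock(nn.Module):
    """Hypothetical sketch of one co-attentional exchange: the linguistic
    stream attends over visual features and vice versa, followed by the
    usual residual + feed-forward sublayers of a transformer block."""

    def __init__(self, dim: int = 768, num_heads: int = 12):
        super().__init__()
        # Cross-attention in both directions (batch_first keeps shapes [B, T, D]).
        self.txt_to_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.img_to_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_txt = nn.LayerNorm(dim)
        self.norm_img = nn.LayerNorm(dim)
        self.ffn_txt = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.ffn_img = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, txt: torch.Tensor, img: torch.Tensor):
        # Text queries attend to image keys/values, and vice versa.
        txt_attn, _ = self.txt_to_img(query=txt, key=img, value=img)
        img_attn, _ = self.img_to_txt(query=img, key=txt, value=txt)
        txt = self.norm_txt(txt + txt_attn)
        img = self.norm_img(img + img_attn)
        txt = txt + self.ffn_txt(txt)
        img = img + self.ffn_img(img)
        return txt, img


# Toy usage: a batch of 2 captions (20 tokens) and 2 images (36 region features).
block = CoAttentionBlock()
txt = torch.randn(2, 20, 768)
img = torch.randn(2, 36, 768)
txt_out, img_out = block(txt, img)
print(txt_out.shape, img_out.shape)  # torch.Size([2, 20, 768]) torch.Size([2, 36, 768])
```

Each stream keeps its own parameters throughout, which is what makes the model "two-stream": the only point of interaction is the cross-attention, matching the paper's description of separate visual and textual streams coupled by co-attention.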