swMATH ID: 42504
Software Authors: Estelle Aflalo, Meng Du, Shao-Yen Tseng, Yongfei Liu, Chenfei Wu, Nan Duan, Vasudev Lal
Description: VL-InterpreT: An Interactive Visualization Tool for Interpreting Vision-Language Transformers. Breakthroughs in transformer-based models have revolutionized not only the NLP field, but also vision and multimodal systems. However, although visualization and interpretability tools have become available for NLP models, the internal mechanisms of vision and multimodal transformers remain largely opaque. With the success of these transformers, it is increasingly critical to understand their inner workings, as unraveling these black boxes will lead to more capable and trustworthy models. To contribute to this quest, we propose VL-InterpreT, which provides novel interactive visualizations for interpreting the attentions and hidden representations in multimodal transformers. VL-InterpreT is a task-agnostic, integrated tool that (1) tracks a variety of statistics in attention heads throughout all layers for both the vision and language components, (2) visualizes cross-modal and intra-modal attentions through easily readable heatmaps, and (3) plots the hidden representations of vision and language tokens as they pass through the transformer layers. In this paper, we demonstrate the functionalities of VL-InterpreT through the analysis of KD-VLP, an end-to-end pretrained vision-language transformer, on Visual Commonsense Reasoning (VCR) and WebQA, two visual question answering benchmarks. We also present a few interesting findings about multimodal transformer behaviors that were learned through our tool.
Homepage: https://arxiv.org/abs/2203.17247
Source Code: https://github.com/intellabs/vl-interpret
Related Software: COVAREP; RUBi; OpenFace; CLEVR; MDETR; ViLT; VisualBERT; MultiBench; ViLBERT; DIME; GloVe; Flickr30K; Grad-CAM; NBDT; Faster R-CNN; VQA; LXMERT; iMotions; Python; MultiViz
Cited in: 0 Publications
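
The cross-modal attention heatmaps described above can be sketched in a few lines: a multimodal transformer attends over a concatenated sequence of language tokens and image patches, and slicing the attention matrix recovers the text-to-image block, which can be reshaped onto the patch grid for rendering. This is a minimal illustrative sketch with toy shapes and NumPy only; it does not reflect VL-InterpreT's actual API or data layout.

```python
import numpy as np

# Toy setup (assumed, not VL-InterpreT's API): 4 word tokens
# followed by a 3x3 grid of image-patch tokens in one sequence.
n_text, n_image = 4, 9
seq = n_text + n_image

rng = np.random.default_rng(0)
logits = rng.normal(size=(seq, seq))
# Row-wise softmax: each row is one token's attention distribution.
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Cross-modal block: how each text token attends to image patches.
text_to_image = attn[:n_text, n_text:]          # shape (4, 9)

# Reshape per-token attention back onto the 2D patch grid, ready
# to be drawn as a heatmap over the input image.
heatmaps = text_to_image.reshape(n_text, 3, 3)

# A per-token statistic of the kind the tool tracks per head/layer:
# total cross-modal attention mass each text token places on the image.
cross_modal_mass = text_to_image.sum(axis=-1)
```

In the tool itself these blocks are extracted per head and per layer, so the same slicing applies to a stack of attention matrices rather than a single one.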