swMATH ID: 42497
Software Authors: Yiwei Lyu, Paul Pu Liang, Zihao Deng, Ruslan Salakhutdinov, Louis-Philippe Morency
Description: DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations. The ability of a human to understand an Artificial Intelligence (AI) model’s decision-making process is critical in enabling stakeholders to visualize model behavior, perform model debugging, promote trust in AI models, and assist in collaborative human-AI decision-making. As a result, the research fields of interpretable and explainable AI have gained traction within AI communities as well as among interdisciplinary scientists seeking to apply AI in their subject areas. In this paper, we focus on advancing the state of the art in interpreting multimodal models, a class of machine learning methods that tackle core challenges in representing and capturing interactions between heterogeneous data sources such as images, text, audio, and time-series data. Multimodal models power numerous real-world applications across healthcare, robotics, multimedia, affective computing, and human-computer interaction. By disentangling a model's behavior into unimodal contributions (UC) and multimodal interactions (MI), our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models while maintaining generality across arbitrary modalities, model architectures, and tasks. Through a comprehensive suite of experiments on both synthetic and real-world multimodal tasks, we show that DIME generates accurate disentangled explanations, helps users of multimodal models gain a deeper understanding of model behavior, and presents a step towards debugging and improving these models for real-world deployment. Code for our experiments can be found at https://github.com/lvyiwei1/DIME
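The core idea of disentangling a two-modality model into unimodal contributions (UC) and multimodal interactions (MI) can be sketched as an additive projection over cross-pairings of inputs: the UC part is the best purely additive approximation of the model's outputs, and the MI part is the residual. The sketch below is illustrative only; the function name, signature, and use of raw cross-pairing averages are assumptions, not the exact implementation in the DIME repository.

```python
import numpy as np

def disentangle(model, xs_a, xs_b):
    """Split model outputs over all cross-pairings of two modalities
    into an additive (unimodal-contribution) part UC and a residual
    (multimodal-interaction) part MI. Illustrative sketch only.

    model(a, b) -> scalar score; xs_a, xs_b are sample sets for the
    two modalities (e.g., images and texts).
    """
    # Score every cross-pairing of modality-A and modality-B samples.
    F = np.array([[model(a, b) for b in xs_b] for a in xs_a])
    row_mean = F.mean(axis=1, keepdims=True)  # average over modality B
    col_mean = F.mean(axis=0, keepdims=True)  # average over modality A
    grand = F.mean()                          # overall average
    UC = row_mean + col_mean - grand          # additive (unimodal) part
    MI = F - UC                               # interaction residual
    return UC, MI
```

A sanity check of the decomposition: for a purely additive model such as `model(a, b) = a + b`, the MI part is identically zero, while a model with genuine cross-modal interaction such as `model(a, b) = a * b` leaves a nonzero residual. Each part can then be explained separately with a local explanation method (e.g., LIME-style perturbations), which is the fine-grained analysis the entry describes.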
Homepage: https://arxiv.org/abs/2203.02013
Source Code: https://github.com/lvyiwei1/DIME
Related Software: COVAREP; RUBi; OpenFace; VL-InterpreT; CLEVR; MDETR; ViLT; VisualBERT; MultiBench; ViLBERT; GloVe; Flickr30K; Grad-CAM; NBDT; Faster R-CNN; VQA; LXMERT; iMotions; Python; MultiViz
Cited in: 0 Publications