
InterpretML

swMATH ID: 30904
Software Authors: Harsha Nori, Samuel Jenkins, Paul Koch, Rich Caruana
Description: InterpretML: A Unified Framework for Machine Learning Interpretability. InterpretML is an open-source Python package that exposes machine learning interpretability algorithms to practitioners and researchers. It covers two types of interpretability: glassbox models, which are machine learning models designed to be interpretable (e.g., linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (e.g., Partial Dependence, LIME). The package enables practitioners to easily compare interpretability algorithms by exposing multiple methods under a unified API and by providing a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine, a powerful, interpretable glassbox model that can be as accurate as many blackbox models (a basic usage sketch follows this record). The MIT-licensed source code can be downloaded from https://github.com/interpretml/interpret.
Homepage: https://arxiv.org/abs/1909.09223
Source Code:  https://github.com/interpretml/interpret
Dependencies: Python
Keywords: Machine Learning; arXiv_cs.LG; arXiv_stat.ML; Python package; Python; ML interpretability algorithms
Related Software: Python; AI Explainability 360; modelStudio; H2O; DALEX; shap; Alibi Explain; AIF360; Alibi; TensorFlow; Scikit; PyTorch; Captum; pdp; iml; ingredients; shapper; lime; modelDown; R
Cited in: 2 Documents
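
A minimal sketch of the unified API described above, assuming the interpret package and scikit-learn are installed (class and function names follow the project's documented usage; exact signatures may vary between versions):

  from sklearn.datasets import load_breast_cancer
  from sklearn.model_selection import train_test_split
  from interpret.glassbox import ExplainableBoostingClassifier
  from interpret import show

  # Load a small tabular dataset and split it into train/test sets
  X, y = load_breast_cancer(return_X_y=True, as_frame=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  # Train the Explainable Boosting Machine, a glassbox model
  ebm = ExplainableBoostingClassifier()
  ebm.fit(X_train, y_train)

  # Global explanation: per-feature shape functions and importances
  show(ebm.explain_global())

  # Local explanation: per-feature contributions for individual predictions
  show(ebm.explain_local(X_test[:5], y_test[:5]))

The same show() call renders explanations produced by any of the package's glassbox or blackbox explainers, which is what makes side-by-side comparison of interpretability methods straightforward.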