While deep learning models have achieved state-of-the-art accuracy on many prediction tasks, understanding these models remains a challenge. Despite recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets they use, pose unique design challenges that existing work inadequately addresses. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ActiVis, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance and subset level. ActiVis has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios showing how ActiVis may work with different models.
A novel visual representation that unifies instance- and subset-level inspections of neuron activations, which facilitates comparison of activation patterns for multiple instances and instance subsets.
An interface that tightly integrates an overview of graph-structured complex models and local inspection of neuron activations.
A deployed system scaling to large datasets and models.
Case studies with Facebook engineers and data scientists that highlight how ActiVis helps them with their work.
ActiVis integrates several coordinated views to support exploration of complex deep neural network models at both the instance and subset level.
1. Our user Susan starts exploring the model architecture through its computation graph overview (at A).
Selecting a data node (in yellow) displays its neuron activations (at B).
2. The neuron activation matrix view shows the activations for instances and instance subsets;
the projected view displays the 2-D projection of instance activations.
3. From the instance selection panel (at C), she explores individual instances and their classification results.
4. Adding instances to the matrix view enables comparison of activation patterns across instances, subsets, and classes, revealing causes for misclassification.
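The two core ingredients of the workflow above can be sketched in a few lines. This is a minimal illustration, not the system's actual implementation: the data, shapes, and function names are hypothetical, and the deployed projected view may use a different projection technique than the plain PCA shown here.

```python
import numpy as np

# Hypothetical data: activations of 50 neurons in one layer for 200 instances.
rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 50))
labels = rng.integers(0, 3, size=200)  # 3 hypothetical classes

def subset_activation_rows(acts, labels):
    """Matrix-view sketch: one row per class subset, giving the mean
    activation of each neuron over that subset's instances."""
    return np.stack([acts[labels == c].mean(axis=0)
                     for c in np.unique(labels)])

def project_2d(acts):
    """Projected-view sketch: center the instance activations and keep
    the top two principal components (PCA via SVD). The deployed system
    may use a different 2-D projection."""
    centered = acts - acts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

matrix_rows = subset_activation_rows(activations, labels)  # shape (3, 50)
coords = project_2d(activations)                           # shape (200, 2)
```

Comparing rows of `matrix_rows` for a misclassified instance's true and predicted classes mirrors step 4: similar activation patterns across the wrong class's neurons suggest why the model confused the two.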
Minsuk Kahng, Pierre Y. Andrews, Aditya Kalro, and Duen Horng (Polo) Chau.
ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models.
IEEE Transactions on Visualization and Computer Graphics, Vol. 24, No. 1 (VAST 2017).