An interactive tool for understanding what neural networks consider similar and different.
As digital technology has evolved over the past few decades, the ways we interact with it have also evolved.
We have moved from typing on a keyboard and viewing a terminal console, to using a mouse and graphical user interface, to employing a variety of touchscreen gestures and voice commands.
However, despite the rapid progress in deep learning over the past few years, the ways we interact with neural networks to understand their behavior remain comparatively primitive.
In this work, we formalize interactive similarity overlays — an interactive visualization that highlights how a convolutional neural network (CNN) "sees" different image patches as similar or different.
Our method builds on prior work on interactive visualizations for understanding CNNs.
Interactive similarity overlays allow a user to hover over an image patch and visualize how similar (or different) other image patches are in a CNN representation (see right figure).
More precisely, let $s(\mathbf{z_1}, \mathbf{z_2}): \mathbb{R}^{D} \times \mathbb{R}^{D} \to \mathbb{R}$ be a similarity function
and let $f_l(\mathbf{x}): \mathbb{R}^{3 \times H \times W} \to \mathbb{R}^{D_l \times H_l \times W_l}$ be a function that takes in an input image and returns a 3D tensor (i.e., a CNN up to layer $l$).
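To make this concrete, here is a minimal sketch (in PyTorch, using cosine similarity as $s$) of how an overlay for a single image could be computed; the helper name `similarity_overlay` and its arguments are our own illustration rather than code from the released notebooks.

```python
import torch
import torch.nn.functional as F

def similarity_overlay(feats, query_yx):
    """Cosine-similarity map for one highlighted patch.

    feats:    activations f_l(x) for one image, shape (D_l, H_l, W_l)
    query_yx: (row, col) of the highlighted spatial location
    returns:  similarity scores, shape (H_l, W_l)
    """
    d, h, w = feats.shape
    z = feats.reshape(d, h * w)              # one D_l-dim vector per spatial location
    z = F.normalize(z, dim=0)                # unit norm, so dot products are cosines
    q = z[:, query_yx[0] * w + query_yx[1]]  # vector of the highlighted patch
    return (q @ z).reshape(h, w)             # cosine similarity to every location
```

In the multi-image setting, the highlighted patch's vector is compared against the normalized activations of every image, yielding one $H_l \times W_l$ overlay per image.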
With this technique, we can compare similarities of spatial locations across images, as shown in the splash figure above. Within this set of images, we notice that simple background scenes (e.g., those for the dog and cat, flowers, and bird images) are similarly activated despite being visually different. We also observe that a few features, such as eyes, are common across object classes (i.e., different species). Taken together, these observations suggest that CNNs are capable of learning broad and flexible semantic concepts.
This multi-image example also highlights the main benefit of interactive similarity overlays: they allow users to digest a large amount of complex data in an interpretable way. For $N$ images, the full set of similarities between all image patches scales as $\mathcal{O}(N^2 \times H_{l}^2 \times W_{l}^2)$. By displaying similarities interactively, we show only $\mathcal{O}(N \times H_l \times W_l)$ similarity scores at any given moment, making the data far more manageable.
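For instance, with hypothetical numbers of $N = 6$ images and a $14 \times 14$ activation grid, the full set of pairwise similarities contains $6^2 \times 14^4 \approx 1.4$ million scores, whereas the interactive view surfaces only $6 \times 14^2 = 1{,}176$ scores at any moment.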
In the rest of the article, we demonstrate the utility of our interactive similarity overlays in several case studies.
First, we consider how interactive similarity overlays help us explore the representations of different CNN layers.
Most prior works have explored layer representations from two perspectives:
1. by exploring how representations in a layer correspond with different kinds of semantic concepts
To compare representations of the same input image at different layers, we compute similarity scores within each layer and synchronize the spatial location being explained across layers (i.e., the highlighted image patch in yellow).
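As a rough sketch of this synchronization (the helper `sync_location` is an illustrative name of ours), the highlighted cell's relative position in one layer's grid is mapped onto another layer's grid before each layer's overlay is computed at its own synchronized location:

```python
def sync_location(yx, src_hw, dst_hw):
    """Map a grid cell from one layer's spatial grid to another's."""
    y, x = yx
    (sh, sw), (dh, dw) = src_hw, dst_hw
    # Take the cell centre in normalized [0, 1] coordinates, then rescale.
    return (int((y + 0.5) / sh * dh), int((x + 0.5) / sw * dw))

# Example: a highlighted cell (3, 5) on a later layer's 14x14 grid corresponds
# to cell (7, 11) on an earlier layer's 28x28 grid.
print(sync_location((3, 5), (14, 14), (28, 28)))  # -> (7, 11)
```

Each layer's similarity map can then be computed independently, e.g. with the `similarity_overlay` sketch above, at its own synchronized location.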
Using this synchronization trick, we first explore the representation of layers with different spatial resolutions.
Consistent with some prior work, we find that the earlier layers seem to capture lower-level features like edges while later layers tend to highlight higher-level, semantic features like objects.
We also notice that the representations of later layers appear smoother.
We also explore the representation of layers with the same spatial resolution.
We can also use our visualization to explore representational similarities across images of the same class.
One interesting application is to compare correspondences between natural images and generated ones.
To that end, we compute similarity scores across several images, including ones generated to be classified as the same object class.
To enhance our visualization and suggest a few corresponding features, we combine our similarity overlays with another visualization tool: matrix factorization.
Matrix factorization decomposes a set of instances into a small number of groups that best explain the variation in the set.
In the following example, we use matrix factorization to group activation vectors at different spatial locations (and in different images) into discrete groups.
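The grouping step could look roughly like the following sketch, which uses scikit-learn's NMF; the number of groups and initialization settings here are illustrative rather than the exact configuration behind our figures.

```python
import numpy as np
from sklearn.decomposition import NMF

def group_activations(feats_list, n_groups=4):
    """Assign every spatial location (across images) to one of n_groups factors.

    feats_list: per-image activations as NumPy arrays, each of shape (D_l, H_l, W_l),
                assumed non-negative (e.g. taken after a ReLU)
    returns:    one (H_l, W_l) integer map of group labels per image
    """
    shapes = [f.shape for f in feats_list]
    # Stack every spatial location from every image into one (locations, D_l) matrix.
    X = np.concatenate([f.reshape(f.shape[0], -1).T for f in feats_list], axis=0)
    W = NMF(n_components=n_groups, init="nndsvda", max_iter=500).fit_transform(X)
    labels = W.argmax(axis=1)                # each location -> its strongest factor
    maps, i = [], 0
    for (_, h, w) in shapes:
        maps.append(labels[i:i + h * w].reshape(h, w))
        i += h * w
    return maps
```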
In a final example, we demonstrate how our interactive similarity overlays help us explore how sensitive or invariant a representation is to geometric transformations (e.g., rotation, scale). By systematically transforming an image (e.g., by fixed-degree rotation) and visualizing similarity scores across transformed images, we can visually inspect the impact of a given transformation. We can also combine our overlays with an interactive chart visualization.
In the rotation example, we show a line chart that displays the similarity scores of the highlighted image patch as well as the corresponding patch in the other transformed images.
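The data behind such a chart could be computed along the following lines. This is a sketch with our own helper names: `model_to_layer` stands for any callable returning the inspected layer's activations, and the coordinate bookkeeping is our own approximation of patch correspondence under rotation.

```python
import math
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def rotation_sensitivity(model_to_layer, image, query_yx, angles=range(0, 360, 20)):
    """Similarity of one patch to its rotated counterparts, one score per angle.

    model_to_layer: callable mapping a (1, 3, H, W) image to (1, D_l, H_l, W_l) activations
    image:          (3, H, W) tensor
    query_yx:       (row, col) of the highlighted patch in the un-rotated grid
    """
    feats0 = model_to_layer(image.unsqueeze(0))[0]
    _, gh, gw = feats0.shape
    q = F.normalize(feats0[:, query_yx[0], query_yx[1]], dim=0)

    scores = []
    for angle in angles:
        rotated = TF.rotate(image.unsqueeze(0), float(angle))  # zero-padded by default
        feats = model_to_layer(rotated)[0]
        # Rotate the query cell about the grid centre to find its counterpart
        # (counter-clockwise, matching torchvision's rotation convention).
        t = math.radians(angle)
        cy, cx = (gh - 1) / 2, (gw - 1) / 2
        dy, dx = query_yx[0] - cy, query_yx[1] - cx
        ry = round(cy + dy * math.cos(t) - dx * math.sin(t))
        rx = round(cx + dx * math.cos(t) + dy * math.sin(t))
        ry, rx = min(max(ry, 0), gh - 1), min(max(rx, 0), gw - 1)
        v = F.normalize(feats[:, ry, rx], dim=0)
        scores.append(float(q @ v))
    return list(angles), scores
```

Plotting the returned scores against the rotation angles gives the kind of line chart described above.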
By leveraging both visualizations, we can quickly notice that more discriminative and oriented features (e.g., animal nose) are more sensitive to rotation than more texture-based, background features (e.g., grass).
We also discover rotational sensitivity at image borders; this is likely an artifact of zero padding at the boundaries.
This example demonstrates the utility of combining visualizations that operate at different levels of abstraction (e.g., a qualitative overlay of similarities across all image patches alongside a quantitative chart of a subset of relevant patches).
In the scale example, we observe that the spatial relationship of similarities between different features is preserved across scales (e.g., moving a mouse around in one image generates similar "movements" in other images). However, by plotting the similarity scores of the highlighted feature across scales, we see more clearly and quantitatively that similarity scores are somewhat sensitive to large scale changes. This seems to be true for both discriminative features and background ones, though texture-based, background features may be less sensitive (e.g., background grass vs. cat nose).
In summary, we introduce interactive similarity overlays, a simple interactive visualization that allows a user to investigate the representational similarity of various images. Thanks to its interactive nature, our visualization is both interpretable and faithful to the model being explained. We highlighted how it enables the exploration of a few CNN properties, as well as how it can be thoughtfully combined with other techniques to yield further insights.
With the recent movement towards supporting deep learning in JavaScript, we hope that interactive visualizations like ours will become easier to build, share, and embed in the interfaces we use to work with models.
Code: ruthcfong/interactive_overlay
Open-source implementation of our techniques on GitHub.
Notebooks:
Direct links to ipynb
notebooks corresponding to the respective sections of this paper.
Further Notebook:
Direct link to an ipynb
notebook demonstrating how to use our interactive similarity overlays in other applications using PyTorch.
We are deeply grateful to the following people for helpful conversations: Tom White, David Bau, Been Kim, Xu Ji, Sam Albanie, Mandela Patrick, Ludwig Schubert, Gabriel Goh, and Nick Cammarata.
We are also thankful to the discussion groups organized by Xu Ji within the VGG group and by Chris Olah within the Distill Slack workspace.
We are also particularly grateful to Tom White for his permission to use his "Perceptual Engines" generated images.
Lastly, this work was made possible by many open source tools, for which we are grateful.
In particular, all of our experiments were based on TensorFlow.
Research: Alex came up with the initial idea of cosine similarity overlays. Ruth developed its applications to interrogate different layers, geometric transformations, etc. Andrea and Chris suggested helpful research directions; in particular, Chris suggested combining similarity overlays with other visualization techniques.
Writing & Diagrams: The text was initially drafted by Ruth and refined by the other authors. The interactive diagrams were designed by all authors. The final notebooks were primarily created by Ruth, based on earlier code and notebooks by Alex and Chris.
General:
Unless otherwise stated, we use cosine similarity, $s(\mathbf{a}, \mathbf{b}) = \frac{\mathbf{a} \cdot \mathbf{b}}{\lVert \mathbf{a} \rVert \, \lVert \mathbf{b} \rVert}$, as the similarity function with which we compute overlays, and we visualize GoogLeNet's activations.
Non-negative matrix factorization (NNMF):
For each object class (e.g., blow dryer), 10 (out of 50) real images from the ImageNet dataset were used.
For attribution in academic contexts, please cite this work as
Fong et al., "Interactive Similarity Overlays", VISxAI 2021. Retrieved from https://www.ruthfong.com/projects/interactive_overlay/
BibTeX citation
@InProceedings{fong_interactive_2021,
  author    = {Fong, Ruth and Mordvintsev, Alexander and Vedaldi, Andrea and Olah, Chris},
  title     = {Interactive Similarity Overlays},
  booktitle = {VISxAI},
  year      = {2021},
  url       = {https://www.ruthfong.com/projects/interactive_overlay/},
}