2021.03.13 19:10
simple example of how to visualize what a convnet does
Recently, I was looking for a good example of how to visualize what a neural network has learned. One way to do this is to find an input for which the output of a given filter in a given layer is maximal; that input gives some idea of what the filter is looking for. The recipe: start with a random input, feed it through the network, compute the gradient of the filter's output with respect to the input, and add the gradient to the input. In other words, follow the gradient, because it increases the filter's output, and repeat until that output is large.

So the basics are simple, but when I tried to write this code, something always didn't work. And when I looked for examples, they either didn't work for me because they assumed (if I'm not mistaken, because I'm not very familiar with Keras) that eager execution is turned off, while in the new Keras it's on by default (I know I can turn it off, but I didn't want to), or they were so complicated that it was hard to figure out what was going on and how it related to what I was trying to do.

So finally, I took an example from the official documentation (https://keras.io/examples/vision/visualizing_what_convnets_learn/) and simplified it until I got a nice, simple, working example. Here it is:
# based on https://keras.io/examples/vision/visualizing_what_convnets_learn/
from tensorflow import keras
import tensorflow as tf
import numpy as np
from IPython.display import Image, display
img_width = 180
img_height = 180
# Our target layer: we will visualize the filters from this layer.
# See `model.summary()` for a list of layer names, if you want to change this.
layer_name = "conv3_block4_out"
filter_index = 1
learning_rate = 10.0
# Build a ResNet50V2 model loaded with pre-trained ImageNet weights
model = keras.applications.ResNet50V2(weights="imagenet", include_top=False)
# Set up a model that returns the activation values for our target layer
layer = model.get_layer(name=layer_name)
feature_extractor = keras.Model(inputs=model.inputs, outputs=layer.output)
# Start from a random image; note the NHWC shape (batch, height, width, channels).
img = tf.random.uniform((1, img_height, img_width, 3))
# Center the noise around 0 with a small amplitude.
img = (img - 0.5) * 0.25
for i in range(50):
    with tf.GradientTape() as tape:
        # img is a plain tensor (not a Variable), so it has to be watched explicitly.
        tape.watch(img)
        activation = feature_extractor(img)
        # We avoid border artifacts by only involving non-border pixels in the loss.
        filter_activation = activation[:, 2:-2, 2:-2, filter_index]
        loss = tf.reduce_mean(filter_activation)
    grads = tape.gradient(loss, img)
    grads = tf.math.l2_normalize(grads)
    # Gradient ascent: nudge the input in the direction that increases the activation.
    img += learning_rate * grads
# Convert the result into a displayable image:
img_ = img[0].numpy()
# normalize to zero mean and a small standard deviation
img_ -= img_.mean()
img_ /= img_.std() + 1e-5
img_ *= 0.15
# crop the borders, which tend to contain artifacts
img_ = img_[25:-25, 25:-25, :]
# shift to [0, 1] around a gray background, then scale to 8-bit
img_ += 0.5
img_ = np.clip(img_, 0, 1)
img_ *= 255
img_ = np.clip(img_, 0, 255).astype("uint8")
keras.preprocessing.image.save_img("0.png", img_)
display(Image("0.png"))
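By the way, if you want to visualize a different layer, you first need its name. A quick way to print the candidates (this is an aside, not part of the script above; it reuses the same `model` object, and relies on the fact that in this ResNet50V2 the block-output layers happen to end in "_out"):

# Print the names of the block-output layers, e.g. "conv3_block4_out".
for layer in model.layers:
    if layer.name.endswith("_out"):
        print(layer.name)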
(...)
It works for me.
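And if you want to look at more than one filter, you can wrap the gradient-ascent loop in a function and run it for several filter indices; the original keras.io example does essentially this and stitches the results into a grid. A minimal sketch, assuming you've wrapped everything from the random initialization down to the final uint8 array into a hypothetical helper make_filter_image(filter_index):

# Sketch: save and display images for the first 8 filters of the layer.
# make_filter_image() is assumed to be the code above wrapped in a function
# that returns the final uint8 array.
for filter_index in range(8):
    img_ = make_filter_image(filter_index)
    filename = f"{filter_index}.png"
    keras.preprocessing.image.save_img(filename, img_)
    display(Image(filename))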