Note: this post was originally written in January 2016. For an up-to-date alternative to this example of how to visualize convnet filters, check out chapter 9 of my book "Deep Learning with Python (2nd edition)".

An exploration of convnet filters with Keras

In this post, we take a look at what deep convolutional neural networks (convnets) really learn, and how they understand the images we feed them. We will use Keras to visualize inputs that maximize the activation of the filters in different layers of the VGG16 architecture, trained on ImageNet. All of the code used in this post can be found on Github.

VGG16 (also called OxfordNet) is a convolutional neural network architecture named after the Visual Geometry Group from Oxford, who developed it. It was used to win the ILSVRC (ImageNet) competition in 2014. To this day it is still considered to be an excellent vision model, although it has been somewhat outperformed by more recent advances such as Inception and ResNet.

First of all, let's start by defining the VGG16 model in Keras. Given the model, we pick a layer and a filter, and build a loss function over the input image:

```python
from keras import backend as K

# layer_dict maps layer names to the layer objects of the VGG16 model
layer_dict = dict([(layer.name, layer) for layer in model.layers])

layer_name = 'block5_conv3'
filter_index = 0  # can be any integer from 0 to 511, as there are 512 filters in that layer

# build a loss function that maximizes the activation
# of the nth filter of the layer considered
layer_output = layer_dict[layer_name].output
loss = K.mean(layer_output[:, :, :, filter_index])
```
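The core of the technique is gradient ascent: starting from a random image, we repeatedly nudge the input in the direction that increases the filter's mean activation. That loop can be sketched without Keras at all. Below is a minimal NumPy toy (all names, values, and the L2 penalty are illustrative assumptions, not from the post) where the "activation" is a linear response `mean(x * w)`; under gradient ascent with an L2 penalty, the input converges toward the filter pattern `w` itself, which is exactly the intuition behind filter visualization.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8))        # toy "filter" whose activation we maximize
x = rng.standard_normal((8, 8)) * 0.1  # start from a random image, as in the post

lam = 1.0   # L2 penalty strength (keeps the ascent from diverging)
step = 4.0  # gradient-ascent step size

for _ in range(200):
    # loss = mean(x * w) - 0.5 * lam * mean(x ** 2)
    grad = (w - lam * x) / x.size   # analytic gradient of the loss w.r.t. x
    x = x + step * grad             # gradient *ascent*: move x uphill

# the ascended input now matches the filter pattern (up to 1/lam scaling)
print(np.allclose(x, w / lam, atol=1e-3))
```

In the real setting, `w` is replaced by a deep convnet filter and the analytic gradient by one computed through the network, but the update loop has the same shape.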