# visualResNet
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/1.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/2.png)
Curious about where the ResNet is looking? This repo visualizes the class-specific saliency map, i.e. the discriminative location, of a trained ResNet. It is a Torch re-implementation of *Learning Deep Features for Discriminative Localization*, Bolei Zhou et al., without modifying the network or re-training. (A MatConvNet re-implementation can be found at this repo.) We look directly through the ResNet to see the world.
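For reference, the core computation from the Zhou et al. paper is a weighted sum of the last convolutional feature maps, using the final-layer weights of the global-average-pooling classifier for the chosen class. A minimal NumPy sketch of that idea follows; the function and variable names here are illustrative, not taken from this repo's code:

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Sketch of a class activation map (CAM).

    features:   (C, H, W) activations from the last conv layer
    fc_weights: (num_classes, C) weights of the GAP classifier
    Returns an (H, W) map, normalized to [0, 1] for visualization.
    """
    # Weighted sum over channels: M_c(x, y) = sum_k w_{c,k} * F_k(x, y)
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: 4 channels, 7x7 feature maps, 10 classes
rng = np.random.default_rng(0)
features = rng.random((4, 7, 7))
weights = rng.random((10, 4))
cam = class_activation_map(features, weights, class_idx=3)
print(cam.shape)  # (7, 7)
```

In practice the (H, W) map is upsampled to the input image size and overlaid as a heatmap, which is what the visualizations below show.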
## Get Started

- The code relies on Torch, which should be downloaded and built before running the experiments. Download the code:

  ```bash
  git clone https://github.com/zhanghang1989/visualResNet_torch.git
  ```

- Download the pre-trained models from the Facebook ResNet repo.

- Run the program:

  ```bash
  th visual.lua resnet-50.t7 data/1.JPEG data/2.JPEG
  ```

- Visualize all the images in a folder:

  ```bash
  th visual.lua resnet-50.t7 data/*
  ```
## Examples
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/3.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/4.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/5.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/6.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/7.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/8.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/9.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/10.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/11.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/12.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/13.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/14.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/15.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/16.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/17.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/18.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/19.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/20.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/21.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/22.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/23.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/24.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/25.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/26.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/27.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/28.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/29.png)
![](https://github.com/zhanghang1989/visualResNet/raw/master/images/30.png)
Written by Hang Zhang on July 4, 2016