Saliency Detection by Multi-Context Deep Learning

Rui Zhao^{1,2}, Wanli Ouyang^{2}, Hongsheng Li^{2,3}, Xiaogang Wang^{1,2}
1 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
2 Department of Electronic Engineering, The Chinese University of Hong Kong
3 School of Electronic Science, University of Electronic Science and Technology of China
{rzhao, wlouyang, hsli, xgwang}@ee.cuhk.edu.hk
This supplementary file presents qualitative saliency detection results on the ASD [1], SED1 [2], SED2 [2], ECSSD [13], and PASCAL-S [7] datasets. The approaches compared are GBVS [4], SF [10], GC [3], CEOS [8], PCAS [9], GBMR [14], HS [13], DRFI [5], and our approach. Saliency detection results for five images from each dataset are shown in Figures 1-5. Figure 6(a) shows a qualitative comparison between our single-context (SC) model and multi-context (MC) model, and Figure 6(b) evaluates our single-context model with contemporary deep architectures, including AlexNet [6], Clarifai [15], OverFeat [11], and GoogLeNet [12].
Figure 1. Qualitative comparison on the ASD dataset. (Columns: Image, GT, Ours, DRFI, HS, GBMR, PCAS, CEOS, GC, SF, GBVS.)
Figure 2. Qualitative comparison on the SED1 dataset. (Columns: Image, GT, Ours, HS, GBMR, PCAS, CEOS, GC, SF, DRFI, GBVS.)
Figure 3. Qualitative comparison on the SED2 dataset. (Columns: Image, GT, Ours, HS, GBMR, PCAS, CEOS, GC, SF, DRFI, GBVS.)
Figure 4. Qualitative comparison on the ECSSD dataset. (Columns: Image, GT, Ours, HS, GBMR, PCAS, CEOS, GC, SF, DRFI, GBVS.)
Figure 5. Qualitative comparison on the PASCAL-S dataset. (Columns: Image, GT, Ours, HS, GBMR, PCAS, CEOS, GC, SF, DRFI, GBVS.)
[Figure 6(a): columns Image, GT, Ours (SC), Ours (MC); example images from the ASD, SED1, SED2, ECSSD, and PASCAL-S datasets.]
[Figure 6(b): columns Image, GT, AlexNet, Clarifai, OverFeat, GoogLeNet; example images from the ASD, SED1, SED2, ECSSD, and PASCAL-S datasets.]
Figure 6. (a) Qualitative comparison between our single-context (SC) model and multi-context (MC) model. (b) Qualitative evaluation of our single-context model with contemporary deep architectures, including AlexNet [6], Clarifai [15], OverFeat [11], and GoogLeNet [12].
References
[1] R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk. Frequency-tuned salient region detection. In CVPR, 2009.
[2] S. Alpert, M. Galun, R. Basri, and A. Brandt. Image segmentation by probabilistic bottom-up aggregation and cue integration. In CVPR, 2007.
[3] M.-M. Cheng, J. Warrell, W.-Y. Lin, S. Zheng, V. Vineet, and N. Crook. Efficient salient region detection with soft image abstraction. In ICCV, 2013.
[4] J. Harel, C. Koch, and P. Perona. Graph-based visual saliency. In NIPS, 2006.
[5] H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li. Salient object detection: A discriminative regional feature integration approach. In CVPR, 2013.
[6] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[7] Y. Li, X. Hou, C. Koch, J. Rehg, and A. Yuille. The secrets of salient object segmentation. In CVPR, 2014.
[8] R. Mairon and O. Ben-Shahar. A closer look at context: From coxels to the contextual emergence of object saliency. In ECCV, 2014.
[9] R. Margolin, A. Tal, and L. Zelnik-Manor. What makes a patch distinct? In CVPR, 2013.
[10] F. Perazzi, P. Krahenbuhl, Y. Pritch, and A. Hornung. Saliency filters: Contrast based filtering for salient region detection. In CVPR, 2012.
[11] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013.
[12] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
[13] Q. Yan, L. Xu, J. Shi, and J. Jia. Hierarchical saliency detection. In CVPR, 2013.
[14] C. Yang, L. Zhang, H. Lu, X. Ruan, and M.-H. Yang. Saliency detection via graph-based manifold ranking. In CVPR, 2013.
[15] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.