

QCCE: Quality Constrained Co-saliency Estimation for Common Object Detection


Koteswar Rao Jerripothula #∗§1, Jianfei Cai ∗§2, Junsong Yuan †§3
# Interdisciplinary Graduate School, ∗ School of Computer Engineering, † School of Electrical and Electronic Engineering
§ Nanyang Technological University, Singapore. Email: 1 KOTESWAR001@e.ntu.edu.sg, {2 ASJFCAI, 3 JSYUAN}@ntu.edu.sg

Abstract—Despite recent advances in the joint processing of images, it may sometimes be less effective than single-image processing for object discovery problems. In this paper, aiming at common object detection, we attempt to address this problem by proposing QCCE, a novel Quality Constrained Co-saliency Estimation method. The approach is to iteratively update the saliency maps through co-saliency estimation, guided by quality scores that indicate the degree of separation between the foreground and background likelihoods (the easier the separation, the higher the quality of the saliency map). In this way, joint processing is automatically constrained by the quality of the saliency maps. Moreover, the proposed method can be applied to both unsupervised and supervised scenarios, unlike other methods which are designed for one scenario only. Experimental results demonstrate the superior performance of the proposed method compared to state-of-the-art methods.

Index Terms—quality, co-saliency, co-localization, bounding-box, propagation, object detection.

I. INTRODUCTION

Object detection has many applications since it facilitates efficient utilization of computational resources exclusively on the region of interest. Saliency is a common cue used in object detection, but it has achieved only limited success when images have cluttered backgrounds. Recent progress in the joint processing of images, such as co-segmentation [1][2][3], co-localization [4] and knowledge transfer [5][6][7], has been quite effective in this regard because of its ability to exploit commonness, which cannot be done in single-image processing.

Despite this progress, some major problems remain for the existing joint processing algorithms. 1) As shown in [1][2], joint processing of images might not perform better than single-image processing on some datasets. This raises the question: to process jointly or not? 2) Most existing high-performance joint processing algorithms are complicated due to the way they co-label pixels [2] or co-select boxes [4] in a set of images, and they also require parameter tuning for effective co-segmentation or co-localization, which becomes much more difficult as the dataset becomes increasingly diverse.

There are two types of common object detection: 1) supervised [6][7], where the task is to populate the entire dataset with the help of some available bounding boxes; and 2) unsupervised [4], where the task is to populate the entire dataset without any partial labels. In this paper, we handle both types in one framework. Our approach is to iteratively update saliency maps using co-saliency estimation while measuring their quality. For a high-quality saliency map, its foreground and background should be easily separable. Therefore, simple images with a clear background and foreground separation may not need help from joint processing. For complex images with cluttered backgrounds, by iteratively updating the saliency maps through co-saliency estimation, we are able to gradually improve the saliency maps even though they were not of high quality to begin with. Images with high-quality saliency maps can play the leading role in the co-saliency estimation of other images. Moreover, some images may already have ground-truth bounding boxes. In such cases, the bounding boxes can replace the respective saliency maps as high-quality saliency maps to help generate better co-saliency maps. Since saliency maps are updated iteratively through co-saliency estimation constrained by their quality scores, we call the method QCCE: Quality Constrained Co-saliency Estimation. The advantage of such an approach is twofold: (1) it can work effectively for large image datasets and can benefit from high-quality saliency maps; (2) it can automatically choose between the original saliency map and the jointly processed saliency map.

Assuming a Gaussian distribution for the foreground and background likelihoods in a saliency map, we make use of the overlap between the two distributions to calculate quality scores. We employ co-saliency estimation [3] along with our quality scores to update the saliency maps. The foreground likelihood of the high-quality saliency map is used as a rough common object segment to eventually define bounding boxes.

Prior to our work, both the co-localization problem [4] and the bounding box propagation problem [6][7] have been studied on challenging datasets such as ImageNet. However, [4] suffers from low accuracy, and [6][7] essentially depend upon bounding box availability. In contrast, the proposed method can not only address both problems but also outperform the existing works.

II. PROPOSED METHOD

Our goal is to define bounding boxes around the common objects in a set of similar images. High-quality saliency maps are obtained by iteratively updating the saliency maps via co-saliency estimation while measuring their quality. These high-quality saliency maps are then used to eventually define the bounding boxes.



Fig. 1. Quality of a saliency map is measured using the overlap of the estimated distributions of the two classes: foreground and background.

Fig. 2. Sample images with their saliency maps and quality scores (0.999, 0.774, 0.623, 0.541, 0.504, 0.421, 0.416, 0.365). Saliency maps with low quality scores fail to highlight the starfish.

In this section, we provide details of the quality scores, the co-saliency estimation, and the bounding box generation.

A. Notation

Let I = {I_1, I_2, ..., I_m} be the image set containing m images, and let D_i be the pixel domain of I_i. Let the set of saliency maps be denoted as S = {S_1, S_2, ..., S_m} and the set of their corresponding quality scores as Q = {Q_1, Q_2, ..., Q_m}. For images that already have bounding boxes, the saliency maps are replaced by the respective bounding boxes and their quality scores are set to 1.

B. Quality Score

By quality, we mean how easily the two likelihoods (foreground and background) are separable. These likelihoods are formed by thresholding the saliency map using the Otsu method. Based on this classification, let μ_i^{k1}, μ_i^{k0}, σ_i^{k1} and σ_i^{k0} be the foreground mean, background mean, foreground standard deviation and background standard deviation of saliency map S_i at the k-th iteration (denoted S_i^k), respectively.

Assuming a Gaussian distribution for both likelihoods, we denote the foreground and background distributions as F_i^{k1}(z) and F_i^{k0}(z), respectively, where z is the saliency value ranging between 0 and 1.

Clearly, the less the two distributions overlap with each other, the better the saliency map is, i.e., the more easily the foreground and background can be separated. To calculate the overlap, we need the intersection point of the two distributions (see Fig. 1). It can be obtained by equating the two density functions, i.e., F_i^{k1}(z) = F_i^{k0}(z), which leads to

z^2\left(\frac{1}{(\sigma_i^{k0})^2}-\frac{1}{(\sigma_i^{k1})^2}\right)-2z\left(\frac{\mu_i^{k0}}{(\sigma_i^{k0})^2}-\frac{\mu_i^{k1}}{(\sigma_i^{k1})^2}\right)+\frac{(\mu_i^{k0})^2}{(\sigma_i^{k0})^2}-\frac{(\mu_i^{k1})^2}{(\sigma_i^{k1})^2}+2\log(\sigma_i^{k0})-2\log(\sigma_i^{k1})=0 \quad (1)

Let the solution of the above quadratic equation be z^*. The overlap can then be computed as

O_i^k = \int_{z=0}^{z=z^*} F_i^{k1}(z)\,dz + \int_{z=z^*}^{z=1} F_i^{k0}(z)\,dz \quad (2)

where O_i^k represents the overlap of the two classes in S_i at the k-th iteration.

Finally, the quality score Q_i for the k-th iteration (denoted Q_i^k) is calculated as

Q_i^k = \frac{1}{1+\log_{10}(1+O_i^k)} \quad (3)

As we keep updating the saliency maps through interaction with other images, we want to retain high-quality saliency maps, i.e., those for which the maximum quality score is obtained. In Fig. 2, we show a set of images with their saliency maps and quality scores. It can be seen that the saliency maps become increasingly unfit to highlight the starfish as the quality score decreases from top-left to bottom-right.
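The quality score can be computed directly from a saliency map. Below is a minimal Python/NumPy sketch (our own illustration, not the paper's code), assuming scikit-image for Otsu thresholding and SciPy for Gaussian CDFs: it fits a Gaussian to each class, solves the quadratic in (1) for the intersection point, and approximates the overlap (2) with Gaussian tail masses.

```python
import numpy as np
from scipy.stats import norm
from skimage.filters import threshold_otsu

def quality_score(saliency):
    """Quality score Q = 1 / (1 + log10(1 + O)) for a saliency map with values in [0, 1]."""
    t = threshold_otsu(saliency)
    fg, bg = saliency[saliency >= t], saliency[saliency < t]
    mu1, sd1 = fg.mean(), fg.std() + 1e-6    # foreground Gaussian parameters
    mu0, sd0 = bg.mean(), bg.std() + 1e-6    # background Gaussian parameters

    # Intersection point of the two Gaussian densities, Eq. (1).
    a = 1.0 / sd0**2 - 1.0 / sd1**2
    b = -2.0 * (mu0 / sd0**2 - mu1 / sd1**2)
    c = mu0**2 / sd0**2 - mu1**2 / sd1**2 + 2 * np.log(sd0) - 2 * np.log(sd1)
    roots = np.roots([a, b, c]) if abs(a) > 1e-12 else np.array([-c / b])
    # Keep a real root between the two means (the foreground mean is the larger one).
    z_star = next((r.real for r in roots
                   if abs(r.imag) < 1e-9 and mu0 <= r.real <= mu1),
                  (mu0 + mu1) / 2)

    # Overlap O, Eq. (2): foreground mass below z* plus background mass above z*
    # (Gaussian tails, ignoring the truncation of z to [0, 1]).
    O = norm.cdf(z_star, mu1, sd1) + (1.0 - norm.cdf(z_star, mu0, sd0))
    return 1.0 / (1.0 + np.log10(1.0 + O))   # Eq. (3)
```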
C. Co-saliency Estimation

We update the saliency maps after each iteration through co-saliency estimation, which can boost the saliency of the common object and suppress background saliency. In order to avoid large variation across images while developing co-saliency maps, in each iteration k we cluster the images into sub-groups by k-means with the weighted GIST feature [2], where the saliency maps are used as weights. Let G_v be the set of indexes (i of I_i) of the images in the v-th cluster.

We adopt the idea of co-saliency estimation from [3], where the geometric mean of the saliency map of one image and the warped saliency maps of its neighbor images is taken as the co-saliency map. However, we make a slight modification to suit our model: we use a weighted mean instead of the geometric mean, where the weights are our quality scores.

Saliency Enhancement via Warping: Saliency enhancement takes place at the pixel level amongst corresponding pixels. Specifically, following [2], masked dense SIFT correspondence [8] is used to find corresponding pixels in each image pair. The masks here are the label maps obtained by thresholding the saliency maps. This ensures that pixels with high foreground likelihood play the dominant role in guiding the SIFT flow. The energy function for the dense SIFT flow can now be represented as

E(w_{ij}^k; S_i^k, S_j^k) = \sum_{p \in D_i} \phi\big(S_i^k(p)\big) \Big[ \phi\big(S_j^k(p + w_{ij}^k(p))\big)\,\big\|R_i(p) - R_j(p + w_{ij}^k(p))\big\|_1 + \Big(1 - \phi\big(S_j^k(p + w_{ij}^k(p))\big)\Big) B_0 + \sum_{q \in N_p^i} \alpha \big\|w_{ij}^k(p) - w_{ij}^k(q)\big\|_2 \Big] \quad (4)

where Ri is dense SIFT feature descriptor for image Ii . The


likelihood function φ for saliency map gives class labels:1 (for
foreground likelihood) or 0 (for background likelihood). It can
be seen how feature difference is masked by the likelihoods of
involved pixels. B0 is a large constant which ensures large cost
if the potential corresponding pixel in another image happens
to have background likelihood. Weighted by another constant
ImageNet
α and likelihood, neighbourhood Npi of pixel p is considered
for smooth flow field wij from image Ii to Ij .
Updating Saliency Maps: Given a pair of images I_i and I_j from a subgroup G_v, we form the warped saliency map U_{ji} by U_{ji}(p) = S_j^k(p'), where (p, p') is a matched pair in the SIFT-flow alignment with the relationship p' = p + w_{ij}(p). Since there are quite a few images in subgroup G_v, for image I_i we update its saliency map by computing the weighted mean, where the weights are the respective quality scores, i.e.,

S_i^{k+1}(p) = \frac{S_i^k(p)\,Q_i^k + \sum_{j \in G_v,\, j \neq i} U_{ji}^k(p)\,Q_j^k}{\sum_{j \in G_v} Q_j^k} \quad (5)

Such weighting ensures that high-quality saliency maps play the leading role in the development of new saliency maps, so that the new saliency maps evolve towards better ones. Moreover, we also take advantage of any prior bounding boxes available, which are of high quality right from the beginning.
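A minimal sketch of the warping step and the quality-weighted update in (5), assuming the flow field w_ij has already been computed by a dense-correspondence method; the function names, the (dx, dy) flow layout, and the nearest-neighbour warping are our own simplifications.

```python
import numpy as np

def warp_saliency(S_j, flow_ij):
    """U_ji(p) = S_j(p + w_ij(p)) via nearest-neighbour lookup.

    flow_ij: array of shape (H, W, 2) with per-pixel (dx, dy) displacements for I_i.
    """
    H, W = flow_ij.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    xt = np.clip(xs + np.rint(flow_ij[..., 0]).astype(int), 0, S_j.shape[1] - 1)
    yt = np.clip(ys + np.rint(flow_ij[..., 1]).astype(int), 0, S_j.shape[0] - 1)
    return S_j[yt, xt]

def update_saliency(S_i, Q_i, warped_maps, Q_neighbors):
    """Quality-weighted mean of Eq. (5) over the subgroup G_v.

    warped_maps: warped neighbour maps U_ji; Q_neighbors: their quality scores Q_j.
    """
    num = S_i * Q_i + sum(U * Q for U, Q in zip(warped_maps, Q_neighbors))
    den = Q_i + sum(Q_neighbors)   # denominator sums over all j in G_v, including i
    return num / den
```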
D. Convergence and Bounding Box Generation

Convergence: If an image reaches its high-quality saliency map, the updating of its saliency map should stop. Thus, the saliency maps of images with ground-truth bounding boxes, which serve as high-quality saliency maps, never get updated, whereas the saliency maps of other images may get updated depending upon their quality scores. If the quality score decreases in the next iteration, or the improvement is very small (say, less than 0.005), we do not update the saliency map of that image and proceed to bounding box generation.
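Putting the pieces together, one QCCE iteration could be organized as in the sketch below, reusing the quality_score, warp_saliency and update_saliency helpers sketched earlier; the data structures (dicts keyed by image index, a flows[(i, j)] table) and the default eps value mirror the stopping rule above but are our own assumptions.

```python
def qcce_iteration(saliency_maps, quality_scores, groups, flows, eps=0.005):
    """One iteration: update each map by Eq. (5), re-score it, and apply the stopping rule.

    groups[i]: indices of the subgroup G_v containing image i (including i itself).
    flows[(i, j)]: precomputed flow field w_ij from image I_i to I_j.
    """
    new_maps, new_scores, converged = {}, {}, {}
    for i, S_i in saliency_maps.items():
        if quality_scores[i] >= 1.0:     # images with ground-truth boxes are never updated
            new_maps[i], new_scores[i], converged[i] = S_i, 1.0, True
            continue
        neighbors = [j for j in groups[i] if j != i]
        warped = [warp_saliency(saliency_maps[j], flows[(i, j)]) for j in neighbors]
        S_new = update_saliency(S_i, quality_scores[i], warped,
                                [quality_scores[j] for j in neighbors])
        Q_new = quality_score(S_new)
        if Q_new - quality_scores[i] < eps:   # quality dropped or improved only marginally
            new_maps[i], new_scores[i], converged[i] = S_i, quality_scores[i], True
        else:
            new_maps[i], new_scores[i], converged[i] = S_new, Q_new, False
    return new_maps, new_scores, converged
```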
Bounding Box Generation: Since a rough segmentation itself can help in developing a bounding box, we treat the foreground and background likelihoods themselves as the common foreground segment and the background segment, respectively. We obtain a number of potentially sparsely located groups of white pixels as objects, i.e., connected components (e.g., using the bwconncomp() function of MATLAB), each of which we denote as c. In order to avoid noisy, insignificant objects, we develop an object saliency density metric for each of these objects, assuming that real objects have high foreground saliency density and low background saliency density (here the background is the rest of the pixel domain). The saliency density metric is therefore defined as

V(c) = \frac{\sum_{p \in c} S_i(p)}{|c|} - \frac{\sum_{p \in \bar{c}} S_i(p)}{|\bar{c}|} \quad (6)

where c̄ is the set of the remaining pixels and |c| is the number of pixels in object c. Objects with a high saliency density metric are likely to be real objects. For an image, only those objects that are at or above the 50th percentile according to this metric V, or the only object in the image, are considered for developing bounding boxes. The bounding box is then drawn using the topmost, bottommost, leftmost and rightmost boundary pixels of each qualified object.
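The bounding box generation step can be sketched as follows, with scipy.ndimage.label standing in for MATLAB's bwconncomp() and Otsu thresholding used to obtain the foreground segment; the helper name is our own.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def generate_boxes(saliency):
    """Bounding boxes (x_min, y_min, x_max, y_max) of the qualified objects."""
    fg = saliency >= threshold_otsu(saliency)   # rough common foreground segment
    labels, n = ndimage.label(fg)               # connected components (objects c)
    if n == 0:
        return []

    # Saliency density metric V(c), Eq. (6): mean saliency inside the object minus
    # mean saliency over the rest of the pixel domain.
    V = np.array([saliency[labels == c].mean() - saliency[labels != c].mean()
                  for c in range(1, n + 1)])

    # Keep the only object, or the objects at/above the 50th percentile of V.
    keep = np.ones(n, dtype=bool) if n == 1 else V >= np.percentile(V, 50)

    boxes = []
    for c in np.flatnonzero(keep) + 1:
        ys, xs = np.nonzero(labels == c)
        boxes.append((int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())))
    return boxes
```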
III. EXPERIMENTAL RESULTS

As per our claim that the proposed method works better in both unsupervised and supervised scenarios, we use the same large-scale experimental setup as [4] and [7] for the co-localization and bounding box propagation problems, respectively. Following [4], we use the CorLoc evaluation metric, i.e., the percentage of images that satisfy the IOU (intersection over union) condition area(BB_gt ∩ BB_pr) / area(BB_gt ∪ BB_pr) > 0.5, where BB_gt and BB_pr are the ground-truth and proposed bounding boxes, respectively. To distinguish between supervised and unsupervised results, the suffixes (S) and (U) are used, respectively. We use the saliency maps of [9] and [10] in the co-localization and bounding box propagation setups, respectively, and use the bounding boxes generated from these initial saliency maps as baselines.
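For reference, CorLoc can be evaluated as in the following sketch (boxes given as (x_min, y_min, x_max, y_max) tuples; the helper names are ours).

```python
def iou(bb_gt, bb_pr):
    """Intersection over union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(bb_gt[0], bb_pr[0]), max(bb_gt[1], bb_pr[1])
    ix2, iy2 = min(bb_gt[2], bb_pr[2]), min(bb_gt[3], bb_pr[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(bb_gt) + area(bb_pr) - inter
    return inter / union if union > 0 else 0.0

def corloc(gt_boxes, pred_boxes):
    """Percentage of images whose predicted box overlaps the ground truth with IOU > 0.5."""
    hits = sum(iou(g, p) > 0.5 for g, p in zip(gt_boxes, pred_boxes))
    return 100.0 * hits / len(gt_boxes)
```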
TABLE I
CorLoc comparison on the ImageNet and Internet images datasets in the co-localization setup

Method                    ImageNet    Internet
Baseline using [9] (U)    52.9        65.6
[2] (U)                   -           75.2
[4] (U)                   53.2        76.6
Proposed Method (U)       64.3        82.8

TABLE II
CorLoc comparison on ImageNet in the bounding box propagation (BBP) setup

Method                     CorLoc
Baseline using [10] (U)    64.9
[6] (S)                    58.5
[7] (S)                    66.5
[7]* (S)                   68.3
Proposed Method (S)        70.9
Proposed Method (U)        68.6

Fig. 3. Sample results from the ImageNet and Internet datasets.

Fig. 4. Sample visual comparison between ground truth (green) and our results (red).
Co-localization Setup: As per the setup in [4], there are 1 million images for which bounding boxes are available in ImageNet, spread over 3627 classes. In addition to ImageNet, for the Internet [2] dataset, which is actually a segmentation dataset, a tight bounding box is developed around each foreground segment and used as ground truth. We compare our results on both datasets in Table I. It can be seen that we obtain superior results, with margins of 11% and 6% improvement on these datasets, respectively. Moreover, our results are obtained without any parameter tuning, whereas both [4] and [2] have tuned their parameters on the Internet dataset.

Bounding Box Propagation Setup: We would like to acknowledge [7] for providing the list of test and training images upon request. In this setup, 32k images are considered for testing purposes, and the saliency maps of the training images of the classes to which these test images belong are replaced with bounding boxes. The problem we address is similar to the "Self" case in [7], where only images within the same class are used for training. In Table II, we compare our results on these 32k test images with two previous attempts [6][7] to populate ImageNet with bounding boxes in such a supervised manner. [7]* refers to their results using state-of-the-art features and object proposals. Our results are 4% better than the state-of-the-art [7]*(S). Considering that the proposed method does not essentially need bounding boxes, we also report our unsupervised results (Proposed Method(U)) on these 32k test images, where we do not use any initial bounding boxes and still obtain results comparable to [7]*(S).

Fig. 3 shows sample results obtained on the ImageNet and Internet datasets. In addition, we show our results along with the ground truth for visual comparison in Fig. 4. It can be seen that the proposed method is able to accurately provide bounding boxes for both simple and complex images because we are able to effectively constrain the joint processing of images.

IV. CONCLUSION AND FUTURE WORK

We have proposed the QCCE method for common object detection. In the process, we try to address the critical issue of whether to process jointly or not with the help of a constraint on the quality of saliency maps. QCCE can act in both supervised and unsupervised ways and obtains superior results in both scenarios. Moreover, it can be well extended to the co-segmentation problem. Future work includes incorporating more considerations into the development of quality scores.

ACKNOWLEDGMENT

This research was carried out at the Rapid-Rich Object Search (ROSE) Lab at the Nanyang Technological University, Singapore. The ROSE Lab is supported by the National Research Foundation, Singapore, under its Interactive Digital Media (IDM) Strategic Research Programme.

REFERENCES

[1] S. Vicente, C. Rother, and V. Kolmogorov, "Object cosegmentation," in Computer Vision and Pattern Recognition (CVPR). IEEE, 2011, pp. 2217-2224.
[2] M. Rubinstein, A. Joulin, J. Kopf, and C. Liu, "Unsupervised joint object discovery and segmentation in internet images," in Computer Vision and Pattern Recognition (CVPR). IEEE, 2013, pp. 1939-1946.
[3] K. R. Jerripothula, J. Cai, F. Meng, and J. Yuan, "Automatic image co-segmentation using geometric mean saliency," in International Conference on Image Processing (ICIP), Oct. 2014, pp. 3282-3286.
[4] K. Tang, A. Joulin, L.-J. Li, and L. Fei-Fei, "Co-localization in real-world images," in Computer Vision and Pattern Recognition (CVPR). IEEE, 2014, pp. 1464-1471.
[5] M. Guillaumin, D. Küttel, and V. Ferrari, "Imagenet auto-annotation with segmentation propagation," International Journal of Computer Vision, vol. 110, no. 3, pp. 328-348, 2014.
[6] M. Guillaumin and V. Ferrari, "Large-scale knowledge transfer for object localization in imagenet," in Computer Vision and Pattern Recognition (CVPR). IEEE, 2012, pp. 3202-3209.
[7] A. Vezhnevets and V. Ferrari, "Associative embeddings for large-scale knowledge transfer with self-assessment," in Computer Vision and Pattern Recognition (CVPR). IEEE, 2014, pp. 1987-1994.
[8] C. Liu, J. Yuen, and A. Torralba, "SIFT flow: Dense correspondence across scenes and its applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 978-994, 2011.
[9] M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu, "Global contrast based salient region detection," in Computer Vision and Pattern Recognition (CVPR). IEEE, 2011, pp. 409-416.
[10] H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li, "Salient object detection: A discriminative regional feature integration approach," in Computer Vision and Pattern Recognition (CVPR). IEEE, 2013, pp. 2083-2090.
