
Visual Search at Pinterest

Yushi Jing¹, David Liu¹, Dmitry Kislyuk¹, Andrew Zhai¹, Jiajing Xu¹, Jeff Donahue¹,², Sarah Tavel¹
¹ Visual Discovery, Pinterest
² University of California, Berkeley
{jing, dliu, dkislyuk, andrew, jiajing, jdonahue, sarah}@pinterest.com

ABSTRACT
We demonstrate that, with the availability of distributed computation platforms such as Amazon Web Services and open-source tools, it is possible for a small engineering team to build, launch and maintain a cost-effective, large-scale visual search system with widely available tools. We also demonstrate, through a comprehensive set of live experiments at Pinterest, that content recommendation powered by visual search improves user engagement. By sharing our implementation details and the lessons learned from launching a commercial visual search engine from scratch, we hope visual search will be more widely incorporated into today's commercial applications.

Please see an updated version of the paper, Visual Discovery at Pinterest, presented at World Wide Web (WWW) 2017.

Categories and Subject Descriptors
H.3.3 [Information Systems Applications]: Search Process; I.4.9 [Image Processing and Computer Vision]: Application

Figure 1: Similar Looks: We apply object detection to localize products such as bags and shoes. In this prototype, users click on objects of interest to view similar-looking products.

General Terms
information retrieval, computer vision, deep learning, distributed systems

Keywords
visual search, visual shopping, open source

1. INTRODUCTION
Visual search, or content-based image retrieval [5], is an active research area driven in part by the explosive growth of online photos and the popularity of search engines. Google Goggles, Google Similar Images and Amazon Flow are several examples of commercial visual search systems. Although significant progress has been made in building Web-scale visual search systems, there are few publications describing end-to-end architectures deployed in commercial applications. This is in part due to the complexity of real-world visual search systems, and in part due to business considerations that keep core search technology proprietary.

We faced two main challenges in deploying a commercial visual search system at Pinterest. First, as a startup we needed to control development costs in the form of both human and computational resources. For example, feature computation can become expensive with a large and continuously growing image collection, and with engineers constantly experimenting with new features to deploy, it is vital for our system to be both scalable and cost-effective. Second, the success of a commercial application is measured by the benefit it brings to users (e.g. improved user engagement) relative to the cost of development and maintenance. As a result, our development progress needs to be frequently validated through A/B experiments with live user traffic.

In this paper, we describe our approach to deploying a commercial visual search system with those two challenges in mind. We make two main contributions.

Our first contribution is to present a scalable and cost-effective visual search implementation using widely available tools, feasible for a small engineering team to implement. Section 2.1 describes our simple and pragmatic approach to speeding up and improving the accuracy of object detection and localization, which exploits the rich metadata available at Pinterest. By decoupling the difficult (and computationally expensive) task of multi-class object detection into category classification followed by per-category object detection, we only need to run the expensive object detectors on images with a high probability of containing the object. Section 2.2 presents our distributed pipeline for incrementally adding or updating image features using Amazon Web Services, which avoids wasteful re-computation of unchanged image features. Section 2.3 presents our distributed indexing and search infrastructure built on top of widely available tools.

Our second contribution is to share the results of deploying our visual search infrastructure in two product applications: Related Pins (Section 3) and Similar Looks (Section 4). For each application, we use application-specific data sets to evaluate the effectiveness of each visual search component (object detection, feature representations for similarity) in isolation. After deploying the end-to-end system, we use A/B tests to measure user engagement on live traffic.

Related Pins (Figure 2) is a feature that recommends Pins based on the Pin the user is currently viewing. These recommendations are primarily generated from the "curation graph" of users, boards, and Pins. However, there is a long tail of less popular Pins without recommendations. Using visual search, we generate recommendations for almost all Pins on Pinterest. Our second application, Similar Looks (Figure 1), is a discovery experience we tested specifically for fashion Pins. It allowed users to select a visual query from regions of interest (e.g. a bag or a pair of shoes) and identified visually similar Pins for users to explore or purchase. Instead of using the whole image, visual similarity is computed between the localized objects in the query and database images. To our knowledge, this is the first published work on object detection/localization in a commercially deployed visual search system.

Our experiments demonstrate that 1) one can achieve a very low false positive rate (less than 1%) with a good detection rate by combining object detection/localization methods with metadata, 2) using feature representations from the VGG [21] [3] model significantly improves visual search accuracy on our Pinterest benchmark datasets, and 3) we observe significant gains in user engagement when visual search is used to power the Related Pins and Similar Looks applications.

2. VISUAL SEARCH ARCHITECTURE AT PINTEREST

Figure 2: Related Pins: Pins are selected based on the curation graph.

Pinterest is a visual bookmarking tool that helps users discover and save creative ideas. Users pin images to boards, which are curated collections around particular themes or topics. This human-curated user-board-image graph contains a rich set of information about the images and their semantic relations to each other. For example, when an image is pinned to a board, it implies a "curatorial link" between the new board and all other boards the image appears in. Metadata, such as image annotations, can then be propagated through these links to form a rich description of the image, the image board and the users.

Since the image is the focus of each Pin, visual features play a large role in finding interesting, inspiring and relevant content for users. In this section we describe the end-to-end implementation of a visual search system that indexes billions of images on Pinterest. We address the challenges of developing a real-world visual search system that balances cost constraints with the need for fast prototyping. We describe 1) the features that we extract from images, 2) our infrastructure for distributed and incremental feature extraction, and 3) our real-time visual search service.

2.1 Image Representation and Features
We extract a variety of features from images, including local features and "deep features" extracted from the activations of intermediate layers of a deep convolutional network. The deep features come from convolutional neural networks (CNNs) based on the AlexNet [14] and VGG [21] architectures; we use the feature representations from the fc6 and fc8 layers. These features are binarized for representation efficiency and compared using Hamming distance. We use the open-source Caffe [11] framework to perform training and inference of our CNNs on multi-GPU machines.
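As a minimal, self-contained sketch of this representation (not Pinterest's production code), the snippet below binarizes an fc6-style activation vector by thresholding at zero and compares two images by Hamming distance on the packed bit codes; the zero threshold and the 4096-dimensional toy vectors are assumptions made for illustration.

```python
import numpy as np

def binarize(activations: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Binarize a float feature vector (e.g. a 4096-d fc6 activation)
    into a packed bit array for compact storage."""
    bits = (activations > threshold).astype(np.uint8)
    return np.packbits(bits)          # 4096 floats -> 512 bytes

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two packed binary codes."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

# Toy usage with random activations standing in for CNN outputs.
rng = np.random.default_rng(0)
fc6_query, fc6_candidate = rng.standard_normal(4096), rng.standard_normal(4096)
print(hamming_distance(binarize(fc6_query), binarize(fc6_candidate)))
```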
Figure 3: Instead of running all object detectors on all images, we first predict the image categories using textual metadata, and then apply object detection modules specific to the predicted category.

Figure 4: ROC curves for CUR prediction (left) and CTR prediction (right).
The system also extracts salient color signatures from images. Salient colors are computed by first detecting salient regions [24, 4] of the images and then applying k-means clustering to the Lab pixel values of the salient pixels. Cluster centroids and weights are stored as the color signature of the image.
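The following is a hedged sketch of such a color signature, assuming a salient-region mask is already available (the detectors of [24, 4] are not reimplemented here); it uses scikit-image for the Lab conversion and scikit-learn's k-means, with cluster weights taken as the fraction of salient pixels assigned to each cluster.

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import KMeans

def color_signature(image_rgb: np.ndarray, saliency_mask: np.ndarray, k: int = 5):
    """Cluster the Lab values of salient pixels; return (centroids, weights).

    `image_rgb` is an HxWx3 float array in [0, 1]; `saliency_mask` is an HxW
    boolean array from a salient-region detector (assumed precomputed).
    """
    lab = rgb2lab(image_rgb)                      # HxWx3 Lab image
    salient_pixels = lab[saliency_mask]           # Nx3 matrix of Lab values
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(salient_pixels)
    weights = np.bincount(km.labels_, minlength=k) / len(km.labels_)
    return km.cluster_centers_, weights

# Toy usage: random image with a centered "salient" square.
img = np.random.rand(64, 64, 3)
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
centroids, weights = color_signature(img, mask)
print(centroids.shape, weights)
```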
Two-step Object Detection and Localization
One feature that is particularly relevant to Pinterest is the presence of certain object classes, such as bags, shoes, watches, dresses, and sunglasses. We adopted a two-step detection approach that leverages the abundance of weak text labels on Pinterest images. Since images are pinned many times onto many boards, aggregated pin descriptions and board titles provide a great deal of information about the image. A text processing pipeline within Pinterest extracts relevant annotations for images from the raw text, producing short phrases associated with each image.

We use these annotations to determine which object detectors to run. In Figure 1, we first determined that the image was likely to contain bags and shoes, and then proceeded to apply visual object detectors for those object classes. By first performing category classification, we only need to run the object detectors on images with a high prior likelihood of matching, reducing computational cost as well as false positives.

Our initial approach for object detection was a heavily optimized implementation of cascading deformable part-based models [7]. This detector outputs a bounding box for each detected object, from which we extract visual descriptors for the object. Our recent efforts have focused on investigating the feasibility and performance of deep learning based object detectors [8, 9, 6] as a part of our two-step detection/localization pipeline.

Our experiment results in Section 4 show that our system achieved a very low false positive rate (less than 1%), which was vital for our application. This two-step approach also enables us to incorporate other signals into the category classification. The use of both text and visual signals for object detection and localization is widely used [2, 1, 12] for Web image retrieval and categorization.
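A minimal sketch of the two-step gating logic is shown below; the keyword table and the per-category detector callables are hypothetical stand-ins for Pinterest's text processing pipeline and production detectors.

```python
# Hypothetical keyword table mapping object categories to weak text labels.
CATEGORY_KEYWORDS = {
    "bag": {"bag", "tote", "handbag", "purse"},
    "shoe": {"shoe", "boot", "heel", "sneaker"},
}

def predict_categories(annotations: set) -> set:
    """Step 1: use weak text labels to predict which object classes
    are likely present, so only those detectors need to run."""
    return {cat for cat, kw in CATEGORY_KEYWORDS.items() if annotations & kw}

def detect_objects(image, annotations: set, detectors: dict):
    """Step 2: run only the detectors whose category was predicted."""
    boxes = []
    for category in predict_categories(annotations):
        boxes.extend(detectors[category](image))   # each returns bounding boxes
    return boxes

# Toy usage with dummy detectors that return fixed boxes.
detectors = {
    "bag": lambda img: [("bag", (10, 10, 50, 60))],
    "shoe": lambda img: [("shoe", (70, 80, 110, 120))],
}
print(detect_objects(None, {"spring", "fashion", "tote"}, detectors))
```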
Click Prediction
When users browse on Pinterest, they can interact with a Pin by clicking to view it full screen ("close-up") and subsequently clicking through to the off-site source of the content (a click-through). For each image, we predict the close-up rate (CUR) and the click-through rate (CTR) based on its visual features. We trained a CNN to learn a mapping from images to the probability of a user bringing up the close-up view or clicking through to the content. Both CUR and CTR are helpful for applications like search ranking, recommendation systems and ads targeting, since we often need to know which images are more likely to get attention from users based on their visual content.

CNNs have recently become the dominant approach to many semantic prediction tasks involving visual inputs, including classification [15, 14, 22, 3, 20, 13], detection [8, 9, 6], and segmentation [17]. Training a full CNN to learn good representations can be time-consuming and requires a very large corpus of data. We apply transfer learning to our model by retaining the low-level visual representations from models trained for other computer vision tasks. The top-level layers of the network are fine-tuned for our specific task. This saves substantial training time and leverages the visual features learned from a much larger corpus than that of the target task. We use Caffe to perform this transfer learning.

Figure 4 depicts receiver operating characteristic (ROC) curves for our CNN-based method, compared with a baseline based on a "traditional" computer vision pipeline: an SVM trained with binary labels on a pyramid histogram of words (PHOW), which performs well on object recognition datasets such as Caltech-101. Our CNN-based approach outperforms the PHOW-SVM baseline, and fine-tuning the CNN end-to-end yields a significant performance boost as well. A similar approach was also applied to the task of detecting pornographic images uploaded to Pinterest, with good results.¹

¹ By fine-tuning a network for three-class classification of ignore, softcore, and porn images, we are able to achieve a validation accuracy of 83.2%. When formulated as a binary classification between ignore and softcore/porn categories, the classifier achieved an AUC score of 93.56%.
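The paper's models are fine-tuned in Caffe; as a simplified, hedged stand-in for fine-tuning only the top layers, the sketch below trains a linear "top layer" on fixed fc6-style features (random numbers here), which captures the idea of reusing low-level representations learned on a larger corpus.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Stand-in for precomputed fc6 activations from a network trained on another
# task; in practice these would come from the shared CNN, not random numbers.
rng = np.random.default_rng(0)
features = rng.standard_normal((2000, 4096))
labels = rng.integers(0, 2, size=2000)          # e.g. 1 = image was clicked through

train_x, test_x = features[:1500], features[1500:]
train_y, test_y = labels[:1500], labels[1500:]

# The "fine-tuned top layer": a linear classifier over fixed deep features.
clf = LogisticRegression(max_iter=1000).fit(train_x, train_y)
scores = clf.predict_proba(test_x)[:, 1]
print("AUC on held-out set:", roc_auc_score(test_y, scores))
```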
2.2 Incremental Fingerprinting Service
Most of our vision applications depend on having a complete collection of image features, stored in a format amenable to bulk processing. Keeping this data up-to-date is challenging; because our collection comprises over a billion unique images, it is critical to update the feature set incrementally and avoid unnecessary re-computation whenever possible.

We built a system called the Incremental Fingerprinting Service, which computes image features for all Pinterest images using a cluster of workers on Amazon EC2. It incrementally updates the collection of features under two main change scenarios: new images uploaded to Pinterest, and feature evolution (features added or modified by engineers).

Our approach is to split the image collection into epochs grouped by upload date, and to maintain a separate feature store for each version of each feature type (global, local, deep features). Features are stored in bulk on Amazon S3, organized by feature type, version, and date. When the data is fully up-to-date, each feature store contains all the epochs. On each run, the system detects missing epochs for each feature and enqueues jobs into a distributed queue to populate those epochs.

This storage scheme enables incremental updates as follows. Every day, a new epoch is added to our collection with that day's unique uploads, and we generate the missing features for that date. Since old images do not change, their features are not recomputed. If the algorithm or parameters for generating a feature are modified, or if a new feature is added, a new feature store is started and all of the epochs are computed for that feature. Unchanged features are not affected.

We copy these features into various forms for more convenient access by other jobs: features are merged to form a fingerprint containing all available features of an image, and fingerprints are copied into sharded, sorted files for random access by image signature (MD5 hash). These joined fingerprint files are regularly re-materialized, but the expensive feature computation only needs to be done once per image.

A flow chart of the incremental fingerprint update process is shown in Figure 5. It consists of five main jobs. Job (1) compiles a list of newly uploaded image signatures and groups them by date into epochs; we randomly divide each epoch into sorted shards of approximately 200,000 images to limit the size of the final fingerprint files. Job (2) identifies missing epochs in each feature store and enqueues jobs into PinLater (a distributed queue service similar to Amazon SQS); these jobs subdivide the shards into "work chunks", tuned such that each chunk takes approximately 30 minutes to compute. Job (3) runs on an automatically launched cluster of EC2 instances (20–600 compute nodes), scaled depending on the size of the update. Spot instances can be used; if an instance is terminated, its job is rescheduled on another worker. The output of each work chunk is saved onto S3, and eventually recombined into feature files corresponding to the original shards. Job (4) merges the individual feature shards into a unified fingerprint containing all of the available features for each image. Job (5), the VisualJoiner, merges all the epochs (together with other visual data sources such as visual annotations and deduplicated signatures) into a sorted, sharded HFile format (the VisualJoin) allowing for random access.

The initial computation of all available features on all images takes a little over a day using a cluster of several hundred 32-core machines, and produces roughly 5 TB of feature data. The steady-state requirement to process new images incrementally is only about 5 machines.

Figure 5: Examples of outputs generated by the incremental fingerprint update pipeline. The initial run is shown as 2014-xx-xx, which includes all the images created before that run (e.g. sig/dt=2014-xx-xx/{000..999}); delta date epochs (e.g. sig/dt=2015-01-06/{000..004}) hold each subsequent day's uploads.
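A rough sketch of the bookkeeping in jobs (1) and (2) is given below; the 200,000-image shard size comes from the text, while the chunk size, data structures and in-memory queue are illustrative assumptions rather than the actual PinLater-based implementation.

```python
from collections import defaultdict

SHARD_SIZE = 200_000     # images per sorted shard (from the text)
CHUNK_SIZE = 5_000       # images per work chunk; illustrative value only

def group_into_epochs(new_images):
    """Job (1): group newly uploaded (signature, upload_date) pairs by date
    and split each epoch into sorted, fixed-size shards."""
    epochs = defaultdict(list)
    for signature, upload_date in new_images:
        epochs[upload_date].append(signature)
    return {
        date: [sorted(sigs)[i:i + SHARD_SIZE]
               for i in range(0, len(sigs), SHARD_SIZE)]
        for date, sigs in epochs.items()
    }

def enqueue_missing(epochs, feature_store, queue):
    """Job (2): for every shard not yet present in a feature store,
    enqueue smaller work chunks onto a distributed queue."""
    for date, shards in epochs.items():
        for shard_id, shard in enumerate(shards):
            if (date, shard_id) in feature_store:
                continue                      # already computed, skip
            for start in range(0, len(shard), CHUNK_SIZE):
                queue.append((date, shard_id, shard[start:start + CHUNK_SIZE]))

# Toy usage with an in-memory list standing in for the distributed queue.
queue = []
epochs = group_into_epochs([("md5_a", "2015-01-06"), ("md5_b", "2015-01-06")])
enqueue_missing(epochs, feature_store=set(), queue=queue)
print(len(queue), "work chunk(s) enqueued")
```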

2.3 Search Infrastructure
At Pinterest, there are several use cases for a distributed visual search system. One use case is to explore similar-looking products (Pinterest Similar Looks), and others include near-duplicate detection and content recommendation. In all these applications, visually similar results are computed from distributed indices built on top of the visualjoins generated in the previous section. Since each use case has a different set of performance and cost requirements, our infrastructure is designed to be flexible and re-configurable. A flow chart of the search infrastructure is shown in Figure 6.

As the first step we create distributed image indices from visualjoins using Hadoop. Sharded by doc-ID, each machine contains the indexes (and features) associated with a subset of the entire image collection. Two types of indexes are used: the first is a disk-stored (and partially memory-cached) token index with vector-quantized features (e.g. a visual vocabulary) as keys and image doc-ID hashes as posting lists. This is analogous to a text-based image retrieval system, except that text is replaced by visual tokens. The second is a memory-cached store of features, including both visual features and metadata such as image annotations and "topic vectors" computed from the user-board-image graph. The first part is used for fast (but imprecise) lookup, and the second part is used for more accurate (but slower) ranking refinement.

Each machine runs a leaf ranker, which first computes the K nearest neighbors from the indices and then re-ranks the top candidates by computing a score between the query image and each of the top candidate images based on additional metadata such as annotations. In some cases the leaf ranker skips the token index and directly retrieves the K nearest neighbor images from the feature tree index using variations of approximate KNN such as [18]. A root ranker hosted on another machine retrieves the top K results from each of the leaf rankers, merges the results, and returns them to the users. To handle new fingerprints generated by our real-time feature extractor, we have an online version of the visual search pipeline where a very similar process occurs; with the online version, however, the given fingerprint is queried against pre-generated indices.
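The sketch below illustrates this two-stage retrieval under simplified assumptions: each leaf gathers candidates from a toy posting-list index and re-ranks them by Euclidean distance on the stored feature, and a root ranker merges the per-leaf top-k lists; the real system uses vector-quantized tokens, approximate KNN [18] and metadata-aware scoring.

```python
import heapq
import numpy as np

def leaf_rank(query_tokens, query_feature, token_index, features, k=10):
    """One leaf: gather candidates from the token (posting-list) index,
    then re-rank them by distance on the full stored feature."""
    candidates = set()
    for token in query_tokens:
        candidates.update(token_index.get(token, []))
    scored = [(np.linalg.norm(query_feature - features[doc]), doc)
              for doc in candidates]
    return sorted(scored)[:k]

def root_rank(leaf_results, k=10):
    """Root ranker: merge the per-leaf top-k lists into a global top-k."""
    return heapq.nsmallest(k, (item for leaf in leaf_results for item in leaf))

# Toy usage with two tiny shards.
rng = np.random.default_rng(0)
features = {doc: rng.standard_normal(8) for doc in range(6)}
shard_a = {"tok1": [0, 1], "tok2": [2]}
shard_b = {"tok1": [3], "tok3": [4, 5]}
q = rng.standard_normal(8)
print(root_rank([leaf_rank({"tok1", "tok3"}, q, s, features)
                 for s in (shard_a, shard_b)]))
```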
Figure 6: A flow chart of the distributed visual search pipeline. Visualjoin parts are sharded by image signature; each shard holds a token index and a features tree served by a leaf ranker, and a merger combines results from the leaf rankers.

Figure 7: Before and after incorporating Visual Related Pins.
3. APPLICATION 1: RELATED PINS
One of the first applications of Pinterest's visual search pipeline was within a recommendations product called Related Pins, which recommends other images a user may be interested in when viewing a Pin. Traditionally, we have used a combination of user-curated image-to-board relationships and content-based signals to generate these recommendations. A problem with this approach, however, is that computing these recommendations is an offline process, and the image-to-board relationship must already have been curated, which may not be the case for our less popular Pins or newly created Pins. As a result, 6% of images at Pinterest have very few or no recommendations. For these images, we used the visual search pipeline described previously to generate Visual Related Pins based on visual signals, as shown in Figure 7.

The first step of the Visual Related Pins product is to use the local token index built from all existing Pinterest images to detect whether we have near duplicates of the query image. Specifically, given a query image, the system returns a set of images that are variations of the same image but altered through transformations such as resizing, cropping, rotation, translation, and adding, deleting or altering minor parts of the visual content. Since the resulting images look visually identical to the query image, their recommendations are most likely relevant to the query image. In most cases, however, we found that there are either no near duplicates detected or the near duplicates do not have enough recommendations. Thus, we focused most of our attention on retrieving visual search results generated from an index based on deep features.

Static Evaluation of Search Relevance
Our initial Visual Related Pins experiment utilized features from the original and fine-tuned versions of the AlexNet model in its search infrastructure. However, recent successes with deeper CNN architectures for classification led us to investigate the performance of feature sets from a variety of CNN models.

To conduct evaluation for visual search, we used the image annotations associated with the images as a proxy for relevancy. This approach is commonly used for offline evaluation of visual search systems [19] in addition to human evaluation. In this work, we used the top text queries associated with each image as testing annotations. We retrieve 3,000 images per query for 1,000 queries using Pinterest Search, which yields a dataset with about 1.6 million unique images. We label each image with the query that produced it. A visual search result is assumed to be relevant to a query image if the two images share a label.

Using this evaluation dataset, we computed the precision@k measure for several feature sets: the original AlexNet 6th layer fully-connected features (fc6), the fc6 features of a fine-tuned AlexNet model trained with Pinterest product data, GoogLeNet ("loss3" layer output), and the fc6 features of the VGG 16-layer network [3]. We also examined combining the score from the aforementioned low-level features with the score from the output vector of the classifier layer (the semantic features). Table 1 shows p@5 and p@10 performance of these models using low-level features for nearest neighbor search, along with the average latency of our visual search service (which includes feature extraction for the query image as well as retrieval). We observed a substantial gain in precision against our evaluation dataset when using the fc6 features of the VGG 16-layer model, with an acceptable latency for our applications.
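A small sketch of the precision@k metric as described above (a result is counted as relevant if it shares the query image's label); the labels in the toy example are made up.

```python
def precision_at_k(retrieved_labels, query_label, k):
    """Fraction of the top-k results sharing the query image's label."""
    top_k = retrieved_labels[:k]
    return sum(1 for label in top_k if label == query_label) / k

def mean_precision_at_k(results, k):
    """Average p@k over (query_label, retrieved_labels) pairs."""
    return sum(precision_at_k(r, q, k) for q, r in results) / len(results)

# Toy usage: two queries with the labels of their retrieved results.
results = [
    ("blue dress", ["blue dress", "blue dress", "handbag", "blue dress", "boots"]),
    ("handbag",    ["handbag", "boots", "handbag", "handbag", "handbag"]),
]
print(mean_precision_at_k(results, k=5))   # 0.7
```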
Table 1: Relevance of visual search.

Model          p@5    p@10   latency
AlexNet FC6    0.051  0.040  193ms
Pinterest FC6  0.234  0.210  234ms
GoogLeNet      0.223  0.202  1207ms
VGG 16-layer   0.302  0.269  642ms
Live Experiments
For our experiment, we set up a system to detect new Pins with few recommendations, query our visual search system, and store their results in HBase to serve during Pin close-up.

One improvement we built on top of the visual search system for this experiment was adding a results metadata conformity threshold to allow greater precision at the expense of lower recall. This was important because we feared that delivering poor recommendations to a user would have lasting effects on that user's engagement with Pinterest. This was particularly concerning as our visual recommendations are served when viewing newly created Pins, a behavior that occurs often among newly joined users. As such, we chose to lower the recall if it meant improving relevancy.

We launched the experiment initially to 10% of eligible Pinterest live traffic. We considered a user to be eligible when they viewed a Pin close-up that did not have enough recommendations, and assigned that user to either a treatment group, where we replaced the Related Pins section with visual search results, or a control group, where we did not alter the experience. In this experiment, we measured the change in total repins in the Related Pins section, where repinning is the action of a user adding an image to their collections. We chose to measure repins because it is one of our top-line metrics and a standard metric for measuring engagement.

After running the experiment for three months, Visual Related Pins increased total repins in the Related Pins product by 2%, as shown in Figure 8.

Figure 8: Visual Related Pins increases total Related Pins repins on Pinterest by 2%.
4. APPLICATION 2: SIMILAR LOOKS
One of the most popular categories on Pinterest is women's fashion. However, a large percentage of Pins in this category do not direct users to a shopping experience, and therefore are not actionable. There are two challenges towards making these Pins actionable: 1) many Pins feature editorial shots such as "street style" outfits, which often link to a website with little additional information on the items featured in the image; 2) Pin images often contain multiple objects (e.g. a woman walking down the street, with a leopard-print bag, black boots, sunglasses, torn jeans, etc.). A user looking at the Pin might be interested in learning more about the bag, while another user might want to buy the sunglasses.

User research revealed this to be a common user frustration, and our data indicated that users are much less likely to click through to the external website on women's fashion Pins, relative to other categories.

To address this problem, we built a product called "Similar Looks", which localized and classified fashion objects (Figure 9). We use object recognition to detect products such as bags, shoes, pants, and watches in Pin images. From these objects, we extract visual and semantic features to generate product recommendations ("Similar Looks"). A user would discover the recommendations if there was a red dot on the object in the Pin (see Figure 1). Clicking on the red dot loads a feed of Pins featuring visually similar objects (e.g. other visually similar blue dresses).

Related Work
Applying visual search to "soft goods" has been explored both within academia and industry. Like.com, Google Shopping and Zappos (owned by Amazon) are a few well-known applications of computer vision to fashion recommendations. Baidu and Alibaba have also launched visual search systems recently to solve similar problems. There is also a growing amount of research on vision-based fashion recommendations [23, 16, 10]. Our approach demonstrates the feasibility of an object-based visual search system for tens of millions of Pinterest users and exposes an interactive search experience around these detected objects.

Static Evaluation of Object Localization
The first step of evaluating our Similar Looks product was to investigate our object localization and detection capabilities. We chose to focus on fashion objects because of the aforementioned business need and because "soft goods" tend to have distinctive visual shapes (e.g. shorts, bags, glasses).

We collected our evaluation dataset by randomly sampling a set of images from Pinterest's women's fashion category, and manually labeling 2,399 fashion objects in 9 categories (shoes, dress, glasses, bag, watch, pants, shorts, bikini, earrings) on the images by drawing a rectangular crop over the objects. We observed that shoes, bags, dresses and pants were the four largest categories in our evaluation dataset. Shown in Table 2 is the distribution of fashion objects, as well as the detection accuracies from the text-based filter, image-based detection, and the combined approach (where text filters are applied prior to object detection).
As previously described, the text-based approach applies manually crafted rules (e.g. regular expressions) to the Pinterest metadata associated with images (which we treat as weak labels). For example, an image annotated with "spring fashion, tote with flowers" will be classified as "bag", and is considered a positive sample if the image contains a "bag" object box label. For image-based evaluation, we compute the intersection between the predicted object bounding box and the labeled object bounding box of the same type, and count an intersection-over-union ratio of 0.3 or greater as a positive match.
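A minimal sketch of this matching criterion, with boxes given as (x1, y1, x2, y2) corner coordinates (an assumption about the box format, which the paper does not specify):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def is_positive_match(predicted, labeled, threshold=0.3):
    """A detection counts as correct when IoU with a same-class label >= 0.3."""
    return iou(predicted, labeled) >= threshold

print(is_positive_match((0, 0, 10, 10), (5, 5, 15, 15)))   # IoU = 25/175, so False
```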

Table 2: Object detection/classification accuracy (%).

              Text          Img           Both
Objects   #   TP    FP      TP    FP      TP    FP
shoe      873 79.8  6.0     41.8  3.1     34.4  1.0
dress     383 75.5  6.2     58.8  12.3    47.0  2.0
glasses   238 75.2  18.8    63.0  0.4     50.0  0.2
bag       468 66.2  5.3     59.8  2.9     43.6  0.5
watch     36  55.6  6.0     66.7  0.5     41.7  0.0
pants     253 75.9  2.0     60.9  2.2     48.2  0.1
shorts    89  73.0  10.1    44.9  1.2     31.5  0.2
bikini    32  71.9  1.0     31.3  0.2     28.1  0.0
earrings  27  81.5  4.7     18.5  0.0     18.5  0.0
Average       72.7  6.7     49.5  2.5     38.1  0.5

Table 2 demonstrates that neither text annotation filters nor object localization alone were sufficient for our detection task, due to their relatively high false positive rates of 6.7% and 2.5%, respectively. Not surprisingly, combining the two approaches significantly decreased our false positive rate to less than 1%.

Specifically, we saw that for classes like "glasses", text annotations were insufficient and image-based classification excelled (due to the distinctive visual shape of glasses). For other classes, such as "dress", this situation was reversed (the false positive rate for our dress detector was high, 12.3%, due to occlusion and high variance in style for that class, and adding a text filter dramatically improved results). Aside from reducing the number of images we needed to fingerprint with our object classifiers, for several object classes (shoe, bag, pants) we observed that text-prefiltering was crucial to achieve an acceptable false positive rate (1% or less).

Figure 9: Once a user clicks on the red dot, the system shows products that have a similar appearance to the query object.

Live Experiments
Our system identified over 80 million "clickable" objects from a subset of Pinterest images. A clickable red dot is placed upon the detected object. Once the user clicks on the dot, our visual search system retrieves a collection of Pins most visually similar to the object. We launched the system to a small percentage of Pinterest live traffic and collected user engagement metrics such as CTR for a period of one month. Specifically, we looked at the clickthrough rate of the dot, the clickthrough rate on our visual search results, and also compared engagement on Similar Looks results with the existing Related Pins recommendations.

As shown in Figure 10, an average of 12% of users who viewed a Pin with a dot clicked on a dot in a given day. Those users went on to click on an average of 0.55 Similar Looks results. Although this data was encouraging, when we compared engagement with all related content on the Pin close-up (summing engagement with both Related Pins and Similar Looks results for the treatment group, and just Related Pins engagement for the control), Similar Looks actually hurt overall engagement on the Pin close-up by 4%. After the novelty effect wore off, we saw a gradual decrease in CTR on the red dots, which stabilized at around 10%.

Figure 10: Engagement rates for the Similar Looks experiment.

To test the relevance of our Similar Looks results independently of the bias resulting from the introduction of a new user behavior (learning to click on the "object dots"), we designed an experiment to blend Similar Looks results directly into the existing Related Pins product (for Pins containing detected objects). This gave us a way to directly measure whether users found our visually similar recommendations relevant, compared to our non-visual recommendations. On Pins where we detected an object, this experiment increased overall engagement (repins and close-ups) in Related Pins by 5%. Although we set an initial static blending ratio for this experiment (one visually similar result to three production results), this ratio adjusts in response to user click data.
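As a hedged illustration of the blending described above, the sketch below interleaves one visually similar result after every three production results; how the ratio adjusts in response to click data is not modeled here.

```python
def blend(production, visual, visual_every=4):
    """Interleave ranked lists so roughly one of every `visual_every`
    slots is a visually similar result (a 1:3 ratio when visual_every=4)."""
    blended, vi, pi = [], 0, 0
    for slot in range(len(production) + len(visual)):
        take_visual = (slot + 1) % visual_every == 0 and vi < len(visual)
        if take_visual:
            blended.append(visual[vi]); vi += 1
        elif pi < len(production):
            blended.append(production[pi]); pi += 1
        elif vi < len(visual):
            blended.append(visual[vi]); vi += 1
    return blended

print(blend(["p1", "p2", "p3", "p4", "p5", "p6"], ["v1", "v2"]))
# ['p1', 'p2', 'p3', 'v1', 'p4', 'p5', 'p6', 'v2']
```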
5. CONCLUSION AND FUTURE WORK
We demonstrate that, with the availability of distributed computational platforms such as Amazon Web Services and open-source tools, it is possible for a handful of engineers or an academic lab to build a large-scale visual search system using a combination of non-proprietary tools. This paper presented our end-to-end visual search pipeline, including incremental feature updating and a two-step object detection and localization method that improves search accuracy and reduces development and deployment costs. Our live product experiments demonstrate that visual search features can increase user engagement.

We plan to further improve our system in the following areas. First, we are interested in investigating the performance and efficiency of CNN-based object detection methods in the context of live visual search systems. Second, we are interested in leveraging the Pinterest "curation graph" to enhance visual search relevance. Lastly, we want to experiment with alternative interactive interfaces for visual search.

Figure 11: Examples of object search results for shoes. Boundaries of detected objects are automatically highlighted. The top image is the query image.

Figure 12: Samples of object detection and localization results for bags. [Green: ground truth, blue: detected objects.]

Figure 13: Samples of object detection and localization results for shoes.

6. REFERENCES
[1] S. Bengio, J. Dean, D. Erhan, E. Ie, Q. V. Le, A. Rabinovich, J. Shlens, and Y. Singer. Using web co-occurrence statistics for improving image categorization. CoRR, abs/1312.5697, 2013.
[2] T. L. Berg, A. C. Berg, J. Edwards, M. Maire, R. White, Y.-W. Teh, E. Learned-Miller, and D. A. Forsyth. Names and faces in the news. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 848–854, 2004.
[3] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In British Machine Vision Conference, 2014.
[4] M. Cheng, N. Mitra, X. Huang, P. H. S. Torr, and S. Hu. Global contrast based salient region detection. Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2014.
[5] R. Datta, D. Joshi, J. Li, and J. Wang. Image retrieval: Ideas, influences, and trends of the new age. ACM Computing Surveys, 40(2):5:1–5:60, May 2008.
[6] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, June 23-28, 2014, pages 2155–2162, 2014.
[7] P. F. Felzenszwalb, R. B. Girshick, and D. A. McAllester. Cascade object detection with deformable part models. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2241–2248, 2010.
[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. arXiv preprint arXiv:1311.2524, 2013.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), pages 346–361. Springer, 2014.
[10] V. Jagadeesh, R. Piramuthu, A. Bhardwaj, W. Di, and N. Sundaresan. Large scale visual recommendations from street fashion images. In Proceedings of the International Conference on Knowledge Discovery and Data Mining (SIGKDD), pages 1925–1934, 2014.
[11] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[12] Y. Jing and S. Baluja. VisualRank: Applying PageRank to large-scale image search. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 30(11):1877–1890, 2008.
[13] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1725–1732, 2014.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097–1105, 2012.
[15] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, Dec. 1989.
[16] S. Liu, Z. Song, M. Wang, C. Xu, H. Lu, and S. Yan. Street-to-shop: Cross-scenario clothing retrieval via parts alignment and auxiliary set. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[17] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. arXiv preprint arXiv:1411.4038, 2014.
[18] M. Muja and D. G. Lowe. Fast matching of binary features. In Proceedings of the Conference on Computer and Robot Vision (CRV), pages 404–410, Washington, DC, USA, 2012. IEEE Computer Society.
[19] H. Müller, W. Müller, D. M. Squire, S. Marchand-Maillet, and T. Pun. Performance evaluation in content-based image retrieval: Overview and proposals. Pattern Recognition Letters, 22(5):593–601, 2001.
[20] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. arXiv preprint arXiv:1409.0575, 2014.
[21] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[22] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
[23] K. Yamaguchi, M. H. Kiapour, L. Ortiz, and T. Berg. Retrieving similar styles to parse clothing. Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2014.
[24] Q. Yan, L. Xu, J. Shi, and J. Jia. Hierarchical saliency detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pages 1155–1162, Washington, DC, USA, 2013.