Zero-Shot Image Classification with CLIP

When you want to classify an image into different classes, the standard approach is to train an image classifier on labeled examples of those classes. However, collecting training data is tedious, and if the collected data is too scarce or too imbalanced, you may not end up with a decent classifier. So you may wonder: is there a model strong enough to handle this situation without any training effort?

Actually, there is! OpenAI introduced a model named CLIP, which can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized. And its accuracy is high: for example, CLIP achieves 76.2% top-1 accuracy on ImageNet without using any of the 1.28M training samples. This matches the original supervised ResNet-50, which is quite promising for a classification task with 1000 classes!

So in this tutorial, let’s dive deep into CLIP. We will show you how to use the CLIP model for zero-shot image classification in AutoGluon.

Simple Demo

Here we provide a simple demo to classify what dog breed is in the picture below.

from IPython.display import Image, display
from autogluon.multimodal import download

# Download a sample dog photo and display it
url = "https://farm4.staticflickr.com/3445/3262471985_ed886bf61a_z.jpg"
dog_image = download(url)

pil_img = Image(filename=dog_image)
display(pil_img)
Downloading 3262471985_ed886bf61a_z.jpg from https://farm4.staticflickr.com/3445/3262471985_ed886bf61a_z.jpg...
(The downloaded dog photo is displayed here.)

Normally to solve this task, you need to collect some training data (e.g., the Stanford Dogs dataset) and train a dog breed classifier. But with CLIP, all you need to do is provide some potential visual categories. CLIP will handle the rest for you.

from autogluon.multimodal import MultiModalPredictor

# Create a predictor for zero-shot image classification and score the image
# against a few candidate dog breeds expressed as text queries.
predictor = MultiModalPredictor(problem_type="zero_shot_image_classification")
prob = predictor.predict_proba({"image": [dog_image]}, {"text": ['This is a Husky', 'This is a Golden Retriever', 'This is a German Shepherd', 'This is a Samoyed']})
print("Label probs:", prob)
Label probs: [[5.6668371e-01 3.4304475e-04 4.1727135e-01 1.5701901e-02]]

Clearly, according to the probabilities, we know there is a Husky in the photo (which I think is correct)!
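
If you only want the winning label rather than the full distribution, a quick way (assuming prob is returned as a NumPy array, which is what the printed output suggests) is to take the argmax over the text queries:

import numpy as np

# The probabilities are ordered the same way as the text queries above
text_queries = ['This is a Husky', 'This is a Golden Retriever', 'This is a German Shepherd', 'This is a Samoyed']
print("Predicted:", text_queries[np.argmax(prob[0])])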

Let’s try a harder example. Below is a photo of two Segways. This object class is not common in most existing vision datasets.

url = "https://live.staticflickr.com/7236/7114602897_9cf00b2820_b.jpg"
segway_image = download(url)

pil_img = Image(filename=segway_image)
display(pil_img)
Downloading 7114602897_9cf00b2820_b.jpg from https://live.staticflickr.com/7236/7114602897_9cf00b2820_b.jpg...
(The downloaded Segway photo is displayed here.)

Given several text queries, CLIP can still predict the segway class correctly with high confidence.

prob = predictor.predict_proba({"image": [segway_image]}, {"text": ['segway', 'bicycle', 'wheel', 'car']})
print("Label probs:", prob)
Label probs: [[9.9997151e-01 5.8954188e-06 2.0427166e-05 2.2961874e-06]]

This is amazing, right? Now, a bit of background on why and how CLIP works. CLIP stands for Contrastive Language-Image Pre-training. It is trained on a massive amount of data (400M image-text pairs). Using a simple contrastive objective, CLIP learns to predict which text, out of a set of randomly sampled candidates, is actually paired with a given image in the training dataset. As a result, CLIP models can be applied to nearly arbitrary visual classification tasks, just like the examples shown above.
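
If you are curious what this scoring step looks like outside of AutoGluon, below is a rough sketch using the Hugging Face transformers implementation of CLIP directly. The checkpoint name and text prompts are illustrative choices, not necessarily what AutoGluon uses internally:

import torch
from PIL import Image as PILImage
from transformers import CLIPModel, CLIPProcessor

# Load a public CLIP checkpoint (illustrative choice; any CLIP variant works)
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

texts = ["a photo of a husky", "a photo of a golden retriever"]
image = PILImage.open(dog_image)  # dog_image was downloaded earlier in this tutorial

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image are scaled cosine similarities between the image embedding
# and each text embedding; a softmax turns them into class probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)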

More about CLIP

CLIP is powerful, and it was designed to mitigate a number of major problems in the standard deep learning approach to computer vision, such as costly datasets, closed-set prediction, and poor generalization performance. However, CLIP is not the ultimate solution; it has its own limitations. For example, CLIP is vulnerable to typographic attacks: if you add some text to an image, CLIP’s predictions are easily swayed by that text. Let’s see one example from OpenAI’s blog post on multimodal neurons.

Suppose we have a photo of a Granny Smith apple,

url = "https://cdn.openai.com/multimodal-neurons/assets/apple/apple-blank.jpg"
image_path = download(url)

pil_img = Image(filename=image_path)
display(pil_img)
Downloading apple-blank.jpg from https://cdn.openai.com/multimodal-neurons/assets/apple/apple-blank.jpg...
(The downloaded photo of a Granny Smith apple is displayed here.)

We then try to classify this image into several classes, such as Granny Smith, iPod, library, pizza, toaster and dough.

prob = predictor.predict_proba({"image": [image_path]}, {"text": ['Granny Smith', 'iPod', 'library', 'pizza', 'toaster', 'dough']})
print("Label probs:", prob)
Label probs: [[9.9852788e-01 1.2474856e-03 1.6523789e-05 4.3888969e-05 7.2319832e-05
  9.1789414e-05]]

We can see that zero-shot classification works great: it predicts Granny Smith with almost 100% confidence. But if we add some text to the apple, like this,

url = "https://cdn.openai.com/multimodal-neurons/assets/apple/apple-ipod.jpg"
image_path = download(url)

pil_img = Image(filename=image_path)
display(pil_img)
Downloading apple-ipod.jpg from https://cdn.openai.com/multimodal-neurons/assets/apple/apple-ipod.jpg...
(The downloaded photo of the apple with text attached is displayed here.)

Then we use the same class names to perform zero-shot classification,

prob = predictor.predict_proba({"image": [image_path]}, {"text": ['Granny Smith', 'iPod', 'library', 'pizza', 'toaster', 'dough']})
print("Label probs:", prob)
Label probs: [[2.4722083e-02 9.7519821e-01 2.6978284e-06 9.0161143e-07 2.8397344e-05
  4.7826539e-05]]

Suddenly, the apple becomes an iPod.

CLIP also has other limitations. If you are interested, you can read the CLIP paper for more details. Or you can stay here and play with your own examples!
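
If you do want to play with your own examples, the calls from earlier in this tutorial boil down to the following template (the image path and candidate labels below are placeholders to replace with your own):

from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor(problem_type="zero_shot_image_classification")
my_image = "path/to/your_image.jpg"                   # placeholder: replace with your own image file
candidate_labels = ["class A", "class B", "class C"]  # placeholder: replace with your own class names
prob = predictor.predict_proba({"image": [my_image]}, {"text": candidate_labels})
print("Label probs:", prob)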

Other Examples

You may go to AutoMM Examples to explore other examples about AutoMM.

Customization

To learn how to customize AutoMM, please refer to Customize AutoMM.