Image-to-Image Semantic Matching with AutoMM


Computing the similarity between two images is a common task in computer vision, with practical applications such as detecting whether two photos show the same product. In general, an image similarity model takes two images as input and transforms them into vectors, and then similarity scores computed with cosine similarity, dot product, or Euclidean distance measure how alike or different the two images are.
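
For intuition, the minimal sketch below (not part of this tutorial's pipeline) compares two placeholder embedding vectors with the three similarity measures mentioned above, using NumPy; the vectors are random stand-ins for image features.

import numpy as np

# Two placeholder embedding vectors standing in for image features.
vec_a = np.random.rand(768)
vec_b = np.random.rand(768)

# Cosine similarity: angle-based, independent of vector magnitude.
cosine_sim = np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))

# Dot product: grows with both alignment and magnitude.
dot_sim = np.dot(vec_a, vec_b)

# Euclidean distance: smaller means more similar.
euclidean_dist = np.linalg.norm(vec_a - vec_b)

print(cosine_sim, dot_sim, euclidean_dist)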

import os
import pandas as pd
import warnings
from IPython.display import Image, display
warnings.filterwarnings('ignore')

Prepare your Data

In this tutorial, we will demonstrate how to use AutoMM for image-to-image semantic matching with the simplified Stanford Online Products dataset (SOP).

The Stanford Online Products dataset was introduced for metric learning. There are 12 categories of products in this dataset: bicycle, cabinet, chair, coffee maker, fan, kettle, lamp, mug, sofa, stapler, table, and toaster. Each category contains a number of products, and each product has several images captured from different views. Here, we consider different views of the same product as positive pairs (labeled as 1) and images from different products as negative pairs (labeled as 0).

The following code downloads the dataset and unzips the images and annotation files.

download_dir = './ag_automm_tutorial_img2img'
zip_file = 'https://automl-mm-bench.s3.amazonaws.com/Stanford_Online_Products.zip'
from autogluon.core.utils.loaders import load_zip
load_zip.unzip(zip_file, unzip_dir=download_dir)
Downloading ./ag_automm_tutorial_img2img/file.zip from https://automl-mm-bench.s3.amazonaws.com/Stanford_Online_Products.zip...
100%|██████████| 3.08G/3.08G [01:49<00:00, 28.1MiB/s]

Then we can load the annotations into dataframes.

dataset_path = os.path.join(download_dir, 'Stanford_Online_Products')
train_data = pd.read_csv(f'{dataset_path}/train.csv', index_col=0)
test_data = pd.read_csv(f'{dataset_path}/test.csv', index_col=0)
image_col_1 = "Image1"
image_col_2 = "Image2"
label_col = "Label"
match_label = 1

Here you need to specify match_label, the label value indicating that a pair of images semantically match. In this demo dataset, we use 1 since we assigned 1 to image pairs from the same product. Choose match_label according to your own task context.
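
If you are unsure which value to use, a quick check of the label column (a small sketch using pandas) shows which label values occur and how often:

# Inspect the label distribution to confirm which value marks a matching pair.
print(train_data[label_col].value_counts())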

Next, we expand the image paths since the original paths are relative.

def path_expander(path, base_folder):
    path_l = path.split(';')
    return ';'.join([os.path.abspath(os.path.join(base_folder, path)) for path in path_l])

for image_col in [image_col_1, image_col_2]:
    train_data[image_col] = train_data[image_col].apply(lambda ele: path_expander(ele, base_folder=dataset_path))
    test_data[image_col] = test_data[image_col].apply(lambda ele: path_expander(ele, base_folder=dataset_path))
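
To catch broken paths early, you can optionally verify that the expanded paths point to existing files. The following check is a small optional sketch, not part of the original tutorial:

# Optional sanity check: count expanded image paths that do not exist on disk.
for image_col in [image_col_1, image_col_2]:
    num_missing = (~train_data[image_col].apply(os.path.exists)).sum()
    print(f"{image_col}: {num_missing} missing files")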

The annotations are only image path pairs and their binary labels (1 and 0 mean that the image pair matches or does not match, respectively).

train_data.head()
Image1 Image2 Label
0 /home/ci/autogluon/docs/tutorials/multimodal/s... /home/ci/autogluon/docs/tutorials/multimodal/s... 0
1 /home/ci/autogluon/docs/tutorials/multimodal/s... /home/ci/autogluon/docs/tutorials/multimodal/s... 1
2 /home/ci/autogluon/docs/tutorials/multimodal/s... /home/ci/autogluon/docs/tutorials/multimodal/s... 0
3 /home/ci/autogluon/docs/tutorials/multimodal/s... /home/ci/autogluon/docs/tutorials/multimodal/s... 1
4 /home/ci/autogluon/docs/tutorials/multimodal/s... /home/ci/autogluon/docs/tutorials/multimodal/s... 1

Let’s visualize a matching image pair.

pil_img = Image(filename=train_data[image_col_1][5])
display(pil_img)
pil_img = Image(filename=train_data[image_col_2][5])
display(pil_img)

Here are two images that do not match.

pil_img = Image(filename=train_data[image_col_1][0])
display(pil_img)
pil_img = Image(filename=train_data[image_col_2][0])
display(pil_img)

Train your Model

Ideally, we want to obtain a model that can return high/low scores for positive/negative image pairs. With AutoMM, we can easily train a model that captures the semantic relationship between images. Basically, it uses a Swin Transformer to project each image into a high-dimensional vector and computes the cosine similarity of the two feature vectors.
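
Conceptually, this is a two-tower (Siamese) setup: both images pass through the same backbone, and the resulting embeddings are compared with cosine similarity. The sketch below only illustrates this scoring idea with PyTorch and random placeholder embeddings; it is not AutoMM's actual implementation.

import torch
import torch.nn.functional as F

# Placeholder embeddings standing in for the backbone features of the two images.
query_emb = torch.randn(1, 768)
response_emb = torch.randn(1, 768)

# Cosine similarity in [-1, 1]; a higher score suggests a better match.
score = F.cosine_similarity(query_emb, response_emb)
print(score.item())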

With AutoMM, you just need to specify the query, response, and label column names and fit the model on the training dataset without worrying about the implementation details.

from autogluon.multimodal import MultiModalPredictor
predictor = MultiModalPredictor(
    problem_type="image_similarity",
    query=image_col_1,       # the column name of the first image
    response=image_col_2,    # the column name of the second image
    label=label_col,         # the label column name
    match_label=match_label, # the label indicating that query and response have the same semantic meaning
    eval_metric='auc',       # the evaluation metric
)
    
# Fit the model
predictor.fit(
    train_data=train_data,
    time_limit=180,
)
No path specified. Models will be saved in: "AutogluonModels/ag-20240716_223958"
=================== System Info ===================
AutoGluon Version:  1.1.1b20240716
Python Version:     3.10.13
Operating System:   Linux
Platform Machine:   x86_64
Platform Version:   #1 SMP Fri May 17 18:07:48 UTC 2024
CPU Count:          8
Pytorch Version:    2.3.1+cu121
CUDA Version:       12.1
Memory Avail:       28.50 GB / 30.95 GB (92.1%)
Disk Space Avail:   185.45 GB / 255.99 GB (72.4%)
===================================================
AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed).
	2 unique label values:  [0, 1]
	If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during Predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression', 'quantile'])

AutoMM starts to create your model. ✨✨✨

To track the learning progress, you can open a terminal and launch Tensorboard:
    ```shell
    # Assume you have installed tensorboard
    tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/semantic_matching/AutogluonModels/ag-20240716_223958
    ```

INFO: Seed set to 0
WARNING:timm.models._builder:Unexpected keys (head.fc.fc1.bias, head.fc.fc1.weight, head.fc.norm.bias, head.fc.norm.weight) found while loading pretrained weights. This may be expected if model is being adapted.
GPU Count: 1
GPU Count to be Used: 1
GPU 0 Name: Tesla T4
GPU 0 Memory: 0.42GB/15.0GB (Used/Total)

INFO: Using 16bit Automatic Mixed Precision (AMP)
INFO: GPU available: True (cuda), used: True
INFO: TPU available: False, using: 0 TPU cores
INFO: HPU available: False, using: 0 HPUs
INFO: LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
INFO: 
  | Name              | Type                            | Params | Mode 
------------------------------------------------------------------------------
0 | query_model       | TimmAutoModelForImagePrediction | 93.3 M | train
1 | response_model    | TimmAutoModelForImagePrediction | 93.3 M | train
2 | validation_metric | BinaryAUROC                     | 0      | train
3 | loss_func         | ContrastiveLoss                 | 0      | train
4 | miner_func        | PairMarginMiner                 | 0      | train
------------------------------------------------------------------------------
93.3 M    Trainable params
0         Non-trainable params
93.3 M    Total params
373.248   Total estimated model params size (MB)
INFO: Epoch 0, global step 15: 'val_roc_auc' reached 0.82776 (best 0.82776), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/semantic_matching/AutogluonModels/ag-20240716_223958/epoch=0-step=15.ckpt' as top 3
INFO: Time limit reached. Elapsed time is 0:03:00. Signaling Trainer to stop.
INFO: Epoch 0, global step 22: 'val_roc_auc' reached 0.89027 (best 0.89027), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/semantic_matching/AutogluonModels/ag-20240716_223958/epoch=0-step=22.ckpt' as top 3
Start to fuse 2 checkpoints via the greedy soup algorithm.
AutoMM has created your model. 🎉🎉🎉

To load the model, use the code below:
    ```python
    from autogluon.multimodal import MultiModalPredictor
    predictor = MultiModalPredictor.load("/home/ci/autogluon/docs/tutorials/multimodal/semantic_matching/AutogluonModels/ag-20240716_223958")
    ```

If you are not satisfied with the model, try to increase the training time, 
adjust the hyperparameters (https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/customization.html),
or post issues on GitHub (https://github.com/autogluon/autogluon/issues).
<autogluon.multimodal.predictor.MultiModalPredictor at 0x7fc58a104730>

Evaluate on Test Dataset

You can evaluate the predictor on the test dataset to see how it performs with the roc_auc score:

score = predictor.evaluate(test_data)
print("evaluation score: ", score)
evaluation score:  {'roc_auc': 0.8893518041196853}

Predict on Image Pairs

Given new image pairs, we can predict whether they match or not.

pred = predictor.predict(test_data.head(3))
print(pred)
0    1
1    1
2    1
Name: Label, dtype: int64

The predictions use a naive probability threshold of 0.5. That is, a pair is predicted as a match when the probability of the match label is larger than 0.5.

Predict Matching Probabilities

However, you can apply more customized thresholding by working with the predicted probabilities.

proba = predictor.predict_proba(test_data.head(3))
print(proba)
          0         1
0  0.253988  0.746012
1  0.046440  0.953560
2  0.073348  0.926652
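
For example, if false matches are costly, you can require a higher matching probability. The snippet below is a sketch that assumes column 1 of the predict_proba output holds the probability of the match label, which is the case here since match_label is 1:

# Apply a stricter custom threshold to the match-label probability (column 1 here).
custom_threshold = 0.9
custom_pred = (proba[1] > custom_threshold).astype(int)
print(custom_pred)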

Extract Embeddings

You can also extract embeddings for each image of a pair.

embeddings_1 = predictor.extract_embedding({image_col_1: test_data[image_col_1][:5].tolist()})
print(embeddings_1.shape)
embeddings_2 = predictor.extract_embedding({image_col_2: test_data[image_col_2][:5].tolist()})
print(embeddings_2.shape)
(5, 768)
(5, 768)
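
With the embeddings in hand, you can also score each pair yourself, for example with row-wise cosine similarity, which ties back to the similarity measures discussed at the beginning. The following is a small sketch using NumPy:

import numpy as np

# Row-wise cosine similarity between the paired embeddings.
norm_1 = embeddings_1 / np.linalg.norm(embeddings_1, axis=1, keepdims=True)
norm_2 = embeddings_2 / np.linalg.norm(embeddings_2, axis=1, keepdims=True)
cosine_scores = (norm_1 * norm_2).sum(axis=1)
print(cosine_scores)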

Other Examples

You may go to AutoMM Examples to explore other examples about AutoMM.

Customization

To learn how to customize AutoMM, please refer to Customize AutoMM.