Image-Text Semantic Matching with AutoMM


Vision and language are two important aspects of human intelligence for understanding the real world. Image-text semantic matching, which measures the visual-semantic similarity between an image and a piece of text, plays a critical role in bridging vision and language. A typical solution for image-text matching is to learn a joint space in which text and image feature vectors are aligned. The task is becoming increasingly important for various vision-and-language applications, such as cross-modal retrieval, image captioning, text-to-image synthesis, and multimodal neural machine translation. This tutorial introduces how to apply AutoMM to the image-text matching task.

import os
import warnings
from IPython.display import Image, display
import numpy as np
warnings.filterwarnings('ignore')
np.random.seed(123)

Dataset

In this tutorial, we will use the Flickr30k dataset to demonstrate image-text matching. Flickr30k is a popular benchmark for sentence-based image description. The dataset comprises 31,783 images that capture people engaged in everyday activities and events, and each image is paired with five descriptive captions. We organized the dataset into pandas DataFrames. To get started, let's download the dataset.

from autogluon.core.utils.loaders import load_pd
import pandas as pd
download_dir = './ag_automm_tutorial_imgtxt'
zip_file = 'https://automl-mm-bench.s3.amazonaws.com/flickr30k.zip'
from autogluon.core.utils.loaders import load_zip
load_zip.unzip(zip_file, unzip_dir=download_dir)
Downloading ./ag_automm_tutorial_imgtxt/file.zip from https://automl-mm-bench.s3.amazonaws.com/flickr30k.zip...
100%|██████████| 4.38G/4.38G [02:35<00:00, 28.1MiB/s]

Then we will load the CSV files.

dataset_path = os.path.join(download_dir, 'flickr30k_processed')
train_data = pd.read_csv(f'{dataset_path}/train.csv', index_col=0)
val_data = pd.read_csv(f'{dataset_path}/val.csv', index_col=0)
test_data = pd.read_csv(f'{dataset_path}/test.csv', index_col=0)
image_col = "image"
text_col = "caption"
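
As a quick sanity check, we can look at the sizes of the three splits (each row is one image-caption pair):

# Print the number of rows and columns in each split.
print(train_data.shape, val_data.shape, test_data.shape)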

We also need to expand the relative image paths to use their absolute local paths.

def path_expander(path, base_folder):
    path_l = path.split(';')
    return ';'.join([os.path.abspath(os.path.join(base_folder, path)) for path in path_l])

train_data[image_col] = train_data[image_col].apply(lambda ele: path_expander(ele, base_folder=dataset_path))
val_data[image_col] = val_data[image_col].apply(lambda ele: path_expander(ele, base_folder=dataset_path))
test_data[image_col] = test_data[image_col].apply(lambda ele: path_expander(ele, base_folder=dataset_path))
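
As an optional check, we can confirm that the expanded paths point to existing files on disk:

# Verify that one expanded absolute path exists.
example_path = train_data[image_col].iloc[0]
print(example_path)
print(os.path.exists(example_path))  # expected: True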

Taking train_data as an example, let's see what the data looks like in the dataframe.

train_data.head()
caption image
0 Two young guys with shaggy hair look at their ... /home/ci/autogluon/docs/tutorials/multimodal/s...
1 Two young White males are outside near many bu... /home/ci/autogluon/docs/tutorials/multimodal/s...
2 Two men in green shirts are standing in a yard /home/ci/autogluon/docs/tutorials/multimodal/s...
3 A man in a blue shirt standing in a garden /home/ci/autogluon/docs/tutorials/multimodal/s...
4 Two friends enjoy time spent together /home/ci/autogluon/docs/tutorials/multimodal/s...

Each row is one image-text pair, meaning that the image and the caption match each other. Since one image corresponds to five captions in the dataset, we copy each image path five times to build the correspondences. We can visualize one image-text pair.

train_data[text_col][0]
'Two young guys with shaggy hair look at their hands while hanging out in the yard'
pil_img = Image(filename=train_data[image_col][0])
display(pil_img)
[Output: the first training image is displayed.]
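
Since one image corresponds to five captions, the number of unique image paths should be about one fifth of the number of rows. A quick check (for illustration):

# Each image path is repeated once per caption (five captions per image).
print(len(train_data), train_data[image_col].nunique())
print(train_data[image_col].value_counts().head())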

To perform evaluation or semantic search, we need to extract the unique image and text items from test_data and add a label column to a copy of test_data.

test_image_data = pd.DataFrame({image_col: test_data[image_col].unique().tolist()})
test_text_data = pd.DataFrame({text_col: test_data[text_col].unique().tolist()})
test_data_with_label = test_data.copy()
test_label_col = "relevance"
test_data_with_label[test_label_col] = [1] * len(test_data)

Initialize Predictor

To initialize a predictor for image-text matching, we need to set problem_type to image_text_similarity. query and response refer to the two dataframe columns whose items in the same row should match each other. You can set query=text_col and response=image_col, or query=image_col and response=text_col. In image-text matching, query and response are equivalent.

from autogluon.multimodal import MultiModalPredictor
predictor = MultiModalPredictor(
    query=text_col,
    response=image_col,
    problem_type="image_text_similarity",
    eval_metric="recall",
)

By initializing the predictor for image_text_similarity, you have loaded the pretrained CLIP backbone openai/clip-vit-base-patch32.
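
As noted above, query and response are interchangeable for this problem type, so the following initialization would be equivalent (shown only for illustration; the rest of the tutorial uses the predictor defined above):

# Equivalent setup with query and response swapped (illustration only).
predictor_swapped = MultiModalPredictor(
    query=image_col,
    response=text_col,
    problem_type="image_text_similarity",
    eval_metric="recall",
)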

Directly Evaluate on Test Dataset (Zero-shot)

You may be interested in getting the pretrained model’s performance on your data. Let’s compute the text-to-image and image-to-text retrieval scores.

txt_to_img_scores = predictor.evaluate(
    data=test_data_with_label,
    query_data=test_text_data,
    response_data=test_image_data,
    label=test_label_col,
    cutoffs=[1, 5, 10],
)
img_to_txt_scores = predictor.evaluate(
    data=test_data_with_label,
    query_data=test_image_data,
    response_data=test_text_data,
    label=test_label_col,
    cutoffs=[1, 5, 10],
)
print(f"txt_to_img_scores: {txt_to_img_scores}")
print(f"img_to_txt_scores: {img_to_txt_scores}")
txt_to_img_scores: {'recall@1': 0.58964, 'recall@5': 0.83533, 'recall@10': 0.90156}
img_to_txt_scores: {'recall@1': 0.15525, 'recall@5': 0.571, 'recall@10': 0.7176}

Here we report recall, which is the eval_metric specified when initializing the predictor above. Each cutoff value k means that only the top-k retrieved items are used to calculate the score. You may find that the text-to-image recalls are much higher than the image-to-text recalls. This is because each image is paired with five texts: in image-to-text retrieval, the upper bound of recall@1 is 20%, since even when the top-1 retrieved text is correct, it is only one of the five ground-truth texts.
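
For reference, here is a minimal sketch of one common way to compute recall@k in retrieval, consistent with the 20% bound above; it is an illustration, not AutoMM's internal implementation:

# Illustrative recall@k: fraction of ground-truth items that appear in the
# top-k retrieved results (not AutoMM's internal code).
def recall_at_k(ranked_ids, relevant_ids, k):
    top_k = set(ranked_ids[:k])
    return sum(1 for item in relevant_ids if item in top_k) / len(relevant_ids)

# With five relevant captions per image, at most one of them can be ranked
# first, so image-to-text recall@1 cannot exceed 1/5 = 20%.
ranked = ["cap3", "cap9", "cap1", "cap4", "cap7"]    # hypothetical retrieval order
relevant = ["cap1", "cap2", "cap3", "cap4", "cap5"]  # the five matching captions
print(recall_at_k(ranked, relevant, k=1))  # 0.2
print(recall_at_k(ranked, relevant, k=5))  # 0.6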

Finetune Predictor

After measuring the pretrained performance, we can finetune the model on our dataset to see whether we can get improvements. For a quick demo, here we set the time limit to 180 seconds.

predictor.fit(
    train_data=train_data,
    tuning_data=val_data,
    time_limit=180,
)
No path specified. Models will be saved in: "AutogluonModels/ag-20240716_224813"
=================== System Info ===================
AutoGluon Version:  1.1.1b20240716
Python Version:     3.10.13
Operating System:   Linux
Platform Machine:   x86_64
Platform Version:   #1 SMP Fri May 17 18:07:48 UTC 2024
CPU Count:          8
Pytorch Version:    2.3.1+cu121
CUDA Version:       12.1
Memory Avail:       27.10 GB / 30.95 GB (87.6%)
Disk Space Avail:   176.61 GB / 255.99 GB (69.0%)
===================================================

AutoMM starts to create your model. ✨✨✨

To track the learning progress, you can open a terminal and launch Tensorboard:
    ```shell
    # Assume you have installed tensorboard
    tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/semantic_matching/AutogluonModels/ag-20240716_224813
    ```

INFO: Seed set to 0
GPU Count: 1
GPU Count to be Used: 1
GPU 0 Name: Tesla T4
GPU 0 Memory: 0.56GB/15.0GB (Used/Total)

INFO: Using 16bit Automatic Mixed Precision (AMP)
INFO: GPU available: True (cuda), used: True
INFO: TPU available: False, using: 0 TPU cores
INFO: HPU available: False, using: 0 HPUs
INFO: LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
INFO: 
  | Name              | Type                      | Params | Mode 
------------------------------------------------------------------------
0 | query_model       | CLIPForImageText          | 151 M  | train
1 | response_model    | CLIPForImageText          | 151 M  | train
2 | validation_metric | CustomHitRate             | 0      | train
3 | loss_func         | MultiNegativesSoftmaxLoss | 0      | train
------------------------------------------------------------------------
151 M     Trainable params
0         Non-trainable params
151 M     Total params
605.109   Total estimated model params size (MB)
INFO: Time limit reached. Elapsed time is 0:03:00. Signaling Trainer to stop.
INFO: Epoch 0, global step 385: 'val_recall' reached 0.56216 (best 0.56216), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/semantic_matching/AutogluonModels/ag-20240716_224813/epoch=0-step=385.ckpt' as top 3
Start to fuse 1 checkpoints via the greedy soup algorithm.
AutoMM has created your model. 🎉🎉🎉

To load the model, use the code below:
    ```python
    from autogluon.multimodal import MultiModalPredictor
    predictor = MultiModalPredictor.load("/home/ci/autogluon/docs/tutorials/multimodal/semantic_matching/AutogluonModels/ag-20240716_224813")
    ```

If you are not satisfied with the model, try to increase the training time, 
adjust the hyperparameters (https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/customization.html),
or post issues on GitHub (https://github.com/autogluon/autogluon/issues).
<autogluon.multimodal.predictor.MultiModalPredictor at 0x7fcfb2b5bc40>
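
As shown in the log, the fitted predictor is saved automatically and can be reloaded later with MultiModalPredictor.load(). You can also save it explicitly to a location of your choice; a minimal sketch (the path below is arbitrary and chosen only for illustration):

# Save the finetuned predictor to a custom directory and load it back.
save_path = "./finetuned_flickr30k_matcher"  # arbitrary illustrative path
predictor.save(save_path)
loaded_predictor = MultiModalPredictor.load(save_path)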

Evaluate the Finetuned Model on the Test Dataset

Now Let’s evaluate the finetuned model. Similarly, we also compute the recalls of text-to-image and image-to-text retrievals.

txt_to_img_scores = predictor.evaluate(
    data=test_data_with_label,
    query_data=test_text_data,
    response_data=test_image_data,
    label=test_label_col,
    cutoffs=[1, 5, 10],
)
img_to_txt_scores = predictor.evaluate(
    data=test_data_with_label,
    query_data=test_image_data,
    response_data=test_text_data,
    label=test_label_col,
    cutoffs=[1, 5, 10],
)
print(f"txt_to_img_scores: {txt_to_img_scores}")
print(f"img_to_txt_scores: {img_to_txt_scores}")
txt_to_img_scores: {'recall@1': 0.69908, 'recall@5': 0.90956, 'recall@10': 0.95338}
img_to_txt_scores: {'recall@1': 0.16885, 'recall@5': 0.6724, 'recall@10': 0.8174}

We can observe large improvements over the zero-shot predictor. This means that finetuning CLIP on our customized data may help achieve better performance.

Predict Whether Image and Text Match

Whether finetuned or not, the predictor can predict whether image and text pairs match.

pred = predictor.predict(test_data.head(5))
print(pred)
0    1
1    1
2    1
3    1
4    1
dtype: int64
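
For contrast, we can pair the same five images with captions sampled from elsewhere in the test set; such mismatched pairs should mostly be predicted as non-matching (0). This is an illustrative check rather than a guarantee, since a randomly sampled caption could occasionally still describe the image:

# Pair the first five images with randomly sampled (most likely unrelated) captions.
mismatched = test_data.head(5).copy()
mismatched[text_col] = test_data[text_col].sample(n=5, random_state=123).values
print(predictor.predict(mismatched))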

Predict Matching Probabilities

The predictor can also return the matching probabilities.

proba = predictor.predict_proba(test_data.head(5))
print(proba)
          0         1
0  0.342043  0.657957
1  0.326398  0.673602
2  0.346038  0.653962
3  0.343097  0.656903
4  0.328564  0.671436

The second column is the probability of being a match.
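
The hard labels returned by predict() above typically correspond to choosing the higher-probability class, which for this binary matching problem amounts to thresholding the match probability at 0.5; the following is shown only to illustrate that relationship:

# Recover hard predictions from the probabilities (second column = match probability).
match_pred = (proba.iloc[:, 1] > 0.5).astype(int)
print(match_pred)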

Extract Embeddings

Another common use case is to extract image and text embeddings.

image_embeddings = predictor.extract_embedding({image_col: test_image_data[image_col][:5].tolist()})
print(image_embeddings.shape) 
(5, 512)
text_embeddings = predictor.extract_embedding({text_col: test_text_data[text_col][:5].tolist()})
print(text_embeddings.shape)
(5, 512)
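
These embeddings can be used directly for semantic search. Below is a minimal sketch that ranks the five images against the five texts by cosine similarity using numpy; the helper function is our own illustration, not an AutoMM API:

# Rank images for each text query by cosine similarity of the extracted embeddings.
def cosine_similarity(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

sims = cosine_similarity(text_embeddings, image_embeddings)  # shape (5, 5)
print(sims.argmax(axis=1))  # index of the best-matching image for each text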

Other Examples

You may go to AutoMM Examples to explore other examples of AutoMM.

Customization

To learn how to customize AutoMM, please refer to Customize AutoMM.