AutoGluon Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models¶
Foundation models have transformed landscapes across fields like computer vision and natural language processing. These models are pre-trained on extensive common-domain data, serving as powerful tools for a wide range of applications. However, seamlessly integrating foundation models into real-world application scenarios has posed challenges. The diversity of data modalities, the multitude of available foundation models, and the considerable model sizes make this integration a nontrivial task.
AutoMM is dedicated to breaking these barriers by substantially reducing the engineering effort and manual intervention required in data preprocessing, model selection, and fine-tuning. With AutoMM, users can effortlessly adapt foundation models (from popular model zoos like HuggingFace, TIMM, MMDetection) to their domain-specific data using just three lines of code. Our toolkit accommodates various data types, including image, text, tabular, and document data, either individually or in combination. It offers support for an array of tasks, encompassing classification, regression, object detection, named entity recognition, semantic matching, and image segmentation. AutoMM represents a state-of-the-art and user-friendly solution, empowering multimodal AutoML with foundation models. For more details, refer to the paper below:
Zhiqiang Tang, Haoyang Fang, Su Zhou, Taojiannan Yang, Zihan Zhong, Tony Hu, Katrin Kirchhoff, George Karypis. “AutoGluon-Multimodal (AutoMM): Supercharging Multimodal AutoML with Foundation Models”, The International Conference on Automated Machine Learning (AutoML), 2024.
In the following, we decompose the functionalities of AutoMM and provide a step-by-step guide for each functionality.
Text Data – Classification / Regression / NER¶
How to train high-quality text prediction models with AutoMM.
How to use AutoMM to build models on datasets with languages other than English.
How to use AutoMM for entity extraction.
Image Data – Classification / Regression¶
How to train image classification models with AutoMM.
How to enable zero-shot image classification in AutoMM via a pretrained CLIP model.
Image Data – Object Detection¶
How to train a high-quality object detection model with AutoMM in under 5 minutes on a COCO-format dataset.
How to prepare COCO2017 dataset for object detection.
How to prepare Pascal VOC dataset for object detection.
How to prepare Watercolor dataset for object detection.
How to convert a dataset from VOC format to COCO format for object detection.
How to use the pd.DataFrame format for object detection.
Image Data – Segmentation¶
How to train semantic segmentation models with AutoMM.
Document Data – Classification / Regression¶
How to use AutoMM to build a scanned document classifier.
How to use AutoMM to build a PDF document classifier.
Image / Text Data – Semantic Matching¶
How to use AutoMM for text-to-text semantic matching.
How to use AutoMM for image-to-image semantic matching.
How to use AutoMM for image-text semantic matching.
How to use AutoMM for zero-shot image-text semantic matching.
How to use semantic embeddings to improve search ranking performance.
Multimodal Data – Classification / Regression / NER¶
How AutoMM can be applied to multimodal data tables with a mix of text, numerical, and categorical columns.
How to use AutoMM to train a model on image, text, numerical, and categorical data.
How to use AutoMM to train a model for multimodal named entity recognition.
Advanced Topics¶
How to take advantage of larger foundation models with the help of parameter-efficient finetuning. In this tutorial, we combine IA^3, BitFit, and gradient checkpointing to finetune FLAN-T5-XL.
How to do hyperparameter optimization in AutoMM.
How to do knowledge distillation in AutoMM.
How to continue training in AutoMM.
How to customize AutoMM configurations.
How to use AutoMM presets.
How to use foundation models + SVM for few-shot learning.
How to use AutoMM to handle class imbalance.
How to use TensorRT to accelerate AutoMM model inference.