Classifying PDF Documents with AutoMM


PDF is short for Portable Document Format and is one of the most popular document formats. We can find PDFs everywhere, from personal resumes to business contracts, and from commercial brochures to government documents; the list is endless. PDF is highly praised for its portability: regardless of the recipient's operating system or device, there is no worry about them being unable to view the document or seeing an imperfect version of it.

Using AutoMM, you can handle and build machine learning models on PDF documents just as you would on other modalities such as text and images, without worrying about PDF processing yourself. In this tutorial, we will introduce how to classify PDF documents automatically with AutoMM using document foundation models. Let’s get started!

For document processing, AutoGluon requires poppler to be installed:

Source: https://poppler.freedesktop.org

Windows release: https://github.com/oschwartz10612/poppler-windows (make sure to add the bin/ folder to PATH after installing)

Mac: brew install poppler
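
If you are not sure whether poppler is already installed, a quick way to check is to look for one of its command-line tools on your PATH. Below is a minimal sketch using Python's standard library; pdftoppm is one of the tools that ships with poppler.

import shutil

# pdftoppm is one of the command-line tools that ships with poppler;
# if it is on PATH, PDF-to-image conversion should work.
if shutil.which("pdftoppm") is None:
    print("poppler was not found; please install it first (see the links above).")
else:
    print("poppler is available.")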

Get the PDF document dataset

We have created a simple PDF dataset via manual crawling for demonstration purposes. It consists of two categories, resumes and historical documents (downloaded from milestone documents). We picked 20 PDF documents for each category.

Now, let’s download the dataset and split it into training and test sets.

import warnings
warnings.filterwarnings('ignore')
import os
import pandas as pd
from autogluon.core.utils.loaders import load_zip

download_dir = './ag_automm_tutorial_pdf_classifier'
zip_file = "https://automl-mm-bench.s3.amazonaws.com/doc_classification/pdf_docs_small.zip"
load_zip.unzip(zip_file, unzip_dir=download_dir)

dataset_path = os.path.join(download_dir, "pdf_docs_small")
pdf_docs = pd.read_csv(f"{dataset_path}/data.csv")
train_data = pdf_docs.sample(frac=0.8, random_state=200)
test_data = pdf_docs.drop(train_data.index)
Downloading ./ag_automm_tutorial_pdf_classifier/file.zip from https://automl-mm-bench.s3.amazonaws.com/doc_classification/pdf_docs_small.zip...
100%|██████████| 12.7M/12.7M [00:00<00:00, 67.8MiB/s]
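
Before training, it can be useful to sanity-check the split sizes and the label balance. The snippet below is a small sketch based on the label column of this dataset.

# Sanity-check the train/test split sizes and the label balance
print(f"Train size: {len(train_data)}, test size: {len(test_data)}")
print(train_data["label"].value_counts())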

Now, let’s visualize one of the PDF documents. Here, we use the S3 URL of the PDF document and IFrame to show it in the tutorial.

from IPython.display import IFrame
IFrame("https://automl-mm-bench.s3.amazonaws.com/doc_classification/historical_1.pdf", width=400, height=500)

As you can see, this document is an American historical document in PDF format. To make sure the MultiModalPredictor can locate the documents correctly, we need to overwrite the relative document paths with absolute ones.

from autogluon.multimodal.utils.misc import path_expander

DOC_PATH_COL = "doc_path"

train_data[DOC_PATH_COL] = train_data[DOC_PATH_COL].apply(lambda ele: path_expander(ele, base_folder=download_dir))
test_data[DOC_PATH_COL] = test_data[DOC_PATH_COL].apply(lambda ele: path_expander(ele, base_folder=download_dir))
print(test_data.head())
                                             doc_path   label
4   /home/ci/autogluon/docs/tutorials/multimodal/d...  resume
12  /home/ci/autogluon/docs/tutorials/multimodal/d...  resume
14  /home/ci/autogluon/docs/tutorials/multimodal/d...  resume
15  /home/ci/autogluon/docs/tutorials/multimodal/d...  resume
16  /home/ci/autogluon/docs/tutorials/multimodal/d...  resume

Create a PDF Document Classifier

You can create a PDF classifier easily with MultiModalPredictor. All you need to do is create a predictor and fit it with the above training dataset. AutoMM will handle all the details, such as (1) detecting that the dataset contains PDF documents; (2) converting the PDFs into a format that our model can recognize; (3) detecting and recognizing the text in the PDF documents; etc., without requiring any effort on your part.

Here, label is the name of the column that contains the target variable to predict, which is “label” in our example. We set the training time limit to 120 seconds for demonstration purposes.

from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor(label="label")
predictor.fit(
    train_data=train_data,
    hyperparameters={"model.document_transformer.checkpoint_name":"microsoft/layoutlm-base-uncased",
    "optimization.top_k_average_method":"best",
    },
    time_limit=120,
)
No path specified. Models will be saved in: "AutogluonModels/ag-20240614_001921"
=================== System Info ===================
AutoGluon Version:  1.1.1b20240613
Python Version:     3.10.13
Operating System:   Linux
Platform Machine:   x86_64
Platform Version:   #1 SMP Fri May 17 18:07:48 UTC 2024
CPU Count:          8
Pytorch Version:    2.3.1+cu121
CUDA Version:       12.1
Memory Avail:       28.67 GB / 30.95 GB (92.6%)
Disk Space Avail:   191.47 GB / 255.99 GB (74.8%)
===================================================
AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed).
	2 unique label values:  ['historical', 'resume']
	If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during Predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression', 'quantile'])

AutoMM starts to create your model. ✨✨✨

To track the learning progress, you can open a terminal and launch Tensorboard:
    ```shell
    # Assume you have installed tensorboard
    tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/document_prediction/AutogluonModels/ag-20240614_001921
    ```

INFO: Seed set to 0
GPU Count: 1
GPU Count to be Used: 1
GPU 0 Name: Tesla T4
GPU 0 Memory: 0.42GB/15.0GB (Used/Total)

INFO: Using 16bit Automatic Mixed Precision (AMP)
INFO: GPU available: True (cuda), used: True
INFO: TPU available: False, using: 0 TPU cores
INFO: HPU available: False, using: 0 HPUs
INFO: LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
INFO: 
  | Name              | Type                | Params | Mode 
------------------------------------------------------------------
0 | model             | DocumentTransformer | 112 M  | train
1 | validation_metric | BinaryAUROC         | 0      | train
2 | loss_func         | CrossEntropyLoss    | 0      | train
------------------------------------------------------------------
112 M     Trainable params
0         Non-trainable params
112 M     Total params
450.518   Total estimated model params size (MB)
INFO: Epoch 0, global step 1: 'val_roc_auc' reached 0.50000 (best 0.50000), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/document_prediction/AutogluonModels/ag-20240614_001921/epoch=0-step=1.ckpt' as top 3
INFO: Epoch 1, global step 2: 'val_roc_auc' reached 0.50000 (best 0.50000), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/document_prediction/AutogluonModels/ag-20240614_001921/epoch=1-step=2.ckpt' as top 3
INFO: Epoch 2, global step 3: 'val_roc_auc' reached 0.50000 (best 0.50000), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/document_prediction/AutogluonModels/ag-20240614_001921/epoch=2-step=3.ckpt' as top 3
INFO: Epoch 3, global step 4: 'val_roc_auc' was not in top 3
INFO: Epoch 4, global step 5: 'val_roc_auc' was not in top 3
AutoMM has created your model. 🎉🎉🎉

To load the model, use the code below:
    ```python
    from autogluon.multimodal import MultiModalPredictor
    predictor = MultiModalPredictor.load("/home/ci/autogluon/docs/tutorials/multimodal/document_prediction/AutogluonModels/ag-20240614_001921")
    ```

If you are not satisfied with the model, try to increase the training time, 
adjust the hyperparameters (https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/customization.html),
or post issues on GitHub (https://github.com/autogluon/autogluon/issues).
<autogluon.multimodal.predictor.MultiModalPredictor at 0x7feef0193790>

Evaluate on Test Dataset

You can evaluate the classifier on the test dataset to see how it performs:

scores = predictor.evaluate(test_data, metrics=["accuracy"])
print('The test acc: %.3f' % scores["accuracy"])
The test acc: 0.375
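
evaluate also accepts a list of metric names, so you can report several metrics at once. The sketch below assumes f1 and roc_auc, which are commonly supported for binary classification.

# Report several metrics at once for this binary classification task
scores = predictor.evaluate(test_data, metrics=["accuracy", "f1", "roc_auc"])
print(scores)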

Predict on a New PDF Document

Given an example PDF document, we can easily use the final model to predict the label:

predictions = predictor.predict({DOC_PATH_COL: [test_data.iloc[0][DOC_PATH_COL]]})
print(f"Ground-truth label: {test_data.iloc[0]['label']}, Prediction: {predictions}")
Ground-truth label: resume, Prediction: ['historical']
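
You can also predict on the entire test DataFrame in one call; predict accepts the same tabular format used for training. A minimal sketch:

# Predict labels for the entire test set in one call
test_predictions = predictor.predict(test_data)
print(test_predictions.head())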

If the probabilities of all categories are needed, you can call predict_proba:

proba = predictor.predict_proba({DOC_PATH_COL: [test_data.iloc[0][DOC_PATH_COL]]})
print(proba)
[[0.99002504 0.00997492]]
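
The columns of the probability array follow the order of the predictor's class labels, so you can pair each probability with its class name. A small sketch, assuming the predictor exposes class_labels as AutoGluon predictors typically do:

# Pair each probability with its class name; class_labels gives the column order
for label, p in zip(predictor.class_labels, proba[0]):
    print(f"{label}: {p:.4f}")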

Extract Embeddings

Extracting the representation of the whole document learned by the model is also very useful. We provide the extract_embedding function to allow the predictor to return an N-dimensional document feature, where N depends on the model.

feature = predictor.extract_embedding({DOC_PATH_COL: [test_data.iloc[0][DOC_PATH_COL]]})
print(feature[0].shape)
(768,)
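
You can also extract embeddings for multiple documents at once, which yields one feature vector per row. A minimal sketch, selecting only the document path column:

# Extract one feature vector per test document
features = predictor.extract_embedding(test_data[[DOC_PATH_COL]])
print(features.shape)  # expected: (number of test documents, 768)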

Other Examples

You may go to AutoMM Examples to explore other examples of AutoMM.

Customization

To learn how to customize AutoMM, please refer to Customize AutoMM.