AutoMM for Named Entity Recognition - Quick Start


Named entity recognition (NER) refers to identifying and categorizing key information (entities) in unstructured text. An entity can be a word or a series of words corresponding to categories such as cities, time expressions, monetary values, facilities, persons, organizations, etc. An NER model usually takes an unannotated block of text as input and outputs an annotated block of text that highlights the named entities with predefined categories. For example, given the following sentence,

  • Albert Einstein was born in Germany and is widely acknowledged to be one of the greatest physicists.

The model will tell you that “Albert Einstein” is a PERSON and “Germany” is a LOCATION. In the following, we will introduce how to use AutoMM for the NER task, including how to prepare your data, how to train your model, and what you can expect from the model outputs.

Prepare Your Data

Like other tasks in AutoMM, all you need to do is prepare your data as data tables (i.e., dataframes) that contain a text column and an annotation column. The text column stores the raw text containing the entities you want to identify. Correspondingly, the annotation column stores the label information for those entities (e.g., the category and the character-level start/end offsets). AutoMM requires the annotation column to follow the JSON format below (note: do not forget to call json.dumps() to convert Python objects into a JSON string before creating your dataframe).

import json
json.dumps([
    {"entity_group": "PERSON", "start": 0, "end": 15},
    {"entity_group": "LOCATION", "start": 28, "end": 35}
])
'[{"entity_group": "PERSON", "start": 0, "end": 15}, {"entity_group": "LOCATION", "start": 28, "end": 35}]'

where entity_group is the category of the entity, start is the character-level position at which the entity begins, and end is its ending position. To make sure that AutoMM can recognize your JSON annotations, you must use exactly the same keys/properties (entity_group, start, end) specified above when constructing your data. You can annotate “Albert Einstein” as a single entity group, or you can assign each word its own label.
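
For example, a minimal one-row dataframe in this format can be constructed as follows (the column names text_snippet and entity_annotations match the dataset used later in this tutorial):

```python
import json

import pandas as pd

# The raw text and its character-level entity annotations.
sentence = "Albert Einstein was born in Germany and is widely acknowledged to be one of the greatest physicists."
annotation = json.dumps([
    {"entity_group": "PERSON", "start": 0, "end": 15},
    {"entity_group": "LOCATION", "start": 28, "end": 35},
])  # serialize the Python list into a JSON string first

df = pd.DataFrame({"text_snippet": [sentence], "entity_annotations": [annotation]})

# Sanity check: the start/end offsets should slice out the entity text.
for ent in json.loads(df["entity_annotations"][0]):
    print(ent["entity_group"], "->", sentence[ent["start"]:ent["end"]])
# PERSON -> Albert Einstein
# LOCATION -> Germany
```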

Following is an example of visualizing the annotations with the visualize_ner utility.

from autogluon.multimodal.utils import visualize_ner

sentence = "Albert Einstein was born in Germany and is widely acknowledged to be one of the greatest physicists."
annotation = [{"entity_group": "PERSON", "start": 0, "end": 15},
              {"entity_group": "LOCATION", "start": 28, "end": 35}]

visualize_ner(sentence, annotation)
Albert Einstein PERSON was born in Germany LOCATION and is widely acknowledged to be one of the greatest physicists.

If you are already familiar with the NER task, you have probably heard of the BIO (Beginning-Inside-Outside) format. You can adopt this format (it is not compulsory) by adding a B- or I- prefix to each tag to indicate whether the tag marks the beginning of the annotated chunk or a position inside it. For example, you can annotate “Albert” as “B-PERSON” because it is the beginning of the name, and “Einstein” as “I-PERSON” because it is inside the PERSON chunk. You do not need to worry about the O tag, which indicates that a word belongs to no chunk; AutoMM takes care of that automatically.
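
To make the BIO convention concrete, here is a small sketch (the helper spans_to_bio is hypothetical, not part of AutoMM) that converts character-span annotations into per-word BIO tags, assuming whitespace tokenization:

```python
def spans_to_bio(sentence, annotations):
    """Convert character-span annotations to per-word BIO tags (hypothetical helper)."""
    tags = []
    pos = 0
    for word in sentence.split():
        start = sentence.index(word, pos)
        end = start + len(word)
        pos = end
        tag = "O"  # default: the word belongs to no chunk
        for ent in annotations:
            if start >= ent["start"] and end <= ent["end"]:
                # B- for the first word of a chunk, I- for the following words
                prefix = "B-" if start == ent["start"] else "I-"
                tag = prefix + ent["entity_group"]
                break
        tags.append(tag)
    return tags

sentence = "Albert Einstein was born in Germany"
annotations = [{"entity_group": "PERSON", "start": 0, "end": 15},
               {"entity_group": "LOCATION", "start": 28, "end": 35}]
print(spans_to_bio(sentence, annotations))
# ['B-PERSON', 'I-PERSON', 'O', 'O', 'O', 'B-LOCATION']
```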

Now, let’s look at an example dataset. This dataset is converted from the MIT movies corpus, which provides annotations for entity groups such as actor, character, director, genre, song, title, trailer, year, etc.

from autogluon.core.utils.loaders import load_pd
train_data = load_pd.load('https://automl-mm-bench.s3.amazonaws.com/ner/mit-movies/train_v2.csv')
test_data = load_pd.load('https://automl-mm-bench.s3.amazonaws.com/ner/mit-movies/test_v2.csv')
train_data.head(5)
text_snippet entity_annotations
0 what movies star bruce willis [{"entity_group": "ACTOR", "start": 17, "end":...
1 show me films with drew barrymore from the 1980s [{"entity_group": "ACTOR", "start": 19, "end":...
2 what movies starred both al pacino and robert ... [{"entity_group": "ACTOR", "start": 25, "end":...
3 find me all of the movies that starred harold ... [{"entity_group": "ACTOR", "start": 39, "end":...
4 find me a movie with a quote about baseball in it []

Let’s print a row.

print(f"text_snippet: {train_data['text_snippet'][1]}")
print(f"entity_annotations: {train_data['entity_annotations'][1]}")
visualize_ner(train_data['text_snippet'][1], train_data['entity_annotations'][1])
text_snippet: show me films with drew barrymore from the 1980s
entity_annotations: [{"entity_group": "ACTOR", "start": 19, "end": 33}, {"entity_group": "YEAR", "start": 43, "end": 48}]
show me films with drew barrymore ACTOR from the 1980s YEAR
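
Since the annotation column stores JSON strings, you can recover the Python objects with json.loads whenever you want to inspect a row yourself. Here is a self-contained sketch using a string in the same format as the entity_annotations column above:

```python
import json

text = "show me films with drew barrymore from the 1980s"
raw = '[{"entity_group": "ACTOR", "start": 19, "end": 33}, {"entity_group": "YEAR", "start": 43, "end": 48}]'

# Parse the JSON string back into a list of dicts and slice out each entity.
for ent in json.loads(raw):
    print(ent["entity_group"], "->", text[ent["start"]:ent["end"]])
# ACTOR -> drew barrymore
# YEAR -> 1980s
```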

Training

Now, let’s create a predictor for named entity recognition by setting problem_type to ner and specifying the label column. Afterwards, we call predictor.fit() to train the model for five minutes. To achieve reasonable performance in your applications, we recommend setting a sufficiently long time_limit (e.g., 30-60 minutes). You can also specify your backbone model and other hyperparameters via the hyperparameters argument. Here, we save the model under the directory "automm_ner". For demonstration purposes, we use the lightweight 'google/electra-small-discriminator' backbone.

from autogluon.multimodal import MultiModalPredictor
import uuid

label_col = "entity_annotations"
model_path = f"./tmp/{uuid.uuid4().hex}-automm_ner"  # You can rename it to the model path you like
predictor = MultiModalPredictor(problem_type="ner", label=label_col, path=model_path)
predictor.fit(
    train_data=train_data,
    hyperparameters={'model.ner_text.checkpoint_name':'google/electra-small-discriminator'},
    time_limit=300,  # seconds
)
=================== System Info ===================
AutoGluon Version:  1.1.1b20240716
Python Version:     3.10.13
Operating System:   Linux
Platform Machine:   x86_64
Platform Version:   #1 SMP Fri May 17 18:07:48 UTC 2024
CPU Count:          8
Pytorch Version:    2.3.1+cu121
CUDA Version:       12.1
Memory Avail:       28.68 GB / 30.95 GB (92.7%)
Disk Space Avail:   186.61 GB / 255.99 GB (72.9%)
===================================================

AutoMM starts to create your model. ✨✨✨

To track the learning progress, you can open a terminal and launch Tensorboard:
    ```shell
    # Assume you have installed tensorboard
    tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/a6f43e92562b4cba8c7cc1b0d313f830-automm_ner
    ```

Seed set to 0
GPU Count: 1
GPU Count to be Used: 1
GPU 0 Name: Tesla T4
GPU 0 Memory: 0.42GB/15.0GB (Used/Total)

Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name              | Type              | Params | Mode 
----------------------------------------------------------------
0 | model             | HFAutoModelForNER | 13.5 M | train
1 | validation_metric | MulticlassF1Score | 0      | train
2 | loss_func         | CrossEntropyLoss  | 0      | train
----------------------------------------------------------------
13.5 M    Trainable params
0         Non-trainable params
13.5 M    Total params
53.959    Total estimated model params size (MB)
Epoch 0, global step 34: 'val_ner_token_f1' reached 0.00025 (best 0.00025), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/a6f43e92562b4cba8c7cc1b0d313f830-automm_ner/epoch=0-step=34.ckpt' as top 3
Epoch 0, global step 69: 'val_ner_token_f1' reached 0.60076 (best 0.60076), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/a6f43e92562b4cba8c7cc1b0d313f830-automm_ner/epoch=0-step=69.ckpt' as top 3
Epoch 1, global step 103: 'val_ner_token_f1' reached 0.81045 (best 0.81045), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/a6f43e92562b4cba8c7cc1b0d313f830-automm_ner/epoch=1-step=103.ckpt' as top 3
Epoch 1, global step 138: 'val_ner_token_f1' reached 0.83134 (best 0.83134), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/a6f43e92562b4cba8c7cc1b0d313f830-automm_ner/epoch=1-step=138.ckpt' as top 3
Epoch 2, global step 172: 'val_ner_token_f1' reached 0.86522 (best 0.86522), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/a6f43e92562b4cba8c7cc1b0d313f830-automm_ner/epoch=2-step=172.ckpt' as top 3
Epoch 2, global step 207: 'val_ner_token_f1' reached 0.86369 (best 0.86522), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/a6f43e92562b4cba8c7cc1b0d313f830-automm_ner/epoch=2-step=207.ckpt' as top 3
Epoch 3, global step 241: 'val_ner_token_f1' reached 0.87006 (best 0.87006), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/a6f43e92562b4cba8c7cc1b0d313f830-automm_ner/epoch=3-step=241.ckpt' as top 3
Time limit reached. Elapsed time is 0:05:00. Signaling Trainer to stop.
Epoch 3, global step 248: 'val_ner_token_f1' reached 0.86548 (best 0.87006), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/a6f43e92562b4cba8c7cc1b0d313f830-automm_ner/epoch=3-step=248.ckpt' as top 3
Start to fuse 3 checkpoints via the greedy soup algorithm.
AutoMM has created your model. 🎉🎉🎉

To load the model, use the code below:
    ```python
    from autogluon.multimodal import MultiModalPredictor
    predictor = MultiModalPredictor.load("/home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/a6f43e92562b4cba8c7cc1b0d313f830-automm_ner")
    ```

If you are not satisfied with the model, try to increase the training time, 
adjust the hyperparameters (https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/customization.html),
or post issues on GitHub (https://github.com/autogluon/autogluon/issues).
<autogluon.multimodal.predictor.MultiModalPredictor at 0x7f4fb4259cf0>

Evaluation

Evaluation is also straightforward. We use seqeval for NER evaluation, and the supported metrics are overall_recall, overall_precision, overall_f1, and overall_accuracy. If you are interested in the performance on a specific entity group, you can use the entity group name as an evaluation metric, which will give you the precision, recall, and f1 for that entity group:

predictor.evaluate(test_data,  metrics=['overall_recall', "overall_precision", "overall_f1", "actor"])
{'overall_recall': 0.8531560217269152,
 'overall_precision': 0.8287845705967977,
 'overall_f1': 0.840793724042455,
 'actor': {'precision': 0.8292682926829268,
  'recall': 0.9211822660098522,
  'f1': 0.8728121353558926,
  'number': 812}}
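
As a quick sanity check, overall_f1 is the harmonic mean of overall_precision and overall_recall, so you can reproduce it from the two numbers above:

```python
# Recompute the overall F1 score from the reported precision and recall.
precision = 0.8287845705967977
recall = 0.8531560217269152
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 6))  # ~0.840794
```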

Prediction + Visualization

You can easily obtain predictions for an input sentence by calling predictor.predict(). If you are running the code in a Jupyter notebook, you can also visualize the predictions with the visualize_ner function, which highlights the named entities and their labels in the text.

from autogluon.multimodal.utils import visualize_ner

sentence = "Game of Thrones is an American fantasy drama television series created by David Benioff"
predictions = predictor.predict({'text_snippet': [sentence]})
print('Predicted entities:', predictions[0])

# Visualize
visualize_ner(sentence, predictions[0])
Predicted entities: [{'entity_group': 'TITLE', 'start': 0, 'end': 15}, {'entity_group': 'GENRE', 'start': 22, 'end': 44}, {'entity_group': 'DIRECTOR', 'start': 74, 'end': 87}]
Game of Thrones TITLE is an American fantasy drama GENRE television series created by David Benioff DIRECTOR

Prediction Probabilities

You can also output the per-tag probabilities for a deeper dive into the predictions.

predictions = predictor.predict_proba({'text_snippet': [sentence]})
print(predictions[0][0]['probability'])
{'O': 0.09265, 'I-RATINGS_AVERAGE': 0.0008645, 'I-DIRECTOR': 0.0004776, 'I-REVIEW': 0.006416, 'B-TITLE': 0.4343, 'B-SONG': 0.008255, 'B-TRAILER': 0.0008383, 'I-CHARACTER': 0.002474, 'I-ACTOR': 0.002296, 'B-REVIEW': 0.010414, 'I-TITLE': 0.001997, 'B-GENRE': 0.03003, 'I-RATING': 0.0004282, 'I-PLOT': 0.01377, 'B-YEAR': 0.00095, 'B-ACTOR': 0.010704, 'B-RATING': 0.006416, 'I-SONG': 0.013954, 'B-DIRECTOR': 0.003128, 'I-TRAILER': 0.0009427, 'B-PLOT': 0.1392, 'I-YEAR': 0.0006843, 'I-GENRE': 0.0007544, 'B-CHARACTER': 0.2072, 'B-RATINGS_AVERAGE': 0.01021}
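
The probability dict maps every BIO tag to its score, and the predicted tag for a token is simply the highest-probability entry. A quick sketch using a truncated subset of the dict above:

```python
# Truncated subset of the probability dict printed above.
probs = {"O": 0.09265, "B-TITLE": 0.4343, "B-PLOT": 0.1392, "B-CHARACTER": 0.2072}

best_tag = max(probs, key=probs.get)  # argmax over the tag probabilities
print(best_tag)
# B-TITLE
```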

Reloading and Continuous Training

The trained predictor is automatically saved, and you can easily reload it from its path. If you are not satisfied with the current model performance, you can continue training the loaded model on new data.

new_predictor = MultiModalPredictor.load(model_path)
new_model_path = f"./tmp/{uuid.uuid4().hex}-automm_ner_continue_train"
new_predictor.fit(train_data, time_limit=60, save_path=new_model_path)
test_score = new_predictor.evaluate(test_data, metrics=['overall_f1', 'ACTOR'])
print(test_score)
Load pretrained checkpoint: /home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/a6f43e92562b4cba8c7cc1b0d313f830-automm_ner/model.ckpt
=================== System Info ===================
AutoGluon Version:  1.1.1b20240716
Python Version:     3.10.13
Operating System:   Linux
Platform Machine:   x86_64
Platform Version:   #1 SMP Fri May 17 18:07:48 UTC 2024
CPU Count:          8
Pytorch Version:    2.3.1+cu121
CUDA Version:       12.1
Memory Avail:       27.14 GB / 30.95 GB (87.7%)
Disk Space Avail:   186.56 GB / 255.99 GB (72.9%)
===================================================

AutoMM starts to create your model. ✨✨✨

To track the learning progress, you can open a terminal and launch Tensorboard:
    ```shell
    # Assume you have installed tensorboard
    tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/c44da4e87acf4016943c0e48bb3dc52c-automm_ner_continue_train
    ```

Seed set to 0
GPU Count: 1
GPU Count to be Used: 1
GPU 0 Name: Tesla T4
GPU 0 Memory: 0.56GB/15.0GB (Used/Total)

Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name              | Type              | Params | Mode 
----------------------------------------------------------------
0 | model             | HFAutoModelForNER | 13.5 M | train
1 | validation_metric | MulticlassF1Score | 0      | train
2 | loss_func         | CrossEntropyLoss  | 0      | train
----------------------------------------------------------------
13.5 M    Trainable params
0         Non-trainable params
13.5 M    Total params
53.959    Total estimated model params size (MB)
Epoch 0, global step 34: 'val_ner_token_f1' reached 0.86803 (best 0.86803), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/c44da4e87acf4016943c0e48bb3dc52c-automm_ner_continue_train/epoch=0-step=34.ckpt' as top 3
Time limit reached. Elapsed time is 0:01:00. Signaling Trainer to stop.
Epoch 0, global step 45: 'val_ner_token_f1' reached 0.86904 (best 0.86904), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/c44da4e87acf4016943c0e48bb3dc52c-automm_ner_continue_train/epoch=0-step=45.ckpt' as top 3
Start to fuse 2 checkpoints via the greedy soup algorithm.
AutoMM has created your model. 🎉🎉🎉

To load the model, use the code below:
    ```python
    from autogluon.multimodal import MultiModalPredictor
    predictor = MultiModalPredictor.load("/home/ci/autogluon/docs/tutorials/multimodal/text_prediction/tmp/c44da4e87acf4016943c0e48bb3dc52c-automm_ner_continue_train")
    ```

If you are not satisfied with the model, try to increase the training time, 
adjust the hyperparameters (https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/customization.html),
or post issues on GitHub (https://github.com/autogluon/autogluon/issues).
{'overall_f1': 0.840595799796466, 'ACTOR': {'precision': 0.8320180383314544, 'recall': 0.9088669950738916, 'f1': 0.8687463213655091, 'number': 812}}

Other Examples

You may go to AutoMM Examples to explore other examples about AutoMM.

Customization

To learn how to customize AutoMM, please refer to Customize AutoMM.