By Matan Rusanovsky, Or Hirschorn and Shai Avidan
This is the official implementation of "CapeX: Category-Agnostic Pose Estimation from Textual Point Explanation".
Recent CAPE works produce object poses based on arbitrary keypoint definitions annotated on a user-provided support image. Our work departs from these conventional CAPE methods by replacing the support image with a text-based approach. Specifically, we use a pose graph in which nodes represent keypoints that are described with text. This representation takes advantage of the abstraction of text descriptions and the structure imposed by the graph. Our approach effectively breaks symmetry, preserves structure, and improves occlusion handling.
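For intuition, such a pose graph can be thought of as a list of textual keypoint descriptions plus a list of edges over their indices. The snippet below is only an illustrative sketch using the chair example from the demo further down; it is not the repository's internal data structure.

```python
from dataclasses import dataclass

@dataclass
class PoseGraph:
    """Illustrative only: a category described by text-annotated keypoints and skeleton edges."""
    keypoint_texts: list[str]   # one free-text description per node
    skeleton: list[list[int]]   # edges as [index, index] pairs into keypoint_texts

# The chair example used by the demo command further down in this README.
chair = PoseGraph(
    keypoint_texts=[
        "left and front leg", "right and front leg",
        "right and back leg", "left and back leg",
        "left and front side of the seat", "right and front side of the seat",
        "right and back side of the seat", "left and back side of the seat",
        "top left side of the backseat", "top right side of the backseat",
    ],
    skeleton=[[0, 4], [3, 7], [1, 5], [2, 6], [4, 5], [5, 6],
              [6, 7], [7, 4], [6, 7], [7, 8], [8, 9], [9, 6]],
)
```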
If you find this useful, please cite this work as follows:
@misc{rusanovsky2024capex,
title={CapeX: Category-Agnostic Pose Estimation from Textual Point Explanation},
author={Matan Rusanovsky and Or Hirschorn and Shai Avidan},
year={2024},
eprint={2406.00384},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Please run:
conda env create -f environment.yml
conda activate capex
We provide a Docker image for easy use.
You can simply pull the image from Docker Hub; it contains all the required libraries and packages:
docker pull matanru/capex
docker run --name capex -v {DATA_DIR}:/workspace/CapeX/CapeX/data/mp100 -it matanru/capex /bin/bash
Download the pretrained model and run:
python app.py --checkpoint [path_to_pretrained_ckpt]
Download the pretrained model and run:
python demo_text.py --support_points [<text description of point 1>, <text description of point 2>, ...] --support_skeleton [[1st point index of edge 1, 2nd point index of edge 1], ...] --query [path_to_query_image] --config configs/1shot-swin-gte/graph_split1_config.py --checkpoint [path_to_pretrained_ckpt]
For example:
python demo_text.py --support_points "['left and front leg', 'right and front leg', 'right and back leg', 'left and back leg', 'left and front side of the seat', 'right and front side of the seat', 'right and back side of the seat', 'left and back side of the seat', 'top left side of the backseat', 'top right side of the backseat']" --support_skeleton "[[0, 4], [3, 7], [1, 5], [2, 6], [4, 5], [5, 6], [6, 7], [7, 4], [6, 7], [7, 8], [8, 9], [9, 6]]" --query examples/chair.png --config configs/1shot-swin-gte/graph_split1_config.py --checkpoint [path_to_pretrained_ckpt]
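If you prefer to build the point list and skeleton programmatically, you can format them into the quoted list strings that demo_text.py expects and launch it via subprocess. This is only a convenience sketch around the CLI shown above; the checkpoint path is a placeholder.

```python
import subprocess

# Keypoint texts and skeleton edges for the chair example above.
points = ["left and front leg", "right and front leg", "right and back leg",
          "left and back leg", "left and front side of the seat",
          "right and front side of the seat", "right and back side of the seat",
          "left and back side of the seat", "top left side of the backseat",
          "top right side of the backseat"]
skeleton = [[0, 4], [3, 7], [1, 5], [2, 6], [4, 5], [5, 6],
            [6, 7], [7, 4], [6, 7], [7, 8], [8, 9], [9, 6]]

# demo_text.py takes both arguments as quoted, Python-style list strings.
points_arg = "[" + ", ".join(f"'{p}'" for p in points) + "]"
skeleton_arg = str(skeleton)

subprocess.run([
    "python", "demo_text.py",
    "--support_points", points_arg,
    "--support_skeleton", skeleton_arg,
    "--query", "examples/chair.png",
    "--config", "configs/1shot-swin-gte/graph_split1_config.py",
    "--checkpoint", "path/to/pretrained.ckpt",  # placeholder: point this at your checkpoint
], check=True)
```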
Please follow the official guide to prepare the MP-100 dataset for training and evaluation, and organize the data structure properly.
Then, use Pose Anything's updated annotation file, with all the skeleton definitions, from the following link.
Please note:
The current version of the MP-100 dataset includes some discrepancies and filename errors:
- Note that the mentioned DeepFashion dataset is actually the DeepFashion2 dataset. The link in the official repo is wrong; use this repo instead.
- Use Pose Anything's script to fix CarFusion filename errors, which can be run by:
python tools/fix_carfusion.py [path_to_CarFusion_dataset] [path_to_mp100_annotation]
Our text descriptions are added to the keypoints based on models/datasets/datasets/mp100/utils.py.
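As a purely hypothetical illustration of what such a mapping looks like (the actual organization of models/datasets/datasets/mp100/utils.py may differ), each category's keypoints are paired with free-text descriptions:

```python
# Hypothetical sketch only -- see models/datasets/datasets/mp100/utils.py for the real mapping.
KEYPOINT_TEXTS = {
    "chair": [
        "left and front leg", "right and front leg",
        "right and back leg", "left and back leg",
        # ... remaining chair keypoint descriptions
    ],
    # ... one entry per MP-100 category
}
```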
To use the pre-trained Swin-Transformer backbone from our paper, we provide the weights, taken from this repo, in the following link.
These should be placed in the ./pretrained folder.
To train the model, run:
python train.py --config [path_to_config_file] --work-dir [path_to_work_dir]
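For example, training the swin-gte model on split 1 might look like the following (this assumes the split 1 config referenced in the demo above is also used for training; work_dirs/graph_split1 is just an illustrative output directory):
python train.py --config configs/1shot-swin-gte/graph_split1_config.py --work-dir work_dirs/graph_split1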
Here we provide the evaluation results of our pretrained models on the MP-100 dataset, along with the config files and checkpoints:
| Setting | split 1 | split 2 | split 3 | split 4 | split 5 | Average |
|---|---|---|---|---|---|---|
| swin-gte | 95.62 | 90.94 | 88.95 | 89.43 | 92.57 | 91.50 |
| | link / config | link / config | link / config | link / config | link / config | |
To evaluate the pretrained model, run:
python test.py [path_to_config_file] [path_to_pretrained_ckpt]
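For example, evaluating a split 1 checkpoint with its config (the checkpoint filename here is a placeholder for the file you downloaded):
python test.py configs/1shot-swin-gte/graph_split1_config.py path/to/graph_split1.pth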
Our code is based on code from:
This project is released under the Apache 2.0 license.