DOI: 10.1145/3604078.3604129
research-article

Group-housed pigs tracking based on the attention mechanism

Published: 26 October 2023

Abstract

The motion behavior of group-housed pigs can be used to statistically predict their health condition. One feasible motion-detection approach is contact sensing, such as attaching wireless inertial sensors to the pigs, but this induces stress responses and incurs high costs. Unlike such traditional methods, video Multi-Object Tracking (MOT) studies the daily movement of pigs indirectly through deep learning. However, MOT for group-housed pigs faces many difficulties, including deformation, adhesion, occlusion, appearance similarity, lighting disturbance, and background interference. To increase the robustness of the tracking algorithm, an improved FairMOT algorithm based on an attention mechanism is proposed: Convolutional Block Attention Modules (CBAM) are integrated into the DLA34 backbone as dlaCBAM, raising Multi-Object Tracking Accuracy (MOTA) to 90.7%. Data association applies a Kalman filter and the Hungarian algorithm, and Euclidean distance is adopted to characterize the amount of movement and to construct the daily trajectory of each pig. Test results show that the improved FairMOT algorithm tracks pigs more robustly, providing new ideas for intelligent estimation of pig motion.
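To make the CBAM component concrete: the abstract describes integrating CBAM into the DLA34 backbone, and the sketch below illustrates only CBAM's channel-attention branch (Woo et al., ECCV 2018) in NumPy with random weights. It is an illustration of the mechanism, not the authors' trained model, and it omits CBAM's spatial-attention branch.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention: squeeze the (C, H, W) feature map with
    global average and max pooling, pass both vectors through a shared
    two-layer MLP, and rescale each channel by a sigmoid attention weight."""
    c = feat.shape[0]
    avg_pool = feat.mean(axis=(1, 2))                 # (C,) average-pooled descriptor
    max_pool = feat.max(axis=(1, 2))                  # (C,) max-pooled descriptor
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)      # shared MLP with ReLU hidden layer
    att = 1.0 / (1.0 + np.exp(-(mlp(avg_pool) + mlp(max_pool))))  # sigmoid gate in (0, 1)
    return feat * att.reshape(c, 1, 1)                # per-channel rescaling

# Toy usage: C = 8 channels, reduction ratio r = 4, random weights.
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))
w1 = 0.1 * rng.standard_normal((2, 8))   # C -> C/r
w2 = 0.1 * rng.standard_normal((8, 2))   # C/r -> C
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 16, 16)
```

Because the attention weights lie strictly between 0 and 1, the module can only attenuate channels, never amplify them; in the paper's setting the learned weights would emphasize pig-relevant channels over background ones.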

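The data-association and movement steps can be sketched in pure Python. This is a hypothetical simplification: it matches detections across frames by minimising total Euclidean distance over all permutations (which reproduces the Hungarian algorithm's optimal assignment for small pen sizes, though with worse asymptotic cost), omits the Kalman-filter motion prediction the paper also uses, and then accumulates each pig's frame-to-frame displacement as its daily movement.

```python
from itertools import permutations
import math

def associate(prev, curr):
    """Match previous-frame centroids to current-frame centroids by minimising
    total Euclidean distance. Brute force over permutations is exact and agrees
    with the Hungarian algorithm's assignment for small numbers of pigs."""
    n = len(prev)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(math.dist(prev[i], curr[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best)  # best[i] = index in curr matched to track i

def daily_movement(frames):
    """Accumulate per-track Euclidean displacement over a centroid sequence.
    frames: list of frames, each a list of (x, y) centroids in arbitrary order."""
    totals = [0.0] * len(frames[0])
    order = list(range(len(frames[0])))          # track i -> index in current frame
    for prev, curr in zip(frames, frames[1:]):
        prev_pts = [prev[k] for k in order]
        match = associate(prev_pts, curr)
        for i, j in enumerate(match):
            totals[i] += math.dist(prev_pts[i], curr[j])
        order = match
    return totals

frames = [
    [(0, 0), (5, 5)],
    [(1, 0), (5, 6)],   # each pig moves one unit
    [(5, 7), (2, 0)],   # detections arrive reordered; association keeps identities
]
print(daily_movement(frames))  # [2.0, 2.0]
```

The permutation search is O(n!) and only practical for a handful of animals per pen; a production tracker would use a proper Hungarian solver plus Kalman-predicted positions in place of `prev_pts`.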

Published In

ICDIP '23: Proceedings of the 15th International Conference on Digital Image Processing
May 2023, 711 pages
ISBN: 9798400708237
DOI: 10.1145/3604078

Publisher

Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. Attention Mechanism
    2. FairMOT
    3. Motion of Pigs
    4. Multi-Object Tracking

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ICDIP 2023
