
ResGait: gait feature refinement based on residual structure for gait recognition

Original article · Published in The Visual Computer

Abstract

Gait recognition is a biometric technology that identifies a subject by his or her walking posture at a distance. However, redundant information in gait sequences degrades recognition performance, and most existing gait recognition models are overly complex and heavily parameterized, which makes training inefficient. Consequently, reducing model complexity while effectively eliminating redundant gait information has become a challenging problem in the field of gait recognition. In this paper, we present a gait recognition model based on a residual structure, named ResGait, to learn the most discriminative changes in gait patterns. To eliminate redundant information, soft thresholding is inserted into the deep architecture as a nonlinear transformation layer, improving feature learning from noisy gait feature maps. Moreover, each sample owns a unique set of thresholds, making the proposed model suitable for gait sequences with different kinds of redundant information. Furthermore, residual links are introduced to ease learning and reduce the computational cost of training. We train the network under various scenarios and walking conditions, and validate the effectiveness of the method through extensive experiments covering various types of redundant gait information. Compared with previous state-of-the-art work, experimental results on the common datasets CASIA-B and OUMVLP-Pose show that ResGait achieves higher recognition accuracy under various walking conditions and scenarios.
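The soft-thresholding idea the abstract describes can be sketched as follows. This is an illustrative toy in plain Python, not the authors' implementation: the function names and the per-sample threshold rule (mean absolute activation scaled by a sigmoid gate, in the spirit of deep residual shrinkage networks) are our assumptions for exposition.

```python
import math

def soft_threshold(x, tau):
    """Shrink each value toward zero by tau; values in [-tau, tau] become exactly 0."""
    return [math.copysign(max(abs(v) - tau, 0.0), v) for v in x]

def adaptive_threshold(x, alpha):
    """Per-sample threshold: mean(|x|) scaled by a sigmoid gate in (0, 1),
    so tau never exceeds the average activation magnitude."""
    gate = 1.0 / (1.0 + math.exp(-alpha))
    return gate * sum(abs(v) for v in x) / len(x)

def shrinkage_block(x, alpha):
    """Residual shrinkage: denoised (soft-thresholded) features plus an
    identity shortcut, echoing the residual links described above."""
    tau = adaptive_threshold(x, alpha)
    return [v + s for v, s in zip(x, soft_threshold(x, tau))]
```

Because each sample's threshold is derived from its own activations, sequences with heavy redundant information are shrunk more aggressively than clean ones, which is the intuition behind the per-sample thresholds in the abstract.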




Funding

This research was funded by the National Natural Science Foundation of China under Grant No. 61772125 and the National Key Research and Development Program of China under Grant No. 2019YFB1405803.

Author information


Corresponding author

Correspondence to Zhenhua Tan.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Gao, S., Tan, Z., Ning, J. et al. ResGait: gait feature refinement based on residual structure for gait recognition. Vis Comput 39, 3455–3466 (2023). https://doi.org/10.1007/s00371-023-02973-0
