DOI: 10.1007/978-3-030-47426-3_17
Article

Multi-Layer Cross Loss Model for Zero-Shot Human Activity Recognition

Published: 11 May 2020

Abstract

Most existing methods for human activity recognition are based on supervised learning. These methods can only recognize classes that appear in the training dataset and fail on classes that do not. Zero-shot learning aims to solve this problem. In this paper, we propose a novel model termed the Multi-Layer Cross Loss Model (MLCLM). Our model builds on two ideas: (1) it projects features into the semantic space through multiple nonlinear layers, since a deeper network can better fit the distribution of the data; and (2) it is trained with a novel objective function that combines a mean square loss with a cross entropy loss, designed for the zero-shot learning task. We conduct extensive experiments to evaluate the proposed model on three benchmark datasets. The experiments show that our model significantly outperforms other state-of-the-art methods in zero-shot human activity recognition.
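
The abstract leaves the architecture and loss unspecified beyond this description, so the following is a minimal sketch, assuming a PyTorch-style implementation: sensor features are pushed through several nonlinear layers into the semantic (attribute) space, and training minimizes a weighted sum of a mean square loss against the ground-truth semantic vector and a cross entropy loss over compatibility scores with the seen-class prototypes. The layer widths, the weighting factor alpha, and the dot-product compatibility score are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of a multi-layer projection network trained with a combined
# MSE + cross-entropy objective, in the spirit of MLCLM. Hidden sizes, the
# weighting `alpha`, and the dot-product compatibility score are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionNet(nn.Module):
    """Maps sensor-level features into the semantic (attribute) space."""
    def __init__(self, feat_dim: int, sem_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, sem_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

def combined_loss(pred_sem, true_sem, class_protos, labels, alpha=0.5):
    """MSE to the ground-truth semantic vector plus cross-entropy over
    compatibility scores with all seen-class prototypes."""
    mse = F.mse_loss(pred_sem, true_sem)
    logits = pred_sem @ class_protos.t()   # (batch, n_seen_classes)
    ce = F.cross_entropy(logits, labels)
    return alpha * mse + (1.0 - alpha) * ce

# Toy usage: 64-d features, 16-d semantic space, 5 seen classes.
net = ProjectionNet(feat_dim=64, sem_dim=16)
x = torch.randn(8, 64)                     # a batch of activity features
protos = torch.randn(5, 16)                # seen-class attribute vectors
labels = torch.randint(0, 5, (8,))
loss = combined_loss(net(x), protos[labels], protos, labels)
loss.backward()

At test time an unseen activity would presumably be labeled by projecting its features and picking the nearest unseen-class attribute vector; that inference rule is likewise an assumption of this sketch rather than a detail given in the abstract.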



        Information & Contributors

        Information

        Published In

        Advances in Knowledge Discovery and Data Mining: 24th Pacific-Asia Conference, PAKDD 2020, Singapore, May 11–14, 2020, Proceedings, Part I
        May 2020
        905 pages
        ISBN:978-3-030-47425-6
        DOI:10.1007/978-3-030-47426-3
        • Editors:
        • Hady W. Lauw,
        • Raymond Chi-Wing Wong,
        • Alexandros Ntoulas,
        • Ee-Peng Lim,
        • See-Kiong Ng,
        • Sinno Jialin Pan

        Publisher

        Springer-Verlag

        Berlin, Heidelberg

        Publication History

        Published: 11 May 2020

        Author Tags

        1. Human activity recognition
        2. Zero-shot learning
        3. Cross loss

        Qualifiers

        • Article

        Bibliometrics & Citations

        Article Metrics

        • Downloads (last 12 months): 0
        • Downloads (last 6 weeks): 0
        Reflects downloads up to 07 Nov 2024

        Citations

        Cited By

        • (2024) TS2ACT. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(4), 1–22. https://doi.org/10.1145/3631445. Online publication date: 12 Jan 2024.
        • (2023) Unleashing the Power of Shared Label Structures for Human Activity Recognition. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 3340–3350. https://doi.org/10.1145/3583780.3615101. Online publication date: 21 Oct 2023.
        • (2023) Generalized Zero-Shot Activity Recognition with Embedding-Based Method. ACM Transactions on Sensor Networks 19(3), 1–25. https://doi.org/10.1145/3582690. Online publication date: 5 Apr 2023.
        • (2022) Human Activity Recognition with IMU and Vital Signs Feature Fusion. MultiMedia Modeling, 287–298. https://doi.org/10.1007/978-3-030-98358-1_23. Online publication date: 6 Jun 2022.
