DOI: 10.1145/3123021.3123046
research-article

CNN-based sensor fusion techniques for multimodal human activity recognition

Published: 11 September 2017

Abstract

Deep learning (DL) methods receive increasing attention within the field of human activity recognition (HAR) due to their success in other machine learning domains. Nonetheless, a direct transfer of these methods is often not possible due to domain-specific challenges (e.g., handling of multimodal sensor data, lack of large labeled datasets). In this paper, we address three key aspects for the future development of robust DL methods for HAR: (1) Is it beneficial to apply data-specific normalization? (2) How can multimodal sensor data be fused optimally? (3) How robust are these approaches with respect to the amount of available training data? We evaluate convolutional neural networks (CNNs) on a new large real-world multimodal dataset (RBK) as well as on the PAMAP2 dataset. Our results indicate that sensor-specific normalization techniques are required. We present a novel pressure-specific normalization method which increases the F1-score by ∼4.5 percentage points (pp) on the RBK dataset. Further, we show that late- and hybrid-fusion techniques are superior to early-fusion techniques, increasing the F1-score by up to 3.5 pp (RBK dataset). Finally, our results reveal that CNNs based on a shared-filter approach in particular have a smaller dependency on the amount of available training data than other fusion techniques.
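
To make the fusion terminology concrete, the sketch below shows a late-fusion CNN for multimodal sensor windows: one 1-D convolutional branch per modality, with features concatenated only at the classifier. Early fusion would instead stack all modality channels into a single branch. This is a minimal PyTorch illustration, not the paper's architecture: the per-channel z-score helper only stands in for the sensor-specific normalization argued for above (the pressure-specific method is not reproduced here), and all layer sizes, channel counts, and the window length are invented for the example.

```python
# Illustrative late-fusion CNN for multimodal HAR (PyTorch).
# All sizes below are hypothetical, chosen for the example only.
import torch
import torch.nn as nn


def per_channel_zscore(x: torch.Tensor) -> torch.Tensor:
    """Normalize each sensor channel of a window to zero mean, unit variance.

    x has shape (batch, channels, time). This is a generic stand-in for
    sensor-specific normalization, not the paper's pressure-specific method.
    """
    mean = x.mean(dim=-1, keepdim=True)
    std = x.std(dim=-1, keepdim=True).clamp_min(1e-8)
    return (x - mean) / std


class ModalityBranch(nn.Module):
    """CNN feature extractor for a single sensor modality."""

    def __init__(self, in_channels: int, features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, features, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse time axis -> (batch, features, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # (batch, features)


class LateFusionHAR(nn.Module):
    """Late fusion: per-modality branches, concatenated at the classifier."""

    def __init__(self, modality_channels: list[int], num_classes: int):
        super().__init__()
        self.branches = nn.ModuleList(
            ModalityBranch(c) for c in modality_channels
        )
        self.classifier = nn.Linear(64 * len(modality_channels), num_classes)

    def forward(self, inputs: list[torch.Tensor]) -> torch.Tensor:
        # inputs: one (batch, channels, time) tensor per modality
        feats = [branch(per_channel_zscore(x))
                 for branch, x in zip(self.branches, inputs)]
        return self.classifier(torch.cat(feats, dim=1))


# Hypothetical setup: a 3-axis accelerometer and an 8-channel pressure
# insole, 100-sample windows, 12 activity classes.
model = LateFusionHAR(modality_channels=[3, 8], num_classes=12)
logits = model([torch.randn(4, 3, 100), torch.randn(4, 8, 100)])
print(logits.shape)  # torch.Size([4, 12])
```

A hybrid-fusion variant would sit between the two extremes, merging closely related channels into shared branches while keeping other modalities separate until the classifier; the shared-filter approach mentioned in the abstract would, on this reading, reuse one set of convolutional weights across modalities rather than learning an independent branch per modality.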

        Published In

        ISWC '17: Proceedings of the 2017 ACM International Symposium on Wearable Computers
        September 2017
        276 pages
        ISBN: 9781450351881
        DOI: 10.1145/3123021

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Author Tags

        1. deep learning
        2. human activity recognition
        3. sensor fusion

        Conference

        UbiComp '17

        Acceptance Rates

        Overall Acceptance Rate: 38 of 196 submissions, 19%

        Article Metrics

        • Downloads (Last 12 months): 300
        • Downloads (Last 6 weeks): 36

        Cited By

        • (2024) Exploring the Possibility of Photoplethysmography-Based Human Activity Recognition Using Convolutional Neural Networks. Sensors 24(5), 1610. DOI: 10.3390/s24051610. Online publication date: 1-Mar-2024.
        • (2024) AutoAugHAR: Automated Data Augmentation for Sensor-based Human Activity Recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(2), 1-27. DOI: 10.1145/3659589. Online publication date: 15-May-2024.
        • (2024) An Explore–Exploit Workload-Bounded Strategy for Rare Event Detection in Massive Energy Sensor Time Series. ACM Transactions on Intelligent Systems and Technology 15(4), 1-25. DOI: 10.1145/3657641. Online publication date: 17-Apr-2024.
        • (2024) iMove: Exploring Bio-Impedance Sensing for Fitness Activity Recognition. 2024 IEEE International Conference on Pervasive Computing and Communications (PerCom), 194-205. DOI: 10.1109/PerCom59722.2024.10494489. Online publication date: 11-Mar-2024.
        • (2024) OpenPack: A Large-Scale Dataset for Recognizing Packaging Works in IoT-Enabled Logistic Environments. 2024 IEEE International Conference on Pervasive Computing and Communications (PerCom), 90-97. DOI: 10.1109/PerCom59722.2024.10494448. Online publication date: 11-Mar-2024.
        • (2024) STFNet: Enhanced and Lightweight Spatiotemporal Fusion Network for Wearable Human Activity Recognition. IEEE Sensors Journal 24(8), 13686-13698. DOI: 10.1109/JSEN.2024.3373444. Online publication date: 15-Apr-2024.
        • (2024) Beyond Thresholds: A General Approach to Sensor Selection for Practical Deep Learning-based HAR. 2024 IEEE/ACM Ninth International Conference on Internet-of-Things Design and Implementation (IoTDI), 1-12. DOI: 10.1109/IoTDI61053.2024.00005. Online publication date: 13-May-2024.
        • (2024) Multi-teacher cross-modal distillation with cooperative deep supervision fusion learning for unimodal segmentation. Knowledge-Based Systems 297, 111854. DOI: 10.1016/j.knosys.2024.111854. Online publication date: Aug-2024.
        • (2024) Multiclass autoencoder-based active learning for sensor-based human activity recognition. Future Generation Computer Systems 151, 71-84. DOI: 10.1016/j.future.2023.09.029. Online publication date: Feb-2024.
        • (2023) A Study on the Influence of Sensors in Frequency and Time Domains on Context Recognition. Sensors 23(12), 5756. DOI: 10.3390/s23125756. Online publication date: 20-Jun-2023.