Multi-Sensor Fusion for Activity Recognition—A Survey
Abstract
:1. Introduction
2. Incremental Contribution Respect to Previous Surveys
3. Background
3.1. Human Activity Recognition
Definition of Human Activity Recognition
3.2. Sensors in Human Activity Recognition
3.3. Machine Learning Techniques Used in HAR
3.4. Activity Recognition Workflow
3.5. Multi-Sensor Data Fusion on HAR
- Enhanced signal to noise ratio—the merging of various streams of sensor data decreases the influences of noise.
- Diminished ambiguity and uncertainty—the use of data from different sources reduces the ambiguity of output.
- Improved confidence—the data generated by a single sensor are generally unreliable.
- Increased robustness and reliability, as the use of several similar sensors provides redundancy, which raises the fault tolerance of the system in the case of sensor failure.
- Robustness against interference—raising the dimensionality of the measuring space (for example, measuring the heart frequency using an electrocardiogram (ECG) and photoplethysmogram (PPG) sensors) notably improves robustness against environmental interference.
- Enhanced resolution, precision and discrimination—when numerous independent measures of the same attribute are merged, the granularity of the resulting value is finer than in the case of a single sensor.
- Independent features can be combined with prior knowledge of the target application domain in order to increase the robustness against the interference of data sources.
3.6. Performance Metrics
4. Methodology
Identification and Selection of Sources
5. Fusion Methods
5.1. Methods Used to Fuse Data at the Data Level
5.1.1. Raw Data Aggregation
5.1.2. Time-lagged Similarity Features
5.2. Methods Used to Fuse Data at the Feature Level
5.2.1. Feature Aggregation
5.2.2. Temporal Fusion
5.2.3. Feature Combination Technique
- Calculate the feature importance for all features in F using .
- Select the feature in the feature space F which has the maximum impact .
- If and only if , then update S and F using and
- Repeat steps 2 and 3 for N-1 times.
5.2.4. Distributed Random Projection and Joint Sparse Representation Approach
5.2.5. SVM-Based Multisensor Fusion Algorithm
5.3. Methods Used to Fuse Data at the Decision Level
5.3.1. Bagging
5.3.2. Lightweight Bagging Ensemble Learning
5.3.3. Soft Margin Multiple Kernel Learning
5.3.4. A-stack
5.3.5. Voting
5.3.6. Adaboost
5.3.7. Multi-view Staking
5.3.8. Hierarchical Method
5.3.9. Product Method
5.3.10. Sum Technique
5.3.11. Maximum Method
5.3.12. Minimum Method
5.3.13. Ranking Method
5.3.14. Weighted Average
5.3.15. Classification Model for Multi-Sensor Data Fusion
5.3.16. Markov Fusion Networks
5.3.17. Genetic Algorithm-Based Classifier Ensemble Optimization Method
5.3.18. Genetic Algorithm-Based Classifiers Fusion
5.3.19. Adaptive Weighted Logarithmic Opinion Pools
5.3.20. Activity and Device Position Recognition
5.3.21. Daily Activity Recognition Algorithm
5.3.22. Activity Recognition Model Based on Multibody-Worn Sensors
5.3.23. Physical Activity Recognition System
5.3.24. Distributed Activity Recognition through Consensus
5.3.25. A Hybrid Discriminative/Generative Approach for Modeling Human Activities
6. Comparison of Fusion Methods
6.1. Comparison between Fusion Methods that Use a Single Fusion Method and Fusion Methods that Use Two Fusion Methods
6.2. Comparison between Fusion Methods that Merge Homogeneous Sensors and Fusion Methods That Combine Heterogeneous Sensors
6.3. Comparison between Fusion Methods That Automatically Extract Features and Fusion Methods That Manually Extract Features
6.4. Scenarios Most Used by Fusion Methods
6.5. Components Most Used by Fusion Methods
6.6. Discussion and Trends
6.7. Study Limitations
7. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Schilit, B.N.; Adams, N.; Want, R. Context-Aware Computing Applications. In Proceedings of the 1994 First Workshop on Mobile Computing Systems and Applications, Santa Cruz, CA, USA, 8–9 December 1994; pp. 85–90. [Google Scholar]
- Aarts, E.; Wichert, R. Ambient intelligence. In Technology Guide; Bullinger, H.J., Ed.; Springer: Berlin/ Heidelberg, Germany, 2009; pp. 244–249. [Google Scholar]
- Ponce, H.; Miralles-Pechuán, L.; Martínez-Villaseñor, M.D.L. A Flexible Approach for Human Activity Recognition Using Artificial Hydrocarbon Networks. Sensors 2016, 16, 1715. [Google Scholar] [CrossRef] [PubMed]
- Su, X.; Tong, H.; Ji, P. Activity recognition with smartphone sensors. Tsinghua Sci. Technol. 2014, 19, 235–249. [Google Scholar] [CrossRef]
- Huynh, T.; Fritz, M.; Schiele, B. Discovery of activity patterns using topic models. In Proceedings of the 10th International Conference on Ubiquitous Computing, Seoul, Korea, 21–24 September 2008; pp. 10–19. [Google Scholar]
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006. [Google Scholar]
- Witten, I.H.; Frank, E.; Hall, M.A.; Pal, C.J. Data Mining: Practical Machine Learning Tools and Techniques; Morgan Kaufmann: San Francisco, CA, USA, 2016. [Google Scholar]
- Garcia-Ceja, E.; Brena, R.F.; Carrasco-Jimenez, J.C.; Garrido, L. Long-Term Activity Recognition from Wristwatch Accelerometer Data. Sensors 2014, 14, 22500–22524. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Lee, J.; Kim, J. Energy-efficient real-time human activity recognition on smart mobile devices. Mob. Inf. Syst. 2016, 2016, 2316757. [Google Scholar] [CrossRef]
- Garcia-Ceja, E.; Brena, R.F. Activity Recognition Using Community Data to Complement Small Amounts of Labeled Instances. Sensors 2016, 16, 877. [Google Scholar] [CrossRef] [PubMed]
- Hosmer, D.W., Jr.; Lemeshow, S.; Sturdivant, R.X. Applied Logistic Regression; John Wiley & Sons: Hoboken, NJ, USA, 2013; Volume 398. [Google Scholar]
- Murthy, S.K. Automatic construction of decision trees from data: A multi-disciplinary survey. Data Min. Knowl. Discov. 1998, 2, 345–389. [Google Scholar] [CrossRef]
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
- Jensen, F.V. An Introduction to Bayesian Networks; UCL Press: London, UK, 1996; Volume 210. [Google Scholar]
- Burges, C.J. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discov. 1998, 2, 121–167. [Google Scholar] [CrossRef]
- Aha, D.W. Editorial. In Lazy Learning; Springer: Dordrecht, Germany, 1997; pp. 7–10. [Google Scholar]
- Zhang, G.P. Neural networks for classification: A survey. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2000, 30, 451–462. [Google Scholar] [CrossRef]
- Jovanov, E.; Milenkovic, A.; Otto, C.; De Groen, P.C. A wireless body area network of intelligent motion sensors for computer assisted physical rehabilitation. J. Neuroeng. Rehabil. 2005, 2, 6. [Google Scholar] [CrossRef]
- Zhang, L.; Yang, M.; Feng, X. Sparse representation or collaborative representation: Which helps face recognition? In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 471–478. [Google Scholar]
- Gravina, R.; Alinia, P.; Ghasemzadeh, H.; Fortino, G. Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges. Inf. Fusion 2017, 35, 68–80. [Google Scholar] [CrossRef]
- Hall, D.L.; Llinas, J. An introduction to multisensor data fusion. Proc. IEEE 1997, 85, 6–23. [Google Scholar] [CrossRef] [Green Version]
- Shoaib, M.; Bosch, S.; Incel, O.D.; Scholten, H.; Havinga, P.J. Fusion of smartphone motion sensors for physical activity recognition. Sensors 2014, 14, 10146–10176. [Google Scholar] [CrossRef] [PubMed]
- Ravi, D.; Wong, C.; Lo, B.; Yang, G.Z. A deep learning approach to on-node sensor data analytics for mobile or wearable devices. IEEE J. Biomed. Health Inform. 2017, 21, 56–64. [Google Scholar] [CrossRef] [PubMed]
- Breiman, L. Pasting small votes for classification in large databases and on-line. Mach. Learn. 1999, 36, 85–103. [Google Scholar] [CrossRef]
- Lam, L.; Suen, S. Application of majority voting to pattern recognition: An analysis of its behavior and performance. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 1997, 27, 553–568. [Google Scholar] [CrossRef]
- Freund, Y.; Schapire, R.E. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar] [CrossRef]
- Garcia-Ceja, E.; Galván-Tejada, C.E.; Brena, R. Multi-view stacking for activity recognition with sound and accelerometer data. Inf. Fusion 2018, 40, 45–56. [Google Scholar] [CrossRef]
- He, Y.; Li, Y. Physical Activity Recognition Utilizing the Built-In Kinematic Sensors of a Smartphone. Int. J. Distrib. Sens. Netw. 2013, 9, 481580. [Google Scholar] [CrossRef]
- Gao, L.; Bourke, A.K.; Nelson, J. Activity recognition using dynamic multiple sensor fusion in body sensor networks. In Proceedings of the 2012 Annual International Conference of the Engineering in Medicine and Biology Society (EMBC), San Diego, CA, USA, 28 August–1 September 2012; pp. 1077–1080. [Google Scholar]
- Chen, C.; Jafari, R.; Kehtarnavaz, N. A survey of depth and inertial sensor fusion for human action recognition. Multimed. Tools Appl. 2017, 76, 4405–4425. [Google Scholar] [CrossRef]
- Shivappa, S.T.; Trivedi, M.M.; Rao, B.D. Audiovisual information fusion in human-computer interfaces and intelligent environments: A survey. Proc. IEEE 2010, 98, 1692–1715. [Google Scholar] [CrossRef]
- Lara, O.D.; Labrador, M.A. A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutorials 2013, 15, 1192–1209. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y. Convolutional networks for images, speech, and time series. Handb. Brain Theory Neural Netw. 1995, 3361, 1995. [Google Scholar]
- Liggins, M.E.; Hall, D.L.; Llinas, J. Handbook of Multisensor Data Fusion: Theory and Practice; CRC Press: Boca Raton, FL, USA, 2009. [Google Scholar]
- Shoaib, M.; Bosch, S.; Incel, O.D.; Scholten, H.; Havinga, P.J. Complex human activity recognition using smartphone and wrist-worn motion sensors. Sensors 2016, 16, 426. [Google Scholar] [CrossRef] [PubMed]
- Dernbach, S.; Das, B.; Krishnan, N.C.; Thomas, B.L.; Cook, D.J. Simple and complex activity recognition through smart phones. In Proceedings of the 8th International Conference on Intelligent Environments (IE), Guanajuato, Mexico, 26–29 June 2012; pp. 214–221. [Google Scholar]
- Brena, R.F.; Nava, A. Activity Recognition in Meetings with One and Two Kinect Sensors. In Mexican Conference on Pattern Recognition; Springer: Cham, Switzerland, 2016; pp. 219–228. [Google Scholar]
- Lee, Y.S.; Cho, S.B. Layered hidden Markov models to recognize activity with built-in sensors on Android smartphone. Pattern Anal. Appl. 2016, 19, 1181–1193. [Google Scholar] [CrossRef]
- Bloom, D.E.; Cafiero, E.; Jané-Llopis, E.; Abrahams-Gessel, S.; Bloom, L.R.; Fathima, S.; Feigl, A.B.; Gaziano, T.; Hamandi, A.; Mowafi, M.; et al. The Global Economic Burden of Noncommunicable Diseases; Technical Report; Harvard School of Public Health, Program on the Global Demography of Aging; World Economic Forum: Geneva, Switzerland, 2011. [Google Scholar]
- Lorig, K.; Holman, H.; Sobel, D. Living a Healthy Life with Chronic Conditions: Self-Management of Heart Disease, Arthritis, Diabetes, Depression, Asthma, Bronchitis, Emphysema and Other Physical and Mental Health Conditions; Bull Publishing Company: Boulder, CO, USA, 2012. [Google Scholar]
- Dunlop, D.D.; Song, J.; Semanik, P.A.; Sharma, L.; Bathon, J.M.; Eaton, C.B.; Hochberg, M.C.; Jackson, R.D.; Kwoh, C.K.; Mysiw, W.J.; et al. Relation of physical activity time to incident disability in community dwelling adults with or at risk of knee arthritis: Prospective cohort study. BMJ 2014, 348, g2472. [Google Scholar] [CrossRef]
- Park, S.K.; Richardson, C.R.; Holleman, R.G.; Larson, J.L. Physical activity in people with COPD, using the National Health and Nutrition Evaluation Survey dataset (2003–2006). Heart Lung J. Acute Crit. Care 2013, 42, 235–240. [Google Scholar] [CrossRef]
- Van Dyck, D.; Cerin, E.; De Bourdeaudhuij, I.; Hinckson, E.; Reis, R.S.; Davey, R.; Sarmiento, O.L.; Mitas, J.; Troelsen, J.; MacFarlane, D.; et al. International study of objectively measured physical activity and sedentary time with body mass index and obesity: IPEN adult study. Int. J. Obes. 2015, 39, 199. [Google Scholar] [CrossRef]
- Morgan, W.P.; Goldston, S.E. Exercise and Mental Health; Taylor & Francis: New York, NY, USA, 2013. [Google Scholar]
- Marschollek, M.; Gietzelt, M.; Schulze, M.; Kohlmann, M.; Song, B.; Wolf, K.H. Wearable sensors in healthcare and sensor-enhanced health information systems: All our tomorrows? Healthc. Inform. Res. 2012, 18, 97–104. [Google Scholar] [CrossRef]
- Van Hoof, C.; Penders, J. Addressing the healthcare cost dilemma by managing health instead of managing illness: An opportunity for wearable wireless sensors. In Proceedings of the 2013 Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 18–22 March 2013; pp. 1537–1539. [Google Scholar]
- Hillestad, R.; Bigelow, J.; Bower, A.; Girosi, F.; Meili, R.; Scoville, R.; Taylor, R. Can electronic medical record systems transform health care? Potential health benefits, savings, and costs. Health Aff. 2005, 24, 1103–1117. [Google Scholar] [CrossRef]
- Bernal, E.A.; Yang, X.; Li, Q.; Kumar, J.; Madhvanath, S.; Ramesh, P.; Bala, R. Deep Temporal Multimodal Fusion for Medical Procedure Monitoring Using Wearable Sensors. IEEE Trans. Multimed. 2018, 20, 107–118. [Google Scholar] [CrossRef]
- Kerr, J.; Marshall, S.J.; Godbole, S.; Chen, J.; Legge, A.; Doherty, A.R.; Kelly, P.; Oliver, M.; Badland, H.M.; Foster, C. Using the SenseCam to improve classifications of sedentary behavior in free-living settings. Am. J. Prev. Med. 2013, 44, 290–296. [Google Scholar] [CrossRef] [PubMed]
- Rad, N.M.; Kia, S.M.; Zarbo, C.; Jurman, G.; Venuti, P.; Furlanello, C. Stereotypical motor movement detection in dynamic feature space. In Proceedings of the IEEE 16th International Conference on Data Mining Workshops (ICDMW), Barcelona, Spain, 12–15 December 2016; pp. 487–494. [Google Scholar]
- Diraco, G.; Leone, A.; Siciliano, P. A Fall Detector Based on Ultra-Wideband Radar Sensing. In Convegno Nazionale Sensori; Springer: Cham, Switzerland, 2016; pp. 373–382. [Google Scholar]
- Lutz, W.; Sanderson, W.; Scherbov, S. The coming acceleration of global population ageing. Nature 2008, 451, 716. [Google Scholar] [CrossRef] [PubMed]
- Ensrud, K.E.; Ewing, S.K.; Taylor, B.C.; Fink, H.A.; Stone, K.L.; Cauley, J.A.; Tracy, J.K.; Hochberg, M.C.; Rodondi, N.; Cawthon, P.M. Frailty and risk of falls, fracture, and mortality in older women: The study of osteoporotic fractures. J. Gerontol. Ser. A Biol. Sci. Med Sci. 2007, 62, 744–751. [Google Scholar] [CrossRef] [PubMed]
- Ensrud, K.E.; Ewing, S.K.; Cawthon, P.M.; Fink, H.A.; Taylor, B.C.; Cauley, J.A.; Dam, T.T.; Marshall, L.M.; Orwoll, E.S.; Cummings, S.R.; et al. A comparison of frailty indexes for the prediction of falls, disability, fractures, and mortality in older men. J. Am. Geriatr. Soc. 2009, 57, 492–498. [Google Scholar] [CrossRef] [PubMed]
- Alam, M.A.U. Context-aware multi-inhabitant functional and physiological health assessment in smart home environment. In Proceedings of the IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Kona, HI, USA, 13–17 March 2017; pp. 99–100. [Google Scholar]
- Gjoreski, H.; Lustrek, M.; Gams, M. Accelerometer placement for posture recognition and fall detection. In Proceedings of the 7th International Conference on Intelligent Environments (IE), Nottingham, UK, 25–28 July 2011; pp. 47–54. [Google Scholar]
- Li, Q.; Stankovic, J.A. Grammar-based, posture-and context-cognitive detection for falls with different activity levels. In Proceedings of the 2nd Conference on Wireless Health, San Diego, CA, USA, 10–13 October 2011; pp. 6:1–6:10. [Google Scholar]
- Cheng, W.C.; Jhan, D.M. Triaxial accelerometer-based fall detection method using a self-constructing cascade-AdaBoost-SVM classifier. IEEE J. Biomed. Health Inform. 2013, 17, 411–419. [Google Scholar] [CrossRef]
- Wei, Y.; Fei, Q.; He, L. Sports motion analysis based on mobile sensing technology. In Proceedings of the International Conference on Global Economy, Finance and Humanities Research (GEFHR 2014), Tianjin, China, 27–28 March 2014. [Google Scholar]
- Ahmadi, A.; Mitchell, E.; Destelle, F.; Gowing, M.; O’Connor, N.E.; Richter, C.; Moran, K. Automatic activity classification and movement assessment during a sports training session using wearable inertial sensors. In Proceedings of the 11th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Zurich, Switzerland, 16–19 June 2014; pp. 98–103. [Google Scholar]
- Ghasemzadeh, H.; Loseu, V.; Jafari, R. Wearable coach for sport training: A quantitative model to evaluate wrist-rotation in golf. J. Ambient. Intell. Smart Environ. 2009, 1, 173–184. [Google Scholar]
- Ghasemzadeh, H.; Jafari, R. Coordination analysis of human movements with body sensor networks: A signal processing model to evaluate baseball swings. IEEE Sensors J. 2011, 11, 603–610. [Google Scholar] [CrossRef]
- Rashidi, P.; Mihailidis, A. A survey on ambient-assisted living tools for older adults. IEEE J. Biomed. Health Inform. 2013, 17, 579–590. [Google Scholar] [CrossRef]
- Frontoni, E.; Raspa, P.; Mancini, A.; Zingaretti, P.; Placidi, V. Customers’ activity recognition in intelligent retail environments. In Proceedings of the International Conference on Image Analysis and Processing, Naples, Italy, 9–13 September 2013; Springer: Berlin, Germany, 2013; pp. 509–516. [Google Scholar]
- Vishwakarma, S.; Agrawal, A. A survey on activity recognition and behavior understanding in video surveillance. Vis. Comput. 2013, 29, 983–1009. [Google Scholar] [CrossRef]
- Delahoz, Y.S.; Labrador, M.A. Survey on Fall Detection and Fall Prevention Using Wearable and External Sensors. Sensors 2014, 14, 19806–19842. [Google Scholar] [CrossRef] [Green Version]
- Hanlon, M.; Anderson, R. Real-time gait event detection using wearable sensors. Gait Posture 2009, 30, 523–527. [Google Scholar] [CrossRef] [PubMed]
- Mitchell, T.M. Machine Learning; McGraw-Hill: New York, NY, USA, 1997. [Google Scholar]
- Breiman, L. Classification and Regression Trees; Routledge: New York, NY, USA, 2017. [Google Scholar]
- Quinlan, J.R. C4. 5: Programs for Machine Learning; Morgan Kaufmann: Los Altos, CA, USA, 1993. [Google Scholar]
- Biau, G.; Devroye, L.; Lugosi, G. Consistency of random forests and other averaging classifiers. J. Mach. Learn. Res. 2008, 9, 2015–2033. [Google Scholar]
- Trevino, V.; Falciani, F. GALGO: An R package for multivariate variable selection using genetic algorithms. Bioinformatics 2006, 22, 1154–1156. [Google Scholar] [CrossRef] [PubMed]
- Wu, X.; Kumar, V.; Quinlan, J.R.; Ghosh, J.; Yang, Q.; Motoda, H.; McLachlan, G.J.; Ng, A.; Liu, B.; Philip, S.Y.; et al. Top 10 algorithms in data mining. Knowl. Inf. Syst. 2008, 14, 1–37. [Google Scholar] [CrossRef]
- Christopher, M.B. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2016. [Google Scholar]
- Bishop, C.M. Neural Networks for Pattern Recognition; Oxford University Press: New York, NY, USA, 1995. [Google Scholar]
- Bulling, A.; Blanke, U.; Schiele, B. A tutorial on human activity recognition using body-worn inertial sensors. ACM Comput. Surv. (CSUR) 2014, 46, 33. [Google Scholar] [CrossRef]
- Rieger, R.; Chen, S. A signal based clocking scheme for A/D converters in body sensor networks. In Proceedings of the IEEE Region 10 Conference TENCON 2006, Hong Kong, China, 14–17 November 2006; pp. 1–4. [Google Scholar]
- Rieger, R.; Taylor, J.T. An adaptive sampling system for sensor nodes in body area networks. IEEE Trans. Neural Syst. Rehabil. Eng. 2009, 17, 183–189. [Google Scholar] [CrossRef]
- Figo, D.; Diniz, P.C.; Ferreira, D.R.; Cardoso, J.M. Preprocessing techniques for context recognition from accelerometer data. Pers. Ubiquitous Comput. 2010, 14, 645–662. [Google Scholar] [CrossRef]
- Bulling, A.; Ward, J.A.; Gellersen, H.; Troster, G. Eye movement analysis for activity recognition using electrooculography. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 741–753. [Google Scholar] [CrossRef]
- Huynh, T.; Schiele, B. Analyzing features for activity recognition. In Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-Aware Services: Usages and Technologies, Grenoble, France, 12–14 October 2005; pp. 159–163. [Google Scholar]
- Guenterberg, E.; Ostadabbas, S.; Ghasemzadeh, H.; Jafari, R. An automatic segmentation technique in body sensor networks based on signal energy. In Proceedings of the Fourth International Conference on Body Area Networks, Los Angeles, CA, USA, 1–3 April 2009; p. 21. [Google Scholar]
- Lee, C.; Xu, Y. Online, interactive learning of gestures for human/robot interfaces. In Proceedings of the IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA, 22–28 April 1996; Volume 4, pp. 2982–2987. [Google Scholar]
- Ashbrook, D.; Starner, T. Using GPS to learn significant locations and predict movement across multiple users. Pers. Ubiquitous Comput. 2003, 7, 275–286. [Google Scholar] [CrossRef]
- Kang, W.J.; Shiu, J.R.; Cheng, C.K.; Lai, J.S.; Tsao, H.W.; Kuo, T.S. The application of cepstral coefficients and maximum likelihood method in EMG pattern recognition [movements classification]. IEEE Trans. Biomed. Eng. 1995, 42, 777–785. [Google Scholar] [CrossRef] [PubMed]
- Zinnen, A.; Wojek, C.; Schiele, B. Multi activity recognition based on bodymodel-derived primitives. In Proceedings of the International Symposium on Location-and Context-Awareness, Tokyo, Japan, 7–8 May 2009; pp. 1–18. [Google Scholar]
- Zhang, M.; Sawchuk, A.A. Motion primitive-based human activity recognition using a bag-of-features approach. In Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium, Miami, FL, USA, 28–30 January 2012; pp. 631–640. [Google Scholar]
- Kohavi, R.; John, G.H. Wrappers for feature subset selection. Artif. Intell. 1997, 97, 273–324. [Google Scholar] [CrossRef] [Green Version]
- Peng, H.; Long, F.; Ding, C. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1226–1238. [Google Scholar] [CrossRef] [PubMed]
- Somol, P.; Novovičová, J.; Pudil, P. Flexible-hybrid sequential floating search in statistical feature selection. In Proceedings of the Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), Hong Kong, China, 17–19 August 2006; pp. 632–639. [Google Scholar]
- Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 675–678. [Google Scholar]
- Van Kasteren, T.; Englebienne, G.; Kröse, B.J. An activity monitoring system for elderly care using generative and discriminative models. Pers. Ubiquitous Comput. 2010, 14, 489–498. [Google Scholar] [CrossRef] [Green Version]
- Friedman, N. Seapower as Strategy: Navies and National Interests. Def. Foreign Aff. Strateg. Policy 2002, 30, 10. [Google Scholar]
- Li, W.; Wang, Z.; Wei, G.; Ma, L.; Hu, J.; Ding, D. A survey on multisensor fusion and consensus filtering for sensor networks. Discret. Dyn. Nat. Soc. 2015, 2015, 683701. [Google Scholar] [CrossRef]
- Atrey, P.K.; Hossain, M.A.; El Saddik, A.; Kankanhalli, M.S. Multimodal fusion for multimedia analysis: A survey. Multimed. Syst. 2010, 16, 345–379. [Google Scholar] [CrossRef]
- Bosse, E.; Roy, J.; Grenier, D. Data fusion concepts applied to a suite of dissimilar sensors. In Proceedings of the Canadian Conference on Electrical and Computer Engineering, Calgary, AB, Canada, 26–29 May 1996; Volume 2, pp. 692–695. [Google Scholar]
- Schuldhaus, D.; Leutheuser, H.; Eskofier, B.M. Towards big data for activity recognition: a novel database fusion strategy. In Proceedings of the 9th International Conference on Body Area Networks, London, UK, 29 September–1 October 2014; pp. 97–103. [Google Scholar]
- Lai, X.; Liu, Q.; Wei, X.; Wang, W.; Zhou, G.; Han, G. A survey of body sensor networks. Sensors 2013, 13, 5406–5447. [Google Scholar] [CrossRef]
- Yang, G.Z.; Yang, G. Body Sensor Networks; Springer: London, UK, 2006; Volume 1. [Google Scholar]
- Zappi, P.; Stiefmeier, T.; Farella, E.; Roggen, D.; Benini, L.; Troster, G. Activity recognition from on-body sensors by classifier fusion: Sensor scalability and robustness. In Proceedings of the 3rd International Conference on Intelligent Sensors, Sensor Networks and Information, Melbourne, Australia, 3–6 December 2007; pp. 281–286. [Google Scholar]
- Müller, H.; Müller, W.; Squire, D.M.; Marchand-Maillet, S.; Pun, T. Performance evaluation in content-based image retrieval: Overview and proposals. Pattern Recognit. Lett. 2001, 22, 593–601. [Google Scholar] [CrossRef]
- Petersen, K.; Vakkalanka, S.; Kuzniarz, L. Guidelines for conducting systematic mapping studies in software engineering: An update. Inf. Softw. Technol. 2015, 64, 1–18. [Google Scholar] [CrossRef]
- Petersen, K.; Feldt, R.; Mujtaba, S.; Mattsson, M. Systematic Mapping Studies in Software Engineering. In Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering (EASE), Bari, Italy, 26–27 June 2008; Volume 8, pp. 68–77. [Google Scholar]
- Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Keele University: Keele, UK; Durham University: Durham, UK, 2007. [Google Scholar]
- Dieste, O.; Grimán, A.; Juristo, N. Developing search strategies for detecting relevant experiments. Empir. Softw. Eng. 2009, 14, 513–539. [Google Scholar] [CrossRef]
- Kjærgaard, M.B.; Blunck, H. Tool support for detection and analysis of following and leadership behavior of pedestrians from mobile sensing data. Pervasive Mob. Comput. 2014, 10, 104–117. [Google Scholar] [CrossRef] [Green Version]
- Kjærgaard, M.B.; Munk, C.V. Hyperbolic location fingerprinting: A calibration-free solution for handling differences in signal strength (concise contribution). In Proceedings of the Sixth Annual IEEE International Conference on Pervasive Computing and Communications (PerCom), Hong Kong, China, 17–21 March 2008; pp. 110–116. [Google Scholar]
- Huang, C.W.; Narayanan, S. Comparison of feature-level and kernel-level data fusion methods in multi-sensory fall detection. In Proceedings of the IEEE 18th International Workshop on Multimedia Signal Processing (MMSP), Montreal, QC, Canada, 21–23 September 2016; pp. 1–6. [Google Scholar]
- Ling, J.; Tian, L.; Li, C. 3D human activity recognition using skeletal data from RGBD sensors. In Proceedings of the International Symposium on Visual Computing, Las Vegas, NV, USA, 12–14 December 2016; Springer: Cham, Switzerland, 2016; pp. 133–142. [Google Scholar]
- Guiry, J.J.; Van de Ven, P.; Nelson, J. Multi-sensor fusion for enhanced contextual awareness of everyday activities with ubiquitous devices. Sensors 2014, 14, 5687–5701. [Google Scholar] [CrossRef] [PubMed]
- Adelsberger, R.; Tröster, G. Pimu: A wireless pressure-sensing imu. In Proceedings of the IEEE Eighth International Conference on Intelligent Sensors, Sensor Networks and Information Processing, Melbourne, Australia, 2–5 April 2013; pp. 271–276. [Google Scholar]
- Altini, M.; Penders, J.; Amft, O. Energy expenditure estimation using wearable sensors: A new methodology for activity-specific models. In Proceedings of the Conference on Wireless Health, San Diego, CA, USA, 23–25 October 2012; p. 1. [Google Scholar]
- John, D.; Liu, S.; Sasaki, J.; Howe, C.; Staudenmayer, J.; Gao, R.; Freedson, P.S. Calibrating a novel multi-sensor physical activity measurement system. Physiol. Meas. 2011, 32, 1473. [Google Scholar] [CrossRef] [PubMed]
- Libal, V.; Ramabhadran, B.; Mana, N.; Pianesi, F.; Chippendale, P.; Lanz, O.; Potamianos, G. Multimodal classification of activities of daily living inside smart homes. In Proceedings of the International Work-Conference on Artificial Neural Networks, Salamanca, Spain, 10–12 June 2009; Springer: Berlin, Heidelberg, 2009; pp. 687–694. [Google Scholar]
- Zebin, T.; Scully, P.J.; Ozanyan, K.B. Inertial Sensor Based Modelling of Human Activity Classes: Feature Extraction and Multi-sensor Data Fusion Using Machine Learning Algorithms. In eHealth 360; Springer: Cham, Switzerland, 2017; pp. 306–314. [Google Scholar]
- Sharma, A.; Paliwal, K.K. Fast principal component analysis using fixed-point algorithm. Pattern Recognit. Lett. 2007, 28, 1151–1155. [Google Scholar] [CrossRef]
- Chernbumroong, S.; Cang, S.; Atkins, A.; Yu, H. Elderly activities recognition and classification for applications in assisted living. Expert Syst. Appl. 2013, 40, 1662–1674. [Google Scholar] [CrossRef]
- Wang, W.; Jones, P.; Partridge, D. Assessing the impact of input features in a feedforward neural network. Neural Comput. Appl. 2000, 9, 101–112. [Google Scholar] [CrossRef]
- Xiao, L.; Li, R.; Luo, J.; Duan, M. Activity recognition via distributed random projection and joint sparse representation in body sensor networks. In Proceedings of the China Conference Wireless Sensor Networks, Qingdao, China, 17–19 October 2013; Springer: Berlin, Heidelberg, 2013; pp. 51–60. [Google Scholar]
- Liu, S.; Gao, R.X.; John, D.; Staudenmayer, J.W.; Freedson, P.S. Multisensor data fusion for physical activity assessment. IEEE Trans. Biomed. Eng. 2012, 59, 687–696. [Google Scholar]
- Bao, L.; Intille, S.S. Activity recognition from user-annotated acceleration data. In Proceedings of the International Conference on Pervasive Computing, Vienna, Austria, 21–23 April 2004; Springer: Berlin, Heidelberg, 2004; pp. 1–17. [Google Scholar]
- Ermes, M.; Pärkkä, J.; Mäntyjärvi, J.; Korhonen, I. Detection of daily activities and sports with wearable sensors in controlled and uncontrolled conditions. IEEE Trans. Inf. Technol. Biomed. 2008, 12, 20–26. [Google Scholar] [CrossRef]
- Alam, M.A.U.; Pathak, N.; Roy, N. Mobeacon: An iBeacon-assisted smartphone-based real time activity recognition framework. In Proceedings of the 12th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services on 12th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, Coimbra, Portugal, 22–24 July 2015; pp. 130–139. [Google Scholar]
- Sun, Q.; Pfahringer, B. Bagging ensemble selection for regression. In Proceedings of the Australasian Joint Conference on Artificial Intelligence, Sydney, Australia, 4–7 December 2012; Springer: Berlin, Heidelberg, 2012; pp. 695–706. [Google Scholar]
- Lanckriet, G.R.; Cristianini, N.; Bartlett, P.; Ghaoui, L.E.; Jordan, M.I. Learning the kernel matrix with semidefinite programming. J. Mach. Learn. Res. 2004, 5, 27–72. [Google Scholar]
- Xu, X.; Tsang, I.W.; Xu, D. Soft margin multiple kernel learning. IEEE Trans. Neural Networks Learn. Syst. 2013, 24, 749–761. [Google Scholar]
- Rakotomamonjy, A.; Bach, F.R.; Canu, S.; Grandvalet, Y. SimpleMKL. J. Mach. Learn. Res. 2008, 9, 2491–2521. [Google Scholar]
- Guo, H.; Chen, L.; Shen, Y.; Chen, G. Activity recognition exploiting classifier level fusion of acceleration and physiological signals. In Proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing: Adjunct publication, Seattle, WA, USA, 13–17 September 2014; pp. 63–66. [Google Scholar]
- Sun, S. A survey of multi-view machine learning. Neural Comput. Appl. 2013, 23, 2031–2038. [Google Scholar] [CrossRef]
- Zhao, J.; Xie, X.; Xu, X.; Sun, S. Multi-view learning overview: Recent progress and new challenges. Inf. Fusion 2017, 38, 43–54. [Google Scholar] [CrossRef]
- Wolpert, D.H. Stacked generalization. Neural Netw. 1992, 5, 241–259. [Google Scholar] [CrossRef]
- Gao, L.; Bourke, A.K.; Nelson, J. An efficient sensing approach using dynamic multi-sensor collaboration for activity recognition. In Proceedings of the International Conference on Distributed Computing in Sensor Systems and Workshops (DCOSS), Barcelona, Spain, 27–29 June 2011; pp. 1–3. [Google Scholar]
- Aly, H.; Ismail, M.A. ubiMonitor: Intelligent fusion of body-worn sensors for real-time human activity recognition. In Proceedings of the 30th Annual ACM Symposium on Applied Computing, Salamanca, Spain, 13–17 April 2015; pp. 563–568. [Google Scholar]
- Banos, O.; Damas, M.; Guillen, A.; Herrera, L.J.; Pomares, H.; Rojas, I.; Villalonga, C. Multi-sensor fusion based on asymmetric decision weighting for robust activity recognition. Neural Process. Lett. 2015, 42, 5–26. [Google Scholar] [CrossRef]
- Arnon, P. Classification model for multi-sensor data fusion apply for Human Activity Recognition. In Proceedings of the International Conference on Computer, Communications, and Control Technology (I4CT), Langkawi, Malaysia, 2–4 September 2014; pp. 415–419. [Google Scholar]
- Glodek, M.; Schels, M.; Schwenker, F.; Palm, G. Combination of sequential class distributions from multiple channels using Markov fusion networks. J. Multimodal User Interfaces 2014, 8, 257–272. [Google Scholar] [CrossRef]
- Fatima, I.; Fahim, M.; Lee, Y.K.; Lee, S. A genetic algorithm-based classifier ensemble optimization for activity recognition in smart homes. KSII Trans. Internet Inf. Syst. (TIIS) 2013, 7, 2853–2873. [Google Scholar]
- Chernbumroong, S.; Cang, S.; Yu, H. Genetic algorithm-based classifiers fusion for multisensor activity recognition of elderly people. IEEE J. Biomed. Health Inform. 2015, 19, 282–289. [Google Scholar] [CrossRef]
- Guo, Y.; He, W.; Gao, C. Human activity recognition by fusing multiple sensor nodes in the wearable sensor systems. J. Mech. Med. Biol. 2012, 12, 1250084. [Google Scholar] [CrossRef]
- Tipping, M.E. Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 2001, 1, 211–244. [Google Scholar]
- Grokop, L.H.; Sarah, A.; Brunner, C.; Narayanan, V.; Nanda, S. Activity and device position recognition in mobile devices. In Proceedings of the 13th International Conference on Ubiquitous Computing, Beijing, China, 17–21 September 2011; pp. 591–592. [Google Scholar]
- Zhu, C.; Sheng, W. Wearable sensor-based hand gesture and daily activity recognition for robot-assisted living. IEEE Trans. Syst. Man, Cybern. Part A Syst. Humans 2011, 41, 569–573. [Google Scholar] [CrossRef]
- Liu, R.; Liu, M. Recognizing human activities based on multi-sensors fusion. In Proceedings of the 4th International Conference on Bioinformatics and Biomedical Engineering (iCBBE), Chengdu, China, 18–20 June 2010; pp. 1–4. [Google Scholar]
- Li, M.; Rozgić, V.; Thatte, G.; Lee, S.; Emken, A.; Annavaram, M.; Mitra, U.; Spruijt-Metz, D.; Narayanan, S. Multimodal physical activity recognition by fusing temporal and cepstral information. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 369. [Google Scholar] [PubMed]
- Ross, A.A.; Nandakumar, K.; Jain, A.K. Handbook of Multibiometrics; Springer Science & Business Media: Boston, MA, USA, 2006; Volume 6. [Google Scholar]
- Song, B.; Kamal, A.T.; Soto, C.; Ding, C.; Farrell, J.A.; Roy-Chowdhury, A.K. Tracking and activity recognition through consensus in distributed camera networks. IEEE Trans. Image Process. 2010, 19, 2564–2579. [Google Scholar] [CrossRef] [PubMed]
- Lester, J.; Choudhury, T.; Kern, N.; Borriello, G.; Hannaford, B. A hybrid discriminative/generative approach for modeling human activities. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, Edinburgh, UK, 30 July–5 August 2005; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2005; pp. 766–772. [Google Scholar]
- Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. CVPR (1) 2001, 1, 511–518. [Google Scholar]
- Holte, R.C. Very simple classification rules perform well on most commonly used datasets. Mach. Learn. 1993, 11, 63–90. [Google Scholar] [CrossRef]
- Manikandan, S. Measures of dispersion. J. Pharmacol. Pharmacother. 2011, 2, 315. [Google Scholar] [CrossRef]
- Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Activity recognition using cell phone accelerometers. ACM SigKDD Explor. Newsl. 2011, 12, 74–82. [Google Scholar] [CrossRef]
- Bachlin, M.; Plotnik, M.; Roggen, D.; Maidan, I.; Hausdorff, J.M.; Giladi, N.; Troster, G. Wearable assistant for Parkinson’s disease patients with the freezing of gait symptom. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 436–446. [Google Scholar] [CrossRef]
- Gaglio, S.; Re, G.L.; Morana, M. Human activity recognition process using 3-D posture data. IEEE Trans. Hum. Mach. Syst. 2015, 45, 586–597. [Google Scholar] [CrossRef]
- Kröse, B.; Van Kasteren, T.; Gibson, C.; Van den Dool, T. Care: Context awareness in residences for elderly. In Proceedings of the International Conference of the International Society for Gerontechnology, Pisa, Italy, 4–6 June 2008; pp. 101–105. [Google Scholar]
- Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software: An update. ACM SIGKDD Explor. Newsl. 2009, 11, 10–18. [Google Scholar] [CrossRef]
- iBeacon Team. Estimote iBeacon. Available online: https://estimote.com (accessed on 29 March 2019).
- Guo, H.; Chen, L.; Peng, L.; Chen, G. Wearable sensor based multimodal human activity recognition exploiting the diversity of classifier ensemble. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany, 12–16 September 2016; pp. 1112–1123. [Google Scholar]
- Catal, C.; Tufekci, S.; Pirmit, E.; Kocabag, G. On the use of ensemble of classifiers for accelerometer-based activity recognition. Appl. Soft Comput. 2015, 37, 1018–1022. [Google Scholar] [CrossRef]
- Kaluža, B.; Mirchevska, V.; Dovgan, E.; Luštrek, M.; Gams, M. An agent-based approach to care in independent living. In Proceedings of the International Joint Conference on Ambient Intelligence, Malaga, Spain, 10–12 November 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 177–186. [Google Scholar]
- Ravi, D.; Wong, C.; Lo, B.; Yang, G.Z. Deep learning for human activity recognition: A resource efficient implementation on low-power devices. In Proceedings of the IEEE 13th International Conference on Wearable and Implantable Body Sensor Networks (BSN), San Francisco, CA, USA, 14–17 June 2016; pp. 71–76. [Google Scholar]
- Lockhart, J.W.; Weiss, G.M.; Xue, J.C.; Gallagher, S.T.; Grosner, A.B.; Pulickal, T.T. Design considerations for the WISDM smart phone-based sensor mining architecture. In Proceedings of the Fifth International Workshop on Knowledge Discovery from Sensor Data, San Diego, CA, USA, 21 August 2011; pp. 25–33. [Google Scholar]
- Zappi, P.; Lombriser, C.; Stiefmeier, T.; Farella, E.; Roggen, D.; Benini, L.; Tröster, G. Activity recognition from on-body sensors: accuracy-power trade-off by dynamic sensor selection. In Wireless Sensor Networks; Springer: Berlin/Heidelberg, Germany, 2008; pp. 17–33. [Google Scholar]
- Seidenari, L.; Varano, V.; Berretti, S.; Del Bimbo, A.; Pala, P. Recognizing actions from depth cameras as weakly aligned multi-part bag-of-poses. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Portland, OR, USA, 23–28 June 2013; pp. 479–485. [Google Scholar]
- Cappelletti, A.; Lepri, B.; Mana, N.; Pianesi, F.; Zancanaro, M. A multimodal data collection of daily activities in a real instrumented apartment. In Proceedings of the Workshop Multimodal Corpora: From Models of Natural Interaction to Systems and Applications (LREC’08), Marrakech, Morocco, 26–30 May 2008; pp. 20–26. [Google Scholar]
- Kumar, J.; Li, Q.; Kyal, S.; Bernal, E.A.; Bala, R. On-the-fly hand detection training with application in egocentric action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, MA, USA, 7–12 June 2015; pp. 18–27. [Google Scholar]
- Yang, A.Y.; Jafari, R.; Sastry, S.S.; Bajcsy, R. Distributed recognition of human actions using wearable motion sensor networks. J. Ambient. Intell. Smart Environ. 2009, 1, 103–115. [Google Scholar] [Green Version]
- Banos, O.; Garcia, R.; Holgado-Terriza, J.A.; Damas, M.; Pomares, H.; Rojas, I.; Saez, A.; Villalonga, C. mHealthDroid: A novel framework for agile development of mobile health applications. In Proceedings of the International Workshop on Ambient Assisted Living, Belfast, UK, 2–5 December 2014; Springer: Cham, Switzerland, 2014; pp. 91–98. [Google Scholar]
- Reiss, A.; Stricker, D. Introducing a new benchmarked dataset for activity monitoring. In Proceedings of the 16th International Symposium on Wearable Computers (ISWC), Newcastle, UK, 18–22 June 2012; pp. 108–109. [Google Scholar]
- Cook, D.J. CASAS Smart Home Project. Available online: http://www.ailab.wsu.edu/casas/ (accessed on 5 April 2019).
- Ofli, F.; Chaudhry, R.; Kurillo, G.; Vidal, R.; Bajcsy, R. Berkeley MHAD: A comprehensive multimodal human action database. In Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV), Tampa, FL, USA, 15–17 January 2013; pp. 53–60. [Google Scholar]
- Chen, C.; Jafari, R.; Kehtarnavaz, N. Utd-mhad: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 168–172. [Google Scholar]
- Roggen, D.; Calatroni, A.; Rossi, M.; Holleczek, T.; Förster, K.; Tröster, G.; Lukowicz, P.; Bannach, D.; Pirkl, G.; Ferscha, A.; et al. Collecting complex activity datasets in highly rich networked sensor environments. In Proceedings of the Seventh International Conference on Networked Sensing Systems (INSS), Kassel, Germany, 15–18 June 2010; pp. 233–240. [Google Scholar]
- Weinland, D.; Boyer, E.; Ronfard, R. Action recognition from arbitrary views using 3d exemplars. In Proceedings of the 11th IEEE International Conference on Computer Vision (ICCV 2007), Rio de Janeiro, Brazil, 14–20 October 2007; pp. 1–7. [Google Scholar]
- Goodwin, M.S.; Haghighi, M.; Tang, Q.; Akcakaya, M.; Erdogmus, D.; Intille, S. Moving towards a real-time system for automatically recognizing stereotypical motor movements in individuals on the autism spectrum using wireless accelerometry. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Seattle, WA, USA, 13–17 September 2014; pp. 861–872. [Google Scholar]
- Aguileta, A.A.; Brena, R.F.; Mayora, O.; Molino-Minero-Re, E.; Trejo, L.A. Virtual Sensors for Optimal Integration of Human Activity Data. Sensors 2019, 19, 2017. [Google Scholar] [CrossRef] [PubMed]
- Aguileta, A.A.; Gómez, O.S. Software Engineering Research in Mexico: A Systematic. Int. J. Softw. Eng. Appl. 2016, 10, 75–92. [Google Scholar]
Criterion | Gravina [20] | Chen [30] | Shivappa [31] | Ours |
---|---|---|---|---|
Classification at the data level | Yes | Yes | No | Yes |
Classification at the feature level | Yes | Yes | Yes | Yes |
Classification at the decision level | Yes | Yes | Yes | Yes |
Classification signal enhancement and sensor level fusion strategies, classification at the classifier level, classification facilitating the natural interaction of the human-computer and classification exploitation of complementary information through modalities | No | No | Yes | No |
Mixed fusion | No | No | Yes | Yes |
Physical activity recognition | Yes | Yes | No | Yes |
Emotion recognition | Yes | No | Yes | No |
Speech recognition | No | No | Yes | No |
Tracking | No | No | Yes | Yes |
Biometrics | No | No | Yes | No |
Meeting scene analysis | No | No | Yes | No |
Fusion in the context of general health | Yes | No | No | Yes |
Fusion characteristics | Yes | No | No | No |
Fusion parameters | Yes | Yes | Yes | No |
Type of sensors | Wearable | Depth cameras and inertial sensors | Microphones and cameras | External and wearable |
Activities | Yes | Yes | No | Yes |
Datasets | No | Yes | Yes | Yes |
Classifiers | Yes | Yes | Yes | Yes |
Metrics | No | Yes | Yes | Yes |
Explanation of fusion methods | General | General | General | Detailed |
Homogeneous sensors vs heterogeneous sensors | No | No | No | Yes |
Automatic feature extraction vs Manual feature extraction | No | No | No | Yes |
Unmixed Fusion vs Mixed fusion | No | No | No | Yes |
Search String |
---|
(multi or diverse) AND sensor AND data AND (fusion OR combine) AND human AND (activity OR activities) AND (recognition OR discover OR recognize) |
Criteria | Description |
---|---|
IC1 | Include papers whose titles are related to the recognition of human activities through multiple modalities |
IC2 | Include papers that contain terms related with the defined terms in the search string. |
IC3 | Include papers whose abstracts are related to the recognition of human activities through multiple sensors |
IC4 | Include publications that have been partially or fully read. |
EC1 | Exclude documents written in languages other than English |
Source | Document Results | Relevant Papers |
---|---|---|
Scopus | 78 | 33 |
Unmixed Group | Acurracy | Mixed Group | Accuracy |
---|---|---|---|
Data-level fusion | |||
(1) TLSF | |||
Feature-level fusion | |||
(2) FA, (3) FA-PCA, (4) TF, (5) DRP and JSR, (6) SVMBMF | |||
Decision-level fusion | |||
(7) LBEL, (8) SMMKL, (9) a-stack, (10) Vot, (11) AdaBoost, (12) MulVS, (13) HWC, (14) Prod, (15) Sum, (16) Max, (17) Min, (18) Ran, (19) WA, (20) CMMSDF, (21) MFN, (22) GABCEO, (23) WLOGP, (24) ADPR, (25) DARA, (26) ARMBMWS, (27) PARS and (28) DARTC | Minimum: 0.664 Average: 0.95 Maximum: 1 | (1) TF-RDA, (2) UbiMonitorIF-FA, (3) GABCF-FC, (4) MBSSAHC-FA, and (5) DGAMHA-FA | Minimum: 0.927 Average: 0.95 Maximum: 0.971 |
Homogeneous Fused Sensors Group | Accuracy | Heterogeneous Fused Sensors Group | Accuracy |
---|---|---|---|
Feature-level fusion | Minimum: 0.664 Average: 0.923 Maximum: 1 | Data-level fusion | Minimum: 0.881 Average: 0.962 Maximum: 1 |
(1) FA | (1) TLSF | ||
Feature-level fusion | |||
(2) FA, (3) FA-PCA, (4) TF, (5) DRP and JSR, and (6) SVMBMF | |||
Decision-level Fusion | Decision-level Fusion | ||
(2) SMMKL, (3) Vot, (4) HWC, (5) CMMSDF, (6) ADPR, (7) ARMBMWS, and (8) DARTC | (7) LBEL, (8) A-stack, (9) Vot, (10) AdaBoost, (11) MulVS, (12) Prod, (13) Sum, (14) Max, (15) Min, (16) Ran, (17) WA, (18) MFN, (19) GABCEO, (20) WLOGP, (21) DARA, and (22) PARS | ||
Two-level Fusion | Two-level Fusion | ||
(9) TF-RDA, (10) MBSSAHC-FA and (11) UbiMonitorIF-FA | (23) TF-RDA, (24) GABCF-FC and (25) DGAMHA-FA |
Manual Feature Extraction | Accuracy | Automatic Feature Extraction | Accuracy | Manual and Automatic Extraction of Features | Accuracy |
---|---|---|---|---|---|
Data-Level Fusion | Feature-Level Fusion | Feature-Level Fusion | |||
(1) TLSF | (1) TF | ||||
Feature-level Fusion | Decision-level Fusion | ||||
(2) FA, (3) FA-PCA, (4) DRP and JSR, (5) SVMBMF | (2) a-stack | ||||
Decision-level Fusion | Two-level fusion | ||||
(6) LBEL, (7) SMMKL, (8) Vot, (9) AdaBoost, (10) MulVS, (11) HWC, (12) Prod, (13) Sum, (14) Max, (15) Min, (16) Ran, (17) WA, (18) CMMSDF, (19) MFN, (20) GABCEO, (21) WLOGP, (22) ADPR, (23) DARA, (24) ARMBMWS, and (25) PARS | Minimum: 0.664 Average: 0.95 Maximum: 1 | (3) TF-RDA | Minimum: 0.923 Average: 0.96 Maximum: 1 | (1) FA | 0.986 |
Two-level fusion | |||||
(26) MBSSAHC-FA, (27) ubiMonitorIF-FA, (28) GABCF-FC and (29) DGAMHA-FA |
Fusion Method | Activities of Daily Life | Predetermined Laboratory Exercises | Situation in the Medical Environment |
---|---|---|---|
Data-level fusion | |||
TLSF | Yes | ||
Feature-level fusion | |||
FA | Yes | Yes | Yes |
FA-PCA | Yes | ||
TF | Yes | ||
DRP and JSR | Yes | ||
SVMBMF | Yes | ||
Decision-level fusion | |||
LBEL | Yes | ||
SMMKL | Yes | ||
a-stack | Yes | ||
Vot | Yes | ||
AdaBoost | Yes | ||
MulVS | Yes | Yes | |
HWC | Yes | ||
Prod | Yes | ||
Sum | Yes | ||
Max | Yes | ||
Min | Yes | ||
Ran | Yes | ||
WA | Yes | ||
CMMSDF | Yes | ||
MFN | Yes | ||
GABCEO | Yes | ||
WLOGP | Yes | ||
ADPR | Yes | ||
DARA | Yes | ||
ARMBMWS | Yes | ||
PARS | Yes | ||
DARTC | Yes | ||
Two-level fusion | |||
TF-RDA | Yes | ||
MBSSAHC-FA | Yes | ||
UbiMonitorIF-FA | Yes | ||
GABCF-FC | Yes | ||
DGAMHA-FA | Yes |
Ref | Fusion Method | Sensors | Activities | DId | Classifiers | Metrics |
---|---|---|---|---|---|---|
Data-level fusion | ||||||
[106] | TLSF | WiFi, and Acc | (1) following relations and (2) group leadership | 1 | SVM | Error %: 7 |
Feature-level fusion | ||||||
[108] | FA | Accs | (1) walking to falling to lying, and (2) sitting to falling to sitting down | 2 | SVM | Accuracy: 0.950 Detection rate: 0.827 False alarm rate: 0.05 |
[115] | FA | Accs | (1) walking, (2) upstairs, (3) downstairs, (4) sitting, (5) standing, and (6) lLying | 3 | KNN | Recall: 0.624 Precision: 0.941 |
[115] | FA | Gyrs | (1) walking, (2) upstairs, (3) downstairs, (4) sitting, (5) standing, and (6) lying down | 3 | KNN | Recall: 0.464 Precision: 0.852 |
[22] | FA | Acc, and Gyr | (1) walking, (2) sitting, (3) standing, (4) jogging, (5) biking, (6) walking upstairs, and (7) walking downstairs | 4 | BN | Accuracy: >0.6 and <1 |
[22] | FA | Acc, and Gyr | (1) walking, (2) sitting, (3) standing, (4) jogging, (5) biking, (6) walking upstairs, and (7) walking downstairs | 4 | NB | Accuracy: >0.4 and <1 |
[22] | FA | Acc, and Gyr | (1) walking, (2) sitting, (3) standing, (4) jogging, (5) biking, (6) walking upstairs, and (7) walking downstairs | 4 | LR | Accuracy: >0.6 and <1 |
[22] | FA | Acc, and Gyr | (1) walking, (2) sitting, (3) standing, (4) jogging, (5) biking, (6) walking upstairs, and (7) walking downstairs | 4 | SVM | Accuracy: >0.6 and <1 |
[22] | FA | Acc, and Gyr | (1) walking, (2) sitting, (3) standing, (4) jogging, (5) biking, (6) walking upstairs, and (7) walking downstairs | 4 | KNN | Accuracy: >0.8 and <1 |
[22] | FA | Acc, and Gyr | (1) walking, (2) sitting, (3) standing, (4) jogging, (5) biking, (6) walking upstairs, and (7) walking downstairs | 4 | DT | Accuracy: >0.8 and <1 |
[22] | FA | Acc, and Gyr | (1) walking, (2) sitting, (3) standing, (4) jogging, (5) biking, (6) walking upstairs, and (7) walking downstairs | 4 | RFC | Accuracy: >0.8 and <1 |
[22] | FA | Acc, and Gyr | (1) walking, (2) sitting, (3) standing, (4) jogging, (5) biking, (6) walking upstairs, and (7) walking downstairs | 4 | RulBC | Accuracy: >0.8 and <1 |
[23] | FA | Acc | (1) walk,(2) jog, (3) ascend stairs, (4) descend stairs, (5) sit, and (6) stand | 5 | CNN-ANN-SoftMax | Accuracy: 0.986 Precision:0.975 Recall:0.976 |
[23] | FA | Acc, and Gyr | (1) casual movement, (2) cycling, (3) no acivity (Idle), (4) public transport, (5) running, (6) standing and (7) walking | 6 | CNN-ANN-SoftMax | Accuracy: 0.957 Precision: 0.930 Recall: 0.933 |
[23] | FA | Acc | (1) walking, (2) jogging, (3) stairs, (4) sitting, (5) standing, and (6) lying Down | 7 | CNN-ANN-SoftMax | Accuracy: 0.927 Precision: 0.897 Recall: 0.882 |
[23] | FA | Acc | (1) write on notepad, (2) open hood, (3) close hood, (4) check gaps on the front door, (5) open left front door, (6) close left front door, (7) close both left door, (8) check trunk gaps, (9) open and close trunk, and (10) check steering | 8 | CNN-ANN-SoftMax | Accuracy: 0.953 Precision: 0.949 Recall: 0.946 |
[23] | FA | Acc | freezing of gait (FOG) symptom | 9 | CNN-ANN-SoftMax | Accuracy: 0.958 Precision: 0.826 Recall: 0.790 |
[109] | FA | Kc | (1) catch cap, (2) toss paper, (3) take umbrella, (4) walk, (5) phone call, (6) drink, (7) sit down, and (8) stand | 10 | SVM | Accuracy: 1 |
[109] | FA | Kc | (1) wave, (2) drink from a bottle, (3) answer phone, (4) clap, (5) tight lace, (6) sit down, (7) stand up, (8) read watch, and (9) bow | 11 | SVM | Accuracy: 0.904 |
[112] | FA | Acc, Hr, and Av | (1) Lying: Lying down resting; (2) low whole body motion (LWBM): Sitting resting, sitting stretching, standing stretching, desk work, reading, writing, working on a PC, watching TV, sitting fidgeting legs, standing still, bicep curls, shoulder press; (3) high whole body motion (HWBM): Stacking groceries, washing dishes, preparing a salad, folding clothes, cleaning and scrubbing, washing windows, sweeping, vacuuming; (4) Walking; (5) Biking; and (6) Running | 12 | DT | Accuracy: 0.929 Sensitivity:0.943 Specificity:0.980 |
[113] | FA | Accs, and Res | (1) Computer work, (2) Filing papers, (3) Vacuuming, (4) Moving the box, (5) Self-paced walk, (6) Cycling 300 kpm, (7) Cycling 600 kpm, (8) Level treadmill walking (3 mph), (9) Treadmill walking (3 mph and 5% grade), (10) Level treadmill waking (4 mph), (11) Treadmill walking (4 mph and 5% grade), (12) Level treadmill running(6 mph), (13) Singles tennis against a practice wall, and (14) Basketball | 13 | SVM | Accuracy: 0.79 |
[114] | FA | Mics and Vids | (1) eating-drinking, (2) reading, (3) ironing, (4) cleaning, (5) phone answering, and (6) TV watching | 14 | GMM | Accuracy: 0.6597 |
[110] | FA-PCA | Acc, Mag, Gyr, and Press | (1) sitting, (2) standing, (3) walking, (4) running, (5) cycling, (6) stair descent, (7) stair ascent, (8) elevator descent, and (9) elevator ascent | 15 | DT | Accuracy: 0.894 |
[110] | FA-PCA | Acc, Mag, Gyr, and Press | (1) sitting, (2) standing, (3) walking, (4) running, (5) cycling, (6) stair descent, (7) stair ascent, (8) elevator descent, and (9) elevator ascent | 15 | MLP | Accuracy: 0.928 |
[110] | FA-PCA | Acc, Mag, Gyr, and Press | (1) sitting, (2) standing, (3) walking, (4) running, (5) cycling, (6) stair descent, (7) stair ascent, (8) elevator descent, and 9) elevator ascent | 15 | SVM | Accuracy: 0.928 |
[110] | FA-PCA | Acc, Mag, Gyr, and Press | (1) sitting, (2) standing, (3) walking, (4) running, (5) cycling, (6) stair descent, (7) stair ascent, (8) elevator descent, and (9) elevator ascent | 15 | NB | Accuracy: 0.872 |
[111] | FA-PCA | IMU and Press | (1) sitting, (2) standing, and (3) walking | 16 | SVM | Accuracy: 0.99 |
[48] | TF | Vid and Mot | activity of self-injection of insulin includes 7 action class: (1) Sanitize hand, (2) Roll insulin bottle (3) Pull air into syringe, (4) Withdraw insulin, (5) Clean injection site, (6) Inject insulin, and (7) Dispose needle | 17 | CNN-LSTM-Softmax | Accuracy: 1 |
[119] | DRP and JSR | Acc and Gyr | (1) Stand, (2) Sit, (3) Lie down, (4) Walk forward, (5) Walk left-circle, (6) Walk right-circle, (7) Turn left, (8) Turn right, (9) Go upstairs, (10) Go downstairs, (11) Jog, (12) Jump, and (13) Push wheelchair | 18 | Accuracy: 0.887 | |
[120] | SVM BMF | Acc and Ven | (1) Computer work, (2) filing papers, (3) vacuuming, (4) moving boxes, (5) self-paced walk, (6) cycling, (7) treadmill, (8) backed ball, and (10) tennis | 19 | SVM | Accuracy: 0.881 |
Ref | Fusion Method | Sensors | Activities | DId | Classifiers | Metrics |
---|---|---|---|---|---|---|
Decision-level fusion | ||||||
[123] | LBEL | Acc, and iBeacon [156] | (1) standing, (2) walking, (3) cycling, (4) lying, (5) sitting, (6) exercising, (7) prepare food, (8) dining, (9) watching TV, (10) prepare clothes, (11) studying, (12) sleeping, (13) bathrooming, (14) cooking, (15) past times, and (16) random | 20 | DT | Accuracy: 0.945 |
[108] | SMM KL | Accs | (1) walking to falling to lying, and (2) sitting to falling to sitting down | 3 | MKL-SVM | Accuracy: 0.946 Detection rate: 0.347 False alarm rate: 0.05 |
[157] | a-stack | Acc, Gyr, ECG, and Mag | (1) lying, (2) sitting/standing, (3) walking, (4) running, (5) cycling, and (6) other | 21 | NN, LR | Accuracy: 0.923 |
[157] | a-stack | Acc, Hr, Gyr, and Mag | (1) lying, (2) sitting/standing, (3) walking, (4) running, (5) cycling, and (6) other | 22 | NN, LR | Accuracy: 0.848 |
[158] | Vot | Acc | (1) Walking, (2) Jogging, (3) Upstairs, (4) Downstairs, (5) Sitting, and (6) Standing | 5 | MLP, LR, and DT | Accuracy: 0.916 AUC: 0.993 F1-score: 0.918 |
[138] | Vot | Acc, Alt, Tem, Gyr, Bar, lig, and Hr | (1) brushing teeth, (2) exercising, (3) feeding, (4) ironing, (5) reading (6) scrubbing, (7) sleeping, (8) using stairs, (9) sweeping, (10) walking, (11) washing dishes, (12) watching TV, and (13) wiping | 22 | MLP, RBF, and SVM | Accuracy: 0.971 |
[137] | Vot | Mot, and Tem | (1) Wash Dishes, (2) Watch TV, (3) Enter Home, (4) Leave Home, (5) Cook Breakfast, (6) Cook Lunch, (7) Group Meeting, and (8) Eat Breakfast | 24 | ANN, HMM, CRF, SVM | Accuracy: 0.906 Precision: 0.799 Recall: 0.7971 F1-score: 0.7984 |
[137] | Vot | Mot, Door, and Tem | (1) bed to toilet, (2) sleeping, (3) leave home, (4) watch TV, (5) chores, (6) desk activity, (7) dining, (8) evening medicines, (9) guest bathroom, (10) kitchen activity, (11) master bathroom, (12) Master Bedroom, (13) meditate, (14) morning medicines, and (15) read | 25 | ANN, HMM, CRF, SVM | Accuracy: 0.885 Precision: 0.801 Recall: 0.8478 F1-score: 0.8235 |
[137] | Vot | Mot, item, Door, tem, ElU, and Lig | (1) meal preparation, (2) sleeping, (3) cleaning, (4) work, (5) grooming, (6) shower, and (7) wakeup | 26 | ANN, HMM, CRF, SVM | Accuracy: 0.855 Precision: 0.752 Recall: 0.7274 F1-score: 0.7394 |
[137] | Ada Boost | Mot and Tem | (1) wash dishes, (2) watch TV, (3) enter home, (4) leave home, (5) cook breakfast, (6) cook lunch, (7) group meeting, and (8) eat breakfast | 24 | DT | Accuracy: 0.912 Precision: 0.844 Recall: 0.7983 F1-score: 0.8206 |
[137] | Ada Boost | Mot, Door, and Tem | (1) bed to toilet, (2) sleeping, (3) leave home, (4) watch TV, (5) chores, (6) desk activity, (7) dining, (8) evening medicines, (9) guest bathroom, (10) kitchen activity, (11) master bathroom, (12) master bedroom, (13) meditate, (14) morning medicines, and (15) read | 25 | DT | Accuracy: 0.875 Precision: 0.824 Recall: 0.8767 F1-score: 0.805 |
[137] | Ada Boost | Mot, item, Door, Tem, ElU, and Lig | (1) meal preparation, (2) sleeping, (3) cleaning, (4) work, (5) grooming, (6) shower, and (7) qakeup | 26 | DT | Accuracy: 0.837 Precision: 0.736 Recall: 0.7174 F1-score: 0.7266 |
[27] | MulVS | Mic, and Acc | (1) mop floor, (2) sweep floor, (3) type on computer keyboard, (4) brush teeth, (5) wash hands, (6) eat chips, and (7) watch TV | 27 | RFC | Accuracy: 0.941 Recall: 0.939 Specificity: 0.99 |
[27] | MulVS | Mic, Acc, and OMCS | (1) jumping in place, (2) jumping jacks, (3) bending, (4) punching, (5) waving two hands, (6) waving one hand, (7) clapping, (8) throwing a ball, (9) sit/stand up, (10) sit down, and (11) stand up | 28 | RFC | Accuracy: 0.995 Recall: 0.995 Specificity: 0.99 |
[27] | MulVS | Acc, Gyr, and Kc | (1) swipe left, (2) swipe right, (3) wave, (4) clap, (5) throw, (6) arm cross, (7) basketball shoot, (8) draw x, (9) draw circle CW, (10) draw circle CCW, (11) draw triangle, (12) bowling, (13) boxing, (14) baseball swing, 15) tennis swing, (16) arm curl, (17) tennis serve, (18) push, (19) knock, (20) catch, (21) pickup throw, (22) jog, (23) walk, (24) sit 2 stand, (25) stand 2 sit, (26) lunge, and (27) squat | 29 | RFC | Accuracy: 0.981 Recall: 0.984 Specificity: 0.99 |
[27] | MulVS | Acc, Gyr, and Mag | (1) stand, (2) walk, (3) sit, and (4) lie | 30 | RFC | Accuracy: 0.925 Recall: 0.905 Specificity: 0.96 |
[134] | HWC | Accs | (1) running, (2) cycling, (3) stretching, (4) strength-training, (5) walking, (6) climbing stairs, (7) sitting, (8) standing and (9) lying down | 31 | KNN | Accuracy: 0.975 |
[138] | Prod | Acc, Alt, Tem, Gyr, Bar, Lig, and Hr | (1) brushing teeth, (2) exercising, (3) feeding, (4) ironing, (5) reading, (6) scrubbing, (7) sleeping, (8) usingstairs, (9) sweeping, (10) walking, (11) washing dishes, (12) watching TV, and (13) wiping | 23 | MLP, RBF, and SVM | Accuracy: 0.972 |
[138] | Sum | Acc, Alt, Tem, Gyr, Bar, Lig, and Hr | (1) brushing teeth, (2) exercising, (3) feeding, (4) ironing, (5) reading, (6) scrubbing, (7) sleeping, (8) usingstairs, (9) sweeping, (10) walking, (11) washing dishes, (12) watching TV, and (13) wiping | 23 | MLP, RBF, and SVM | Accuracy: 0.973 |
[138] | Max | Acc, Alt, Tem, Gyr, Bar, Lig, and Hr | (1) brushing teeth, (2) exercising, (3) feeding, (4) ironing, (5) reading, (6) scrubbing, (7) sleeping, (8) usingstairs, (9) sweeping, (10) walking, (11) washing dishes, (12) watching TV, and (13) wiping | 23 | MLP, RBF, and SVM | Accuracy: 0.971 |
[138] | Min | Acc, Alt, Tem, Gyr, Bar, Lig, and Hr | (1) brushing teeth, (2) exercising, (3) feeding, (4) ironing, (5) reading, (6) scrubbing, (7) sleeping, (8) usingstairs, (9) sweeping, (10) walking, (11) washing dishes, (12) watching TV, and (13) wiping | 23 | MLP, RBF, and SVM | Accuracy: 0.971 |
[138] | Ran | Acc, Alt, Tem, Gyr, Bar, Lig, and Hr | (1) brushing teeth, (2) exercising, (3) feeding, (4) ironing, (5) reading, (6) scrubbing, (7) sleeping, (8) usingstairs, (9) sweeping, (10) walking, (11) washing dishes, (12) watching TV, and (13) wiping | 23 | MLP, RBF, and SVM | Accuracy: 0.969 |
[138] | WA | Acc, Alt, Tem, Gyr, Bar, Lig, and Hr | (1) brushing teeth, (2) exercising, (3) feeding, (4) ironing, (5) reading, (6) scrubbing, (7) sleeping, (8) using stairs, (9) sweeping, (10) walking, (11) washing dishes, (12) watching TV, and (13) wiping | 23 | MLP, RBF, and SVM | Accuracy: 0.971 |
[135] | CMM SDF | Mot | (1) using laptop, (2) watching TV, (3) eating, turning on the stove, and (5) washing dishes | 32 | Accuracy: 1 | |
[136] | MFN | Kc, and Mic | Recognition of objects through human actions | 33 | SVM, and NB | Accuracy: 0.928 F1-score: 0.921 |
[137] | GAB CEO | Mot, and Tem | (1) wash dishes, (2) watch TV, (3) enter home, (4) leave home, (5) cook breakfast, (6) cook lunch, (7) group meeting, and (8) eat breakfast | 24 | ANN, HMM, CRF, SVM | Accuracy: 0.951 Precision: 0.897 Recall: 0.9058 F1-score: 0.9013 |
[137] | GAB CEO | Mot, door, and Tem | (1) bed to toilet, (2) sleeping, (3) leave home, (4) watch TV, (5) chores, (6) desk activity, (7) dining, (8) evening medicines, (9) guest bathroom, (10) kitchen activity, (11) master bathroom, (12) master bedroom, (13) meditate, (14) morning medicines, and (15) read | 25 | ANN, HMM, CRF, SVM | Accuracy: 0.919 Precision: 0.827 Recall: 0.8903 F1-score: 0.8573 |
[137] | GAB CEO | Mot, item, Door, Tem, ElU, and lig | (1) meal preparation, (2) sleeping, (3) cleaning, (4) work, (5) grooming, (6) shower, and (7) wakeup | 26 | ANN, HMM, CRF, SVM | Accuracy: 0.894 Precision: 0.829 Recall: 0.8102 F1-score: 0.8197 |
[139] | WLO GP | Acc, and Gyr | (1) stand, (2) sit, (3) lie down, (4) walk forward, (5) walk left-circle, (6) walk right-circle, (7) turn left, (8) turn right, (9) go upstairs, (10) go downstairs, (11) jog, (12) jump, (13) push wheelchair | 18 | RVM | Accuracy: 0.9878 |
[141] | ADPR | Accs | (1) walk, (2) run, (3) sit, (4) stand, (5) fiddle, and (6) rest | 34 | NB, and GMM | F1-score: 0.926 |
[142] | DARA | Acc, and Gyr | (1) zero-displacement activities AZ = {standing, sitting, lying}; (2) transitional activities AT = {sitting-to-standing, standing-to- sitting, level walking-to-stair walking, stair walking-to-level walking, lying-to-sitting, sitting-to- lying}; and (3) strong dis- placement activities AS = {walking level, walking upstairs, walking downstairs, running} | 35 | ANN, and HMM | Accuracy: 0.983 |
[143] | ARM BMWS | Accs | (1) walking, (2) walking while carrying items, (3) sitting and relaxing, (4) working on computer, (5) standing still, (6) eating or drinking, (7) watching TV, (8) reading, (9) running, (10) bicycling, (11) stretching, (12) strength-training, (13) scrubbing, (14) vacuuming, (15) folding laundry, (16) lying down and relaxing, (17) brushing teeth, (18) climbing stairs, (19) riding elevator, and (20) riding escalator | 31 | NB, and DT | Accuracy: 0.6641 |
[144] | PARS | ECG, and Acc | (1) lying, (2) sitting, (3) sitting fidgeting, (4) standing, (5) standing fidgeting, (6) playing Nintendo Wii tennis, (7) slow walking, 8) brisk walking, and 9) running | 36 | SVM, and GMM | Accuracy: 0.973 |
[146] | DAR TC | Vids | (1) looking at watch, (2) scratching head, (3) sit, (4) wave hand, (5) punch, (6) kick, and (7) pointing a gun | 37 | Bayes rule and Markov chain. | Average probability of correct match: Between 3-1 |
Ref | Fusion Method | Sensors | Activities | DId | Classifiers | Metrics |
---|---|---|---|---|---|---|
Two-level fusion | ||||||
[50] | TF-RDA | Acc, Gyr, and Mag | (1) hand flapping | 38 | CNN-LSTM-Softmax | F1-score: 0.95 |
[50] | TF-RDA | Accs | (1) body rocking, (2) hand flapping or (3) simultaneous body rocking, and hand flapping | 39 | CNN-LSTM-Softmax | F1-score: 0.75 |
[29] | MBS SAHC -FA | Accs | (1) lying, (2) sitting, (3) standing, (4) walking, (5) stairs, (6) transition | 40 | DT, and NB | Accuracy: 0.927 |
[133] | Ubi Moni torIF-FA | Accs | (1) lying, (2) sitting, (3) standing, (4) walking, (5) running, (6) cycling, (7) Nordic walking, (8) ascending stairs, (9) descending stairs, (10) vacuum cleaning, (11) ironing, and (12) rope jumping | 22 | DT, and SMV | Accuracy: 0.95 Precision: 0.937 Recall: 0.929 F1-score:0.93 |
[138] | GABC F-FC | Acc, Alt, Tem, Gyr, Bar, lig, and Hr | (1) brushing teeth, (2) exercising, (3) feeding, (4) ironing, (5) reading, (6) scrubbing, (7) sleeping, (8) using stairs, (9) sweeping, (10) walking, (11) washing dishes, (12) watching TV, and (13) wiping | 23 | MLP, RBF, and SVM | Accuracy: 0.971 |
[147] | DGAM HA-FA | Acc, Mic, Lig, Bar, Hum, Tem, and Com | (1) sitting, (2) standing, (3) walking, (4) jogging, (5) walking up stairs, (6) walking down stairs, (7) riding a bike, (8) driving a car, (9) riding elevator down, and (10) riding elevator up | 41 | Ds, and HMM | Accuracy: 0.95 Precision: 0.99 Recall: 0.91 |
Id | Dataset |
---|---|
1 | Created by Kjærgaard et al. [106] |
2 | Localization Data for Person Activity [159] |
3 | Created by Zebin et al. [115] |
4 | Created by Shoaib et al. [115] |
5 | WISDM v1.1 [151] |
6 | ActiveMiles [160] |
7 | WISDM v2.0 [161] |
8 | Skoda [162] |
9 | Daphnet FoG [152] |
10 | KARD [153] |
11 | Florence3D [163] |
12 | Created by Altini et al. [112] |
13 | Created by John et al. [113] |
14 | ADL corpus collection [164] |
15 | Created by Guiry et al. [110] |
16 | Created by Adelsberger et al. [111] |
17 | ISI [165] |
18 | WARD [166] |
19 | Created by Liu et al. [120] |
20 | Created by Alam et al. [123] |
21 | MHEALTH [167] |
22 | PAMAP2 [168] |
23 | Created by Chernbumroong at al. [138] |
24 | Tulum2009 [169] |
25 | Milan2009 [169] |
26 | TwoSummer2009 [169] |
27 | Created by Garcia-Ceja et al. [27] |
28 | Berkeley MHAD [170] |
29 | UTD-MHAD [171] |
30 | Opportunity [172] |
31 | Created by Bao et al. [121] |
32 | Created by Arnon [135] |
33 | Created by Glodek et al. [136] |
34 | Created by Grokop et al. [141] |
35 | Created by Zhu et al. [142] |
36 | Created by Li et al. [144] |
37 | IXMAS [173] |
38 | Created by Rad et al. [50] |
39 | Real [174] |
40 | Created in eCAALYX project [29] |
41 | Created by Lester et al. [147] |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Aguileta, A.A.; Brena, R.F.; Mayora, O.; Molino-Minero-Re, E.; Trejo, L.A. Multi-Sensor Fusion for Activity Recognition—A Survey. Sensors 2019, 19, 3808. https://doi.org/10.3390/s19173808
Aguileta AA, Brena RF, Mayora O, Molino-Minero-Re E, Trejo LA. Multi-Sensor Fusion for Activity Recognition—A Survey. Sensors. 2019; 19(17):3808. https://doi.org/10.3390/s19173808
Chicago/Turabian StyleAguileta, Antonio A., Ramon F. Brena, Oscar Mayora, Erik Molino-Minero-Re, and Luis A. Trejo. 2019. "Multi-Sensor Fusion for Activity Recognition—A Survey" Sensors 19, no. 17: 3808. https://doi.org/10.3390/s19173808