Seminar
OVERVIEW
INTRODUCTION
WEARABLE COMPUTERS
WEARABLE COMPUTING
WEARABLE ACTIVITY RECOGNITION
ACTIVITY RECOGNITION CHAIN
SHARING ACTIVITY-RECOGNITION SYSTEMS
CONCLUSION AND OUTLOOK
REFERENCES
INTRODUCTION
In robotics, activity-recognition systems can be used to label large robot-generated activity data sets. They enable activity-aware human-robot interactions (HRIs) and open the way to self-learning autonomous robots. The recognition of human activities from body-worn sensors is a key paradigm in wearable computing. In this field, the variability in human activities, sensor deployment characteristics, and application domains has led to the development of best practices and methods that enhance the robustness of activity-recognition systems. Finally, the current challenges in wearable activity recognition are outlined.
WEARABLE COMPUTER
Wearable computers are computers that are worn on the body. They have been applied to areas such as behavioral modeling, health monitoring systems, information technologies and media development. Wearable computers are especially useful for applications that require computational support while the user's hands, voice, eyes or attention are actively engaged with the physical environment. Areas of study include user interface design, augmented reality, pattern recognition, use of wearables for specific applications or disabilities, electronic textiles and fashion design.
WEARABLE COMPUTER (cont)
A wearable computer is a computer that is subsumed into the personal space of the user, is controlled by the user, and has both operational and interactional constancy. It is a device that is always with the user, and into which the user can always enter commands and execute a set of such entered commands. The wearable computer is more than just a wristwatch or regular eyeglasses: it offers the full functionality of a computer system while being worn on the body.
Figure: hardware block diagram of a wearable computer, showing the main unit with its back plane, com port, parallel port, and power supply, and a video camera connected through a frame grabber.
INPUT DEVICE
You hold up your left hand, fingers pointing to the right. The system recognizes that you want to make a call, and projects a dialing pad onto your fingers. You tap the virtual keypad with your right hand to dial the call.
WEARABLE COMPUTING
Wearable computing, as originally presented by Mann in 1996, emphasized a shift in computing paradigm [22]. Computers would no longer be machines separate from the persons using them. Instead, they would become an unobtrusive extension of our very bodies, providing us with additional ubiquitous sensing, feedback, and computational capabilities. Wearable computing never considered implanting sensors or chips into the body. Rather, it emphasizes the view that clothing, which has become an extension of our natural skin, would be the substrate that technology could disappear into. The prevalence of mobile phones now offers an additional vector for on-body sensing and computing [23]. Mann [24] and Starner et al. [25] were among the first to show that complex contextual information can be obtained by interpreting on-body sensor data and that this would lead to novel adaptive applications. A wearable system can perceive activities, defined here to include both gestures and behaviors, from a first-person perspective. This leads to new forms of applications known as activity-based or interaction-based computing [20], [21]. Such applications can offer information or assistance proactively based on the user's situation, as well as support explicit interaction in unobtrusive ways through natural gestures or body movements.
Figure 6 illustrates the effect of sensor placement on the projection of sensor data into the feature space. Abstracting the specific environment in which the system can recognize activities is important to ensure cost-effective deployment on a large scale. Thus, activity-recognition methods should work for a generic class of problems (e.g., in any smart home) rather than a specific instance of the problem class (e.g., a specific smart home). To further increase unobtrusiveness, we argue in the project OPPORTUNITY (http://www.opportunity-project.org, a Future and Emerging Technologies project funded by the European Commission in the Seventh Framework Programme for research) for using opportunistically discovered sensors for activity recognition [17], [50]. The available sensor configuration thus depends on the sensorized objects users carry with them, on the smart clothing they wear, and on the environment in which they are located. For each sensor kind and placement, there is a different sensor-signal to activity-class mapping that an opportunistic activity-recognition system should be able to abstract.

SHARING ACTIVITY-RECOGNITION SYSTEMS
The wearable computing community has developed best practices and novel methods to deal with some forms of variability. In the following subsections, we present a selection of methods developed by various groups, including ours. To share an activity-recognition chain (ARC), there must be a common representation at some stage in the recognition chain. We organize the methods by the level at which they assume a common representation, describing methods that operate at the sensor, feature, classifier, and reasoning levels.
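To make the stages of an ARC concrete, here is a minimal Python sketch of a sensor-to-classifier pipeline on windowed accelerometer data. The function names, window sizes, and the choice of a k-nearest-neighbor classifier are illustrative assumptions, not the specific method of any cited work.

```python
# A minimal ARC sketch: segmentation (sensor level), feature extraction
# (feature level), and classification (classifier level).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def segment(signal, window=64, step=32):
    """Sensor level: cut the raw sensor stream into sliding windows."""
    return [signal[i:i + window] for i in range(0, len(signal) - window + 1, step)]

def features(window):
    """Feature level: summarize each window with simple statistics."""
    return [window.mean(), window.std(), np.abs(np.diff(window)).mean()]

# Stand-ins for a labeled recording: a 1-D accelerometer signal and
# one activity label per window.
rng = np.random.default_rng(0)
signal = rng.normal(size=1024)
windows = segment(signal)
X = np.array([features(w) for w in windows])
y = rng.integers(0, 3, size=len(X))

# Classifier level: map feature vectors to activity classes.
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# Reasoning level (not shown): infer higher-level activities from the
# sequence of predicted action primitives, e.g., with rules or an HMM.
primitives = clf.predict(X)
```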
SENSOR-LEVEL SHARING
This level consists of training an ARC on the first platform and reusing it on the second platform. It assumes that the sensor-signal to activity-class mappings are statistically identical on the two platforms. An ARC trained on a single user is referred to as a user-specific system, and it is known to show degraded performance when deployed to another user. The best practice to realize an ARC that generalizes to new situations consists in training it on a data set containing the variability that will be seen when the system is deployed. By collecting a data set from multiple users, the ARC can be trained to be user independent. By collecting a data set comprising multiple on-body sensor positions, the ARC can be trained to be independent of sensor placement.
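As an illustration of training on a data set that contains the expected variability, the following sketch evaluates a user-independent ARC with leave-one-user-out cross-validation. The data arrays are random stand-ins, and the classifier choice is an assumption.

```python
# Sketch: estimating user independence with leave-one-user-out
# cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))        # feature vectors, one per window
y = rng.integers(0, 3, size=300)     # activity labels
users = np.repeat(np.arange(6), 50)  # which of six users produced each window

# Each fold trains on five users and tests on the held-out one, so the
# averaged score estimates performance on a previously unseen user.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=users, cv=LeaveOneGroupOut())
print(scores.mean())
```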
SENSOR-LEVEL SHARING (cont)
The previous approach requires foreseeing all the variations likely to be encountered at run time. Thus, we proposed an unsupervised self-calibration approach that removes this requirement. The self-calibration approach operates as follows: the ARC continuously operates and recognizes the occurrence of activities/gestures; upon detection of an activity/gesture, the corresponding sensor data are stored as training data; the classifiers are then retrained, including this new training data, using an incremental learning algorithm. Thus, the activity models are optimized upon each activity instance to better model that activity.
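A minimal sketch of this self-calibration loop follows, assuming a probabilistic classifier with incremental updates and a confidence threshold chosen purely for illustration; the detection and retraining strategy of the original work may differ.

```python
# Sketch of the self-calibration loop: confidently recognized instances
# are fed back as training data via incremental learning.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
classes = np.array([0, 1, 2])

# Initial supervised training on a small labeled set (stand-in data).
clf = SGDClassifier(loss="log_loss")
clf.partial_fit(rng.normal(size=(60, 4)), rng.integers(0, 3, size=60),
                classes=classes)

# Deployment: each confidently recognized instance is fed back as
# training data, incrementally refining the activity models.
for x in rng.normal(size=(200, 4)):
    x = x.reshape(1, -1)
    proba = clf.predict_proba(x)[0]
    label = int(proba.argmax())
    if proba[label] > 0.9:            # treat as a detected activity instance
        clf.partial_fit(x, [label])   # incremental retraining
```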
FEATURE-LEVEL SHARING
At this level, the ARC devised for the first platform is translated to the second platform from the feature level onward. Use cases for sharing ARCs at this level include systems where the sensor modalities on the two platforms do not coincide, or where sensors show large on-body displacement for which a placement-independent ARC cannot be envisioned. It has been shown that features robust to on-body displacement can be designed using body models and by fusing multiple sensors, such as an accelerometer and a gyroscope, and that a specific sensor modality (a magnetic field sensor) can be replaced by another (a gyroscope). On-body sensor placement self-characterization can also be used to select, among a number of preprogrammed ARCs, the one most suited to the detected sensor placement. In robotics, data from different sensors can likewise be converted into identical abstract representations.
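The simplest example of a displacement-robust feature is the magnitude of the 3-D acceleration vector, which is invariant to how the sensor is oriented at its mounting point. The body-model-based fusion mentioned above goes further than this; the sketch below only demonstrates the rotation-invariance idea on stand-in data.

```python
# Sketch: the acceleration magnitude is invariant to sensor rotation,
# making it a simple displacement-robust feature.
import numpy as np

def magnitude(acc_xyz):
    """acc_xyz: (n, 3) array of accelerometer samples."""
    return np.linalg.norm(acc_xyz, axis=1)

rng = np.random.default_rng(2)
acc = rng.normal(size=(128, 3))                 # one window of accelerometer data
rot, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # a random orthogonal rotation
rotated = acc @ rot.T                           # same motion, re-oriented sensor

# The feature is numerically identical for both orientations.
print(np.allclose(magnitude(acc), magnitude(rotated)))  # True
```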
REASONING-LEVEL SHARING
The reasoning program that infers higher-level activities from spotted action primitives is shared between platforms. As the environments in which the two platforms operate may lead to the detection of semantically different action primitives, a direct transfer of the reasoning is not always possible. Carrying out a prior concept matching can address this. The system first automatically finds how sensor activations in different environments relate to identical higher-level concepts using statistical approaches. A recognition system can also learn an internal hierarchical representation of activities or concepts [60], upon which reasoning is performed. Hu et al. further report on using Web mining to match concepts [61]. Advances in merging concepts in ontologies support the transfer of activity-recognition reasoning across different conceptual spaces. In robotics, these principles may allow robots to exchange the knowledge they have individually gained about the world. This may be especially relevant when principles of autonomous mental development are used, as robots can develop distinct world representations according to their capabilities.
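A toy sketch of statistical concept matching: sensors from two environments are paired by comparing how their activations relate to shared higher-level activity concepts. The sensor names, probability tables, and cosine-similarity measure are assumptions for illustration, not the cited methods.

```python
# Toy sketch of statistical concept matching across two environments.
import numpy as np

# For each sensor: P(higher-level concept | sensor activation),
# over the shared concepts (cook, eat, sleep).
home_a = {"stove_switch": np.array([0.80, 0.15, 0.05]),
          "bed_pressure": np.array([0.05, 0.05, 0.90])}
home_b = {"oven_door":    np.array([0.75, 0.20, 0.05]),
          "mattress_mat": np.array([0.10, 0.05, 0.85])}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(src, dst):
    """Pair each source sensor with the most similarly behaving target sensor."""
    return {s: max(dst, key=lambda d: cosine(p, dst[d])) for s, p in src.items()}

print(match(home_a, home_b))
# {'stove_switch': 'oven_door', 'bed_pressure': 'mattress_mat'}
```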