
iFace: Hand-Over-Face Gesture Recognition Leveraging Impedance Sensing

Mengxi Liu (mengxi.liu@dfki.de), Hymalai Bello (hymalai.bello@dfki.de), Bo Zhou (bo.zhou@dfki.de), Paul Lukowicz (paul.lukowicz@dfki.de), and Jakob Karolus (jakob.karolus@dfki.de), German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany, 67663
(2024)
Abstract.

Hand-over-face gestures can convey important implicit cues during conversations, such as frustration or excitement. However, in situations where interlocutors are not visible, such as phone calls or textual communication, the potential meaning contained in these gestures is lost. In this work, we present iFace, an unobtrusive, wearable impedance-sensing solution for recognizing different hand-over-face gestures. In contrast to most existing works, iFace does not require the placement of sensors on the user’s face or hands. Instead, we propose a novel sensing configuration on the shoulders, where the electrodes remain invisible to both the user and outside observers. The system monitors the shoulder-to-shoulder impedance variation caused by gestures through electrodes attached to each shoulder. We evaluated iFace in a user study with eight participants, collecting six kinds of hand-over-face gestures with different meanings. Using a convolutional neural network and user-dependent classification, iFace reaches a macro F1 score of 82.58%. We discuss potential application scenarios of iFace as an implicit interaction interface.

Impedance Sensing, Hand-over-Face gesture recognition
journalyear: 2024; copyright: rightsretained; conference: The Augmented Humans International Conference, April 4–6, 2024, Melbourne, VIC, Australia; booktitle: The Augmented Humans International Conference (AHs 2024), April 4–6, 2024, Melbourne, VIC, Australia; doi: 10.1145/3652920.3652923; isbn: 979-8-4007-0980-7/24/04; ccs: Human-centered computing, Ubiquitous and mobile computing
Figure 1. Schematic diagram and potential application scenarios of iFace. 1 and 2: the sensing principle of iFace is based on the body impedance variation caused by hand-face contact. 3: the impedance signal variation depends on the contact position and area when performing hand-over-face gestures. 4a: iFace enabling implicit interaction between users and their TV, e.g. the channel changes automatically when boredom is detected. 4b: hand-over-face gestures recognized by iFace during an online meeting can replace low-quality camera images to convey bodily cues in real-time. 4c: iFace as an additional sensing modality for multi-modal inference of a user’s cognitive state in teaching scenarios.

1. Introduction

Implicit interactions, which convey sensations such as frustration or excitement, are manifested and encoded through verbal and nonverbal means, including voice, facial expressions, and body language (Naik and Mehta, 2020). For example, people often hold their hands near their faces during conversations, forming gestures as a kind of nonverbal body language. Spontaneous self-touches or self-grooming gestures are believed to be related to formulating thoughts, information processing, and emotion regulation (Freedman, 1977). Empirical evidence substantiates that certain hand-over-face gestures function as cues facilitating the recognition of cognitive mental states (Mahmoud et al., 2016). However, in situations where interlocutors are not visible, such as phone calls or textual communication, the potential meaning contained in hand-over-face gestures is lost. Such non-visual communication channels therefore suffer from a lack of contextual and emotional awareness (Hassib et al., 2017a). Although emoticons are widely used to express emotion explicitly, they are still frequently misunderstood between users (Lu et al., 2016).

Facial expression and hand gesture recognition have been widely studied. Most of this work is based on computer vision and deep neural networks, achieving remarkable accuracy (Hasani and Mahoor, 2017; Li and Deng, 2020). Yet, these approaches often invade privacy, which limits their use in private and sensitive settings. One privacy-respecting alternative is to represent the user through an avatar, as in Meta Horizon, which requires the user to wear a headset (e.g., Oculus) that is bulky, not ubiquitous, and uncomfortable for daily use. Wearable sensor-based approaches for facial expression and hand gesture recognition can offer a privacy-friendly and ubiquitous alternative. However, most wearable solutions require the sensors to be placed near the face or hands (Bello et al., 2023; Samadiani et al., 2019; Li et al., 2019), which is potentially uncomfortable and has low social acceptance. In contrast, impedance-based sensing allows the electrodes to be placed on covered body parts and enables gesture detection by measuring the impedance between the two electrodes. The potential of impedance-sensing-based solutions for human activity recognition has been demonstrated in several works (Liu et al., 2024, 2023).

In this work, we present iFace, an unobtrusive, wearable impedance-sensing device for hand-over-face gesture recognition. In contrast to most existing works, iFace does not require the placement of sensors on the user’s face or hands. Instead, we propose a novel sensing configuration on the shoulders, allowing the electrodes to remain invisible to both the user and outside observers. iFace extracts hand-over-face gesture information from shoulder-to-shoulder impedance variations and achieves an average macro F1 score of 82.58% for six common gestures with a user-dependent model. Our results highlight that impedance-based recognition of these nonverbal conversational cues is feasible and can contribute to a smoother user experience with digital devices. We present potential applications of iFace to demonstrate how the technology could integrate seamlessly into everyday scenarios.

To this end, we make the following contributions:

  (1)

    We developed iFace, an impedance-based, wearable device for hand-over-face gesture detection and confirmed its feasibility in an experiment with eight participants and six common hand-over-face gestures.

  (2)

    We highlight potential applications of iFace, in particular as an implicit interaction interface for non-verbal communication.

2. Related Work

Hand-over-face gestures are a subset of body language and can convey various emotions and reactions (Abril and Plant, 2007). This section focuses on related work on hand-over-face gesture recognition, which can be grouped into computer vision-based and sensor-based approaches.

For example, in (Lim et al., 2023), an infrared camera (LeapMotion) mounted on the neck takes pictures of the user’s face and recognizes touch positions on face zones (eyes, nose, and mouth) with 92% accuracy. Researchers in (Weng et al., 2021a) proposed an infrared camera fixed on a glasses’ nose bridge to detect hand-to-face gestures. The IR camera looks downward to capture touches on the lower face, and the system detects fingers touching five areas of the lower face (nose, mouth, chin, and left/right cheeks) with an accuracy of more than 90%. The work in (Loorak et al., 2019) presented InterFace, which recognizes different hand-over-face gestures using a smartphone’s front camera, offering additional possibilities for interacting with the phone. These solutions require capturing images of the person to extract features and can achieve high accuracy with deep learning models given high-quality images. However, for sustained real-world use, vision-based methods are affected by image quality (e.g., lighting conditions, stability), usability (e.g., requiring users to photograph their own face), and privacy issues (especially with automatic photo capture).

These disadvantages can be better addressed by sensor-based methods. In (Yan et al., 2019), the authors used the audio signal difference between two earphones (left/right) to detect the hand-to-mouth gesture and trigger a voice input system. In (Xu et al., 2020), another earphone-based approach captures the sound produced when a person’s finger touches their face, thus recognizing hand-to-face gestures including tapping, double-tapping, and swiping. A discreet gesture interaction using smart glasses is proposed in (Lee et al., 2017): electrooculography (EOG) was selected as the sensing modality, and the authors focused on three gestures with reference to the nose (flicking, pushing, and rubbing) with 90% accuracy. In (Yamashita et al., 2017), photo-reflective sensors were mounted on a head-mounted display; the system recognizes the skin deformation caused by the hands touching the cheeks (pushing the face up/down and left/right) with an accuracy higher than 70%. Another photo-reflective approach embeds the sensors in smart glasses to detect face-rubbing gestures with an accuracy of 97% (Masai et al., 2018). In addition, electromyography (EMG) and capacitive sensing have been used to identify which body position a person touches (Matthies et al., 2015).

The solutions discussed above mostly target discrete hand-over-face gestures such as tapping and double-tapping; more complex face-touching gestures that carry contextual information (e.g., boredom, interest) are not supported. Additionally, most existing work requires placing sensing units near the face, which is potentially uncomfortable and raises social acceptance issues. In this work, we present iFace, an unobtrusive alternative for detecting hand-over-face gestures that addresses these challenges.

3. Design Choices for Hand-over-Face Gesture Sensing

Designing for hand-over-face gesture sensing requires striking a balance between sensor characteristics, such as accuracy versus unobtrusiveness, and a comprehensive set of meaningful gestures.

3.1. Choice of Sensor Modality

The most common sensor-based hand-over-face solutions rely on IMUs, since a compact IMU can easily be integrated into a smartwatch to monitor hand movement, for example, for food ingestion activity recognition (Anderez et al., 2018). However, fine-grained hand-over-face gestures, such as detecting hand-face contact and recognizing the touch area, cannot be recognized with a single IMU. Impedance sensing, which leverages the conductive properties of the body, offers a way to recognize these fine gestures by attaching one electrode to each shoulder: contact between hand and face causes an impedance variation whose magnitude is determined by the touch location and area. In addition, electrical impedance measurement is more reliable and resistant to noise than related sensing modalities such as electroencephalography (EEG), electromyography (EMG), and capacitive sensing, because a reference current is injected at one or more specific frequencies and produces a voltage that is usually easy to measure (Bartels et al., 2015). iFace is therefore designed around impedance sensing.

3.2. Choice of Sensor Location

In the realm of hand-over-face gesture recognition, sensors are typically placed in proximity to the hand or head to capture high-quality signals (Weng et al., 2021b; Serrano et al., 2014). However, placing sensors close to the face constitutes an intrusive setup, potentially disrupting users’ daily activities. While the wrist emerges as an optimal sensor location akin to smartwatches, impedance sensing-based solutions require a closed-loop circuit between two electrodes; attaching electrodes to both wrists would require longer cables and could inconvenience users. In our study, we balanced gesture recognition performance against user convenience and opted to place the electrodes on the shoulders to monitor impedance variations during hand-over-face gestures. This unobtrusive configuration offers several advantages: electrodes and cables can be discreetly concealed under clothing, significantly minimizing their impact on daily routines compared to the sensor placements used in existing works.

3.3. Choice of Gestures

Hand-over-face gestures are not redundant information; they can emphasize the affective cues communicated through facial expressions and speech and add information to the communication. In this work, six hand-over-face gestures were selected for recognition with iFace. According to Pease and Pease (2008), each of these gestures can convey a different potential meaning, such as suspicion, decision-making, or thinking. When the interlocutors are not visible, these meanings cannot be perceived by either party, which easily leads to inefficient communication; iFace addresses this by recognizing such gestures and extracting their potential meaning. Detailed information about these hand-over-face gestures is shown in Table 1 and Fig. 2.

Table 1. List of hand-over-face gestures, including their description and conveyed meaning according to (Pease and Pease, 2008)
Gesture Name | Description | Potential Meaning
Mouth guard | covering the mouth with a hand | suspicion
Pinching the nose bridge | using one's fingers to squeeze or press the area where the nose meets the forehead | skepticism
Boredom | resting the head on both hands, with the face partially covered | disinterest
Interested/Evaluation | resting the chin on the fingers while leaning slightly forward | interest or focused attention
Forgetfulness | placing one hand on the face, particularly on the forehead or near the eyes | forgetfulness
Making a decision | resting the chin on the hand or fingers | deciding or choosing
Figure 2. Example of hand-over-face gestures

4. Implementation

4.1. Sensing Principle

iFace measures the body impedance variation caused by hand-over-face gestures. The human body, which largely consists of water, is conductive, and its impedance is closely related to pose and gestures; in classical bio-impedance measurement, such variations are usually discarded as motion artefacts (Dheman et al., 2021). Fig. 3 shows the sensing principle of iFace. Two electrodes are attached, one to each shoulder, to monitor the impedance between them. When there is no hand-to-hand or hand-to-face contact, only one current path connects the shoulders. When users place a hand on their face, a new current path is created from the shoulder through the arm and hand to the head, changing the impedance between the shoulders. Since the entire body conducts electricity, this impedance variation is closely linked to the contact position and area, enabling the recognition of hand-over-face gestures. In the future, the electrodes could be seamlessly integrated into clothing, particularly at the shoulders.

Figure 3. Sensing principle of iFace. Null class: the shoulder-to-shoulder impedance includes only the head and trunk. Hand-over-face gestures: a new current path from shoulder to head via the arm is formed, so the arm contributes to the impedance measured between the shoulders. The electrodes are hidden under clothing and cannot be observed from the outside.
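A simplified lumped-element view makes this explicit; the following model is our illustrative assumption and is not fitted to the measurements. Let Z_t denote the trunk-and-head path between the shoulders, Z_a the arm path, and Z_c the hand-face contact impedance. Without contact, the measured impedance is Z_t. With contact, the added branch appears in parallel:

Z_contact = Z_t (Z_a + Z_c) / (Z_t + Z_a + Z_c) < Z_t = Z_no-contact

so touching the face always lowers the shoulder-to-shoulder impedance, and a larger contact area (smaller Z_c) lowers it more. This is consistent with the Boredom gesture (both hands, large contact area) producing the largest signal variation and Pinching the nose bridge the smallest.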

4.2. Hardware Design

The iFace prototype comprises four modules: the analog front-end (AFE), the control module, the electrodes, and the power supply. At the core of the AFE lies the AD5941 chip by Analog Devices (https://www.analog.com/en/products/ad5940.html), chosen for its multifaceted capabilities. The chip generates sinusoidal voltage stimuli with a configurable frequency range from 0.015 Hz to 200 kHz and simultaneously measures the response current with an integrated high-speed transimpedance amplifier for precise current measurement. It also integrates an FFT hardware accelerator that extracts the real and imaginary components of the measurement data. The AFE is driven by an nRF52840 controller on an Adafruit Feather Sense board (https://learn.adafruit.com/adafruit-feather-sense/overview), which interfaces with the AFE via an SPI bus and wirelessly transmits the measurement results to a computer over Bluetooth. Wet Ag/AgCl electrodes are used to ensure optimal signal acquisition, and a compact 500 mAh lithium battery powers the system.
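To illustrate how the streamed real and imaginary components could be turned into the magnitude and phase channels used later, the sketch below applies the standard complex-number conversion and a ratiometric calibration against a known resistor. The function names, the calibration resistor value, and the assumption that excitation settings are identical for both measurements are ours; this is not the iFace firmware.

import math

R_CAL_OHM = 1000.0  # assumed calibration resistor value; not specified in the paper

def to_mag_phase(re: float, im: float) -> tuple[float, float]:
    # Convert a complex DFT result into magnitude and phase (radians).
    return math.hypot(re, im), math.atan2(im, re)

def calibrated_impedance(z_re: float, z_im: float,
                         cal_re: float, cal_im: float) -> tuple[float, float]:
    # Ratiometric estimate: for a fixed excitation voltage, the measured current is
    # inversely proportional to the impedance, so scaling by the calibration
    # measurement converts the body reading into ohms.
    z_mag, z_phase = to_mag_phase(z_re, z_im)
    c_mag, c_phase = to_mag_phase(cal_re, cal_im)
    magnitude_ohm = R_CAL_OHM * c_mag / z_mag
    phase_rad = c_phase - z_phase
    return magnitude_ohm, phase_rad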

5. System Evaluation

5.1. Data Collection

To assess the performance of iFace in recognizing hand-over-face gestures, we recruited eight participants (four female and four male, aged 22 to 35). The participants were asked to perform a set of six hand-over-face gestures, as well as a null class, while seated at a table, as illustrated in Fig. 2. Each participant completed four sessions following instructions from the experimenter; each gesture was performed around 20 times per session and held for around two seconds. To facilitate the experiment, we developed a web GUI in JavaScript to monitor and record real-time sensor data from iFace via Bluetooth. Furthermore, a camera placed in front of the participants recorded video footage for labeling and validating the collected data. Fig. 4 shows an example of the raw signals from iFace. The Boredom gesture caused the largest magnitude variation, while the impedance variation of Pinching the nose bridge was the smallest. Ethical approval for the study was obtained from the DFKI Ethics Board.

Figure 4. An example of raw signals from iFace (sampling rate is 20 Hz)
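One way such video annotations could be aligned with the 20 Hz impedance stream to obtain per-sample labels is sketched below; the annotation format and the helper name are our assumptions rather than the authors' tooling.

import numpy as np

SAMPLING_RATE_HZ = 20  # impedance sampling rate (see Fig. 4)
NULL_CLASS = 0

def annotations_to_labels(annotations, num_samples):
    # annotations: list of (start_s, end_s, class_id) read off the video timeline.
    # Returns one label per impedance sample; unannotated samples become the null class.
    labels = np.full(num_samples, NULL_CLASS, dtype=np.int64)
    for start_s, end_s, class_id in annotations:
        start = max(int(round(start_s * SAMPLING_RATE_HZ)), 0)
        end = min(int(round(end_s * SAMPLING_RATE_HZ)), num_samples)
        labels[start:end] = class_id
    return labels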

5.2. Data Processing

Fig. 5 shows the data processing pipeline for hand-over-face gesture recognition. The raw data from iFace comprises a single sensing modality, impedance, sampled at 20 Hz after synchronization; the AFE module provides two channels, magnitude and phase. Channel-wise normalization was applied within sliding windows before feeding the data to the neural network. Three one-dimensional convolutional layers extract features from the raw data, followed by two linear layers for classification, implemented in PyTorch. A grid search over window sizes from 50 to 120 samples was used to select the optimal input window size. Sliding windows were resampled from the raw time series with a step size of one for training and a step size of 30 for testing. The network was trained with the cross-entropy loss and the Adam optimizer with a learning rate of 1e-5 and β1 = 0.9, β2 = 0.999. Since the dataset is highly imbalanced, with many more null-class instances than instances of the other classes, class weights computed from the label counts in the training set were added to the cross-entropy loss to give more importance to the minority classes. Each model was trained for 200 epochs.

Figure 5. Data processing pipeline for hand-over-face gesture recognition. The impedance data comprises two channels, magnitude and phase, which are preprocessed with channel-wise normalization before being input to the neural network
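A minimal sketch of such a model is given below, assuming two input channels (magnitude and phase) and seven output classes (six gestures plus the null class); the channel widths, kernel sizes, and per-class label counts are illustrative assumptions, since the paper does not report them.

import torch
import torch.nn as nn

NUM_CLASSES = 7      # six gestures plus the null class
WINDOW_SIZE = 100    # within the 50-120 grid-search range; the selected value is not reported

class GestureCNN(nn.Module):
    # Three 1D convolutional layers followed by two linear layers, as described in Sec. 5.2.
    def __init__(self, in_channels: int = 2, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, WINDOW_SIZE) channel-wise normalized magnitude and phase
        return self.classifier(self.features(x))

# Class-weighted cross-entropy to counter the dominant null class,
# with the optimizer settings reported in the paper.
label_counts = torch.tensor([400.0, 80, 80, 80, 80, 80, 80])  # hypothetical per-class counts
weights = label_counts.sum() / (len(label_counts) * label_counts)
model = GestureCNN()
criterion = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.9, 0.999))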

5.3. Classification Result and Discussion

Since participants differ in native body impedance and perform the same hand-over-face gestures differently (different dominant hands, different ways of touching), a user-dependent model was trained for each participant and evaluated across sessions with a leave-one-session-out procedure. We selected the macro F1 score as the metric; it is the arithmetic (unweighted) mean of the per-class F1 scores and thus treats all classes equally. Fig. 6 presents the classification results as joint confusion matrices over the eight subjects. The model achieved an average macro F1 score of 82.58% from the impedance input alone. The most frequently confused classes are the null class and the Pinching the nose bridge gesture: this gesture has the smallest hand-face contact area, so its impedance variation is minimal compared to the other activities (see Fig. 4) and can be confused with the null class when the contact between finger and nose bridge is poor. In contrast, the Boredom gesture is recognized best among all gesture classes, with an average recall of 90.0%. It requires resting the head on both hands with the face partially covered; its contact area is the largest, and two additional current paths through both arms are formed, resulting in the largest impedance reduction between the shoulders. Thus, it is easily distinguished among the six gestures.

Figure 6. Joint confusion matrices for hand-over-face gesture recognition, aggregated over the eight subjects
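The evaluation protocol can be sketched as follows, assuming per-session arrays of windows and labels for one participant; train_fn and predict_fn are hypothetical wrappers around the model above, and scikit-learn's f1_score with average='macro' computes the unweighted per-class mean described earlier.

import numpy as np
from sklearn.metrics import f1_score

def leave_one_session_out(sessions, train_fn, predict_fn):
    # sessions: list of (X, y) pairs, one per recorded session of a single participant.
    scores = []
    for held_out in range(len(sessions)):
        X_test, y_test = sessions[held_out]
        X_train = np.concatenate([X for i, (X, _) in enumerate(sessions) if i != held_out])
        y_train = np.concatenate([y for i, (_, y) in enumerate(sessions) if i != held_out])
        model = train_fn(X_train, y_train)
        y_pred = predict_fn(model, X_test)
        # Macro F1: arithmetic mean of per-class F1 scores, treating all classes equally.
        scores.append(f1_score(y_test, y_pred, average="macro"))
    return float(np.mean(scores))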

6. Potential Applications for Impedance-Based Hand-over-Face Gesture Recognition

Our work demonstrated the performance of iFace for six types of hand-over-face gestures based on shoulder-to-shoulder impedance variation information. In this section, we aim to provide a vision for future researchers, illustrating how they can build upon our concept and apply it to a broader array of practical applications.

6.1. Enriching Implicit Interaction Through iFace

Face-based input has been widely used in many scenarios, such as mobile interaction, automatic screen rotation, and authentication (Cheng et al., 2012; De Luca et al., 2015; Zhao et al., 2016). However, most of these applications are computer vision-based. iFace offers an impedance-sensing-based alternative for face-based input and therefore brings several advantages: it is privacy-preserving, lightweight, and unobtrusive. Unlike existing face-based touch input, such as tapping and flicking, iFace can capture diverse touch patterns based on hand-over-face gestures, expanding the interaction possibilities. In addition, the touch input in existing work (Loorak et al., 2019) is designed as intentional behavior, and users need to learn the operating instructions explicitly. Hand-over-face gestures, in contrast, often encompass body language, such as cues of boredom or interest, and are frequently performed unconsciously in reaction to one's surroundings. Such involuntary hand-over-face gestures detected by iFace can be used to enrich implicit interactions with digital devices. For example, when users watch a video, the computer could automatically change the channel if they perform an involuntary boredom gesture.

6.2. Augmenting Non-visual Communication

It is challenging for conversation partners to convey interpersonal cues in non-visual communication because the interlocutors cannot see facial expressions and body language. Vermeulen et al. (2016) have demonstrated that technology can mediate this interaction; for example, HeartChat (Hassib et al., 2017a) uses heart-rate-augmented mobile messaging to support empathy and awareness. Other physiological sensing modalities, such as blood volume pulse, galvanic skin response, and electroencephalography, have also been explored to extract emotional cues and address this challenge (Khan and Lawo, 2016; Westerink et al., 2008; Ferdinando et al., 2014). With iFace, we propose using body impedance sensing to recognize hand-over-face gestures for the implicit detection of bodily cues. We envision several potential application scenarios, such as using hand-over-face gestures in online meetings to convey real-time bodily cues when video quality is poor. iFace could also be integrated with text messaging, sending detected gestures alongside the text to convey interpersonal cues while typing.

6.3. Understanding Cognitive States

Spontaneous self-touches and self-grooming gestures are believed to be connected with thought formulation, information processing, and emotion regulation (Freedman, 1977). There is empirical evidence that some of these hand-over-face gestures serve as cues for recognizing cognitive states (Mahmoud et al., 2016). iFace thus has the potential to help infer cognitive states by detecting the specific hand-over-face gestures related to cognitive processes, providing an additional sensing modality for multi-modal inference of a user's cognitive state in many application scenarios, for example, amplifying the audience-performer connection (Sugawa et al., 2021) or sensing implicit audience engagement (Hassib et al., 2017b).

7. Conclusion

In this work, we presented iFace, an unobtrusive, wearable impedance-sensing device for hand-over-face gesture recognition. Firstly, we proposed a general concept of hand-over-face gesture detection based on body part impedance variation caused by the hand-face interaction. Then, we designed and implemented an unobtrusive, wearable impedance-sensing device for body impedance measurement. Additionally, we demonstrated iFace’s performance in classifying six hand-over-face gestures performed by eight participants using a lightweight neural network model. The model achieved an average Macro F1 Score of 82.58% with the input of a single impedance sensing modality. Finally, we envisioned several potential application scenarios based on iFace, in which users can leverage the recognized hand-over-face gestures to enrich implicit interaction and augment non-visual communication to support empathy and awareness.

Acknowledgements

This work has been supported by the BMBF (German Federal Ministry of Education and Research) in the project SocialWear (01IW20002).

References

  • Abril and Plant (2007) Patricia S. Abril and Robert Plant. 2007. The patent holder’s dilemma: Buy, sell, or troll? Commun. ACM 50, 1 (Jan. 2007), 36–44. https://doi.org/10.1145/1188913.1188915
  • Anderez et al. (2018) Dario Ortega Anderez, Ahmad Lotfi, and Caroline Langensiepen. 2018. A hierarchical approach in food and drink intake recognition using wearable inertial sensors. In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference. 552–557.
  • Bartels et al. (2015) Else Marie Bartels, Emma Rudbæk Sørensen, and Adrian Paul Harrison. 2015. Multi-frequency bioimpedance in human muscle assessment. Physiological Reports 3, 4 (2015), e12354.
  • Bello et al. (2023) Hymalai Bello, Luis Alfredo Sanchez Marin, Sungho Suh, Bo Zhou, and Paul Lukowicz. 2023. InMyFace: Inertial and mechanomyography-based sensor fusion for wearable facial activity recognition. Information Fusion (2023), 101886.
  • Cheng et al. (2012) Lung-Pan Cheng, Fang-I Hsiao, Yen-Ting Liu, and Mike Y Chen. 2012. iRotate: automatic screen rotation based on face orientation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2203–2210.
  • De Luca et al. (2015) Alexander De Luca, Alina Hang, Emanuel Von Zezschwitz, and Heinrich Hussmann. 2015. I feel like I’m taking selfies all day! Towards understanding biometric authentication on smartphones. In Proceedings of the 33rd annual ACM conference on human factors in computing systems. 1411–1414.
  • Dheman et al. (2021) Kanika Dheman, Philipp Mayer, Manuel Eggimann, Michele Magno, and Simone Schuerle. 2021. Towards artefact-free bio-impedance measurements: evaluation, identification and suppression of artefacts at multiple frequencies. IEEE Sensors Journal 22, 1 (2021), 589–600.
  • Ferdinando et al. (2014) Hany Ferdinando, Liang Ye, Tapio Seppänen, and Esko Alasaarela. 2014. Emotion recognition by heart rate variability. Australian Journal of Basic and Applied Science 8, 14 (2014), 50–55.
  • Freedman (1977) Norbert Freedman. 1977. Hands, words, and mind: On the structuralization of body movements during discourse and the capacity for verbal representation. In Communicative structures and psychic structures: A psychoanalytic interpretation of communication. Springer, 109–132.
  • Hasani and Mahoor (2017) Behzad Hasani and Mohammad H Mahoor. 2017. Facial expression recognition using enhanced deep 3D convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 30–40.
  • Hassib et al. (2017a) Mariam Hassib, Daniel Buschek, Paweł W Wozniak, and Florian Alt. 2017a. HeartChat: Heart rate augmented mobile chat to support empathy and awareness. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2239–2251.
  • Hassib et al. (2017b) Mariam Hassib, Stefan Schneegass, Philipp Eiglsperger, Niels Henze, Albrecht Schmidt, and Florian Alt. 2017b. EngageMeter: A system for implicit audience engagement sensing using electroencephalography. In Proceedings of the 2017 Chi conference on human factors in computing systems. 5114–5119.
  • Khan and Lawo (2016) Ali Mehmood Khan and Michael Lawo. 2016. Recognizing emotion from blood volume pulse and skin conductance sensor using machine learning algorithms. In XIV Mediterranean Conference on Medical and Biological Engineering and Computing 2016: MEDICON 2016, March 31st-April 2nd 2016, Paphos, Cyprus. Springer, 1297–1303.
  • Lee et al. (2017) Juyoung Lee, Hui-Shyong Yeo, Murtaza Dhuliawala, Jedidiah Akano, Junichi Shimizu, Thad Starner, Aaron Quigley, Woontack Woo, and Kai Kunze. 2017. Itchy nose: Discreet gesture interaction using EOG sensors in smart eyewear. In Proceedings of the 2017 ACM International Symposium on Wearable Computers. 94–97.
  • Li et al. (2019) Dahua Li, Zhe Wang, Chuhan Wang, Shuang Liu, Wenhao Chi, Enzeng Dong, Xiaolin Song, Qiang Gao, and Yu Song. 2019. The fusion of electroencephalography and facial expression for continuous emotion recognition. IEEE Access 7 (2019), 155724–155736.
  • Li and Deng (2020) Shan Li and Weihong Deng. 2020. Deep facial expression recognition: A survey. IEEE transactions on affective computing 13, 3 (2020), 1195–1215.
  • Lim et al. (2023) Hyunchul Lim, Ruidong Zhang, Samhita Pendyal, Jeyeon Jo, and Cheng Zhang. 2023. D-Touch: Recognizing and Predicting Fine-grained Hand-face Touching Activities Using a Neck-mounted Wearable. In Proceedings of the 28th International Conference on Intelligent User Interfaces. 569–583.
  • Liu et al. (2024) Mengxi Liu, Vitor Fortes Rey, Yu Zhang, Lala Shakti Swarup Ray, Bo Zhou, and Paul Lukowicz. 2024. iMove: Exploring Bio-impedance Sensing for Fitness Activity Recognition. arXiv preprint arXiv:2402.09445 (2024).
  • Liu et al. (2023) Mengxi Liu, Yu Zhang, Bo Zhou, Sizhen Bian, Agnes Grünerbl, and Paul Lukowicz. 2023. iEat: Human-food interaction with bio-impedance sensing. In Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing & the 2023 ACM International Symposium on Wearable Computing. 207–207.
  • Loorak et al. (2019) Mona Hosseinkhani Loorak, Wei Zhou, Ha Trinh, Jian Zhao, and Wei Li. 2019. Hand-over-face input sensing for interaction with smartphones through the built-in camera. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services. 1–12.
  • Lu et al. (2016) Xuan Lu, Wei Ai, Xuanzhe Liu, Qian Li, Ning Wang, Gang Huang, and Qiaozhu Mei. 2016. Learning from the ubiquitous language: an empirical analysis of emoji usage of smartphone users. In Proceedings of the 2016 ACM international joint conference on pervasive and ubiquitous computing. 770–780.
  • Mahmoud et al. (2016) Marwa Mahmoud, Tadas Baltrušaitis, and Peter Robinson. 2016. Automatic analysis of naturalistic hand-over-face gestures. ACM Transactions on Interactive Intelligent Systems (TiiS) 6, 2 (2016), 1–18.
  • Masai et al. (2018) Katsutoshi Masai, Yuta Sugiura, and Maki Sugimoto. 2018. Facerubbing: Input technique by rubbing face using optical sensors on smart eyewear for facial expression recognition. In Proceedings of the 9th Augmented Human International Conference. 1–5.
  • Matthies et al. (2015) Denys JC Matthies, Simon T Perrault, Bodo Urban, and Shengdong Zhao. 2015. Botential: Localizing on-body gestures by measuring electrical signatures on the human skin. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services. 207–216.
  • Naik and Mehta (2020) Niti Naik and Mayuri A Mehta. 2020. An improved method to recognize hand-over-face gesture based facial emotion using convolutional neural network. In 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT). IEEE, 1–6.
  • Pease and Pease (2008) Barbara Pease and Allan Pease. 2008. The definitive book of body language: The hidden meaning behind people’s gestures and expressions. Bantam.
  • Samadiani et al. (2019) Najmeh Samadiani, Guangyan Huang, Borui Cai, Wei Luo, Chi-Hung Chi, Yong Xiang, and Jing He. 2019. A review on automatic facial expression recognition systems assisted by multimodal sensor data. Sensors 19, 8 (2019), 1863.
  • Serrano et al. (2014) Marcos Serrano, Barrett M Ens, and Pourang P Irani. 2014. Exploring the use of hand-to-face input for interacting with head-worn displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 3181–3190.
  • Sugawa et al. (2021) Moe Sugawa, Taichi Furukawa, George Chernyshov, Danny Hynds, Jiawen Han, Marcelo Padovani, Dingding Zheng, Karola Marky, Kai Kunze, and Kouta Minamizawa. 2021. Boiling mind: Amplifying the audience-performer connection through sonification and visualization of heart and electrodermal activities. In Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction. 1–10.
  • Vermeulen et al. (2016) Jo Vermeulen, Lindsay MacDonald, Johannes Schöning, Russell Beale, and Sheelagh Carpendale. 2016. Heartefacts: augmenting mobile video sharing using wrist-worn heart rate sensors. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. 712–723.
  • Weng et al. (2021a) Yueting Weng, Chun Yu, Yingtian Shi, Yuhang Zhao, Yukang Yan, and Yuanchun Shi. 2021a. Facesight: Enabling hand-to-face gesture interaction on ar glasses with a downward-facing camera vision. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
  • Weng et al. (2021b) Yueting Weng, Chun Yu, Yingtian Shi, Yuhang Zhao, Yukang Yan, and Yuanchun Shi. 2021b. Facesight: Enabling hand-to-face gesture interaction on ar glasses with a downward-facing camera vision. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
  • Westerink et al. (2008) Joyce HDM Westerink, Egon L Van Den Broek, Marleen H Schut, Jan Van Herk, and Kees Tuinenbreijer. 2008. Computing emotion awareness through galvanic skin response and facial electromyography. In Probing experience: From assessment of user emotions and behaviour to development of products. Springer, 149–162.
  • Xu et al. (2020) Xuhai Xu, Haitian Shi, Xin Yi, Wenjia Liu, Yukang Yan, Yuanchun Shi, Alex Mariakakis, Jennifer Mankoff, and Anind K Dey. 2020. Earbuddy: Enabling on-face interaction via wireless earbuds. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
  • Yamashita et al. (2017) Koki Yamashita, Takashi Kikuchi, Katsutoshi Masai, Maki Sugimoto, Bruce H Thomas, and Yuta Sugiura. 2017. CheekInput: turning your cheek into an input surface by embedded optical sensors on a head-mounted display. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology. 1–8.
  • Yan et al. (2019) Yukang Yan, Chun Yu, Yingtian Shi, and Minxing Xie. 2019. Privatetalk: Activating voice input with hand-on-mouth gesture detected by bluetooth earphones. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology. 1013–1020.
  • Zhao et al. (2016) Jian Zhao, Ricardo Jota, Daniel Wigdor, and Ravin Balakrishnan. 2016. Augmenting mobile phone interaction with face-engaged gestures. arXiv preprint arXiv:1610.00214 (2016).