The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry, 2019
Foot input has been proposed to support hand gestures in many interactive contexts; however, little attention has been given to contactless 3D object manipulation. This is important since many applications, notably sterile surgical theaters, require contactless operation. Relying solely on hand gestures makes it difficult to specify precise interactions, since hand movements are hard to segment into command and interaction modes. The unfortunate results range from unintended activations to noisy interactions and misrecognized commands. In this paper, we present FEETICHE, a novel set of multi-modal interactions combining hand and foot input to support contactless 3D manipulation tasks while standing in front of large displays, with mode switching driven by foot tapping and heel rotation. We use depth-sensing cameras to capture both hand and foot gestures, and we developed a simple yet robust motion-capture method to track the dominant foot. Through two experiments, we assess how well foot gestures support mode switching and how this frees the hands to perform accurate manipulation tasks. Results indicate that users effectively rely on foot gestures to improve mode switching, and reveal improved accuracy on both rotation and translation tasks.
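To illustrate the kind of foot-gesture segmentation the abstract describes, the sketch below shows one plausible way to turn a tracked foot pose into mode-switching events. The `FootPose` fields and all threshold values are illustrative assumptions, not the method or parameters reported in the paper.

```python
# Hypothetical sketch: segmenting tracked foot input into mode-switching events.
# FootPose fields and all thresholds are assumptions, not the FEETICHE method.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FootPose:
    toe_height: float   # meters above the floor, from the depth camera
    heel_yaw: float     # heel rotation around the vertical axis, in degrees

class FootGestureSegmenter:
    def __init__(self, tap_height: float = 0.04, yaw_deadzone: float = 15.0):
        self.tap_height = tap_height      # toe must lift this far to arm a tap
        self.yaw_deadzone = yaw_deadzone  # heel yaw beyond this counts as rotation
        self._tap_armed = False

    def update(self, pose: FootPose) -> Optional[str]:
        """Return 'tap', 'rotate_cw', 'rotate_ccw', or None for this frame."""
        # Heel rotation: sustained yaw outside the dead zone is a rotation command.
        if pose.heel_yaw > self.yaw_deadzone:
            return "rotate_cw"
        if pose.heel_yaw < -self.yaw_deadzone:
            return "rotate_ccw"
        # Foot tap: toe lifts above the threshold, then returns to the floor.
        if pose.toe_height > self.tap_height:
            self._tap_armed = True
        elif self._tap_armed and pose.toe_height < 0.01:
            self._tap_armed = False
            return "tap"
        return None
```

A manipulation front end could then switch between, say, translation and rotation modes whenever `update` yields "tap", leaving the hands free for the continuous gesture itself.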
Proceedings of the ACM on Human-Computer Interaction, 2021
Alongside vision and sound, hardware systems can be readily designed to support various forms of tactile feedback; however, while a significant body of work has explored enriching visual and auditory communication with interactive systems, tactile information has not received the same level of attention. In this work, we explore increasing the expressivity of tactile feedback by allowing the user to dynamically select between several channels of tactile feedback using variations in finger speed. In a controlled experiment, we show that a user can learn the dynamics of eyes-free tactile channel selection among different channels, and can reliably discriminate between different tactile patterns during multi-channel selection with an accuracy of up to 90% when using two finger-speed levels. We discuss the implications of this work for richer, more interactive tactile interfaces.
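A minimal sketch of the speed-based channel selection idea appears below; the speed threshold, the two-channel setup, and the vibration patterns are assumptions made for illustration, not the conditions or parameters of the reported experiment.

```python
# Hypothetical sketch: choosing a tactile feedback channel from finger speed.
# The threshold and the pattern definitions are illustrative assumptions.

# Two channels, each with its own vibration pattern (on/off durations in ms).
CHANNELS = {
    "slow": [(200, 100), (200, 100)],   # long, sparse pulses
    "fast": [(50, 50)] * 4,             # short, dense pulses
}
SPEED_THRESHOLD = 0.3  # finger speed in m/s separating the two levels (assumed)

def select_channel(finger_speed: float) -> str:
    """Map the current finger speed to one of two tactile channels."""
    return "fast" if finger_speed >= SPEED_THRESHOLD else "slow"

def pattern_for(finger_speed: float):
    """Return the vibration pattern the actuator should play for this speed."""
    return CHANNELS[select_channel(finger_speed)]
```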
Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction, 2021
While many smartphones now include a waterfall (or curved-edge) display, this edge is rarely exploited for input. In this paper, we introduce EdgeMark, a thumb-operated, one-handed, gestural menu that provides rapid and subtle access to commands. In a first study, we assess the user's range of movement during uni-manual input to define a spatial range for our EdgeMark menu. In a second study, we compare three potential EdgeMark designs to a marking-menu variant to assess accuracy and performance. Our findings indicate that EdgeMark menus can serve as a reliable, effective mechanism for subtle, one-handed command invocation on modern, waterfall-edged smartphones.
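The sketch below shows one plausible way an edge menu of this kind could resolve a selection from the thumb's position along the curved edge, restricted to a reachable range like the one the first study measures; the range values, item count, and item names are assumptions, not the EdgeMark design itself.

```python
# Hypothetical sketch: resolving an edge-menu selection from the thumb's
# position along the curved edge. The reachable range and the item list are
# illustrative assumptions, not values measured in the EdgeMark studies.
from typing import Optional

MENU_ITEMS = ["copy", "paste", "share", "delete"]

# Reachable span of the thumb along the edge, in normalized screen-height
# coordinates (0.0 = top of screen, 1.0 = bottom); assumed for illustration.
RANGE_TOP, RANGE_BOTTOM = 0.35, 0.85

def item_at(edge_y: float) -> Optional[str]:
    """Return the menu item under a touch at normalized edge position edge_y."""
    if not (RANGE_TOP <= edge_y <= RANGE_BOTTOM):
        return None  # touch falls outside the comfortable thumb range
    slot = (edge_y - RANGE_TOP) / (RANGE_BOTTOM - RANGE_TOP) * len(MENU_ITEMS)
    return MENU_ITEMS[min(int(slot), len(MENU_ITEMS) - 1)]
```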
The expressivity of hand movements is much greater than what current interaction techniques enable in touch-screen input. Especially in collaboration, hands are used to interact, but also to express intentions, point to the physical space in which collaboration takes place, and communicate meaningful actions to collaborators. Multi-touch surfaces enable various types of interaction (single and both hands, single and multiple fingers, etc.), and standard approaches to tactile interactive systems usually fail to handle such complexity of expression. The diversity of multi-touch input also makes designing multi-touch gestures a difficult task. We believe that one cause of this design challenge is our limited understanding of variability in multi-touch gesture articulation, which affects users’ opportunities to use gestures effectively in current multi-touch interfaces. A better understanding of multi-touch gesture variability can also lead to more robust designs that support different users’ gesture preferences. In this chapter we present our results on multi-touch gesture variability. We are mainly concerned with understanding variability in multi-touch gesture articulation from a purely user-centric perspective. We present a comprehensive investigation of how users vary their multi-touch gesture articulations, even under unconstrained articulation conditions. We conducted two experiments in which we collected 6669 multi-touch gestures from 46 participants. We performed a qualitative analysis of user gesture variability to derive a taxonomy of users’ multi-touch gestures that complements existing taxonomies. We also provide a comprehensive analysis of the strategies employed by users to create different articulation variations of the same gesture type.